URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://www.physicsforums.com/threads/the-area-of-a-square.630758/
[ "# The area of a square\n\n## Main Question or Discussion Point\n\nI was just wondering the other day about the concept of area....Area to me is the space occupied in 2d by a bounded figure..... I wanted to find out WHY the area of a square is s^2 or why area of a rectangle is lxb...Consider the dimensions of a rectangle 7x5. The area can be expressed as 5 strips of length 7 i.e. 7+7+7+7+7=35, but now the strips are each 1 unit wide and hence the formula works (width of 5 units is divided into 5 parts of 1 unit), what if the strips are made smaller and smaller such that the strips are infinitesimally small then the formula doesn't make any sense because the width of 5 units is being divided into infinite parts and hence the area is coming out to be zero, thus making me dumbfounded as to the approach adopted by me earlier.\n\nThus, why is the area of a rectangle or square- lxb or sxs???\n\nwhat if the strips are made smaller and smaller such that the strips are infinitesimally small then the formula doesn't make any sense\nObviously the formula will not make sense if you compute it completely differently.\nOf course, it's entirely possible to add together all of those infinitesimally small strips, which will give you the right answer; you just have to use calculus.\n\nThe approach you described (using strips of unit width) seems perfectly intuitive to me and clearly describes why the area of the square is computed as it is, so I'm not sure what else you want. You could use calculus and integrate over a square to find the area, I suppose, which would give you the formula s^2. Let's do that (if you don't have any experience with calculus, just watch how a bunch of incomprehensible stuff happens that gives you the right answer):\n\nWe want to compute the area of a square with side length a. We can do this by integrating the function f(x) = a over the interval 0 < x < a...\n\n$$\int_0^a\! 
a \\, \\mathrm{d} x \\: = \\: a \\cdot a - a \\cdot 0\\: = \\: a^2$$\n\nWhat we've done here is exactly what you described; we took each of those slices and shrunk them down until they were infinitesimally small, and then added them up. This gives us the right answer.\n\nLast edited:\nHi.\n\nHm.\n\nIf width of single strip is identically zero, then there is nothing that could be done. However, if strip width is identically zero, then width is not infinitesimal. So Your reasoning got astray at the moment You passed from \"infinitesimal\" to \"identically zero\". You might have not noticed this transition. If infinitesimal, then not zero. Yes, as close to zero as You like. Never zero though. Use limits. Or integral, of course. I guess question was of speculative nature.\n\nThis reminds me of Zeno paradox.\n\nCheers.\n\nthanks i understood...i guess i have to learn calculus to understand the realm of mathematics which deals with infinitesimally small quantites...how is this book by michael spivak? Everybody told me it's great. What do you suggest?\n\nHi.\n\nI don't know about Spivak's Calculus. I do know about a ton of other books I've read on calculus, though. There are 2 types of calculus books. First type are books for students of mathematics and theoretical physics. They have to know it rigorously because it's in the curriculum and profs lecture it that way. Second type of calculus books are books written for engineers. Those written for engineers go straight for the head: they aim at calculating things. Engineer books are not concerned with purely theoretical aspects of calculus. So, one might go for second type books first, and when differential details become clear, one is advised to take a byte at the real thing. Otherwise, if not informed on the subject at all, one is easily lost in all the details of theoretical math. And calculus is not easy one way or another. It's a huge area and one never gets to master it entirely. Ever. 
Finally, in my opinion, reading only one book on calculus is not enough. The beast is too huge for only one weapon.\n\nCheers.\n\nThanks...! I will look into these things.\n\nSpivak's a pretty serious Calculus textbook. I've heard it's one of the best, but from what I've gathered, unless you're very confident in your mathematical skills (which you may well be), I'd start with a simpler book.\n\nIn Euclidean geometry, the area of a square with side $a$ is postulated to be $a^2$ (hence, the name squared). To justify this claim, imagine you increase the side n times. How many small squares fill the large square?\n\nThen, to prove the formula for the area of a rectangle:\nThe side of the outer square is $a + b$, and the side of the inner square is $a - b$ (assuming $a > b$). Then, the area of the outer square is the sum of the areas of the inner square and the four identical rectangles:\n$$(a + b)^2 = (a - b)^2 + 4 A$$\n$$A = \frac{(a + b)^2 - (a - b)^2}{4}$$\n$$A = \frac{(a^2 + 2 a b + b^2) - (a^2 - 2 a b + b^2)}{4}$$\n$$A = a \, b$$\n\nWhich is that simpler book for calculus? I don't think my math skills are that great. I would surely start with that book where everything is simple enough to completely grasp a particular idea and a concept. Please suggest some simpler book....!!\n\nAre you asking why a square with a side length of say, 2cm, has an area of 4cm^2?\nIf so, you can think of it as taking the 2cm along one side and multiplying it by how many of those there are. One side is 2cm wide, and stretches over 2cm. If you are asking how you get 4cm^2 from multiplying 2cm by 2cm, you can do a small dimensional analysis.\n\nSubstitute cm into \"d\": you then have ${(2d)^2}={2^2}{d^2}=4{d^2}$\n\nPlug our dimension back into d's place, and you get:\n\n$${2cm{\cdot}2cm}=4{cm^2}$$\n\nAnother example is acceleration. 
In algebraic terms acceleration is:\n\n$\frac{{v_f}-{v_i}}{t}=a$ where:\n\nv_f = final velocity\nv_i = initial velocity\nt = time\na = acceleration.\n\nThere are fundamental units of quantity; here is the wiki page for the SI Units http://en.wikipedia.org/wiki/International_System_of_Units#Units_and_prefixes\nWe take these fundamental units to make others, such as these: http://en.wikipedia.org/wiki/List_of_physical_quantities\n\nNotice that velocity is $\frac{d}{t}$, time is fundamental (= t), and acceleration is $\frac{d}{t^2}$.\n\nYou may ask, \"why is the time squared for acceleration?\" Treat the dimensions algebraically.\n$$\frac{(\frac{d}{t})}{t}=\frac{d}{tt}=\frac{d}{t^2}$$\nThe quantities' dimensions treat each other algebraically." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9465698,"math_prob":0.98757464,"size":940,"snap":"2020-24-2020-29","text_gpt3_token_len":246,"char_repetition_ratio":0.13461539,"word_repetition_ratio":0.09944751,"special_character_ratio":0.26489362,"punctuation_ratio":0.124423966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990351,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T01:59:19Z\",\"WARC-Record-ID\":\"<urn:uuid:94537b29-db79-4598-9458-b24dae99c3e4>\",\"Content-Length\":\"100987\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5e5ebe2-5d68-48a3-9f22-74edb73150b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e900405-06ec-4aa2-9366-8ca0e5642b46>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/the-area-of-a-square.630758/\",\"WARC-Payload-Digest\":\"sha1:5HJF32OYTVJCV3AGTU2V7YXCBQCUQU33\",\"WARC-Block-Digest\":\"sha1:NIYF23T6FYXEW4NN4T4BFDMWISRPI5YY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347401004.26_warc_CC-MAIN-20200528232803-20200529022803-00197.warc.gz\"}"}
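The strip argument in the physicsforums thread above is easy to check numerically: cutting the 7×5 rectangle's width into n strips makes each strip's area shrink like 7·(5/n), but the number of strips grows like n, so the total stays 35 no matter how thin the strips get. A minimal Python sketch (the function name is mine, not from the thread):

```python
def strip_area(length, width, n):
    """Approximate a rectangle's area as the sum of n thin strips."""
    strip_width = width / n              # shrinks toward 0 as n grows
    return sum(length * strip_width for _ in range(n))

# 5 strips of length 7 (the thread's example), then ever thinner strips;
# the per-strip area vanishes but the strip count compensates exactly.
for n in (5, 100, 10_000):
    print(n, strip_area(7, 5, n))
```

Letting n → ∞ is precisely the limit the integral ∫₀⁵ 7 dx = 35 formalizes: the strips never have width "identically zero", they have width 5/n with n growing without bound.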
http://www.celestrak.com/columns/v02n03/
[ "", null, "## Orbital Coordinate Systems, Part III\n\nBy Dr. T.S. Kelso", null, "January/February 1996", null, "", null, "Last time, we worked through the process of calculating the ECI (Earth-Centered Inertial) coordinates of an observer's position on the Earth's surface, starting with the observer's latitude and longitude. Then, we used those coordinates to calculate look angles (azimuth and elevation) from the observer's position to an orbiting satellite. The most difficult part of that process was in calculating the sidereal time, a quantity necessary to determine the Earth's orientation in inertial space.\n\nIn the process of performing those calculations, however, we made one simplifying assumption: that the Earth is a sphere. Unfortunately, this assumption is not a good one. Ignoring the fact that the Earth's shape can more accurately be described as an oblate spheroid (a flattened sphere) can have a significant effect in certain types of satellite tracking applications. In this column, we will examine the implications of our initial assumption by modifying our calculations to allow for the Earth's flattening at the poles and then tackle the related problem of determining the sub-point of an orbiting satellite. Let's start by looking at a cross-section of the Earth and defining some terms.", null, "Figure 1. Cross-Section of Oblate Earth\n\nFigure 1 is an exaggerated view of the cross-section of the Earth. For an observer on the Earth's surface, we can define a couple of terms fairly easily. The first is the local zenith. The local zenith direction is just a fancy way of saying \"straight up.\" It is the direction away from a point on the Earth's surface perpendicular (at a right angle) to the local horizon. On a sphere, this direction is always directly away from the Earth's center. 
However, on an oblate spheroid, this is not the case since a line from the center of the Earth to the observer's position would not point to the local zenith (except on the equator and at the poles).\n\nSince the local zenith direction depends upon the local horizon, let's take some time to better define it, as well. The local horizon is a plane which is tangent (touching at a point) to the Earth's surface at the observer's position. For our purposes, we will consider the local horizon to be the plane tangent to the reference spheroid. The term reference spheroid is used to define the oblate spheroid which 'best' defines the shape of the Earth. How 'best' is defined is a complicated process and depends upon whether the fit of the reference spheroid is regional or global. We will use the reference spheroid defined in WGS-72 (World Geodetic System, 1972) for our standard.\n\nIn WGS-72, the Earth's equatorial radius, a, is defined to be 6,378.135 km. The Earth's polar radius, b, is related to the equatorial radius by something called the flattening, f, where\n\nb = a(1 - f)\n\nThe flattening term, as defined in WGS-72, is only 1/298.26—a very small deviation from a perfect sphere. Using this value, the Earth's polar radius would be 6,356.751 km—only 22 kilometers difference from the equatorial radius.\n\nThe first real significance of using an oblate spheroid instead of a sphere to define the Earth's shape comes in determining the observer's latitude. On a sphere, latitude is defined as the angle between the line going from the center of the Earth to the observer and the Earth's equatorial plane. However, on an oblate spheroid, geodetic latitude is the angle between the local zenith direction and the Earth's equatorial plane. 
This angle, φ, is the latitude used on maps; the angle formed by the observer's position, the Earth's center, and the equatorial plane is more properly referred to as the geocentric latitude, φ'.\n\nThe impact of this change is that in order to calculate the observer's ECI position, we must determine the geocentric latitude from the geodetic latitude. Knowing the geocentric latitude, φ', we can then calculate the geocentric radius, ρ, and from that calculate the z coordinate (ρ sin φ') and the projection in the equatorial plane (ρ cos φ'). Let's start by developing the relationship between φ and φ' since we'll usually be given φ.\n\nFrom the basic definition of an ellipse,", null, "where\n\nR' = ρ cos(φ')\n\nand\n\nz' = ρ sin(φ').\n\nNow,\n\ntan(φ') =", null, "and\n\ntan(φ) =", null, "(that is, the normal to the tangent of the spheroid). Differentiating the equation of the ellipse,", null, "and rearranging terms,", null, "which can be written as,\n\ntan(φ') =", null, "tan(φ) = (1-f)2 tan(φ).\n\nSo, knowing the geodetic latitude and the flattening, we can now determine the geocentric latitude. Now, let's see how much of a difference results from using an oblate spheroid. Figure 2 plots the difference between geodetic and geocentric latitude as a function of geodetic latitude.", null, "Figure 2. Geocentric vs. Geodetic Latitude\n\nThat's it? All that work and the maximum error is less than two-tenths of a degree? It would hardly seem worth the effort to perform the calculation. 
But let's explore a little further.\n\nAlthough the development is too complicated to present here, it can be shown that\n\nρ sin(φ') = z' = a S sin(φ)\n\nand\n\nρ cos(φ') = R' = a C cos(φ)\n\nwhere", null, "", null, ".\n\nOur ECI coordinates are now\n\nx' = a C cos(φ) cos(θ)\n\ny' = a C cos(φ) sin(θ)\n\nz' = a S sin(φ).\n\nUsing the example of calculating the ECI coordinates of 40° N (geodetic) latitude, 75° W longitude on 1995 October 01 at 9h UTC,\n\nx' = 1703.295 km, y' = 4586.650 km, z' = 4077.984 km.\n\nAlthough close to our calculations assuming a spherical Earth, we find this simplification resulted in a position error of 22.8 km.\n\nWhat we really want to know, however, is just how big an error will result when generating look angles to a satellite from an observer's position on the Earth's surface if we assume a spherical Earth. From Figure 2, we would expect to have the largest errors for observers around 45° N latitude, so let's use a location near Minneapolis at 45° N latitude and 93° W longitude for our example. On a pass of the Mir space station over Minneapolis on 1995 November 18, Mir passed almost directly overhead. At 12h 46m UTC, its ECI position was calculated to be: x = -4400.594 km, y = 1932.870 km, z = 4760.712 km. Calculating the look angles for both a spherical and oblate Earth yields the results shown in Table 1.\n\nTable 1. Look Angles for Spherical vs. Oblate Earth\n\n           Spherical Earth  Oblate Earth\nAzimuth    118.80°          100.36°\nElevation  80.24°           81.52°\n\nThe pointing error produced by assuming a spherical Earth is 3.17 degrees. For most applications, this error might not be significant. However, in applications involving tracking with high-gain, typically narrow-beamwidth, antennas, an error of 3 degrees can result in a loss of communications.\n\nSo, now that we've completed the calculation of a satellite look angle for an oblate Earth, let's look at how to calculate the sub-point of a satellite in Earth orbit. 
We'll begin by examining the calculations for a spherical Earth first before looking at the case for an oblate Earth.\n\nFirst, let's be sure we understand what we're looking for. The satellite sub-point is that point on the Earth's surface directly below the satellite. For the case of a spherical Earth, this point is the intersection of the line from the center of the Earth to the satellite and the Earth's surface, as shown in Figure 3.", null, "Figure 3. Calculating Satellite Sub-Point—Spherical Earth\n\nGiven the ECI position of the satellite to be [x, y, z], the latitude is", null, "and the (East) longitude is", null, "where θg is the Greenwich Mean Sidereal Time (GMST). The altitude of the satellite would be", null, "where Re is the Earth's circular radius.\n\nAs seen in Figure 4, the calculation for an oblate Earth is somewhat more complicated. The first thing we notice is that our definition of satellite sub-point requires some refinement. The point on the Earth's surface directly below the satellite is not on a line joining the satellite and the center of the Earth. Instead, it is that point on the Earth's surface where the satellite would appear at the zenith.", null, "Figure 4. Calculating Satellite Sub-Point—Oblate Earth\n\nCalculating the longitude of the satellite's sub-point doesn't change. However, to calculate the geodetic latitude of the satellite sub-point, we'll want to begin by approximating φ with φ' (as calculated above) and letting", null, "(for computational efficiency). Then, we'll want to loop through the following calculations", null, "", null, "", null, "until", null, "is within the desired tolerance. To compute the altitude of the satellite above the sub-point,", null, ".\n\nUsing our example of Mir passing over Minneapolis on 1995 November 18 at 12h 46m UTC yields a sub-point at 44.91° N (geodetic) latitude, 92.31° W longitude, and 397.507 km altitude. 
And while we cannot solve for the sub-point directly, the number of iterations required is typically quite small. For this example, the value of", null, "after the first iteration is 0.180537 degrees, after the second iteration it's 0.000574 degrees, and after the third iteration it's 0.000002 degrees.\n\nAdmittedly, some of the differences we've found may seem small, but that will depend upon your tracking requirements. And, since they are not that much more difficult to calculate, there is little reason not to use them. As always, if you have questions or comments on this column, feel free to send me e-mail at [email protected] or write care of Satellite Times. Until next time, keep looking up!" ]
[ null, "http://www.celestrak.com/images/stlogo.gif", null, "http://www.celestrak.com/columns/v02n03/v02n03.gif", null, "http://www.celestrak.com/images/stlogo.gif", null, "http://www.celestrak.com/columns/v02n03/v02n03.gif", null, "http://www.celestrak.com/columns/v02n03/fig-1.gif", null, "http://www.celestrak.com/columns/v02n03/eq-01.gif", null, "http://www.celestrak.com/columns/v02n03/eq-02.gif", null, "http://www.celestrak.com/columns/v02n03/eq-03.gif", null, "http://www.celestrak.com/columns/v02n03/eq-04.gif", null, "http://www.celestrak.com/columns/v02n03/eq-05.gif", null, "http://www.celestrak.com/columns/v02n03/eq-06.gif", null, "http://www.celestrak.com/columns/v02n03/fig-2.gif", null, "http://www.celestrak.com/columns/v02n03/eq-07.gif", null, "http://www.celestrak.com/columns/v02n03/eq-08.gif", null, "http://www.celestrak.com/columns/v02n03/fig-3.gif", null, "http://www.celestrak.com/columns/v02n03/eq-09.gif", null, "http://www.celestrak.com/columns/v02n03/eq-10.gif", null, "http://www.celestrak.com/columns/v02n03/eq-11.gif", null, "http://www.celestrak.com/columns/v02n03/fig-4.gif", null, "http://www.celestrak.com/columns/v02n03/eq-12.gif", null, "http://www.celestrak.com/columns/v02n03/eq-13.gif", null, "http://www.celestrak.com/columns/v02n03/eq-14.gif", null, "http://www.celestrak.com/columns/v02n03/eq-15.gif", null, "http://www.celestrak.com/columns/v02n03/eq-16.gif", null, "http://www.celestrak.com/columns/v02n03/eq-17.gif", null, "http://www.celestrak.com/columns/v02n03/eq-16.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88859415,"math_prob":0.97961223,"size":9347,"snap":"2021-31-2021-39","text_gpt3_token_len":2243,"char_repetition_ratio":0.15230654,"word_repetition_ratio":0.031786397,"special_character_ratio":0.23494169,"punctuation_ratio":0.11594992,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969256,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,5,null,null,null,5,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T09:14:41Z\",\"WARC-Record-ID\":\"<urn:uuid:6f450eca-e368-47e0-890b-6b6ac0b869ce>\",\"Content-Length\":\"19410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:967fa7d1-ba79-4994-a978-51cffab7b288>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2836f0d-3f16-4b08-bc4d-4d7d3016510b>\",\"WARC-IP-Address\":\"104.168.149.178\",\"WARC-Target-URI\":\"http://www.celestrak.com/columns/v02n03/\",\"WARC-Payload-Digest\":\"sha1:33RE2DOR7SLWSVKI44Q2WADPBBDKIXYA\",\"WARC-Block-Digest\":\"sha1:FV2NOCI6WSHVRKM45QXITKIAZX22T6MI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058415.93_warc_CC-MAIN-20210927090448-20210927120448-00544.warc.gz\"}"}
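The geodetic-to-geocentric conversion derived in the column, tan(φ') = (1 − f)² tan(φ), can be checked in a few lines of Python using the WGS-72 flattening quoted above. The code is mine, not Dr. Kelso's; it reproduces the claim illustrated by Figure 2 that the difference peaks near 45° geodetic latitude at just under two-tenths of a degree:

```python
import math

F = 1.0 / 298.26                         # WGS-72 flattening, as in the column

def geocentric_latitude(geodetic_deg):
    """Geodetic -> geocentric latitude via tan(phi') = (1 - f)^2 tan(phi)."""
    phi = math.radians(geodetic_deg)
    return math.degrees(math.atan((1.0 - F) ** 2 * math.tan(phi)))

for phi in (0.0, 30.0, 45.0, 60.0):
    # difference vanishes at the equator and peaks near 45 degrees
    print(phi, phi - geocentric_latitude(phi))
```

At 45° the difference comes out to about 0.19°, matching the "less than two-tenths of a degree" remark in the text.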
https://www.nuget.org/packages/cs-pca/
[ "#", null, "cs-pca 1.0.1\n\n`Install-Package cs-pca -Version 1.0.1`\n`dotnet add package cs-pca --version 1.0.1`\n`<PackageReference Include=\"cs-pca\" Version=\"1.0.1\" />`\nFor projects that support PackageReference, copy this XML node into the project file to reference the package.\n`paket add cs-pca --version 1.0.1`\n`#r \"nuget: cs-pca, 1.0.1\"`\nThe #r directive can be used in F# Interactive, C# scripting and .NET Interactive. Copy this into the interactive tool or source code of the script to reference the package.\n```// Install cs-pca as a Cake Addin\n#addin nuget:?package=cs-pca&version=1.0.1\n\n// Install cs-pca as a Cake Tool\n#tool nuget:?package=cs-pca&version=1.0.1```\n\n## cs-pca\n\nPrincipal Component Analysis implemented in C#\n\n## Install\n\n``````Install-Package cs-pca\n``````\n\n## Usage\n\nThe sample code below shows how to use the library to reduce the number of dimensions or reconstruct the original data from the reduced data:\n\n``````List<double[]> source = GetNormalizedData();\nList<double[]> Z; // PCA output\ndouble variance_retained;\nint K = 5; // dimension of Z (note that Z will have K+1 dimensions, where the first dimension will be ignored)\nPCA.PCADimReducer.CompressData(source, K, out Z, out variance_retained);\n\n// To reconstruct some compressed data point from Z\nList<double[]> compressed_data_point = GetCompressedDataPoints(); // K+1 dimension data points\nList<double[]> uncompressed_data_point = ReconstructData(compressed_data_point, Z);\n``````\n\nThis package has no dependencies.\n\n### NuGet packages\n\nThis package is not used by any NuGet packages.\n\n### GitHub repositories\n\nThis package is not used by any popular GitHub repositories." ]
[ null, "https://api.nuget.org/v3-flatcontainer/cs-pca/1.0.1/icon", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63431925,"math_prob":0.4126395,"size":1868,"snap":"2021-43-2021-49","text_gpt3_token_len":482,"char_repetition_ratio":0.125,"word_repetition_ratio":0.1328125,"special_character_ratio":0.24571735,"punctuation_ratio":0.15041783,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96113616,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T11:08:37Z\",\"WARC-Record-ID\":\"<urn:uuid:d8182ad1-7bcf-4bcf-b5c7-ca33caae594c>\",\"Content-Length\":\"41409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:726fb0b3-bd94-40c0-a12e-bf33ea7c511b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f46d2e6-f9d7-43ff-91a6-dd8bd2259738>\",\"WARC-IP-Address\":\"52.240.159.111\",\"WARC-Target-URI\":\"https://www.nuget.org/packages/cs-pca/\",\"WARC-Payload-Digest\":\"sha1:BHE7P6O6B4IM5UKBFCUKWNPFWWQVT64G\",\"WARC-Block-Digest\":\"sha1:WLMOAX2MXLJO6QGVTAWMNQ7NL6RJ7E5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585305.53_warc_CC-MAIN-20211020090145-20211020120145-00349.warc.gz\"}"}
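For readers without .NET, the compress/reconstruct round trip that the cs-pca sample performs can be sketched in a few lines of NumPy. This is a generic PCA sketch, not the cs-pca implementation (in particular, it has no analogue of cs-pca's extra ignored leading dimension), and the function names are mine:

```python
import numpy as np

def pca_compress(X, k):
    """Project the rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                           # PCA assumes centered data
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                       # compressed data, k dimensions
    variance_retained = float((s[:k] ** 2).sum() / (s ** 2).sum())
    return Z, Vt[:k], mean, variance_retained

def pca_reconstruct(Z, components, mean):
    """Map compressed points back into the original space."""
    return Z @ components + mean

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))               # stand-in for GetNormalizedData()
Z, comps, mu, vr = pca_compress(X, 5)
X_hat = pca_reconstruct(Z, comps, mu)       # lossy reconstruction from 5 dims
```

As in the C# sample, `variance_retained` reports how much of the data's total variance survives the projection; with k equal to the full dimension the reconstruction is exact.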
https://gemma.feri.um.si/projects/industrial-projects/robust-trapezoidation-and-triangulation-algorithm-for-simple-polygons/
[ "## Robust Trapezoidation and Triangulation algorithm for simple polygons\n\nThe first industrial project of GeMMA aimed to provide two robust dynamic link libraries for the German branch of a renowned engineering software producer. We developed a novel polygon trapezoidation algorithm and adapted an existing polygon triangulation algorithm. Both are based on a sweep-line paradigm. They can handle simple (non-self-intersecting) polygons with or without holes; the holes may be separately trapezoidated/triangulated on request. After months of intensive testing in AUTODESK's testing department, our solutions were accepted for incorporation in a commercial software product used worldwide, and GeMMA was honoured with AUTODESK's certificate of quality. Besides feasibility, correctness and speed, we also had to achieve a high level of numerical robustness by avoiding all arithmetic operations that could produce inexact results: floating-point divisions, trigonometric functions and square-root calculations.\n\nThe proposed polygon trapezoidation algorithm represents an original approach. As the sweep-line glides over the plane, a set of so-called open trapezoids is generated and maintained. An actual trapezoid is cut from an open one by adding a horizontal side in accordance with the context of the processed vertex. In this manner, a simple polygon is partitioned into a set of trapezoids with horizontal parallel sides. These may be bounded either by original or by newly added vertices. Some trapezoids may also degenerate into triangles. In comparison with Seidel's method, the fastest at the time, our algorithm performed 30 to 100% faster, although both methods require O(n log n) time with respect to the number of vertices n.\n\nAnother problem considered in the project was polygon triangulation. 
One could use the trapezoidation and then partition each trapezoid into two triangles, but our customers preferred a solution without the additional vertices that typically appear during trapezoidation. We utilized a quite traditional two-step approach, which first decomposes the input polygon into so-called y-monotone pieces as proposed by Garey, Johnson, Preparata and Tarjan (1978), and then triangulates each y-monotone polygon separately with the method introduced by Lee and Preparata (1977). To achieve the required speed, we slightly adapted the algorithm by utilizing some redundant data structures, particularly the so-called neighbour tree for each vertex.\n\nThe theoretical analysis has shown that the first step runs in O(n log NMS), where NMS is the number of monotone pieces, while the triangulation itself requires linear O(n) time. Since the sweep-line algorithm must be pre-processed by sorting, and since the popular quicksort performs best on random input but is slow in intuitively the simplest cases (corresponding to polygons with low NMS), this project initiated our increased interest in developing adaptive sorting algorithms. Our original solutions include vertex sort, smart quicksort and finally the (still unpublished) smart merge sort." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9317705,"math_prob":0.9056323,"size":3152,"snap":"2020-34-2020-40","text_gpt3_token_len":627,"char_repetition_ratio":0.12229987,"word_repetition_ratio":0.0,"special_character_ratio":0.17385787,"punctuation_ratio":0.0806142,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96065205,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T22:13:00Z\",\"WARC-Record-ID\":\"<urn:uuid:90a20fbb-db57-4944-bbc3-cad99efbe323>\",\"Content-Length\":\"6731\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3053081-b405-4dfb-82d8-84d77aaa8a1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4996ccc2-b4d5-4044-a8d9-50866b82e639>\",\"WARC-IP-Address\":\"164.8.9.62\",\"WARC-Target-URI\":\"https://gemma.feri.um.si/projects/industrial-projects/robust-trapezoidation-and-triangulation-algorithm-for-simple-polygons/\",\"WARC-Payload-Digest\":\"sha1:4HNDV65LXKELVK4MI5S7L2UBFT2Y7BBL\",\"WARC-Block-Digest\":\"sha1:7G5ZFRQH76R2RRW5KRGGQK5UOYSBXPQF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400228998.45_warc_CC-MAIN-20200925213517-20200926003517-00395.warc.gz\"}"}
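GeMMA's sweep-line pipeline (monotone decomposition followed by Lee–Preparata triangulation) is too long to reproduce here, but its output — a simple polygon split into exactly n − 2 triangles — can be illustrated with the much simpler, O(n²) textbook ear-clipping method. The sketch below is generic Python, not GeMMA's code. In the spirit of the robustness requirement above, it uses only additions, subtractions and multiplications (exact on integer coordinates): no divisions, trigonometric functions or square roots. It assumes a CCW simple polygon without holes and with no three consecutive collinear vertices:

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); exact for int coords."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside(p, a, b, c):
    """True if p lies in (or on the boundary of) CCW triangle abc."""
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return not (min(d1, d2, d3) < 0 and max(d1, d2, d3) > 0)

def ear_clip(poly):
    """Triangulate a simple CCW polygon (no holes) into n - 2 triangles."""
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for i in range(len(idx)):
            a, b, c = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            if cross(poly[a], poly[b], poly[c]) <= 0:
                continue                    # reflex corner: not an ear
            if any(inside(poly[v], poly[a], poly[b], poly[c])
                   for v in idx if v not in (a, b, c)):
                continue                    # another vertex blocks the ear
            tris.append((a, b, c))
            del idx[i]                      # clip the ear tip
            break
    tris.append(tuple(idx))
    return tris
```

For example, an L-shaped hexagon yields 4 triangles whose signed areas sum to the polygon's area — the same invariant the project's far faster sweep-line method guarantees.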
https://quant.stackexchange.com/questions/16280/black-litterman-how-to-choose-the-uncertainty-in-the-views-omega-for-smooth/16281
[ "# Black-Litterman, how to choose the uncertainty in the views $\Omega$ for smooth transitions from prior to posterior\n\nIn Black-Litterman we get a new vector of expected returns of the form: \begin{align} \Pi_{BL} = \Pi + \underbrace{\tau \Sigma P^T[P\tau\Sigma P^T+\Omega]^{-1}}_{\text{correction}}[Q-P\Pi] \end{align} where $P$ is the pick matrix and we mix the prior $\Pi$ with the expected value of the views $Q$. $\Sigma$ is the historical covariance matrix and $\Omega$ is the covariance matrix of the views.\n\nLet us assume that $P$ is just the identity matrix and look at the choice $\Omega = \tau\Sigma$, then we see that $$\Pi_{BL} = \frac12 \Pi + \frac12 Q,$$ thus we have a 50:50 mix and the covariance matrix of the views does not affect the posterior at all - it is just a trivial mixture. This is against my intuition. Furthermore, optimal weights using this $\Pi_{BL}$ will differ considerably from the optimal weights of the prior (of course depending on $Q$).\n\nIf we assume $\Omega = \text{diag}(\tau \Sigma)$ then I cannot find a closed form for $\Pi_{BL}$, but apparently the posterior is more compatible with the prior and the optimal weights are more similar than in the other setting.\n\nMy question: how can I choose $\Omega$ best in order to get results that do not deviate too much from my prior? I know that there are theories in the literature (e.g. The Black-Litterman Model In Detail), but I can't see through them. What is used in practice?\n\nIn practice, $\Omega$ (the covariance of the investor views) often 'inherits' the market covariance $\Sigma$. A convenient choice is\n\n$\Omega = \left( 1/c -1 \right) P \Sigma P^T$\n\nwhere $c$ is a confidence parameter: the case $c \rightarrow 1$ corresponds to a strongly peaked distribution of views (the investor views dominate the market), while $c \rightarrow 0$ gives an infinitely disperse distribution where investor views have no influence. 
Tuning $c$ allows you to deviate smoothly from the prior $\\Pi$.\n\nThis choice for $\\Omega$ is proposed in Attilio Meucci's Risk and Asset Allocation, chapter 9.2.\n\nEdit: In the example you give ($P$ is the identity matrix and $\\Omega = \\tau \\Sigma$), the investor provides views on each asset with the same uncertainty as the market. In that case, the posterior return $\\Pi_{BL}$ is just the average of market prior $\\Pi$ and investor expectation $Q$. This seems plausible by symmetry: if you switch market and investor, $\\Pi_{BL}$ stays the same.\n\n• But you have to agree that setting $1/c-1 = \\tau$ leads to what I write above ... I will play with your $c$ factor - thanks for your answer. – Ric Jan 20 '15 at 15:49\n• Or where does $\\tau$ enter? do we have 2 factors: $(1/c-1) \\tau$? Then one would see things quite clearly in the correction term above ... – Ric Jan 20 '15 at 16:15\n• yes. I would deviate from this choice only if you can assign different confidences to your individual views (which is a fairly common situation in practice). – Felix Jan 20 '15 at 16:24\n• You might also refer to Equations 21-23 in papers.ssrn.com/sol3/papers.cfm?abstract_id=1213325 – John Jan 20 '15 at 16:28\n• no, it is $(1/c-1)$. $\\tau \\Sigma$ is the covariance of the posterior, assumed to be a normal distribution in the Black Litterman framework. – Felix Jan 20 '15 at 16:41\n\nWhen I implemented a BL model, I chose to do the omega optimization using the technique Idzorek proposed here:\n\nhttps://corporate.morningstar.com/ib/documents/MethodologyDocuments/IBBAssociates/BlackLitterman.pdf\n\nIt's a numerical procedure though.\n\n• Thanks for the link to the full publication - I have already read summaries of it ... – Ric Jan 26 '15 at 7:35" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.895965,"math_prob":0.98715985,"size":1340,"snap":"2021-21-2021-25","text_gpt3_token_len":349,"char_repetition_ratio":0.11302395,"word_repetition_ratio":0.0,"special_character_ratio":0.27014926,"punctuation_ratio":0.061068702,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974595,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-17T20:44:35Z\",\"WARC-Record-ID\":\"<urn:uuid:0531f1f9-1e90-474e-897b-5a00fd617680>\",\"Content-Length\":\"180334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01a80c72-d3ea-4609-afcd-df3f173d2191>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cda0f04-594b-4b8b-b7fd-4d0dcb52e407>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/16280/black-litterman-how-to-choose-the-uncertainty-in-the-views-omega-for-smooth/16281\",\"WARC-Payload-Digest\":\"sha1:7OVJX3NIFZGEOQ3J6OKFJ3SE4SJ3JDLE\",\"WARC-Block-Digest\":\"sha1:XGP2KZI6NDIBOL7333CXOOLUHF5WPQYJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243992440.69_warc_CC-MAIN-20210517180757-20210517210757-00185.warc.gz\"}"}
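The two limiting behaviours discussed in the exchange above — the exact 50:50 mixture for $\Omega = \tau\Sigma$ with $P = I$, and the prior/views limits of the confidence parametrization $\Omega = (1/c - 1) P \Sigma P^T$ — can be checked numerically. A minimal sketch, assuming `numpy`; the two-asset numbers are made up for illustration, not calibrated:

```python
import numpy as np

def black_litterman_posterior(Pi, Sigma, P, Q, Omega, tau):
    """Pi_BL = Pi + tau*Sigma*P' (P*tau*Sigma*P' + Omega)^{-1} (Q - P*Pi)."""
    A = P @ (tau * Sigma) @ P.T + Omega
    return Pi + (tau * Sigma) @ P.T @ np.linalg.solve(A, Q - P @ Pi)

# Toy 2-asset example (illustrative numbers only).
Pi = np.array([0.04, 0.06])          # market-implied prior returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])     # historical covariance
P = np.eye(2)                        # direct views on each asset
Q = np.array([0.08, 0.02])           # investor's view returns
tau = 0.05

# Omega = tau*Sigma with P = I gives exactly the 50:50 mixture.
post_mix = black_litterman_posterior(Pi, Sigma, P, Q, tau * Sigma, tau)
assert np.allclose(post_mix, 0.5 * (Pi + Q))

# Meucci-style Omega = (1/c - 1) * P Sigma P^T, the convention quoted above:
# c near 1 -> views dominate (posterior ~ Q); c near 0 -> prior dominates.
for c, target in [(1 - 1e-9, Q), (1e-9, Pi)]:
    Omega = (1.0 / c - 1.0) * P @ Sigma @ P.T
    post = black_litterman_posterior(Pi, Sigma, P, Q, Omega, tau)
    assert np.allclose(post, target)
```

Note that in this convention $\tau$ only rescales both terms inside the inverse when $\Omega \propto P\Sigma P^T$, which is why the comments above debate whether the effective factor is $(1/c-1)$ alone or $(1/c-1)\tau$.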
https://isabelle.in.tum.de/repos/isabelle/rev/d47eabd80e59?revcount=30
[ "author hoelzl Mon, 14 Mar 2011 14:37:41 +0100 changeset 41975 d47eabd80e59 parent 41974 6e691abef08f child 41976 3fdbc7d5b525\nsimplified definition of open_extreal\n```--- a/src/HOL/Library/Extended_Reals.thy\tMon Mar 14 14:37:40 2011 +0100\n+++ b/src/HOL/Library/Extended_Reals.thy\tMon Mar 14 14:37:41 2011 +0100\n@@ -1076,128 +1076,89 @@\n\nsubsubsection \"Topological space\"\n\n+lemma\n+ shows extreal_max[simp]: \"extreal (max x y) = max (extreal x) (extreal y)\"\n+ and extreal_min[simp]: \"extreal (min x y) = min (extreal x) (extreal y)\"\n+ by (simp_all add: min_def max_def)\n+\ninstantiation extreal :: topological_space\nbegin\n\n-definition \"open A \\<longleftrightarrow>\n- (\\<exists>T. open T \\<and> extreal ` T = A - {\\<infinity>, -\\<infinity>})\n+definition \"open A \\<longleftrightarrow> open (extreal -` A)\n\\<and> (\\<infinity> \\<in> A \\<longrightarrow> (\\<exists>x. {extreal x <..} \\<subseteq> A))\n\\<and> (-\\<infinity> \\<in> A \\<longrightarrow> (\\<exists>x. {..<extreal x} \\<subseteq> A))\"\n\n-lemma open_PInfty: \"open A ==> \\<infinity> : A ==> (EX x. {extreal x<..} <= A)\"\n+lemma open_PInfty: \"open A \\<Longrightarrow> \\<infinity> \\<in> A \\<Longrightarrow> (\\<exists>x. {extreal x<..} \\<subseteq> A)\"\nunfolding open_extreal_def by auto\n\n-lemma open_MInfty: \"open A ==> (-\\<infinity>) : A ==> (EX x. {..<extreal x} <= A)\"\n+lemma open_MInfty: \"open A \\<Longrightarrow> -\\<infinity> \\<in> A \\<Longrightarrow> (\\<exists>x. 
{..<extreal x} \\<subseteq> A)\"\nunfolding open_extreal_def by auto\n\n-lemma open_PInfty2: assumes \"open A\" \"\\<infinity> : A\" obtains x where \"{extreal x<..} <= A\"\n+lemma open_PInfty2: assumes \"open A\" \"\\<infinity> \\<in> A\" obtains x where \"{extreal x<..} \\<subseteq> A\"\nusing open_PInfty[OF assms] by auto\n\n-lemma open_MInfty2: assumes \"open A\" \"(-\\<infinity>) : A\" obtains x where \"{..<extreal x} <= A\"\n+lemma open_MInfty2: assumes \"open A\" \"-\\<infinity> \\<in> A\" obtains x where \"{..<extreal x} \\<subseteq> A\"\nusing open_MInfty[OF assms] by auto\n\n-lemma extreal_openE: assumes \"open A\" obtains A' x y where\n- \"open A'\" \"extreal ` A' = A - {\\<infinity>, (-\\<infinity>)}\"\n- \"\\<infinity> : A ==> {extreal x<..} <= A\"\n- \"(-\\<infinity>) : A ==> {..<extreal y} <= A\"\n+lemma extreal_openE: assumes \"open A\" obtains x y where\n+ \"open (extreal -` A)\"\n+ \"\\<infinity> \\<in> A \\<Longrightarrow> {extreal x<..} \\<subseteq> A\"\n+ \"-\\<infinity> \\<in> A \\<Longrightarrow> {..<extreal y} \\<subseteq> A\"\nusing assms open_extreal_def by auto\n\ninstance\nproof\nlet ?U = \"UNIV::extreal set\"\nshow \"open ?U\" unfolding open_extreal_def\n- by (auto intro!: exI[of _ \"UNIV\"] exI[of _ 0])\n+ by (auto intro!: exI[of _ 0])\nnext\nfix S T::\"extreal set\" assume \"open S\" and \"open T\"\n- from `open S`[THEN extreal_openE] guess S' xS yS . note S' = this\n- from `open T`[THEN extreal_openE] guess T' xT yT . note T' = this\n-\n- have \"extreal ` (S' Int T') = (extreal ` S') Int (extreal ` T')\" by auto\n- also have \"... = S Int T - {\\<infinity>, (-\\<infinity>)}\" using S' T' by auto\n- finally have \"extreal ` (S' Int T') = S Int T - {\\<infinity>, (-\\<infinity>)}\" by auto\n- moreover have \"open (S' Int T')\" using S' T' by auto\n- moreover\n- { assume a: \"\\<infinity> : S Int T\"\n- hence \"EX x. 
{extreal x<..} <= S Int T\"\n- apply(rule_tac x=\"max xS xT\" in exI)\n- proof-\n- { fix x assume *: \"extreal (max xS xT) < x\"\n- hence \"x : S Int T\" apply (cases x, auto simp: max_def split: split_if_asm)\n- using a S' T' by auto\n- } thus \"{extreal (max xS xT)<..} <= S Int T\" by auto\n- qed }\n- moreover\n- { assume a: \"(-\\<infinity>) : S Int T\"\n- hence \"EX x. {..<extreal x} <= S Int T\"\n- apply(rule_tac x=\"min yS yT\" in exI)\n- proof-\n- { fix x assume *: \"extreal (min yS yT) > x\"\n- hence \"x<extreal yS & x<extreal yT\" by (cases x) auto\n- hence \"x : S Int T\" using a S' T' by auto\n- } thus \"{..<extreal (min yS yT)} <= S Int T\" by auto\n- qed }\n- ultimately show \"open (S Int T)\" unfolding open_extreal_def by auto\n+ from `open S`[THEN extreal_openE] guess xS yS .\n+ moreover from `open T`[THEN extreal_openE] guess xT yT .\n+ ultimately have\n+ \"open (extreal -` (S \\<inter> T))\"\n+ \"\\<infinity> \\<in> S \\<inter> T \\<Longrightarrow> {extreal (max xS xT) <..} \\<subseteq> S \\<inter> T\"\n+ \"-\\<infinity> \\<in> S \\<inter> T \\<Longrightarrow> {..< extreal (min yS yT)} \\<subseteq> S \\<inter> T\"\n+ by auto\n+ then show \"open (S Int T)\" unfolding open_extreal_def by blast\nnext\n- fix K assume openK: \"ALL S : K. open (S:: extreal set)\"\n- hence \"ALL S:K. EX T. open T & extreal ` T = S - {\\<infinity>, (-\\<infinity>)}\" by (auto simp: open_extreal_def)\n- from bchoice[OF this] guess T .. 
note T = this[rule_format]\n-\n- show \"open (Union K)\" unfolding open_extreal_def\n- proof (safe intro!: exI[of _ \"Union (T ` K)\"])\n- fix x S assume \"x : T S\" \"S : K\"\n- with T[OF `S : K`] show \"extreal x : Union K\" by auto\n- next\n- fix x S assume x: \"x ~: extreal ` (Union (T ` K))\" \"S : K\" \"x : S\" \"x ~= \\<infinity>\"\n- hence \"x ~: extreal ` (T S)\"\n- by (auto simp: image_UN UN_simps[symmetric] simp del: UN_simps)\n- thus \"x=(-\\<infinity>)\" using T[OF `S : K`] `x : S` `x ~= \\<infinity>` by auto\n- next\n- fix S assume \"\\<infinity> : S\" \"S : K\"\n- from openK[rule_format, OF `S : K`, THEN extreal_openE] guess S' x .\n- from this(3) `\\<infinity> : S`\n- show \"EX x. {extreal x<..} <= Union K\"\n- by (auto intro!: exI[of _ x] bexI[OF _ `S : K`])\n- next\n- fix S assume \"(-\\<infinity>) : S\" \"S : K\"\n- from openK[rule_format, OF `S : K`, THEN extreal_openE] guess S' x y .\n- from this(4) `(-\\<infinity>) : S`\n- show \"EX y. {..<extreal y} <= Union K\"\n- by (auto intro!: exI[of _ y] bexI[OF _ `S : K`])\n- next\n- from T[THEN conjunct1] show \"open (Union (T`K))\" by auto\n- qed auto\n+ fix K :: \"extreal set set\" assume \"\\<forall>S\\<in>K. open S\"\n+ then have *: \"\\<forall>S. \\<exists>x y. 
S \\<in> K \\<longrightarrow> open (extreal -` S) \\<and>\n+ (\\<infinity> \\<in> S \\<longrightarrow> {extreal x <..} \\<subseteq> S) \\<and> (-\\<infinity> \\<in> S \\<longrightarrow> {..< extreal y} \\<subseteq> S)\"\n+ by (auto simp: open_extreal_def)\n+ then show \"open (Union K)\" unfolding open_extreal_def\n+ proof (intro conjI impI)\n+ show \"open (extreal -` \\<Union>K)\"\n+ using *[unfolded choice_iff] by (auto simp: vimage_Union)\n+ qed ((metis UnionE Union_upper subset_trans *)+)\nqed\nend\n\n-lemma open_extreal_lessThan[simp]:\n- \"open {..< a :: extreal}\"\n-proof (cases a)\n- case (real x)\n- then show ?thesis unfolding open_extreal_def\n- proof (safe intro!: exI[of _ \"{..< x}\"])\n- fix y assume \"y < extreal x\"\n- moreover assume \"y ~: (extreal ` {..<x})\"\n- ultimately have \"y ~= extreal (real y)\" using real by (cases y) auto\n- thus \"y = (-\\<infinity>)\" apply (cases y) using `y < extreal x` by auto\n- qed auto\n-qed (auto simp: open_extreal_def)\n-\n-lemma open_extreal_greaterThan[simp]:\n+lemma continuous_on_extreal[intro, simp]: \"continuous_on A extreal\"\n+ unfolding continuous_on_topological open_extreal_def by auto\n+\n+lemma continuous_at_extreal[intro, simp]: \"continuous (at x) extreal\"\n+ using continuous_on_eq_continuous_at[of UNIV] by auto\n+\n+lemma continuous_within_extreal[intro, simp]: \"x \\<in> A \\<Longrightarrow> continuous (at x within A) extreal\"\n+ using continuous_on_eq_continuous_within[of A] by auto\n+\n+lemma open_extreal_lessThan[intro, simp]: \"open {..< a :: extreal}\"\n+proof -\n+ have \"\\<And>x. 
extreal -` {..<extreal x} = {..< x}\"\n+ \"extreal -` {..< \\<infinity>} = UNIV\" \"extreal -` {..< -\\<infinity>} = {}\" by auto\n+ then show ?thesis by (cases a) (auto simp: open_extreal_def)\n+qed\n+\n+lemma open_extreal_greaterThan[intro, simp]:\n\"open {a :: extreal <..}\"\n-proof (cases a)\n- case (real x)\n- then show ?thesis unfolding open_extreal_def\n- proof (safe intro!: exI[of _ \"{x<..}\"])\n- fix y assume \"extreal x < y\"\n- moreover assume \"y ~: (extreal ` {x<..})\"\n- moreover assume \"y ~= \\<infinity>\"\n- ultimately have \"y ~= extreal (real y)\" using real by (cases y) auto\n- hence False apply (cases y) using `extreal x < y` `y ~= \\<infinity>` by auto\n- thus \"y = (-\\<infinity>)\" by auto\n- qed auto\n-qed (auto simp: open_extreal_def)\n-\n-lemma extreal_open_greaterThanLessThan[simp]: \"open {a::extreal <..< b}\"\n+proof -\n+ have \"\\<And>x. extreal -` {extreal x<..} = {x<..}\"\n+ \"extreal -` {\\<infinity><..} = {}\" \"extreal -` {-\\<infinity><..} = UNIV\" by auto\n+ then show ?thesis by (cases a) (auto simp: open_extreal_def)\n+qed\n+\n+lemma extreal_open_greaterThanLessThan[intro, simp]: \"open {a::extreal <..< b}\"\nunfolding greaterThanLessThan_def by auto\n\nlemma closed_extreal_atLeast[simp, intro]: \"closed {a :: extreal ..}\"\n@@ -1227,19 +1188,17 @@\nobtains e where \"e>0\" \"{x-e <..< x+e} \\<subseteq> S\"\nproof-\nobtain m where m_def: \"x = extreal m\" using assms by (cases x) auto\n- obtain A where \"open A\" and A_def: \"extreal ` A = S - {\\<infinity>,(-\\<infinity>)}\"\n- using assms by (auto elim!: extreal_openE)\n- hence \"m : A\" using m_def assms by auto\n- from this obtain e where e_def: \"e>0 & ball m e <= A\"\n- using open_contains_ball[of A] `open A` by auto\n- moreover have \"ball m e = {m-e <..< m+e}\" unfolding ball_def dist_norm by auto\n- ultimately have *: \"{m-e <..< m+e} <= A\" using e_def by auto\n- { fix y assume y_def: \"y:{x-extreal e <..< x+extreal e}\"\n- from this obtain z where z_def: \"y = 
extreal z\" by (cases y) auto\n- hence \"z:A\" using y_def m_def * by auto\n- hence \"y:S\" using z_def A_def by auto\n- } hence \"{x-extreal e <..< x+extreal e} <= S\" by auto\n- thus thesis apply- apply(rule that[of \"extreal e\"]) using e_def by auto\n+ from `open S` have \"open (extreal -` S)\" by (rule extreal_openE)\n+ then obtain e where \"0 < e\" and e: \"ball m e \\<subseteq> extreal -` S\"\n+ using `x \\<in> S` unfolding open_contains_ball m_def by force\n+ show thesis\n+ proof (intro that subsetI)\n+ show \"0 < extreal e\" using `0 < e` by auto\n+ fix y assume \"y \\<in> {x - extreal e<..<x + extreal e}\"\n+ then obtain t where \"y = extreal t\" \"t \\<in> ball m e\"\n+ unfolding m_def by (cases y) (auto simp: ball_def dist_real_def)\n+ then show \"y \\<in> S\" using e by auto\n+ qed\nqed\n\nlemma extreal_open_cont_interval2:\n@@ -1266,41 +1225,36 @@\nfixes S :: \"extreal set\"\nassumes \"open S\"\nshows \"open (uminus ` S)\"\n-proof-\n- obtain T x y where T_def: \"open T & extreal ` T = S - {\\<infinity>, (-\\<infinity>)} &\n- (\\<infinity> : S --> {extreal x<..} <= S) & ((-\\<infinity>) : S --> {..<extreal y} <= S)\"\n- using assms extreal_openE[of S] by metis\n- have \"extreal ` uminus ` T = uminus ` extreal ` T\" apply auto\n- by (metis imageI extreal_uminus_uminus uminus_extreal.simps)\n- also have \"...=uminus ` (S - {\\<infinity>, (-\\<infinity>)})\" using T_def by auto\n- finally have \"extreal ` uminus ` T = uminus ` S - {\\<infinity>, (-\\<infinity>)}\" by (auto simp: extreal_uminus_reorder)\n- moreover have \"open (uminus ` T)\" using T_def open_negations[of T] by auto\n- ultimately have \"EX T. open T & extreal ` T = uminus ` S - {\\<infinity>, (-\\<infinity>)}\" by auto\n- moreover\n- { assume \"\\<infinity>: uminus ` S\"\n- hence \"(-\\<infinity>) : S\" by (metis image_iff extreal_uminus_uminus)\n- hence \"uminus ` {..<extreal y} <= uminus ` S\" using T_def by (intro image_mono) auto\n- hence \"EX x. 
{extreal x<..} <= uminus ` S\" using extreal_uminus_lessThan by auto\n- } moreover\n- { assume \"(-\\<infinity>): uminus ` S\"\n- hence \"\\<infinity> : S\" by (metis image_iff extreal_uminus_uminus)\n- hence \"uminus ` {extreal x<..} <= uminus ` S\" using T_def by (intro image_mono) auto\n- hence \"EX y. {..<extreal y} <= uminus ` S\" using extreal_uminus_greaterThan by auto\n- }\n- ultimately show ?thesis unfolding open_extreal_def by auto\n+ unfolding open_extreal_def\n+proof (intro conjI impI)\n+ obtain x y where S: \"open (extreal -` S)\"\n+ \"\\<infinity> \\<in> S \\<Longrightarrow> {extreal x<..} \\<subseteq> S\" \"-\\<infinity> \\<in> S \\<Longrightarrow> {..< extreal y} \\<subseteq> S\"\n+ using `open S` unfolding open_extreal_def by auto\n+ have \"extreal -` uminus ` S = uminus ` (extreal -` S)\"\n+ proof safe\n+ fix x y assume \"extreal x = - y\" \"y \\<in> S\"\n+ then show \"x \\<in> uminus ` extreal -` S\" by (cases y) auto\n+ next\n+ fix x assume \"extreal x \\<in> S\"\n+ then show \"- x \\<in> extreal -` uminus ` S\"\n+ by (auto intro: image_eqI[of _ _ \"extreal x\"])\n+ qed\n+ then show \"open (extreal -` uminus ` S)\"\n+ using S by (auto intro: open_negations)\n+ { assume \"\\<infinity> \\<in> uminus ` S\"\n+ then have \"-\\<infinity> \\<in> S\" by (metis image_iff extreal_uminus_uminus)\n+ then have \"uminus ` {..<extreal y} \\<subseteq> uminus ` S\" using S by (intro image_mono) auto\n+ then show \"\\<exists>x. {extreal x<..} \\<subseteq> uminus ` S\" using extreal_uminus_lessThan by auto }\n+ { assume \"-\\<infinity> \\<in> uminus ` S\"\n+ then have \"\\<infinity> : S\" by (metis image_iff extreal_uminus_uminus)\n+ then have \"uminus ` {extreal x<..} <= uminus ` S\" using S by (intro image_mono) auto\n+ then show \"\\<exists>y. 
{..<extreal y} <= uminus ` S\" using extreal_uminus_greaterThan by auto }\nqed\n\nlemma extreal_uminus_complement:\nfixes S :: \"extreal set\"\n- shows \"(uminus ` (- S)) = (- uminus ` S)\"\n-proof-\n-{ fix x\n- have \"x:uminus ` (- S) <-> -x:(- S)\" by (metis image_iff extreal_uminus_uminus)\n- also have \"... <-> x:(- uminus ` S)\"\n- by (metis ComplI Compl_iff image_eqI extreal_uminus_uminus extreal_minus_minus_image)\n- finally have \"x:uminus ` (- S) <-> x:(- uminus ` S)\" by auto\n-} thus ?thesis by auto\n-qed\n+ shows \"uminus ` (- S) = - uminus ` S\"\n+ by (auto intro!: bij_image_Compl_eq surjI[of _ uminus] simp: bij_betw_def)\n\nlemma extreal_closed_uminus:\nfixes S :: \"extreal set\"\n@@ -1309,7 +1263,6 @@\nusing assms unfolding closed_def\nusing extreal_open_uminus[of \"- S\"] extreal_uminus_complement by auto\n\n-\nlemma not_open_extreal_singleton:\n\"~(open {a :: extreal})\"\nproof(rule ccontr)\n@@ -1491,22 +1444,17 @@\nqed\nqed\n\n-lemma open_extreal: assumes \"open S\" shows \"open (extreal ` S)\"\n- unfolding open_extreal_def apply(rule,rule,rule,rule assms) by auto\n-\n-lemma open_real_of_extreal:\n- fixes S :: \"extreal set\" assumes \"open S\"\n- shows \"open (real ` (S - {\\<infinity>, -\\<infinity>}))\"\n-proof -\n- from `open S` obtain T where T: \"open T\" \"S - {\\<infinity>, -\\<infinity>} = extreal ` T\"\n- unfolding open_extreal_def by auto\n- show ?thesis using T by (simp add: image_image)\n-qed\n+lemma inj_extreal[simp]: \"inj_on extreal A\"\n+ unfolding inj_on_def by auto\n+\n+lemma open_extreal: \"open S \\<Longrightarrow> open (extreal ` S)\"\n+ by (auto simp: inj_vimage_image_eq open_extreal_def)\n+\n+lemma open_extreal_vimage: \"open S \\<Longrightarrow> open (extreal -` S)\"\n+ unfolding open_extreal_def by auto\n\nsubsubsection {* Convergent sequences *}\n\n-lemma inj_extreal[simp, intro]: \"inj_on extreal A\" by (auto intro: inj_onI)\n-\nlemma lim_extreal[simp]:\n\"((\\<lambda>n. 
extreal (f n)) ---> extreal x) net \\<longleftrightarrow> (f ---> x) net\" (is \"?l = ?r\")\nproof (intro iffI topological_tendstoI)\n@@ -1516,12 +1464,9 @@\nby (simp add: inj_image_mem_iff)\nnext\nfix S assume \"?r\" \"open S\" \"extreal x \\<in> S\"\n- have *: \"\\<And>x. x \\<in> real ` (S - {\\<infinity>, - \\<infinity>}) \\<longleftrightarrow> extreal x \\<in> S\"\n- apply (safe intro!: rev_image_eqI)\n- by (case_tac xa) auto\nshow \"eventually (\\<lambda>x. extreal (f x) \\<in> S) net\"\n- using `?r`[THEN topological_tendstoD, OF open_real_of_extreal, OF `open S`]\n- using `extreal x \\<in> S` by (simp add: *)\n+ using `?r`[THEN topological_tendstoD, OF open_extreal_vimage, OF `open S`]\n+ using `extreal x \\<in> S` by auto\nqed\n\nlemma lim_real_of_extreal[simp]:\n@@ -1744,21 +1689,18 @@\nobtain r where r[simp]: \"m = extreal r\" using m by (cases m) auto\nobtain p where p[simp]: \"t = extreal p\" using t by (cases t) auto\nhave \"r \\<noteq> 0\" \"0 < r\" and m': \"m \\<noteq> \\<infinity>\" \"m \\<noteq> -\\<infinity>\" \"m \\<noteq> 0\" using m by auto\n- from `open S`[THEN extreal_openE] guess T l u . note T = this\n+ from `open S`[THEN extreal_openE] guess l u . note T = this\nlet ?f = \"(\\<lambda>x. m * x + t)\"\nshow ?thesis unfolding open_extreal_def\nproof (intro conjI impI exI subsetI)\n- show \"open ((\\<lambda>x. r*x + p)`T)\"\n- using open_affinity[OF `open T` `r \\<noteq> 0`] by (auto simp: ac_simps)\n- have affine_infy: \"?f ` {\\<infinity>, - \\<infinity>} = {\\<infinity>, -\\<infinity>}\"\n- using `r \\<noteq> 0` by auto\n- have \"extreal ` (\\<lambda>x. r * x + p) ` T = ?f ` (extreal ` T)\"\n- by (simp add: image_image)\n- also have \"\\<dots> = ?f ` (S - {\\<infinity>, -\\<infinity>})\"\n- using T(2) by simp\n- also have \"\\<dots> = ?f ` S - {\\<infinity>, -\\<infinity>}\"\n- using extreal_inj_affinity[OF m' t] by (simp only: image_set_diff affine_infy)\n- finally show \"extreal ` (\\<lambda>x. 
r * x + p) ` T = ?f ` S - {\\<infinity>, -\\<infinity>}\" .\n+ have \"extreal -` ?f ` S = (\\<lambda>x. r * x + p) ` (extreal -` S)\"\n+ proof safe\n+ fix x y assume \"extreal y = m * x + t\" \"x \\<in> S\"\n+ then show \"y \\<in> (\\<lambda>x. r * x + p) ` extreal -` S\"\n+ using `r \\<noteq> 0` by (cases x) (auto intro!: image_eqI[of _ _ \"real x\"] split: split_if_asm)\n+ qed force\n+ then show \"open (extreal -` ?f ` S)\"\n+ using open_affinity[OF T(1) `r \\<noteq> 0`] by (auto simp: ac_simps)\nnext\nassume \"\\<infinity> \\<in> ?f`S\" with `0 < r` have \"\\<infinity> \\<in> S\" by auto\nfix x assume \"x \\<in> {extreal (r * l + p)<..}\"\n@@ -1769,7 +1711,7 @@\nusing m t by (cases rule: extreal3_cases[of m x t]) auto\nhave \"extreal l < (x - t)/m\"\nusing m t by (simp add: extreal_less_divide_pos extreal_less_minus)\n- then show \"(x - t)/m \\<in> S\" using T(3)[OF `\\<infinity> \\<in> S`] by auto\n+ then show \"(x - t)/m \\<in> S\" using T(2)[OF `\\<infinity> \\<in> S`] by auto\nqed\nnext\nassume \"-\\<infinity> \\<in> ?f`S\" with `0 < r` have \"-\\<infinity> \\<in> S\" by auto\n@@ -1781,7 +1723,7 @@\nusing m t by (cases rule: extreal3_cases[of m x t]) auto\nhave \"(x - t)/m < extreal u\"\nusing m t by (simp add: extreal_divide_less_pos extreal_minus_less)\n- then show \"(x - t)/m \\<in> S\" using T(4)[OF `-\\<infinity> \\<in> S`] by auto\n+ then show \"(x - t)/m \\<in> S\" using T(3)[OF `-\\<infinity> \\<in> S`] by auto\nqed\nqed\nqed\n@@ -1864,12 +1806,9 @@\nproof (rule topological_tendstoI, unfold eventually_sequentially)\nobtain rx where rx_def: \"x=extreal rx\" using assms by (cases x) auto\nfix S assume \"open S\" \"x : S\"\n- then obtain A where \"open A\" and A_eq: \"extreal ` A = S - {\\<infinity>,(-\\<infinity>)}\"\n- by (auto elim!: extreal_openE)\n- then have \"x : extreal ` A\" using `x : S` assms by auto\n- then have \"rx : A\" using rx_def by auto\n- then obtain r where \"0 < r\" and dist: \"!!y. 
dist y (real x) < r ==> y : A\"\n- using `open A` unfolding open_real_def rx_def by auto\n+ then have \"open (extreal -` S)\" unfolding open_extreal_def by auto\n+ with `x \\<in> S` obtain r where \"0 < r\" and dist: \"!!y. dist y rx < r ==> extreal y \\<in> S\"\n+ unfolding open_real_def rx_def by auto\nthen obtain n where\nupper: \"!!N. n <= N ==> u N < x + extreal r\" and\nlower: \"!!N. n <= N ==> x < u N + extreal r\" using assms(3)[of \"extreal r\"] by auto\n@@ -1881,13 +1820,11 @@\nfrom this obtain ra where ra_def: \"(u N) = extreal ra\" by (cases \"u N\") auto\nhence \"rx < ra + r\" and \"ra < rx + r\"\nusing rx_def assms `0 < r` lower[OF `n <= N`] upper[OF `n <= N`] by auto\n- hence \"dist (real (u N)) (real x) < r\"\n+ hence \"dist (real (u N)) rx < r\"\nusing rx_def ra_def\nby (auto simp: dist_real_def abs_diff_less_iff field_simps)\n- from dist[OF this]\n- have \"u N : extreal ` A\" using `u N ~: {\\<infinity>,(-\\<infinity>)}`\n+ from dist[OF this] show \"u N : S\" using `u N ~: {\\<infinity>,(-\\<infinity>)}`\nby (auto intro!: image_eqI[of _ _ \"real (u N)\"] simp: extreal_real)\n- thus \"u N : S\" using A_eq by simp\nqed\nqed\n\n@@ -2933,21 +2870,6 @@\nfrom this show ?thesis using continuous_imp_tendsto by auto\nqed\n\n-\n-lemma continuous_at_extreal:\n-fixes x0 :: real\n-shows \"continuous (at x0) extreal\"\n-proof-\n-{ fix T assume T_def: \"open T & extreal x0 : T\"\n- from this obtain S where S_def: \"open S & extreal ` S = T - {\\<infinity>, (-\\<infinity>)}\"\n- using extreal_openE[of T] by metis\n- moreover hence \"x0 : S\" using T_def by auto\n- moreover have \"ALL y:S. extreal y : T\" using S_def by auto\n- ultimately have \"EX S. x0 : S & open S & (ALL y:S. 
extreal y : T)\" by auto\n-} from this show ?thesis unfolding continuous_at_open by blast\n-qed\n-\n-\nlemma continuous_at_of_extreal:\nfixes x0 :: extreal\nassumes \"x0 ~: {\\<infinity>, (-\\<infinity>)}\"\n@@ -2995,9 +2917,6 @@\nusing continuous_at_iff_extreal assms by (auto simp add: continuous_on_eq_continuous_at)\n\n-lemma continuous_on_extreal: \"continuous_on UNIV extreal\"\n- using continuous_at_extreal continuous_on_eq_continuous_at by auto\n-\nlemma open_image_extreal: \"open(UNIV-{\\<infinity>,(-\\<infinity>)})\"\nby (metis range_extreal open_extreal open_UNIV)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6558791,"math_prob":0.91898483,"size":20123,"snap":"2021-21-2021-25","text_gpt3_token_len":6871,"char_repetition_ratio":0.23356031,"word_repetition_ratio":0.20636286,"special_character_ratio":0.38602594,"punctuation_ratio":0.15107296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993873,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-22T11:24:53Z\",\"WARC-Record-ID\":\"<urn:uuid:9ef88938-5d87-485f-b260-ce7743fcf2dc>\",\"Content-Length\":\"58185\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90311652-bef7-4cf1-b1c9-cbab82fbd652>\",\"WARC-Concurrent-To\":\"<urn:uuid:930fc348-9147-4029-9144-5c0df01ee6c9>\",\"WARC-IP-Address\":\"131.159.46.82\",\"WARC-Target-URI\":\"https://isabelle.in.tum.de/repos/isabelle/rev/d47eabd80e59?revcount=30\",\"WARC-Payload-Digest\":\"sha1:KPXZ24IF6BDSRSTPBATQHS5ZSVUFYKMF\",\"WARC-Block-Digest\":\"sha1:YPKKBTJVIPYXSCOKFMZW733IL5KFXZA4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488517048.78_warc_CC-MAIN-20210622093910-20210622123910-00615.warc.gz\"}"}
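Stated in conventional notation (a restatement for readability, not part of the changeset itself), the simplification recorded above replaces the image-based definition of openness on the extended reals with a preimage-based one:

```latex
% New definition introduced by the changeset: A is open iff its preimage
% under the embedding extreal : R -> extreal is open, together with the
% usual neighbourhood conditions at +oo and -oo.
\[
  \mathrm{open}\,A \;\longleftrightarrow\;
    \mathrm{open}\bigl(\mathrm{extreal}^{-1}(A)\bigr)
    \;\wedge\; \bigl(\infty \in A \rightarrow \exists x.\;
        \{\mathrm{extreal}\,x <..\} \subseteq A\bigr)
    \;\wedge\; \bigl(-\infty \in A \rightarrow \exists x.\;
        \{..< \mathrm{extreal}\,x\} \subseteq A\bigr)
\]
% The old definition instead demanded a witness: an open real set T with
% extreal ` T = A - {oo, -oo}. Eliminating that existential is what lets
% the proofs of intersection/union closure collapse as shown in the diff.
```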
https://link.springer.com/article/10.1007/s11047-014-9436-7
[ "# Topology driven modeling: the IS metaphor\n\n## Abstract\n\nIn order to define a new method for analyzing the immune system within the realm of Big Data, we draw on the metaphor provided by an extension of Parisi’s model, based on a mean field approach. The novelty is the multilinearity of the couplings in the configurational variables. This peculiarity allows us to compare the partition function $$Z$$ with a particular functor of topological field theory—the generating function of the Betti numbers of the state manifold of the system—which contains the same global information about the system configurations and about the data set representing them. The comparison between the Betti numbers of the model and the real Betti numbers obtained from the topological analysis of phenomenological data is expected to uncover hidden n-ary relations among idiotypes and anti-idiotypes. The topological analysis of the data will select global features, reducible neither to a mere subgraph nor to a metric or vector space. How the immune system reacts, how it evolves, how it responds to stimuli is the result of an interaction that took place among many entities constrained in specific configurations which are relational. Within this metaphor, the proposed method turns out to be a global topological application of the S[B] paradigm for modeling complex systems.\n\n## Introduction\n\nThe objective pursued in this note is to frame the research on the immune system as part of data science. Such research is naturally complex and multifaceted; our contribution here is to present it as a viable candidate for topological data analytics and as an example of the S[B] paradigm for modeling complex systems. 
We recall that data science is the practice of deriving valuable insights from data by tackling the issues related to the processing of very large data sets, while Big Data is jargon for such large collections of data (for example, exabytes) characterized by high dimensionality, redundancy, and noise. The analysis of Big Data requires handling high-dimensional vectors and weeding out the unimportant, redundant coordinates. The notion of data space, with its geometry and topology, is the most natural tool to handle the unprecedentedly large, high-dimensional, complex sets of data (Carlsson 2009; Edelsbrunner and Harer 2010); it is a basic ingredient of the new data-driven complexity science (TOPDRIM 2012; Merelli and Rasetti 2013).\n\nTopology, the branch of mathematics dealing with qualitative geometric information such as connectivity, classification of loops and higher dimensional manifolds, studies properties of geometric objects (shapes) in a way which is less sensitive to metrics than geometric methods: it ignores the value of the distance function and replaces it with the notion of connective nearness: proximity. All these features make topology ideal for analysing the space of data.\n\nStarting from the notion of a mean field proposed by Parisi in his simple model of the idiotypic network (Parisi 1990), we propose a more sophisticated version that is multilinear in the configurational variables (the antibody concentrations) instead of being constant or at most linear. Multi-linearity allows us to recognize in the partition function $$Z$$ of the model, which embodies all the statistical properties of the system at equilibrium, features similar to those of a particular functor of a topological field theory. 
The latter indeed contains the same global information about the topological properties (specifically the global invariants) of the system configuration space and can be identified with the generating function of the Betti numbers, namely the Poincaré polynomial of data space (Atiyah and Bott 1983). Once the homology of the space of data has been constructed, and its generating cycles have been defined, the two related sets of Betti numbers can be compared. In this way, self-consistent information is obtained regarding $$2{\\hbox {-}}ary$$, $$3{\\hbox {-}}ary,\\, \\dots n{\\hbox {-}}ary$$ relations among antibodies. Comparison between the Betti numbers of the model and the real Betti numbers, obtained by constructing the topology of the phenomenological immune system data space, will unveil the hidden relations between idiotypes and anti-idiotypes; in particular, those relations where components interact indistinctly and therefore cannot be reduced to a mere subgraph, but rather rest on a new concept of interaction, scale-free and metric-free. The analysis of Betti numbers of phenomenological data can be carried out with techniques based on persistent homology (Carlsson 2009; Petri et al. 2013).\n\nThe challenge we are facing is to unveil whether, in natural, multi-level complex systems, $$n$$-body interactions can drive the emergence of novel qualia. In physics, the interactions between material objects in real space are binary. This means that mutual forces and motions are produced by two-body interactions, the building blocks of any many-particle system. Thus, at the atomic- or molecular-level description of matter (living or not), the total force acting on any given particle is the result of the composition of binary interactions. However, how can we discover whether $$n$$-body interactions do exist? What we are proposing here is to use the IS metaphor, i.e. 
a complex system whose adaptivity is driven by data, as a global topological application of the S[B] paradigm. S[B] allows us to entangle, in a single model, the computational component with the coordination one. In particular, B accounts for the computation while S describes the global computation context (Merelli et al. 2013). The adaptation phase occurs when a machine can no longer compute in a given state of the system; the system then changes state, i.e. the global context of computation. In the IS metaphor, the computation context can be identified with the global invariants, while the computation is identified with the model of interactions, a sort of interactive machine. Each time we discover new global invariants, a new context of computation arises and with it a new IS model must be generated; we call this step the adaptation phase.\n\nIn the following, after giving a brief description of the antigen-free immune system and recalling Parisi’s mean field model, we formally define the new topological field model and, finally, discuss the S[B] paradigm. An appendix provides a general introduction to the fundamental tools of persistent homology and Betti numbers.\n\n## The antigen-free immune system\n\nCells and molecules of the immune system not only recognize foreign substances; they react to and regulate each other, so that the immune system can be seen as a network of interacting cells and antibodies. This perspective is known as the idiotypic or immune network theory (Jerne 1974). It refers to the immune system as a complex process that takes place at the cellular level to protect organisms from infectious agents (the antigens), which are antibody generators. In the scheme proposed by Jerne, it is the antigen that provokes an immune response, and each antibody is represented as a large Y-shaped protein. The immune system uses this protein to identify and neutralize foreign objects. 
The antibody can recognize and bind a specific part of the antigen; resorting to this binding mechanism, it can block the attack. Moreover, in Jerne’s network theory, antibodies are capable of being recognized by other antibodies; whenever this happens, the former is suppressed and its concentration is reduced, while the latter is stimulated and its concentration increases (see Fig. 1).\n\nThe mechanism whereby the production of a given antibody elicits or suppresses the production of other antibodies that, in turn, elicit or suppress the production of other antibodies, like a concatenation of events, hints at a strict analogy of the immune system function with memory in the brain. It recalls the way in which a firing neuron may induce or inhibit the firing of other neurons, and so forth. On the assumption that a functional network of antibodies is possible, several models have been constructed, among which is Parisi’s model. The latter studies the persistence of immune memory in the absence of any driving effect of external antigens and offers a robust, though simple, theoretical framework without providing a detailed description of the system (Parisi 1990).\n\nThe model we propose is a preliminary test of data field theory; it aims at a deeper understanding of the functional properties that the global and persistent topological properties of an antibody data space can imply. In particular, it aims at discovering the existence of $$n$$-ary relations among antibodies and determining how the ensuing configurations influence the immune system reaction to the presence of antigens. The extraction of global qualitative information from an antibody data space (e.g. concentrations) should lead to the discovery of those characteristics that are shared by a group of immunoglobulin receptor molecules. This means discovering not only a single idiotype, but the capacity of being active in the presence of $$n$$ others. 
We want to prove that topological data analysis, through persistent homology and its Betti numbers, allows us to determine the effective $$n$$-antibody configurations. Note that the models proposed in the literature to describe the relationship between structure and function in biological networks are all based on the concept that any relation can be reduced to a set of binary relations (Hart et al. 2009): we argue that this is not necessarily the case. We start thinking of models as relationships, i.e. facts in a logical space of forms: forms that can be directly classified by Betti numbers, extracted through the persistent homology of the space of data and used in the frame of a conceptual model able to bear on those topological features.

### Parisi mean field model for IS

The simplest and most efficient network of the immune system is represented by a model that can be easily formulated in the absence of antigens. Although it is well known that the number of specific lymphocytes plays a crucial role, the variables of the network model are limited to the antibody concentrations.

The mean-field idiotypic network model of the antigen-free immune system, proposed by G. Parisi and inspired by an earlier Hopfield model conceived to represent the brain and by many other similar models (Hopfield 1982; Hoffmann 1975, 2010; Farmer et al. 1986; Varela et al. 1988), describes essentially an iterated cascade of events, in which the production of a given antibody provokes, or possibly inhibits, the production of other antibodies, which in turn induce, or possibly impede, the production of other antibodies, and so on.

In the Parisi model, the concentration $$c_i(t)$$ of antibody $$i$$ is assumed to have, in the absence of external antigens, only two values, conventionally $$0$$ or $$1$$ (in the presence of antigen concentrations $$c_i$$ might become $$\\gg 1$$); $$t$$ is time.
The immune system state at time $$t$$ is determined by the values of all $$c_i$$’s for all possible antibodies $$(i = 1, \\dots , N)$$. The dynamical process is typically described in discretized time (the time step $$\\tau$$ being the time needed to implement the immune response). The dynamical variable $$h_i$$ (the mean field) represents the total stimulatory/inhibitory (depending on its sign) effect of the whole network on the $$i$$-th antibody. $$h_i$$ is positive when the excitatory effect of the other antibodies is greater than the suppressive effect, and then $$c_i$$ is one. Otherwise $$h_i$$ is negative and $$c_i$$ is zero. The mean field is typically expressed as

\\begin{aligned} \\displaystyle {h_i (t)= S + {{\\mathop {\\mathop {\\sum }\\limits _{k=1}}\\limits _{k \\ne i}^{N}}} J_{ik} c_k (t)}, \\,\\,\\, \\mathrm{{where}} \\,\\,\\, c_i (t) = \\Theta [h_i (t-\\tau )] \\end{aligned}
(1)

$$\\Theta (x)$$ denotes the Heaviside function, which is zero for negative $$x$$ and 1 for positive $$x$$, while $$J_{ik}$$ ($$J_{ii} = 0, J_{ki} = J_{ik}$$) represents the influence of antibody $$k$$ on antibody $$i$$. If $$J_{ik}$$ is positive, antibody $$k$$ triggers the production of antibody $$i$$, whereas if $$J_{ik}$$ is negative, antibody $$k$$ suppresses the production of antibody $$i$$. $$\\left| J_{ik} \\right|$$ is a measure of the control efficiency that antibody $$k$$ exercises on antibody $$i$$. The $$J_{ik}$$ are distributed in the interval $$[-1, +1]$$. $$S$$ is the threshold parameter; it regulates the dynamics when the couplings $$J_{ik}$$ are all very small; otherwise $$S$$ is equal to zero. At equilibrium, when the concentrations of antibodies are time independent, Eq. (1) simplifies to

\\begin{aligned} \\displaystyle {h_i=S+ {{\\mathop {\\mathop {\\sum }\\limits _{k=1}}\\limits _{k \\ne i}^{N}}} J_{ik} c_k}\\, , \\, c_i= \\Theta (h_i )\\in \\{ 0 , 1 \\} \\; .
\\end{aligned}
(2)

This idiotypic network model has the advantage of being simple and easy to analyze. The phenomenon of dependence of the immunity/tolerance pathway on the amount of antigens suggests that the concentration of any given antibody is crucial to determine the effects on the other antibodies. The assumption of two levels of concentration (0 or 1) bypasses the problem of the choice of a pathway.

However, this model is too elementary for testing the perspectives of a data field theory. We need to increase its complexity in order to reach a description of the system sufficiently detailed to catch the global features of its data space. We generalize the mean field in such a way that it crucially depends on those topological features of the space of antibody concentrations that will be reflected in the topological properties of the system space of data. We construct a model sensitive to global features, designed to benefit from the advantage of lending itself to a kind of reverse engineering of the process of field construction. In the model, the antibodies with positive $$c_i \\, (=1)$$ are actually produced by the system while those absent $$(c_i=0)$$ are suppressed. Suppression due to clonal abortion is neglected.

## The topological field model for antigen-free immune system

In this section, we generalize the way in which Parisi’s linear model represents immunological memory by a linear mean field. The antibodies of the idiotypic cascade are denoted by $$Ab_i$$; during the production of $$Ab_1$$, ignited directly by the antigen, the environment of lymphocytes is modified by $$Ab_2$$: the life-span of the $$Ab_1$$-producing cells and the population of helper cells specific for $$Ab_1$$ increase.
The symmetry of the couplings $$(J_{ik}=J_{ki})$$ implies that $$Ab_3$$ should be rather similar to $$Ab_1$$; the internal image of $$Ab_2$$ should persist after it has disappeared, and its presence induces the survival of memory cells directed against the antigen. The process continues by iteration. In the extended model, we assume that the production of $$Ab_i$$ is conditioned, to different extents, also by the simultaneous presence of a subset of $$2, 3, \\dots , N$$ antibodies.

A weakness of this representation is that the possible equilibrium configurations of the network are fixed, whereas we want the network to be capable of learning which antibodies should be produced, without assuming that only a fraction of all antibodies have physiological relevance. Therefore, whilst we maintain the global cost function

\\begin{aligned} \\displaystyle { E= \\sum _{i=1}^N h_i c_i }, \\, \\,\\, \\; c_i= \\Theta (h_i )\\in \\{0,1\\} \\; , \\end{aligned}
(3)

we consider, in the space of antibodies $$\\mathcal {A}$$ whose points are labelled by $$i=1, \\dots ,N$$, the graph $$\\mathcal {G}$$ generated by the $$J_{ik} \\ne 0$$ (for simplicity we assume here that $$J_{ik}\\in [ -1, +1 ]$$ when $$J_{ik} \\ne 0$$). We next extend $$\\mathcal {G}$$ to the simplicial complex $$\\mathcal {C}$$, obtained from $$\\mathcal {G}$$ by completion, i.e. the simplicial complex which has $$\\mathcal {G}$$ as 1-skeleton (scaffold), see Fig. 5. Each $$n$$-cycle in $$\\mathcal {C}$$ cannot be seen as a composition of two-body interactions, but represents a true $$n$$-body interaction; in other words, any relationship expressed in the cycle is unique in its configuration.
We denote by $$C^{(n)}([\\ell _1, \\dots , \\ell _{(n+1)} ])$$ the cycles of $$\\mathcal {C}$$, and by $$\\delta _{k,i}$$ the presence or absence of $$i$$ in the cycle ($$\\delta _{k,i}=1$$ if $$k=i$$, $$\\delta _{k,i}=0$$ if $$k\\ne i$$), and we then generalize the standard linear form for the mean field $$h_i$$ to the form:

\\begin{aligned} h_i=S+ \\sum _{k = 1}^N \\, {\\mathop {\\mathop {\\sum }\\limits _{C^{(n)} ([ \\ell _1, \\dots \\ell _{(n+1)} ])}}\\limits _{1 \\le n \\le N - 1}} J_{\\ell _1 \\dots \\ell _k \\dots \\ell _{n+1}} \\prod _{j=1}^n c_{\\ell _j} \\, \\delta _{k, i} \\; \\end{aligned}
(4)

In the partition function

\\begin{aligned} \\displaystyle {Z(x) \\doteq \\sum _{\\left\\{ c_\\ell \\right\\} } e^{-x E\\left( \\left\\{ c_\\ell \\right\\} \\right) }} \\; \\; x \\in \\mathbb {R}, \\end{aligned}
(5)

the sum runs over the set of all possible valuations $$c_\\ell = 0 , 1 \\; , \\; \\forall \\ell$$; it subdivides the set of states into equivalence classes, giving different statistical weights—depending on a parameter $$x \\in {\\mathbb {R}}\\; , \\; x >0$$—to those states which are invariant with respect to a given set of transformations. A phase transition, if any, would allow us to pass from one equivalence class to the other when the state symmetry is (partially or fully) broken. This turns the model into a theoretical framework where, given a parameter—for example the average specific antibody concentration—we can predict when and if a configuration may break into another, giving rise to a different immunity type, i.e. change the adaptive immunity.
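For intuition, the quantities of Eqs. (3) and (5) can be evaluated by brute force on a toy antigen-free network. The Python sketch below is ours, not part of the original model: the network size, the random symmetric couplings and the choice of threshold are hypothetical, and only the linear (Parisi) part of the mean field is used.

```python
import itertools
import math
import random

def mean_fields(c, J, S=0.0):
    # Linear mean field of Eq. (2): h_i = S + sum_{k != i} J_ik c_k
    N = len(c)
    return [S + sum(J[i][k] * c[k] for k in range(N) if k != i)
            for i in range(N)]

def energy(c, J, S=0.0):
    # Global cost function of Eq. (3): E = sum_i h_i c_i
    h = mean_fields(c, J, S)
    return sum(h[i] * c[i] for i in range(len(c)))

def partition_function(J, x, S=0.0):
    # Eq. (5): Z(x) = sum over all valuations c_l in {0, 1} of exp(-x E)
    N = len(J)
    return sum(math.exp(-x * energy(list(c), J, S))
               for c in itertools.product((0, 1), repeat=N))

# Toy network (hypothetical): N = 6 antibodies, symmetric couplings
# in [-1, +1], J_ii = 0, threshold S = 0.
random.seed(0)
N = 6
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for k in range(i + 1, N):
        J[i][k] = J[k][i] = random.uniform(-1.0, 1.0)

print(partition_function(J, x=0.0))  # 2^6 = 64: every valuation weighs 1
print(partition_function(J, x=1.0))
```

At $$x=0$$ all valuations contribute equally; increasing $$x$$ concentrates the statistical weight on low-energy configurations, which is where a symmetry-breaking transition would show up in a larger network.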
In terms of formal language theory, going from one configuration to another belonging to a different equivalence class has the following meaning: if we associate to the space of data a group of possible transformations preserving its topology (e.g., its mapping class group), and the related regular language, the general semantics thus naturally generated describes the set of all transformations and hence of all ‘phases’ in the form of relations.

We consider then the functor partition function, $$Z(x)$$. We might of course access more information (patterns) by considering higher ($$k$$-th) order correlation functions,

\\begin{aligned} \\Gamma _k (x) \\doteq \\frac{1}{Z(x)} \\sum _{\\left\\{ c_\\ell \\right\\} } c_{\\ell _1} \\dots c_{\\ell _k} e^{-x E\\left( \\left\\{ c_\\ell \\right\\} \\right) } \\; , \\end{aligned}
(6)

for any given set of points $${\\ell _1 \\dots \\ell _k } \\in \\mathcal {A}$$. We can represent the set of $$\\{c_\\ell \\}$$, i.e. the $$2^{N-1}$$ possible configurations, as strings of $$N$$ dichotomic variables.

A crucial assumption we add to the model is that the coupling constants $$J_{\\ell _1 \\dots \\ell _k \\dots \\ell _{n+1}}$$ are taken to be proportional to a linear combination (with negative coefficients) of the $$n$$-volume $$V^{(n)}$$ of the simplex corresponding to the cell defined by the set $$\\bigl \\{ \\ell _1 , \\dots , \\ell _n \\bigr \\}$$ among the cells of cycle $$C^{(n)} ([\\ell _1, \\dots , \\ell _{(n+1)} ])$$, and of the volume of the cell boundary of dimension $$n-2$$, weighted by the curvature at that boundary. The latter measures the ease with which the $$n$$-body interaction is favored by the manifold bending.
The ensuing action is expected to measure reasonably well the probability that the $$n$$-body process described by that coupling takes place.\n\nWhen the model with such interaction form is dealt with as a statistical field theory it turns out to be fully isomorphic with a Euclidean topological field theory describing a totally different physical system: gravity coupled with matter in a simplicial complex setting, consistent with general relativity. We think back to the standard example of the Ising model, which also has variables in $${\\mathbb {Z}}_2$$ (Parisi 1998) and recall that a statistical field theory is any model in statistical mechanics where the degrees of freedom comprise a field; i.e. the microstates of the system are expressed through field configurations. The features of the ensuing theory are quite general and far reaching. The topology of the associated moduli space depends only on the manifold genus $$g$$, on the dimension $$n$$ of the (vector) bundle over it used to define the field, and on the dimension $$\\delta ({\\mathrm{mod}} \\, n)$$ of the associated determinant bundle. Such space is a projective variety, smooth only if $$(\\delta ,n) = 1$$. The recursive determination of the Betti numbers in this case is given by the Harder and Narasimhan and Atiyah and Bott recursions (Harder and Narasimhan 1975; Atiyah and Bott 1983). The former explicitly counts points of the moduli space, the latter resorts to an infinite-dimensional Morse theory with the field action functional as Morse function. These recursions lead to a closed formula for the Poincaré polynomial, i.e. for the Betti numbers of the moduli space. 
These implicit methods were successively made explicit (Desale and Ramanan 1975).\n\nWhat is intriguing is that our field theory turns out to be isomorphic to $${\\mathbb {Z}}_2$$ (quantum) gravity, dealt with in nonperturbative fashion by standard Regge calculus (Regge 1961).\n\nLet us recall here that the construction of a consistent theory of quantum gravity in the continuum is a problem in theoretical physics that has so far defied all attempts of a rigorous formulation and resolution. The only effective approach to try and obtain a non-trivial quantum theory proceeded via discretization of space-time and of the Einstein action, i.e., by replacing the space-time continuum by a combinatorial simplicial complex and deriving the action from simple physical principles.\n\nQuantum Regge calculus, based on the well-explored classical discretization of the Einstein action due to Regge, and the essentially equivalent method of dynamical triangulations are the tools that proved most successful. Regge’s method consists in approximating Einstein’s continuum theory by a simplicial discretization of the space-time (in gravity a four-dimensional Lorentz manifold) resorting to local building blocks (simplices) and then constructing the gravitational action as the sum of a term depending on the (hyper)volumes of the different simplicial complexes and another reflecting the space-time curvature. The metric tensor associated with each simplex is expressed as a function of the squared edge lengths, which are the dynamical variables of this model. 
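The dependence of a Regge-type action on the squared edge lengths can be made concrete: the volume of any simplex is recoverable from the squared edge lengths alone via the Cayley–Menger determinant. The following sketch is our own illustration, not part of the original derivation; exact rational arithmetic is used only for clarity.

```python
from fractions import Fraction

def det(m):
    # Laplace-expansion determinant (adequate for the small matrices here)
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def simplex_volume_sq(q):
    # q[i][j] = squared length of the edge between vertices i and j of an
    # n-simplex with n+1 vertices.  Cayley-Menger formula:
    #   V^2 = (-1)^(n+1) / (2^n (n!)^2) * det(B),
    # where B borders the matrix q with a row/column of ones.
    n = len(q) - 1
    B = [[0] + [1] * (n + 1)] + \
        [[1] + [q[i][j] for j in range(n + 1)] for i in range(n + 1)]
    fact = 1
    for k in range(2, n + 1):
        fact *= k
    return Fraction((-1) ** (n + 1) * det(B), 2 ** n * fact ** 2)

# 3-4-5 right triangle: squared edges 9, 16, 25 -> area 6
print(simplex_volume_sq([[0, 9, 16], [9, 0, 25], [16, 25, 0]]))  # 36
```

This is exactly the sense in which the squared link lengths $$q_\\ell$$ can serve as the only dynamical variables: every volume term in the action is a function of them.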
Summing over all interpolating geometries (state sum) generated by the simplicial complex construction in the embedding higher-dimensional ones (filtration) allows us to derive both the Einstein action and the equilibrium configurations simply by means of a counting procedure (entropy estimate).

The $${\\mathbb {Z}}_2$$ version of the model is one in which the representations of $$SU(2)$$ labeling the edges in quantum Regge calculus are reduced to $${\\mathbb {Z}}_2$$. The power of the method resides in the property that the infinite degrees of freedom of Riemannian manifolds are reduced by discretization, so that the theory can deal with PL spaces, described by a finite number of parameters. Moreover, for the manifolds approximated by a simplicial complex (or by dynamically triangulated random surfaces), the local coordination numbers are automatically included among the dynamical variables, leaving the quadratic link lengths $$q_\\ell$$, globally constrained by triangle inequalities, as the true degrees of freedom.

More precisely, the model adopted here for the immune system is isomorphic to the $${\\mathbb {Z}}_2$$ Regge model, in which the quadratic link lengths $$q_\\ell$$ of the simplicial complexes are restricted to take on only two values: $$q_\\ell = 1 + {\\mathfrak {l}} \\sigma _\\ell$$, where $$\\sigma _\\ell = \\pm 1 = 2 c_{\\ell } - 1$$. Such a model has been exactly solved (in the case of quantum gravity) via the matrix model approach (Ambjørn et al. 1985) and with the help of conformal field theory (Knizhnik et al. 1988). A crucial ingredient is the choice of the functional integration measure, whose behavior with respect to diffeomorphisms is fundamental.
The very definition of diffeomorphism is a heavy constraint in constructing the PL space exactly invariant under the action of the full diffeomorphism group (Menotti 1998), and only the recent construction of a simplicial version of the mapping class group made it viable (Merelli and Rasetti 2013).\n\nAs Regge regularization leads to the usual Liouville field theory in the continuum limit based on a description of PL manifolds with deficit angles, not edge lengths, we may assume that also in our case the correct measure has to be nonlocal. Starting point for the $${\\mathbb {Z}}_2$$ Regge model is a discrete description of general relativity in which space-time is represented by a piecewise flat, simplicial manifold (Regge skeleton). The procedure works for any space-time dimension $$d$$, metrics of arbitrary signature, and action\n\n\\begin{aligned} A ( \\mathbf{{q}} ) = x \\left( \\sum _{s^d} V^{(d)} \\left( s^d \\right) - \\zeta \\sum _{s^{d-2}} {\\mathfrak {d}} ( s^{d-2} ) \\, V^{(d-2)} \\left( s^{d-2} \\right) \\right) \\; \\end{aligned}\n(7)\n\nwith the quadratic edge lengths $$\\left\\{ q_\\ell \\right\\}$$ (more precisely, the $$\\sigma _{\\ell }$$’s) describing the dynamics of the complex. $$x$$ and $$\\zeta$$ denote free constants (in the discrete time picture, with uniform time step $$\\tau$$, energy functional and action are merely proportional). The first sum runs over all $$d$$-simplices $$s^d$$ of the simplicial complex, while $$V ( s^d )$$ is the $$d$$-volume of $$s^d$$. The second term represents the curvature of the simplicial complex, concentrated along the $$(d - 2)$$-simplices, leading to deficit angles $${\\mathfrak {d}} ( s^{d-2} )$$. The physical meaning of the terms entering action $$A$$ is what makes it acceptable for a consistent description of the immune system with higher order (‘many body’) interactions: the lower the volumes and the higher the curvature, the lower is the action $$(x, \\zeta > 0)$$.\n\nAt equilibrium, i.e. 
in the absence of an explicit time-dependence of the expectation values of the variables, the partition function for our antigen-free IS model is nothing but the field propagator of the theory, expressed via the path integral

\\begin{aligned} \\displaystyle {Z = \\int \\mathcal{{D}} \\, [\\mathbf{{q}}] \\, \\mathrm{{e}}^{- A ( \\mathbf{{q}} )}} \\end{aligned}
(8)

Functional integration should extend over all metrics on all possible topologies; hence the path-integral approach typically suffers from a nonuniqueness of the integration measure, and the need for a nonlocal measure is advocated. The standard ‘simplicial’ measure

\\begin{aligned} \\displaystyle { \\int \\mathcal{{D}} \\, [\\mathbf{{q}}] = \\prod _\\ell \\, \\int \\frac{\\mathrm{{d}} q_\\ell }{q_\\ell ^\\alpha } \\, \\mathcal{{F}} ( \\mathbf{{q}} )}, \\,\\,\\, \\mathrm{{where}} \\,\\,\\, \\alpha \\in {\\mathbb {R}} \\end{aligned}
(9)

allows exploring a family of measures, as $$\\mathcal{{F}} ( \\mathbf{{q}} )$$ can be designed to constrain integration to those configurations which do not violate the triangle inequalities and, moreover, can be chosen so as to remove unrealistic simplices. The characteristic partition function of the model then becomes

\\begin{aligned} Z = \\left[ \\prod _\\ell ^\\mathcal{{N}} \\int\\limits _0^\\infty {\\mathrm{d}} q_\\ell \\, q_\\ell ^{- \\alpha } \\right] \\, \\mathcal{{F}} ( \\mathbf{{q}} ) \\, \\mathrm{{e}}^{- \\sum _s A_s ( \\mathbf{{q}} )} \\; , \\end{aligned}

where $$\\mathcal{{N}}$$ is the number of links and $$A_s$$ is the contribution to the action of simplex $$s$$.

It is worth recalling that in (Desale and Ramanan 1975) arithmetic techniques and the Weil conjecture were used, and a crucial ingredient was the property that the volume of a particular locally symmetric space attached to $$SL_n$$ with respect to the canonical measure—an invariant known as the Tamagawa number of $$SL_n$$—equals 1.
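In the $${\\mathbb {Z}}_2$$ case the functional integral over the $$q_\\ell$$ collapses to a finite state sum over the link variables $$\\sigma _\\ell = \\pm 1$$. The sketch below is a deliberately minimal illustration of ours: a single triangle, an action reduced to the area term alone (the curvature term of Eq. (7) is dropped), and a measure $$\\mathcal{{F}}$$ that simply discards configurations violating the triangle inequalities.

```python
import itertools
import math

def heron_area_sq(q1, q2, q3):
    # Squared area of a triangle from its SQUARED edge lengths
    # (Heron's formula in Cayley-Menger form):
    #   16 A^2 = 2(q1 q2 + q2 q3 + q3 q1) - q1^2 - q2^2 - q3^2
    return (2 * (q1 * q2 + q2 * q3 + q3 * q1)
            - q1 * q1 - q2 * q2 - q3 * q3) / 16.0

def z2_regge_Z(x, frak_l):
    # State sum over sigma_l = +/-1 on the 3 links of one triangle,
    # with q_l = 1 + frak_l * sigma_l.  The 'measure' F(q) discards
    # configurations whose squared area is not positive (degenerate
    # or forbidden triangles).
    Z = 0.0
    for sigmas in itertools.product((-1, 1), repeat=3):
        q = [1.0 + frak_l * s for s in sigmas]
        A2 = heron_area_sq(*q)
        if A2 > 0.0:                      # F(q): triangle inequality
            Z += math.exp(-x * math.sqrt(A2))
    return Z

print(z2_regge_Z(0.0, 0.5))  # 8.0: all 8 sign assignments are admissible
print(z2_regge_Z(0.0, 0.9))  # 5.0: 3 configurations are cut by F(q)
```

Even in this toy setting one sees the role of $$\\mathcal{{F}}$$: as $${\\mathfrak {l}}$$ grows, part of the configuration space is removed because the corresponding simplices cease to exist geometrically.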
The simplicial volume is a homotopy invariant of oriented, closed, connected manifolds defined in terms of the singular chain complex with real coefficients. Such an invariant measures the efficiency of representing the fundamental class of the space using singular simplices. Since the fundamental class is nothing but a generalized triangulation of the manifold, the simplicial volume can be interpreted both as a measure of the complexity of the manifold and as a homotopy-invariant approximation of the Riemannian volume. $$Z(x)$$ then provides the generating function (Poincaré polynomial) of the Betti numbers of $$\\mathcal {A}$$.

The final step is to compare the Betti numbers obtained empirically from the data against such a generating function, thus determining [simply through the solution of a system of (non-linear) algebraic equations] the set of non-zero $$J_{\\ell _1 \\dots \\ell _k \\dots \\ell _{n+1}}$$. This fully determines which antibody influences which, including ‘many-body’ influences, i.e. when and if it may happen that a given set of (two or more) antibodies plays a role only when simultaneously active.

A short discussion of Regge calculus, meant to introduce, in a simple way accessible also to readers not familiar with the notion of geometry over discrete spaces (simplicial complexes), some of the notions actually used in the derivation, can be found in Battaglia and Rasetti (2003), where some of the preliminary ideas of the scheme are described, subsequently developed in an extended way for the present and other applications. As for the work in $$\\mathbb {Z}_2$$ quantum gravity to which our generalized model of the immune system is isomorphic, a more articulated and complete set of references is available in Giulini (2007) and Bittner et al.
(1999).

## A global topological application of the S[B] paradigm

In this section we introduce the $$S[B]$$ paradigm for modeling complex adaptive systems and we discuss the IS metaphor as a global topological application of the adaptation phase; the aim is to contribute to understanding the adaptability feature that, as addressed in the paper of Stepney et al. (2005), still remains ‘poorly understood’.

In the $$S[B]$$ paradigm a complex system consists of two components: the computation level from which its behavior $$B$$ emerges, the interactive machine, and the context of the computation, its global structure $$S$$. The two levels are distinct but entangled in a unique computational model that evolves by learning and adapting. The computational model associated to $$S[B]$$ plays a crucial role in the characterization of the adaptation phase; it can be represented by any mathematical model of computation, provided that it allows one to express the dependency between different levels of abstraction.

Figure 2 shows a simple adaptive system represented by finite state machines, the most general among models such as complex automata, higher dimensional automata, hypernetworks, recurrent neural networks, multiagent systems, etc. On the left hand side, the two components are entangled in such a way that the emergent behaviour $$B$$ is subject to the global constraints while the global structure $$S$$ is affected by the emergent behavior. On the right, an $$S[B]$$ system is depicted as a light oval $$S$$ that embeds a dark round $$B$$, showing the adaptation phase that takes place whenever the computation can no longer evolve in the current context (the $$S[B]$$ on the lower right corner).
The adaptation phase allows $$S$$ to relax the set of constraints so as to permit further computations: in the figure the black arrow drawn between the two $$S$$ components represents the change of the global context, and the dashed arrow between the dark rounds represents the unfolding of the computation. The evolution of such a model relies on the ability of the system to adapt its computation to global requirements.

A full yet concise description of the formal definition of $$S[B]$$ on a finite state machine that encapsulates both the computation $$(B)$$ and its controller $$(S)$$ follows. In this framework, both $$B$$ and $$S$$ are classically described as finite state machines of the form $$B=(Q, q_0, \\rightarrow _B)$$ ($$Q$$ set of $$B$$ states, $$q_0$$ initial $$B$$ state and $$\\rightarrow _B$$ transition relation) and $$S = (R, r_0, \\mathcal {O}, \\rightarrow _S, L)$$, where $$R$$ is a set of $$S$$ states, $$r_0$$ is the initial $$S$$ state, $$\\mathcal {O}$$ is an observation function of $$B$$ states, $$\\rightarrow _S$$ is a transition relation and $$L$$ is a state labeling function. The function $$L$$ labels each $$S$$ state with a formula representing a set of constraints over an observation of the $$B$$ states. Therefore, an $$S$$ state $$r$$ can be directly mapped to the set of $$B$$ states satisfying $$L(r)$$. Through this hierarchy, $$S$$ can be viewed as a second-order structure $$(R \\subseteq 2^Q, r_0,\\rightarrow _S \\subseteq 2^Q \\times 2^Q, L)$$ where each $$S$$ state $$r$$ is identified with its corresponding set of $$B$$ states. An $$S[B]$$ system is the combination of an interactive machine $$B=(Q, q_0, \\rightarrow _B)$$ and a coordinator $$S=(R, r_0,\\mathcal {O}, \\rightarrow _S, L)$$ such that for all $$q \\in Q$$, $$\\mathcal {O}(q) \\ne \\perp$$. In any $$S[B]$$ system the initial $$B$$ state must satisfy the constraints of the initial $$S$$ state, i.e.
$$q_0 \\models L(r_0)$$.

During the adaptation phase the $$B$$ machine is no longer regulated by the $$S$$ controller, except for a condition, called the transition invariant, that must be fulfilled by the system undergoing adaptation. The complete and formal definition of the $$S[B]$$ model based on finite state machines, its semantics and the adaptability checking can be found in Merelli et al. (2013).

It is quite evident that the model described above can be applied when the system requirements are known a priori, so that the adaptation phase reduces to a dynamic selection of possible states with respect to environmental changes. To overcome this limit and allow the definition of a model that can change the set of global constraints, and consequently the set of computations, at run-time, we adopt the IS metaphor to characterize the adaptation phase of an $$S[B]$$ model. The global context is defined as a function of the topological invariants extracted from the analysis of the space of data: the Betti numbers. In the model proposed in the previous section the Betti numbers and the $$J_{\\{\\ell \\}}$$ interaction matrix faithfully represent the relations hidden in the current space of data. Thus, the adaptation phase of an $$S[B]$$ system is indeed represented as the interplay of the capabilities of the immune system to identify, classify and learn the new relationships emerging among the actors of the system. Figure 3 graphically mimics the adaptability checking performed by an $$S[B]$$ system; it starts on the upper left corner of the figure with the actual model $$S[B]$$ that, when necessary, may be adapted to a new context provided by the topological analysis of the space of data (the set of observations of the real system). The change in the context is determined by comparing the Betti numbers of the space of data with the Betti numbers of the actual model.
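A minimal executable sketch of this adaptability check follows; all state names, constraint formulas and Betti-number values are purely illustrative assumptions of ours, while the formal machinery is given in Merelli et al. (2013).

```python
# B: interactive machine; S: coordinator whose states carry constraints
# L(r) over observations of B states.  Names are illustrative only.

def make_SB():
    B = {
        "states": {"q0", "q1", "q2"},
        "init": "q0",
        "trans": {("q0", "q1"), ("q1", "q2")},
    }
    # Each S state r is labeled (via L) by a predicate over B states;
    # r is identified with the set of B states satisfying L(r).
    S = {
        "states": {"r0", "r1"},
        "init": "r0",
        "L": {"r0": lambda q: q in {"q0", "q1"},   # constraint of r0
              "r1": lambda q: q in {"q1", "q2"}},  # constraint of r1
        "trans": {("r0", "r1")},
    }
    # Well-formedness: the initial B state must satisfy L(r0)
    assert S["L"][S["init"]](B["init"])
    return S, B

def adapt(model_betti, data_betti):
    # Adaptation trigger: compare the Betti numbers of the current model
    # with those extracted from the space of data.
    if model_betti != data_betti:
        return "adapt: new context S'[B']"
    return "no new knowledge: keep S[B]"

S, B = make_SB()
print(adapt((1, 0, 0), (1, 0, 0)))  # no new knowledge: keep S[B]
print(adapt((1, 0, 0), (1, 2, 0)))  # adapt: new context S'[B']
```

The second call mimics the discovery of two 1-dimensional holes in the data space: the mismatch of Betti numbers is what forces the system into a new context.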
If there is no new knowledge, the model remains $$S[B]$$; otherwise it adapts to the new context by learning the knowledge provided by the Betti numbers, updating its computation with the new set of relations $$J_{\\{\\ell \\}}$$ and becoming $$S'[B']$$. This learning process reminds us of what in the literature is called a recurrent neural network, a process based on active exploration of an unknown environment and the generation of a finite state automata model of the environment.

Summarizing, inspired by the IS metaphor we present a computational model as a higher order relational model which deals with multilinear $$n$$-body interactions, the interactions characteristic of the immune response. In such a case, the model adapts when it no longer fits the space of observed data, and the construction of the topological field model allows us to determine the values of the $$J_{{\\{\\ell \\}}}$$ matrix, hence, e.g., the classes of antibodies that are in relation in the current immune response. We call this step a recursive construction of a relational model that learns new antibody relations as an immune response to the presence of an antigen.

As future work, we aim to apply the proposed approach to real-world IS phenomena treated in both in silico and in vivo experiments and to compare the results with other similar models.

## Concluding remarks

We have defined a new topology-based method suitable to provide a benchmarking application of the S[B] paradigm. The method relies on a multi-linear model of the immune system inspired by the topology of the space of data. Starting from the notion of an Ising model in a mean field, given by Parisi and others in their seminal work, we proposed a more sophisticated version that is multilinear in the configurational variables (the antibody concentrations) instead of constant or at most linear.
This work is not intended to be a study of the dynamics of the immune network in view of establishing the equilibrium among antibodies; rather, its interest is prospective and its strategic aim is to define a new approach for the analysis of the immune system as a metaphor of a real-life system represented in terms of Big Data.

## References

• Ambjørn J, Durhuus B, Fröhlich J (1985) Diseases of triangulated random surface models, and possible cures. Nucl Phys B257:433–449

• Atiyah MF, Bott R (1983) The Yang–Mills equations over Riemann surfaces. Philos Trans R Soc Lond Ser A 308(1505):523–615

• Battaglia D, Rasetti M (2003) Quantumlike diffusion over discrete sets. Phys Lett A 313:8–15

• Bittner E, Hauke A, Markum H, Riedler J, Holm C, Janke W (1999) $$Z_2$$-Regge versus standard Regge calculus in two dimensions. Phys Rev D 59(124018):1–9

• Carlsson G (2009) Topology of data. Bull New Ser AMS 46(2):255–308

• Desale UV, Ramanan S (1975) Poincaré polynomials of the variety of stable bundles. Math Ann 216(3):233–244

• Edelsbrunner H, Harer J (2010) Computational topology, an introduction. American Mathematical Society, Providence

• Farmer J, Packard N, Perelson A (1986) The immune system, adaptation, and machine learning. Physica D22:187–204

• Giulini D (2007) Mapping class groups of 3-manifolds. In: Fauser B, Tolksdorf J, Zeidler E (eds) Quantum gravity: mathematical models and experimental bounds. Birkhäuser Verlag, Basel, pp 161–201

• Harder G, Narasimhan MS (1974/75) On the cohomology groups of moduli spaces of vector bundles on curves. Math Ann 212:215–248

• Hart E, Bersini H, Santos F (2009) Structure versus function: a topological perspective on immune networks. Natural Computing 9138(8):603–624

• Hoffmann GW (1975) A network theory of the immune system. Eur J Immunol 5:638–647

• Hoffmann GW (2010) An improved version of the symmetrical immune network theory.
www.arXiv.org/abs/1004.5107\n\n• Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79(8):2554–2558\n\n• Jerne N (1974) Towards a network theory of the immune system. Ann Immunol Inst Pasteur 125C:373–389\n\n• Knizhnik V, Polyakov A, Zamolodchikov A (1988) Fractal structure of 2d-quantum gravity. Mod Phys Lett A 3(8):819–826\n\n• Menotti P (1998) Group theoretical derivation of Liouville action for Regge surfaces. Nucl Phys B 523(3):611–619\n\n• Merelli E, Paoletti N, Tesei L (2013) Adaptability checking in multi-level complex systems. http://arxiv.org/abs/1404.0698\n\n• Merelli E, Rasetti M (2013) Non-locality, topology, formal languages: new global tools to handle large data sets. Procedia Comput Sci 18:90–99\n\n• Parisi G (1998) Statistical field theory. Westview Press, Boulder\n\n• Parisi G (1990) A simple model for the immune network. Proc Natl Acad Sci USA 87:429–433\n\n• Petri G, Scolamiero M, Donato I, Vaccarino F (2013) Topological strata of weighted complex networks. PLoS One 8(6):e66506\n\n• Regge T (1961) General relativity without coordinates. Nuovo Cimento 19:558–571\n\n• Stepney S, Smith RE, Timmis J, Tyrrell AM, Neal MJ, Hone ANW (2005) Conceptual frameworks for artificial immune systems. Int J Unconv Comput 1(3):315–338\n\n• TOPDRIM. Topology driven methods of complex systems project, Future Emerging Technologies (FET) programme within Seventh Framework Programme (FP7), www.topdrim.eu\n\n• Varela F, Coutinho A, Dupire B, Vaz N (1988) Cognitive networks: immune, neural and otherwise. In: Perelson A (ed) Theoretical immunology: part two, SFI studies in science of complexity, vol 2. Addison Wesley, Reading, pp 359–371\n\n## Acknowledgments\n\nWe thank the organizers of the ICARIS Workshop for offering the opportunity to present new ideas related to the TOPDRIM project in such a stimulating and pleasant environment. 
We acknowledge the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme (FP7) for Research of the European Commission, under the FET-Proactive grant agreement TOPDRIM, number FP7-ICT-318121.\n\n## Author information\n\n### Corresponding author\n\nCorrespondence to Emanuela Merelli.\n\n## Appendix\n\n### Persistent homology and Betti numbers\n\nIn this appendix we describe a general approach that allows one to extract global topological information from a space of data. It is based on three basic steps: (i) The interpretation of the huge collection of $$points$$ that constitutes the space of data; this is achieved by resorting to a family of simplicial complexes (Fig. 4), parametrized by some suitably chosen ‘proximity parameter’ (Fig. 5). This operation converts the data set into a global topological object. In order to fully exploit the advantages of topology, the choice of such a parameter should be metric independent. In our context it measures the expression of a possible $$relation$$. (ii) The reduction of the noise affecting the data space, obtained as a result of parametrized persistent homology. (iii) The encoding of the data set's persistent homology in the form of a parameterized version of topological invariants, in particular Betti numbers, i.e. the invariant dimensions of the homology groups. These three steps provide an exhaustive knowledge of the global features of the space of data, even though such a space is neither a metric space nor a vector space, as other approaches require (Carlsson 2009).\n\nIn order to better comprehend the scheme, it is necessary to recall that homology is a mathematical tool that ‘measures’ the shape of an object (typically a manifold). The result of this measure is an algebraic object, a succession of groups. Informally, these groups encode the number and the type of ‘holes’ in the manifold. 
A basic set of invariants of a topological space $$X$$ is just its collection of homology groups, $$H_i(X)$$. Computing such groups is certainly non-trivial, even though efficient algorithmic techniques are known to do it systematically. Important ingredients of these techniques, and outcomes of the computation as well, are the Betti numbers; the $$i$$-th Betti number, $$b_i=b_i(X)$$, denotes the rank of $$H_i(X)$$. It is worth remarking that Betti numbers often have an intuitive meaning: for example, $$b_0$$ is simply the number of connected components of the space considered, while oriented 2-dimensional manifolds are completely classified by $$b_1=2g$$, where $$g$$ is the genus (i.e. the number of ‘holes’) of the manifold; $$b_j$$ with $$j \\ge 2$$ classifies the features (number of higher-dimensional holes) of higher-dimensional manifolds. What makes Betti numbers convenient is that they are typically much simpler to compute than the homology groups themselves, and yet in several cases knowing the Betti numbers is the same as knowing the full homology of the space. In the absence of torsion, if we want to distinguish two topological objects via their homology, their Betti numbers may already suffice.\n\nData can be represented as a collection (an unordered sequence) of points in a $$n$$-dimensional space $$E_n$$, the space of data. The conventional way to convert a collection of points within a space such as $$E_n$$ into a global object is to use the point cloud as the vertex set of a combinatorial graph, $$\\mathcal {G}$$. The edges of the graph are exclusively determined by a given notion of proximity, specified by some weight parameter $$\\delta$$. The parameter $$\\delta$$ should not fix a ‘distance’, which would imply fixing some sort of metric, but rather provide information about ‘dependence’, i.e. correlation or, even better, relation. 
If dependence is to be thought of in terms of distance at all, it should be a non-metric notion, such as a chemical or ontological distance, to mention just one example. A graph of this sort, while capturing connectivity data quite well, essentially ignores a wealth of higher-order features beyond clustering. Such features can instead be accurately discerned by thinking of the graph as the scaffold (1-skeleton) of a different, higher-dimensional, richer (more complex) discrete object, generated by completing the graph $$\\mathcal {G}$$ to a simplicial complex, $$\\mathcal {C}$$. The latter is a piecewise-linear space built from simple linear constituents (simplices) identified combinatorially along their faces. The decision as to how this is done implies a choice of how to fill in the higher-dimensional simplices of the proximity graph. Such a choice is not unique, and different options lead to different global representations. Two among the most natural and common ones, equally effective for our purpose but with different characteristic features, are: (i) the Čech simplicial complex, whose $$k$$-simplices are all unordered $$(k+1)$$-tuples of points of the space $$E_n$$ whose closed $$\\frac{1}{2} \\delta$$-ball neighborhoods have a non-empty mutual intersection; (ii) the Rips complex, an abstract simplicial complex whose $$k$$-simplices are the collections of unordered $$(k+1)$$-tuples of points pairwise within distance $$\\delta$$. The Rips complex is maximal among all simplicial complexes with the given 1-skeleton (the graph), and the combinatorics of the 1-skeleton completely determines the complex. The Rips complex can thus be stored as a graph and reconstructed out of it. 
For a Čech complex, on the contrary, one needs to store the entire boundary operator, and the construction is more complex; however, this complex contains a larger amount of information about the topological structure of the data space.\n\nAlgebraic topology provides a mature set of tools for counting and collating holes and other topological features of simplicial complexes, both for spaces and for maps between spaces. It is therefore able to reveal patterns and structures not easily identifiable otherwise. Persistent homology is generated recursively, corresponding to an increasing sequence of values of $$\\delta$$: complexes grow with $$\\delta$$, and this leads us naturally to identify the chain maps with a sequence of successive inclusions. Persistent homology is nothing but the image of the homomorphisms thus induced. The available algorithms for computing persistent homology groups typically focus on this notion of filtered simplicial complex. Most invariants in algebraic topology are quite difficult to compute efficiently. Fortunately, homology is exceptional in this respect, because its invariants arise as quotients of finite-dimensional spaces.\n\nTopological information is contained in persistent homology, which can be determined and presented as a sort of parameterized version of the set of Betti numbers. Its role is just that of providing summaries of information over domains of parameter values, so as to better understand relationships among the geometric objects constructed from data. The emerging geometric/topological relationships involve continuous maps between different objects, and therefore become manifestations of functoriality, i.e., of the notion that invariants can be extended not just to the objects studied, but also to the maps between such objects. Functoriality is central in algebraic topology because the functoriality of homological invariants is what permits one to compute them from local information. 
We recall the Künneth theorem, which allows one to consider the Poincaré polynomial of the space $$X$$ as the generating function of the Betti numbers of $$X$$." ]
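The δ-parametrized construction described in the appendix (proximity graph, Rips 1-skeleton, Betti numbers tracked as δ grows) can be illustrated for the simplest invariant, $$b_0$$. This is only a toy sketch, not the TOPDRIM implementation: it assumes Euclidean proximity on made-up point data and counts the connected components of the Rips 1-skeleton with a union-find structure.

```python
import itertools

def b0_rips(points, delta):
    """b_0 (number of connected components) of the Vietoris-Rips
    1-skeleton at proximity parameter delta, via union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if sum((a - b) ** 2 for a, b in zip(p, q)) <= delta ** 2:
            parent[find(i)] = find(j)      # merge the two components
    return len({find(i) for i in range(len(points))})

# Two well-separated clusters of three points each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
profile = [b0_rips(pts, d) for d in (0.5, 1.5, 20.0)]  # [6, 2, 1]
```

For two well-separated clusters, $$b_0 = 2$$ persists over a wide range of δ and is read as a genuine feature, while short-lived values are treated as noise; this is exactly the filtering role that persistence plays in step (ii) of the appendix.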
https://percentage-calculator.net/what-is-x-percent-of-y/what-is-30-percent-of-1700.php
[ "# What is 30 percent of 1700 (30% of 1700)?\n\nAnswer: 30 percent of 1700 is 510\n\n## Fastest method for calculating 30 percent of 1700 (30% of 1700)\n\nAssume the unknown value is 'Y'\n\nY = 30% of 1700\n\nY = (30 / 100) x 1700\n\nY = 510\n\nAnswer: 30 percent of 1700 is 510\n\nIf you want to use a calculator, simply enter 30÷100x1700 and you will get your answer, which is 510\n\n## Have time and want to learn the details?\n\nLet's solve the equation for Y by first rewriting it as: 100% / 1700 = 30% / Y\n\nDrop the percentage marks to simplify your calculations: 100 / 1700 = 30 / Y\n\nMultiply both sides by Y to move it to the left side of the equation: Y ( 100 / 1700 ) = 30\n\nTo isolate Y, multiply both sides by 1700 / 100; we will have: Y = 30 ( 1700 / 100 )\n\nComputing the right side, we get: Y = 510\n\nThis leaves us with our final answer: 30% of 1700 is 510" ]
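The whole calculation collapses to one line of arithmetic, Y = (P / 100) × X. A minimal sketch (the function name is my own):

```python
def percent_of(p, whole):
    """Return p percent of whole: Y = (p / 100) * whole."""
    return p / 100 * whole

print(percent_of(30, 1700))  # prints 510.0
```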
https://slideplayer.com/slide/7819321/
[ "# The Practice of Statistics, 4th edition – For AP* STARNES, YATES, MOORE Chapter 8: Estimating with Confidence Section 8.2 Estimating a Population Proportion\n\n## Presentation transcript:\n\n+ The Practice of Statistics, 4th edition – For AP* STARNES, YATES, MOORE Chapter 8: Estimating with Confidence Section 8.2 Estimating a Population Proportion\n\n+ Chapter 8 Estimating with Confidence 8.1 Confidence Intervals: The Basics 8.2 Estimating a Population Proportion 8.3 Estimating a Population Mean\n\n+ Section 8.2 Estimating a Population Proportion After this section, you should be able to… CONSTRUCT and INTERPRET a confidence interval for a population proportion DETERMINE the sample size required to obtain a level C confidence interval for a population proportion with a specified margin of error DESCRIBE how the margin of error of a confidence interval changes with the sample size and the level of confidence C Learning Objectives\n\n+ Estimating a Population Proportion Activity: The Beads Your teacher has a container full of different colored beads. Your goal is to estimate the actual proportion of red beads in the container. Form teams of 3 or 4 students. Determine how to use a cup to get a simple random sample of beads from the container. Each team is to collect one SRS of beads. Determine a point estimate for the unknown population proportion. Find a 90% confidence interval for the parameter p. Consider any conditions that are required for the methods you use. Compare your results with the other teams in the class.\n\n+ Conditions for Estimating p Suppose one SRS of beads resulted in 107 red beads and 144 beads of another color. 
The point estimate for the unknown proportion p of red beads in the population would be the sample proportion, 107/251 ≈ 0.426. How can we use this information to find a confidence interval for p?\n\n+ Conditions for Estimating p Check the conditions for estimating p from our sample. Random: The class took an SRS of 251 beads from the container. Normal: Both np and n(1 – p) must be greater than 10. Since we don’t know p, we check that the counts of successes (red beads) and failures (non-red) are both ≥ 10. Independent: Since the class sampled without replacement, they need to check the 10% condition. At least 10(251) = 2510 beads need to be in the population. The teacher reveals there are 3000 beads in the container, so the condition is satisfied. Since all three conditions are met, it is safe to construct a confidence interval.\n\n+ Constructing a Confidence Interval for p We can use the general formula from Section 8.1 to construct a confidence interval for an unknown population proportion p. Definition: When the standard deviation of a statistic is estimated from data, the result is called the standard error of the statistic.\n\n+ Finding a Critical Value How do we find the critical value for our confidence interval? If the Normal condition is met, we can use a Normal curve. To find a level C confidence interval, we need to catch the central area C under the standard Normal curve. For example, to find a 95% confidence interval, we use a critical value of 2 based on the 68-95-99.7 rule. Using Table A or a calculator, we can get a more accurate critical value. Note, the critical value z* is actually 1.96 for a 95% confidence level.\n\n+ Finding a Critical Value Use Table A to find the critical value z* for an 80% confidence interval. Assume that the Normal condition is met. 
Since we want to capture the central 80% of the standard Normal distribution, we leave out 20%, or 10% in each tail. Search Table A to find the point z* with area 0.1 to its left. The closest entry is z = –1.28, so the critical value z* for an 80% confidence interval is z* = 1.28.\n\nTable A excerpt:\nz | .07 | .08 | .09\n–1.3 | .0853 | .0838 | .0823\n–1.2 | .1020 | .1003 | .0985\n–1.1 | .1210 | .1190 | .1170\n\n+ One-Sample z Interval for a Population Proportion Once we find the critical value z*, our confidence interval for the population proportion p is as follows. Choose an SRS of size n from a large population that contains an unknown proportion p of successes. An approximate level C confidence interval for p is p̂ ± z*·√(p̂(1 – p̂)/n), where z* is the critical value for the standard Normal curve with area C between –z* and z*. Use this interval only when the numbers of successes and failures in the sample are both at least 10 and the population is at least 10 times as large as the sample.\n\n+ Calculate and interpret a 90% confidence interval for the proportion of red beads in the container. Your teacher claims 50% of the beads are red. Use your interval to comment on this claim.\n\nTable A excerpt:\nz | .03 | .04 | .05\n–1.7 | .0418 | .0409 | .0401\n–1.6 | .0516 | .0505 | .0495\n–1.5 | .0630 | .0618 | .0606\n\nFor a 90% confidence level, z* = 1.645. We checked the conditions earlier. The sample proportion is 107/251 = 0.426, and the interval is statistic ± (critical value) · (standard deviation of the statistic). We are 90% confident that the interval from 0.375 to 0.477 captures the actual proportion of red beads in the container. Since this interval gives a range of plausible values for p and since 0.5 is not contained in the interval, we have reason to doubt the claim.\n\n+ The Four-Step Process We can use the familiar four-step process whenever a problem asks us to construct and interpret a confidence interval. 
State: What parameter do you want to estimate, and at what confidence level? Plan: Identify the appropriate inference method. Check conditions. Do: If the conditions are met, perform calculations. Conclude: Interpret your interval in the context of the problem. Confidence Intervals: A Four-Step Process\n\n+ Choosing the Sample Size In planning a study, we may want to choose a sample size that allows us to estimate a population proportion within a given margin of error. The margin of error (ME) in the confidence interval for p is ME = z*·√(p̂(1 – p̂)/n), where z* is the standard Normal critical value for the level of confidence we want. To determine the sample size n that will yield a level C confidence interval for a population proportion p with a maximum margin of error ME, solve the inequality z*·√(p̂(1 – p̂)/n) ≤ ME for n. Sample Size for Desired Margin of Error\n\n+ Example: Customer Satisfaction Read the example on page 493. Determine the sample size needed to estimate p within 0.03 with 95% confidence. The critical value for 95% confidence is z* = 1.96. Since the company president wants a margin of error of no more than 0.03, we need to solve the inequality 1.96·√(0.5·0.5/n) ≤ 0.03, substituting 0.5 for the sample proportion to find the largest ME possible. Multiply both sides by √n and divide both sides by 0.03. Square both sides. We round up to 1068 respondents to ensure the margin of error is no more than 0.03 at 95% confidence.\n\n+ Section 8.2 Estimating a Population Proportion In this section, we learned that… When constructing a confidence interval, follow the familiar four-step process: STATE: What parameter do you want to estimate, and at what confidence level? PLAN: Identify the appropriate inference method. Check conditions. DO: If the conditions are met, perform calculations. 
CONCLUDE: Interpret your interval in the context of the problem. The sample size needed to obtain a confidence interval with approximate margin of error ME for a population proportion involves solving z*·√(p̂(1 – p̂)/n) ≤ ME for n. Summary\n\n+ Looking Ahead… We’ll learn how to estimate a population mean. We’ll learn about The one-sample z interval for a population mean when σ is known The t distributions when σ is unknown Constructing a confidence interval for µ Using t procedures wisely In the next Section…\n\n+ Homework Chapter 8 #33-36, 38, 40, 42" ]
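The bead-activity numbers in the transcript can be checked directly. A quick sketch in plain Python (z* values hardcoded from Table A as in the slides), reproducing the 90% interval for 107 red beads out of 251 and the sample-size calculation from the customer-satisfaction example:

```python
import math

# One-sample z interval for a proportion: 107 red beads in an SRS of n = 251,
# at 90% confidence (z* = 1.645 from Table A).
n, successes, z_star = 251, 107, 1.645
p_hat = successes / n                               # about 0.426
se = math.sqrt(p_hat * (1 - p_hat) / n)             # standard error of p-hat
lo, hi = p_hat - z_star * se, p_hat + z_star * se   # about (0.375, 0.478)

# Sample size for margin of error <= 0.03 at 95% confidence (z* = 1.96),
# substituting p-hat = 0.5 to maximize the margin of error.
n_needed = math.ceil((1.96 / 0.03) ** 2 * 0.5 * 0.5)  # 1068
```

The interval matches the slides' 0.375 to 0.477 (the upper endpoint is 0.4776, so the last digit depends on rounding), and the sample size matches the 1068 respondents.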
https://research.tue.nl/en/publications/on-the-calculation-of-nearest-neighbors-in-activity-coefficient-m
[ "# On the calculation of nearest neighbors in activity coefficient models\n\n2 Citations (Scopus)\n\n### Abstract\n\nGuggenheim proposed a theoretical expression for the combinatorial entropy of mixing of unequally sized, linear and branched molecules to improve the Flory-Huggins model. Later the combinatorial activity coefficient equation, which was derived from Guggenheim's model, was applied in the UNIQUAC, UNIFAC, and COSMOSAC models. Here we derive from Guggenheim's entropy theory a new function for the number of nearest neighbors of a compound in a multicomponent mixture for which the knowledge of the coordination number and a reference area are not needed. This new relation requires only the mole, volume and surface fractions of the compounds in the mixture. The benefit of the new relation is that both the combinatorial and the residual term in the aforementioned models can be made lattice-independent. We demonstrate that the proposed relation simplifies the Staverman-Guggenheim combinatorial model and can be applied with success to the UNIQUAC and COSMOSPACE models in the description of vapor-liquid phase equilibria and excess enthalpy. We also show that the new expression for the number of nearest neighbors should replace the relative surface area and the number of surface patches in the residual part of the UNIQUAC and the COSMOSPACE model, respectively. As a result, a more rigorous version of the UNIQUAC and the COSMOSPACE model is obtained. 
This could serve as a better basis for predictive models like UNIFAC, COSMO-RS and COSMOSAC.\n\nOriginal language: English. Fluid Phase Equilibria, Vol. 465, pp. 10–23 (14 pages). https://doi.org/10.1016/j.fluid.2018.02.024. Published: 15 Apr 2018.\n\n### Keywords\n\n• COSMO-RS\n• COSMOSAC\n• COSMOSPACE\n• GEQUAC\n• Lattice theory\n• UNIFAC\n• UNIQUAC\n\n### Authors\n\nG.J.P. Krooshof, R. Tuinier, G. de With" ]
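For concreteness, the classical Staverman-Guggenheim combinatorial term that the paper sets out to simplify can be evaluated in a few lines. This sketch implements the textbook expression with the usual coordination number z = 10, not the authors' new lattice-independent relation; the r and q parameter values below are illustrative, not taken from the paper.

```python
import math

def sg_combinatorial(x, r, q, z=10.0):
    """Staverman-Guggenheim combinatorial activity coefficients.

    ln(gamma_i) = ln(phi_i/x_i) + 1 - phi_i/x_i
                  - (z/2) * q_i * [ln(phi_i/theta_i) + 1 - phi_i/theta_i]
    with volume fractions phi and surface fractions theta."""
    rx = sum(ri * xi for ri, xi in zip(r, x))
    qx = sum(qi * xi for qi, xi in zip(q, x))
    gamma = []
    for xi, ri, qi in zip(x, r, q):
        phi = ri * xi / rx        # volume fraction of component i
        theta = qi * xi / qx      # surface fraction of component i
        ln_g = (math.log(phi / xi) + 1.0 - phi / xi
                - 0.5 * z * qi * (math.log(phi / theta) + 1.0 - phi / theta))
        gamma.append(math.exp(ln_g))
    return gamma

# Illustrative r, q values for a small/large molecule pair (not from the paper)
g = sg_combinatorial([0.3, 0.7], r=[0.92, 3.19], q=[1.40, 2.91])
```

As a sanity check, the expression reduces to gamma = 1 for a pure component and for mixtures of equally sized molecules, as any combinatorial model must.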
http://www.kpubs.org/article/articleMain.kpubs?articleANo=E1JSE6_2014_v6n2_206
[ "Numerical simulations of two-dimensional floating breakwaters in regular waves using fixed cartesian grid\nInternational Journal of Naval Architecture and Ocean Engineering. 2014. Jun, 6(2): 206-218\nThis is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.\n• Published : June 30, 2014\nKwang-Leol Jeong\nDepartment of Naval Architecture and Ocean Engineering, Graduate School of Inha University, Incheon, Korea\nYoung-Gill Lee\nDepartment of Naval Architecture and Ocean Engineering, Inha University, Incheon, Korea\n\nAbstract\nThe wave attenuation by floating breakwaters in high amplitude waves, which can lead to wave overtopping and breaking, is examined by numerical simulations. The governing equations, the Navier-Stokes equations and the continuity equation, are calculated in a fixed Cartesian grid system. The body boundaries are defined by the line segments connecting the points where the grid lines and the body surface meet. No-slip and divergence-free conditions are satisfied at the body boundary cells. The nonlinear waves near the moving body are defined using the modified marker-density method. To verify the present numerical method, vortex-induced vibration of an elastically mounted cylinder and free roll decay are numerically simulated and the results are compared with those reported in the literature. 
Using the present numerical method, the wave attenuation by three kinds of floating breakwaters is simulated in a regular wave to compare their performance.
INTRODUCTION
Although bottom-fixed breakwaters have an advantage in calming harbors, they cause problems with water pollution in the harbors due to the blockage of currents; moreover, the cost of construction increases considerably in deep sea or soft ground conditions. Floating breakwaters can be an alternative, but they have poorer performance than bottom-fixed breakwaters. In addition, disaster can occur if the mooring lines are damaged (Lee and Song, 2005). To increase the performance of a breakwater, the effects of nonlinear phenomena such as overtopping and breaking waves need to be analyzed. To secure the safety of the mooring lines, the maximum tensions on the mooring lines have to be estimated considering the second-order wave drift force on a floating breakwater.
The motion of a floating body has mostly been analyzed based on potential theory. Most studies of floating breakwaters were also performed based on potential theory (Loukogeorgaki and Angelides, 2005; Lee and Cho, 2003; Lee and Song, 2005; Song and Kim, 2005). Potential theories use linearized governing equations and free surface boundary conditions. These theories, however, give satisfactory results only when the linear assumptions suit the purpose. If the flows include highly nonlinear phenomena such as wave breaking and wave overtopping, potential theories cannot produce meaningful results. Because such nonlinear phenomena are not negligible when estimating the performance and evaluating the safety of mooring lines, a method that can analyze nonlinear phenomena is necessary. Experiments are one alternative, and many have been performed (Ruol et al., 2008; Chun and Cho, 2011). Computational Fluid Dynamics (CFD) can be another alternative.
Koftis and Prinos (2005) examined the flow near a floating breakwater using CFD, but the motions of the breakwater were not considered. The motions should be considered in the analysis to obtain more rational results. A grid technique for a moving body and a free surface model for nonlinear waves are important. A grid system is essential in most numerical simulations for practical purposes, even though there are meshless methods using particles, such as Najafi-Jilani and Rezaie-Mazyak (2011) and Jung et al. (2008). Grid systems fall into two groups: body-fitted grid systems and non-body-fitted Cartesian grid systems (Mittal and Iaccarino, 2005). A body-fitted grid system provides a more accurate result, particularly near the body boundary. Much research using body-fitted grids has been conducted for floating bodies in waves with grid deformation, overset grids or grid regeneration (Guo et al., 2012; Sadat-Hosseini et al., 2013; Hadzic et al., 2005; Simonsen et al., 2013). In a body-fitted grid system, grid generation near a complex body requires considerable time and effort. Moreover, grid regeneration and interpolation between overset grids for a moving body take a long time. On the other hand, the Cartesian grid system has the advantages of easy grid generation and fast calculation. In the Cartesian grid system, the grid lines do not coincide with the body surface, so a special treatment is required at the body boundary. Such methods are called Immersed Boundary Methods (IBM). The greatest advantage of the Cartesian grid is the easy treatment of moving bodies in a fixed grid system. A range of methods for a moving body in a Cartesian grid system have been developed. On the other hand, few methods consider problems including the free surface. Normally, zero-gradient boundary conditions are imposed for pressure and velocity on the body boundary.
These boundary conditions are difficult to satisfy if a body boundary cell contains the free surface, due to the drastic differences in the densities and viscous coefficients. Lin (2007) proposed the Partial Cell Treatment (PCT) method to simulate a moving body on the free surface. The PCT method defines the body boundary using the volume ratio between body and fluid in a cell. Therefore, the PCT method cannot define the body surface as sharply as the Ghost Cell Immersed Boundary Method (GCIBM) or the Cartesian Cut Cell method (Hu and Kashiwagi, 2009). In the present study, a body boundary treatment technique using no-slip and divergence-free conditions is suggested for the sharp definition of the body boundaries.
The differences in the densities and viscous coefficients of water and air are sources of solution instability. The most popular method for free surface modeling is the Volume of Fluid (VOF) method (Hirt and Nichols, 1981). In the VOF method, the free surface is treated as a transient zone in which the density and viscosity vary continuously in space. The densities and viscous coefficients of the cells near the free surface are defined using the volume fractional function. This transient zone reduces the accuracy of the solution. Park et al. (1999) proposed the marker-density method, which does not use a transient zone. To obtain sufficient stability, the air velocities in the free surface cells are extrapolated from the water velocities. Lee et al. (2012) suggested the modified marker-density method, which calculates the air velocities in the free surface cells with the governing equations. In this study, the modified marker-density method is implemented for the nonlinear free surface definition.
To verify the developed numerical method, vortex-induced vibrations of an elastically mounted circular cylinder are numerically simulated and the results are compared with those of other studies.
Moreover, a free roll decay simulation is performed and the results are compared with data in the published literature. Finally, three kinds of breakwaters in regular waves are simulated using the present numerical method.
NUMERICAL METHOD
- Governing equations and discretization
The filtered Navier-Stokes equations and the continuity equation are employed as governing equations. A Subgrid-Scale (SGS) turbulence model is implemented to consider the effects of turbulent flow smaller than a grid. In Eq. (1), f_i denotes the gravitational acceleration and r_ij is the SGS Reynolds stress. [Eq. (1): filtered Navier-Stokes equations] [Eq. (2): continuity equation]
The governing equations are discretized with the Forward Time, Centered Space (FTCS) scheme, except for the convection terms. The convection terms are discretized in space using the Kawamura-Kuwahara scheme (Kawamura and Kuwahara, 1984) or a first-order upwind scheme, depending on the number of fluid cells, and in time using the Adams-Bashforth scheme. The discretized governing equations are shown in Eqs. (3) and (4). [Eq. (3)] [Eq. (4)] Here the difference operator represents a discretized spatial derivative and u* denotes the tentative velocity defined in Eq. (5). [Eq. (5)]
By substituting Eq. (3) into Eq. (4), the pressure Poisson equation, Eq. (6), is obtained; it is solved using the Successive Over-Relaxation (SOR) method. More details of the spatial discretization can be found in Lee et al. (2012). [Eq. (6)]
- Body boundary conditions
The governing equations are solved on a staggered Cartesian grid. Because the grid lines do not coincide with the body surface, the body boundaries are defined by line segments connecting the points where the grid lines and the body surface meet. The cells containing such a line segment are defined as body boundary cells.
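As a rough illustration of the SOR solution of a pressure Poisson equation such as Eq. (6) — this is not the authors' code; the grid, the Dirichlet boundary condition p = 0, and the relaxation factor are assumptions made for the sketch:

```python
import numpy as np

def solve_poisson_sor(b, dx, omega=1.7, tol=1e-8, max_iter=10000):
    """Solve laplacian(p) = b on a uniform 2-D grid with p = 0 on the
    boundary, using Successive Over-Relaxation (SOR)."""
    p = np.zeros_like(b)
    ny, nx = b.shape
    for _ in range(max_iter):
        max_diff = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # Gauss-Seidel value from the 5-point stencil
                p_gs = 0.25 * (p[j, i-1] + p[j, i+1] + p[j-1, i] + p[j+1, i]
                               - dx * dx * b[j, i])
                diff = omega * (p_gs - p[j, i])   # over-relaxed update
                p[j, i] += diff
                max_diff = max(max_diff, abs(diff))
        if max_diff < tol:                        # inner iteration converged
            break
    return p
```

With 1 < ω < 2, the over-relaxed sweep typically converges in far fewer iterations than plain Gauss-Seidel (ω = 1), which is why SOR is a common choice for the pressure step.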
The no-slip condition is imposed on the body surface and the zero-divergence condition is imposed in body boundary cells. The velocity near a body is assumed to follow a quadratic polynomial w(x), as in Fig. 1. When inducing the velocity profile, a value of zero is imposed at the body surface to enforce the no-slip boundary condition.
To satisfy the divergence-free condition, the divergence of a body boundary cell is calculated by summing the fluxes passing through the grid lines. The flux Q (in Fig. 1) through the grid line of the cut cell H is calculated by integrating the quadratic polynomial. Using the divergence, new pressures are calculated with Eq. (7), and the velocities are computed from them using Eq. (3). The superscripts in Eq. (7) denote the inner iteration steps. The iterative calculation continues until the pressures and velocities converge, so that the zero-divergence condition is satisfied. [Eq. (7)] Here D is the divergence of the body boundary cell and ω is the relaxation factor.
[Fig. 1: Schematic sketch of a velocity profile near a fixed body boundary.]
[Fig. 2: Schematic sketch of velocity profiles near a moving body.]
When a body moves, the velocity profiles are induced with the velocities of the body surface (w_b1 in Fig. 2) instead of the zero value in Fig. 1. The velocity profiles used for the flux calculation change abruptly when the body surface passes a velocity definition point within a time interval, because the velocity definition points used for the profiles change. This causes spurious pressure oscillations. To prevent the abrupt change in the volume flux through the grid face, the volume flux is determined as the weighted average of two fluxes according to Eq. (8). Q1 and Q2 are obtained by integrating the velocity profiles w1(x) and w2(x), respectively.
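Such a flux could be computed by fitting the quadratic profile through the body-surface velocity and two neighbouring velocity points and integrating it over the open part of the cell face. The following is only an illustrative sketch (function names and sample points are hypothetical, not the authors' implementation):

```python
import numpy as np

def fit_quadratic_profile(xs, ws):
    """Fit w(x) = a*x^2 + b*x + c through three samples: the body-surface
    point (zero for a fixed wall, the body velocity for a moving one)
    and two neighbouring velocity definition points."""
    return np.polyfit(xs, ws, 2)          # coefficients [a, b, c]

def face_flux(coeffs, x_body, x_edge):
    """Volume flux through the open segment of a cut-cell face, obtained
    by integrating the fitted profile from the body surface to the edge."""
    antideriv = np.polyint(np.poly1d(coeffs))
    return antideriv(x_edge) - antideriv(x_body)

# Example: fixed wall at x = 0 (no-slip, w = 0) with a linearly growing
# velocity; the flux through the face segment [0, 1] is then 0.5.
coeffs = fit_quadratic_profile([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
Q = face_flux(coeffs, 0.0, 1.0)
```

For a moving body, the weighted average of Eq. (8) would then blend two such fluxes, each computed from one of the two candidate profiles.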
The velocity profile w1(x) is induced with w_b1, w_{i-1,k}, and w_{i-2,k}, and the velocity profile w2(x) is induced with w_b1, w_{i,k}, and w_{i-1,k}. [Eq. (8)]
Eq. (9) must be satisfied to conserve the mass in a body boundary cell, where A_B is the area occupied by the body in the cell. The second term in Eq. (9) is calculated by adding together all the fluxes through the grid faces. Instead of calculating A_B directly, the amount of the body passing through the face is added to the volume flux, as shown in Eq. (10), for an easy calculation of the first term of Eq. (9). Eq. (10) cannot substitute for Eq. (9) sufficiently if the body shape changes sharply, as at a corner. [Eq. (9)] [Eq. (10)]
If a fluid cell neighboring the body boundary changes into a body boundary cell, the pressure changes abruptly because the pressure calculation follows a different process according to the change in the geometrical surroundings. In this case, the pressure in the fluid cell neighboring the body boundary cell can be determined using the same process that is applied to the body boundary cell, as described in the previous paragraphs.
- Free surface boundary conditions
Eqs. (11) and (12) show the dynamic and kinematic boundary conditions of the free surface. Eq. (11) states that the air pressure is the same as that of the water on the free surface, and that the surface tension and the viscous stress on the free surface are ignored. Eq. (12) states that the velocities of a fluid particle and of the free surface, normal to the free surface, are the same at that location of the free surface. [Eq. (11)] [Eq. (12)] In the above, the two velocity vectors are those of the fluid particle and the free surface, respectively, and n is the normal vector of the free surface.
The transport equation of the marker density, Eq. (13), is used to define the position of the free surface instead of Eq. (12). The marker density is an artificial density obtained numerically from the water density (ρ_water) and the air density (ρ_air). A cell is considered a water cell if the marker densities of the cell and its neighboring cells are larger than the average of the air and water densities. Conversely, a cell is considered an air cell if the marker densities of the cell and its neighboring cells are less than the average of the air and water densities. Other fluid cells are treated as free surface cells. The location of the free surface is determined where the marker density equals the average of the air and water densities.
This method for the free surface definition is similar to the VOF method. In the VOF method, the volume fraction is used to determine the density and viscosity of the free surface cell, and the velocity and pressure of free surface cells are calculated using density and viscosity weighted by the volume fraction. In the marker-density method, on the other hand, the air and water regions are treated as separate regions. Therefore, the marker-density value is used only to define the free surface position.
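The cell classification described above can be sketched as follows (a minimal illustration, not the authors' code; the density values and the 5-point neighbour stencil are assumptions for the example):

```python
import numpy as np

RHO_WATER, RHO_AIR = 1000.0, 1.2
RHO_AVG = 0.5 * (RHO_WATER + RHO_AIR)   # threshold between air and water

def classify_cells(rho_m):
    """Classify each interior cell from the marker-density field rho_m:
    'water' if the cell and its 4 neighbours all exceed the average density,
    'air' if the cell and its 4 neighbours are all below it,
    otherwise 'surface' (the cell contains the free surface)."""
    ny, nx = rho_m.shape
    cell_type = np.full((ny, nx), 'surface', dtype=object)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            stencil = [rho_m[j, i], rho_m[j-1, i], rho_m[j+1, i],
                       rho_m[j, i-1], rho_m[j, i+1]]
            if min(stencil) > RHO_AVG:
                cell_type[j, i] = 'water'
            elif max(stencil) < RHO_AVG:
                cell_type[j, i] = 'air'
    return cell_type
```

Only the classification uses the marker density; in the method described here, the pressure and velocity in each region would then be computed with the true air or water properties.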
In this study, the pressure and velocity distributions are calculated from the density and viscosity of air or water, not from the marker density. [Eq. (13): marker-density transport equation]
In the traditional marker-density method, the pressure on the free surface is extrapolated from the nearest air cell, including the gravitational acceleration, and the air velocities of the free surface cells are extrapolated in a Lagrangian manner to obtain stable solutions (Park et al., 1999). In the present method, on the other hand, the air velocities of the free surface cells are calculated with the governing equations. For the stability of the solution, a continuous pressure gradient condition, Eq. (14), is additionally imposed. Fig. 3 and Eq. (15) show how the pressure on the free surface is calculated. [Eq. (14)] [Eq. (15)]
[Fig. 3: Schematic drawing of the pressure calculation on the free surface.]
The velocities and pressures of the free surface cells are calculated using a simultaneous iteration method. The velocities of the free surface cells are calculated according to Eq. (16), in which the tentative velocities are those of Eq. (5). The divergences of the free surface cells are calculated using the free surface velocities. The pressure is updated at each inner iteration step with the new divergence according to Eq. (7). The pressures and velocities are calculated iteratively until they converge. [Eq. (16)]
The processes used to obtain the pressure and velocity of body boundary cells and free surface cells are identical. Therefore, no additional treatment is necessary to stabilize a body boundary cell that contains the free surface.
- In/out flow boundary conditions
The velocity distribution of Stokes second-order wave theory is imposed on the inflow boundary to generate regular waves.
A Neumann boundary condition is imposed for the pressure and wave elevation on the inflow boundary. To prevent unintended wave reflection or pressure oscillation at the outflow boundary, an artificial damping zone is imposed near the outflow boundary: as shown in Eq. (17), the z-direction velocity is artificially damped by 1% every time step. [Eq. (17)]
VERIFICATIONS
- Elastically mounted circular cylinder
Vortex shedding on a circular cylinder suspended by an elastic spring is simulated numerically at Re = 100. The SGS turbulence model is not employed in these calculations due to the low Reynolds number. The amplitude and period of the cylinder motion are affected by the mass of the cylinder (m) and the spring constant (k). The mass ratio (m* = 4m/(πρD²L)) is 5 and the non-dimensional velocity U_R ranges from 0.5 to 1.5.
[Fig. 4: Amplitude of the oscillation of the circular cylinder due to vortex shedding versus non-dimensional velocity.]
Fig. 4 shows the amplitude of the oscillating cylinder at various non-dimensional velocities. The present calculation results agree with the research data of Bahmani and Akbari (2011) and Shiels et al. (2001). A 'lock-in' phenomenon, in which the period of vortex shedding coincides with the period of the mass-spring system, is observed from 0.8 to 1.1 of the non-dimensional velocity. Fig. 5 shows the time histories of the lift force coefficient of the cylinder for the U_R = 0.5 and U_R = 1.1 conditions. Even though there are several spurious pressure oscillations, their effects are small enough to ignore. Fig. 6 shows the pressure contours, streamlines and velocity vectors near the moving cylinder. In the figure, the dashed lines indicate the initial position of the cylinder, and the velocity vectors inside the cylinder denote the velocity of the moving cylinder. Fig. 6 shows the spatially continuous variation of velocity and pressure.
The present numerical simulation method is thus applicable to the simulation of translational body motion driven by fluid forces.
[Fig. 5: Lift force coefficient histories for the U_R = 0.5 and U_R = 1.1 conditions.]
[Fig. 6: Pressure contours, streamlines and velocity vectors at U_R = 0.8, t/T = 140.1.]
- Free roll decay test
Using a rectangular floating body, a free roll decay test is performed to check the applicability to problems including the free surface. The breadth of the body is 0.3 m and the draft is 0.05 m. The translational movements are restrained and the rotational movement is set free. The moment of inertia is 0.262 kg·m². The SGS turbulence model is implemented in the calculation. The initial roll angle is set to 15°. Fig. 7 shows the calculation domain. Three grid sizes (0.004 m, 0.002 m, and 0.001 m) are employed to check the grid dependency, and the time interval is set to 5/10,000 s.
[Fig. 7: Schematic sketch of the computation domain for the free roll decay test.]
Fig. 8 presents the time histories of the roll moment and roll angle. In the case of the 0.004 m grid size, there are spiky variations in the moment history, whereas the roll moment variations are smooth for the 0.002 m and 0.001 m grid sizes. The roll angle for the 0.004 m grid also differs from the other cases, while between the 0.002 m and 0.001 m grids there are only small differences that can be ignored.
[Fig. 8: Time histories of the roll moments and angle during free decay.]
The results of the present simulation are compared with the experimental data reported in Jung et al. (2007) in Fig. 9. The agreement is sufficient to apply the present method to simulate the rotational motion of a floating body when the roll angle is smaller than 6°. Fig. 10 shows the vorticity distribution near the floating body.
A number of vortices exist near the body and affect its motion.
[Fig. 9: Comparison of the roll angle extinction with experimental data.]
[Fig. 10: Vorticity contours and free surface shape at 2.0 s.]
FLOATING BREAKWATERS
- Calculation conditions
Three breakwater shapes are investigated under the same displacement. The submerged section of CASE 1 is a square with 0.15 m edges. The ratio between the breadth and draft of CASE 2 is equal to 2. CASE 3 is designed by giving a slope to the side walls of CASE 1. Fig. 11 shows the shape of each case and Table 1 lists the principal dimensions of all the cases.
[Fig. 11: Section shapes of the floating breakwaters.]
[Table 1: Principal dimensions of floating breakwaters.]
To investigate the effects of wave overtopping, CASE 1-1, designed by reducing the freeboard of CASE 1, is additionally simulated. To limit the drift due to waves, it is assumed that all the breakwaters are moored by a linear spring in the x-direction with a spring constant of 2 N/m. The length of the incident wave is 0.75 m and its height is 0.030 m.
[Fig. 12: Schematic sketch of the computation domain for the floating breakwaters.]
Fig. 12 shows the schematic sketch of the calculation domain. The calculations cannot continue after the waves reflected by the breakwater reach the inflow boundary; therefore, the inflow boundary is located sufficiently far from the floating breakwater. A damping zone is placed near the end of the computational domain to avoid unintended reflections from the outflow boundary. The minimum grid sizes in the x- and z-directions are 0.002 m and the time interval is 4/10,000 s.
- Simulation results
Fig. 13 shows the pressure distributions around each breakwater from 21.0 s to 22.6 s.
The pressure distributions around CASE 1 are similar to those of CASE 1-1 over the whole time range, except on the weather side at 21.1 s and 21.2 s, where the pressure on the weather side of CASE 1 is much higher than that of CASE 1-1. Such a pressure difference, caused by the difference in freeboard height, causes the difference in sway and drift forces: the amplitudes of sway and drift of CASE 1 are a little larger than those of CASE 1-1, as shown in Figs. 14 and 15. There is no large difference in the pressure distribution on the bottoms of CASE 1 and CASE 1-1, as shown in Fig. 13. However, the heave amplitude of CASE 1-1 is smaller than that of CASE 1, as shown in Fig. 15. This difference in heave is caused by the overtopped water of CASE 1-1, even though the amount of overtopped water is not large.
[Fig. 13: Comparison of pressure distributions around the floating breakwaters.]
The pressure variation on the lee side affects the height of the transmitted waves. The pressure on the lee side of CASE 3 is much higher than in the other cases, as shown in Fig. 13; therefore the transmitted wave of CASE 3, measured at x = 0.5 m, is the largest among the cases, as shown in Fig. 16. On the contrary, the transmitted wave height of CASE 2 is the smallest, for the same reason. In CASE 1, CASE 1-1 and CASE 2, the breakwater moves to the right side due to the wave loads; the horizontal displacement of CASE 2, in particular, is larger than in any other case, as shown in Fig. 14. In CASE 3, on the contrary, the breakwater does not move to the right side because of the large pressure on the lee side, as shown in Fig. 14. Therefore the tension on the mooring line of CASE 3 would be smaller than in any other case.
In contrast, the tension on the mooring line of CASE 2 would be the largest due to its large horizontal displacement.
[Fig. 14: Time histories of the motions of each breakwater.]
[Fig. 15: Comparison of the motions of the breakwaters in the frequency domain.]
[Fig. 16: Comparison of the transmitted waves of each floating breakwater.]
CONCLUSIONS
In the present study, the numerical simulation method developed by Lee and Jeong (2013) was verified for the simulation of a two-dimensional floating body and applied to two-dimensional breakwaters. To verify the present numerical method, the 'lock-in' phenomenon on a circular cylinder was simulated; the results were compared to existing research data and show that the present numerical method is applicable to simulating the translational movement of a body due to fluid forces. A few spurious pressure oscillations occur, but they are negligible. Free roll decay was also simulated. Because the present method cannot resolve the boundary layer accurately at high Reynolds numbers, there are some differences when the roll angle is larger than 6°; however, when the roll angle is smaller than 6°, the agreement is good enough to apply the present method to the simulation of floating breakwaters.
The attenuation performance of floating breakwaters was investigated using the present method. Wave overtopping due to a small freeboard reduces the horizontal displacement of a floating breakwater, since the pressure on the weather side is reduced; the overtopped water also reduces the heave motion. The high pressure on the lee side of the trapezoidal shape (CASE 3) drastically reduces the horizontal displacement of the floating breakwater; however, it increases the height of the transmitted wave. On the other hand, the low pressure on the lee side of the rectangular shape (CASE 2) increases the horizontal displacement, but the height of the transmitted wave is reduced.
The rectangular floating breakwater thus has the advantage in calming harbors, while the trapezoidal floating breakwater has the advantage of small tension on the mooring lines.
For more accurate simulation, a method to resolve the boundary layer near the wall has to be developed. Moreover, the present numerical method has to be extended to three dimensions to consider oblique incident waves and wave deflection.
Acknowledgements
This study was supported by INHA University.
References
# MCQ in Physics Part 14 | ECE Board Exam

(Last Updated On: February 22, 2020)

This is the Multiple Choice Questions Part 14 of the Series in Physics, one of the General Engineering and Applied Sciences (GEAS) topics. In preparation for the ECE Board Exam, make sure to expose yourself to and familiarize yourself with each and every question compiled here, taken from various sources including past Board Questions in General Engineering and Applied Sciences (GEAS), Physics books, journals and other Physics references.

### Continue Practice Exam Test Questions Part XIV of the Series

Choose the letter of the best answer in each question.

Newton’s First Law

651. A light hangs from two cables. One cable has a tension of 39.72 lb. and is at an angle of 43.4° with respect to the ceiling. What is the weight of the lamp if the other cable makes an angle of 17.1° with respect to the ceiling?

• a. 37.2 lb.
• b. 35.8 lb.
• c. 36.8 lb.
• d. 36.17 lb.

652. A 46.07 N light hangs from two cables at angles 54.9° and 61.4° with respect to the ceiling.
What is the tension in the first cable?\n\n• a. 24.6 N\n• b. 25 N\n• c. 23.9 N\n• d. 26.4 N\n\n653. A light hangs from two cables. One cable has a tension of 28.75 N and is at an angle of 58.1° with respect to the ceiling. What is the tension in the other cable if it makes an angle of 9.4° with respect to the ceiling?\n\n• a. 15.9 N\n• b. 16.1 N\n• c. 15.4 N\n• d. 14.9 N\n\nNewton’s Second Law\n\n654. An 8.3 kg mass and a 17.1 kg mass are tied to a light string and hung over a frictionless pulley. What is their acceleration?\n\n• a. 33.95 m/s2\n• b. 4 m/s2\n• c. 4.395 m/s2\n• d. 3.395 m/s2\n\n655. An unknown mass and a 9.9 kg mass are tied to a light string and hung over a frictionless pulley. If the tension in the string is 14.5 N, what is the unknown mass?\n\n• a. 1 kg\n• b. 0.8 kg\n• c. 1.8 kg\n• d. 0.5 kg\n\n656. A lady pulls a cart with a force of 1837 N. Neglecting friction, if the cart changes from resting to a speed of 1.3 m/s in a distance of 0.03289 m, what is the total mass of the cart?\n\n• a. 71.5 kg\n• b. 75.1 kg\n• c. 70.5 kg\n• d. 17.5 kg\n\n657. A 3.66 lb. book is resting on a 19.41 lb. table. What is the normal force from the floor on each table leg?\n\n• a. 5 lbs.\n• b. 5.7675 lbs.\n• c. 4.9 lbs.\n• d. 6.7675 lbs.\n\n658. A box sits on a ramp inclined at 21.7° to horizontal. If the normal force on the box from the ramp is 20.94 N, what is the mass of the box?\n\n• a. 3.2 kg\n• b. 2.9 kg\n• c. 2.3 kg\n• d. 3.9 kg\n\n659. A 7.9 kg box sits on a ramp. If the normal force on the box from the ramp is 41.82 N, what is the angle the ramp makes with the (horizontal) ground?\n\n• a. 75.3°\n• b. 57.5°\n• c. 57.3°\n• d. 75.5°\n\n660. A man sees a 44.5 kg cart about to bump into a wall at 1.7 m/s. If the cart is 0.04203 m from the wall when he grabs it, how much force must he apply to stop it before it hits?\n\n• a. 1530 N\n• b. 1250 N\n• c. 1350 N\n• d. 1520 N\n\n661. 
What is the minimum force required to start a 4.2 kg box moving across the floor if the coefficient of static friction between the box and the floor is 0.6?\n\n• a. 23.761 N\n• b. 25.469 N\n• c. 24.696 N\n• d. 26.496 N\n\n662. What is the kinetic energy of a 70 kg man running along at 6.36 m/s?\n\n• a. 1315 J\n• b. 1515 J\n• c. 1215 J\n• d. 1415 J\n\n663. What is the speed of a 53.6 kg woman running with a kinetic energy of 1617 J?\n\n• a. 7.77 m/s\n• b. 7.57 m/s\n• c. 7.67 m/s\n• d. 7.87 m/s\n\n664. What is the gravitational potential energy of a 149.1 kg man at a height of h = 74.21 m above the ground? (consider h = 0 to be the reference where Ug = 0)\n\n• a. 100,000 J\n• b. 107,300 J\n• c. 108,400 J\n• d. 110,580 J\n\n665. What is the height where a 121.2 kg woman would have a gravitational potential energy of 10610 J? (consider h = 0 to be the reference where Ug = 0)\n\n• a. 9.54 m\n• b. 7.57 m\n• c. 8.94 m\n• d. 7.87 m\n\n666. What is the change in gravitational potential energy for a 68.9 kg man walking up stairs from a height of 63.07 m to 107.69 m?\n\n• a. -30133 J\n• b. 301230 J\n• c. 30320 J\n• d. 30130 J\n\n667. What is the change in gravitational potential energy for a 132.5 kg woman walking down a hill from a height of 102.86 m to 70.38 m?\n\n• a. -4123 J\n• b. -42175 J\n• c. -5 J\n• d. 4321 J\n\n668. How much work is done by gravity when an 82.3 kg diver jumps from a height of 5.23 m into the water?\n\n• a. 4321 J\n• b. 4218 J\n• c. 4871 J\n• d. 4334 J\n\n669. How much work must be done to move a 34.6 kg box 3.66 m across the floor if the coefficient of kinetic friction between the box and the floor is 0.3?\n\n• a. 372.3 J\n• b. -321.9 J\n• c. -372.3 J\n• d. 3.234 J\n\n670. What is the coefficient of kinetic friction between a 16 kg box and the floor if it takes 140.4 J of work to move it a distance of 3.2 m?\n\n• a. 0.28\n• b. -0.25\n• c. 0.281\n• d. 0.21\n\n671. 
What is the length of a 73.7 m wide rectangular slab if its mass is 1.01 kg and its moment of inertia about an axis through the center and perpendicular to the large flat face is 713.6 kg*m2?\n\n• a. 55.2 m\n• b. 53.5 m\n• c. 52.5 m\n• d. 54.5 m\n\n672. What is the mass of a hollow cylinder of radius 3.38 m if it has a moment of inertia of 33.930468 kg*m2 about the central axis of rotation?\n\n• a. 3.07 kg\n• b. 2.50 kg\n• c. 2.97 kg\n• d. 1.95 kg\n\n673. A 0.324 kg ball is stuck 0.54 m from the center of a disk spinning at 5.55 rad/s. What is its angular momentum?\n\n• a. 0.5243 J*s\n• b. 0.6321 J*s\n• c. 1.021 J*s\n• d. 1.1524 J*s\n\n674. A 40.2 kg child is sitting on the edge of a 165.3 kg merry-go-round of radius 2.1 m while it is spinning at a rate of 3.229 rpm. If the child moves to the center, how fast will it be spinning? (Hint: use conservation of angular momentum)\n\n• a. 4.5 rpm\n• b. 4.4 rpm\n• c. 4.2 rpm\n• d. 4.8 rpm\n\n675. An empty metal can rolling down a hill gets to the bottom with a speed of 1.06 m/s. What would have been the speed if the can was full? (Assume the ends of the hollow can don’t significantly affect its moment of inertia and the walls are so thin that the full can may be considered as a solid cylinder of the same radius)\n\n• a. 1.333 m/s\n• b. 1.423 m/s\n• c. 1.223 m/s\n• d. 1.323 m/s\n\n676. A light hangs from two cables. One cable has a tension of 23.83 lb. and is at an angle of 26.2° with respect to the ceiling. What is the weight of the lamp if the other cable makes an angle of 48.7° with respect to the ceiling?\n\n• a. 34.86 lb.\n• b. 33.9 lb.\n• c. 35.8 lb.\n• d. 36 lb.\n\n677. A light hangs from two cables. One cable has a tension of 25.55 N and is at an angle of 7.5° with respect to the ceiling. What is the tension in the other cable if it makes an angle of 20.2° with respect to the ceiling?\n\n• a. 28 N\n• b. 26 N\n• c. 27 N\n• d. 29 N\n\n678. A man is pulling a cart (total 26.7 kg) with a force of 1612 N. 
Neglecting friction, how much time does it take to get the cart from rest up to 1.5 m/s?\n\n• a. 0.04583 s\n• b. 0.01252 s\n• c. 0.03567 s\n• d. 0.02484 s\n\n679. A lady is pulling a cart (total 55.7 kg) with a force of 395 N. Neglecting friction, what is the acceleration of the cart?\n\n• a. 7.1 m/s2\n• b. 7.092 m/s2\n• c. 7.091 m/s2\n• d. 7.093 m/s2\n\n680. A lady pulls a cart with a force of 1454 N. Neglecting friction, if the cart changes from resting to a speed of 1.7 m/s in a distance of 0.02872 m, what is the total mass of the cart?\n\n• a. 28.90 kg\n• b. 28.91 kg\n• c. 27.90 kg\n• d. 29.00 kg\n\n681. A man sees a 44.5 kg cart about to bump into a wall at 1.7 m/s. If the cart is 0.04203 m from the wall when he grabs it, how much force must he apply to stop it before it hits?\n\n• a. 1530 N\n• b. 1730 N\n• c. 1630 N\n• d. 1830 N\n\n682. A 5.4 kg mass and a 6.2 kg mass are tied to a light string and hung over a frictionless pulley. What is the tension in the string?\n\n• a. 56.57 N\n• b. 55.569 N\n• c. 57.1 N\n• d. 56.569 N\n\n683. A 4.6 kg mass and an 8.5 kg mass are tied to a light string and hung over a frictionless pulley. What is their acceleration?\n\n• a. 2.92 m/s2\n• b. 2.916 m/s2\n• c. 2.917 m/s2\n• d. 3 m/s2\n\n684. An unknown mass and a 13.2 kg mass are tied to a light string and hung over a frictionless pulley. If the tension in the string is 61.3114 N, what is the unknown mass?\n\n• a. 4.12 kg\n• b. 4.1 kg\n• c. 4.0 kg\n• d. 4.2 kg\n\n685. A 3.1 lb. book is resting on a 73.76 lb. table. What is the normal force of the book on the table?\n\n• a. -3.2 lbs.\n• b. -3.1 lbs.\n• c. 3.2 lbs.\n• d. 3.1 lbs.\n\n686. A 5.2 kg box sits on a ramp inclined at 42.4° to horizontal. What is the normal force on the box from the ramp?\n\n• a. 37.64 N\n• b. 37.53 N\n• c. 37.54 N\n• d. 37.63 N\n\n687. A 6.7 kg box sits on a ramp inclined at 37.4° to horizontal. What is the normal force on the ramp from the box?\n\n• a. 52.17 N\n• b. 52.16 N\n• c. -52.16 N\n• d. 
-52.17 N\n\n688. A box sits on a ramp inclined at 21.7° to horizontal. If the normal force on the box from the ramp is 20.94 N, what is the mass of the box?\n\n• a. 2.3 kg\n• b. 2.4 kg\n• c. 2.2 kg\n• d. 2.1 kg\n\n689. A 7.9 kg box sits on a ramp. If the normal force on the box from the ramp is 41.82 N, what is the angle the ramp makes with the (horizontal) ground?\n\n• a. 57.3°\n• b. 56.3°\n• c. 57.4°\n• d. 55.3°\n\n690. What is the minimum force required to start an 11.5 kg box moving across the floor if the coefficient of static friction between the box and the floor is 0.64?\n\n• a. 72.13 N\n• b. 72.1 N\n• c. 72.13 N\n• d. 72.128 N\n\n691. What is the mass of a box which requires a minimum pushing force of 74.088 N to start moving across a floor with a coefficient of static friction between the box and the floor of 0.6?\n\n• a. 12.2 kg\n• b. 12.6 kg\n• c. 13 kg\n• d. 12.5 kg\n\n692. If a minimum force of 79.4682 N is required to push on a 15.9 kg box to begin moving it across the floor, what is the coefficient of static friction between the box and the floor?\n\n• a. 0.51\n• b. 0.52\n• c. 0.53\n• d. 0.54\n\n693. A box is sliding down a ramp with an acceleration of 1.621 m/s2. If the ramp is at an angle of 25.1° relative to the ground, what is the coefficient of kinetic friction between the box and the ramp?\n\n• a. 0.2857\n• b. 0.3000\n• c. 0.2856\n• d. 0.2867\n\n694. What is the kinetic energy of a 70 kg man running along at 6.36 m/s?\n\n• a. 1416 J\n• b. 1417 J\n• c. 1415 J\n• d. 1418 J\n\n695. What is the change in gravitational potential energy for a 68.9 kg man walking up stairs from a height of 63.07 m to 107.69 m?\n\n• a. 30130 J\n• b. 30132 J\n• c. 30131 J\n• d. 30133 J\n\n696. What is the mass of a diver whose gravitational potential energy changes by -160,500 J when diving into water from a height of 130.61 m?\n\n• a. 125.2 kg\n• b. 125.3 kg\n• c. 125.5 kg\n• d. 125.4 kg\n\n697. 
How much work is done by gravity when an 82.3 kg diver jumps from a height of 5.23 m into the water?\n\n• a. 4220 J\n• b. 4218 J\n• c. 4229 J\n• d. 4219 J\n\n698. What height above the water does a 133.9 kg diver need to jump from for gravity to do 6062 J of work on him/her?\n\n• a. 4.61 m\n• b. 4.60 m\n• c. 4.62 m\n• d. 4.63 m\n\n699. What is the mass of a diver if gravity does 8100 J of work on him/her when jumping into the water from a height of 6.55 m?\n\n• a. 126.2 kg\n• b. 126.1 kg\n• c. 126.4 kg\n• d. 126.3 kg\n\n700. How much work must be done to move a 34.6 kg box 3.66 m across the floor if the coefficient of kinetic friction between the box and the floor is 0.3?\n\n• a. 372.2 J\n• b. 372.3 J\n• c. 372.4 J\n• d. 372.5 J\n\n### Complete List of MCQs in General Engineering and Applied Science per topic", null, "© 2014 PinoyBIX Engineering. © 2019 All Rights Reserved", null, "", null, "", null, "" ]
[ null, "https://pinoybix.org/wp-content/uploads/2015/02/geas_mcqs_in_physics.png", null, "https://www.paypalobjects.com/en_US/i/scr/pixel.gif", null, "http://www.blogarama.com/images/button_sm_1.gif", null, "https://images.dmca.com/Badges/dmca-badge-w100-5x1-09.png", null, "https://play.google.com/intl/en_us/badges/static/images/badges/en_badge_web_generic.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8416425,"math_prob":0.96845067,"size":11305,"snap":"2020-10-2020-16","text_gpt3_token_len":4159,"char_repetition_ratio":0.16228652,"word_repetition_ratio":0.30462268,"special_character_ratio":0.42158338,"punctuation_ratio":0.19585745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976251,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T10:25:41Z\",\"WARC-Record-ID\":\"<urn:uuid:880f3e36-aadc-462b-9f8d-2e54cb7c690a>\",\"Content-Length\":\"121862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d888e9e7-99d9-4630-a74d-4bd4c0098544>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bef6fca-f632-4b8b-bc1b-d1a31da8b758>\",\"WARC-IP-Address\":\"104.24.98.66\",\"WARC-Target-URI\":\"https://pinoybix.org/2015/02/mcqs-in-physics-part14.html\",\"WARC-Payload-Digest\":\"sha1:KGT5RCVMSCEUPXIMFNTXSXLT7FYGY4JU\",\"WARC-Block-Digest\":\"sha1:JHB6YW3QIGDP52Z46MLPCYLTXOB7MDZB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148850.96_warc_CC-MAIN-20200229083813-20200229113813-00019.warc.gz\"}"}
https://mindstat.medium.com/random-is-not-actually-random-79c22c3c62f8?source=post_page-----4b3d1209f9b1--------------------------------
[ "# Random is NOT actually random\n\nIf we take a random page from any book or a paper or whatever you like, and select all the numbers from it, almost 30% of all numbers will begin with 1, 17% begin with 2 , 11% begin with 3, and the percentage goes down with the number till it reach 9. this is called the benford’s law\n\nIt is said that if the first digits of the numbers you selected does not comply with this law, then the numbers were not random.\n\nThis phenomenon was first observed by Simon Newcomb and later formulated by Frank Benford. This phenomenon can be used for almost everything, where all the 9 numbers have equal probability of appearing as the first digit(eg: length of river, height of a tower.. etc.).\n\nThe most common use of this phenomenon is in the accounting and taxes. where they use it to asses fraud or irregularities in the books.\n\nThis is how it works, let’s take the general ledger and take the first digit of all the revenue and expenditure and consider benford’s law, if it does not comply with the benford’s law, then there is a chance of irregularities in the books, and better review it again, this is also used by the tax legislators but with a more complex equation.\n\nThe benford’s law is being used in a lot of areas including the elections which is held a lot of disagreements from the experts. they say there are lot of factors other than fraud that would lead to the change in the curve. but still other possibilities of the uses of the phenomenon are being explored.\n\nThis will completely change the way we asses randomness, so whatever data we consider there is a high chance that the fist digit obey the benford’s law. 
This happens because, as we count up toward bigger numbers, we always have to pass through the smaller first digits first; that means the further we go, the more chances are given to the 1s and 2s.\n\nMathematically, Benford’s law is based on the base-10 logarithm: the probability that the leading digit of a number will be n can be calculated as log(1 + 1/n). There are more complex and advanced studies and explanations based on the same law, but this is the general idea. The graph that represents Benford’s law is shown below" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.959214,"math_prob":0.97989076,"size":2657,"snap":"2021-31-2021-39","text_gpt3_token_len":596,"char_repetition_ratio":0.13494158,"word_repetition_ratio":0.0,"special_character_ratio":0.21678585,"punctuation_ratio":0.083636366,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9848108,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T08:15:42Z\",\"WARC-Record-ID\":\"<urn:uuid:6ea370fe-8ba8-4336-8dd1-86d8cdca5257>\",\"Content-Length\":\"116869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc373625-d91c-4c28-a27d-53300408cf9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fccf5657-dbad-49da-a055-3903d503b2b2>\",\"WARC-IP-Address\":\"162.159.153.4\",\"WARC-Target-URI\":\"https://mindstat.medium.com/random-is-not-actually-random-79c22c3c62f8?source=post_page-----4b3d1209f9b1--------------------------------\",\"WARC-Payload-Digest\":\"sha1:WC3HDGSVS5PYT3XDUNYMOPQZDZMTD576\",\"WARC-Block-Digest\":\"sha1:MIEVPJCBCB7S6FRMGPTDMKTRZ6NSKUPJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153854.42_warc_CC-MAIN-20210729074313-20210729104313-00503.warc.gz\"}"}
https://www.teacherspayteachers.com/Product/Writing-and-Solving-Linear-Equations-Bundle-4184593?utm_source=taylorjsmathmaterials.com&utm_campaign=Resource%20Guide%208th
[ "", null, "# Writing and Solving Linear Equations Bundle", null, "", null, "", null, "", null, "7th - 10th\nSubjects\nStandards\nResource Type\nFormats Included\n• Zip\n•", null, "Activity\nPages\n38 pages\n\\$11.47\nBundle\nList Price:\n\\$12.75\nYou Save:\n\\$1.28\n\\$11.47\nBundle\nList Price:\n\\$12.75\nYou Save:\n\\$1.28\nEasel Activities Included\nSome resources in this bundle include ready-to-use interactive activities that students can complete on any device. Easel by TpT is free to use! Learn more.\n\n### Description\n\nStudents practice writing and solving equations in this 9 pack bundle!\n\n**If you only need one of the lessons, each is sold individually!\n\nWhat is covered:\n\n- It reviews the following vocabulary and key terms: coefficient, term, linear, nonlinear, equation, and types of angles\n\nWorksheet Breakdown:\n\n1. Writing Equations Using Symbols Worksheet: Students practice writing equations from words\n\n2. Linear and Nonlinear Expressions Worksheet: Students write an expression from words and then determine whether it is linear or nonlinear\n\n3. Linear Equations Worksheet: Students determine if given values are solutions to equations\n\n4. Solving a Linear Equation Worksheet: Students solve one-step and two-step equations\n\n5. Writing and Solving Linear Equations with Geometry Worksheet: Students practice writing and solving equations based on geometry word problems\n\n6. Solutions of a Linear Equation Worksheet: Students practice solving equations where distributive property is needed; no solution problems are included\n\n7. Classifications of Solutions Worksheet: Students simplify equations and determine if they have one, no, or infinite solutions\n\n8. Linear Equations as Proportions Worksheet: Students solve equations by multiplying each numerator with the other sides denominator\n\n9. 
Application of Linear Equations Worksheet: Students write and solve equations from words\n\nPossible uses:\n\n- End-of-topic cumulative review guide\n\n- Extra practice for struggling students\n\n- Homework\n\n- Test prep\n\n*************************************************************************************************************\n\nYou might be interested in:\n\n- these FREEBIES\n\n- the unit on Congruence\n\n- the unit on Linear Equations\n\n- the unit on Percents & Proportional Relationships\n\nor my store for other material to supplement the 7th & 8th grade curriculum!\n\nFollow My Store to receive email updates on new items, product launches, and sales!\n\n**If you have any requests and/or updates for my work, please message me!\n\nTotal Pages\n38 pages\nTeaching Duration\nN/A\n\n### Standards\n\nSolve linear equations in one variable." ]
[ null, "https://www.facebook.com/tr", null, "https://ecdn.teacherspayteachers.com/thumbitem/Writing-and-Solving-Linear-Equations-Bundle-4184593-1659455444/original-4184593-1.jpg", null, "https://ecdn.teacherspayteachers.com/thumbitem/Writing-and-Solving-Linear-Equations-Bundle-4184593-1659455444/original-4184593-2.jpg", null, "https://ecdn.teacherspayteachers.com/thumbitem/Writing-and-Solving-Linear-Equations-Bundle-4184593-1659455444/original-4184593-3.jpg", null, "https://ecdn.teacherspayteachers.com/thumbitem/Writing-and-Solving-Linear-Equations-Bundle-4184593-1659455444/original-4184593-4.jpg", null, "https://static1.teacherspayteachers.com/tpt-frontend/releases/production/current/tpt-easel-icon.atx6cuo0bp.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84038615,"math_prob":0.76164305,"size":2095,"snap":"2022-40-2023-06","text_gpt3_token_len":397,"char_repetition_ratio":0.23529412,"word_repetition_ratio":0.027303753,"special_character_ratio":0.22863962,"punctuation_ratio":0.12386707,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9758982,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T18:58:33Z\",\"WARC-Record-ID\":\"<urn:uuid:5d37afe6-4210-4a68-9b95-f5a088ea65c8>\",\"Content-Length\":\"266039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65686364-3b8e-4b84-b902-b69b46ee9e03>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a41aa13-6fcb-4ecf-b85f-0750568518b0>\",\"WARC-IP-Address\":\"23.197.178.13\",\"WARC-Target-URI\":\"https://www.teacherspayteachers.com/Product/Writing-and-Solving-Linear-Equations-Bundle-4184593?utm_source=taylorjsmathmaterials.com&utm_campaign=Resource%20Guide%208th\",\"WARC-Payload-Digest\":\"sha1:JQOUOJ2RZ23QPQRNJESM64VBHRH67QVH\",\"WARC-Block-Digest\":\"sha1:D6VQFHVP6XCK5XLFVVQM2ZYWOMS5WTRW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335276.85_warc_CC-MAIN-20220928180732-20220928210732-00025.warc.gz\"}"}
https://cloud.originlab.com/doc/COM/Classes/Application/FindMatrixSheet
[ "# 2.1.15 FindMatrixSheet\n\n## Description\n\nFind a matrix sheet by name.\n\n## Syntax\n\nVB: Function FindMatrixSheet(Name As ByVal String ) As MatrixSheet C++: MatrixSheet FindMatrixSheet(LPCSTR Name ) C#: MatrixSheet FindMatrixSheet(string Name ) \n\n## Parameters\n\nName\nThe range string of the Origin matrix sheet to be found. The range string contains the workbook name between square brackets followed by the sheet name:\n[<bookName>]<sheetName>\nThe following special notations are also supported:\n1. Empty string -- this means the active sheet from the active book\n2. Book name only -- like \"Book1\", will get the active sheet from named book\n3. Sheet name only with ! at the end -- like \"Sheet2!\", will get the named sheet from the active book\n\n## Return\n\nIf the named matrix sheet is found then an Origin MatrixSheet is returned.\n\n## Examples\n\n### VBA\n\nPrivate Sub SetMatrixSheetDimensions(sheetName As String, numRows As Integer, numCols As Integer)\nDim app As Origin.ApplicationSI\nSet app = New Origin.ApplicationSI\n\nDim msheet As Origin.MatrixSheet\nSet msheet = app.FindMatrixSheet(sheetName)\nIf msheet Is Nothing Then\nMsgBox (\"Failed to find matrix sheet\")\nExit Sub\nEnd If\n\n' Put current dimensions in Excel sheet cells A1 and B1\nRange(\"A1\") = msheet.Rows ' put number of rows in Excel cell A1\nRange(\"B1\") = msheet.Cols ' put number of columns in Excel cell B1\n\nmsheet.Rows = numRows\nmsheet.Cols = numCols\n\n' Put new dimensions in Excel sheet cells A2 and B2\nRange(\"A2\") = msheet.Rows ' put number of rows in Excel cell A1\nRange(\"B2\") = msheet.Cols ' put number of columns in Excel cell B1\nEnd Sub\n\n### C#\n\nusing Origin; // allow using MatrixSheet without having to write Origin.MatrixSheet\n\nstatic void SetMatrixSheetDimensions(string strSheetName, int nRows, int nCols)\n{\nApplicationSI app = new Origin.ApplicationSI();\n\nMatrixSheet msheet = app.FindMatrixSheet(strSheetName);\nif( msheet == null 
)\n{\nreturn;\n}\nConsole.WriteLine(\"Current dimensions of \" + msheet.Name + \" are \" + msheet.Rows + \" rows by \" + msheet.Cols + \" columns.\");\n\nmsheet.Rows = nRows;\nmsheet.Cols = nCols;\nConsole.WriteLine(\"New dimensions of \" + msheet.Name + \" are \" + msheet.Rows + \" rows by \" + msheet.Cols + \" columns.\");\n\n}\n\n### Python\n\nimport OriginExt as O\napp = O.Application(); app.Visible = app.MAINWND_SHOW\npageName = app.CreatePage(app.OPT_MATRIX)\nlayer = app.FindMatrixSheet(pageName)\nprint(layer.Name)\n\n8.0SR2" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5114903,"math_prob":0.94976044,"size":1906,"snap":"2021-43-2021-49","text_gpt3_token_len":492,"char_repetition_ratio":0.17770767,"word_repetition_ratio":0.23443224,"special_character_ratio":0.24973767,"punctuation_ratio":0.16969697,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9911066,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T09:22:44Z\",\"WARC-Record-ID\":\"<urn:uuid:bb57c5a4-33aa-477b-adcc-9cd42c83f5eb>\",\"Content-Length\":\"179220\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a86b19b0-155d-4836-a104-1fa2c10a0fe4>\",\"WARC-Concurrent-To\":\"<urn:uuid:27a20778-ef73-4d35-a31b-0788ca4714ac>\",\"WARC-IP-Address\":\"13.249.32.96\",\"WARC-Target-URI\":\"https://cloud.originlab.com/doc/COM/Classes/Application/FindMatrixSheet\",\"WARC-Payload-Digest\":\"sha1:RDMWLB66CKTZEQS56MFXWEB7VKSCUIXW\",\"WARC-Block-Digest\":\"sha1:B642EW4AZD74D7YGGJMTKB3HWDS7JWBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363290.59_warc_CC-MAIN-20211206072825-20211206102825-00250.warc.gz\"}"}
https://blog.changkun.de/archives/2016/04/194/
[ "$P(A丨B)=\\frac{P(AB)}{P(B)}$\n\n$P(B)=P(A)P(B|A)+P(\\bar A)P(B | \\bar A) =ab+(1-a)(1-b) = 2ab+1-a-b$\n\n1. Scientific Regress: http://www.firstthings.com/article/2016/05/scientific-regress" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9646047,"math_prob":0.987168,"size":725,"snap":"2020-45-2020-50","text_gpt3_token_len":631,"char_repetition_ratio":0.073509015,"word_repetition_ratio":0.0,"special_character_ratio":0.32413793,"punctuation_ratio":0.045454547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991184,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-23T16:00:59Z\",\"WARC-Record-ID\":\"<urn:uuid:8fedfab7-644f-4158-8254-b48d39da7e06>\",\"Content-Length\":\"21332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e7a5f90-31d7-4df2-ae89-a286c7eee8f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:52782ac6-d7f8-49e7-af01-84a5abbda313>\",\"WARC-IP-Address\":\"139.59.204.66\",\"WARC-Target-URI\":\"https://blog.changkun.de/archives/2016/04/194/\",\"WARC-Payload-Digest\":\"sha1:UN65EOKNYPTZZAGKH2DAOT6VTEIWZABN\",\"WARC-Block-Digest\":\"sha1:Z7K2YXXRMM3BHTS52B3H4AYZD4WHEK3W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141163411.0_warc_CC-MAIN-20201123153826-20201123183826-00405.warc.gz\"}"}
https://mathoverflow.net/questions/324842/what-is-a-hypergraph-minor
[ "# What is a hypergraph minor?\n\nIs there a theory of hypergraph minors? I could only find some attempts to define them at papers/theses, whose main topic was something else. What would be a useful definition? Does the hypergraph version of the Robertson–Seymour theorem hold?\n\n• Arguably one of the baby steps of graph minor theory is the Wagner theorem on planar graphs. Already this is highly non-trivial for hypergraphs. Recent work of Carmesin has provided a finite list of forbidden minors (for some definition of minor) for the embeddability of simply-connected locally 3-connected 2-complexes in R^3, but there are infinite antichains when these hypotheses are lifted. Related notions of minor towards embedabillity have also been introduced by Nevo and Wagner – Arnaud Mar 8 at 14:47\n\nLet $$H$$ and $$H′$$ be hypergraphs. Then $$H$$ is a minor of $$H′$$ if $$H$$ can be obtained from $$H′$$ by a sequence of operations of the following kinds:\n• addition of ahyperedge $$e$$ such that the set $$e$$ induces a clique in the underlying graph, and" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9385511,"math_prob":0.9867391,"size":2002,"snap":"2019-13-2019-22","text_gpt3_token_len":491,"char_repetition_ratio":0.12112112,"word_repetition_ratio":0.0,"special_character_ratio":0.22877122,"punctuation_ratio":0.08672087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981258,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T11:03:31Z\",\"WARC-Record-ID\":\"<urn:uuid:31dc99ef-d84d-435d-a9f1-ffcde65c2e70>\",\"Content-Length\":\"115877\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13fbbae9-78ff-49f2-86a1-876f5b845081>\",\"WARC-Concurrent-To\":\"<urn:uuid:7439a931-73a8-41c7-b9aa-e4527d42e337>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/324842/what-is-a-hypergraph-minor\",\"WARC-Payload-Digest\":\"sha1:STSIIL2GNAXL7VDOCDO3FJS6CLKK3OFE\",\"WARC-Block-Digest\":\"sha1:VYB4DETPWVOG7Y2IJK66RRP3JBZTMEJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204969.39_warc_CC-MAIN-20190326095131-20190326121131-00502.warc.gz\"}"}
http://macphilly.com/page/is-14-an-even-number-39591933.html
[ "# Is 14 an even number", null, "This means that if the integer is divided by 2, it yields no remainderso it has 0 parity. Is 14 a palindrome? If the number is larger than 10, if it ends in 0 it is also an even number. Is 14 a perfect square? You can help Wikipedia by adding to it. How is 14 spelled out in other languages or countries? For example: 0N, 2N, 4N, 6N, What are the factor combinations of the number 14? What is the place value chart for the number 14?\n\n• Is 14 an Even Number\n• Even and Odd Numbers Between 1 and Even and Odd Numbers Examples\n• Even and Odd Numbers\n• Is 14 an even or odd number\n• [SOLVED] Is 14 an Even number\n• What is Even Number Definition, Facts & Example\n\n• You can divide. Related Links: Is 14 an odd number? What is the prime factorization of 14? What are the factors of 14? Is 14 a perfect square?", null, "What are the multiples of 14? The number five can be divided into two groups of two and one group of one. Even numbers always end with a digit of 0, 2, 4, 6 or 8.\n\n## Is 14 an Even Number\n\n2, 4, 6, 8, 10, 12, 14, 16,\nThis short article about mathematics can be made longer. From Wikipedia, the free encyclopedia. What are the factor combinations of the number 14? What are the prime factors of the number 14? How is 14 written in roman numerals?\n\n## Even and Odd Numbers Between 1 and Even and Odd Numbers Examples", null, "Bholi si surat ringtone for iphone\nHidden category: Math stubs.\n\nHow is 14 spelled out in other languages or countries? What is the total number of prime factors of the number 14?\n\nVideo: Is 14 an even number Even Number Song\n\nIf you divide by 2 and there is a remainder left, then the number is odd. They are even because they can be divided by 2 evenly.\n\n### Even and Odd Numbers\n\nIf the number is larger than 10, if it ends in 0 it is also an even number. Factoring Questions What are the factors or divisors of the number 14?\n\nLet's try that calculation 14 ÷ 2 = 7. 
Notice the answer is a whole number and doesn't have a remainder? That means the number is even, as it can cleanly be. To find out if 14 is an even number, we divided 14 by two. When we did that, we found that the answer is a whole number.\n\n## Is 14 an even or odd number\n\nIf you divide any number, such as Here are a couple of methods you can use to figure out if 14 is an even or odd number: Divide By Two Method You can divide 14 by two and if the result is an.\nIs 14 an even number?\n\nIf the number is larger than 10, if it ends in 0 it is also an even number. In other projects Wikimedia Commons. Any number multiplied with an even number will result in an even number. What is log 10 14?\n\n### [SOLVED] Is 14 an Even number\n\nHow is 14 formatted in other languages or countries? For example: 0N, 2N, 4N, 6N,", null, "Is 14 an even number Is 14 a Fibonacci number? See Terms of Use for details. They are even because they can be divided by 2 evenly.", null, "That means the number is even, as it can cleanly be divided by 2, with no remaining remainder. Is 14 an odd number?\nDefinition of Even Number explained with real life illustrated examples.\n\n### What is Even Number Definition, Facts & Example\n\nAlso learn the For example: 2, 4, 6, 8, 10, 12, 14, 16 are sequential even numbers​. All the even and odd numbers between 1 and are discussed here. What are the even numbers from 1 to ?\n\nVideo: Is 14 an even number Even and Odd Numbers Song for Kids - Odds and Evens for Grades 2 & 3\n\nThe even 12 14 16 18 20 22 24 26 28 An even number is an integer which is \"evenly divisible\" by two. 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50,\nAny number multiplied with an even number will result in an even number.\n\nWhat is the negative version of the number 14? Views Read Change Change source View history. 
Even numbers are either positive or negative,as even number are types of integers.\n\nZero is an even number because zero divided by two equals zero, which despite not being a natural numberis an integer.", null, "Potassium persulfate initiator mechanism definition Is 14 a Fibonacci number? What are the factor combinations of the number 14? From Wikipedia, the free encyclopedia. What is the total number of prime factors of the number 14? How is 14 spelled out in other languages or countries?\n\n## 1 thoughts on “Is 14 an even number”\n\n1.", null, "Nizahn:\n\nCalculation Questions What are the answers to common fractions of the number 14?" ]
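The divide-by-two test that this page circles around can be stated in a few lines of Python (the function name `is_even` is my own, not from the page):

```python
def is_even(n: int) -> bool:
    # An integer is even exactly when dividing it by 2 leaves no remainder.
    return n % 2 == 0

# 14 / 2 = 7 with remainder 0, so 14 is even.
print(is_even(14))               # True
# Even numbers can be negative, and zero counts as even too.
print(is_even(-4), is_even(0))   # True True
print(is_even(5))                # False
```

The same check also confirms the page's rule that any multiple of an even number is even, since `(k * n) % 2 == 0` whenever `n % 2 == 0`.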
[ null, "http://macphilly.com/media/is-14-an-even-number-6.jpg", null, "http://macphilly.com/media/is-14-an-even-number-4.jpg", null, "http://macphilly.com/media/is-14-an-even-number.png", null, "http://macphilly.com/media/is-14-an-even-number-2.png", null, "http://macphilly.com/media/is-14-an-even-number-5.jpg", null, "http://macphilly.com/media/is-14-an-even-number-3.jpg", null, "https://1.gravatar.com/avatar/1cb1c39857f5eef49897f849251861a9", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9193964,"math_prob":0.99207664,"size":3196,"snap":"2020-34-2020-40","text_gpt3_token_len":894,"char_repetition_ratio":0.19235589,"word_repetition_ratio":0.2015873,"special_character_ratio":0.2963079,"punctuation_ratio":0.15725806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989466,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T00:26:56Z\",\"WARC-Record-ID\":\"<urn:uuid:6df3656c-b265-43ff-bc8f-be0535022b24>\",\"Content-Length\":\"18837\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0cfc5e4-5396-4bc0-a3df-2bdc4887d4c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:37135a44-3a94-4b93-a864-3ccf7cf3e5fe>\",\"WARC-IP-Address\":\"104.28.21.43\",\"WARC-Target-URI\":\"http://macphilly.com/page/is-14-an-even-number-39591933.html\",\"WARC-Payload-Digest\":\"sha1:QBG6XD2OVYW6HIBZNQU23ODSMQIEUB34\",\"WARC-Block-Digest\":\"sha1:SO4SLCYIGUOYAGBZQWIJ5SD364UBCNIF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130531.89_warc_CC-MAIN-20200930235415-20201001025415-00231.warc.gz\"}"}
https://www.eleccircuit.com/simple-voltage-regulator-using-2n3055/
[ "# Simple Voltage regulator using 2N3055\n\nYou want to use a DC regulator or learn about voltage regulators using 2N3055.  Why use this transistor? Normally, it can be used with loads that require a current not exceeding 2A and a voltage of not more than 30V.\n\nThis is enough for general work. It’s a transistor that people like to use for a long time. Therefore easy to find And very cheap. There is a lot of circuits using 2N3055.\n\nNow we recommend you 2 circuit diagram. Both circuits use a Zener diode and transistor.\n\n## 12V DC regulator circuit using 2N3055\n\nHere are 12V 1A linear regulator using a transistor and Zener diode. It is a series voltage regulator because the load current passes through the series transistor.\n\nAs the circuit diagram below, The input terminal wants an unregulated DC supply,15V to 20V. Then, the regulated voltage will come out to the load.\n\n12V 1A linear voltage regulator using 2n3055 transistor and Zener diode\n\nTo begin with, An electrical current flows through the resistor-R1 to limits current to the Zener diode. So it provides the reference voltage.\nIn the same, the base voltage of transistor-Q1 is also a constant.\n\nWhen ZD1 is 12V, the base voltage is also the 12 volts.\n\nIf we set the transistor in this form. The output voltage is the same as the Zener diode voltage. And we always call this that emitter follower. In practice, the voltage of output is lower than ZD1. Because when a transistor is working. It needs to has a base-emitter voltage.\n\n• VBE = Base-Emitter voltage\n• VZD = Zener diode voltage\n• Vout = Output voltage\n\nVout = VZD – VBE\nVBe = 0.6V\nVout = 12V – 0.6V = 11.4V\n\nThis voltage is still suitable for many loads using the 12V supply such as a receiver radios.\n\nSince it is a power supply that regulates a certain power output.\n\nIn the circuit, the transistor has a proper gain and changing of VBE help it.\n\n• When a load use more current. In general, the output voltage is low down. 
But the base-emitter voltage rises up, transistor Q1 works more. So it keeps the output voltage to be a constant level.\n• Then, if load use less current. The output voltage increase. But the output is still a fixed voltage. Because the voltage of Base-emitter less, transistor Q1 works less too.\n\nThe advantage of this circuit, we can use a tiny current to Zener diode and base of a transistor. Thus, it has a much more stabilized output.\n\nThe function of Others components\n\n• C1 is a smoothing capacitor at an input.\n• C2 keeps the reference voltage to be stable better.\n• C3 is a 0.047uF decoupler capacitor to filter out the transient noise.\n• R1 increases the stability of the load circuit\n• Do you know what is transient noise?\nThe power supply has a stray magnetic field. The circuit will induct them into the transient noise. The transistor-2N3055 can power load current up to 2A. But it is so hot. It so needs a proper heatsink.\n\n## Power-loss in a series regulator circuit\n\nGood power supply circuit design. It should reduce the loss of energy in the circuit to a minimum. Of course, energy will be expressed by heat.\n\nIn this series pass transistor regulator. The transistor-Q1 works look like a resistor. When we consider the power loss. It must dissipate or reduce it.\n\nDo you see an image? It is simple. Let me explain to you.\n\nLook at three cases below:\n\nIn these 3 examples, A, B, and C. The outputs are 15V, 12V and 5V. At 1A the current.\n\nDo you know which transistor has the most heat loss? Or…\nWhich transistor will heat up the most?\nYes, C example. Why?\nBecause the reason is simple.\n\nThe transistor of C is dropping the most voltage. 
It is effectively a dropper resistor and must dissipate heat according to ohms law.\n\nHere is show each case:\n\n• In the case of A:\nThe voltage across the transistor is 20V -15V = 5V.\nIt needs to dissipate wattage is 5V x 1A = 5W.\n• In the case of B:\nthe voltage across the transistor is 20V -12V = 7V.\nIt needs to dissipate wattage is 7V x 1A = 7W.\n\nBut…\n\n• In the case of C, the wattage is 15 watts — A much increase.\n\n### Short-circuited case\n\nIf a power supply is short-circuited. The whole input voltage will be dropped across the power transistor. And it will result in enormous heating problems.\n\nSo, for this reason, we should keep it cold with an effective heat sink.\n\n## 38V Power Supply Using 2N3055\n\nMy friend is learning about CNC, he wants a 38V Regulated Power Supply for servo motor. We have many ways to use it, but what is best for him. This circuit is one of the right choices. Because he has all the equipment. No need to buy a new one.\n\n### How this circuit works\n\nWe use the Simple Zener diode voltage regulator as main ideas, and two transistors to increase current to the load at 1A-2A.\n\nThis regulated power supply included a transformer-T1, a bridge-D1…D4, and the 38V DC filtering voltage regulator circuits, which consists of C1, C2, R1, R2, R3, Q1, and Q2.\n\nWhen 230VA or 120VAC (USA) is provided, step-down transformer-T1 changes power line AC to about 30VAC. The full-wave rectifier bridge, D1 through D4 to rectifies the AC into pulsating DC, then filtered by C1.\n\nThe capacitor C1, C3 acts as the storage capacitor or filters the noise and spikes off the AC. The 40V Zener Diode-ZD1 keeps the voltage constant across the base of BD139 NPN transistor-Q1 and Q2-2N3055 as the Darlington form, The electrolytic capacitor-C2 is used for the smoothed Zener voltage. 
This makes the 38V constant voltage and high power across resistor R3 and the (+) and (-) output terminals.\n\nWhen the output is connected with the low resistance load, the power transistor-Q2 will get very hot, so we always use a heat sink on it.\n\n#### Parts will you need\n\nSemiconductors:\n\n• D1-D1: 1N4002, 100V 1A Diodes\n• ZD1: 40V 1w Zener Diode\n• Q1: BD139, 80V 1.5A NPN Transistor\n• Q2: 2N3055 or TIP3055 100V, 15A, NPN transistor\n\nResistors (All 0.25 watt,5% metal/carbon film, Unless stated otherwise)\n\n• R1, R3: 3.9K\n• R2: 470 ohms\n\nElectrolytic Capacitors\n\n• C1: 470µF 50V\n• C2: 47µF 50V\n• C3: 100µF 50V\n\nT1: 230V or 120V AC primary to 30V,2A secondary transformer\n\nSW1: Power ON-OFF Switch\nF1: 0.5A Fuse\n\nNote:\nYou can use the Bridge Diode 2A-4A 200V to replace D1-D4. The transformer is used 2A min for the 1-2A load. This circuit has\n\nSharing is caring!\n\n## GET UPDATE VIA EMAIL\n\nI always try to make Electronics Learning Easy.\n\nJLCPCB - Only \\$2 for PCB Protytpe(Any Color)\n\nWith 600,000+ Customers Worldwide, 10,000+ PCB Orders Per Day\n\nUp to \\$20 shipping discount on first order now: https://jlcpcb.com/quote" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8553692,"math_prob":0.9565939,"size":6448,"snap":"2019-51-2020-05","text_gpt3_token_len":1767,"char_repetition_ratio":0.15782122,"word_repetition_ratio":0.0068906117,"special_character_ratio":0.26147643,"punctuation_ratio":0.12917595,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95923024,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T05:23:59Z\",\"WARC-Record-ID\":\"<urn:uuid:0832294a-bdbb-4965-80c6-184a60f928f0>\",\"Content-Length\":\"103797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03cf08a6-2835-42c1-8976-c023296f6412>\",\"WARC-Concurrent-To\":\"<urn:uuid:328053a1-4883-471b-91b3-0c7585226370>\",\"WARC-IP-Address\":\"104.27.154.229\",\"WARC-Target-URI\":\"https://www.eleccircuit.com/simple-voltage-regulator-using-2n3055/\",\"WARC-Payload-Digest\":\"sha1:4LTUMFE5X6DDF4NYQXRSBFVI5PHRHAXE\",\"WARC-Block-Digest\":\"sha1:DHS4JEVZSGGEAQKCPXI2QCZBBILHN3FA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540506459.47_warc_CC-MAIN-20191208044407-20191208072407-00214.warc.gz\"}"}
https://www.physicsforums.com/threads/converge-absolutely-or-conditionally-or-diverges.153883/
[ "# Converge absolutely or conditionally, or diverges?\n\n## Homework Statement\n\nDetermine if the series converges absolutely, converges conditionally, or diverges.\n\nequation is here: http://img409.imageshack.us/img409/7353/untitledly5.jpg [Broken]\n\n## Homework Equations\n\nmaybe alternating series, or harmonic series?\n\n## The Attempt at a Solution\n\nnot real familiar with tan with series.\nhaven't tried much, need supporting work for the answer.\nneed help.\n\nLast edited by a moderator:\n\nRelated Calculus and Beyond Homework Help News on Phys.org\nAs n-> infinity, tan(1/n) -> tan(0) -> 0\nDoes this help?\n\nquasar987\nHomework Helper\nGold Member\nActually, this says nothing at all about the series. The implication is one way only: \"Sum a_n converges ==> a_n-->0\" but \"a_n-->0 ==> nothing\".\n\nActually the series satisfies all the criteria corresponding to the convergence of an alternating series. Remains to see if it converges absolutely. I.e. does\n\n$$\\sum_{n=1}^{\\infty}\\tan(n^{-1})<\\infty$$\n\n??\n\nno it doesn't converge absolutely because it continues on to infinity.\n\nhowever, i do ask, how do you know to test it to be less than infinity? in other words, the convergence for a alternating series passes. but what other series convergence did not pass?\n\nso ultimately, this will converge conditionally.\n\nfor my work, i could prove this by showing the alternating series? and then showing that it also continues on to infinity?\n\nthanks again for all the help so far.\n\nquasar987\nHomework Helper\nGold Member\nWhat do you mean by \"continues on to infinity\" ?\n\nmjsd\nHomework Helper\nrcmango, i think you mean using the Leibniz test (for alternating series)\nthere are three conditions, check all to prove.\n\nHallsofIvy" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81222224,"math_prob":0.5688212,"size":433,"snap":"2020-10-2020-16","text_gpt3_token_len":109,"char_repetition_ratio":0.10955711,"word_repetition_ratio":0.0,"special_character_ratio":0.24711317,"punctuation_ratio":0.1923077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.951321,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T21:29:49Z\",\"WARC-Record-ID\":\"<urn:uuid:ba5d48a3-1abf-46e2-aeea-01eec5a658a8>\",\"Content-Length\":\"87764\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ecf7d6f-9555-4367-a8dc-70441afb915b>\",\"WARC-Concurrent-To\":\"<urn:uuid:51200360-5c66-48e1-996e-c65d290a8224>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/converge-absolutely-or-conditionally-or-diverges.153883/\",\"WARC-Payload-Digest\":\"sha1:3KADNMLG27UEPMZBFVZRJPW2LXRUEPP4\",\"WARC-Block-Digest\":\"sha1:PMIIX4XZAAGQOUO7ZSAWXNIILEP5CE53\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370506121.24_warc_CC-MAIN-20200401192839-20200401222839-00469.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-8-right-triangles-and-trigonometry-mid-chapter-quiz-page-515/23
[ "## Geometry: Common Core (15th Edition)\n\nThe angle with a tangent of $1$ would be an angle that measures $45^{\\circ}$. This is logical because the tangent ratio is $\\frac{opposite}{adjacent}$. In a $45^{\\circ}-45^{\\circ}-90^{\\circ}$ triangle, the two legs (which correspond to opposite and adjacent sides of each of the angles, not including the right angle) would be congruent, so the ratio of their lengths is $1$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92710125,"math_prob":0.9988152,"size":403,"snap":"2020-34-2020-40","text_gpt3_token_len":107,"char_repetition_ratio":0.13533835,"word_repetition_ratio":0.0,"special_character_ratio":0.2878412,"punctuation_ratio":0.077922076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996871,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T06:15:32Z\",\"WARC-Record-ID\":\"<urn:uuid:4ceefde9-330b-46e0-b43a-7eade581b8be>\",\"Content-Length\":\"83791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99e2f29f-f435-46b0-a207-74c5cdd822e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f17ab7c-df78-4c04-8a0a-2daa7851565a>\",\"WARC-IP-Address\":\"54.87.252.231\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-8-right-triangles-and-trigonometry-mid-chapter-quiz-page-515/23\",\"WARC-Payload-Digest\":\"sha1:OPIVBHISUJ6WNHBYEOV7JLOQ6ZCGTDPW\",\"WARC-Block-Digest\":\"sha1:DIGMTEQ6HI2IDFP62KWQOH4QRAUDHLI7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400265461.58_warc_CC-MAIN-20200927054550-20200927084550-00292.warc.gz\"}"}
https://forum.math.toronto.edu/index.php?PHPSESSID=2ts9icbcibl8sc9m99801fiik1&topic=2314.0;wap2
[ "APM346--2020S > Quiz 2\n\nQuiz2 TUT5101\n\n(1/1)\n\nJingjing Cui:\n$$2u_{t}+t^2u_{x}=0\\\\ \\frac{dt}{2}=\\frac{dx}{t^2}=\\frac{du}{0}\\\\ \\int\\frac{1}{2}t^2dt=\\int1dx\\\\ \\frac{1}{6}t^3+A=x\\\\ A=x-\\frac{1}{6}t^3\\\\$$\nBecause c=0, so\n$$u(t,x)=g(A)=g(x-\\frac{1}{6}t^3)$$\n\nThe initial condition given in the question: u(x,0)=f(x)\nThe characteristics curves ($A=x-\\frac{1}{6}t^3$) will always intersect t=0 (x-axis) at a unique point, no matter what value A takes. Thus, the solution always exist.\n\nVictor Ivrii:\nIn the tsecond/third lines should be $dt$, ..., not $\\partial t$,..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5581202,"math_prob":0.9999782,"size":606,"snap":"2023-40-2023-50","text_gpt3_token_len":251,"char_repetition_ratio":0.1345515,"word_repetition_ratio":0.0,"special_character_ratio":0.41914192,"punctuation_ratio":0.13013698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995804,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T12:52:13Z\",\"WARC-Record-ID\":\"<urn:uuid:490262dd-6607-4584-8162-70cdd64fa19f>\",\"Content-Length\":\"2907\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b724f1b1-3da9-40eb-9ab1-8f2f555d8d5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:0709f2b6-eb4e-4e75-b73d-1e04dc4f2fe9>\",\"WARC-IP-Address\":\"142.150.233.214\",\"WARC-Target-URI\":\"https://forum.math.toronto.edu/index.php?PHPSESSID=2ts9icbcibl8sc9m99801fiik1&topic=2314.0;wap2\",\"WARC-Payload-Digest\":\"sha1:3B2FUIWYW7K4L667UZ3U2L5MI53EN5SA\",\"WARC-Block-Digest\":\"sha1:4GTTMVUKKVQNIQO5SH4NMLKNKIT2VD7G\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00274.warc.gz\"}"}
https://support.sas.com/documentation/onlinedoc/or/ex_code/132/dcmpe08.html
[ "## Vehicle Routing Problem (dcmpe08)\n\n```\n/***************************************************************/\n/* */\n/* S A S S A M P L E L I B R A R Y */\n/* */\n/* NAME: dcmpe08 */\n/* TITLE: Vehicle Routing Problem (dcmpe08) */\n/* PRODUCT: OR */\n/* SYSTEM: ALL */\n/* KEYS: OR */\n/* PROCS: OPTMODEL, SGPLOT */\n/* DATA: */\n/* */\n/* SUPPORT: UPDATE: */\n/* REF: */\n/* MISC: Example 8 from the Decomposition Algorithm */\n/* chapter of Mathematical Programming. */\n/* */\n/***************************************************************/\n\n/* number of vehicles available */\n%let num_vehicles = 8;\n/* capacity of each vehicle */\n%let capacity = 3000;\n/* node, x coordinate, y coordinate, demand */\ndata vrpdata;\ninput node x y demand;\ndatalines;\n1 145 215 0\n2 151 264 1100\n3 159 261 700\n4 130 254 800\n5 128 252 1400\n6 163 247 2100\n7 146 246 400\n8 161 242 800\n9 142 239 100\n10 163 236 500\n11 148 232 600\n12 128 231 1200\n13 156 217 1300\n14 129 214 1300\n15 146 208 300\n16 164 208 900\n17 141 206 2100\n18 147 193 1000\n19 164 193 900\n20 129 189 2500\n21 155 185 1800\n22 139 182 700\n;\n\nproc optmodel;\n/* read the node location and demand data */\nset NODES;\nnum x {NODES};\nnum y {NODES};\nnum demand {NODES};\nnum capacity = &capacity;\nnum num_vehicles = &num_vehicles;\nread data vrpdata into NODES=[node] x y demand;\nset ARCS = {i in NODES, j in NODES: i ne j};\nset VEHICLES = 1..num_vehicles;\n\n/* define the depot as node 1 */\nnum depot = 1;\n\n/* define the arc cost as the rounded Euclidean distance */\nnum cost {<i,j> in ARCS} = round(sqrt((x[i]-x[j])^2 + (y[i]-y[j])^2));\n\n/* Flow[i,j,k] is the amount of demand carried on arc (i,j) by vehicle k */\nvar Flow {ARCS, VEHICLES} >= 0 <= capacity;\n/* UseNode[i,k] = 1, if and only if node i is serviced by vehicle k */\nvar UseNode {NODES, VEHICLES} binary;\n/* UseArc[i,j,k] = 1, if and only if arc (i,j) is traversed by vehicle k */\nvar UseArc {ARCS, VEHICLES} binary;\n\n/* minimize the 
total distance traversed */\nmin TotalCost = sum {<i,j> in ARCS, k in VEHICLES} cost[i,j] * UseArc[i,j,k];\n\n/* each non-depot node must be serviced by at least one vehicle */\ncon Assignment {i in NODES diff {depot}}:\nsum {k in VEHICLES} UseNode[i,k] >= 1;\n\n/* each vehicle must start at the depot node */\nfor{k in VEHICLES} fix UseNode[depot,k] = 1;\n\n/* some vehicle k traverses an arc that leaves node i\nif and only if UseNode[i,k] = 1 */\ncon LeaveNode {i in NODES, k in VEHICLES}:\nsum {<(i),j> in ARCS} UseArc[i,j,k] = UseNode[i,k];\n\n/* some vehicle k traverses an arc that enters node i\nif and only if UseNode[i,k] = 1 */\ncon EnterNode {i in NODES, k in VEHICLES}:\nsum {<j,(i)> in ARCS} UseArc[j,i,k] = UseNode[i,k];\n\n/* the amount of demand supplied by vehicle k to node i must equal demand\nif UseNode[i,k]=1; otherwise, it must equal 0 */\ncon FlowBalance {i in NODES diff {depot}, k in VEHICLES}:\nsum {<j,(i)> in ARCS} Flow[j,i,k] - sum {<(i),j> in ARCS} Flow[i,j,k]\n= demand[i] * UseNode[i,k];\n\n/* if UseArc[i,j,k] = 1, then the flow on arc (i,j) must be at most capacity\nif UseArc[i,j,k] = 0, then no flow is allowed on arc (i,j) */\ncon VehicleCapacity {<i,j> in ARCS, k in VEHICLES}:\nFlow[i,j,k] <= Flow[i,j,k].ub * UseArc[i,j,k];\n\n/* decomp by vehicle */\nfor {i in NODES, k in VEHICLES} do;\nLeaveNode[i,k].block = k;\nEnterNode[i,k].block = k;\nend;\nfor {i in NODES diff {depot}, k in VEHICLES} FlowBalance[i,k].block = k;\nfor {<i,j> in ARCS, k in VEHICLES} VehicleCapacity[i,j,k].block = k;\n\n/* solve using decomp (aggregate formulation) */\nsolve with MILP / varsel=ryanfoster decomp=(logfreq=20);\n\n/* create solution data set */\nstr color {k in VEHICLES} =\n['red' 'green' 'blue' 'black' 'orange' 'gray' 'maroon' 'purple'];\ncreate data node_data from [i] x y;\ncreate data edge_data from [i j k]=\n{<i,j> in ARCS, k in VEHICLES: UseArc[i,j,k].sol > 0.5}\nx1=x[i] y1=y[i] x2=x[j] y2=y[j] linecolor=color[k];\nquit;\n\ndata sganno(drop=i j);\nretain 
drawspace \"datavalue\" linethickness 1;\nset edge_data;\nfunction = 'line';\nrun;\n\nproc sgplot data=node_data sganno=sganno;\nscatter x=x y=y / datalabel=i;\nxaxis display=none;\nyaxis display=none;\nrun;\n\n```" ]
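For readers without SAS, the arc-cost definition in the OPTMODEL code is easy to mirror in Python; note that Python's `round` uses banker's rounding on exact halves, which could in principle differ from SAS's `round` on ties, though no tie occurs in this data:

```python
import math

def arc_cost(p, q):
    # Rounded Euclidean distance, mirroring the OPTMODEL line:
    # num cost {<i,j> in ARCS} = round(sqrt((x[i]-x[j])^2 + (y[i]-y[j])^2));
    return round(math.hypot(p[0] - q[0], p[1] - q[1]))

depot = (145, 215)   # node 1 in the data set above
node2 = (151, 264)
print(arc_cost(depot, node2))    # 49
print(arc_cost((0, 0), (3, 4)))  # 5
```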
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.637938,"math_prob":0.9132806,"size":4032,"snap":"2020-24-2020-29","text_gpt3_token_len":1368,"char_repetition_ratio":0.17204568,"word_repetition_ratio":0.07594936,"special_character_ratio":0.42237103,"punctuation_ratio":0.1740113,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978336,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T15:29:01Z\",\"WARC-Record-ID\":\"<urn:uuid:a1d04b37-7ddc-4c5e-9d29-6b188e468e27>\",\"Content-Length\":\"15040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3c4ee90-cc85-408a-b9d2-2cce73ea8222>\",\"WARC-Concurrent-To\":\"<urn:uuid:df9bba08-4a84-492f-9e17-2d61c9f38e2c>\",\"WARC-IP-Address\":\"149.173.160.38\",\"WARC-Target-URI\":\"https://support.sas.com/documentation/onlinedoc/or/ex_code/132/dcmpe08.html\",\"WARC-Payload-Digest\":\"sha1:WNR2Z2VRCFHRQ64YAWNAAPXZAX7D3BO2\",\"WARC-Block-Digest\":\"sha1:FR75FSKCWC2FHODVI7NEXO3BBLHJ5B7V\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347399820.9_warc_CC-MAIN-20200528135528-20200528165528-00010.warc.gz\"}"}
http://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.index.html
[ "# pandas.DataFrame.index#\n\nDataFrame.index#\n\nThe index (row labels) of the DataFrame.\n\nThe index of a DataFrame is a series of labels that identify each row. The labels can be integers, strings, or any other hashable type. The index is used for label-based access and alignment, and can be accessed or modified using this attribute.\n\nReturns:\npandas.Index\n\nThe index labels of the DataFrame.\n\n`DataFrame.columns`\n\nThe column labels of the DataFrame.\n\n`DataFrame.to_numpy`\n\nConvert the DataFrame to a NumPy array.\n\nExamples\n\n```>>> df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],\n... 'Age': [25, 30, 35],\n... 'Location': ['Seattle', 'New York', 'Kona']},\n... index=([10, 20, 30]))\n>>> df.index\nIndex([10, 20, 30], dtype='int64')\n```\n\nIn this example, we create a DataFrame with 3 rows and 3 columns, including Name, Age, and Location information. We set the index labels to be the integers 10, 20, and 30. We then access the index attribute of the DataFrame, which returns an Index object containing the index labels.\n\n```>>> df.index = [100, 200, 300]\n>>> df\nName Age Location\n100 Alice 25 Seattle\n200 Bob 30 New York\n300 Aritra 35 Kona\n```\n\nIn this example, we modify the index labels of the DataFrame by assigning a new list of labels to the index attribute. The DataFrame is then updated with the new labels, and the output shows the modified DataFrame." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57682157,"math_prob":0.7991934,"size":1344,"snap":"2023-40-2023-50","text_gpt3_token_len":338,"char_repetition_ratio":0.20223881,"word_repetition_ratio":0.0,"special_character_ratio":0.296875,"punctuation_ratio":0.21754386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9855325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T10:44:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e20867ce-0506-46dd-abec-60b6d233f169>\",\"Content-Length\":\"50335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f92630bd-4b98-4f76-8a5c-ad0bb531d5ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:39c5ecef-5782-4b2e-b42a-180bef9adbbe>\",\"WARC-IP-Address\":\"104.26.1.204\",\"WARC-Target-URI\":\"http://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.index.html\",\"WARC-Payload-Digest\":\"sha1:QW6HXXDGRU7X65NJCNMGKLMA3VZKHFJR\",\"WARC-Block-Digest\":\"sha1:5OZW47BHTH7AIDXMW7IYH34WUJNAGKFJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506480.7_warc_CC-MAIN-20230923094750-20230923124750-00562.warc.gz\"}"}
http://www.aei.com.mm/shop/thickness-distance-measurement/extech-dt200-laser-distance-meter/
[ "", null, "# Extech DT200: Laser Distance Meter\n\nPrice Each\n\nQuote Only\n\n(exc. GST)\n\n## Laser measurements up to 115ft (35m)\n\nThe DT200 features a laser measurement accurate to 0.08 in. at 32 ft. Measures from 2 in. to 115 ft. (0.05 to 35 m). Historical Storage recalls the previous 10 records (measurements or calculations). Automatically calculates Area and Volume. Indirect measurement using Pythagorean Theorem. Continuous measurement function with Min/Max distance tracking updates every 5 seconds. Addition/Subtraction, Front or rear edge reference. Low battery indicator, Auto power off. Complete with carrying case and 2 AAA batteries.\n\n??\n\n• Measures from 2″ to 115′ (0.05 to 35m)\n• Laser measurement accurate to 0.08 inches at 32 feet\n• Historical Storage recalls the previous 10 records (measurements or calculated results)\n• Automatically calculates Area and Volume\n• Indirect measurement using Pythagorean theorem\n• Continuous measurement function with Min/Max distance tracking updates every 5 seconds\n• Addition/Subtraction, Front or rear edge reference\n• Low battery indicator, Auto power off\n• Complete with carrying case and 2 AAA batteries\nSpecifications Range\nMeasurement Range 2″ to 115′ (0.05 to 35m)\nAccuracy (up to 32’/10m) ±0.08″ (±2mm)\nResolution 0.001″ (0.001m)" ]
[ null, "http://www.aei.com.mm/wp-content/uploads/2017/09/DT200-350x350.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.756834,"math_prob":0.87671244,"size":1624,"snap":"2021-04-2021-17","text_gpt3_token_len":450,"char_repetition_ratio":0.120987654,"word_repetition_ratio":0.14225942,"special_character_ratio":0.29987684,"punctuation_ratio":0.13220339,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95752084,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T03:04:24Z\",\"WARC-Record-ID\":\"<urn:uuid:0f24a6cb-7a0d-4df2-851e-a751c62ce4ce>\",\"Content-Length\":\"161903\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:496436f9-5c92-4083-8745-f83428798d0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:577f6738-0ec7-46db-9d35-603b06276708>\",\"WARC-IP-Address\":\"101.100.226.163\",\"WARC-Target-URI\":\"http://www.aei.com.mm/shop/thickness-distance-measurement/extech-dt200-laser-distance-meter/\",\"WARC-Payload-Digest\":\"sha1:6JCH66PU7C3IQFNQRMBSYICYAIPMERMO\",\"WARC-Block-Digest\":\"sha1:7WFEIRGJXE54LNTQBGDDTB6WIVSOWLH4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704835583.91_warc_CC-MAIN-20210128005448-20210128035448-00119.warc.gz\"}"}
https://ee-paper.com/design-of-signal-frequency-counting-function-based-on-at89s51-single-chip-microcomputer/
[ "The frequency counting of the input signal is done using the T0 and T1 timer/counter functions of the AT89S51 single-chip microcomputer. The counted frequency result is displayed on an 8-digit dynamic nixie tube display. The design is required to count signal frequencies of 0-250 kHz accurately, with a counting error of no more than ±1 Hz.\n\n1. Circuit schematic diagram", null, "Figure 4.31.1\n\n2. Hardware connection on the system board\n\n(1). Connect p0.0-p0.7 in the “single chip microcomputer system” area with the abcdefgh port in the “dynamic digital display” area with an 8-core flat cable.\n\n(2). Connect p2.0-p2.7 in the “single chip microcomputer system” area with the s1s2s3s4s5s6s7s8 port in the “dynamic digital display” area with an 8-core flat cable.\n\n(3). Connect the p3.4 (T0) terminal in the “single chip microcomputer system” area to the wave terminal in the “frequency generator” area with a wire.\n\n3. Program design content\n\n(1). The working mode settings of timer/counters T0 and T1 can be seen from the figure. T0 counts the input frequency signal when working in the counting state, but for T0 in the counting state the maximum count frequency is fosc/24. Since fosc = 12 MHz, the maximum count frequency of T0 is 500 kHz, which covers the required 0-250 kHz range. The concept of frequency is the number of pulses counted in one second. Therefore, T1 works in the timing state; every 1 second of timing it stops the counting of T0, reads the counted value from the counting unit of T0, performs data processing, and sends the result to the nixie tube for display.\n\n(2). T1 works in the timing state, and its maximum timing interval is about 65 ms, which cannot reach 1 s. Therefore, a 4 ms tick is used, repeated 250 times, to complete the 1 s timing function (the same Timer 1 interrupt also refreshes the display).\n\n4. 
C language source program\n\n#include <reg51.h> /* 8051 SFR definitions (TMOD, TH0, P0, ...) */\n\nunsigned char code dispbit[]={0xfe,0xfd,0xfb,0xf7,0xef,0xdf,0xbf,0x7f};\n\nunsigned char code dispcode[]={0x3f,0x06,0x5b,0x4f,0x66,\n\n0x6d,0x7d,0x07,0x7f,0x6f,0x00,0x40};\n\nunsigned char dispbuf[8]={0,0,0,0,0,0,10,10};\n\nunsigned char temp[8];\n\nunsigned char dispcount;\n\nunsigned char T0count;\n\nunsigned char timecount;\n\nbit flag;\n\nunsigned long x;\n\nvoid main(void)\n\n{\n\nunsigned char i;\n\nTMOD=0x15; /* T0: counter mode 1, T1: timer mode 1 */\n\nTH0=0;\n\nTL0=0;\n\nTH1=(65536-4000)/256;\n\nTL1=(65536-4000)%256;\n\nTR1=1;\n\nTR0=1;\n\nET0=1;\n\nET1=1;\n\nEA=1;\n\nwhile(1)\n\n{\n\nif(flag==1)\n\n{\n\nflag=0;\n\nx=(unsigned long)T0count*65536+(unsigned long)TH0*256+TL0; /* casts keep the count from overflowing 16-bit int math */\n\nfor(i=0;i<8;i++)\n\n{\n\ntemp[i]=0;\n\n}\n\ni=0;\n\nwhile(x/10)\n\n{\n\ntemp[i]=x%10;\n\nx=x/10;\n\ni++;\n\n}\n\ntemp[i]=x;\n\nfor(i=0;i<6;i++)\n\n{\n\ndispbuf[i]=temp[i];\n\n}\n\ntimecount=0;\n\nT0count=0;\n\nTH0=0;\n\nTL0=0;\n\nTR0=1;\n\n}\n\n}\n\n}\n\nvoid t0(void) interrupt 1 using 0\n\n{\n\nT0count++;\n\n}\n\nvoid t1(void) interrupt 3 using 0\n\n{\n\nTH1=(65536-4000)/256;\n\nTL1=(65536-4000)%256;\n\ntimecount++;\n\nif(timecount==250)\n\n{\n\nTR0=0;\n\ntimecount=0;\n\nflag=1;\n\n}\n\nP0=dispcode[dispbuf[dispcount]];\n\nP2=dispbit[dispcount];\n\ndispcount++;\n\nif(dispcount==8)\n\n{\n\ndispcount=0;\n\n}\n\n}" ]
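As an aside, the gate-time arithmetic the interrupt routines implement can be sketched in plain Python (an illustrative model of the math, not code for the MCU): the 16-bit T0 value plus a software overflow count gives the pulses accumulated during the 1 s gate, and the digit-splitting loop mirrors the temp[]/dispbuf[] logic.

```python
def frequency_hz(t0_overflows, th0, tl0, gate_s=1.0):
    """Counts accumulated during the gate time -> frequency in Hz.

    Mirrors x = T0count*65536 + TH0*256 + TL0 from the 8051 code,
    with the 1 s gate produced by 250 ticks of the 4 ms Timer 1."""
    counts = t0_overflows * 65536 + th0 * 256 + tl0
    return counts / gate_s

def display_digits(value, width=6):
    """Least-significant-digit-first split, like the temp[]/dispbuf[] loop."""
    digits = []
    for _ in range(width):
        digits.append(value % 10)
        value //= 10
    return digits

# The 250 kHz upper limit: 250000 = 3*65536 + 208*256 + 144
print(frequency_hz(3, 208, 144))   # 250000.0
print(display_digits(250000))      # [0, 0, 0, 0, 5, 2]
```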
[ null, "https://imgs.ee-paper.com/imgs/o4YBAF1KgNqAbovGAAGGrrnNboA066.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.648015,"math_prob":0.974148,"size":2790,"snap":"2021-43-2021-49","text_gpt3_token_len":963,"char_repetition_ratio":0.13137114,"word_repetition_ratio":0.07055961,"special_character_ratio":0.33584228,"punctuation_ratio":0.15033785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95938575,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T11:58:53Z\",\"WARC-Record-ID\":\"<urn:uuid:fc2d6fbc-b544-4396-93ea-fb8410671b0b>\",\"Content-Length\":\"40703\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a7c810c-082f-4f81-864f-f962e11e1374>\",\"WARC-Concurrent-To\":\"<urn:uuid:e48df640-15cd-4878-8377-8d82cda8d8a1>\",\"WARC-IP-Address\":\"172.67.208.193\",\"WARC-Target-URI\":\"https://ee-paper.com/design-of-signal-frequency-counting-function-based-on-at89s51-single-chip-microcomputer/\",\"WARC-Payload-Digest\":\"sha1:CCT2735HTGAFD24FIOGWPKWSCTKSC6HX\",\"WARC-Block-Digest\":\"sha1:QSRXF5EVNLC5BC7ZORM44BXWIY4V7IUQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585265.67_warc_CC-MAIN-20211019105138-20211019135138-00677.warc.gz\"}"}
http://gitlinux.net/2020-03-15-(985)-sum-of-even-numbers-after-queries/
[ "## Description\n\nWe have an array A of integers, and an array queries of queries.\n\nFor the i-th query val = queries[i][0], index = queries[i][1], we add val to A[index]. Then, the answer to the i-th query is the sum of the even values of A.\n\n(Here, the given index = queries[i][1] is a 0-based index, and each query permanently modifies the array A.)\n\nReturn the answer to all queries. Your answer array should have answer[i] as the answer to the i-th query.\n\nExample 1:\n\nInput: A = [1,2,3,4], queries = [[1,0],[-3,1],[-4,0],[2,3]]\nOutput: [8,6,2,4]\nExplanation:\nAt the beginning, the array is [1,2,3,4].\nAfter adding 1 to A[0], the array is [2,2,3,4], and the sum of even values is 2 + 2 + 4 = 8.\nAfter adding -3 to A[1], the array is [2,-1,3,4], and the sum of even values is 2 + 4 = 6.\nAfter adding -4 to A[0], the array is [-2,-1,3,4], and the sum of even values is -2 + 4 = 2.\nAfter adding 2 to A[3], the array is [-2,-1,3,6], and the sum of even values is -2 + 6 = 4.\n\n\nNote:\n\n1. 1 <= A.length <= 10000\n2. -10000 <= A[i] <= 10000\n3. 1 <= queries.length <= 10000\n4. -10000 <= queries[i][0] <= 10000\n5. 0 <= queries[i][1] < A.length\n\n## Solutions\n\nThe problem is straightforward: for each query, add the given value to the element at the given position, then report the sum of all even numbers in the array.\n\n### 1. Array\n\n# Time: O(n)\n# Space: O(1)\nclass Solution:\n    def sumEvenAfterQueries(self, A: List[int], queries: List[List[int]]) -> List[int]:\n        res = []\n        pre_sum = self.get_even_sum(A)\n        for val, idx in queries:\n            if A[idx] % 2 == 0 and val % 2 == 0:\n                pre_sum += val\n            elif A[idx] % 2 != 0 and val % 2 != 0:\n                pre_sum += val + A[idx]\n            elif A[idx] % 2 == 0 and val % 2 != 0:\n                pre_sum -= A[idx]\n            else:  # A[idx] % 2 != 0 and val % 2 == 0\n                pass\n            A[idx] += val\n            res.append(pre_sum)\n        return res\n\n    def get_even_sum(self, A):\n        if not A:\n            return 0\n        sum_v = 0\n        for a in A:\n            if a % 2 == 0:\n                sum_v += a\n        return sum_v\n\n# 58/58 cases passed (548 ms)\n# Your runtime beats 82.58 % of python3 submissions\n# Your memory usage beats 5.88 % of python3 submissions (18.8 MB)" ]
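A slightly more compact bookkeeping for the same idea (a hedged alternative, not the write-up's own code): remove A[idx] from the running even sum if it was counted, apply the update, then re-add the element if its new value is even. This collapses the four-way parity case analysis into two checks.

```python
from typing import List

def sum_even_after_queries(A: List[int], queries: List[List[int]]) -> List[int]:
    even_sum = sum(a for a in A if a % 2 == 0)
    res = []
    for val, idx in queries:
        if A[idx] % 2 == 0:        # drop the old value if it was counted
            even_sum -= A[idx]
        A[idx] += val
        if A[idx] % 2 == 0:        # re-add if the new value is even
            even_sum += A[idx]
        res.append(even_sum)
    return res

print(sum_even_after_queries([1, 2, 3, 4], [[1, 0], [-3, 1], [-4, 0], [2, 3]]))
# [8, 6, 2, 4]
```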
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62413365,"math_prob":0.99892235,"size":1882,"snap":"2023-14-2023-23","text_gpt3_token_len":706,"char_repetition_ratio":0.1315229,"word_repetition_ratio":0.14404432,"special_character_ratio":0.42986184,"punctuation_ratio":0.19409283,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998961,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T02:45:58Z\",\"WARC-Record-ID\":\"<urn:uuid:21a83b01-633e-4b1b-b938-0ef374bce4d3>\",\"Content-Length\":\"29550\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0386e7ab-3582-4975-8179-b12b06e0c2f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:290aad1f-8e0a-4aa3-8186-304eb8c1f9ca>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"http://gitlinux.net/2020-03-15-(985)-sum-of-even-numbers-after-queries/\",\"WARC-Payload-Digest\":\"sha1:XKZI5JREDSTJREQFBUNGQW7GAM3SWV5G\",\"WARC-Block-Digest\":\"sha1:EJ5WXMIYGKKWMC5UKGAYXED7LRT3TXAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949533.16_warc_CC-MAIN-20230331020535-20230331050535-00369.warc.gz\"}"}
https://courses.lumenlearning.com/atd-austincc-physics1/chapter/10-6-collisions-of-extended-bodies-in-two-dimensions/
[ "## Collisions of Extended Bodies in Two Dimensions\n\n### Learning Objectives\n\nBy the end of this section, you will be able to:\n\n• Observe collisions of extended bodies in two dimensions.\n• Examine collision at the point of percussion.\n\nBowling pins are sent flying and spinning when hit by a bowling ball—angular momentum as well as linear momentum and energy have been imparted to the pins. (See Figure 1). Many collisions involve angular momentum. Cars, for example, may spin and collide on ice or a wet surface. Baseball pitchers throw curves by putting spin on the baseball. A tennis player can put a lot of top spin on the tennis ball which causes it to dive down onto the court once it crosses the net. We now take a brief look at what happens when objects that can rotate collide.\n\nConsider the relatively simple collision shown in Figure 2, in which a disk strikes and adheres to an initially motionless stick nailed at one end to a frictionless surface. After the collision, the two rotate about the nail. There is an unbalanced external force on the system at the nail. This force exerts no torque because its lever arm r is zero. Angular momentum is therefore conserved in the collision. Kinetic energy is not conserved, because the collision is inelastic. It is possible that momentum is not conserved either because the force at the nail may have a component in the direction of the disk’s initial velocity. Let us examine a case of rotation in a collision in Example 1.", null, "Figure 1. The bowling ball causes the pins to fly, some of them spinning violently. (credit: Tinou Bao, Flickr)", null, "Figure 2. (a) A disk slides toward a motionless stick on a frictionless surface. (b) The disk hits the stick at one end and adheres to it, and they rotate together, pivoting around the nail. Angular momentum is conserved for this inelastic collision because the surface is frictionless and the unbalanced external force at the nail exerts no torque.\n\n### Example 1. 
Rotation in a Collision\n\nSuppose the disk in Figure 2 has a mass of 50.0 g and an initial velocity of 30.0 m/s when it strikes the stick that is 1.20 m long and 2.00 kg. (a) What is the angular velocity of the two after the collision? (b) What is the kinetic energy before and after the collision? (c) What is the total linear momentum before and after the collision?\n\n#### Strategy for (a)\n\nWe can answer the first question using conservation of angular momentum as noted. Because angular momentum is $L=I\\omega\\\\$, we can solve for angular velocity.\n\n#### Solution for (a)\n\nConservation of angular momentum states\n\n$L=L'\\\\$,\n\nwhere primed quantities stand for conditions after the collision and both momenta are calculated relative to the pivot point. The initial angular momentum of the system of stick-disk is that of the disk just before it strikes the stick. That is,\n\n$L=I\\omega\\\\$,\n\nwhere I is the moment of inertia of the disk and ω is its angular velocity around the pivot point. Now, $I={mr}^{2}\\\\$ (taking the disk to be approximately a point mass) and $\\omega =v/r\\\\$, so that\n\n$L={{mr}^{2}}\\frac{v}{r}={mvr}\\\\$.\n\nAfter the collision,\n\n$L' =I'\\omega'\\\\$.\n\nIt is ω′ that we wish to find. Conservation of angular momentum gives\n\n$I'\\omega'={mvr}\\\\$.\n\nRearranging the equation yields\n\n$\\omega' =\\frac{mvr}{I'}\\\\$,\n\nwhere I′ is the moment of inertia of the stick and disk stuck together, which is the sum of their individual moments of inertia about the nail. Figure 3 gives the formula for a rod rotating around one end to be $I={Mr}^{2}/3\\\\$. Thus,\n\n$I'={mr}^{2}+\\frac{{Mr}^{2}}{3}=\\left(m+\\frac{M}{3}\\right){r}^{2}\\\\$.", null, "Figure 3. 
Some rotational inertias.\n\nEntering known values in this equation yields,\n\n$I'=\\left(0.0500\\text{ kg}+0.667\\text{ kg}\\right){\\left(1.20\\text{ m}\\right)}^{2}=1.032\\text{ kg}\\cdot{\\text{m}}^{2}\\\\$.\n\nThe value of I′ is now entered into the expression for ω′, which yields\n\n$\\begin{array}{lll}\\omega'&=& \\frac{mvr}{I'}=\\frac{\\left(0.0500\\text{ kg}\\right)\\left(30.0\\text{ m/s}\\right)\\left(1.20\\text{ m}\\right)}{1.032\\text{ kg}\\cdot\\text{m}^{2}}\\\\ & =& 1.744\\text{ rad/s}\\approx 1.74\\text{ rad/s}\\end{array}\\\\$.\n\n#### Strategy for (b)\n\nThe kinetic energy before the collision is the incoming disk’s translational kinetic energy, and after the collision, it is the rotational kinetic energy of the two stuck together.\n\n#### Solution for (b)\n\nFirst, we calculate the translational kinetic energy by entering given values for the mass and speed of the incoming disk.\n\n$\\text{KE}=\\frac{1}{2}{mv}^{2}=\\left(0.500\\right)\\left(0.0500\\text{ kg}\\right){\\left(30.0\\text{ m/s}\\right)}^{2}=22.5 \\text{ J}\\\\$\n\nAfter the collision, the rotational kinetic energy can be found because we now know the final angular velocity and the final moment of inertia. Thus, entering the values into the rotational kinetic energy equation gives\n\n$\\begin{array}{lll}\\text{KE'}& =& \\frac{1}{2}I'{\\omega'^{2}}=\\left(0.5\\right)\\left(1.032\\text{kg}\\cdot\\text{m}^{2}\\right)\\left(1.744\\frac{\\text{rad}}{\\text{s}}\\right)^{2}\\\\ & =& 1.57\\text{ J}\\end{array}\\\\$.\n\n#### Strategy for (c)\n\nThe linear momentum before the collision is that of the disk. 
After the collision, it is the sum of the disk’s momentum and that of the center of mass of the stick.\n\n#### Solution for (c)\n\nBefore the collision, then, linear momentum is\n\n$p={mv}=\\left(0.0500\\text{ kg}\\right)\\left(30.0\\text{ m/s}\\right)=1.50\\text{ kg}\\cdot \\text{m/s}\\\\$.\n\nAfter the collision, the disk and the stick’s center of mass move in the same direction. The total linear momentum is that of the disk moving at a new velocity $v' =r\\omega'\\\\$ plus that of the stick’s center of mass, which moves at half this speed because ${v}_{\\text{CM}}=\\left(\\frac{r}{2}\\right)\\omega' =\\frac{v'}{2}\\\\$.\n\nThus,\n\n$p' ={mv}' +{{Mv}}_{\\text{CM}}={mv}' +\\frac{{Mv}'}{2}\\\\$.\n\nGathering similar terms in the equation yields,\n\n$p'=\\left(m+\\frac{M}{2}\\right)v'\\\\$\n\nso that\n\n$p' =\\left(m+\\frac{M}{2}\\right)r\\omega'\\\\$.\n\nSubstituting known values into the equation,\n\n$p' =\\left(1.050\\text{ kg}\\right)\\left(1.20\\text{ m}\\right)\\left(1.744\\text{ rad/s}\\right)=2.20\\text{ kg}\\cdot \\text{m/s}\\\\$.\n\n#### Discussion\n\nFirst note that the kinetic energy is less after the collision, as predicted, because the collision is inelastic. More surprising is that the momentum after the collision is actually greater than before the collision. This result can be understood if you consider how the nail affects the stick and vice versa. Apparently, the stick pushes backward on the nail when first struck by the disk. The nail’s reaction (consistent with Newton’s third law) is to push forward on the stick, imparting momentum to it in the same direction in which the disk was initially moving, thereby increasing the momentum of the system.\n\nThe above example has other implications. For example, what would happen if the disk hit very close to the nail? Obviously, a force would be exerted on the nail in the forward direction. 
So, when the stick is struck at the end farthest from the nail, a backward force is exerted on the nail, and when it is hit at the end nearest the nail, a forward force is exerted on the nail. Thus, striking it at a certain point in between produces no force on the nail. This intermediate point is known as the percussion point. An analogous situation occurs in tennis as seen in Figure 4. If you hit a ball with the end of your racquet, the handle is pulled away from your hand. If you hit a ball much farther down, for example, on the shaft of the racquet, the handle is pushed into your palm. And if you hit the ball at the racquet’s percussion point (what some people call the “sweet spot”), then little or no force is exerted on your hand, and there is less vibration, reducing chances of a tennis elbow. The same effect occurs for a baseball bat.", null, "Figure 4. A disk hitting a stick is compared to a tennis ball being hit by a racquet. (a) When the ball strikes the racquet near the end, a backward force is exerted on the hand. (b) When the racquet is struck much farther down, a forward force is exerted on the hand. (c) When the racquet is struck at the percussion point, no force is delivered to the hand.\n\n### Check Your Understanding\n\nIs rotational kinetic energy a vector? Justify your answer.\n\n#### Solution\n\nNo, energy is always scalar whether motion is involved or not. 
No form of energy has a direction in space and you can see that rotational kinetic energy does not depend on the direction of motion just as linear kinetic energy is independent of the direction of motion.\n\nSection Summary\n\n• Angular momentum L is analogous to linear momentum and is given by $L=I\\omega\\\\$ .\n• Angular momentum is changed by torque, following the relationship $\\text{net }\\tau =\\frac{\\Delta L}{\\Delta t}\\\\$.\n• Angular momentum is conserved if the net torque is zero $L=\\text{constant}\\left(\\text{net }\\tau =\\text{0}\\right)\\\\$ or $L=L′\\left(\\text{net }\\tau =0\\right)\\\\$. This equation is known as the law of conservation of angular momentum, which may be conserved in collisions.\n\n### Conceptual Questions\n\n1. Describe two different collisions—one in which angular momentum is conserved, and the other in which it is not. Which condition determines whether or not angular momentum is conserved in a collision?\n\n2. Suppose an ice hockey puck strikes a hockey stick that lies flat on the ice and is free to move in any direction. Which quantities are likely to be conserved: angular momentum, linear momentum, or kinetic energy (assuming the puck and stick are very resilient)?\n\n3. While driving his motorcycle at highway speed, a physics student notices that pulling back lightly on the right handlebar tips the cycle to the left and produces a left turn. Explain why this happens.\n\n### Problems & Exercises\n\n1. Repeat Example 1. Rotation in a Collision in which the disk strikes and adheres to the stick 0.100 m from the nail.\n\n2. Repeat Example 1. Rotation in a Collision in which the disk originally spins clockwise at 1000 rpm and has a radius of 1.50 cm.\n\n3. Twin skaters approach one another as shown in Figure 5 and lock hands. (a) Calculate their final angular velocity, given each had an initial speed of 2.50 m/s relative to the ice. 
Each has a mass of 70.0 kg, and each has a center of mass located 0.800 m from their locked hands. You may approximate their moments of inertia to be that of point masses at this radius. (b) Compare the initial kinetic energy and final kinetic energy.", null, "Figure 5. Twin skaters approach each other with identical speeds. Then, the skaters lock hands and spin.\n\n4. Suppose a 0.250-kg ball is thrown at 15.0 m/s to a motionless person standing on ice who catches it with an outstretched arm as shown in Figure 6.\n\n(a) Calculate the final linear velocity of the person, given his mass is 70.0 kg.\n\n(b) What is his angular velocity if each arm is 5.00 kg? You may treat the ball as a point mass and treat the person’s arms as uniform rods (each has a length of 0.900 m) and the rest of his body as a uniform cylinder of radius 0.180 m. Neglect the effect of the ball on his center of mass so that his center of mass remains in his geometrical center.\n\n(c) Compare the initial and final total kinetic energies.", null, "Figure 6. The figure shows the overhead view of a person standing motionless on ice about to catch a ball. Both arms are outstretched. After catching the ball, the skater recoils and rotates.\n\n5. Repeat Example 1. Rotation in a Collision in which the stick is free to have translational motion as well as rotational motion.\n\n### Selected Solutions to Problems & Answers\n\n1. (a) 0.156 rad/s (b) 1.17 × 10-2 J (c) 0.188 kg ⋅ m/s\n\n3. (a) 3.13 rad/s (b) Initial KE = 438 J, final KE = 438 J\n\n5. (a) 1.70 rad/s (b) Initial KE = 22.5 J, final KE = 2.04 J (c) 1.50 kg ⋅ m/s" ]
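The numbers in Example 1 can be checked with a few lines of Python (values and formulas taken directly from the worked solution above):

```python
m, v, r, M = 0.0500, 30.0, 1.20, 2.00   # disk mass, speed, stick length, stick mass

I_prime = m * r**2 + M * r**2 / 3       # disk (point mass) + rod about one end
omega_prime = m * v * r / I_prime       # conservation of angular momentum

KE_before = 0.5 * m * v**2
KE_after = 0.5 * I_prime * omega_prime**2

p_before = m * v
p_after = (m + M / 2) * r * omega_prime  # disk at r*w' plus stick CM at r*w'/2

print(round(I_prime, 3), round(omega_prime, 3))   # 1.032 1.744
print(round(KE_before, 1), round(KE_after, 2))    # 22.5 1.57
print(round(p_before, 2), round(p_after, 2))      # 1.5 2.2
```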
[ null, "https://s3-us-west-2.amazonaws.com/courses-images-archive-read-only/wp-content/uploads/sites/222/2014/12/20103601/Figure_11_06_01a.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images-archive-read-only/wp-content/uploads/sites/222/2014/12/20103602/Figure_11_06_02a.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images/wp-content/uploads/sites/648/2016/11/04025642/Figure_11_03_06-1.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images-archive-read-only/wp-content/uploads/sites/222/2014/12/20103612/Figure_11_06_03.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images-archive-read-only/wp-content/uploads/sites/1322/2015/12/03210407/Figure_11_06_04a.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images-archive-read-only/wp-content/uploads/sites/1322/2015/12/03210408/Figure_11_06_05a.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.877886,"math_prob":0.98839325,"size":10505,"snap":"2022-40-2023-06","text_gpt3_token_len":2752,"char_repetition_ratio":0.1597943,"word_repetition_ratio":0.026829269,"special_character_ratio":0.26539743,"punctuation_ratio":0.098794065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99800855,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,7,null,7,null,null,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T15:13:45Z\",\"WARC-Record-ID\":\"<urn:uuid:ea9d8e74-f1cc-463a-a4e8-e56dfd8047a8>\",\"Content-Length\":\"45569\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76d5a15c-15ad-4379-b26e-b5970c518484>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d93ca5e-06c0-4aea-92ab-35a3684af278>\",\"WARC-IP-Address\":\"23.185.0.1\",\"WARC-Target-URI\":\"https://courses.lumenlearning.com/atd-austincc-physics1/chapter/10-6-collisions-of-extended-bodies-in-two-dimensions/\",\"WARC-Payload-Digest\":\"sha1:X2BTXXE6FPR65O4ZHNJVRIPDHSWVTRW3\",\"WARC-Block-Digest\":\"sha1:HELDDF35KS4H4QNB6CCCBVTQ2ED443JC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338213.55_warc_CC-MAIN-20221007143842-20221007173842-00602.warc.gz\"}"}
https://www.hackmath.net/en/calculator/linear-regression
[ "# Linear regression calculator\n\nThis linear regression calculator uses the least squares method to find the line of best fit for a set of paired data. The line of best fit is described by the equation f(x) = Ax + B, where A is the slope of the line and B is the y-axis intercept.\nAll you need to do is enter the paired data into the text box, one pair of x and y values per line (row).\n\nThe calculator also computes the Pearson product-moment correlation coefficient (PPMCC, PCC, or R). The Pearson correlation coefficient is used to measure the strength of a linear association between two variables, where the value R = 1 means a perfect positive correlation and the value R = -1 means a perfect negative correlation." ]
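For reference, the least-squares fit and Pearson R described above reduce to a few closed-form sums; here is a minimal Python sketch (the standard formulas, not the calculator's actual implementation):

```python
from math import sqrt

def fit_line(pairs):
    """Least-squares f(x) = A*x + B plus Pearson r for (x, y) pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    syy = sum(y * y for _, y in pairs)
    sxy = sum(x * y for x, y in pairs)
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    B = (sy - A * sx) / n
    r = (n * sxy - sx * sy) / sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    return A, B, r

# Perfectly linear data, y = 2x + 1, so r = 1:
print(fit_line([(0, 1), (1, 3), (2, 5), (3, 7)]))  # (2.0, 1.0, 1.0)
```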
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8897447,"math_prob":0.9951277,"size":2461,"snap":"2023-14-2023-23","text_gpt3_token_len":687,"char_repetition_ratio":0.1029711,"word_repetition_ratio":0.0,"special_character_ratio":0.3043478,"punctuation_ratio":0.14901257,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99985147,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T22:29:10Z\",\"WARC-Record-ID\":\"<urn:uuid:141b1be6-e400-43fd-8b46-51d5a1eaf02f>\",\"Content-Length\":\"11712\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bf2026f-9d81-48bb-9bb1-db4beb527271>\",\"WARC-Concurrent-To\":\"<urn:uuid:ebdbc59d-1fdd-4bd1-bddc-616ca42acabf>\",\"WARC-IP-Address\":\"172.67.134.123\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/calculator/linear-regression\",\"WARC-Payload-Digest\":\"sha1:RUEJDQFHTJEAULA2H73GEGQU4RAGA6AO\",\"WARC-Block-Digest\":\"sha1:GZ7JWZK6QYOXIA53UERX4JHVKKNYIOQY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948708.2_warc_CC-MAIN-20230327220742-20230328010742-00146.warc.gz\"}"}
https://math.stackexchange.com/questions/300803/what-theorems-examples-will-make-me-really-understand-representation-theory
[ "# What theorems/examples will make me really understand representation theory?\n\nOkay, so I've been through some basic results on representation theory. I've gone over the proof of Burnside's $pq$ theorem using characters. I've also read though the basics of Lie groups and algebras. However, I still haven't come across a theorem or any examples which set off an \"Aha!\" moment in which I understand what representations really are and when their use would be appropriate.\n\nFor example, when studying group actions, in my opinion the orbit-stabilizer theorem gives me a good idea of what's going on when we study actions on finite groups - I haven't found any such analogue in representation theory.\n\nI suppose part of the problem is that representations are useful in so many distinct ways (finite groups, Lie groups, harmonic analysis, combinatorics, etc.) that I have a hard time synthesizing a coherent picture. Anyone have any good recommendations for books/topic/theorem/examples?\n\nLet $G$ be a finite solvable group and let $K/L$ be a chief factor of $G$ (this means that $K$ is a normal subgroup of $G$ and $L$ is a subgroup of $K$ which is maximal among proper subgroups of $K$ which are normal in $G$). Since this corresponds to $K$ being a minimal normal subgroup of the solvable group $G/L$ we see that $K/L$ is elementary abelian, so it is isomorphic to $(\\mathbb{F}_p)^n$ for some prime $p$ and some natural number $n$, so it is a vector space over the finite field $\\mathbb{F}_p$.\nNow $G$ acts on $K/L$ via conjugation, and due to the maximality of $L$, this gives us an irreducible representation of $G$ on the vector space $(\\mathbb{F}_p)^n$.\nOne question is then how this relates to those representations one is usually introduced to, which are over the complex numbers. 
But it turns out that since $G$ is solvable, we can from the existence of the above representation deduce that there is an irreducible complex character of $G$ of degree at most $n$ and such that a $p$-regular element of $G$ (ie, an element of $G$ whose order is not divisible by $p$) is in the kernel of this irreducible character iff it acts trivially on $K/L$. Further, if the representation from above is faithful (which means that the only $g\\in G$ that acts trivially on $K/L$ is the neutral element), then there exists a faithful irreducible complex character of $G$ of degree at most $n$." ]
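A concrete instance of this construction can even be checked by hand or in Python (the choice of example is mine, not the answer's): take $G=S_4$, $K=V_4$ the Klein four-group, $L=1$. Then $K/L\cong(\mathbb{F}_2)^2$, conjugation gives a 2-dimensional representation of $S_4$ over $\mathbb{F}_2$, and it is irreducible because a 3-cycle permutes the three nonzero vectors cyclically, so no line is invariant.

```python
# Permutations of {0,1,2,3} stored as tuples: p[i] is the image of i.
def compose(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate(g, k):               # g k g^(-1)
    return compose(g, compose(k, inverse(g)))

# The Klein four-group V4 inside S4, identified with the vector space F_2^2.
e  = (0, 1, 2, 3)
a  = (1, 0, 3, 2)                  # (12)(34)  <->  (1,0)
b  = (2, 3, 0, 1)                  # (13)(24)  <->  (0,1)
ab = (3, 2, 1, 0)                  # (14)(23)  <->  (1,1)
to_vec = {e: (0, 0), a: (1, 0), b: (0, 1), ab: (1, 1)}
from_vec = {v: k for k, v in to_vec.items()}

def action(g, vec):                # the induced linear map on F_2^2
    return to_vec[conjugate(g, from_vec[vec])]

# A 3-cycle permutes the three nonzero vectors cyclically, so no nonzero
# vector is fixed and no 1-dimensional invariant subspace can exist:
g = (1, 2, 0, 3)                   # the 3-cycle 0 -> 1 -> 2 -> 0
orbit = {(1, 0)}
for _ in range(3):
    orbit |= {action(g, v) for v in set(orbit)}
print(orbit == {(1, 0), (0, 1), (1, 1)})   # True, hence irreducible over F_2
```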
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95756906,"math_prob":0.99755937,"size":903,"snap":"2021-31-2021-39","text_gpt3_token_len":186,"char_repetition_ratio":0.12458287,"word_repetition_ratio":0.0,"special_character_ratio":0.19712071,"punctuation_ratio":0.103658535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99977344,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T15:01:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b0c5c9e6-e829-4bbd-a7a4-9da82c846179>\",\"Content-Length\":\"163309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27d6892f-74a7-4afb-82a9-4ac677f1ff8a>\",\"WARC-Concurrent-To\":\"<urn:uuid:3315376d-e9d8-49a3-9fcd-eca4b0528a4f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/300803/what-theorems-examples-will-make-me-really-understand-representation-theory\",\"WARC-Payload-Digest\":\"sha1:BYWQUUCXAPDGPXCU445MKPZBSRORUCN4\",\"WARC-Block-Digest\":\"sha1:N5MDAEHZ2T4OCHXE7R2L5CBI7LIWYVEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150266.65_warc_CC-MAIN-20210724125655-20210724155655-00208.warc.gz\"}"}
https://www.sportsbookreview.com/picks/tools/sports-betting-functions-for-excel/
[ "# Sports Betting Functions for Excel\n\n• SBRVer(): Displays current template version number.\n• US2Dec(USOdds) (usage 1): Converts US-style to decimal. Example: US2Dec(-110) ≈ 1.909090909\n• US2Dec(range of USOdds) (usage 2): Converts an array or Excel range of US-style odds to decimal parlay odds. Example: US2Dec(-110,-110) ≈ 3.644628099\n• US2Par(range of USOdds): Converts an array or Excel range of US-style odds to US-style true parlay odds. Example: US2Par(-110,-110,-110) ≈ +595.7926.\n• Dec2US(DecimalOdds): Converts decimal odds to US. Example: Dec2US(1.909090909) = -110.\n• US2Win(USOdds, WagerQuantity {default = 1}) or Dec2Win(DecOdds, WagerQuantity {default = 1}): Converts US or Decimal odds and wager size to potential win quantity. Note that by using the default value of 1 for wager size, these two functions effectively convert from US/decimal odds to fractional odds. Example: US2Win(-120,120) = 100, or US2Win(-110) ≈ 0.90909.\n• US2Res(USOdds, WagerQuantity {default = 1}, Result) or Dec2Res(DecOdds, WagerQuantity {default = 1}, Result): Returns the net result of a wager given US or Decimal odds, wager size, and result (where \"WIN\", \"W\", or \"1\", corresponds to a win; \"LOSS\", \"L\", or -1 corresponds to a loss; and \"PUSH\", \"P\", or 0 corresponds to a push). Example: US2Res(-120,120,\"P\") = 0, or =US2Res(-110, 200,\"Win\") ≈ 181.82.\n• US2Prob(USOdds) or Dec2Prob(DecimalOdds): Converts from US or decimal odds to probability. Example: US2Prob(+100) = Dec2Prob(2.0000) = 50%.\n• US2Hold(range of US Odds) or Dec2Hold(range of Decimal Odds): Calculates theoretical hold based on an Excel range of US or decimal odds. Example: if cells A1 and A2 are both -110, US2Hold(A1:A2) = 4.54545%.\n• {US2Real(range of US Odds)} or {Dec2Real(range of Decimal Odds)}: (array function) Returns an array of zero-vig probabilities based on an Excel range of US or decimal odds. 
Example: if cells A1 and A2 are both -110, and if B1, B2, and B3 were set to the array formula {=US2Real(A1:A2)}, B1 and B2 would both have the value of 50%, and B3 would have the value of the theoretical hold (4.54545%).\n• {US2Fair(range of US Odds)} or {Dec2Fair(range of Decimal Odds)}: (array function) Returns an array of fair value zero-vig odds based on an Excel range of US or decimal odds. Example: if cells A1 and A2 are -200 and +176, respectively, and if B1 and B2 were set to the array formula {=US2Fair(A1:A2)}, B1 and B2 would display the values of -184 and +184, respectively.\n• ProbUS2Edge(Probability, USOdds) or ProbDec2Edge(Probability, DecimalOdds): Calculates edge based on win probability and US or decimal odds. Example: ProbUS2Edge(55%,-110) = 5%.\n• EdgeUS2Prob(Edge, USOdds) or EdgeDec2Prob(Edge, DecimalOdds): Calculates win probability based on edge and US or decimal odds. Example: EdgeUS2Prob(5%,-110) = 55%.\n• ProbEdge2US(Probability, Edge) or ProbEdge2Dec(Probability, Edge): Calculates US or decimal odds based on probability and edge. Example: ProbEdge2US(55%,5%) = -110.\n• USRisk2Win(USOdds, RiskQuantity {default=1}) or DecRisk2Win(DecimalOdds, RiskQuantity {default=1}): Calculates resultant win quantities given US/Decimal odds and risk quantity. (These functions are also aliased as USR2W(·) and DecR2W(·), respectively). Example: USRisk2Win(-110,22) = USR2W(-110,22) = \\$20.\n• USWin2Risk(USOdds, WinQuantity {default=1}) or DecWin2Risk(DecimalOdds, WinQuantity {default=1}): Calculates required risk given US/Decimal odds and desired win amount. (These functions are also aliased as USW2R(·) and DecW2R(·), respectively). Example: USWin2Risk(-110,20) = USW2R(-110,20) = \\$22.\n• Exch2US(US Exchange Odds, Commission {default = 2%}) or Exch2Dec(Decimal Exchange Odds, Commission {default = 2%}): Calculate sportsbook equivalent US or decimal odds given US or decimal exchange odds and commission. 
Example: Exch2US(-110, 1%) would refer to the sportsbook equivalent odds of betting at -110 given 1% sportsbook commission (~-111.11).\n• E2S(US exchange odds, exchange commission {default = 2%}): Shortcut to Exch2US(US Exchange Odds, Commission).\n• ExchUS2Hold(range of US Odds, Commission) or ExchDec2Hold(range of Decimal Odds, Commission): Calculates theoretical hold including sports betting exchange commissions based on an Excel range of US or decimal odds. Example: if the values of cells A1 and A2 both equal -102 ExchUS2Holds(A1:A2,2%) would equal the theoretical hold theoretical on the -102/-012 market inclusive of 2% exchange commission (a value of 1.961%).\n• KUtil(bankroll, Kelly multiplier {default = 1}): Calculates Kelly criterion utility for a given bankroll (expressed in percent terms) and Kelly multiplier. Example: KUtil(1.05, 0.5) would yield half-Kelly utility for a bankroll of 105% of initial.\n• InvKUtil(utilily, Kelly multiplier {default = 1}): The inverse Kelly Utility function. Calculates the bankroll (expressed in percent terms) implied by a given Kelly criterion utility and Kelly multiplier. Example: InvKutil(KUtil(X, KellyMult),KellyMult) would just equal X (provided X > 0).\n• SBKelly(Probability, Odds, Kelly Multiplier {default = 1}, Decimal Odds Flag {default = FALSE}): Calculates single bet Kelly stake given an expected win probability, paypout odds, and optional Kelly Multiplier. If the \"Decimal Odds Flag\" isn't set or is set to FALSE, then the function will use a \"best guess\" as to whether the odds specified are US or decimal-style (if absolute value<100, it will assume decimal). Setting the flag to TRUE will cause the function to always assume decimal-style odds (this could be helpful when using decimal-style odds at very high payout levels).\n• ProbUS2Edge(Probability, USOdds) or ProbDec2Edge(Probability, DecimalOdds): Calculates edge based on win probability and US or decimal odds. 
• {P2L(range of win probabilities)}: (array function) Returns an array of likelihoods such that the ith element of the output array (i ∈ [0, 1, 2, ..., n], where n = the number of probabilities in the input range) corresponds to the likelihood of exactly n-i wins and i losses given the n event win probabilities in the input range.
Example: if cells (A1, A2, A3) = (75%, 70%, 65%), and cells B1, B2, B3, and B4 were set to the array formula {=P2L(A1:A3)}, cell B1 would correspond to the probability of 3 wins and 0 losses (~34.13%), cell B2 the probability of 2 wins and 1 loss (~44.38%), B3 the probability of 1 win and 2 losses (~18.88%), and B4 the probability of 0 wins and 3 losses (~2.63%). Note that this function may be rather slow to calculate for large input sets.
• {EnumCombin(range of items, size)}: (array function) Returns a 2-D array of every possible combination of the specified size of the input range. Example: if cells (A1, A2, A3, A4) = ("A", "B", "C", "D"), then {=ENUMCOMBIN(A1:A4, 2)} would return the 6-row, 2-column array of {("A","B"),("A","C"),("A","D"),("B","C"),("B","D"),("C","D")}. Note that this function may be rather slow to calculate for large input sets. EnumCombin is short for "Enumerate Combinations".
• lg(p): Calculates the logit function of probability p, where lg(p) is defined as ln(p) - ln(1-p) ∀ 0<p<1.
• invlg(x): Calculates the inverse logit function of x, where invlg(x) is defined as invlg(x) = Exp(x) / (1 + Exp(x)).
• MB2US(US Matchbook Exchange Odds, Commission {default=1%}) or MB2DEC(Decimal Matchbook Exchange Odds, Commission {default=1%}): Calculates sportsbook-equivalent US or decimal odds given US or decimal Matchbook exchange odds and commission. This references a commission structure where the player pays a set percentage of the lesser of risk or win irrespective of bet outcome. Example: MB2US(-110, 1%) would refer to the sportsbook-equivalent odds of betting at -110 given 1% Matchbook commission (~-112.12).
• {Bets2Stats(range of Odds, range of Wager Quantities {default=1}, range of Outcomes, range of Edges {default=0%}, Decimal Odds Flag {default=FALSE})}: Array function.
Takes a range of betting odds (US odds are the default, but decimal odds are accepted if the Decimal Odds Flag argument is set to TRUE), an optional range of wager quantities (if not provided, then 1 unit per wager is assumed), a range of outcomes (1 or a string starting with 'W' for a win, -1 or a string starting with 'L' for a loss, anything else for a push/no action), and a range of expected edges (defaults to 0). Returns an array with the following values:
1. Number of Non-Pushed Bets
2. Number of Wins
3. Win %
4. Unit Return
5. % Return
6. Unit Standard Deviation
7. % Standard Deviation
8. Standard Score
9. p-value (from t-distribution)

If you want these functions to be available every time you start Excel, you'll need to save the Book.xlt template file in your Excel XLStart directory ("C:\Program Files\Microsoft Office\OFFICE11\XLSTART\" by default for Excel 2003 -- \Office12\XLSTART\ for Excel 2007, \Office10\XLSTART\ for Excel 2002, and \Office\XLSTART\ for Excel 2000 and 97). If the file already exists, you shouldn't overwrite it unless you know the preexisting file to be empty, but should instead choose a different name as in the next paragraph. Alternatively, if you know what you're doing, you could manually add the Excel VBA functions or module to your preexisting Book.xlt file.

If you want these functions to be available only by request, then save the file under a different name in the XLStart directory. For example, if you saved the file as Ganchrow.xlt, then by clicking "New" on the "File" menu, you'd be able to select the template "Ganchrow" and have all the above functions available.
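The arithmetic behind the odds-conversion, hold, edge, and Kelly functions above can be sketched in a few lines. This is a minimal illustration written for this page — the Python names below are my own stand-ins, not the add-in's VBA functions:

```python
def us_to_prob(us):
    """Implied win probability of a US (moneyline) price."""
    return -us / (-us + 100) if us < 0 else 100 / (us + 100)

def us_to_dec(us):
    """Decimal-odds equivalent of a US price."""
    return 1 - 100 / us if us < 0 else 1 + us / 100

def prob_to_us(p):
    """US price implied by a win probability."""
    return -100 * p / (1 - p) if p > 0.5 else 100 * (1 - p) / p

def hold(us_prices):
    """Theoretical hold of a market (cf. US2Hold): 1 - 1/sum(implied probs)."""
    return 1 - 1 / sum(us_to_prob(u) for u in us_prices)

def fair_probs(us_prices):
    """Zero-vig win probabilities (cf. US2Real): normalize the implied probs."""
    probs = [us_to_prob(u) for u in us_prices]
    total = sum(probs)
    return [p / total for p in probs]

def edge(p, us):
    """Edge from win probability and US odds (cf. ProbUS2Edge)."""
    return p * us_to_dec(us) - 1

def kelly_fraction(p, us):
    """Full-Kelly stake, multiplier = 1 (cf. SBKelly): f = (b*p - q)/b."""
    b = us_to_dec(us) - 1          # net win per unit risked
    return (b * p - (1 - p)) / b
```

These reproduce the documented examples: hold([-110, -110]) is 1/22 ≈ 4.54545%, fair_probs([-110, -110]) gives 50%/50%, the fair prices on a -200/+176 market round to -184/+184, and edge(0.55, -110) is 5%.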
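The {P2L(·)} win/loss likelihood array can likewise be reproduced with a simple convolution over the event probabilities — again a sketch written for this page, not the add-in's own code:

```python
def p2l(probs):
    """Exact win-count distribution for independent events (cf. {P2L(...)}).

    Returns n+1 likelihoods ordered from 'all wins' down to 'all losses',
    matching the documented output ordering.
    """
    dist = [1.0]                       # dist[k] = P(k wins so far)
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)      # this event lost
            new[k + 1] += q * p        # this event won
        dist = new
    return dist[::-1]                  # index 0 = n wins, index n = 0 wins
```

For the documented (75%, 70%, 65%) example this yields approximately (34.13%, 44.38%, 18.88%, 2.63%).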
[ "", null, "# Antenna apparatus utilizing aperture of transmission line\n\nImported: 13 Feb '17 | Published: 18 Jan '11\n\nKanji Otsuka, Tamotsu Usami, Yutaka Akiyama, Chihiro Ueda\n\nUSPTO - Utility Patents\n\n## Abstract\n\nAn antenna apparatus utilizing an aperture of transmission line, which is connected to a first transmission line having a predetermined characteristic impedance, includes a tapered line portion, and an aperture portion. The tapered line portion is connected to one end of the transmission line, and the tapered line portion includes a second transmission line including a pair of line conductors. The tapered line portion keeps a predetermined characteristic impedance constant and expands at least one of a width of the transmission line and an interval in a tapered shape at a predetermined taper angle. The aperture portion has a radiation aperture connected to one end of the tapered line portion. A size of one side of the aperture end plane of the aperture portion is set to be equal to or higher than a quarter wavelength of the minimum operating frequency of the antenna apparatus.\n\n## Description\n\n### BACKGROUND OF THE INVENTION\n\n1. Field of the Invention\n\nThe present invention relates to an antenna apparatus utilizing an aperture of transmission line, and in particular, to an antenna apparatus which can be used in frequency bands such as bands of microwaves, quasi millimeter waves, millimeter waves, or the like.\n\n2. Description of the Related Art\n\nAn antenna has been always used in wireless communication systems such as portable telephones. The concept of the conventional antenna has such a structure for resonating at a specified frequency, and a typical dipole antenna resonates at a half of an operating wavelength thereof.\n\nIn the dipole antenna, electromagnetic waves having the TM (Transverse Magnetic) mode are generated concentrically around the pole. 
However, electromagnetic waves that have reached a distance several times the wavelength interfere with one another at their boundary portions, and the electromagnetic wave mode is transformed into the TEM (Transverse Electro-Magnetic) mode (radio waves of this kind are called transverse waves) and is radiated almost in the form of spherical waves. When the radius of curvature is increased, the electromagnetic waves become plane waves. The electromagnetic waves travel as group waves, where a large number of electromagnetic waves distributed in a transverse straight line (namely, distributed evenly over a plane perpendicular to the traveling direction) travel concurrently. The documents related to the present invention are as follows:

• Patent Document 1: Japanese patent laid-open publication No. JP 2005-244733 A;
• Non-Patent Document 1: Kanji Otsuka, et al., "Measurement Potential Swing by Electric Field on Package Transmission Lines", Proceedings of ICEP, pp. 490-495, April 2001;
• Non-Patent Document 2: Kanji Otsuka, et al., "Measurement Evidence of Mirror Potential Traveling on Transmission Lines", Technical Digest of 5th VLSI Packaging Workshop of Japan, pp. 27-28, December 2000; and
• Non-Patent Document 3: Kanji Otsuka, et al., "Stacked pair line", Journal of Japan Institute of Electronics Packaging (JIEP), Vol. 4, No. 7, pp. 556-561, November 2001.

Since the group waves fill up the space, they require not only frequency allocation under the Radio Law but also a sufficient protection circuit against resonant-mode noises leaking from the band, and the high-frequency circuit substantially becomes a circuit with a large overhead. Furthermore, group waves are heavily attenuated even in the air at high frequencies in the bands equal to or higher than the GHz band, to a degree exceeding the attenuation theorem at lower frequencies — where the energy weakens in inverse proportion to the square of the distance (because of expansion in a spherical shape) — becoming, by approximation, inversely proportional to the cube of the distance; this makes it difficult to perform long-distance communications.

### SUMMARY OF THE INVENTION

An object of the present invention is to solve the above-mentioned problems and provide an antenna apparatus that is connected to a transmission line, has a simple configuration and a directivity with almost no change in frequency characteristics, and is capable of performing communications even at a comparatively long distance.

In order to achieve the aforementioned objective, according to one aspect of the present invention, there is provided an antenna apparatus utilizing an aperture of transmission line, connected to a first transmission line having a predetermined characteristic impedance. The antenna apparatus includes a tapered line portion and an aperture portion. The tapered line portion is connected to one end of the transmission line and includes a second transmission line including a pair of line conductors. The tapered line portion keeps a predetermined characteristic impedance constant while expanding at least one of the width of the transmission line and the conductor interval in a tapered shape at a predetermined taper angle. The aperture portion has a radiation aperture connected to one end of the tapered line portion.
The size of one side of the aperture end plane of the aperture portion is set to be equal to or larger than a quarter wavelength at the minimum operating frequency of the antenna apparatus.

The antenna apparatus preferably further includes a first support member that short-circuits and supports the second transmission line including the pair of line conductors substantially in a center portion in the width direction of the transmission line of the aperture portion.

In addition, the antenna apparatus preferably further includes a pair of second support members that short-circuit and support the second transmission line including the pair of line conductors substantially at both ends in the width direction of the transmission line of the aperture portion.

In the above-mentioned antenna apparatus, the aperture portion is preferably constituted by expanding the width of the transmission line in a tapered shape.

In the above-mentioned antenna apparatus, the space located between the pair of line conductors of the first transmission line in the tapered line portion is preferably filled with a predetermined dielectric.

In the above-mentioned antenna apparatus, the space located between the pair of line conductors of the second transmission line in the aperture portion is preferably filled with a predetermined dielectric.

The above-mentioned antenna apparatus preferably further includes a first support member for supporting both end portions in the width direction of the transmission line of the first transmission line in the tapered line portion with interposition of a predetermined interval.

The above-mentioned antenna apparatus preferably further includes a second support member for supporting both end portions in the width direction of the transmission line of the second transmission line in the aperture portion with interposition of a predetermined interval.

In the above-mentioned antenna apparatus, the taper angle is preferably set to a predetermined value which is larger than zero degrees and equal to or smaller than 30 degrees.

In the above-mentioned antenna apparatus, the characteristic impedance is preferably set to a predetermined value within a range from 50 Ω to 100 Ω.

Accordingly, the antenna apparatus utilizing the aperture of transmission line according to the present invention is connected to the transmission line, has a configuration much simpler than that of the prior art, and has a narrow directivity with almost no change in frequency characteristics, thereby making it possible to achieve a large antenna gain and to perform communications even at a comparatively long distance.

### DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Preferred embodiments of the present invention will be described below with reference to the drawings. In each of the following preferred embodiments, like components are denoted by like reference numerals.

The present invention is derived from a fundamental principle quite different from that of an antenna that resonates at a specified frequency, such as a dipole antenna, and has a theoretically novel configuration as described in detail below.

FIG. 1 is a perspective view showing an appearance of the antenna apparatus utilizing the aperture of transmission line according to the first preferred embodiment of the present invention.
The antenna apparatus utilizing the aperture of transmission line of the first preferred embodiment is connected to a stacked pair line 1 including a pair of line conductors 1a and 1b having a predetermined characteristic impedance such as 50 Ω, where the line conductors 1a and 1b oppose each other.

The antenna apparatus of the present preferred embodiment is characterized by including the following:

(A) a tapered line portion 2, which is connected to one end of the stacked pair line 1 and includes a transmission line including a pair of planar line conductors 2a and 2b opposing each other, where both (this may be at least one) of the width of the transmission line and the interval between the line conductors 2a and 2b (referred to as the antenna interval hereinafter) are expanded in a tapered shape at predetermined taper angles θ and φ with a predetermined characteristic impedance kept constant; and

(B) an aperture portion 3 having a radiation aperture connected to one end of the tapered line portion 2, and including a pair of parallel planar line conductors 3a and 3b opposing each other,

(C) where the size of one side of the aperture end plane of the aperture portion 3 is set to be equal to or larger than the quarter wavelength at the minimum operating frequency.

The configuration and operation of the antenna apparatus utilizing the aperture of transmission line of the present preferred embodiment will be described below.

The electromagnetic waves that travel in the structure of the transmission line are confined in the structure in the traveling direction as group waves, and therefore enter the state of one line of electromagnetic waves limited by the structure. When only an electromagnetic wave of a certain frequency travels, the electromagnetic wave is also aligned in phase. If the electromagnetic wave is radiated to the space, a line of electromagnetic waves (whose frequency is lower than that of light) similar to a beam of laser light is formed. The present invention was derived from this conceptual origin. The frequency of the electromagnetic wave is, of course, lower than that of light, and it is impossible to keep a non-dispersed state unlike a beam of laser light; the electromagnetic wave finally becomes a TEM wave. However, the dispersion is suppressed at a short distance, and the square term of the distance can be ignored. Within this interval, only the ordinary influence of attenuation is in effect, and the electromagnetic wave reaches a far destination point with less weakened energy. If the distance is relatively short, the transmitting direction can be specified once the receiving location and the azimuth are determined, in a manner similar to that of a pencil of light, and the radio waves do not leak in azimuths other than those of transmitting and receiving. Therefore, not only can applications different from those under the conventional Radio Law be considered, but clean waves can also be obtained, since the spatial noise level (ground level) is not raised. So to speak, this results in a concept equivalent to providing conductor wiring of an electronic circuit in the air. Furthermore, only TEM waves exist at far locations and can be handled similarly to those of a conventional antenna, whereas this results in a configuration in which the waves can effectively reach far locations by the distance of the convergent part at near locations.

The above description has been made on the assumption that the electromagnetic waves in the electromagnetic state of the transmission line are emitted to the space, and this becomes possible by making the transmission line abruptly open into an aperture portion, in a manner similar to that of stub wiring.
This is the phenomenon conventionally known as stub noise radiation. The present invention provides an antenna that positively utilizes this phenomenon and takes its efficiency into consideration.

The structures shown in FIGS. 11 to 14 can be considered as the transmission line. Although transmission lines each including two pairs of lines are shown in the present cases, the fundamental structure may be a single pair of lines. These are, of course, well known in the art. A planar pair line (FIG. 13), a coplanar line (FIG. 14), a stacked pair line (FIG. 11), a split strip line (FIG. 12), and so on can be considered. It is common knowledge that these transmission lines have no frequency characteristic.

The principle of the present invention is simply shown below. First of all, assuming that the inductance per unit length of the line is L0 [H/m], the capacitance per unit length is C0 [F/m], the resistance per unit length is R0 [Ω/m], and the leakage conductance per unit length is G0 [S/m], the characteristic impedance Z0 [Ω] of a transmission line having a predetermined length is expressed by the following Equation (1):

$$Z_0 = \sqrt{\frac{R_0 + j\omega L_0}{G_0 + j\omega C_0}}. \qquad (1)$$

Assuming that the line is short enough that the resistance R0 and the leakage conductance G0 can be ignored, then R0 = G0 = 0, and Equation (1) reduces to the following Equation (2):

$$Z_0 = \sqrt{\frac{j\omega L_0}{j\omega C_0}} = \sqrt{\frac{L_0}{C_0}} = \sqrt{\frac{L}{C}}. \qquad (2)$$

The frequency dependence and length dependence are removed, and this is equivalent to the parameter of the total line length. That is, the defined characteristic impedance is identical whether the transmission line is short or extremely long. This is expressed metaphorically as the reciprocal of a conductance corresponding to the cross-sectional area of a water pipe. It can be expressed by the physical concept of the reciprocal of the conductance of every section of the transmission line. Therefore, it can be expressed by the dimensional structure of the section which the electromagnetic waves pass through. That is, it is expressed by the following Equation (3):

$$Z_0 = \sqrt{\frac{L_0}{C_0}} = \frac{1}{K}\sqrt{\frac{\mu_r \mu_0}{\varepsilon_r \varepsilon_0}}\left(\frac{d}{w}\right) = 377\,\frac{1}{K}\sqrt{\frac{\mu_r}{\varepsilon_r}}\left(\frac{d}{w}\right). \qquad (3)$$

For example, the structure and the parameters of a transmission line such as the stacked pair line are shown in FIG. 8. The fringe coefficient K shown in Table 1 below relates the total electromagnetic energy, including energies distributed outside the line, to the electromagnetic energy within the material (for example, air or a dielectric 1c having a specific inductive capacity εr and a relative magnetic permeability μr) bounded by the transmission-line width "w" and the interval (or aperture size) "d" interposed between the pair of line conductors 1a and 1b of FIG. 8.

TABLE 1: Fringe Coefficient K

| w/d   | εr = 1, μr = 1 | εr = 4.5, μr = 1 |
|-------|----------------|------------------|
| 0.100 | 14.33          | 9.30             |
| 0.125 | 12.08          | 7.90             |
| 0.200 | 8.51           | 5.68             |
| 0.250 | 7.25           | 4.86             |
| 0.500 | 4.25           | 3.14             |
| 1.000 | 2.98           | 2.17             |
| 2.500 | 1.92           | 1.50             |
| 5.000 | 1.52           | 1.27             |
| 10.00 | 1.29           | 1.14             |

If the distribution of the electromagnetic field of the pair line is described a little more, it becomes as shown in FIGS. 9 and 10. When there are positive charges in the top line conductor 1a as shown in FIG. 9, negative charges correspond to them in the bottom line conductor 1b. The electric lines of force are generated so as to connect the positive charges to the negative charges, and come to have an expansion that spatially minimizes the mutual interference. Although the electric lines of force expand into the infinite space because of the innumerable electric charges existing in the conductor, it is preferable to handle only the space that cannot be ignored by approximation.
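As a numeric illustration of Equation (3) and Table 1 — a sketch written for this description, not part of the patent; the function and constant names are mine:

```python
import math

# Fringe coefficient K from Table 1 for an air dielectric (eps_r = 1, mu_r = 1),
# keyed by the width-to-interval ratio w/d.
FRINGE_K_AIR = {0.100: 14.33, 0.125: 12.08, 0.200: 8.51, 0.250: 7.25,
                0.500: 4.25, 1.000: 2.98, 2.500: 1.92, 5.000: 1.52, 10.00: 1.29}

def char_impedance(w_over_d, fringe_k, eps_r=1.0, mu_r=1.0):
    """Equation (3): Z0 = 377 * (1/K) * sqrt(mu_r/eps_r) * (d/w), in ohms."""
    return 377.0 / fringe_k * math.sqrt(mu_r / eps_r) / w_over_d

# A square cross-section pair line in air (w/d = 1) comes out near 126.5 ohms,
# while w/d = 5 gives roughly 50 ohms (377 / 1.52 / 5 ~= 49.6), consistent with
# the 50-ohm stacked pair line of the first preferred embodiment.
z0_square = char_impedance(1.000, FRINGE_K_AIR[1.000])
z0_wide = char_impedance(5.000, FRINGE_K_AIR[5.000])
```

Note how widening the line (larger w/d) lowers Z0 both directly and through the smaller fringe coefficient.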
When the electric charges move in the depth direction of the sheet plane, magnetic lines of force are generated surrounding the line conductors 1a and 1b, perpendicularly intersecting the electric lines of force. Since the positive charges travel in the depth direction of the sheet plane, the magnetic fields are clockwise in the upper half and counterclockwise in the lower half. They help each other like mutually meshing gears at the center. These elements operate as negative mutual inductances M12 and M21 that cancel the self-inductances L1 and L2 of the conductors. In this case, the effective inductance L0 per unit length is expressed by the following Equation (4):

$$L_0 = L_1 + L_2 - M_{12} - M_{21} = \left(\frac{\mu_0 \mu_r}{K}\right)\frac{d}{w}. \qquad (4)$$

As the interval between the top and bottom line conductors 1a and 1b is narrowed, the mutual inductances M12 and M21 increase, and the effective inductance L0 per unit length decreases. On the other hand, as the top and bottom line conductors 1a and 1b become closer to each other, the electric lines of force are reduced in length and the coupling is intensified, consequently increasing the capacitance per unit length. That is, it is expressed by the following Equation (5):

$$C_0 = \varepsilon_0 \varepsilon_r K\,\frac{w}{d}. \qquad (5)$$

As a result, as the line conductors 1a and 1b come closer to each other, the characteristic impedance is reduced, as indicated by Equation (3).

FIG. 10 shows a distribution of the electromagnetic lines of force on the stacked pair line when seen in the traveling direction. The signal has the maximum amplitude so that the state of FIG. 9 is established when the right-end plane is regarded as the aperture portion, and the electromagnetic vector is perpendicular to the sheet plane as illustrated. The interval "d" of the aperture and the quarter wavelength (accurately, ¼ of the guide wavelength λg of the transmission line) of the signal frequency are illustrated with the same dimensions.
The present inventor and others discovered that the time corresponding to the vector change was the time of passing through the interval "d", and that electromagnetic radiation could be achieved efficiently with these dimensions. The present inventor further discovered that all that was required was (¼)λg ≤ d, and that frequencies having (¼)λg shorter than the interval "d" were all efficiently radiated. In other words, this is the discovery of a directional antenna having no frequency characteristic.

As is well known, in a transmission line the electromagnetic waves undergo energy reflection depending on the degree of change in the characteristic impedance. When electromagnetic waves travel from a port 1 (suffix 1) to a port 2 (suffix 2), the reflectance Γ is expressed by the following Equation (6):

$$\Gamma = \frac{Z_{02} - Z_{01}}{Z_{02} + Z_{01}}. \qquad (6)$$

If the transmission line has an open end, the impedance seen from the charges is infinite, and therefore the reflectance Γ = +1 in Equation (6), which results in total reflection with no radiation of electromagnetic waves into the air. Reflection with Γ = −1 occurs at a short-circuited end, and at an impedance-matched end the energy is totally consumed by the matching resistance and emitted as thermal energy, which results in a complete failure to produce the effect of an antenna. However, it is presumed that the time-space relaxation condition that satisfies the spatial radiation condition is established when (¼)λg ≤ d, as shown in FIG. 10. An open-end transmission line structure that maintains the condition (¼)λg ≤ d is the fundamental structure of the antenna apparatus utilizing the aperture of transmission line of the present invention.

The transmission line propagates a line of electromagnetic waves, which are aligned in phase in the case of a single frequency.
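The aperture condition (¼)λg ≤ d can be turned into a quick sizing rule. The sketch below (with illustrative numbers of my choosing, not taken from the patent) computes the smallest aperture interval for a given minimum operating frequency, using the guide wavelength λg = c0 / (f √(μr εr)) implied by Equation (7) further on:

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def min_aperture_interval(f_min_hz, eps_r=1.0, mu_r=1.0):
    """Smallest interval d satisfying d >= lambda_g / 4 at frequency f_min_hz."""
    guide_wavelength = C0 / (f_min_hz * (mu_r * eps_r) ** 0.5)
    return guide_wavelength / 4.0

# In air, a 10 GHz minimum operating frequency needs d of roughly 7.5 mm;
# filling the line with an eps_r = 4.5 dielectric shrinks that by sqrt(4.5).
d_air = min_aperture_interval(10e9)
d_dielectric = min_aperture_interval(10e9, eps_r=4.5)
```

Frequencies above the chosen minimum automatically satisfy the condition, since their quarter guide wavelengths are shorter still.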
Therefore, the radiated electromagnetic waves additionally have the advantageous effect that they travel as a line of electromagnetic waves that hardly disperses, in a manner similar to a beam of laser light. Since the transmission line has no frequency characteristic, it becomes possible to radiate even a composite wave like a pulse without changing its composition ratio. Accordingly, an antenna structure is proposed which needs none of the generally so-called high-frequency circuits, such as the oscillator circuit and the frequency converter circuit in the receiver circuit.

Although innumerable derivative structures can naturally be considered from the fundamental structure, another fundamental feature of the present invention is to concurrently serve as means having a structure in which the dimensions are adjusted so that the characteristic impedance Z0 of the transmission line in the circuit is uniform up to the aperture portion 3, in order to secure a line conductor interval "d" that satisfies (¼)λg ≤ d. In order to maintain the characteristic impedance Z0, the width "w" of the transmission line, which is a function of the interval "d" per Equation (3), is automatically determined. When using a method of keeping constant the sectional parameters — the width "w" of the transmission line, the interval "d", and the time "t" — by extension and contraction in similitude, a shape as shown in FIG. 1 can be obtained. In order to minimize the turbulence of the electromagnetic fields, the taper angles θ and φ (where θ is the taper angle in the width direction of the transmission line, and φ is the taper angle in the longitudinal direction of the transmission line) should preferably be larger than zero degrees and equal to or smaller than 30 degrees. Since extension and contraction in similitude can be achieved freely, both a gigantic antenna and a minute micro antenna are possible, applicable to all sorts of applications. FIG. 7 shows one example of a structure leading from a stacked pair line 1 of transmission-line width "w1", including a pair of line conductors 1a and 1b surrounded by a dielectric 10, to a stacked pair line (including a pair of line conductors 1c and 1d) of transmission-line width "w2" (> w1) in the air.

Another important point is that the electromagnetic waves traveling in the transmission line are in the TEM mode, as understood from FIGS. 7, 9, and 10, and it is then necessary to provide means for accurately keeping this state. One example is a method of embedding the whole transmission line configuration in a dielectric, of which FIG. 7 is a conceptual illustration. The electromagnetic wave velocity c is expressed by the following Equation (7):

$$c = \frac{c_0}{\sqrt{\mu_r \varepsilon_r}}. \qquad (7)$$

In Equation (7), μr is the relative magnetic permeability of the dielectric, and εr is the specific inductive capacity of the dielectric. If a portion of changed relative magnetic permeability and/or specific inductive capacity is formed in the transmission line, i.e., within the cross-sectional range of travel of the distribution of the effective electromagnetic lines of force in FIGS. 9 and 10, the electromagnetic lines of force in that portion are advanced or delayed, and this leads to collapse of the TEM mode. This is called the pseudo-TEM mode, in which the spatial radiation efficiency is degraded by this coefficient as a result of time dispersion. It is desirable to completely enclose the transmission line with an insulator, as shown in FIG. 7. For practical dimensional specifications, it is preferable to additionally expand the dielectric 10 by the width "w" of the transmission line on both sides in the plane and to expand the vertical dimension by the line length "d" in the profile.

FIG.
2 is a perspective view showing an appearance of an antenna apparatus utilizing an aperture of transmission line according to the second preferred embodiment of the present invention. Referring to FIG. 2, the antenna apparatus utilizing the aperture of transmission line of the second preferred embodiment is characterized in that support members 4a and 4b made of a metal or a dielectric for short-circuiting and supporting a pair of line conductors of the transmission line at both ends in the width direction of the transmission line of the aperture portion 3 are further provided as compared with that of the first preferred embodiment.

FIG. 3 is a perspective view showing an appearance of an antenna apparatus utilizing an aperture of transmission line according to the third preferred embodiment of the present invention. Referring to FIG. 3, the antenna apparatus utilizing the aperture of transmission line of the third preferred embodiment is characterized in that a support member 4c made of a metal or a dielectric for short-circuiting and supporting a pair of line conductors 3a and 3b of the transmission line substantially in a center portion in the width direction of the transmission line of the aperture portion 3 is further provided as compared with that of the first preferred embodiment.

FIG. 4 is a perspective view showing an appearance of an antenna apparatus utilizing an aperture of transmission line according to the fourth preferred embodiment of the present invention. The antenna apparatus utilizing the aperture of transmission line of the fourth preferred embodiment is characterized in that the pair of parallel planar line conductors 5a and 5b of the aperture portion 3 has the width of the transmission line expanded in a tapered shape as compared with that of the first preferred embodiment.

FIG.
5 is a perspective view showing an appearance of an antenna apparatus utilizing an aperture of transmission line according to the fifth preferred embodiment of the present invention. Referring to FIG. 5, the antenna apparatus utilizing the aperture of transmission line of the fifth preferred embodiment is characterized in that support members 4a and 4b made of a metal or a dielectric for short-circuiting and supporting a pair of line conductors 5a and 5b of the transmission line at both ends in the width direction of the transmission line of the aperture portion 3 are further provided as compared with that of the fourth preferred embodiment.

FIG. 6 is a perspective view showing an appearance of an antenna apparatus utilizing an aperture of transmission line according to the sixth preferred embodiment of the present invention. Referring to FIG. 6, the antenna apparatus utilizing the aperture of transmission line of the sixth preferred embodiment is characterized in that a support member 4c made of a metal or a dielectric for short-circuiting and supporting a pair of line conductors 5a and 5b of the transmission line substantially in a center portion in the width direction of the transmission line of the aperture portion 3 is further provided as compared with that of the fourth preferred embodiment.

Although the stacked pair line 1 is employed as an input line in each of the above-mentioned preferred embodiments, the present invention is not limited to this, and it is acceptable to connect another unbalanced type cable or transmission line, such as a coaxial cable, via an unbalanced connector.

Furthermore, in each of the above-mentioned preferred embodiments, it is acceptable to fill a space located between a pair of line conductors 3a and 3b or 5a and 5b of the aperture portion 3 with a predetermined dielectric that supports the line conductors.
Moreover, it is acceptable to further provide a support member 6 made of a dielectric for supporting both ends in the width direction of the transmission line of the tapered line portion 2 with interposition of a predetermined interval, as illustrated in FIG. 30. Furthermore, it is acceptable to further provide a support member made of a dielectric for supporting both ends in the width direction of the transmission line of the aperture portion 3 with interposition of a predetermined interval.

The taper angles θ and φ are preferably set to a predetermined value that is larger than zero degrees and equal to or smaller than 30 degrees. Moreover, the characteristic impedance Z0 of each of the stacked pair line 1, the tapered line portion 2 and the aperture portion 3 is preferably set to a predetermined value that is equal to or larger than 50Ω and equal to or smaller than 100Ω.

Although it is preferable to provide the setting of (¼)λg≦d in each of the above-mentioned preferred embodiments, similar actions and advantageous effects can be obtained even with the setting of (¼)λg≦w.

### IMPLEMENTAL EXAMPLES

Next, the simulations conducted by the present inventors and others, and their results, will be described below.

FIG. 15 is a perspective view showing an antenna aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 16 is a perspective view showing an electric field [V/m] of a port on the antenna aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. In addition, FIG. 17 is a graph showing a frequency characteristic (the frequency ranging from 0 to 10 GHz) of the reflection coefficient S11 [dB] of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. Further, FIG.
18 is a Smith chart showing an impedance characteristic at the input terminal of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment.

FIGS. 15 to 18 show the reflection and the impedance characteristic of the antenna apparatus utilizing the aperture of transmission line with an aperture plane of 1 m × 1 m, in which the characteristics toward the space when the aperture portion 3 is used as a port are shown. As is apparent from FIG. 16, it can be understood that there are TEM waves with uniform field intensities (at the waveguide port) throughout the entire aperture plane. The reflection energy (reflection coefficient S11 when indicated by the S parameter) is preferably smaller when directed toward the space. As is apparent from FIG. 17, (¼)λ=1000 mm, corresponding to a frequency of 75 MHz, results because w=d=1 m. At this frequency, the reflection coefficient S11=−23 dB, which is very small, and a level equal to or smaller than −30 dB is maintained in higher frequency bands. We have never seen any antenna having almost no frequency characteristic and emitting electromagnetic radiation as efficiently as described above. Moreover, as is apparent from the Smith chart of FIG. 18, although the aperture portion 3 has a characteristic impedance of 194Ω (o in FIG. 18), it can be understood that the characteristic impedance is 376Ω (● in FIG. 18) so as to be impedance-matched with the spatial electromagnetic impedance at 10 GHz due to the electromagnetic resonance (imaginary part) of reflection.

FIG. 19A is a graph showing a directional pattern at 1 GHz of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 19A shows the directional pattern having directivity gains in a range from 21.6 to −18.4 dBi, and the antenna apparatus has the maximum directivity gain of about 22 dBi.

FIG.
19B is a graph showing a directional pattern at 2.5 GHz of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 19B shows the directional pattern having directivity gains in a range from 29.5 to −10.5 dBi, and the antenna apparatus has the maximum directivity gain of about 30 dBi.

FIG. 19C is a graph showing a directional pattern at 5 GHz of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 19C shows the directional pattern having directivity gains in a range from 35.5 to −4.45 dBi, and the antenna apparatus has the maximum directivity gain of about 36 dBi.

FIG. 19D is a graph showing a directional pattern at 7.5 GHz of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 19D shows the directional pattern having directivity gains in a range from 39.0 to −0.977 dBi, and the antenna apparatus has the maximum directivity gain of about 39 dBi.

FIG. 19E is a graph showing a directional pattern at 10 GHz of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 19E shows the directional pattern having directivity gains in a range from 41.5 to 1.48 dBi, and the antenna apparatus has the maximum directivity gain of about 41 dBi.

As is apparent from FIGS. 19A to 19E, the gain is about 22 dBi even with the directivity at a comparatively low frequency of 1 GHz, and an antenna apparatus having such an excellent directivity has not conventionally been found.

FIGS.
20A, 20B and 20C are graphs showing spatial distributions of electromagnetic radiations at operating frequencies of 2, 5 and 10 GHz from the antenna apparatus utilizing the aperture of transmission line having a characteristic impedance of 50Ω used in the simulations of the present preferred embodiment, where the antenna apparatus has the aperture of d=70 mm, w=460 mm, and Z0=50Ω, with the (¼)λ frequency at 1.11 GHz, and the electric field strength is in a range up to about 158 V/m.

FIG. 21A is a graph showing an energy distribution of an electromagnetic radiation field at 2 GHz at a location apart by 200 mm from the aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 21A shows radiation distributions of electric field strengths in a range up to 49.8 V/m.

FIG. 21B is a graph showing an energy distribution of an electromagnetic radiation field at 5 GHz at a location apart by 200 mm from the aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 21B shows radiation distributions of electric field strengths in a range up to 77.7 V/m.

FIG. 21C is a graph showing an energy distribution of an electromagnetic radiation field at 10 GHz at a location apart by 200 mm from the aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 21C shows radiation distributions of electric field strengths in a range up to 87.6 V/m.

FIG. 21D is a graph showing an energy distribution of an electromagnetic radiation field at 10 GHz at a location apart by 400 mm from the aperture plane of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. FIG. 21D shows radiation distributions of electric field strengths in a range up to 69.3 V/m.

FIGS.
21A to 21D show the degree of energy concentration in the space of the antenna apparatus utilizing the aperture of transmission line of d=70 mm and w=460 mm. As is apparent from FIGS. 21A to 21D, although the frequency corresponding to (¼)λ is 1.11 GHz and the reflection is equal to or smaller than −20 dB at 2 GHz, a dispersion of twice or more has already occurred at a location 200 mm apart from the aperture plane. However, almost no dispersion occurs in the transverse direction. Although the dispersion is reduced when the frequency is raised, side lobes are observed on the upper and lower sides. Almost no dispersion occurs in the transverse direction in this case either, and it can be anticipated that an azimuth on a map can be sufficiently taken with respect to the ground surface of communications. The aperture areas and the directivity gains are brought together and shown in Table 2.

TABLE 2: Aperture Plane Dimensions and Directivity Gains [dBi]

| Aperture Sizes [mm] | 1 GHz | 5 GHz | 10 GHz |
|---|---|---|---|
| 100 × 100 | 3.967 | 18.42 | 23.58 |
| 300 × 300 | 11.15 | 26.32 | 32.13 |
| 500 × 500 | 16.11 | 30.40 | 36.23 |
| 1000 × 1000 | 21.72 | 35.91 | 41.81 |

It can be understood from the results in Table 2 that more excellent antenna characteristics can be obtained with respect to the directivity as the aperture plane becomes larger.

FIG. 22 is a graph showing an aperture area and a frequency characteristic of the reflection coefficient S11 on the aperture plane of the antenna apparatus utilizing the aperture of transmission line having a characteristic impedance Z0=100Ω used in the simulations of the present preferred embodiment. FIG. 22 shows the frequency characteristic of reflection when the aperture area is changed with the characteristic impedance Z0 of the antenna apparatus utilizing the aperture of transmission line maintained at 100Ω.

FIG.
23A is a waveform chart showing an incident received signal waveform of Gaussian pulses of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment, FIG. 23B is a waveform chart showing a received signal waveform of Gaussian pulses of the antenna apparatus with an antenna interval of 10 mm, FIG. 23C is a waveform chart showing a received signal waveform of Gaussian pulses of the antenna apparatus with an antenna interval of 30 mm, and FIG. 23D is a waveform chart showing a received signal waveform of Gaussian pulses of the antenna apparatus with an antenna interval of 60 mm. Namely, FIGS. 23A to 23D show the receiving characteristics for a Gaussian pulse when an antenna apparatus utilizing an aperture of transmission line of d=32 mm is used for transmitting and receiving. The Gaussian pulse receiving characteristics when a pair of transmission line aperture type antenna apparatuses of d=32 mm and w=80 mm are opposed to each other and used for transmitting and receiving are shown with the antenna interval changed to 10 mm, 30 mm, and 60 mm. Moreover, the frequency components of the Gaussian pulse were set to composite waves that flatly contain energies from 0.01 GHz to 20 GHz. In this case, waveforms receivable at an antenna interval of 60 mm are shown. However, as is apparent from FIG. 22, the reflection coefficient S11 becomes −20 dB at 6.5 GHz, and therefore, the frequency characteristic does not become flat, meaning that the transmitting characteristic is not so good.

FIG. 24A is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 830 picoseconds) of the antenna apparatus with an antenna interval of 10 mm utilizing the aperture of transmission line used in the simulations of the preferred embodiment with Gaussian pulses, FIG.
24B is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 830 picoseconds) of the antenna apparatus with an antenna interval of 30 mm, and FIG. 24C is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 830 picoseconds) of the antenna apparatus with an antenna interval of 60 mm. Namely, FIGS. 24A to 24C show transmitting waveforms indicated by the time-domain electric field strength in a range up to 200 V/m (for 830 picoseconds) of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the preferred embodiment with Gaussian pulses.

FIG. 25A is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 1050 picoseconds) of the antenna apparatus with an antenna interval of 10 mm utilizing the aperture of transmission line used in the simulations of the preferred embodiment with Gaussian pulses, FIG. 25B is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 1050 picoseconds) of the antenna apparatus with an antenna interval of 30 mm, and FIG. 25C is a chart showing transmitting waveforms indicated by the time-domain electric field strength (for 1050 picoseconds) of the antenna apparatus with an antenna interval of 60 mm.

In FIGS. 24A to 24C and 25A to 25C, the transmitting characteristics of the antenna apparatus utilizing the aperture of transmission line of d=32 mm and w=80 mm are expressed by electric field energies.

FIG. 26A is a signal waveform chart of the antenna apparatus with an antenna interval of 60 mm for showing differences when the antenna interval is changed as shown in FIGS. 23A to 23D, and FIG. 26B is a signal waveform chart of the antenna apparatus with an antenna interval of 100 mm for showing the same differences. As is apparent from FIGS.
26A and 26B, when the receiving characteristics with respect to the aperture plane of d=65 mm are compared with each other, a characteristic better than that of d=32 mm is obtained in spite of a separation of 100 mm. This means that the relation of (¼)λg≦d can be confirmed by the signal transmission simulations. It was discovered from FIGS. 23A to 23D that the antenna apparatus utilizing the aperture of transmission line was able to achieve almost the same efficiencies in transmitting and receiving.

FIG. 27A is a chart showing a top view of a tapered expanded field distribution when the characteristic impedance is made constant in the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment, and FIG. 27B is a chart showing a side view thereof, where the electric field strength is in a range up to 1628 V/m. Namely, FIGS. 27A and 27B show transmitting states of electromagnetic waves of 10-GHz sinusoidal waves when a taper angle θ of 120 degrees is added with the characteristic impedance Z0 kept constant. A dispersion in the form of a circular arc originating at the expansion starting point is found. The aperture portion 3 cannot perform TEM wave transmission as an antenna as a consequence of time dispersion due to the circular arc shape. The dispersion angle was about 60 degrees, and it was considered that the taper angle θ (or φ) for expanding in such a state that the electromagnetic coupling on the transmission line is completely maintained was 30 degrees, and this was adopted as a feature of the present invention.

FIG.
28 is a graph showing a frequency characteristic (the reference value in a range from 0.05 GHz to 20 GHz is 0 dB, with the horizontal axis located one scale below the upper limit value, and one scale represents 5 dB) of the reflection coefficient S11 of the antenna apparatus utilizing the aperture of transmission line used in the simulations of the present preferred embodiment. In FIG. 28, f1 denotes 3 GHz, f2 denotes 5 GHz, f3 denotes 10 GHz, f4 denotes 15 GHz, and f5 denotes 20 GHz. FIG. 28 is an experimental example, which has such a structure that the expanded tapered line portion 2 is formed of an acrylic plate and floated partway. The specifications of the aperture portion 3 are as follows: d=20 mm, w=30 mm, and (¼)λ corresponds to 3.75 GHz. In the present experiment, the stacked pair line 1 is not provided, and the tapered line portion 2 is connected in series with a BNC connector. The characteristic impedance Z0 of the BNC connector is 50Ω, the characteristic impedance Z0 of the tapered line portion 2 formed of the acrylic part is 83.5Ω, and the characteristic impedance Z0 of the aperture portion 3 is 139.4Ω, which constitute such a structure that a large reflection attenuation occurs under conditions far from a constant characteristic impedance. Reviewing the frequency characteristic of the reflection coefficient S11 of FIG. 28, results that are not so bad are obtained: the radiation characteristics have a substantially flat frequency characteristic, with S11 of approximately −10 dB at frequencies equal to or higher than 3.75 GHz.

FIG. 29A is a signal waveform chart showing signal waveforms of 10 GHz when the distance between aperture planes is changed in a pair of transmission line aperture type antenna apparatuses used in the simulations of the present preferred embodiment. In FIG.
29A, 291 denotes a case of a transmission distance of 10 cm and an amplitude of 42.76 mV, 292 denotes a case of a transmission distance of 50 cm and an amplitude of 10.95 mV, and 293 denotes a case of a transmission distance of 100 cm and an amplitude of 10.54 mV.

FIG. 29B is a signal waveform chart showing signal waveforms at 10 GHz when a displacement distance from the center line is changed in a pair of transmission line aperture type antenna apparatuses used in the simulations of the present preferred embodiment. In FIG. 29B, 294 denotes a case of no displacement and an amplitude of 10.95 mV, 295 denotes a case of 5 cm displacement and an amplitude of 11.72 mV, 296 denotes a case of 10 cm displacement and an amplitude of 9.70 mV, and 297 denotes a case of 20 cm displacement and an amplitude of 5.5 mV. Namely, FIGS. 29A and 29B show the transmitting and receiving characteristics when a pair of transmission line aperture type antenna apparatuses having the dimensions of d=20 mm and w=30 mm at the time of input of 10-GHz sine waves (having an amplitude of 1 V) are put in a mutually opposed state. FIG. 29A shows 10-GHz sine wave transmitting characteristics (receiving waveforms) when the antenna apparatuses utilizing the aperture of transmission line of FIG. 28 are opposed to each other and used for transmitting and receiving. Since the input voltage to the transmitting antenna apparatus is 1 V, an antenna gain of −40 dB is obtained by transmitting with the antenna apparatus with an antenna interval of 1 m. An advantageous effect similar to that of the simulations can presumably be expected so long as the characteristic impedance Z0 of the antenna is constant. Moreover, FIG.
29B shows receiving characteristics when the central axes of the transmitting and receiving antennas at an antenna interval of 50 cm are shifted in parallel in the width direction of the transmission line, and this shows that the antenna apparatus has substantial directivity.

The present inventor and others further conducted simulations of the antenna apparatus utilizing the aperture of transmission line of the preferred embodiments of FIGS. 2 to 6. As a result, it was confirmed that the passing coefficient S21 and the reflection coefficient S11 were scarcely influenced so long as the width was sufficiently small (e.g., a width of 1 μm with respect to d=100 μm) even when the support members 4a, 4b or 4c were provided. Moreover, in FIGS. 4 to 6, where the aperture portion 3 was expanded in the tapered shape, the results of a reduction in the reflection coefficient S11, a slight increase in the antenna gain and a consequent increase in the total antenna radiation efficiency were obtained.

### INDUSTRIAL APPLICABILITY

As described in detail above, the antenna apparatus utilizing the aperture of transmission line of the present invention is an antenna apparatus which is connected to the transmission line, has an extremely simple configuration as compared with that of the prior art, and has a narrow directivity with almost no change in the frequency characteristics, thereby allowing a remarkably large antenna gain to be achieved. Therefore, communications can be achieved even at a comparatively long distance.
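The dimensional rule used throughout this description — the conductor interval “d” must satisfy (¼)λg ≦ d, with the in-dielectric wave velocity given by Equation (7) — can be sketched numerically. The following is a minimal illustration only: the rounded speed-of-light constant and the function names are assumptions, not part of the patent.

```python
import math

# Rounded free-space speed of light [m/s]; an assumption for illustration.
C0 = 3.0e8

def guide_wavelength(freq_hz, mu_r=1.0, eps_r=1.0):
    """Wavelength in the dielectric at freq_hz, using Eq. (7): c = c0/sqrt(mu_r*eps_r)."""
    c = C0 / math.sqrt(mu_r * eps_r)
    return c / freq_hz

def min_operating_frequency(d_m, mu_r=1.0, eps_r=1.0):
    """Lowest frequency for which (1/4)*lambda_g <= d holds for interval d."""
    c = C0 / math.sqrt(mu_r * eps_r)
    return c / (4.0 * d_m)

# The simulated 1 m x 1 m aperture: (1/4)*lambda = 1000 mm at 75 MHz.
print(f"{min_operating_frequency(1.0) / 1e6:.0f} MHz")  # prints "75 MHz"
```

With an air-filled line (μr = ∈r = 1) the check reproduces the 1 m / 75 MHz figure quoted for the simulated aperture above; filling the line with a dielectric shortens λg and lowers the minimum usable frequency for the same interval.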
The antenna apparatus utilizing the aperture of transmission line of the present invention is applicable to various applications as follows.

(1) It is applicable to transmitting and receiving between IP terminals of global interconnections or wirings on an IC chip.

(2) It is applicable to means for communications between IC chips.

(3) It is applicable to means for communications between LSI packages.

(4) It is applicable to communications between boards.

(5) It is applicable to long-distance communications.

(6) It is applicable to a system in which UWB or digital signals are subject to direct communications without modulation because of almost no frequency characteristic.

(7) It is applicable to distance measurement and shape measurement of reflective objects.

(8) It is applicable to transmitting and receiving for RFID or the like on the base station side.

(9) It is applicable to transmitting with scanning the frequency, transmitting and receiving intended for scanning receiving, and applications intended for reflection receiving by utilizing the narrow directivity.

(10) It is applicable to MEMS communications, communications inside a living body for medical use, and satellite communications with gigantic antennas and power transmission, because the principle of the characteristic impedance can be expanded and contracted in similitude.

(11) It is applicable to applications having no relation to the allocation of radio frequencies because of the narrow directivity.

Although the present invention has been fully described in connection with the preferred embodiments thereof with reference to the accompanying drawings, it is to be noted that various changes and modifications are apparent to those skilled in the art. Such changes and modifications are to be understood as included within the scope of the present invention as defined by the appended claims unless they depart therefrom.

## Claims

1.
An antenna apparatus which is connected to a first transmission line having a predetermined characteristic impedance, the first transmission line including a pair of first planar line conductors electrically separated from each other such that magnetic lines of force are generated surrounding the first planar line conductors and perpendicularly intersect electric lines of force when electromagnetic waves travel on the first transmission line, the antenna apparatus comprising:

a tapered line portion connected to one end of the first transmission line, the tapered line portion including a second transmission line including a pair of second planar line conductors electrically separated from each other such that magnetic lines of force are generated surrounding the second planar line conductors and perpendicularly intersect electric lines of force when electromagnetic waves travel on the second transmission line, the tapered line portion keeping a predetermined characteristic impedance constant and expanding at least one of a width of the second transmission line and an interval in a tapered shape at a predetermined taper angle; and

an aperture portion having a radiation aperture connected to one end of the tapered line portion, the aperture portion including a pair of parallel planar line conductors separated from each other,

wherein a size of one side of the aperture end plane of the aperture portion is set to be equal to or higher than a quarter wavelength of the minimum operating frequency of the antenna apparatus.

2. The antenna apparatus as claimed in claim 1, further comprising a support member that short-circuits and supports the second transmission line including the pair of second planar line conductors substantially in a center portion in a width direction of the second transmission line of the aperture portion.

3. The antenna apparatus as claimed in claim 1, further comprising a pair of support members that short-circuit and support the second transmission line including the pair of second planar line conductors substantially at both ends in a width direction of the second transmission line of the aperture portion.

4. The antenna apparatus as claimed in claim 1, wherein the aperture portion is constituted by expanding a width of the second transmission line in a tapered shape.

5. The antenna apparatus as claimed in claim 1, wherein a space located between the pair of first planar line conductors of the first transmission line in the tapered line portion is filled with a predetermined dielectric.

6.
The antenna apparatus as claimed in claim 1, wherein a space located between the pair of second planar line conductors of the second transmission line in the aperture portion is filled with a predetermined dielectric.

7. The antenna apparatus as claimed in claim 1, further comprising a support member for supporting both end portions in a width direction of the first transmission line in the tapered line portion with interposition of a predetermined interval.

8. The antenna apparatus as claimed in claim 1, further comprising a support member for supporting both end portions in the width direction of the second transmission line in the aperture portion with interposition of a predetermined interval.

9. The antenna apparatus as claimed in claim 1, wherein the taper angle is set to a predetermined value which is larger than zero degrees and equal to or smaller than 30 degrees.

10. The antenna apparatus as claimed in claim 1, wherein the characteristic impedance is set to a predetermined value that is set within a range from 50Ω to 100Ω.
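The numeric limits recited in claims 1, 9 and 10 can be collected into a single consistency check. This is a hedged sketch only: the function name, argument list and rounded speed-of-light constant are illustrative assumptions, not anything defined by the patent.

```python
# Hypothetical checker for the numeric limits in claims 1, 9 and 10.
def satisfies_claims(side_m, taper_deg, z0_ohm, f_min_hz, c=3.0e8):
    quarter_wavelength = c / (4.0 * f_min_hz)  # claim 1: aperture side must be at
    return (side_m >= quarter_wavelength       # least a quarter wavelength at f_min
            and 0.0 < taper_deg <= 30.0        # claim 9: 0 deg < angle <= 30 deg
            and 50.0 <= z0_ohm <= 100.0)       # claim 10: 50 ohm <= Z0 <= 100 ohm

# The simulated 1 m aperture (75 MHz minimum) with theta = 30 deg, Z0 = 50 ohm:
print(satisfies_claims(1.0, 30.0, 50.0, 75e6))      # prints True
# A 45-degree taper violates the claim-9 limit:
print(satisfies_claims(0.02, 45.0, 83.5, 3.75e9))   # prints False
```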
https://rajeshshuklacatalyst.in/class-12-practical-file-programs/
[ "# Class 12 : Practical File Programs\n\n## Write a python script to take input for a number and print its table?\n\n```n=int(input(\"Enter any no \"))\ni=1\nwhile(i<=10):\n    t=n*i\n    print(n,\" * \",i,\" = \",t)\n    i=i+1```\n\nOutput:\n\nEnter any no 5\n5 * 1 = 5\n5 * 2 = 10\n5 * 3 = 15\n5 * 4 = 20\n5 * 5 = 25\n5 * 6 = 30\n5 * 7 = 35\n5 * 8 = 40\n5 * 9 = 45\n5 * 10 = 50\n>>>\n\n
## Write a python script to take input for a number and print its factorial?\n\n```n=int(input(\"Enter any no \"))\ni=1\nf=1\nwhile(i<=n):\n    f=f*i\n    i=i+1\nprint(\"Factorial = \",f)```\n\nOutput:\n\nEnter any no 5\nFactorial = 120\n>>>\n\n
## Write a python script to take input for a number and check if the entered number is Armstrong or not.\n\n```n=int(input(\"Enter the number to check : \"))\nn1=n\ns=0\nwhile(n>0):\n    d=n%10\n    s=s + (d*d*d)\n    n=int(n/10)\nif s==n1:\n    print(\"Armstrong Number\")\nelse:\n    print(\"Not an Armstrong Number\")```\n\nOutput:\n\nEnter the number to check : 153\nArmstrong Number\n>>>\n\nOutput:\n\nEnter the number to check : 152\nNot an Armstrong Number\n>>>\n\n
## Write a python script to take input for a number and print its factorial using recursion?\n\nSol:\n\n```#Factorial of a number using recursion\ndef recur_factorial(n):\n    if n == 1:\n        return n\n    else:\n        return n*recur_factorial(n-1)\n#for fixed number\nnum = 7\n#using user input\nnum=int(input(\"Enter any no \"))\n#check if the number is negative\nif num < 0:\n    print(\"Sorry, factorial does not exist for negative numbers\")\nelif num == 0:\n    print(\"The factorial of 0 is 1\")\nelse:\n    print(\"The factorial of\", num, \"is\", recur_factorial(num))```\n\nOutput:\n\nEnter any no 5\nThe factorial of 5 is 120\n>>>\n\n
## Write a python script to Display Fibonacci Sequence Using Recursion?\n\nSol:\n\n```#Python program to display the Fibonacci sequence\ndef recur_fibo(n):\n    if n <= 1:\n        return n\n    else:\n        return(recur_fibo(n-1) + recur_fibo(n-2))\nnterms = 10\n#check if the number of terms is valid\nif (nterms <= 0):\n    print(\"Please enter a positive integer\")\nelse:\n    print(\"Fibonacci sequence:\")\n    for i in range(nterms):\n        print(recur_fibo(i))```\n\nOutput:\n\nFibonacci sequence:\n0\n1\n1\n2\n3\n5\n8\n13\n21\n34\n>>>\n\n
### CBSE Class 12 @ Python\n\n• Class 12 @ Python Theory Syllabus\n• Class 12 @ Python Practical Syllabus\n• Revision Tour\n• Functions (Funcations, Inbuilt Functions, Python Modules, Python Packages)\n• Inbuilt Functions\n• Python Modules\n• Python Packages\n• Using Python Libraries\n• Python Data File Handling\n• Program Efficiency\n• Data Structures In Python\n• Data Visualization Using Pyplot\n• Computer Networks\n• MySQL\n• Interface Python with SQL\n• Society, Law and Ethics\n• Web Development with Django\n• Class 12 @ Python Sample Practical File\n• Class 12 @ Python Sample Papers\n• Class 12 @ Python Projects\n\n### Interview Questions\n\nC Programming\nC++ Programming\nClass 11 (Python)\nClass 12 (Python)\nC Language\nC++ Programming\nPython\n\nC Interview Questions\nC++ Interview Questions\nC Programs\nC++ Programs" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5197713,"math_prob":0.9539637,"size":2225,"snap":"2022-40-2023-06","text_gpt3_token_len":700,"char_repetition_ratio":0.14768122,"word_repetition_ratio":0.11675127,"special_character_ratio":0.34651685,"punctuation_ratio":0.10822511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99854803,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T23:20:18Z\",\"WARC-Record-ID\":\"<urn:uuid:290153d3-4428-49c2-99c6-42067cd6b281>\",\"Content-Length\":\"73886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:132910fe-fc32-4795-ada3-a9e007f2b8b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7126ab51-c05a-4480-940c-39f3f24920a1>\",\"WARC-IP-Address\":\"103.92.235.92\",\"WARC-Target-URI\":\"https://rajeshshuklacatalyst.in/class-12-practical-file-programs/\",\"WARC-Payload-Digest\":\"sha1:L5XERTUCR3TVWYQSGECH3XNUVCRVYVHK\",\"WARC-Block-Digest\":\"sha1:RDD5ZWTP6MTMZQ7TE3BNI3SRRXFJ5TEC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499695.59_warc_CC-MAIN-20230128220716-20230129010716-00586.warc.gz\"}"}
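The Armstrong-number program in the practical file above is hard-coded for three-digit numbers, since it cubes each digit. As a sketch that is not part of the original practical file, a generalised check raises each digit to the power of the digit count, which also covers numbers such as 9474:

```python
# Generalized Armstrong (narcissistic) number check.
# The practical-file version cubes digits, so it only works for 3-digit
# numbers; here each digit is raised to the number of digits instead.
def is_armstrong(n):
    digits = str(n)
    power = len(digits)
    return n == sum(int(d) ** power for d in digits)

print(is_armstrong(153))   # 1**3 + 5**3 + 3**3 == 153
print(is_armstrong(9474))  # 9**4 + 4**4 + 7**4 + 4**4 == 9474
print(is_armstrong(152))
```

The helper name `is_armstrong` is an illustrative choice, not something from the original page.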
https://www.colorhexa.com/074132
[ "# #074132 Color Information\n\nIn a RGB color space, hex #074132 is composed of 2.7% red, 25.5% green and 19.6% blue. Whereas in a CMYK color space, it is composed of 89.2% cyan, 0% magenta, 23.1% yellow and 74.5% black. It has a hue angle of 164.5 degrees, a saturation of 80.6% and a lightness of 14.1%. #074132 color hex could be obtained by blending #0e8264 with #000000. Closest websafe color is: #003333.\n\n• R 3\n• G 25\n• B 20\nRGB color chart\n• C 89\n• M 0\n• Y 23\n• K 75\nCMYK color chart\n\n#074132 color description : Very dark cyan - lime green.\n\n# #074132 Color Conversion\n\nThe hexadecimal color #074132 has RGB values of R:7, G:65, B:50 and CMYK values of C:0.89, M:0, Y:0.23, K:0.75. Its decimal value is 475442.\n\nHex triplet RGB Decimal 074132 `#074132` 7, 65, 50 `rgb(7,65,50)` 2.7, 25.5, 19.6 `rgb(2.7%,25.5%,19.6%)` 89, 0, 23, 75 164.5°, 80.6, 14.1 `hsl(164.5,80.6%,14.1%)` 164.5°, 89.2, 25.5 003333 `#003333`\nCIE-LAB 23.855, -22.04, 4.136 2.553, 4.056, 3.666 0.249, 0.395, 4.056 23.855, 22.424, 169.372 23.855, -18.773, 6.936 20.139, -12.611, 3.305 00000111, 01000001, 00110010\n\n# Color Schemes with #074132\n\n• #074132\n``#074132` `rgb(7,65,50)``\n• #410716\n``#410716` `rgb(65,7,22)``\nComplementary Color\n• #074115\n``#074115` `rgb(7,65,21)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #073341\n``#073341` `rgb(7,51,65)``\nAnalogous Color\n• #411507\n``#411507` `rgb(65,21,7)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #410733\n``#410733` `rgb(65,7,51)``\nSplit Complementary Color\n• #413207\n``#413207` `rgb(65,50,7)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #320741\n``#320741` `rgb(50,7,65)``\n• #164107\n``#164107` `rgb(22,65,7)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #320741\n``#320741` `rgb(50,7,65)``\n• #410716\n``#410716` `rgb(65,7,22)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #02130f\n``#02130f` `rgb(2,19,15)``\n• #052a20\n``#052a20` `rgb(5,42,32)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #095844\n``#095844` `rgb(9,88,68)``\n• 
#0c6f55\n``#0c6f55` `rgb(12,111,85)``\n• #0e8667\n``#0e8667` `rgb(14,134,103)``\nMonochromatic Color\n\n# Alternatives to #074132\n\nBelow, you can see some colors close to #074132. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #074124\n``#074124` `rgb(7,65,36)``\n• #074128\n``#074128` `rgb(7,65,40)``\n• #07412d\n``#07412d` `rgb(7,65,45)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #074137\n``#074137` `rgb(7,65,55)``\n• #07413c\n``#07413c` `rgb(7,65,60)``\n• #074141\n``#074141` `rgb(7,65,65)``\nSimilar Colors\n\n# #074132 Preview\n\nThis text has a font color of #074132.\n\n``<span style=\"color:#074132;\">Text here</span>``\n#074132 background color\n\nThis paragraph has a background color of #074132.\n\n``<p style=\"background-color:#074132;\">Content here</p>``\n#074132 border color\n\nThis element has a border color of #074132.\n\n``<div style=\"border:1px solid #074132;\">Content here</div>``\nCSS codes\n``.text {color:#074132;}``\n``.background {background-color:#074132;}``\n``.border {border:1px solid #074132;}``\n\n# Shades and Tints of #074132\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010c09 is the darkest color, while #f9fefd is the lightest one.\n\n• #010c09\n``#010c09` `rgb(1,12,9)``\n• #031e17\n``#031e17` `rgb(3,30,23)``\n• #052f24\n``#052f24` `rgb(5,47,36)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #095340\n``#095340` `rgb(9,83,64)``\n• #0b644d\n``#0b644d` `rgb(11,100,77)``\n• #0d765b\n``#0d765b` `rgb(13,118,91)``\n• #0f8868\n``#0f8868` `rgb(15,136,104)``\n• #119a76\n``#119a76` `rgb(17,154,118)``\n• #12ab84\n``#12ab84` `rgb(18,171,132)``\n• #14bd91\n``#14bd91` `rgb(20,189,145)``\n• #16cf9f\n``#16cf9f` `rgb(22,207,159)``\n• #18e0ad\n``#18e0ad` `rgb(24,224,173)``\n• #25e7b5\n``#25e7b5` `rgb(37,231,181)``\n• #36e9bb\n``#36e9bb` `rgb(54,233,187)``\n• #48ebc1\n``#48ebc1` `rgb(72,235,193)``\n• #5aedc7\n``#5aedc7` `rgb(90,237,199)``\n• #6cefcd\n``#6cefcd` `rgb(108,239,205)``\n• #7df1d3\n``#7df1d3` `rgb(125,241,211)``\n• #8ff3d9\n``#8ff3d9` `rgb(143,243,217)``\n• #a1f5df\n``#a1f5df` `rgb(161,245,223)``\n• #b2f7e5\n``#b2f7e5` `rgb(178,247,229)``\n• #c4f9eb\n``#c4f9eb` `rgb(196,249,235)``\n• #d6fbf1\n``#d6fbf1` `rgb(214,251,241)``\n• #e7fcf7\n``#e7fcf7` `rgb(231,252,247)``\n• #f9fefd\n``#f9fefd` `rgb(249,254,253)``\nTint Color Variation\n\n# Tones of #074132\n\nA tone is produced by adding gray to any pure hue. 
In this case, #232525 is the less saturated color, while #014735 is the most saturated one.\n\n• #232525\n``#232525` `rgb(35,37,37)``\n• #202826\n``#202826` `rgb(32,40,38)``\n• #1d2b27\n``#1d2b27` `rgb(29,43,39)``\n• #1a2e29\n``#1a2e29` `rgb(26,46,41)``\n• #18302a\n``#18302a` `rgb(24,48,42)``\n• #15332b\n``#15332b` `rgb(21,51,43)``\n• #12362d\n``#12362d` `rgb(18,54,45)``\n• #0f392e\n``#0f392e` `rgb(15,57,46)``\n• #0d3b2f\n``#0d3b2f` `rgb(13,59,47)``\n• #0a3e31\n``#0a3e31` `rgb(10,62,49)``\n• #074132\n``#074132` `rgb(7,65,50)``\n• #044433\n``#044433` `rgb(4,68,51)``\n• #014735\n``#014735` `rgb(1,71,53)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #074132 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52338296,"math_prob":0.722846,"size":3679,"snap":"2021-31-2021-39","text_gpt3_token_len":1625,"char_repetition_ratio":0.12761904,"word_repetition_ratio":0.011029412,"special_character_ratio":0.5710791,"punctuation_ratio":0.23730685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940069,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T04:34:07Z\",\"WARC-Record-ID\":\"<urn:uuid:7874377c-9914-4aa8-a591-0987cb707b25>\",\"Content-Length\":\"36085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a93b56e5-f616-4ca7-b437-9b659fdaaafb>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ff77d5c-992e-4569-9ca7-f24a7cff8d94>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/074132\",\"WARC-Payload-Digest\":\"sha1:EHONTMS6N2OH7KDHOHJ6N7KLYPRO4CQF\",\"WARC-Block-Digest\":\"sha1:BTSBPZF3I5FIOSVOQ6BP5B3FLZBQITKT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152112.54_warc_CC-MAIN-20210806020121-20210806050121-00269.warc.gz\"}"}
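The shades and tints tabulated above come from mixing the base colour toward black or white. A small sketch of that interpolation in Python (the exact mixing steps the site uses to generate its table are an assumption here):

```python
# Produce a shade (mix toward black) or tint (mix toward white) of a hex
# colour by linear interpolation: t=0 returns the base colour, t=1 the target.
def hex_to_rgb(h):
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def mix(color, target, t):
    r, g, b = hex_to_rgb(color)
    tr, tg, tb = target
    return '#%02x%02x%02x' % (round(r + (tr - r) * t),
                              round(g + (tg - g) * t),
                              round(b + (tb - b) * t))

print(mix('#074132', (0, 0, 0), 0.5))        # a 50% shade of #074132
print(mix('#074132', (255, 255, 255), 0.5))  # a 50% tint of #074132
```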
https://math.stackexchange.com/questions/3417896/find-the-cdf-and-pdf-of-u2-is-the-distribution-of-u2-uniform-on-0-1
[ "# Find the CDF and PDF of $U^2$. Is the distribution of $U^2$ Uniform on $(0, 1)$?\n\nI have the following problem:\n\nLet $$U$$ be a $$\\text{Unif}(−1,1)$$ random variable on the interval $$(−1,1)$$.\n\nFind the CDF and PDF of $$U^2$$. Is the distribution of $$U^2$$ Uniform on $$(0, 1)$$?\n\nThe solution is as follows:\n\nLet $$X = U^2$$, for $$0 < x < 1$$, $$P(X \\le x) = P(−\\sqrt{x} \\le U \\le \\sqrt{x}) = P(U \\le \\sqrt{x}) - P(U \\le -\\sqrt{x}) = \\dfrac{\\sqrt{x} + 1}{2} - \\dfrac{-\\sqrt{x} + 1}{2} = \\sqrt{x}$$ (Note that $$P(U \\le u) = \\dfrac{u + 1}{2}$$ for $$-1 \\le u \\le 1$$.) The density is then given by $$f_X(x) = \\dfrac{d}{dx} P(X \\le x) = \\dfrac{d}{dx}x^{1/2} = \\dfrac{1}{2} x^{-1/2}$$. The distribution of $$X = U^2$$ is not Unif$$(0, 1)$$ on the interval $$(0, 1)$$ as the PDF is not a constant on this interval.\n\nThe first fact that I am confused about is how the author got that the interval is now $$0 < x < 1$$ instead of $$-1 < x < 1$$.\n\nThe second fact that I am confused about, which it seems is related to the first, is how the author calculated that $$P(U \\le \\sqrt{x}) - P(U \\le -\\sqrt{x}) = \\dfrac{\\sqrt{x} + 1}{2} - \\dfrac{-\\sqrt{x} + 1}{2}$$. Did they just insert the values into the formula for the CDF (since the formula for the CDF is $$\\dfrac{x - a}{b - a}$$), or did they otherwise somehow calculate the CDF? I ask because I'm unsure if it's just a matter of memorization, or whether there is some mathematical understanding here that I am missing.\n\nI would greatly appreciate it if people could please take the time to clarify this.\n\n• Isn't it $X = |U|$? In this case for $0<x<1$ we have $P(X \\leq x) = P(|U| \\leq x) = P(-x \\leq U \\leq x)$ – G. Gare Nov 1 at 15:23\n• @G.Gare My apologies!!! I made a transcription error; it should be $U^2$. I'm really sorry!!! – The Pointer Nov 1 at 15:25\n• Then the square roots make more sense. There's another edit you missed: $P(-x\\leq U^2 \\leq x)$. – G. 
Gare Nov 1 at 15:26\n• @G.Gare How's that? – The Pointer Nov 1 at 15:28\n• It's ok now! Don't worry – G. Gare Nov 1 at 15:28\n\nExplaining fact 1: while $$U$$ has support $$[-1,\\,1]$$, $$X=U^2$$ has support $$[0,\\,1]$$.\n• Thanks for the answer. Can you please explain the reasoning for how we go from support $[-1, 1]$ to $[0, 1]$? I ask because it is very simple to naively just reason that, since $X = U^2$, we have $(-1)^2$ and $1^2$, and so the new support is $[1, 1]$ (which is obviously nonsense). I recognize that this would be due to a misunderstanding of what \"support\" is, but I'm curious what explanation you would use to describe the reasoning behind how we get $[0, 1]$, so that it is clear in my mind as a person who is new to this concept. – The Pointer Nov 1 at 15:32\n• @ThePointer The support is the set of allowed values. Note that $\\{u^2|u\\in[-1,\\,1]\\}=[0,\\,1]$. (For example, the graph $x=u^2$ has $x$-coordinates $\\in[0,\\,1]$ for $u\\in[-1,\\,1]$.) – J.G. Nov 1 at 15:46" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8415259,"math_prob":0.9999901,"size":1425,"snap":"2019-51-2020-05","text_gpt3_token_len":496,"char_repetition_ratio":0.14637579,"word_repetition_ratio":0.08301887,"special_character_ratio":0.3817544,"punctuation_ratio":0.07570978,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000013,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T13:40:02Z\",\"WARC-Record-ID\":\"<urn:uuid:b704fcd3-55e0-448a-b428-98db2d3874ae>\",\"Content-Length\":\"142988\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ccd0be52-9344-464c-91ca-7b05651f0912>\",\"WARC-Concurrent-To\":\"<urn:uuid:94f1cd48-2875-44d9-b5ae-c99f1d221751>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3417896/find-the-cdf-and-pdf-of-u2-is-the-distribution-of-u2-uniform-on-0-1\",\"WARC-Payload-Digest\":\"sha1:HQHEFOSD3UGWLU3EUYNEGNUKFC7UCFEN\",\"WARC-Block-Digest\":\"sha1:M7UKUMPLH5Z5P7AHQB5MVO3YVPKGABUK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541157498.50_warc_CC-MAIN-20191214122253-20191214150253-00380.warc.gz\"}"}
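The accepted derivation above — that X = U² has CDF √x on (0, 1), hence is not uniform — can be sanity-checked numerically. A quick Monte Carlo sketch:

```python
import random

# Monte Carlo check that X = U^2, with U ~ Unif(-1, 1), has CDF sqrt(x)
# on (0, 1) rather than the uniform CDF F(x) = x.
random.seed(0)
N = 100_000
samples = [random.uniform(-1, 1) ** 2 for _ in range(N)]

def empirical_cdf(x):
    # Fraction of simulated values of U^2 that fall at or below x.
    return sum(s <= x for s in samples) / N

for x in (0.25, 0.5, 0.81):
    # Empirical CDF should track sqrt(x), not x itself.
    print(x, round(empirical_cdf(x), 3), round(x ** 0.5, 3))
```

At x = 0.25 the empirical CDF sits near 0.5 rather than 0.25, which is exactly the non-uniform behaviour the answer describes.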
https://moneyexchangerate.org/currencyexchange/chf/dzd
[ "Swiss Franc to Algerian Dinar Exchange Rate and Currency Converter\n\nView current exchange rates for the pair Swiss Franc (CHF) and Algerian Dinar (DZD). This page shows a rate of Algerian Dinar for 1 Swiss Franc and compare local money of Switzerland and Algeria. Currency exchange rates updates every day and use average rates based on Trusted International exchange rate. Use Currency converter to calculate any amount of CHF to DZD exchange rate. On this page available money conversion tables of popular amounts, compare tables, history chart, popular money converter and list of live conversion of Swiss Franc in Algerian Dinar:\n\nToday exchange rate:\n\n1 CHF =\n121.87 DZD\n\nBy today rate (2019-06-26) CHF to DZD equal 121.870285\n\nInvert: DZD to CHF Currency rate\n\nCurrency converter\n\nSwiss Franc Algerian Dinar 1 CHF 1 DZD 100 Swiss Francs 100 Algerian Dinars\n\nSwiss Franc Currency Exchange Table\n\nCHF Value: Currency\n1 CHF\n=\n1.0258 USD\n1 CHF\n=\n0.9023 EUR\n1 CHF\n=\n0.8083 GBP\n1 CHF\n=\n1 CHF\n=\n1.4737 AUD\n1 CHF\n=\n1 CHF\n1 CHF\n=\n6.7362 DKK\n1 CHF\n=\n8.7629 NOK\n1 CHF\n=\n9.5243 SEK\n1 CHF\n=\n3.7678 AED\n1 CHF\n=\n7.0571 CNY\n1 CHF\n=\n8.0119 HKD\n1 CHF\n=\n109.9394 JPY\n1 CHF\n=\n71.1196 INR\n1 CHF\n=\n14486.094 IDR\n1 CHF\n=\n1.3894 SGD\n1 CHF\n=\n1186.2479 KRW\n1 CHF\n=\n14.7087 ZAR\n1 CHF\n=\n0.0001 BTC\n\nSwiss Franc currency rate vs major currencies Conversion table\n\nSwiss Franc vs other currencies\n\nAlgerian Dinar Currency Exchange Table\n\nDZD Value: Currency\n1 DZD\n=\n0.0084 USD\n1 DZD\n=\n0.0074 EUR\n1 DZD\n=\n0.0066 GBP\n1 DZD\n=\n1 DZD\n=\n0.0121 AUD\n1 DZD\n=\n0.0082 CHF\n1 DZD\n=\n0.0553 DKK\n1 DZD\n=\n0.0719 NOK\n1 DZD\n=\n0.0782 SEK\n1 DZD\n=\n0.0309 AED\n1 DZD\n=\n0.0579 CNY\n1 DZD\n=\n0.0657 HKD\n1 DZD\n=\n0.9021 JPY\n1 DZD\n=\n0.5836 INR\n1 DZD\n=\n118.8649 IDR\n1 DZD\n=\n0.0114 SGD\n1 DZD\n=\n9.7337 KRW\n1 DZD\n=\n0.1207 ZAR\n1 DZD\n=\n0 BTC\n\nAlgerian Dinar currency rate vs major currencies Conversion 
table\n\nAlgerian Dinar vs other currencies\n\nSwiss Franc compared to Algerian Dinar\n\nx1 x100 x1000\n1 Swiss Franc = 121.87 Algerian Dinar 100 Swiss Franc = 12187.03 Algerian Dinar 1000 Swiss Franc = 121870.28 Algerian Dinar\n2 Swiss Franc = 243.74 Algerian Dinar 200 Swiss Franc = 24374.06 Algerian Dinar 2000 Swiss Franc = 243740.57 Algerian Dinar\n3 Swiss Franc = 365.61 Algerian Dinar 300 Swiss Franc = 36561.09 Algerian Dinar 3000 Swiss Franc = 365610.85 Algerian Dinar\n4 Swiss Franc = 487.48 Algerian Dinar 400 Swiss Franc = 48748.11 Algerian Dinar 4000 Swiss Franc = 487481.14 Algerian Dinar\n5 Swiss Franc = 609.35 Algerian Dinar 500 Swiss Franc = 60935.14 Algerian Dinar 5000 Swiss Franc = 609351.42 Algerian Dinar\n6 Swiss Franc = 731.22 Algerian Dinar 600 Swiss Franc = 73122.17 Algerian Dinar 6000 Swiss Franc = 731221.71 Algerian Dinar\n7 Swiss Franc = 853.09 Algerian Dinar 700 Swiss Franc = 85309.2 Algerian Dinar 7000 Swiss Franc = 853091.99 Algerian Dinar\n8 Swiss Franc = 974.96 Algerian Dinar 800 Swiss Franc = 97496.23 Algerian Dinar 8000 Swiss Franc = 974962.28 Algerian Dinar\n9 Swiss Franc = 1096.83 Algerian Dinar 900 Swiss Franc = 109683.26 Algerian Dinar 9000 Swiss Franc = 1096832.56 Algerian Dinar\n\nSwiss Franc in Algerian Dinars History Chart\n\nDuring last 30 days average exchange rate of Swiss Franc in Algerian Dinars was 120.17954 DZD for 1 CHF. The highest price of Swiss Franc in Algerian Dinar was Tue, 25 Jun 2019 when 1 Swiss Franc = 122.1104 Algerian Dinar. The lowest change rate in last month between Swiss Francs and Algerian Dinar currencies was on Tue, 25 Jun 2019. On that day 1 CHF = 118.7362 DZD.\n\nen" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.549259,"math_prob":0.9015219,"size":2327,"snap":"2019-26-2019-30","text_gpt3_token_len":986,"char_repetition_ratio":0.339647,"word_repetition_ratio":0.013953488,"special_character_ratio":0.5345939,"punctuation_ratio":0.13690476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95474076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T12:13:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b8a8b64f-ae12-4d55-a5f3-48b39cf8e412>\",\"Content-Length\":\"71171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2736cd4e-0cb5-4a7b-a418-4922f0455fa2>\",\"WARC-Concurrent-To\":\"<urn:uuid:4aa7d158-55d2-47c0-9f56-f8d563116bbd>\",\"WARC-IP-Address\":\"45.55.84.94\",\"WARC-Target-URI\":\"https://moneyexchangerate.org/currencyexchange/chf/dzd\",\"WARC-Payload-Digest\":\"sha1:VQPXXZNCFVPM4UW6SHY6TVS4NIC6WUUS\",\"WARC-Block-Digest\":\"sha1:GCLBCKSJW5YUIKB2JLEYAIXW5HU2C5TT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000306.84_warc_CC-MAIN-20190626114215-20190626140215-00523.warc.gz\"}"}
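The conversion tables above are just the quoted mid-rate multiplied through each amount. A minimal converter sketch using the rate quoted on this page (rates change daily, so the constant is illustrative only):

```python
# Convert between CHF and DZD using the single rate quoted on the page
# (2019-06-26): 1 CHF = 121.870285 DZD.
CHF_TO_DZD = 121.870285

def chf_to_dzd(amount):
    return amount * CHF_TO_DZD

def dzd_to_chf(amount):
    return amount / CHF_TO_DZD

print(round(chf_to_dzd(100), 2))   # the "100 Swiss Franc" row of the table
print(round(dzd_to_chf(1000), 4))
```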
https://onlinemonks.com/web-stories/hedgepay-brief-price-prediction-analysis-forecast-29-Dec-2022/
[ "LIVE CURRENT PRICE (29 DEC 2022)\n\nALL TIME HIGH\n\nTOTAL MARKET CAP\n\n## HEDGEPAY (HPAY)\n\n### Current Rank #6606\n\nPRICE PREDICTION 2022\n\nthe HPAY price could reach a maximum possible level of \\$0.00093199 with the average forecast price of \\$0.00090611.\n\n## HEDGEPAY\n\nPRICE FORECAST 2023\n\nIn 2023 the price of HedgePay is expected to reach at a minimum price value of \\$0.001. The HPAY price can reach a maximum price value of \\$0.002 with the average value of \\$0.001.\n\n## HEDGEPAY\n\nPRICE ANALYSIS 2024\n\nthe HPAY price could reach a maximum possible level of \\$0.002 with the average forecast price of \\$0.002.\n\n## HEDGEPAY\n\nPRICE TARGET 2025\n\nIn 2025 the price of HedgePay is forecasted to be at around a minimum value of \\$0.003. The HedgePay price value can reach a maximum of \\$0.003 with the average trading value of \\$0.003 in USD.\n\n## HEDGEPAY\n\nPRICE PREDICTION 2026\n\nThe price of HedgePay is predicted to reach at a minimum level of \\$0.004 in 2026. The HedgePay price can reach a maximum level of \\$0.005 with the average price of \\$0.004 throughout 2026.\n\n## HEDGEPAY\n\nPRICE FORECAST 2027\n\nThe price of 1 HedgePay is expected to reach at a minimum level of \\$0.006 in 2027. The HPAY price can reach a maximum level of \\$0.007 with the average price of \\$0.006 throughout 2027.\n\n## HEDGEPAY\n\nPRICE ANALYSIS 2028\n\nIn 2028 the price of HedgePay is predicted to reach at a minimum level of \\$0.009. The HPAY price can reach a maximum level of \\$0.010 with the average trading price of \\$0.009.\n\n## HEDGEPAY\n\nPRICE TARGET 2029\n\nThe price of HedgePay is predicted to reach at a minimum value of \\$0.011 in 2029. The HedgePay price could reach a maximum value of \\$0.015 with the average trading price of \\$0.012 throughout 2029.\n\n## HEDGEPAY\n\nPRICE PREDICTION 2030\n\nThe price of HedgePay is predicted to reach at a minimum value of \\$0.017 in 2030. 
The HedgePay price could reach a maximum value of \\$0.020 with the average trading price of \\$0.018 throughout 2030.\n\n## HEDGEPAY\n\nPRICE FORECAST 2031\n\nIn 2031 the price of HedgePay is predicted to reach at a minimum level of \\$0.025. The HPAY price can reach a maximum level of \\$0.030 with the average trading price of \\$0.026.\n\n## HEDGEPAY\n\nHedgepay Brief, Price Prediction, Analysis & Forecast 2022-2031" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88275373,"math_prob":0.6658028,"size":2033,"snap":"2023-14-2023-23","text_gpt3_token_len":572,"char_repetition_ratio":0.23016264,"word_repetition_ratio":0.32492998,"special_character_ratio":0.3271028,"punctuation_ratio":0.10983982,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96370196,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-23T04:06:23Z\",\"WARC-Record-ID\":\"<urn:uuid:7e9c4bd5-33ca-4204-9997-b0c763002dd0>\",\"Content-Length\":\"113300\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63f25ff1-289b-45bb-8165-055e040318f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0145c08a-13d1-4914-a817-2d081c1ac79b>\",\"WARC-IP-Address\":\"137.184.178.87\",\"WARC-Target-URI\":\"https://onlinemonks.com/web-stories/hedgepay-brief-price-prediction-analysis-forecast-29-Dec-2022/\",\"WARC-Payload-Digest\":\"sha1:TCMI2GRCPIPJAJI5YCXGJWFGNHL7NEAF\",\"WARC-Block-Digest\":\"sha1:LGVEWMG2JNYGWCVBNHUDZQH7YLWLTMP3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944996.49_warc_CC-MAIN-20230323034459-20230323064459-00126.warc.gz\"}"}
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Electromagnetics_I_(Ellingson)/05%3A_Electrostatics/5.13%3A_Electric_Potential_Field_due_to_a_Continuous_Distribution_of_Charge
[ "# 5.13: Electric Potential Field due to a Continuous Distribution of Charge\n\nThe electrostatic potential field at $${\\bf r}$$ associated with $$N$$ charged particles is\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\sum_{n=1}^N { \\frac{q_n}{\\left|{\\bf r}-{\\bf r}_n\\right|} } \\label{m0065_eCountable}$\n\nwhere $$q_n$$ and $${\\bf r_n}$$ are the charge and position of the $$n^{\\mbox{th}}$$ particle. However, it is more common to have a continuous distribution of charge as opposed to a countable number of charged particles. We now consider how to compute $$V({\\bf r})$$ for three types of these commonly-encountered distributions. Before beginning, it’s worth noting that the methods will be essentially the same, from a mathematical viewpoint, as those developed in Section [m0104_E_due_to_a_Continuous_Distribution_of_Charge]; therefore, a review of that section may be helpful before attempting this section.\n\n## Continuous Distribution of Charge Along a Curve\n\nConsider a continuous distribution of charge along a curve $$\\mathcal{C}$$. The curve can be divided into short segments of length $$\\Delta l$$. Then, the charge associated with the $$n^{\\mbox{th}}$$ segment, located at $${\\bf r}_n$$, is\n\n$q_n = \\rho_l({\\bf r}_n)~\\Delta l$\n\nwhere $$\\rho_l$$ is the line charge density (units of C/m) at $${\\bf r}_n$$. 
Substituting this expression into Equation \\ref{m0065_eCountable}, we obtain\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\sum_{n=1}^{N} { \\frac{\\rho_l({\\bf r}_n)}{\\left|{\\bf r}-{\\bf r}_n\\right|} \\Delta l}$\n\nTaking the limit as $$\\Delta l\\to 0$$ yields:\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\int_{\\mathcal C} { \\frac{\\rho_l(l)}{\\left|{\\bf r}-{\\bf r}'\\right|} dl} \\label{m0065_eLineCharge}$\n\nwhere $${\\bf r}'$$ represents the varying position along $${\\mathcal C}$$ with integration along the length $$l$$.\n\n## Continuous Distribution of Charge Over a Surface\n\nConsider a continuous distribution of charge over a surface $$\\mathcal{S}$$. The surface can be divided into small patches having area $$\\Delta s$$. Then, the charge associated with the $$n^{\\mbox{th}}$$ patch, located at $${\\bf r}_n$$, is\n\n$q_n = \\rho_s({\\bf r}_n)~\\Delta s$\n\nwhere $$\\rho_s$$ is surface charge density (units of C/m$$^2$$) at $${\\bf r}_n$$. Substituting this expression into Equation \\ref{m0065_eCountable}, we obtain\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\sum_{n=1}^{N} { \\frac{\\rho_s({\\bf r}_n)}{\\left|{\\bf r}-{\\bf r}_n\\right|}~\\Delta s}$\n\nTaking the limit as $$\\Delta s\\to 0$$ yields:\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\int_{\\mathcal S} { \\frac{\\rho_s({\\bf r}')}{\\left|{\\bf r}-{\\bf r}'\\right|}~ds} \\label{m0065_eSurfCharge}$\n\nwhere $${\\bf r}'$$ represents the varying position over $${\\mathcal S}$$ with integration.\n\n## Continuous Distribution of Charge in a Volume\n\nConsider a continuous distribution of charge within a volume $$\\mathcal{V}$$. The volume can be divided into small cells (volume elements) having volume $$\\Delta v$$. Then, the charge associated with the $$n^{\\mbox{th}}$$ cell, located at $${\\bf r}_n$$, is\n\n$q_n = \\rho_v({\\bf r}_n)~\\Delta v$\n\nwhere $$\\rho_v$$ is the volume charge density (units of C/m$$^3$$) at $${\\bf r}_n$$. 
Substituting this expression into Equation \\ref{m0065_eCountable}, we obtain\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\sum_{n=1}^{N} { \\frac{\\rho_v({\\bf r}_n)}{\\left|{\\bf r}-{\\bf r}_n\\right|}~\\Delta v}$\n\nTaking the limit as $$\\Delta v\\to 0$$ yields:\n\n$V({\\bf r}) = \\frac{1}{4\\pi\\epsilon} \\int_{\\mathcal V} { \\frac{\\rho_v({\\bf r}')}{\\left|{\\bf r}-{\\bf r}'\\right|}~dv}$\n\nwhere $${\\bf r}'$$ represents the varying position over $${\\mathcal V}$$ with integration." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78866565,"math_prob":0.9999951,"size":3996,"snap":"2019-51-2020-05","text_gpt3_token_len":1293,"char_repetition_ratio":0.1503006,"word_repetition_ratio":0.16829745,"special_character_ratio":0.34134135,"punctuation_ratio":0.081491716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T23:58:04Z\",\"WARC-Record-ID\":\"<urn:uuid:17c1e2c4-41ca-4a1b-8f57-10f56bbae798>\",\"Content-Length\":\"81762\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ae7025a-3893-4730-98c7-3ddfe9d1acf1>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4733517-7cf7-4d0d-a89c-550c7c6d0d44>\",\"WARC-IP-Address\":\"34.232.212.106\",\"WARC-Target-URI\":\"https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Electromagnetics_I_(Ellingson)/05%3A_Electrostatics/5.13%3A_Electric_Potential_Field_due_to_a_Continuous_Distribution_of_Charge\",\"WARC-Payload-Digest\":\"sha1:6YZ2BH4T2FRKGNB5OHUIEEGGPCB7FMNG\",\"WARC-Block-Digest\":\"sha1:OBTYSFAJFYMAXCGU3XR7JL2YJT2AZT7M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547536.49_warc_CC-MAIN-20191212232450-20191213020450-00030.warc.gz\"}"}
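The limiting integrals above can also be evaluated numerically. As a sketch, the line-charge case for a uniform, finite straight segment observed on its perpendicular bisector has a simple closed-form antiderivative, so a midpoint-rule evaluation of the integral can be checked against it (the geometry, charge density, and helper names here are illustrative choices, not from the text):

```python
import math

# Midpoint-rule evaluation of V(r) = (1/(4*pi*eps)) * integral of rho_l dl / |r - r'|
# for a uniform line charge of length L on the z-axis, observed at distance d
# on the perpendicular bisector, compared with the closed-form result.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential_numeric(rho_l, L, d, n=50_000):
    dl = L / n
    total = 0.0
    for i in range(n):
        z = -L / 2 + (i + 0.5) * dl       # midpoint of the i-th segment
        total += dl / math.hypot(d, z)    # dl / |r - r'|
    return rho_l * total / (4 * math.pi * EPS0)

def potential_exact(rho_l, L, d):
    # Antiderivative of 1/sqrt(d^2 + z^2) is ln(z + sqrt(d^2 + z^2)).
    s = math.sqrt(d * d + L * L / 4)
    return rho_l / (4 * math.pi * EPS0) * math.log((L / 2 + s) / (s - L / 2))

print(potential_numeric(1e-9, 2.0, 0.5))
print(potential_exact(1e-9, 2.0, 0.5))
```

The two values agree closely, as expected for a smooth integrand and a fine partition.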
https://fr.mathworks.com/help/mpc/ref/mpcstate.html
[ "# mpcstate\n\nDefine MPC controller state\n\n## Syntax\n\n```xmpc = mpcstate(MPCobj) xmpc = mpcstate(MPCobj,xp,xd,xn,u,p) xmpc = mpcstate ```\n\n## Description\n\n`xmpc = mpcstate(MPCobj)` creates a controller state object compatible with the controller object, `MPCobj`, in which all fields are set to their default values that are associated with the controller’s nominal operating point.\n\n`xmpc = mpcstate(MPCobj,xp,xd,xn,u,p)` sets the state fields of the controller state object to specified values. The controller may be an implicit or explicit controller object. Use this controller state object to initialize an MPC controller at a specific state other than the default state.\n\n`xmpc = mpcstate` returns an `mpcstate` object in which all fields are empty.\n\n`mpcstate` objects are updated by `mpcmove` through the internal state observer based on the extended prediction model. The overall state is updated from the measured output ym(k) by a linear state observer (see State Observer).\n\n## Input Arguments\n\n `MPCobj` MPC controller, specified as either a traditional MPC controller (`mpc`) or explicit MPC controller (`generateExplicitMPC`). `xp` Plant model state estimates, specified as a vector with Nxp elements, where Nxp is the number of states in the plant model. `xd` Disturbance model state estimates, specified as a vector with Nxd elements, where Nxd is the total number of states in the input and output disturbance models. The disturbance model states are ordered such that input disturbance model states are followed by output disturbance model state estimates. 
`xn` Measurement noise model state estimates, specified as a vector with Nxn elements, where Nxn is the number of states in the measurement noise model. `u` Values of the manipulated variables during the previous control interval, specified as a vector with Nu elements, where Nu is the number of manipulated variables. `p` Covariance matrix for the state estimates, specified as an N-by-N matrix, where N is the sum of Nxp, Nxd and Nxn).\n\n## Output Arguments\n\n`xmpc`\n\nMPC state object, containing the following properties.\n\nProperty\n\nDescription\n\n`Plant`\n\nVector of state estimates for the controller’s plant model. Values are in engineering units and are absolute, i.e., they include state offsets.\n\nIf the controller’s plant model includes delays, the `Plant` field of the MPC state object includes states that model the delays. Therefore `length(Plant)` > order of undelayed controller plant model.\n\nDefault: controller’s `Model.Nominal.X` property.\n\n`Disturbance`\n\nVector of unmeasured disturbance model state estimates. This comprises the states of the input disturbance model followed by the states of the output disturbances model.\n\nDisturbance models may be created by default. Use the `getindist`and `getoutdist`commands to view the two disturbance model structures.\n\nDefault: zero, or empty if there are no disturbance model states.\n\n`Noise`\n\nVector of output measurement noise model state estimates.\n\nDefault: zero, or empty if there are no noise model states.\n\n`LastMove`\n\nVector of manipulated variables used in the previous control interval, u(k–1). 
Values are absolute, i.e., they include manipulated variable offsets.\n\nDefault: nominal values of the manipulated variables.\n\n`Covariance`\n\nn-by-n symmetric covariance matrix for the controller state estimates, where n is the dimension of the extended controller state, i.e., the sum of the number of states contained in the `Plant`, `Disturbance`, and `Noise` fields.\n\nDefault: If the controller is employing default state estimation, the default is the steady-state covariance computed according to the assumptions in Controller State Estimation. See also the description of the `P` matrix in the Control System Toolbox `kalmd` command. If the controller is employing custom state estimation, this field is empty (not used).\n\n## Examples\n\nCreate a Model Predictive Controller for a single-input-single-output (SISO) plant. For this example, the plant includes an input delay of 0.4 time units, and the control interval is 0.2 time units.\n\n```H = tf(1,[10 1],'InputDelay',0.4); MPCobj = mpc(H,0.2);```\n```-->The \"PredictionHorizon\" property of \"mpc\" object is empty. Trying PredictionHorizon = 10. -->The \"ControlHorizon\" property of the \"mpc\" object is empty. Assuming 2. -->The \"Weights.ManipulatedVariables\" property of \"mpc\" object is empty. Assuming default 0.00000. -->The \"Weights.ManipulatedVariablesRate\" property of \"mpc\" object is empty. Assuming default 0.10000. -->The \"Weights.OutputVariables\" property of \"mpc\" object is empty. Assuming default 1.00000. ```\n\nCreate the corresponding controller state object in which all states are at their default values.\n\n`xMPC = mpcstate(MPCobj)`\n```-->Converting the \"Model.Plant\" property of \"mpc\" object to state-space. -->Converting model to discrete time. -->Converting delays to states. -->Assuming output disturbance added to measured output channel #1 is integrated white noise. -->The \"Model.Noise\" property of the \"mpc\" object is empty. 
Assuming white noise on each measured output channel. MPCSTATE object with fields Plant: [0 0 0] Disturbance: 0 Noise: [1x0 double] LastMove: 0 Covariance: [4x4 double] ```\n\nThe plant model, `H`, is a first-order, continuous-time transfer function. The `Plant` property of the `mpcstate` object contains two additional states to model the two intervals of delay. Also, by default the controller contains a first-order output disturbance model (an integrator) and an empty measured output noise model.\n\nView the default covariance matrix.\n\n`xMPC.Covariance`\n```ans = 4×4 0.0624 0.0000 0.0000 -0.0224 0.0000 1.0000 -0.0000 -0.0000 0.0000 -0.0000 1.0000 0.0000 -0.0224 -0.0000 0.0000 0.2301 ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8341904,"math_prob":0.9115717,"size":1984,"snap":"2019-13-2019-22","text_gpt3_token_len":460,"char_repetition_ratio":0.16414142,"word_repetition_ratio":0.071428575,"special_character_ratio":0.19455644,"punctuation_ratio":0.111428574,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796032,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T11:04:48Z\",\"WARC-Record-ID\":\"<urn:uuid:30aefe91-79ad-4432-983e-ded291ed6105>\",\"Content-Length\":\"88564\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8508444d-539b-4503-b50a-d0fe660b3974>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d165aaf-14da-469e-8bf8-6d0633896735>\",\"WARC-IP-Address\":\"23.218.145.211\",\"WARC-Target-URI\":\"https://fr.mathworks.com/help/mpc/ref/mpcstate.html\",\"WARC-Payload-Digest\":\"sha1:CX4CWBSZDXUUYNBFI75BO4ZGXNSCJEYP\",\"WARC-Block-Digest\":\"sha1:YMKTDNWHBBBE6DE5TA5PVRBLB7QEGZ24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257243.19_warc_CC-MAIN-20190523103802-20190523125802-00227.warc.gz\"}"}
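The mpcstate reference above notes that the controller state is updated by `mpcmove` through a linear state observer. As a rough illustration of what one such update does (this is a generic predict-then-correct step, not the MathWorks implementation; the scalar model coefficients `a`, `b`, `c` and the gain `l` are made-up numbers):

```python
# Minimal sketch of one linear state observer update of the kind an MPC
# controller state carries between control intervals.
# x: current state estimate, u: last applied input, y: new measurement.
# a, b, c: scalar model coefficients; l: observer gain (all hypothetical).

def observer_update(x, u, y, a, b, c, l):
    x_pred = a * x + b * u           # predict with the model
    innovation = y - c * x_pred      # measurement residual
    return x_pred + l * innovation   # correct the prediction

# With a perfect model and measurement the innovation is zero,
# so the prediction is returned unchanged.
x_next = observer_update(x=1.0, u=0.5, y=1.25, a=1.0, b=0.5, c=1.0, l=0.4)
```

A real MPC state additionally stacks plant, disturbance, and noise model states into one extended state vector, with the covariance matrix tracking the uncertainty of that stacked estimate.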
http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1382.msg5185
[ "NTNUJAVA Virtual Physics Laboratory: Enjoy the fun of physics with simulations! Backup site http://enjoy.phy.ntnu.edu.tw/ntnujava/\n\nTopic: RC circuit + magnetic induction (B field)\n\nFu-Kwun Hwang, December 12, 2009, posted from Taipei, Taiwan:\n\nA capacitor has been charged to $V_o$. A metal bar with mass m and resistance R sits between two parallel wires (distance L) in a magnetic field B, as shown in the figure above.\nAt t=0 the switch is turned from a to b.\nA current I flows through the metal bar, so the force F = I L B accelerates the bar: $m\\frac{dv}{dt}=I L B$.\nWhen the metal bar is moving, the changing magnetic flux induces a voltage $V_i= B L v$.\nSo the loop equation is $V_c= \\frac{Q_c}{C} = I R + B L v$, where $I=-\\frac{dQ_c}{dt}$.\nDifferentiating the loop equation gives $\\frac{1}{C} \\frac{dQ_c}{dt}=-\\frac{I}{C} = R \\frac{dI}{dt}+B L \\frac{dv}{dt}= R \\frac{dI}{dt}+B L \\frac{BLI}{m}$, i.e. $\\frac{dI}{dt}=-\\left(\\frac{1}{RC}+\\frac{B^2L^2}{mR}\\right) I$.\nWith the initial condition $I(0)=V_o/R$ (the bar starts at rest, so there is no back-EMF at t=0), the solution is $I(t)=\\frac{V_o}{R}e^{-\\alpha t}$, where $\\alpha=\\frac{1}{RC}+\\frac{B^2L^2}{mR}$.\n\nThe following is a simulation for the above case: the charge $Q_c(t)$, the velocity $v(t)$ and the current $I(t)$ are shown (C=1 in the calculation)." ]
[ null, "http://www.phy.ntnu.edu.tw/ntnujava/Themes/default/images/rss.png", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/smflogo.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/upshrink.gif", null, "http://www.phy.ntnu.edu.tw/ntnujava/logo.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/filter.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/ntnujava/images/eye.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/staradmin.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/staradmin.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/staradmin.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/staradmin.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/staradmin.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/useroff.gif", null, "http://www.phy.ntnu.edu.tw/ntnujava/hwang.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/www_sm.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/ntnujava/icons/list_hidden.gif", null, "http://forum.phy.ntnu.edu.tw/neditor/popups/pics/20091211_1593304203.png", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/6/1/3/613dc95ccddd5d8fdcc03b1bf5fc03fd.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/4/7/4/47412905d594a12144af95c7bf871b83.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/1/9/9/19902f702bc813f71c057a3ee8d755bd.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/6/8/b/68ba173764f4a95cec8ef7422e110673.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/6/f/2/6f2969eb5581c6cf79beb1cf8b15c011.png ", null, 
"http://www.phy.ntnu.edu.tw/demolab/smf/teximages/a/a/9/aa9d9d8cdf4692047d46f1f7e82bb8ef.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/8/b/d/8bd67f9c4197403b0565c01efa8c2d76.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/0/f/2/0f269dc683ba73a28ef974a7178fbfab.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/c/b/4/cb4a99a20100cab862d61efcb6f9731c.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/d/7/c/d7c5399e7fc7886dc146b0cdc9e8d8f3.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/2/7/3/273a383345e167ee1791232c40eaf917.png ", null, "http://www.phy.ntnu.edu.tw/demolab/smf/teximages/a/d/0/ad0e7a9071302ad63197194332553921.png ", null, "http://www.phy.ntnu.edu.tw/ntnujava/images/help.gif", null, "http://www.phy.ntnu.edu.tw/ntnujava/images/plus.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/ip.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/icons/last_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/icons/last_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/icons/last_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/icons/last_post.gif", null, 
"http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/topic/normal_post.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/post/xx.gif", null, "http://www.phy.ntnu.edu.tw/demolab/smf/Themes/default/images/icons/last_post.gif", null, "http://www.phy.ntnu.edu.tw/cgi-bin/wwwcount.cgi", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6081579,"math_prob":0.96110135,"size":1105,"snap":"2019-51-2020-05","text_gpt3_token_len":358,"char_repetition_ratio":0.10626703,"word_repetition_ratio":0.03726708,"special_character_ratio":0.3321267,"punctuation_ratio":0.17073171,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853632,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T03:52:08Z\",\"WARC-Record-ID\":\"<urn:uuid:a8b37a9c-12ba-4360-a3a6-f59dd5d424a8>\",\"Content-Length\":\"46520\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a0f3739-6104-432e-99a2-dd6b9d067d7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:46f7444c-6f92-46f9-a5cb-4b677163fe31>\",\"WARC-IP-Address\":\"140.122.141.1\",\"WARC-Target-URI\":\"http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1382.msg5185\",\"WARC-Payload-Digest\":\"sha1:SYRT4UKRS2LPCKCCH5K2SLIEECASE6IQ\",\"WARC-Block-Digest\":\"sha1:S3VH35R5SVCZZ47TBAQOEJKPHWT3IWQB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251783621.89_warc_CC-MAIN-20200129010251-20200129040251-00161.warc.gz\"}"}
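The first-order ODE derived in the post, dI/dt = −(1/(RC) + B²L²/(mR)) I, predicts pure exponential decay of the current. A quick numerical check, sketched in Python with arbitrary unit parameters (V0 = C = R = B = L = m = 1, chosen only for illustration), integrates the coupled loop and force equations directly and compares the result against that decay rate:

```python
import math

# Integrate the coupled equations from the post with forward Euler:
#   I = (Q/C - B*L*v)/R   (loop equation  Q/C = I R + B L v)
#   dQ/dt = -I            (capacitor discharges through the loop)
#   m dv/dt = I L B       (force on the bar)
def simulate(t_end, dt=1e-4, V0=1.0, C=1.0, R=1.0, B=1.0, L=1.0, m=1.0):
    Q, v = C * V0, 0.0
    for _ in range(round(t_end / dt)):
        I = (Q / C - B * L * v) / R
        Q -= I * dt
        v += (B * L * I / m) * dt
    return (Q / C - B * L * v) / R   # current at t_end

alpha = 1.0 / (1.0 * 1.0) + (1.0 * 1.0) / (1.0 * 1.0)  # 1/(RC) + B^2 L^2 / (m R) = 2
I_num = simulate(1.0)                                  # numeric current at t = 1
I_ode = math.exp(-alpha * 1.0)                         # decay predicted by the ODE, I(0) = V0/R = 1
```

With these unit parameters the two values agree to within the Euler step error, confirming the decay rate α derived above.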
https://mmerevise.co.uk/gcse-maths-revision/cambridge-igcse/quadratic-graphs/
[ "GCSE Level 4-5 Cambridge iGCSE WJEC\n\nQuadratic graphs can be sketched on a set of axes, and from here the roots can be found.\n\nCareful: In some cases, the equation may need to be rearranged into the form of a quadratic.\n\n## Sketching Quadratic Graphs and Finding the Roots", null, "$y=x^2+bx+c$\n\nQuadratic graphs have a general U shape, with one line of symmetry halfway between the $x$ intercepts. The $x$ intercepts are circled in red. The $y$ intercept is circled in blue.\n\nSketching:\n\nTo sketch a quadratic you can create an $xy$ table for a selection of $x$ values and plot the coordinates. Connecting these points will create the shape of your quadratic.\n\nRoots:\n\nTo find the roots of the quadratic equation, look at your sketch and see where the graph crosses the $x$ axis.\n\nNote: The roots of a quadratic equation are the same as the $x$ intercepts.\n\nTake a look at the examples below.\n\n## Example 1", null, "Plot the following graph on a set of $x$ and $y$ axes,\n\n$y=x^2+2x-8$\n\nHence, find the roots of the equation.\n\n[3 marks]\n\nCreating the $xy$ Table\n\nSubstituting the values $x=-5$ to $x=3$, we get the following table:", null, "Plotting these points as coordinates, we get the graph shown on the right.\n\nFinding the Roots\n\nFrom the sketch we can see the graph crosses the $x$ axis at $-4$ and $2$. These are the roots.\n\n## Example 2", null, "Plot the following graph on a set of $x$ and $y$ axes,\n\n$y=x^2+8x+15$\n\nHence, find the roots of the equation.\n\n[3 marks]\n\nCreating the $xy$ Table\n\nSubstituting the values $x=-8$ to $x=0$, we get the following table:", null, "Plotting these points as coordinates, we get the graph shown on the right.\n\nFinding the Roots\n\nFrom the sketch we can see the graph crosses the $x$ axis at $-5$ and $-3$. These are the roots.\n\n$y=x^2+7x+10$\n\nCreating the $xy$ Table\n\nSubstituting the values $x=-7$ to $x=0$, we get the following table:", null, "Plotting these points as coordinates, we get the following graph:", null, "Finding the Roots\n\nFrom the sketch we can see the graph crosses the $x$ axis at $-5$ and $-2$. These are the roots.\n\n$y=x^2-9x+14$\n\nCreating the $xy$ Table\n\nSubstituting the values $x=0$ to $x=9$, we get the following table:", null, "Plotting these points as coordinates, we get the following graph:", null, "Finding the Roots\n\nFrom the sketch we can see the graph crosses the $x$ axis at $2$ and $7$. These are the roots.\n\nFor this question, we first need to rearrange the equation into the form of a quadratic:\n\n$y=x^2+2x+1$\n\nCreating the $xy$ Table\n\nSubstituting the values $x=-8$ to $x=0$, we get the following table:", null, "Plotting these points as coordinates, we get the following graph:", null, "Finding the Roots\n\nFrom the sketch we can see the graph touches the $x$ axis at $-1$. For this particular question the quadratic equation only has one root ($-1$)." ]
[ null, "https://mmerevise.co.uk/app/uploads/2022/11/quadgraphgeneral-1024x1020.png", null, "https://mmerevise.co.uk/app/uploads/2022/11/example1-566x1024.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/example1table.png", null, "https://mmerevise.co.uk/app/uploads/2022/11/example2-2-551x1024.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/Screenshot-2022-11-17-at-10.59.18-1024x127.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/Screenshot-2022-11-17-at-11.51.56.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/question1quadgraph-574x1024.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/Screenshot-2022-11-17-at-12.35.06.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/question2quadgraphs-559x1024.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/Screenshot-2022-11-17-at-12.54.36.png", null, "https://mmerevise.co.uk/wp-content/uploads/2022/11/example3quadgraph-1024x974.png", null, "https://mmerevise.co.uk/app/uploads/2022/09/lp-2022-new-1-300x236.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8602617,"math_prob":0.9994913,"size":3274,"snap":"2023-40-2023-50","text_gpt3_token_len":845,"char_repetition_ratio":0.18134557,"word_repetition_ratio":0.40963855,"special_character_ratio":0.23976786,"punctuation_ratio":0.091822095,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999785,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T11:50:30Z\",\"WARC-Record-ID\":\"<urn:uuid:a9f35fc4-5434-4093-9559-396bc248dc21>\",\"Content-Length\":\"280551\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0bd09d12-cc44-4f67-a516-e45bfdb513e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e8a7d02-b497-49e7-824c-cf10bd2dec5c>\",\"WARC-IP-Address\":\"104.26.3.141\",\"WARC-Target-URI\":\"https://mmerevise.co.uk/gcse-maths-revision/cambridge-igcse/quadratic-graphs/\",\"WARC-Payload-Digest\":\"sha1:MKTNDZPGTOWCI7QWTDRMOT7NWMRNBBLL\",\"WARC-Block-Digest\":\"sha1:S7KUAALRIUBHRBWBKOGOPYGFHVENSSW6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100081.47_warc_CC-MAIN-20231129105306-20231129135306-00497.warc.gz\"}"}
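The table-then-read-off method in Example 1 can be sketched in a few lines of Python: build the $xy$ table for $y=x^2+2x-8$ over $x=-5$ to $3$ and pick out the $x$ values where $y=0$ (the function and variable names here are ours, for illustration only):

```python
# xy table for y = x^2 + 2x - 8, as in Example 1 above.
def y(x):
    return x * x + 2 * x - 8

table = {x: y(x) for x in range(-5, 4)}          # x = -5 .. 3
roots = [x for x, fx in table.items() if fx == 0]  # x-intercepts of the sketch
```

`roots` recovers the same two x-intercepts the sketch shows, and `table[0]` gives the y-intercept.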
https://thetopsites.net/article/60226041.shtml
[ "## Javascript changing function trouble\n\nThe code below is how it was originally:\n\n```function getRandomEmail() {\n  const randomNumber = Math.floor(Math.random() * 200000) + 1;\n  return `User${randomNumber}@example.com`;\n}\n```\n\nInstead of a random number being generated and inserted, I am trying to have a word from a list added in front of the @example.com, like so:\n\n```function getRandomEmail() {\n  var emailname = \"testing testing2 testing3\".split(\" \");\n  return `User$(Math.floor(Math.random() * emailname.split)+ @example.com`\n```\n\nThe output I'm getting now while running the script is\n\ndid I miss anything ???\n\n**Answer 1**\n\n1. `emailname` is already an array after the `split` in the initialization. What you need now is its `length`.\n\n2. To insert the variable use `${..}`, not `$()`.\n\n```function getRandomEmail() {\n  var emailname = \"testing testing2 testing3\".split(\" \");\n  return `User${Math.floor(Math.random() * emailname.length)}@example.com`;\n}\n```\n\n**Answer 2**\n\nHere's a simple solution that uses randojs.com for readability.\n\n```function getRandomEmail(){\n  return rando([\"accounting\", \"sales\", \"hr\"]).value + \"@dundermifflin.com\";\n}\n\nconsole.log(getRandomEmail());```\n`<script src=\"https://randojs.com/1.0.0.js\"></script>`\n\n**Answer 3**\n\nI think the following is good as far as my knowledge goes. Is it correct? Did I miss anything?\n\nedit: doesn't function as expected. The same 'word' from the array gets used multiple times and the array is never emptied.\n\n```function getRandomEmail() {\n  var emailname = \"test1 test2 test3\".split(\" \");\n  const randomElement = emailname[Math.floor(Math.random() * emailname.length)];\n  const index = emailname.indexOf(randomElement);\n  if (index > -1) { emailname.splice(index, 1); }\n  console.log(emailname);\n  if (emailname && emailname.length > 0) {\n    console.log('emailname is not empty.');\n    return `${randomElement}@example.com`;\n  } else {\n    console.log('emailname is empty.');\n  }\n  return (\"no mail available\");\n}\n\nconsole.log(getRandomEmail());```\n\n**Answer 4**\n\n`emailname` is already an array of strings after splitting \"testing testing2 testing3\". You don't need to split it again.\n\nAlso, if you are using back-ticks (`) then you access variables/constants using `${variableName}`, NOT `$(variableName)`.\n\n```function getRandomEmail() {\n  var emailname = \"testing testing2 testing3\".split(\" \");\n  const randomElement = emailname[Math.floor(Math.random() * emailname.length)];\n  return `User${randomElement}@example.com`;\n}\n\nconsole.log(getRandomEmail());```\n\n**Comments**\n\n• The last row is incorrect: `emailname.split`. emailname was already split the row above. Probably `emailname.length`.\n• Have you just tried, for instance, to display some examples generated? Like with console.log(): `console.log(getRandomEmail())`\n• Where is that `BAD_EMAIL: that email is invalid` coming from? There's no logging in that function. 1. The `getRandomEmail` function is missing a closing `}`. 2. `emailname.split` does not evaluate to a number, yet you're trying to do multiplication. 3. You're trying to do a string template? It needs to be like this I think: `return `User${Math.floor(Math.random() * email.length)}@example.com``\n• If you need to `splice` the array, probably you want to init it outside the function. Inside the function it will just create a new array every time, so the splice does nothing. So probably you need something like this: `var emailname = \"testing testing2 testing3\".split(\" \"); function getRandomEmail() { const randomElement = emailname.splice(Math.floor(Math.random() * emailname.length), 1); return `User${randomElement}@example.com`; }`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7855074,"math_prob":0.6335793,"size":6777,"snap":"2021-31-2021-39","text_gpt3_token_len":1518,"char_repetition_ratio":0.14041045,"word_repetition_ratio":0.033932135,"special_character_ratio":0.23579755,"punctuation_ratio":0.16547334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97044444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T07:20:32Z\",\"WARC-Record-ID\":\"<urn:uuid:762f3850-b8e4-47c5-a189-db068368b35a>\",\"Content-Length\":\"18061\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b068f706-88d7-48a9-9b00-86ef4c8f0c4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d9833d1-4eca-4e46-8ff4-4be1f64db2c4>\",\"WARC-IP-Address\":\"192.169.175.36\",\"WARC-Target-URI\":\"https://thetopsites.net/article/60226041.shtml\",\"WARC-Payload-Digest\":\"sha1:CAZMXURQPFPCB2OV5DRRBMW4ONTSLXPM\",\"WARC-Block-Digest\":\"sha1:QFMDFPUKSFEUTFRJ4WN2GWG6FXUK6WZ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060538.11_warc_CC-MAIN-20210928062408-20210928092408-00642.warc.gz\"}"}
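The last comment in the thread above — keep the pool outside the function and remove each picked name so it cannot repeat — translates directly to other languages. A Python sketch of that idea (names and domain are the example values from the post; `pop` plays the role of JavaScript's `splice`):

```python
import random

# Pool lives outside the function so picks persist between calls.
pool = "testing testing2 testing3".split(" ")

def get_random_email():
    if not pool:
        return "no mail available"
    # Remove a random name so it can't be picked again.
    name = pool.pop(random.randrange(len(pool)))
    return f"User{name}@example.com"
```

Three calls yield three distinct addresses; a fourth call reports that the pool is empty.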
https://www.colorhexa.com/0d221c
[ "# #0d221c Color Information\n\nIn a RGB color space, hex #0d221c is composed of 5.1% red, 13.3% green and 11% blue. Whereas in a CMYK color space, it is composed of 61.8% cyan, 0% magenta, 17.6% yellow and 86.7% black. It has a hue angle of 162.9 degrees, a saturation of 44.7% and a lightness of 9.2%. #0d221c color hex could be obtained by blending #1a4438 with #000000. Closest websafe color is: #003333.\n\n• R 5\n• G 13\n• B 11\nRGB color chart\n• C 62\n• M 0\n• Y 18\n• K 87\nCMYK color chart\n\n#0d221c color description : Very dark (mostly black) cyan - lime green.\n\n# #0d221c Color Conversion\n\nThe hexadecimal color #0d221c has RGB values of R:13, G:34, B:28 and CMYK values of C:0.62, M:0, Y:0.18, K:0.87. Its decimal value is 860700.\n\nHex triplet RGB Decimal 0d221c `#0d221c` 13, 34, 28 `rgb(13,34,28)` 5.1, 13.3, 11 `rgb(5.1%,13.3%,11%)` 62, 0, 18, 87 162.9°, 44.7, 9.2 `hsl(162.9,44.7%,9.2%)` 162.9°, 61.8, 13.3 003333 `#003333`\nCIE-LAB 11.369, -10.358, 1.451 0.948, 1.313, 1.302 0.266, 0.369, 1.313 11.369, 10.459, 172.023 11.369, -6.427, 1.93 11.46, -5.297, 1.286 00001101, 00100010, 00011100\n\n# Color Schemes with #0d221c\n\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #220d13\n``#220d13` `rgb(34,13,19)``\nComplementary Color\n• #0d2212\n``#0d2212` `rgb(13,34,18)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #0d1e22\n``#0d1e22` `rgb(13,30,34)``\nAnalogous Color\n• #22120d\n``#22120d` `rgb(34,18,13)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #220d1e\n``#220d1e` `rgb(34,13,30)``\nSplit Complementary Color\n• #221c0d\n``#221c0d` `rgb(34,28,13)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #1c0d22\n``#1c0d22` `rgb(28,13,34)``\n• #13220d\n``#13220d` `rgb(19,34,13)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #1c0d22\n``#1c0d22` `rgb(28,13,34)``\n• #220d13\n``#220d13` `rgb(34,13,19)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #06100d\n``#06100d` `rgb(6,16,13)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #14342b\n``#14342b` 
`rgb(20,52,43)``\n• #1b473a\n``#1b473a` `rgb(27,71,58)``\n• #22594a\n``#22594a` `rgb(34,89,74)``\nMonochromatic Color\n\n# Alternatives to #0d221c\n\nBelow, you can see some colors close to #0d221c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0d2217\n``#0d2217` `rgb(13,34,23)``\n• #0d2219\n``#0d2219` `rgb(13,34,25)``\n• #0d221a\n``#0d221a` `rgb(13,34,26)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #0d221e\n``#0d221e` `rgb(13,34,30)``\n• #0d2220\n``#0d2220` `rgb(13,34,32)``\n• #0d2221\n``#0d2221` `rgb(13,34,33)``\nSimilar Colors\n\n# #0d221c Preview\n\nThis text has a font color of #0d221c.\n\n``<span style=\"color:#0d221c;\">Text here</span>``\n#0d221c background color\n\nThis paragraph has a background color of #0d221c.\n\n``<p style=\"background-color:#0d221c;\">Content here</p>``\n#0d221c border color\n\nThis element has a border color of #0d221c.\n\n``<div style=\"border:1px solid #0d221c;\">Content here</div>``\nCSS codes\n``.text {color:#0d221c;}``\n``.background {background-color:#0d221c;}``\n``.border {border:1px solid #0d221c;}``\n\n# Shades and Tints of #0d221c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020605 is the darkest color, while #f6fcfa is the lightest one.\n\n• #020605\n``#020605` `rgb(2,6,5)``\n• #081410\n``#081410` `rgb(8,20,16)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #123028\n``#123028` `rgb(18,48,40)``\n• #183e33\n``#183e33` `rgb(24,62,51)``\n• #1d4d3f\n``#1d4d3f` `rgb(29,77,63)``\n• #235b4b\n``#235b4b` `rgb(35,91,75)``\n• #286956\n``#286956` `rgb(40,105,86)``\n• #2e7762\n``#2e7762` `rgb(46,119,98)``\n• #33856e\n``#33856e` `rgb(51,133,110)``\n• #389479\n``#389479` `rgb(56,148,121)``\n• #3ea285\n``#3ea285` `rgb(62,162,133)``\n• #43b091\n``#43b091` `rgb(67,176,145)``\n• #4cbb9b\n``#4cbb9b` `rgb(76,187,155)``\n• #5ac0a3\n``#5ac0a3` `rgb(90,192,163)``\n• #69c5ab\n``#69c5ab` `rgb(105,197,171)``\n• #77cbb3\n``#77cbb3` `rgb(119,203,179)``\n• #85d0bb\n``#85d0bb` `rgb(133,208,187)``\n• #93d6c3\n``#93d6c3` `rgb(147,214,195)``\n• #a1dbcb\n``#a1dbcb` `rgb(161,219,203)``\n• #afe1d3\n``#afe1d3` `rgb(175,225,211)``\n• #bee6da\n``#bee6da` `rgb(190,230,218)``\n• #ccebe2\n``#ccebe2` `rgb(204,235,226)``\n• #daf1ea\n``#daf1ea` `rgb(218,241,234)``\n• #e8f6f2\n``#e8f6f2` `rgb(232,246,242)``\n• #f6fcfa\n``#f6fcfa` `rgb(246,252,250)``\nTint Color Variation\n\n# Tones of #0d221c\n\nA tone is produced by adding gray to any pure hue. 
In this case, #161918 is the less saturated color, while #002f21 is the most saturated one.\n\n• #161918\n``#161918` `rgb(22,25,24)``\n• #141b19\n``#141b19` `rgb(20,27,25)``\n• #121d1a\n``#121d1a` `rgb(18,29,26)``\n• #111e1a\n``#111e1a` `rgb(17,30,26)``\n• #0f201b\n``#0f201b` `rgb(15,32,27)``\n• #0d221c\n``#0d221c` `rgb(13,34,28)``\n• #0b241d\n``#0b241d` `rgb(11,36,29)``\n• #09261e\n``#09261e` `rgb(9,38,30)``\n• #08271e\n``#08271e` `rgb(8,39,30)``\n• #06291f\n``#06291f` `rgb(6,41,31)``\n• #042b20\n``#042b20` `rgb(4,43,32)``\n• #022d21\n``#022d21` `rgb(2,45,33)``\n• #002f21\n``#002f21` `rgb(0,47,33)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0d221c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
https://fdocuments.net/document/graphframes-graph-queries-in-spark-sql.html
• date post

07-Jan-2017

• Category

Data & Analytics

Transcript of GraphFrames: Graph Queries In Spark SQL

• GraphFrames: Graph Queries in Apache Spark SQL

Ankur Dave, UC Berkeley AMPLab

Joint work with Alekh Jindal (Microsoft), Li Erran Li (Uber), Reynold Xin (Databricks), Joseph Gonzalez (UC Berkeley), and Matei Zaharia (MIT and Databricks)

• Timeline: 2009, Spark (relational queries); 2013, Apache Spark + GraphX (+ graph algorithms); 2016, Apache Spark + GraphFrames (+ graph queries)

• Graph Algorithms vs. Graph Queries

Graph algorithms: PageRank, Alternating Least Squares. Graph queries: pattern matching.

• Graph Algorithms vs. Graph Queries

Graph algorithm: PageRank. Graph query: Wikipedia collaborators, i.e. pairs of editors (Editor 1, Editor 2) who edited the same articles (Article 1, Article 2) on the same day.

• Graph Algorithms vs. Graph Queries

Graph algorithm: PageRank

// Iterate until convergence
wikipedia.pregel(
  sendMsg = { e => e.sendToDst(e.srcRank * e.weight) },
  mergeMsg = _ + _,
  vprog = { (id, oldRank, msgSum) => 0.15 + 0.85 * msgSum })

Graph query: Wikipedia collaborators

wikipedia.find(
    "(u1)-[e11]->(article1); (u2)-[e21]->(article1); (u1)-[e12]->(article2); (u2)-[e22]->(article2)")
  .select("*", "e11.date e21.date".as("d1"), "e12.date e22.date".as("d2"))
  .sort("d1 + d2".desc)
  .take(10)

• Separate Systems: historically, graph algorithms and graph queries have lived in separate systems

• Raw Wikipedia: XML dumps are parsed into a text table and an edit table; the edit table yields an edit graph, from which frequent collaborators (user-user), vandalism suspects (user-article) and article text are derived

• Solution: GraphFrames

Graph algorithms and graph queries on Spark SQL, via the GraphFrames API and a pattern query optimizer

• GraphFrames API

Unifies graph algorithms, graph queries, and DataFrames. Available in Scala, Java, and Python.

class GraphFrame {
  def vertices: DataFrame
  def edges: DataFrame

  def find(pattern: String): DataFrame
  def registerView(pattern: String, df: DataFrame): Unit

  def degrees(): DataFrame
  def pageRank(): GraphFrame
  def connectedComponents(): GraphFrame
  ...
}

• Implementation

Query string → parsed pattern → logical plan → optimized logical plan → DataFrame result. Stages: graph-relational translation; join elimination and reordering against materialized views (Spark SQL); view selection; graph algorithms (GraphX).

• Graph-Relational Translation

A pattern over vertices A, B, C, D becomes joins of the existing logical plan (output: A, B, C) with the edge table (Src, Dst) on C = Src and the vertex table (ID, Attr) on D = ID.

• Materialized View Selection

GraphX: the triplet view enabled efficient message-passing algorithms (the vertices A-D and edges A-B, A-C, B-C, C-D join into a triplet view that feeds updated PageRanks).

• Materialized View Selection

GraphFrames: user-defined views enable efficient graph queries (the triplet view serves PageRank and community detection; user-defined views serve graph queries).

• Join Elimination

SELECT src, dst FROM edges INNER JOIN vertices ON src = id;

The unnecessary join can be eliminated if the tables satisfy referential integrity, simplifying graph-relational translation:

SELECT src, dst FROM edges;

• Join Reordering

An example query can be planned as a left-deep plan or as a bushy plan that reuses a user-defined view.

• Evaluation

Faster than Neo4j for unanchored pattern queries. [Bar charts: query latency (s) for an anchored and an unanchored pattern query, GraphFrames vs. Neo4j.] Triangle query on a 1M-edge subgraph of web-Google. Each system configured to use a single core.

• Evaluation

Approaches the performance of GraphX for graph algorithms using Spark SQL whole-stage code generation. [Bar chart: PageRank per-iteration runtime (s) for GraphFrames, GraphX and naïve Spark.] Per-iteration performance on web-Google, single 8-core machine. Naïve Spark uses the Scala RDD API.

• Evaluation

Registering the right views can greatly improve performance for some queries.

Workload: J. Huang, K. Venkatraman, and D.J. Abadi. Query optimization of distributed pattern matching. In ICDE 2014.

• Future Work

Suggest views automatically; exploit attribute-based partitioning in the optimizer; code generation for single node.

• Try It Out!

Released as a Spark Package at: https://github.com/graphframes/graphframes

Thanks to Joseph Bradley, Xiangrui Meng, and Timothy Hunter.

[email protected]
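The motif query in the slides (two editors who both edited the same articles) is executed by translating the pattern into relational self-joins of the edge table. A toy Python sketch of that translation, with made-up data and helper names (not the GraphFrames implementation itself):

```python
# Edge table for a tiny editor -> article graph (illustrative data).
edges = [
    ("ed1", "art1"), ("ed2", "art1"), ("ed3", "art1"),
    ("ed1", "art2"), ("ed2", "art2"),
]

def find_coeditors(edge_table):
    """One self-join of the edge table on the destination column:
    pattern (u1)-[]->(a); (u2)-[]->(a), with u1 < u2 to deduplicate."""
    matches = set()
    for u1, a1 in edge_table:         # scan "copy" 1 of the edge table
        for u2, a2 in edge_table:     # scan "copy" 2 (the self-join)
            if a1 == a2 and u1 < u2:  # join condition + dedup predicate
                matches.add((u1, u2, a1))
    return matches
```

A second join over the article column would then keep only editor pairs matched on two distinct articles, which is exactly where join reordering (left-deep vs. bushy plans) pays off.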
https://file.scirp.org/Html/3-4900347_57402.htm
Numerical Modeling of Non-Similar Mixed Convection Heat Transfer over a Stretching Surface with Slip Conditions

World Journal of Mechanics
Vol. 05, No. 06 (2015), Article ID: 57402, 11 pages
10.4236/wjm.2015.56013

A. Subba Rao1*, V. R. Prasad1, N. Nagendra1, K. V. N. Murthy1, N. Bhaskar Reddy2, O. Anwar Beg3

1Department of Mathematics, Madanapalle Institute of Technology and Science, Madanapalle, India

2Department of Mathematics, Sri Venkateswara University, Tirupathi, India

3Gort Engovation (Aerospace, Medical and Energy Engineering), Bradford, UK

Email: *[email protected]

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 29 April 2015; accepted 22 June 2015; published 25 June 2015

ABSTRACT

In this paper, the heat transfer effect on the steady boundary layer flow of a Casson fluid past a stretching surface in the presence of slip conditions is analyzed. The stretching surface is maintained at a constant temperature. The boundary layer conservation equations, which are parabolic in nature, are normalized into non-similar form and then solved numerically with the well-tested, efficient, implicit, stable Keller-box finite difference scheme, and the expressions for velocity and temperature are obtained. These satisfy all imposed initial and boundary conditions and reduce to some well-known solutions for non-Newtonian fluids. Numerical results for velocity, temperature, skin friction and Nusselt number are shown in various graphs and discussed for the embedded flow parameters.
It is found that both velocity and temperature decrease with an increase of the Casson fluid parameter.

Keywords:

Stretching Surface, Non-Newtonian Fluid, Slip Condition, Keller-Box Numerical Method, Heat Transfer, Skin Friction Coefficient

1. Introduction

Heat transfer in non-Newtonian fluids is an important research area due to its wide applications in food processing, petroleum production, cooling of an infinite metallic plate in a cooling bath and in many industries, for example polymer melts, polymer solutions employed in plastic processing, a long thread traveling between a feed roll and a wind-up roll, etc. Flow in the boundary layer on a moving solid surface was historically first investigated by Sakiadis, who observed that the boundary layer growth is in the direction of motion of the continuous solid surface and deviates from that of the classical Blasius flow past a flat plate. Erickson et al. extended the Sakiadis problem to include blowing or suction at the moving surface and investigated its effects on the heat and mass transfer in the boundary layer. Gireesha et al. obtained closed-form similarity solutions for steady two-dimensional incompressible boundary layer flow caused by a stretching sheet with a non-uniform heat source/sink. Combined forced and free convection in boundary layers adjacent to a continuous horizontal sheet, maintained at a constant temperature and moving with a constant velocity, was investigated numerically by Chen and Strobel. Grubka and Bobba investigated the stretching sheet problem for a surface moving with linear velocity and with a variable surface temperature. Dutta et al. studied numerically the temperature field in flow over a stretching surface with uniform heat flux. Chen and Char investigated the effects of variable surface temperature and variable heat flux on the heat transfer characteristics of a linearly stretching sheet subject to blowing or suction.
Several excellent studies of stretching flows in materials processing were presented by Karwe and Jaluria. Patil et al. further analysed unsteady two-dimensional mixed convection flow along a vertical semi-infinite power-law stretching sheet in a parallel free stream with a power-law temperature distribution. Nath et al. analysed the three-dimensional, time-dependent stretching surface flow. Ali and Al-Yousef analysed mixed convection heat transfer from a uniformly stretching vertical surface with a power-function form for the wall temperature. Partha et al. described the effects of viscous dissipation on mixed convection heat transfer from an exponentially stretching surface.

Non-Newtonian transport phenomena arise in many branches of chemical and material processing engineering. Such fluids exhibit shear-stress-strain relationships which diverge significantly from the Newtonian (Navier-Stokes) model. Most non-Newtonian models involve some form of modification to the momentum conservation equations. These include power-law, thixotropic and viscoelastic fluids (Schowalter). Such rheological models, however, cannot simulate the microstructural characteristics of many important liquids including polymer suspensions, liquid crystal melts, physiological fluids, contaminated lubricants, etc. Several fluids in chemical engineering, multiphase mixtures, pharmaceutical formulations, china clay and coal in water, paints, synthetic lubricants, saliva, synovial fluid, jams, soups, jellies, marmalades, sewage sludge, etc. are non-Newtonian. The constitutive relations for these kinds of fluids give rise to more complex and higher order equations than the Navier-Stokes equations. Considerable progress has been made on the topic by using different models of non-Newtonian fluids. Previous studies indicate that not much has been presented yet regarding Casson fluid. This model (Casson; Nakamura et al.
; Samir Kumar) is in fact a plastic fluid model that exhibits shear-thinning characteristics and that quantifies yield stress and high shear viscosity. The Casson fluid model reduces to a Newtonian fluid at a very high wall shear stress, i.e. when the wall stress is much greater than the yield stress. This model gives good approximations for many substances such as biological materials, foams, molten chocolate, cosmetics, nail polish, some particulate suspensions, etc. The boundary layer behaviour of viscoelastic fluids has technical applications in engineering such as glass fibre and paper production, the manufacture of foods, the aerodynamic extrusion of plastic sheets, polymer extrusion in a melt spinning process and many others.

Most of the existing studies on steady boundary layer flow and heat transfer with slip conditions are limited to Newtonian fluids. The considered slip conditions are especially important for non-Newtonian fluids such as polymer melts, which often exhibit wall slip. This motivates us to consider slip conditions in the present work for non-Newtonian fluids. More exactly, our aim is to investigate the steady boundary layer flow and heat transfer of a Casson fluid past a stretching sheet with slip conditions. The equations of the problem are first formulated and then transformed into their dimensionless forms, and the Keller-box method is applied to obtain solutions for the velocity, temperature, skin friction and Nusselt number.

2. Mathematical Analysis

We consider steady two-dimensional laminar mixed convection heat transfer flow along a stretching surface with partial slip. By applying two equal and opposing forces along the x-axis, the sheet is stretched with a speed proportional to the distance from the fixed origin x = 0, as shown in Figure 1. It is also assumed that the external electric field is zero and that the electric field due to the polarization of charges is negligible. The temperature is

(a) (b)

Figure 1.
(a) Physical model and coordinate system; (b) Grid meshing and a Keller box computational cell.

maintained at a prescribed constant value. The fluid properties are assumed to be constant except for the density variation in the buoyancy force term.

The rheological equation of state for an isotropic flow of a Casson fluid is (Nakamura et al.):

τ_ij = 2(μ_B + p_y/√(2π)) e_ij,  π > π_c
τ_ij = 2(μ_B + p_y/√(2π_c)) e_ij,  π < π_c  (1)

in which π = e_ij e_ij and e_ij represents the (i, j)th component of the deformation rate, μ is the dynamic viscosity, π denotes the product of the component of the deformation rate with itself, π_c shows a critical value of this product based on the non-Newtonian model, μ_B represents the plastic dynamic viscosity of the non-Newtonian fluid and p_y is the yield stress of the fluid.

Under the usual Boussinesq and boundary layer approximations, the equations for mass continuity, momentum and energy can be written in the following form:

∂u/∂x + ∂v/∂y = 0  (2)

u ∂u/∂x + v ∂u/∂y = ν(1 + 1/β) ∂²u/∂y² + g β₁(T - T∞)  (3)

u ∂T/∂x + v ∂T/∂y = α ∂²T/∂y²  (4)

where u and v are the velocity components in the x- and y-directions, ν is the kinematic viscosity of the conducting fluid, β is the non-Newtonian Casson parameter, α is the thermal diffusivity, and T is the temperature.

The boundary conditions are prescribed at the stretching surface and at the edge of the boundary layer regime, respectively, as follows:

(5)

where N0 is the velocity slip factor and K0 is the thermal slip factor. For N0 = K0 = 0, one can recover the no-slip case. The stream function ψ is defined by u = ∂ψ/∂y and v = -∂ψ/∂x, and therefore the continuity equation is automatically satisfied.
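With the usual stream function convention u = ∂ψ/∂y, v = -∂ψ/∂x invoked above, continuity is satisfied identically for any smooth ψ. The following numeric check is purely illustrative and not part of the paper's solver:

```python
def psi(x, y):
    # Arbitrary smooth test stream function (an assumption for this check).
    return x**2 * y**3 + 0.5 * x * y

def partial(f, x, y, wrt, h=1e-4):
    """Second-order central-difference partial derivative of f(x, y)."""
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

def continuity_residual(x, y):
    u = lambda a, b: partial(psi, a, b, "y")    # u = dpsi/dy
    v = lambda a, b: -partial(psi, a, b, "x")   # v = -dpsi/dx
    # du/dx + dv/dy reduces to the mixed partial of psi minus itself.
    return partial(u, x, y, "x") + partial(v, x, y, "y")

# continuity_residual(1.3, 0.7) is zero up to round-off
```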
In order to write the governing equations and the boundary conditions in dimensionless form, the following non-dimensional quantities are introduced.

(6)

where ξ is the dimensionless streamwise coordinate, f is the dimensionless stream function, θ is the temperature function, Re is the local Reynolds number, Gr is the local thermal Grashof number, β₁ is the coefficient of thermal expansion, ρ is the density of the fluid, Pr is the Prandtl number, and T∞ is the free stream temperature. The local mixed convection parameter is small near the leading edge, where forced convection dominates, and large when the buoyancy force dominates the flow field. The stretching velocity of the surface obeys the relation:

u_w = U0 x  (7)

where U0 is a constant.

In view of Equations (6) and (7), Equations (2)-(4) reduce to the following coupled, nonlinear, dimensionless partial differential equations for momentum and energy for the regime:

(8)

(9)

The transformed dimensionless boundary conditions are:

(10)

In the above equations, the primes denote differentiation with respect to η, the dimensionless transverse coordinate; ξ is the dimensionless tangential coordinate; and Sf and ST are the non-dimensional velocity and thermal slip parameters, respectively. Here we assume the typical values K0 = 0.5, N0 = 0.25 for determining the non-dimensional velocity and thermal slip parameters.

The engineering design quantities of physical interest include the skin-friction coefficient and the Nusselt number, which are given by:

(11)

(12)

3. Numerical Solution

In this study the efficient Keller-box implicit difference method has been employed to solve the general flow model defined by Equations (8)-(9) with boundary conditions (10); therefore a more detailed exposition is presented here. This method was originally developed for low-speed aerodynamic boundary layers by Keller and has been employed in a diverse range of coupled heat transfer problems. These include Ramachandra Prasad et al.
, Rao et al. and Beg et al.

Essentially 4 phases are central to the Keller-box scheme. These are:

a) Reduction of the Nth order partial differential equation system to N first order equations;

b) Finite difference discretization;

c) Quasilinearization of the non-linear Keller algebraic equations;

d) Block-tridiagonal elimination of the linear Keller algebraic equations.

Phase a: Reduction of the Nth order partial differential equation system to N first order equations

Equations (8)-(9) subject to the boundary conditions (10) are first written as a system of first-order equations. For this purpose, we recast Equations (8)-(9) as a set of simultaneous equations by introducing the new variables u, v and t:

(13)

(14)

(15)

(16)

(17)

In terms of the dependent variables, the boundary conditions become:

(18)

Phase b: Finite difference discretization

A two-dimensional computational grid is imposed on the ξ-η plane as sketched in Figure 1(b). The stepping process is defined by:

(19)

(20)

where kn and hj denote the step distances in the ξ and η directions respectively.

If g_j^n denotes the value of any variable at (ξ^n, η_j), then the variables and derivatives of Equations (13)-(17) at the mid-point (ξ^(n-1/2), η_(j-1/2)) are replaced by:

(a) (b)

Figure 2. (a) Influence of Sf on the velocity; (b) Influence of Sf on the temperature.

(21)

(22)

(23)

We now state the finite-difference approximation of Equations (13)-(17) for the mid-point below:

(24)

(25)

(26)

(27)

(28)

where we have used the abbreviations

(29)

(30)

(31)

The boundary conditions are

(32)

Phase c: Quasilinearization of non-linear Keller algebraic equations

If we assume the solution to be known for ξ = ξ^(n-1), Equations (24)-(28) form a system of 5J + 5 equations for the solution of 5J + 5 unknowns. This non-linear system of algebraic equations is linearized by means of Newton's method, as explained in Keller and Prasad et al.
Phase d: Block-tridiagonal elimination of linear Keller algebraic equations

The linear system (24)-(28) can now be solved by the block-elimination method, since it possesses a block-tridiagonal structure. Commonly, the block-tridiagonal structure consists of variables or constants, but here an interesting feature can be observed, namely that it consists of block matrices. The complete linearized system is formulated as a block matrix system, where each element in the coefficient matrix is itself a matrix. Then, this system is solved using the efficient Keller-box method. The numerical results are affected by the number of mesh points in both directions. After some trials in the η-direction (radial coordinate) a larger number of mesh points is selected, whereas in the ξ-direction (tangential coordinate) significantly fewer mesh points are utilized. ηmax has been set at 10, which defines an adequately large value at which the prescribed boundary conditions are satisfied. ξmax is set at 3.0 for this flow domain. Mesh independence is therefore achieved in the present computations. The computer program of the algorithm is executed in MATLAB running on a PC. The method demonstrates excellent stability, convergence and consistency, as elaborated by Keller, and the system follows Cebeci and Bradshaw.

4. Results and Discussions

Comprehensive solutions have been obtained and are presented in Figures 2-7. The numerical problem comprises 2 independent variables (ξ, η), 2 dependent fluid dynamic variables and 5 thermophysical and

(a) (b)

Figure 3. (a) Influence of ST on the velocity; (b) Influence of ST on the temperature.

(a) (b)

Figure 4. (a) Influence of β on the velocity; (b) Influence of β on the temperature.

(a) (b)

Figure 5. (a) Effect of Sf on the skin-friction coefficient results; (b) Effect of Sf on the Nusselt number results.

(a) (b)

Figure 6.
(a) Influence of ξ on the velocity; (b) Influence of ξ on the temperature.

(a) (b)

Figure 7. (a) Influence of Pr on the velocity; (b) Influence of Pr on the temperature.

body force control parameters. In the present computations, the following default parameters are prescribed (unless otherwise stated): Pr = 0.71, Sf = 0.5, ST = 1.0, β = 1.0, ξ = 1.0.

In Figure 2(a) and Figure 2(b), the influence of the velocity slip parameter on velocity and temperature is illustrated. In Figure 2(a) the dimensionless velocity component at the wall decreases with an increase in the slip parameter, and hence there is a decrease in the boundary layer thickness. The velocity profiles are damped out somewhat more slowly for larger values of the slip parameter, because the profiles intersect one another. Figure 2(b) indicates that an increase in the slip parameter tends to increase temperature in the flow field. By increasing Sf, the thermal boundary layer thickness is enhanced.

The variation of velocity and temperature with the transverse coordinate (η), over the thermal slip parameter ST, is illustrated in Figure 3(a) and Figure 3(b). The response of velocity is much more consistent than for the case of changing velocity slip parameter: it is strongly decreased for all locations in the radial direction. The peak velocity accompanies the case of no thermal slip (ST = 0). The maximum deceleration corresponds to the case of strongest thermal slip (ST = 3). Temperatures (Figure 3(b)) are also strongly depressed with increasing thermal slip. The maximum effect is observed at the wall. Further into the free stream, all temperature profiles converge smoothly to the vanishing value. Figure 4(a) and Figure 4(b) depict the effect of the Casson fluid parameter β on velocity and temperature. An increase in the non-Newtonian Casson parameter (β) produces resistance in the fluid flow.
An increase in β implies a decrease in the yield stress of the Casson fluid and an increase in the value of the plastic dynamic viscosity; this effect creates resistance in the flow of the fluid. It is further noted that velocity decreases as the Casson fluid parameter increases. In Figure 4(b) it is shown that increasing β causes decreases in temperature. The effects of the velocity slip parameter Sf on the stretching surface shear stress and the local Nusselt number variation are presented in Figure 5(a) and Figure 5(b). In consistency with the earlier graphs described for velocity evolution, with an increase in Sf, wall shear stress is consistently reduced, i.e. the flow is decelerated along the stretching surface. The impact of wall slip is therefore significant on the boundary layer characteristics of Casson flow from a surface. With increasing Sf, the local Nusselt number is also considerably decreased, and the profiles are generally monotonic decays. The maximum local Nusselt number always arises at the stretching surface and is minimized at greater distances from the stretching surface. In both Figure 5(a) and Figure 5(b), the skin friction coefficient and local Nusselt number are maximized for the case of no slip, i.e. Sf = 0. In Figure 6(a) and Figure 6(b), the variation of the velocity and temperature fields with different ξ values is shown. Velocity is found to be maximized closer to the stretching surface and minimized with progressive distance away from it, i.e. the flow is decelerated with increasing ξ. However, further from the wall, a marked acceleration in the flow is generated with greater distance from the surface, i.e. velocity values are higher for higher values of ξ. Temperature θ is found to noticeably decrease through the boundary layer with increasing ξ values; as such, the fluid regime is cooled most efficiently at the stretching surface and heated increasingly as we progress around the stretching surface periphery upwards.
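In the standard Casson boundary layer formulation, the viscous diffusion term is scaled by the factor (1 + 1/β), so the deceleration with growing β discussed above corresponds to this factor decaying toward the Newtonian limit of 1. A quick numeric illustration (a sketch, not from the paper's code):

```python
def casson_factor(beta):
    """Shear-term multiplier (1 + 1/beta) in the Casson momentum equation."""
    return 1.0 + 1.0 / beta

# The factor decreases monotonically toward 1 as beta grows,
# recovering the Newtonian fluid in the limit beta -> infinity.
factors = [casson_factor(b) for b in (0.5, 1.0, 2.0, 5.0)]  # 3.0, 2.0, 1.5, 1.2
```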
The effect of Prandtl number (Pr) on the primitive flow variables of velocity and temperature is shown in Figure 7(a) and Figure 7(b). The Prandtl number signifies the ratio of viscous diffusion to thermal diffusion in the boundary layer regime. With greater Pr values, the viscous diffusion rate exceeds the thermal diffusion rate. An increase in Pr from 0.7 through 1.0, 2.0, 4.0 and 5.4 to 7.0 strongly depresses velocities (Figure 7(a)) in the regime. For Pr < 1, thermal diffusivity exceeds momentum diffusivity, i.e. heat will diffuse faster than momentum. For Pr = 1.0, both the viscous and energy diffusion rates will be the same, as will the thermal and velocity boundary layer thicknesses. With increasing Pr values, temperature, as shown in Figure 7(b), is markedly reduced throughout the boundary layer.

To validate the present solutions, we compare the present model with the earlier Newtonian model of Merkin, and excellent agreement with the previous results is observed, as shown in Table 1.

5. Conclusions

In this study, numerical solutions have been presented for flow and heat transfer of a Casson fluid from a permeable isothermal stretching surface with partial slip. The model has been developed to simulate food stuff transport
A significant finding of this study is that flow separation can be controlled by increasing the value of Casson fluid parameter as well as by increasing Prandtl number.\n\nThe current study has been confined to steady-state flow i.e. ignored transient effects and neglected thermal radiation heat transfer effects . Generally, very stable and accurate solutions are obtained with the present finite difference code and it is envisaged that other non-Newtonian flows will be studied using this methodology in the future, including Maxwell upper convected fluids , and couple stress fluids . These aspects are also of relevance to rheological food processing simulations and will be considered in future investigations.\n\nAcknowledgements\n\nThe authors are grateful to the reviewers for giving their constructive comments for improving this article. The work is supported by the University Grants Commission-SERO. The authors are thankful to UGC-NEWDELHI, S.V University, Tirupati and management of MITS, Madanapalle.\n\nReferences\n\n1. Sakiadis, B.C. (1961) Boundary-Layer Behavior on Continuous Solid Surfaces: I. Boundary-Layer Equations for Two- Dimensional and Axisymmetric Flow. AIChE Journal, 7, 26-28. http://dx.doi.org/10.1002/aic.690070108\n2. Sakiadis, B.C. (1961) Boundary-Layer Behavior on Continuous Solid Surfaces: II. Boundary-Layer Equations on a Continuous Flat Surface. AIChE Journal, 7, 221-225. http://dx.doi.org/10.1002/aic.690070211\n3. Erickson. L.E., Fan. L.T. and Fox, V.G. (1966) Heat and Mass Transfer on Moving Continuous Flat Plate with Suction or Injection. Industrial Engineering Chemistry Fundamentals, 5, 19-25.\n4. Gireesha, B.J., Roopa, G.S. and Bagewadi, C.S. (2011) Boundary Layer Flow of an Unsteady Dusty Fluid and Heat Transfer over a Stretching Sheet with Non-Uniform Heat Source/Sink. Scientific Research, 3, 726-735. http://dx.doi.org/10.4236/eng.2011.37087\n5. Chen, T.S. and Strobel, F.A. 
(1980) Buoyancy Effects in Boundary Layer Adjacent to a Continuous, Moving Horizontal Flat Plate. Journal of Heat Transfer, 102, 170-172. http://dx.doi.org/10.1115/1.3244232\n6. Grubka, L.J. and Bobba, K.M. (1985) Heat Transfer Characteristics of a Continuous, Stretching Surface with Variable Temperature. ASME J. Heat Transfer, 107, 248-250. http://dx.doi.org/10.1115/1.3247387\n7. Dutta, B.K., Roy, P. and Gupta, A.S. (1985) Temperature Field in Flow over a Stretching Surface with Uniform Heat Flux. International Communications in Heat and Mass Transfer, 12, 89-94. http://dx.doi.org/10.1016/0735-1933(85)90010-7\n8. Chen, C.K. and Char, M.I. (1988) Heat Transfer of a Continuous Stretching Surface with Suction or Blowing. Journal of Mathematical Analysis and Applications, 135, 568-580. http://dx.doi.org/10.1016/0022-247X(88)90172-2\n9. Karwe, M.V. and Jaluria, Y. (1988) Fluid Flow and Mixed Convection Transport from a Moving Plate in Rolling and Extrusion Processes. ASME J. Heat Transfer, 110, 655-661. http://dx.doi.org/10.1115/1.3250542\n10. Karwe, M.V. and Jaluria, Y. (1991) Numerical Simulation of Thermal Transport Associated With a Continuously Moving Flat Sheet in Materials Processing. ASME J. Heat Transfer, 113, 612-619. http://dx.doi.org/10.1115/1.2910609\n11. Patil, P.M., Roy, S. and Pop, I. (2010) Unsteady Mixed Convection Flow over a Vertical Stretching Sheet in a Parallel Free Stream with Variable Wall Temperature. International Journal of Heat and Mass Transfer, 53, 4741-4748. http://dx.doi.org/10.1016/j.ijheatmasstransfer.2010.06.018\n12. Rajeswari, V., Kumari, M. and Nath, G. (1993) Unsteady Three-Dimensional Boundary Layer Flow Due to a Stretching Surface. Actamechanica, 98, 123-141.\n13. Ali, M. and Al-Yousef, F. (2002) Laminar Mixed Convection Boundary Layers Induced by a Linearly Stretching Permeable Surface. International Journal of Heat and Mass Transfer, 45, 4241-4250. http://dx.doi.org/10.1016/S0017-9310(02)00142-4\n14. 
Partha, M.K., Murthy, P.V.S.N. and Rajasekhar, G.P. (2005) Effect of Viscous Dissipation on the Mixed Convection Heat Transfer from an Exponentially Stretching Surface. Heat and Mass Transfer, 41, 360-366. http://dx.doi.org/10.1007/s00231-004-0552-2\n15. Schowalter, W.R. (1978) Mechanics of Non-Newtonian Fluids. Pergamon Press, New York.\n16. Rana, P. and Bhargava, R. (2012) Flow and Heat Transfer of a Nanofluid over a Nonlinearly Stretching Sheet: A Numerical Study. Communications in Nonlinear Science and Numerical Simulation, 17, 212-226. http://dx.doi.org/10.1016/j.cnsns.2011.05.009\n17. Nazar, M., Fetecau, C., Vieru, D. and Fetecau, C. (2010) New Exact Solutions Corresponding to the Second Problem of Stokes for Second Grade Fluids. Nonlinear Analysis: Real World Applications, 11, 584-591. http://dx.doi.org/10.1016/j.nonrwa.2008.10.055\n18. Fetecau, C., Hayat, T., Zierep, J. and Sajid, M. (2011) Energetic Balance for the Rayleigh―Stokes problem of an Oldroyd-B fluid. Nonlinear Analysis: Real World Applications, 12, 1-13. http://dx.doi.org/10.1016/j.nonrwa.2009.12.009\n19. Wang, S.W. and Tan, W.C. (2008) Stability Analysis of Double-Diffusive Convection of Maxwell Fluid in a Porous Medium Heated from Below. Physics Letters A, 372, 3046-3050. http://dx.doi.org/10.1016/j.physleta.2008.01.024\n20. Tan, W.C. and Xu, M.Y. (2004) Unsteady Flows of a Generalized Second Grade Fluid with the Fractional Derivative Model between Two Parallel Plates. Acta Mechanica Sinica, 20, 471-476.\n21. Zhang, Z.Y., Fu, C.J., Tan, W.C. and Wang, C.Y. (2007) On Set of Oscillatory Convection in a Porous Cylinder Saturated with a Viscoelastic Fluid. Physics of Fluids, 19, 98-104.\n22. Rashidi, M.M., Chamkha, A.J. and Keimanesh, M. (2011) Application of Multi-Step Differential Transform Method on Flow of a Second Grade Fluid over a Stretching or Shrinking Sheet. American Journal of Computational Mathematics, 6, 119-128. http://dx.doi.org/10.4236/ajcm.2011.12012\n23. Ali, N., Hayat, T. 
and Asghar, S. (2009) Peristaltic Flow of Maxwell Fluid in a Channel with Compliant Walls. Chaos, Solitons & Fractals, 39, 407-416. http://dx.doi.org/10.1016/j.chaos.2007.04.010\n24. Attia, H.A. and Seddeek, M.A. (2007) On the Effectiveness of Uniform Suction or Injection on Two Dimensional Stagnation-Point Flow towards a Stretching Surface with Heat Generation. Chemical Engineering Communications, 194, 553-564. http://dx.doi.org/10.1080/00986440600992537\n25. Hussain, M., Hayat, T., Asghar, S. and Fetecau, C. (2010) Oscillatory Flows of Second Grade Fluid in a Porous Space. Nonlinear Analysis: Real World Applications, 11, 2403-2414. http://dx.doi.org/10.1016/j.nonrwa.2009.07.016\n26. Casson, N. (1959) In Reheology of Dipersed System. Peragamon Press, Oxford.\n27. Nakamura, M. and Sawada, T. (1988) Numerical Study on the Flow of a Non-Newtonian Fluid through an Axisymmetric Stenosis. Journal of Biomechanical Engineering, 110, 137-143. http://dx.doi.org/10.1115/1.3108418\n28. Samir Kumar, N. (2013) Analytical Solution of MHD Stagnation-Point Flow and Heat Transfer of Casson Fluid over a Stretching Sheet with Partial Slip. ISRN Thermodynamics, 2013, Article ID: 108264.\n29. Keller, H.B. (1970) A New Difference Method for Parabolic Problems. In: Bramble, J., Ed., Numerical Methods for Partial Differential Equations, Academic Press, New York, 327-350.\n30. Prasd, V.R., Vasu, B. and Beg, O.A. (2011) Thermo-Diffusion and Diffusion-Thermo Effects on Boundary Layer Flows. LAP Lambert Academic Publishing GmbH & Co. KG, Saarbrücken.\n31. Rao, A.S., Prasad, V.R., Reddy, N.B. and Bég, O.A. (2013) Heat Transfer in a Casson Rheological Fluid from a Semi-infinite Vertical Plate with Partial Slip. Heat Transfer-Asian Research, 44, 272-291. http://dx.doi.org/10.1002/htj.21115\n32. Bég, O.A., Prasad, V.R., Vasu, B., Reddy, N.B., Li, Q. and Bhargava, R. (2011) Free Convection Heat and Mass Transfer from an Isothermal Sphere to a Micropolar Regime with Soret/Dufour Effects. 
International Journal of Heat and Mass Transfer, 54, 9-18. http://dx.doi.org/10.1016/j.ijheatmasstransfer.2010.10.005\n33. Prasad, V.R., Rao, A.S., Reddy, N.B., Vasu, B. and Beg, O.A. (2013) Modelling Laminar Transport Phenomena in a Casson Rheological Fluid from a Horizontal Circular Cylinder with Partial Slip. Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering, 227, 309-326. http://dx.doi.org/10.1177/0954408912466350\n34. Cebeci, T. and Bradshaw, P. (1984) Physical and Computational Aspects of Convective Heat Transfer. Springer, New York. http://dx.doi.org/10.1007/978-3-662-02411-9\n35. Merkin, J.H. (1977) Free Convection Boundary Layers on Cylinders of Elliptic Cross Section. Journal of Heat Transfer, 99, 453-457. http://dx.doi.org/10.1115/1.3450717\n36. Prasad, V.R., Vasu, B., Prashad, D.R. and Bég, O.A. (2012) Thermal Radiation Effects on Magneto-Hydrodynamic Heat and Mass Transfer from a Horizontal Cylinder in a Variable Porosity Regime. Journal of Porous Media, 15, 261- 281. http://dx.doi.org/10.1615/JPorMedia.v15.i3.50\n37. B´eg, O.A. and Makinde, O.D. (2011) Viscoelastic Flow and Species Transfer in a Darcian High-Permeability Channel. Journal of Petroleum Science and Engineering, 76, 93-99. http://dx.doi.org/10.1016/j.petrol.2011.01.008\n38. Kairi, R.R. and Murthy, P.V.S.N. (2012) Effect of Melting on Mixed Convection Heat and Mass Transfer in a Non-Newtonian Fluid Saturated Non-Darcy Porous Medium. Journal of Heat Transfer, 134, Article ID: 042601.\n\nNOTES\n\n*Corresponding author." ]
http://atcoder.noip.space/contest/abc052/a
Score: $100$ points

### Problem Statement

There are two rectangles. The lengths of the vertical sides of the first rectangle are $A$, and the lengths of the horizontal sides of the first rectangle are $B$. The lengths of the vertical sides of the second rectangle are $C$, and the lengths of the horizontal sides of the second rectangle are $D$.

Print the area of the rectangle with the larger area. If the two rectangles have equal areas, print that area.

### Constraints

• All input values are integers.
• $1≤A≤10^4$
• $1≤B≤10^4$
• $1≤C≤10^4$
• $1≤D≤10^4$

### Input

The input is given from Standard Input in the following format:

$A$ $B$ $C$ $D$

### Output

Print the area of the rectangle with the larger area. If the two rectangles have equal areas, print that area.

### Sample Input 1

3 5 2 7

### Sample Output 1

15

The first rectangle has an area of $3×5=15$, and the second rectangle has an area of $2×7=14$. Thus, the output should be $15$, the larger area.

### Sample Input 2

100 600 200 300

### Sample Output 2

60000
https://www.proprofs.com/quiz-school/story.php?title=integers-all-operations
# Integers - All Operations

10 Questions | Total Attempts: 368

1. On the hottest day in Richmond, Virginia, the temperature was 105 degrees F. On the coldest day, the temperature was 117 degrees F lower. What was the coldest temperature in Richmond?
   - A. -117 degrees F
   - B. -12 degrees F
   - C. 12 degrees F
   - D. 117 degrees F
2. Listed below are temperatures for four cities. Using this information, determine which statement is true.
   - Denver: -5 degrees F
   - Chicago: -12 degrees F
   - New York City: -8 degrees F
   - Minneapolis: -15 degrees F

   Choices:
   - A. Chicago is colder than Denver, but warmer than New York City.
   - B. New York City is colder than Chicago and Minneapolis.
   - C. Denver is warmer than New York City and Minneapolis.
   - D. Denver is the coldest of the four cities listed.
3. Bryan swam to a depth of 29 feet below sea level while diving. He descended another 31 feet. He then rose up 16 feet. Which process could be used to find Bryan's current depth?
   - (The answer choices were presented as images in the original quiz.)
4. ABC Company lost \$21,000 in the first quarter of the year. They lost the same amount each of the other three quarters. Which integer represents the amount they lose for the year? (Think about how many quarters there are in a whole year!)
   - A. \$63,000
   - B. -\$63,000
   - C. \$84,000
   - D. -\$84,000
5. Over a 4-month period, the total change in the average high temperature was -32 degrees. What was the average change per month?
   - A. 8 degrees
   - B. -128 degrees
   - C. -28 degrees
   - D. -8 degrees
6. Which comparison is NOT true?
   - A. -15 > -13
   - B. -7 < 6
   - C. -23 < 22
   - D. 8 > -8
7. The entrance to a mine is 1,890 feet above sea level. The bottom of the mine is 3,642 feet below the entrance. What is the elevation at the bottom of the mine?
   - A. 1,752 feet
   - B. 5,532 feet
   - C. -1,752 feet
   - D. -5,532 feet
8. Erin dives from the surface of a lake at a rate of 3 meters per minute. At what depth, relative to the surface, is Erin after diving for 10 minutes?
   - A. -13 meters
   - B. -3.3 meters
   - C. -7 meters
   - D. -30 meters
9. One day in Butte, Montana the temperature started out at 50 degrees. Over the course of the day, the temperature dropped 4 degrees per hour for 6 consecutive hours. Which procedure could be used to determine the temperature after 6 hours?
   - A. Divide 6 by -4. Add the quotient to 50.
   - B. Divide 50 by 6. Add the quotient to -4.
   - C. Multiply 4 by 6. Add the product to 50.
   - D. Multiply -4 by 6. Add the product to 50.
10. Billy has stock in Walmart. The ending balances on the stock for the last five days were: +\$5, +\$2, -\$4, -\$8 and -\$10. What was the average daily change in Billy's stock?
    - A. -\$3
    - B. -\$15
    - C. \$29
    - D. -\$6
http://www.modernescpp.com/index.php/component/jaggyblog/templates-misconceptions-and-surprises
# Templates: Misconceptions and Surprises

I often teach the basics of templates. Templates are special. Therefore, I encounter many misconceptions which cause surprises. Here are a few of them.

## Templates of Related Types are not Related

My first misconception is presumably obvious for many but not for all C++ developers.

First of all, what does related type mean? This is my informal term which stands for types that can be implicitly converted. Here is the starting point.

```
// genericAssignment.cpp

#include <vector>

template <typename T, int N>                  // (1)
struct Point{
    Point(std::initializer_list<T> initList): coord(initList){}

    std::vector<T> coord;
};

int main(){

    Point<int, 3> point1{1, 2, 3};
    Point<int, 3> point2{4, 5, 6};

    point1 = point2;                          // (2)

    auto doubleValue = 2.2;
    auto intValue = 2;
    doubleValue = intValue;                   // (3)

    Point<double, 3> point3{1.1, 2.2, 3.3};
    point3 = point2;                          // (4)

}
```

The class template Point stands for a point in an n-dimensional space. The type of the coordinates and the dimension can be adjusted (line 1). The coordinates are stored in a std::vector<T>. When I create two points with the same coordinate type and dimension, I can assign them (line 2).

Now the misconception begins. You can assign an int to a double (line 3). Therefore, it should be possible to assign a Point of ints to a Point of doubles. The C++ compiler is quite specific about line 4: both class templates are not related and cannot be assigned. They are different types.

The error message gives the first hint: I need an assignment operator that supports the conversion from Point<int, 3> to Point<double, 3>.
The class template now has a generic copy assignment operator.

```
// genericAssignment2.cpp

#include <algorithm>
#include <iostream>
#include <string>
#include <type_traits>
#include <vector>

template <typename T, int N>
struct Point{

    Point(std::initializer_list<T> initList): coord(initList){}

    template <typename T2>
    Point<T, N>& operator=(const Point<T2, N>& point){        // (1)
        static_assert(std::is_convertible<T2, T>::value,
                      "Cannot convert source type to destination type!");
        coord.clear();
        coord.insert(coord.begin(), point.coord.begin(), point.coord.end());
        return *this;
    }

    std::vector<T> coord;

};

int main(){

    Point<double, 3> point1{1.1, 2.2, 3.3};
    Point<int, 3> point2{1, 2, 3};

    Point<int, 2> point3{1, 2};
    Point<std::string, 3> point4{"Only", "a", "test"};

    point1 = point2;     // (3)

    // point2 = point3;  // (4)
    // point2 = point4;  // (5)

}
```

Due to line (1), the copy assignment in line (3) works. Let's have a closer look at the class template Point:

• Point<T, N>& operator=(const Point<T2, N>& point): The target of the assignment is of type Point<T, N> and accepts only a Point with the same dimension N; its element type may vary: Point<T2, N>.
• static_assert(std::is_convertible<T2, T>::value, "Cannot convert source type to destination type!"): This expression checks, with the help of the function std::is_convertible from the type-traits library, whether T2 can be converted to T.

When I use the lines (4) and (5), the compilation fails:

Line (4) gives an error because both points have a different dimension.
Line (5) triggers the static_assert in the assignment operator because a std::string is not convertible to an int.

I assume the next misconception has more surprise potential.

## Methods inherited from Class Templates are per se not available

Let's start simple.

```
// inheritance.cpp

#include <iostream>

class Base{
public:
    void func(){                    // (1)
        std::cout << "func" << std::endl;
    }
};

class Derived: public Base{
public:
    void callBase(){
        func();                     // (2)
    }
};

int main(){

    std::cout << std::endl;

    Derived derived;
    derived.callBase();

    std::cout << std::endl;

}
```

I implemented the classes Base and Derived. Derived is publicly derived from Base and can, therefore, use in its method callBase (line 2) the method func from class Base. Okay, I have nothing to add to the output of the program.

Making Base a class template totally changes the behaviour.

```
// templateInheritance.cpp

#include <iostream>

template <typename T>
class Base{
public:
    void func(){                    // (1)
        std::cout << "func" << std::endl;
    }
};

template <typename T>
class Derived: public Base<T>{
public:
    void callBase(){
        func();                     // (2)
    }
};

int main(){

    std::cout << std::endl;

    Derived<int> derived;
    derived.callBase();

    std::cout << std::endl;

}
```

I assume the compiler error may surprise you.

The line "there are no arguments to 'func' that depend on a template parameter, so a declaration of 'func' must be available" from the error message gives the first hint. func is a so-called non-dependent name because its name does not depend on the template parameter T. The consequence is that the compiler does not look in the T-dependent base class Base<T>, and there is no name func available outside the class template.

There are three workarounds to extend the name lookup to the dependent base class.
The following example uses all three.

```
// templateInheritance2.cpp

#include <iostream>

template <typename T>
class Base{
public:
    void func1() const {
        std::cout << "func1()" << std::endl;
    }
    void func2() const {
        std::cout << "func2()" << std::endl;
    }
    void func3() const {
        std::cout << "func3()" << std::endl;
    }
};

template <typename T>
class Derived: public Base<T>{
public:
    using Base<T>::func2;          // (2)
    void callAllBaseFunctions(){

        this->func1();             // (1)
        func2();                   // (2)
        Base<T>::func3();          // (3)

    }
};

int main(){

    std::cout << std::endl;

    Derived<int> derived;
    derived.callAllBaseFunctions();

    std::cout << std::endl;

}
```

• Make the name dependent: The call this->func1 in line 1 is dependent because this is implicitly dependent. The name lookup will, in this case, consider all base classes.
• Introduce the name into the current scope: The expression using Base<T>::func2 (line 2) introduces func2 into the current scope.
• Call the name fully qualified: Calling func3 fully qualified (line 3) will break a virtual dispatch and may cause new surprises.

In the end, here is the output of the program: the three calls print func1(), func2(), and func3().

## What's next?

I have more to write about dependent names in my next post. Sometimes you have to disambiguate dependent names with typename or template.
If you are seeing this for the first time, you are probably as surprised as me.

Thanks a lot to my Patreon Supporters: Matt Braun, Roman Postanciuc, Tobias Zindl, Marko, G Prvulovic, Reinhold Dröge, Abernitzke, Frank Grimm, Sakib, Broeserl, António Pina, Sergey Agafyin, Андрей Бурмистров, Jake, GS, Lawton Shoemake, Animus24, Jozo Leko, John Breland, espkk, Wolfgang Gärtner, Louis St-Amour, Venkat Nandam, Jose Francisco, Douglas Tinkham, Kuchlong Kuchlong, Robert Blanch, Truels Wissneth, Kris Kafka, Mario Luoni, Neil Wang, Friedrich Huber, lennonli, Pramod Tikare Muralidhara, Peter Ware, Tobi Heideman, Daniel Hufschläger, Red Trip, Alexander Schwarz, Tornike Porchxidze, Alessandro Pezzato, and Evangelos Denaxas.

Thanks in particular to Jon Hess, Lakshman, Christian Wittenhorst, Sherhy Pyton, Dendi Suhubdy, Sudhakar Belagurusamy, and Richard Sargeant.

## Comments

#1 Pramod 2020-11-05 03:03
In the first post, 'Templates of Related Types are not Related', the heading of the post is the truth, right (a surprise rather)? Not a misconception.
https://www.rdocumentation.org/packages/spatstat/versions/1.64-1/topics/Ldot.inhom
# Ldot.inhom

##### Inhomogeneous Multitype L Dot Function

For a multitype point pattern, estimate the inhomogeneous version of the dot $$L$$ function.

Keywords: spatial, nonparametric

##### Usage

Ldot.inhom(X, i, …, correction)

##### Arguments

X

The observed point pattern, from which an estimate of the inhomogeneous cross type $$L$$ function $$L_{i\bullet}(r)$$ will be computed. It must be a multitype point pattern (a marked point pattern whose marks are a factor). See under Details.

i

The type (mark value) of the points in X from which distances are measured. A character string (or something that will be converted to a character string). Defaults to the first level of marks(X).

correction, …

Other arguments passed to Kdot.inhom.

##### Details

This is a generalisation of the function Ldot to include an adjustment for spatially inhomogeneous intensity, in a manner similar to the function Linhom.

All the arguments are passed to Kdot.inhom, which estimates the inhomogeneous multitype K function $$K_{i\bullet}(r)$$ for the point pattern. The resulting values are then transformed by taking $$L(r) = \sqrt{K(r)/\pi}$$.

##### Value

An object of class "fv" (see fv.object).

Essentially a data frame containing numeric columns

r

the values of the argument $$r$$ at which the function $$L_{i\bullet}(r)$$ has been estimated

theo

the theoretical value of $$L_{i\bullet}(r)$$ for a marked Poisson process, identical to $$r$$.

together with a column or columns named "border", "bord.modif", "iso" and/or "trans", according to the selected edge corrections. These columns contain estimates of the function $$L_{i\bullet}(r)$$ obtained by the edge corrections named.

The argument i is interpreted as a level of the factor X$marks. It is converted to a character string if it is not already a character string. The value i=1 does not refer to the first level of the factor.
##### References

Moller, J. and Waagepetersen, R. (2003) Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC, Boca Raton.

##### See Also

Ldot, Linhom, Kdot.inhom, Lcross.inhom.

##### Aliases

• Ldot.inhom

##### Examples

```
# Lansing Woods data
lan <- lansing
lan <- lan[seq(1, npoints(lan), by=10)]
ma <- split(lan)$maple
lg <- unmark(lan)

# Estimate intensities by nonparametric smoothing
lambdaM <- density.ppp(ma, sigma=0.15, at="points")
lambdadot <- density.ppp(lg, sigma=0.15, at="points")
L <- Ldot.inhom(lan, "maple", lambdaI=lambdaM, lambdadot=lambdadot)

# synthetic example: type A points have intensity 50,
# type B points have intensity 50 + 100 * x
lamB <- as.im(function(x,y){50 + 100 * x}, owin())
lamdot <- as.im(function(x,y){100 + 100 * x}, owin())
X <- superimpose(A=runifpoispp(50), B=rpoispp(lamB))
L <- Ldot.inhom(X, "B", lambdaI=lamB, lambdadot=lamdot)
```

Documentation reproduced from package spatstat, version 1.64-1, License: GPL (>= 2)
https://www.elsevier.com/books/mathematical-modeling/unknown/978-0-12-386912-8
# Mathematical Modeling

## 4th Edition

Hardcover ISBN: 9780123869128
eBook ISBN: 9780123869968
Published Date: 28th January 2013
Page Count: 384

## Description

The new edition of Mathematical Modeling, the survey text of choice for mathematical modeling courses, adds ample instructor support and online delivery for solutions manuals and software ancillaries.

From genetic engineering to hurricane prediction, mathematical models guide much of the decision making in our society. If the assumptions and methods underlying the modeling are flawed, the outcome can be disastrously poor. With mathematical modeling growing rapidly in so many scientific and technical disciplines, Mathematical Modeling, Fourth Edition provides a rigorous treatment of the subject. The book explores a range of approaches including optimization models, dynamic models and probability models.

## Key Features

• Offers increased support for instructors, including MATLAB material as well as other on-line resources
• Features new sections on time series analysis and diffusion models
• Provides additional problems with international focus such as whale and dolphin populations, plus updated optimization problems

Advanced undergraduate or beginning graduate students in mathematics and closely related fields. Formal prerequisites consist of the usual freshman-sophomore sequence in mathematics, including one-variable calculus, multivariable calculus, linear algebra, and differential equations. Prior exposure to computing and probability and statistics is useful, but is not required.

## Table of Contents

Preface

Part I: Optimization Models

Chapter 1. One Variable Optimization
1.1 The Five-Step Method
1.2 Sensitivity Analysis
1.3 Sensitivity and Robustness
1.4 Exercises

Chapter 2. Multivariable Optimization
2.1 Unconstrained Optimization
2.2 Lagrange Multipliers
2.3 Sensitivity Analysis and Shadow Prices
2.4 Exercises

Chapter 3. Computational Methods for Optimization
3.1 One Variable Optimization
3.2 Multivariable Optimization
3.3 Linear Programming
3.4 Discrete Optimization
3.5 Exercises

Part II: Dynamic Models

Chapter 4. Introduction to Dynamic Models
4.2 Dynamical Systems
4.3 Discrete Time Dynamical Systems
4.4 Exercises

Chapter 5. Analysis of Dynamic Models
5.1 Eigenvalue Methods
5.2 Eigenvalue Methods for Discrete Systems
5.3 Phase Portraits
5.4 Exercises

Chapter 6. Simulation of Dynamic Models
6.1 Introduction to Simulation
6.2 Continuous-Time Models
6.3 The Euler Method
6.4 Chaos and Fractals
6.5 Exercises

Part III: Probability Models

Chapter 7. Introduction to Probability Models
7.1 Discrete Probability Models
7.2 Continuous Probability Models
7.3 Introduction to Statistics
7.4 Diffusion
7.5 Exercises

Chapter 8. Stochastic Models
8.1 Markov Chains
8.2 Markov Processes
8.3 Linear Regression
8.4 Time Series
8.5 Exercises

Chapter 9. Simulation of Probability Models
9.1 Monte Carlo Simulation
9.2 The Markov Property
9.3 Analytic Simulation
9.4 Particle Tracking
9.5 Fractional Diffusion
9.6 Exercises

Afterword

Index

## Details

No. of pages: 384
Language: English
Published: 28th January 2013
Imprint:
https://stats.stackexchange.com/questions/421930/explaining-mds-space
[ "# Explaining MDS space\n\nI have a set of dummy variables (~300) indicating a particular feature, and rows which represent an individual.\n\nI plot this data after using nMDS to visualize which individuals are more similar to other individuals. And this seems to work as I expect.\n\nHowever, now I want to quantitatively explain why the space is the way it is. I.e. why aren't the points randomly distributed across the space. As I understand from this post, I shouldn't try to model the nMDS axis (although I could possibly use the dimensions to explain a response?)\n\nI have thought that perhaps I could perform a clustering analysis on the outputs, and then model the classifications using some modelling techniques (random forest, categorical linear models or whatever). Does this seem appropriate? Are there other methods that might be more appropriate for this?\n\n• MDS is an analysis of a distance matrix, it starts with the square symmetric distance matrix between objects (individuals). What was your distance measure? – ttnphns Aug 13 '19 at 8:52\n• The data is converted to a distance matrix using dist(., \"binary\") function in R. Although I have also just used a convenience function metaMDS() which does this internally. – SamPassmore Aug 13 '19 at 8:57\n• Just a remark: Please mind that this site is not a software site, but a statistical one. Here is a lot people who don't understand what is \"dist(., \"binary\") function in R\" or \"metaMDS()\". – ttnphns Aug 13 '19 at 11:16\n• R is the most commonly used question tag on this site so I disagree on this point. I can provide more detail if you think it will be useful - but I don't understand why it helps answer the question here. Perhaps you can explain why the distance measure is important for this? – SamPassmore Aug 13 '19 at 11:42" ]
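As a side note on the distance measure raised in the comments above: R's dist(., "binary") computes the asymmetric binary (Jaccard) distance, and the square symmetric matrix it produces is exactly what nMDS (or a subsequent hierarchical clustering) starts from. A minimal numpy sketch (the helper name `binary_dist` is mine, not from R or the thread):

```python
import numpy as np

def binary_dist(X):
    """Pairwise asymmetric binary (Jaccard) distance, as in R's dist(x, "binary").

    For two rows the distance is (b + c) / (a + b + c), where a counts positions
    where both rows are 1 and b, c count positions where exactly one row is 1.
    Positions where both rows are 0 are ignored.
    """
    X = np.asarray(X, dtype=bool)
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            both = np.sum(X[i] & X[j])      # a
            either = np.sum(X[i] | X[j])    # a + b + c
            D[i, j] = D[j, i] = 0.0 if either == 0 else 1.0 - both / either
    return D

# Two individuals sharing 1 of their 3 active features -> distance 2/3
X = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0]])
print(binary_dist(X)[0, 1])  # 0.666...
```

Clustering this matrix (e.g. with scipy's hierarchical clustering) and then modelling the resulting labels, as the questioner proposes, keeps the analysis on the distances themselves rather than on the nMDS axes.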
https://datascience.stackexchange.com/questions/473/is-logistic-regression-actually-a-regression-algorithm/475
[ "# Is logistic regression actually a regression algorithm?\n\nThe usual definition of regression (as far as I am aware) is predicting a continuous output variable from a given set of input variables.\n\nLogistic regression is a binary classification algorithm, so it produces a categorical output.\n\nIs it really a regression algorithm? If so, why?\n\nLogistic regression is regression, first and foremost. It becomes a classifier by adding a decision rule. I will give an example that goes backwards. That is, instead of taking data and fitting a model, I'm going to start with the model in order to show how this is truly a regression problem.\n\nIn logistic regression, we are modeling the log odds, or logit, that an event occurs, which is a continuous quantity. If the probability that event $A$ occurs is $P(A)$, the odds are:\n\n$$\\frac{P(A)}{1 - P(A)}$$\n\nThe log odds, then, are:\n\n$$\\log \\left( \\frac{P(A)}{1 - P(A)}\\right)$$\n\nAs in linear regression, we model this with a linear combination of coefficients and predictors:\n\n$$\\operatorname{logit} = b_0 + b_1x_1 + b_2x_2 + \\cdots$$\n\nImagine we are given a model of whether a person has gray hair. Our model uses age as the only predictor. Here, our event A = a person has gray hair:\n\nlog odds of gray hair = -10 + 0.25 * age\n\n...Regression! Here is some Python code and a plot:\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nx = np.linspace(0, 100, 100)\n\ndef log_odds(x):\nreturn -10 + .25 * x\n\nplt.plot(x, log_odds(x))\nplt.xlabel(\"age\")\nplt.ylabel(\"log odds of gray hair\")", null, "Now, let's make it a classifier. First, we need to transform the log odds to get out our probability $P(A)$. 
We can use the sigmoid function:\n\n$$P(A) = \frac{1}{1 + \exp(-\text{log odds})}$$\n\nHere's the code:\n\nplt.plot(x, 1 / (1 + np.exp(-log_odds(x))))\nplt.xlabel(\"age\")\nplt.ylabel(\"probability of gray hair\")", null, "The last thing we need to make this a classifier is to add a decision rule. One very common rule is to classify a success whenever $P(A) > 0.5$. We will adopt that rule, which implies that our classifier will predict gray hair whenever a person is older than 40 and will predict non-gray hair whenever a person is under 40.\n\nLogistic regression works great as a classifier in more realistic examples too, but before it can be a classifier, it must be a regression technique!\n\n• Though in practice people use logistic regression as synonym of logistic regression+binary classifier. – jinawee Jul 19 '19 at 18:43\n\nYes, logistic regression is a regression algorithm and it does predict a continuous outcome: the probability of an event. That we use it as a binary classifier is due to the interpretation of the outcome.\n\nDetail\n\nLogistic regression is a type of generalized linear model.\n\nIn an ordinary linear regression model, a continuous outcome, y, is modeled as the sum of the product of predictors and their effect:\n\ny = b_0 + b_1 * x_1 + b_2 * x_2 + ... b_n * x_n + e\n\n\nwhere e is the error.\n\nGeneralized linear models do not model y directly. Instead, they use transformations to expand the domain of y to all real numbers. This transformation is called the link function. For logistic regression the link function is the logit function (usually, see note below).\n\nThe logit function is defined as\n\nln(y/(1 - y))\n\n\nThus the form of logistic regression is:\n\nln(y/(1 - y)) = b_0 + b_1 * x_1 + b_2 * x_2 + ... 
b_n * x_n + e\n\n\nwhere y is the probability of an event.\n\nThe fact that we use it as a binary classifier is due to the interpretation of the outcome.\n\nNote: the probit is an alternative link function for binary-outcome regression, but the logit is the most widely used.\n\nAs you discuss, the definition of regression is predicting a continuous variable. Logistic regression is a binary classifier. Logistic regression is the application of the inverse logit (sigmoid) function to the output of a usual regression approach, which maps (-inf,+inf) to (0,1). I think it keeps that name just for historical reasons.\n\nSaying something like \"I did some regression to classify images. In particular I used logistic regression.\" is wrong.\n\n• Logistic regression can be used as a binary classifier, but it isn't inherently one. You could be using it to estimate odds or determine the relationship of a predictor variable to the outcome. – MattBagg Jun 19 '14 at 21:32\n\nTo put it simply any hypothetical function $$f$$ makes for a regression algorithm if $$f:X\rightarrow \mathbb{R}$$. Thus the logistic function $$P(Y=1|\lambda, x)=\dfrac{1}{1+e^{-\lambda^Tx}} \in [0,1]$$ makes for a regression algorithm. Here $$\lambda$$ is the coefficient vector (hyperplane) found from the training data & $$x$$ is a data point. The class is then obtained by thresholding $$P(Y=1|\lambda, x)$$ (e.g. at 0.5)." ]
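The regression-plus-decision-rule distinction in the accepted answer above condenses into a few lines of code. The gray-hair coefficients are the illustrative ones from the answer, not a fitted model:

```python
import numpy as np

# The regression part: a linear model of the log odds (logit = -10 + 0.25 * age)
def log_odds(age):
    return -10 + 0.25 * age

# Inverse logit (sigmoid): maps the continuous log odds back to a probability
def prob(age):
    return 1.0 / (1.0 + np.exp(-log_odds(age)))

# The decision rule that turns the regression into a classifier
def classify(age, threshold=0.5):
    return prob(age) > threshold

print(prob(40))      # 0.5 exactly: the log odds are 0 at age 40
print(classify(50))  # True  -> predict gray hair
print(classify(30))  # False -> predict non-gray hair
```

Everything up to `prob` is regression on a continuous quantity; only the final threshold makes it a classifier, which is precisely the answer's point.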
https://www.mathlearnit.com/fraction-of-number/what-is-2-6-of-7
[ "# What is 2/6 of 7?\n\n## What is 2 / 6 of 7 and how to calculate it yourself\n\n2 / 6 of 7 = 2.33\n\n2 / 6 of 7 is 2.33. In this article, we will go through how to calculate 2 / 6 of 7 and how to calculate any fraction of any whole number (integer). This article will show a general formula for solving this equation for positive numbers, but the same rules can be applied for numbers less than zero too!\n\nLet’s dive into how to solve!\n\n### 1: First step in solving 2 / 6 of 7 is understanding your fraction\n\n2 / 6 has two important parts: the numerator (2) and the denominator (6). The numerator is the number above the division line (called the vinculum) which represents the number of parts being taken from the whole. For example: If there were 14 cars total and 1 painted red, 1 would be the numerator or parts of the total. In this case of 2 / 6, 2 is our numerator. The denominator (6) is located below the vinculum and represents the total number. In the example above 14 would be the denominator of cars. For our fraction: 2 is the numerator and 6 is the denominator.\n\n### 2: Write out your equation of 2 / 6 times 7\n\nWhen solving for 2 / 6 of a number, students should write the equation as the whole number (7) times 2 / 6. The solution to our problem will always be smaller than 7 because we are going to end up with a fraction of 7.\n\n$$\frac{ 2 }{ 6 } \times 7$$\n\n### 3. Convert your whole number (7) into a fraction (7/1)\n\nTo convert any whole number into a fraction, add a 1 into the denominator. Now place 2 / 6 next to the new fraction. This gives us the equation below.\n\n$$\frac{ 2 }{ 6 } \times \frac{ 7 }{1}$$\n\n### 4. Multiply your fractions together\n\nOnce we have set up our two fractions, 2 / 6 and 7 / 1, we need to multiply the values, starting with the numerators. In this case, we will be multiplying 2 (the numerator of 2 / 6) and 7 (the numerator of our new fraction 7/1). 
If you need a refresher on multiplying fractions, please see our guide here!\n\n$$\frac{ 2 }{ 6 } \times \frac{ 7 }{1} = \frac{ 14 }{ 6 }$$\n\nOur new numerator is 14.\n\nThen we need to do the same for our denominators. In this equation, we multiply 6 (denominator of 2 / 6) and 1 (the denominator of our new fraction 7 / 1).\n\nOur new denominator is 6.\n\n### 5. Divide our new fraction (14 / 6)\n\nAfter arriving at our new fraction of 14 / 6, our last job is to simplify this problem using long division. For longer fractions, we recommend writing this last part down and using left-to-right long division.\n\n$$\frac{ 14 }{ 6 } = 2.33$$\n\nAnd so there you have it! Our solution is 2.33.\n\n#### Quick recap:\n\n• Turn 7 into a fraction: 7 / 1\n• Multiply 7 / 1 by our fraction, 2 / 6\n• Multiply the numerators and the denominators together\n• We get 14 / 6 from that\n• Perform a standard division: 14 divided by 6 = 2.33\n\n#### Additional way of calculating 2 / 6 of 7\n\nYou can also write our fraction, 2 / 6, as a decimal by simply dividing 2 by 6, which gives approximately 0.33. If you multiply 0.33 by 7 you will end up with approximately the same answer as above. You may also find it useful to know that if you multiply 0.33 by 100 you get 33.0, which means that our answer of 2.33 is about 33 percent of 7." ]
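The recap above translates directly into code. A short Python sketch (the helper name `fraction_of` is mine) that also shows the exact rational answer, 7/3, via the standard library's fractions module:

```python
from fractions import Fraction

def fraction_of(numerator, denominator, whole):
    """Compute (numerator/denominator) of a whole number using the steps above:
    rewrite the whole number as whole/1, multiply numerators and denominators,
    then perform the final division."""
    top = numerator * whole      # 2 * 7 = 14
    bottom = denominator * 1     # 6 * 1 = 6
    return top / bottom

print(fraction_of(2, 6, 7))            # 2.333...
print(round(fraction_of(2, 6, 7), 2))  # 2.33
print(Fraction(2, 6) * 7)              # 7/3, the exact (unrounded) answer
```

The Fraction variant avoids the rounding discussed in the "additional way" section, where 2/6 truncated to 0.33 gives only an approximate result.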
https://www.geosci-model-dev.net/11/1873/2018/
[ "Geoscientific Model Development – an interactive open-access journal of the European Geosciences Union\nGeosci. Model Dev., 11, 1873–1886, 2018\nhttps://doi.org/10.5194/gmd-11-1873-2018\nMethods for assessment of models | 15 May 2018\n\n# The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models\n\nJulian Koch1, Mehmet Cüneyd Demirel1,2, and Simon Stisen1\n• 1Department of Hydrology, Geological Survey of Denmark and Greenland, Copenhagen, 1350, Denmark\n• 2Department of Civil Engineering, Istanbul Technical University, 34469 Maslak, Istanbul, Turkey\n\nCorrespondence: Julian Koch ([email protected])\n\nAbstract\n\nThe process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the broad availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. 
This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias insensitive metrics which further allow for a comparison of variables which are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.\n\n1 Introduction\n\nSpatially distributed models, which represent various components of the earth system, are extensively applied in policy-making, management and research. Such modelling tackles a wide range of environmental problems, such as the analysis of drought patterns (Herrera-Estrada et al., 2017), assessing the spatial regularization of fertilizers in agricultural landscapes (Refsgaard et al., 2014) or modelling vegetation dynamics (Ruiz-Pérez et al., 2016). Our study focuses on hydrological variability as predicted by spatially distributed hydrological models. 
The correct representation of the spatial variability of hydrological fluxes often constitutes the major obstacle for many modelling efforts with respect to model structure, parameterization and forcing data.\n\nIn order to establish confidence in outputs generated by spatially explicit hydrological models and further to justify their application while recognizing their limitations, it is of paramount importance to quantify performance (Alexandrov et al., 2011; Hagen and Martens, 2008; Kumar et al., 2012). Within the field of meteorological modelling the application of spatial model evaluation is well established with benchmark studies and well-tested toolboxes (Brown et al., 2009; Dorninger et al., 2013; Gilleland et al., 2016). The hydrological modelling community has historically focused more on temporal model performance, but the call for a paradigm shift towards a spatial-pattern-oriented model evaluation using independent spatial observations has been ongoing for nearly 2 decades (Grayson and Blöschl, 2001; Koch et al., 2016a; Stisen et al., 2011; Wealands et al., 2005). Modelling the temporal dynamics of hydrological response can be considered independent of a model's spatial component as different parameters control spatial and temporal variability (Pokhrel and Gupta, 2011). Along the lines of Gupta et al. (2008), the feasibility of an adequate spatial-pattern-oriented model evaluation is constrained by the versatility of the applied performance metric. The task of quantitatively comparing spatial patterns is non-trivial and the multi-layered content of spatial patterns imposes distinct requirements on such a metric (Cloke and Pappenberger, 2008; Gilleland et al., 2009; Vereecken et al., 2016). A single metric will generally not adequately address performance and instead a combination of metrics spanning multiple relevant aspects of model performance is necessary (Clark et al., 2011; Gupta et al., 2012). 
The advantages of using multiple-component metrics have been broadly accepted for the evaluation of temporal model performance (Kling et al., 2012), but multiple-component evaluation has not yet been highlighted for the evaluation of simulated spatial patterns.\n\nModel evaluation targeted at spatial performance requires reliable spatial observations which are broadly facilitated by remote sensing platforms across various spatial scales (McCabe et al., 2008; Orth et al., 2017). At a small scale, Glaser et al. (2016) explored the applicability of portable thermal infrared cameras to evaluate simulated spatial patterns of surface saturation in the hillslope–riparian–stream interface. At the catchment scale, Schuurmans et al. (2011) incorporated remote-sensing-based maps of latent heat in order to identify structural model deficiencies. At a regional scale, Mendiguren et al. (2017) applied a spatial-pattern-oriented model evaluation based on remote sensing estimates of evapotranspiration to diagnose shortcomings of the national hydrological model of Denmark. At a large scale, Koch et al. (2016b) utilized land surface temperature retrievals to evaluate large-scale land surface models across the continental US.\n\nThe applicability of remote sensing data to calibrate hydrological models has already been explored by several studies that incorporated spatial patterns of land surface temperature (Stisen et al., 2018), snow cover (Terink et al., 2015) or latent heat (Immerzeel and Droogers, 2008). Overall the merit of constraining model parameters against spatial observations has been widely recognized by the modelling community. However, the design of the performance metric, which ensures that the spatial information contained in the remote sensing data is utilized optimally to inform the model calibration, is rarely touched upon in the literature.", null, "Figure 1. Skjern River catchment in western Denmark. 
The map shows the spatial distribution of soil properties, forest areas and the river network. Additionally, two discharge stations used in the optimizations are given.\n\nBennett et al. (2013) provide an excellent overview of measures that allow the modeller to quantify the performance of environmental models. They considered model evaluation a vital step during the iterative process of model development, and hence it can identify the need for additional data, alternative calibrations or updated model structure. This further emphasizes the need for robust performance metrics. In general, the properties of the applied metric and the design of the evaluation framework should always correspond to the application of the model (Krause et al., 2005).\n\nOur study highlights the development and application of a versatile metric that has the potential to advance the credibility of spatially distributed hydrological models. When designing such a metric it is important to reflect on requirements as well as frameworks to properly test it in, which has been extensively discussed in the literature (Cloke and Pappenberger, 2008; Moriasi et al., 2007; Dawson et al., 2007; Krause et al., 2005; Refsgaard and Henriksen, 2004; Schaefi and Gupta, 2007). Following these references and our own reflections we identified the following five major requirements of a spatial performance metric: (1) the metric should be easy to compute, which makes results reproducible and creates credibility within the scientific community. (2) In order to be informative during model calibration the metric should be robust and deliver a continuous response to changes in parameter values. (3) In the formulation of the metric, multiple independent components are necessary to provide a holistic evaluation of the model performance. (4) The metric should offer the possibility to compare related variables of different units; e.g. observed latent heat (W m−2) and simulated evapotranspiration (mm day−1). 
This enables evaluation via proxies and facilitates bias insensitivity, which is found favourable because it focuses on the pattern information contained in the remote sensing data instead of absolute values at the grid scale. (5) The metric should be easy to communicate both inside and outside the scientific community. This requires a predefined range and the possibility to put metric scores into context; i.e. what value ensures satisfactory performance? Can we directly compare scores between different catchments and models? These five points were carefully taken into consideration by Demirel et al. (2018a) for the formulation of SPAtial EFficiency (SPAEF), which they successfully applied in a spatial-pattern-oriented model calibration.\n\nIn this study, we rigorously test SPAEF and compare it with two additional spatial performance metrics: fractions skill score (Roberts and Lean, 2008) and connectivity analysis (Koch et al., 2016b). All three metrics are applied in a spatial-pattern-oriented calibration of a catchment model using the mesoscale Hydrologic Model (mHM: Samaniego et al., 2010a). Such rigorous metric testing and comparison helps to generate familiarity and is indispensable in order to establish novel metrics in the scientific community.\n\n2 Data and methods\n\n## 2.1 Study site\n\nThe Skjern River catchment is located in the western part of the Danish peninsula. The catchment's size amounts to 2500 km2 and it has been studied intensively for almost a decade by the HOBE project (Jensen and Illangasekare, 2011). The climate is maritime with a mean annual precipitation of around 1050 mm, which is partitioned into more or less equal amounts of streamflow and actual evapotranspiration. Topography slopes gently from the highest point of approximately 125 m in elevation in the east to sea level on the western side of the catchment. 
Figure 1 shows the spatial variability of soil texture, which stresses that soils are predominately sandy with intertwined till and clay sections. Land use is dominated by arable land with patches of coniferous forest. The Skjern catchment does not exhibit a strong spatial gradient in hydrological response because general gradients in catchment morphology or climatology do not exist. This promotes the catchment as an excellent test case for a spatial-pattern-oriented model calibration because the simulated spatial patterns of hydrological variables are governed by optimizable parameters such as soil and vegetation properties.\n\n## 2.2 Hydrological model\n\nThis study utilizes the mesoscale Hydrologic Model (mHM v5.8: Samaniego et al., 2017a), which is a grid-based spatially distributed hydrological model (Kumar et al., 2013, 2010; Samaniego et al., 2010a, b). The model accounts for key hydrological processes such as canopy interception, soil moisture dynamics, surface and subsurface flow generation, snow melting, evapotranspiration and others. Daily meteorological data forces the model and a gridded digital elevation model (DEM) characterizes the morphology of the catchment. Additionally, the spatial variability of observable physical properties such as soil texture, vegetation and geology are incorporated in the model structure. A multi-scale parameter regionalization (MPR) technique enables mHM to consolidate three different spatial scales: meteorological forcing at a coarse scale, intermediate model scale and fine-scale morphological data. In the case of the Skjern model, forcing data are available at 10–20 km resolution, the DEM is used at 250 m scale and the model is executed at 1 km scale. 
Effective parameters at the modelling scale are regionalized through non-linear transfer functions which link spatially distributed basin characteristics at a finer scale by means of global parameters, which can be determined through calibration.\n\n## 2.3 Reference data", null, "Figure 2. Reference data used for the optimization: the average cloud-free spatial pattern of midday latent heat in June (a) and observed discharge (red line) at two stations (shown in Fig. 1) for the 8-year simulation period (c, d). Also shown are the simulation results from the initial parameter set: the average cloud-free spatial pattern of daily actual evapotranspiration (aet) in June (b) and the simulated discharge (black line) at the two reference stations.\n\nThe observational data employed as a reference in the calibration are given in Fig. 2 and consist of two datasets. The first is 8 years (2001–2008) of discharge time series at two locations within the catchment where the first drains around 60 % of the catchment area and the second an additional 25 % (Fig. 1). Second, in order to complement the temporal data we provide a remote sensing estimate of latent heat for cloud-free grids in June between 2001 and 2008. The month of June is the peak of the growing season, which makes the spatial pattern distinct and relevant for a hydrological model evaluation. This reference spatial pattern is obtained by the two-source energy balance model (TSEB; Norman et al., 1995). A detailed description of the remote-sensing-based estimation of latent heat across Denmark is presented by Mendiguren et al. (2017). As outlined by Mendiguren et al. (2017), TSEB represents a two-layer model which separates soil and vegetation. Energy fluxes are estimated based on various input parameters and forcings among which land surface temperature (LST) and air temperature are found to be most sensitive. Input data for TSEB are obtained from the daytime LST MODIS product at 1 km spatial resolution. 
The reasoning behind averaging the latent heat maps in time to a mean monthly map is twofold. First, daily spatial patterns are influenced by clouds and thus vary greatly in coverage, which limits the pattern information content. Second, daily estimates are associated with higher uncertainty and are more affected by forcing data, e.g. the spatial distribution of precipitation on the previous day. Hence, an aggregated monthly map of latent heat represents a robust average that is more informative in a model calibration than daily maps because it constitutes the imprint of soil and vegetation properties on the simulated pattern, and these, in contrast to the model forcing, are properties that can be calibrated in a hydrological model.\n\n## 2.4 Spatial performance metrics\n\n### 2.4.1 Spatial efficiency\n\nFor the formulation of a straightforward spatial performance metric we found inspiration in the Kling–Gupta efficiency (KGE; Kling and Gupta, 2009), which is a commonly used metric in hydrological modelling to evaluate discharge simulations. It is characterized by three equally weighted components, i.e. 
correlation, variability and bias.\n\n$$\mathrm{KGE}=1-\sqrt{(\alpha_Q-1)^2+(\beta_Q-1)^2+(\gamma_Q-1)^2}\qquad(1)$$\n\nwith $\alpha_Q=\rho(\mathrm{obs},\mathrm{sim})$, $\beta_Q=\sigma_{\mathrm{sim}}/\sigma_{\mathrm{obs}}$ and $\gamma_Q=\mu_{\mathrm{sim}}/\mu_{\mathrm{obs}}$, where $\alpha_Q$ is the Pearson correlation coefficient between the observed (obs) and the simulated (sim) discharge time series, $\beta_Q$ is the relative variability based on the ratio of the standard deviations of the simulated and observed values and $\gamma_Q$ is the bias term, i.e. the ratio of the simulated to the observed mean. KGE is selected as the discharge objective function for the optimization applied in this study.\n\nThe multiple-component nature of KGE is favourable because a model evaluation can rarely be condensed to a single component, such as the bias or correlation. Instead a more holistic and balanced assessment using several aspects is favourable for a comprehensive model evaluation, as advocated by Gupta et al. (2012), Krause et al. (2005) and others.\n\nFollowing the multiple-component idea of KGE we present a novel spatial performance metric denoted SPAtial EFficiency (SPAEF), which was originally proposed by Demirel et al. 
(2018a, b).\n\n$$\mathrm{SPAEF}=1-\sqrt{(\alpha-1)^2+(\beta-1)^2+(\gamma-1)^2}\qquad(2)$$\n\nwith $\alpha=\rho(\mathrm{obs},\mathrm{sim})$, $\beta=\left(\frac{\sigma_{\mathrm{sim}}}{\mu_{\mathrm{sim}}}\right)/\left(\frac{\sigma_{\mathrm{obs}}}{\mu_{\mathrm{obs}}}\right)$ and $\gamma=\frac{\sum_{j=1}^{n}\min(K_j,L_j)}{\sum_{j=1}^{n}K_j}$, where $\alpha$ is the Pearson correlation coefficient between the observed (obs) and simulated (sim) pattern, $\beta$ is the ratio of the coefficients of variation representing spatial variability and $\gamma$ is the histogram intersection of the histogram $K$ of the observed pattern and the histogram $L$ of the simulated pattern, each containing $n$ bins (Swain and Ballard, 1991). In order to enable the comparison of two variables with different units and to ensure bias insensitivity, the z score of the patterns is used to compute $\gamma$. Throughout the paper $\alpha$ is referred to as correlation, $\beta$ as cv ratio and $\gamma$ as histo match.\n\nFigure 3. Two examples to illustrate the importance of a multi-component analysis when comparing spatial patterns (a). The maps are normalized by their mean. The histograms of the z score normalized maps are presented in (b). The scatter plots of the mean normalized maps are given in (c). Scores for the three SPAEF components (histo match, cv ratio and correlation) are given in the graphs.\n\nThe difficulty of quantitatively comparing spatial patterns and the need for multi-component metrics such as SPAEF are illustrated in Fig. 3, in which two example patterns, both generated by mHM during calibration, are compared with the TSEB reference pattern. 
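Both efficiency formulations share the same Euclidean-distance structure and are inexpensive to compute. The following is a minimal sketch of Eqs. (1) and (2), assuming NaN-free input arrays and a fixed number of histogram bins; the reference implementation published at https://github.com/cuneyd/spaef may differ in detail:

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency (Eq. 1) for two discharge time series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    alpha = np.corrcoef(obs, sim)[0, 1]  # correlation
    beta = sim.std() / obs.std()         # variability ratio
    gamma = sim.mean() / obs.mean()      # bias ratio
    return 1.0 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

def spaef(obs, sim, bins=100):
    """SPAtial EFficiency (Eq. 2) between two collocated patterns whose
    units may differ; no-data cells are assumed removed beforehand."""
    obs, sim = np.asarray(obs, float).ravel(), np.asarray(sim, float).ravel()
    alpha = np.corrcoef(obs, sim)[0, 1]                         # correlation
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # cv ratio
    # histogram intersection of the z-score normalized patterns
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    zrange = (min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max()))
    k, _ = np.histogram(z_obs, bins=bins, range=zrange)
    l, _ = np.histogram(z_sim, bins=bins, range=zrange)
    gamma = np.minimum(k, l).sum() / k.sum()                    # histo match
    return 1.0 - np.sqrt((alpha - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)
```

For two identical patterns all three components equal 1 and both metrics return their optimum of 1.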
A quick visual comparison clearly reveals that both are inadequate spatial pattern representations with respect to the reference; i.e. the first lacks spatial variability and the second misses spatial detail within the clearly separated clusters of high and low values. Correlation is a commonly known statistical measure that allows for the comparison of two variables that are collocated in space and may differ in units. Contrary to the visual evaluation, both examples have a reasonably high correlation, which misleadingly suggests good performance. When assessing the cv ratio it becomes clear that the first example lacks spatial variability, whereas the distinct separation of the second example suggests an adequate representation of spatial variability. The deficiency of the second example becomes clear when investigating the overlap of the histograms of the normalized (z score) simulated and reference patterns. The z score normalization results in a pattern with a mean equal to 0 and a standard deviation equal to 1, which is necessary to make two patterns with different units comparable. Histo match exposes the missing spatial variability within the high and low areas despite the satisfying correlation and cv ratio.\n\n### 2.4.2 Connectivity\n\nThe connectivity metric originates from the field of hydrogeology, in which it is commonly applied to characterize the spatial heterogeneity of aquifers (Koch et al., 2014; Rongier et al., 2016). Outside the hydrogeology community, connectivity analyses have also been conducted to describe the spatial patterns of soil moisture (Grayson et al., 2002; Western et al., 2001) and land surface temperature (Koch et al., 2016b). 
Following the classification of Renard and Allard (2013), the connectivity analysis of a continuous variable is conducted in three steps: (1) a series of threshold percentiles decomposes the domain into a series of binary maps, (2) the binary maps undergo a cluster analysis that identifies spatially connected clusters and (3) the transition from many disconnected clusters to a single connected cluster is quantified by principles of percolation theory (Hovadik and Larue, 2007). In this context the probability of connection ($\Gamma$) is considered a suitable percolation metric. $\Gamma$ states the proportion of connected pairs of cells among all possible pairs of cells of a cluster map.\n\n$$\Gamma(t)=\frac{1}{n_t^2}\sum_{i=1}^{N(X_t)}n_i^2,\qquad(3)$$\n\nwhere $n_t$ is the total number of cells in the binary map $X_t$ below or above threshold $t$, which has $N(X_t)$ distinct clusters in total, and $n_i$ is the number of cells in the $i$th cluster of $X_t$. The percolation is well captured by means of an increasing threshold that moves along all percentiles of the variable's range, which makes this methodology bias insensitive. The connectivity analysis is applied individually to cells that exceed a given threshold and to those that fall below it, referred to as the high and low phase, respectively. Following Koch et al. 
(2016b), the root mean square error between the connectivity at all percentiles of the observed ($\Gamma(t)_{\mathrm{obs}}$) and the simulated ($\Gamma(t)_{\mathrm{sim}}$) pattern denotes a tangible pattern similarity metric and can be calculated as\n\n$$\mathrm{RMSE}_{\mathrm{Con}}=\sqrt{\frac{\sum_{t=1}^{100}\left(\Gamma(t)_{\mathrm{obs}}-\Gamma(t)_{\mathrm{sim}}\right)^2}{100}}.\qquad(4)$$\n\nThe average RMSE score of the low and the high phase is employed as the pattern similarity score for the connectivity analysis and is referred to as connectivity throughout the paper.\n\n### 2.4.3 Fractions skill score\n\nThe fractions skill score (FSS) is a common metric in meteorology to provide a scale-dependent measure that quantifies the spatial skill of various competing precipitation forecasts with respect to a reference (Mittermaier et al., 2013; Roberts and Lean, 2008; Wolff et al., 2014). In the FSS framework, a fraction reflects the occurrence of values exceeding a certain threshold within a given window of size n and is calculated at each cell. Typically the thresholds are derived from the variable's percentiles, which constitutes the bias insensitivity of FSS (Roberts, 2008). The FSS workflow is defined by three main steps: (1) for each threshold, truncate the observed (obs) and the simulated (sim) spatial patterns into binary maps; (2) for each cell, compute the fraction of cells that exceed the threshold and lie within a window of size n×n; and (3) calculate the mean squared error (MSE) between the observed and simulated fractions and normalize it with a worst-case MSE (MSEwc) that reflects the condition of zero agreement between the spatial patterns. The MSE is based on all cells ($N_{xy}$) that lie within the modelling domain with dimensions of $N_x$ and $N_y$. 
For a certain threshold, FSS at scale $n$ is given by\n\n$$\mathrm{FSS}_{(n)}=1-\frac{\mathrm{MSE}_{(n)}}{\mathrm{MSE}_{(n)\mathrm{wc}}},\qquad(5)$$\n\nwhere\n\n$$\mathrm{MSE}_{(n)}=\frac{1}{N_{xy}}\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\left[\mathrm{ref}_{(n)ij}-\mathrm{scen}_{(n)ij}\right]^2\qquad(6)$$\n\nand\n\n$$\mathrm{MSE}_{(n)\mathrm{wc}}=\frac{1}{N_{xy}}\left[\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\mathrm{ref}_{(n)ij}^2+\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\mathrm{scen}_{(n)ij}^2\right].\qquad(7)$$\n\nFSS ranges from 0 to 1, where 1 indicates a perfect match between obs and sim and 0 reflects the worst possible performance. For the simulated spatial patterns in the Skjern catchment we applied the concept of critical scales (Koch et al., 2017) and therefore selected three top and three bottom percentiles, each assessed at an individual critical scale. The 1st, 5th and 20th percentiles focus on the bottom 1, 5 and 20 % of cells and are investigated at 25, 15 and 5 km scale, respectively. Three top percentiles, the 99th, 95th and 80th, are analysed analogously. The average of the three top and bottom percentiles is calculated as an overall pattern similarity score and referred to as FSS throughout the paper.\n\n## 2.5 Optimization procedure\n\nThe mHM of the Skjern catchment is applied at 1 km spatial resolution and the simulation period is set to 12 years (1997–2008), during which the first 4 years are used as warm-up and the following 8 years are utilized for the calibration. 
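For completeness, the two benchmark metrics of Sect. 2.4.2 and 2.4.3 (Eqs. 3–7) can also be sketched compactly. This is a minimal illustration that assumes scipy is available and simplifies the window handling and percentile choices relative to the published implementations:

```python
import numpy as np
from scipy import ndimage
from scipy.ndimage import uniform_filter

def connectivity_gamma(pattern, thresholds=range(1, 100)):
    """Probability of connection Gamma(t) (Eq. 3) for the high phase:
    cells exceeding each percentile threshold t."""
    gammas = []
    for t in thresholds:
        binary = pattern > np.percentile(pattern, t)  # step 1: binary map
        labels, _ = ndimage.label(binary)             # step 2: clusters
        n_t = binary.sum()
        if n_t == 0:
            gammas.append(0.0)
            continue
        sizes = np.bincount(labels.ravel())[1:]       # cells per cluster
        gammas.append(float((sizes**2).sum()) / n_t**2)  # Eq. 3
    return np.array(gammas)

def rmse_connectivity(obs, sim):
    """Eq. 4: RMSE between the two connectivity curves."""
    return np.sqrt(np.mean((connectivity_gamma(obs) - connectivity_gamma(sim))**2))

def fss(obs, sim, percentile, n):
    """Eqs. 5-7: fractions skill score for one percentile threshold
    and one window size n (in cells)."""
    ref = (obs >= np.percentile(obs, percentile)).astype(float)
    scen = (sim >= np.percentile(sim, percentile)).astype(float)
    f_ref = uniform_filter(ref, size=n, mode='constant')   # window fractions
    f_scen = uniform_filter(scen, size=n, mode='constant')
    mse = np.mean((f_ref - f_scen)**2)                     # Eq. 6
    mse_wc = np.mean(f_ref**2) + np.mean(f_scen**2)        # Eq. 7
    return 1.0 - mse / mse_wc if mse_wc > 0 else 1.0       # Eq. 5
```

For identical patterns the connectivity RMSE is 0 and FSS is 1, i.e. both metrics sit at their respective optima.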
The model parameters are calibrated against observed discharge time series at two stations and the average latent heat pattern of June under cloud-free conditions. The reference pattern reflects an instantaneous observation of midday latent heat (W m−2), whereas the model simulates daily actual evapotranspiration (mm day−1). These variables are obviously closely related; however, comparing the two patterns quantitatively requires spatial performance metrics that can handle the difference in units.\n\nA sensitivity analysis was performed in order to select a limited number of parameters for the optimization. This was based on two steps: a variance-based sequential screening (Cuntz et al., 2015) followed by a Latin hypercube sampling (van Griensven et al., 2006). The mHM has 48 global parameters and the first step identified 24 informative parameters; the results were presented by Demirel et al. (2018a). Subsequently we applied the Latin hypercube sampling to further reduce the number of sensitive parameters to 17. Among the selected parameters, eight represent the soil moisture module (pedo-transfer functions, root fraction distribution and soil moisture stress), two control the interflow, one affects the percolation, two are sensitive to the base flow and four define the ET module via the dynamic scaling function using MODIS LAI.\n\nIn order to assess the ability of different spatial performance metrics to optimize the pattern performance of the distributed hydrological model applied in this study, we designed six calibrations. All commence with the same initial parameter set and include KGE at both discharge stations as temporal objective functions. Additionally, each optimization features one of the promoted spatial performance metrics: (1) SPAEF, (2) correlation, (3) cv ratio, (4) histo match, (5) FSS and (6) connectivity. The metrics correlation, cv ratio and histo match represent the three SPAEF components. 
The spatial objective functions aim to optimize the average ET pattern of June and are weighted 5 times higher than the discharge objective functions. We expect the capability of the model to optimize simulated time series of discharge to be more versatile in comparison to its flexibility to optimize spatial patterns, which justifies the weighting of the objective functions. The optimizations were conducted with the help of PEST (version 14.02; Doherty, 2005) and the shuffled complex evolution (SCE-UA) algorithm (Duan et al., 1993) was selected as the optimizer. SCE-UA is considered a global optimizer and for our application it was set up to operate on two parallel complexes with 35 parameter sets in each complex. Each calibration was limited to 2500 model runs, which was found reasonable to allow for the convergence of the objective functions.\n\n3 Results and discussion\n\n## 3.1 Optimizing spatial patterns\n\nThe simulation results from the initial parameter set are depicted in Fig. 2. The simulated pattern of AET is almost uniform with very little spatial variability, which results in a low SPAEF score of 0.58. The simulated discharge has the correct timing at both stations; station no. 2 is clearly less biased than station no. 1. Both have reasonable KGE scores on the basis of the initial parameter set: 0.6 (station no. 1) and 0.7 (station no. 2).\n\nFigure 4. Tracking of the simulated actual evapotranspiration maps (normalized by mean) throughout the six conducted optimizations using different objective functions. The first four columns show the trajectory of pattern improvements in accordance with one objective function. The maps depict the best fit between the reference (b) and model at various iterations throughout the optimization. 
The spatial similarity scores in accordance with the different metrics are given in the top right corner of each map.\n\nFigure 4 visualizes the results from the six conducted calibrations with the aim of tracking the spatial patterns of simulated ET during the course of the optimization. SCE-UA is executed in an iterative manner whereby each iteration reflects a shuffling loop in which a number of parameter sets are tested. In order to inter-compare the optimization progress across the six calibrations, Fig. 4 illustrates the optimal spatial patterns at four selected iterations during the calibration. The second iteration is the first in which SCE-UA receives feedback from the applied metric after executing random sets of parameter values in the first iteration. Iterations 6 and 10 show intermediate steps of the optimization progress. The optimal spatial pattern depicts the final result in accordance with the six tested performance metrics after 2500 model runs.\n\nFrom a metric point of view, the scores of the objective functions are improved in all six calibrations. Among the six metrics, connectivity is the only one which has to be reduced towards 0; the remaining metrics have an optimal score of 1. The improvements from iteration 10 to the optimal parameter set are numerically marginal and visually indistinguishable. The visual differences between the optimized spatial patterns are striking, and the three metrics that consider local constraints (SPAEF, correlation and FSS) can clearly be distinguished from the remaining three. With respect to the reference pattern in Fig. 2, the separation between forest and non-forest has been inverted by optimizing against cv ratio and connectivity because the right allocation is not reflected by these metrics. 
The histo match metric is based on z score normalization, which results in a clear underestimation of spatial variability.\n\nThe importance of human-perception-based model evaluation has been widely recognized in the literature (Grayson et al., 2002; Hagen, 2003; Koch et al., 2015; Kuhnert et al., 2005). Following our visual evaluation we regard the SPAEF optimization as the most similar to the reference in Fig. 2. The three SPAEF components lead to very divergent solutions, and combined as SPAEF, the optimization yields a spatial pattern which adequately reflects the imprint of both vegetation and soil on the simulated ET patterns. FSS as an objective function performs almost equally well, and revisiting the defined critical scales may improve this calibration result even further.\n\nAll metrics contain different spatial information which is used to constrain the model parameters, which results in optimized spatial patterns that clearly differ from one another. Although some metrics undoubtedly fail to inform the optimizer to identify a parameter set satisfying our visual criterion, they still provide relevant pattern information to a certain extent. In consequence, these metrics do not function as stand-alone objective functions for this calibration study; e.g. cv ratio yields an inadequate spatial pattern, but as a component in SPAEF it contributes to a satisfying solution to the optimization problem. Following Krause et al. (2005), one should carefully take the pros and cons of each performance measure into consideration when designing the calibration and validation framework of a model. Moreover, the metric should be tailored to the intended use of the model and should relate to simulated quantities which are deemed relevant for the application of the model. 
For the objective of our calibration study, the bias insensitivity and the capability of a metric to compare variables that are related but differ in unit were most relevant.\n\nTable 1. Cross-check of the six conducted calibrations (as rows). The optimal model run is evaluated by the remaining metrics (as columns). Numbers in bold indicate the optimized value of the respective optimization.\n\nTable 1 cross-checks the metric scores of the six optimized spatial patterns in Fig. 4. Reading the table column-wise allows for an investigation of whether the metrics provide independent information to the optimizer. As an example, cv ratio reaches its optimal score; however, the remaining metrics perform poorly. This indicates that cv ratio conveys independent information with respect to the other metrics. On the other hand, calibrating against correlation yields a high FSS score, which attests to partly redundant information content in the two given metrics. Reading the table row-wise screens for the consistency of the calibrations. The highest metric score should be reached when calibrating against itself, which is the case for all six calibrations.\n\nAdditionally, Table 1 presents the KGE scores for the six conducted calibrations. The discharge performance has been improved by all calibrations and the scores vary slightly across them. Similar to the initial run, station no. 2 performs generally better than station no. 1. The simulated discharge of the six optimized models is shown in Fig. 5 for a 4-year period at station no. 1. All calibrations simulate the discharge dynamics in accordance with the observations and generally capture the timing of the peak flows well. Differences are found in the recession flow between the six simulations. However, our effort focuses on the spatial performance, and it is striking how different the simulated spatial patterns can be while predicting almost identical streamflow. 
This supports previous findings in the literature which stress that the spatial and temporal responses in hydrological models are controlled by different parameters and that the one cannot be used to inform the other (Pokhrel and Gupta, 2011; Stisen et al., 2011, and others).\n\nFigure 5. Simulated discharge at station no. 1 obtained by the six optimizations. Data are shown only for 4 out of the 8 years of simulation. KGE values vary between 0.84 and 0.95.\n\nFigure 4, in combination with Table 1, provides details to investigate the key weaknesses of the two metrics, FSS and connectivity, used to evaluate SPAEF. It becomes evident that calibrating against connectivity results in poor scores for the remaining metrics, which underlines its inability to capture the correct spatial allocation, variability and distribution. Thus, the key weakness of connectivity is that it cannot operate as a stand-alone metric; instead it should be accompanied by another metric, ideally correlation, which will ensure the correct allocation. On the other hand, FSS yields reasonable scores of allocation and variability between forest and non-forest areas. However, the FSS optimization lacks spatial variability within the high and low areas, which could be resolved by considering more threshold percentiles when computing the score. Therefore the weakness of FSS lies in its dependency on the threshold percentiles, which have to be defined by the user.\n\nChoosing a suitable metric alone is not sufficient to undertake a successful spatial-pattern-oriented model calibration. Model agility promoted by a flexible parameterization is required to allow the simulated spatial patterns to be optimized with respect to a reference pattern (Mendoza et al., 2015). 
In this study, this is achieved by applying a model code (mHM: Samaniego et al., 2010a) that features a multi-scale parameter regionalization scheme (MPR) in which spatially distributed basin characteristics are transformed via global parameters to effective model parameters at the model scale. These so-called transfer functions generate seamless and physically consistent parameter fields (Mizukami et al., 2017). In contrast, Corbari and Mancini (2014) conducted a spatial validation of a subsurface–surface–land surface model against MODIS LST in which parameters were calibrated individually at each grid cell. Unlike regionalization techniques such as MPR, this approach does not guarantee physically meaningful parameter fields and may overestimate the credibility of the remote sensing data. Samaniego et al. (2017b) recently proposed a modelling protocol that describes how MPR can be added to any particular model, which extends the applicability of MPR beyond mHM. However, the choice of transfer functions may not always be trivial and their reliability is crucial for the successful application of MPR or other regionalization approaches. Another limitation of the MPR scheme in mHM is that the minimum scale at which a model can be applied depends on the data availability, since subgrid variability is fundamental to MPR (Samaniego et al., 2017b).\n\nIn order to examine the added value of spatial patterns retrieved from remote sensing data, Demirel et al. (2018a) conducted several calibration scenarios of the same model set-up as applied in this study. Calibrating only against time series of discharge resulted in a poor spatial pattern performance and, vice versa, the calibration using remote sensing data only was not able to constrain the hydrograph correctly. 
However, the balanced calibration using both observations did not worsen the objective function in comparison to using them as the sole calibration target, which underlined limited trade-offs between the temporal and spatial observations in the applied calibration.\n\nIn order to further advance opportunities for spatial-pattern-oriented model evaluation, hydrological models can be extended by emission models to simulate brightness temperature, which is closer to the true observations of the remote sensing sensors. As an example, Schalge et al. (2016) implemented such a coupling, which facilitated direct model evaluation against SMAP brightness temperature. Similar solutions are feasible for LST, and they have the clear advantage of bypassing the uncertainties and inconsistencies associated with remote sensing models, over which the hydrological modeller has no control.\n\n## 3.2 Spatial efficiency metric\n\nEstablishing novel metrics in the modelling community is often hindered by an intrinsic inertia supported by an excessive choice of metrics, which leads to reliance on familiar metrics. Both the implementation and the interpretation of unfamiliar metrics may be found too troublesome by many users. Familiarity can only be obtained by rigorous testing and by having a metric which provides scores in a predefined range that is easy to interpret. In the following we provide a detailed analysis of the SPAEF calibration results to further the understanding of its implications and the interaction between the three components.\n\nFigure 6. 3-D Pareto front based on the 2500 runs during the SPAEF optimization. Each component of the SPAEF metric represents an individual axis. 
The black line indicates the deviation between the theoretical optimum (1, 1, 1) of the SPAEF components and the optimized model run (0.72, 0.73, 0.81).\n\nFigure 6 depicts a three-dimensional Pareto front of the three SPAEF components on the basis of the 2500 parameter sets executed in the SPAEF calibration, which allows for an investigation of trade-offs between the different objective functions. The formulation of SPAEF gives equal weights to the three components; hence the best compromise is the parameter set with the lowest Euclidean distance to the optimal point (1, 1, 1). If desirable, the weights could be adjusted manually to specifically focus on one of the three components. Throughout the calibration, scores across the full range of each component are obtained, which indicates that the components are clearly sensitive to changes in spatial performance. Further, it reveals the global nature of SCE-UA, which rigorously explores the parameter space. With an ideal score of 1, SCE-UA optimized SPAEF to 0.56, which may seem surprisingly low given the good visual agreement. This underlines the fact that SPAEF is a tough criterion with three independent components that individually penalize the overall similarity score. The question of what marks an acceptable and satisfying SPAEF score is hard to generalize and probably depends on the pattern to be assessed. The ET pattern in the Skjern catchment is dominated by local feedbacks of soil and vegetation, which constitute challenging small-scale details for a model. Alternatively, a catchment with a strong spatial gradient of e.g. precipitation or topography may naturally yield a higher SPAEF score. Such gradients in forcing or morphology are typically not calibrated and will dominate the spatial pattern of the estimated hydrological fluxes. 
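Selecting the best compromise from such a cloud of component scores is a one-line operation; a minimal sketch is given below. As a consistency check, the component triplet (0.72, 0.73, 0.81) of the optimized run yields a Euclidean distance of about 0.43, i.e. a SPAEF close to the reported 0.56:

```python
import numpy as np

def best_compromise(component_scores):
    """Index of the run with the smallest Euclidean distance to the
    optimum (1, 1, 1); 1 minus that distance is the run's SPAEF."""
    scores = np.asarray(component_scores, dtype=float)
    dist = np.linalg.norm(scores - 1.0, axis=1)  # distance per run
    i = int(np.argmin(dist))                     # equally weighted compromise
    return i, 1.0 - dist[i]
```

Re-weighting the components, as mentioned above, amounts to scaling the axes before computing the distance.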
A distinct spatial variability provided by the model inputs is therefore expected to favour correlation and cv ratio, resulting in a higher SPAEF score. However, more work is needed to study the relationship between spatial variability and SPAEF.\n\nThe patterns of the simulated variable (daily ET) and the observed variable (instantaneous latent heat) used in this study differ in unit but are linearly related. One can imagine a case of using SPAEF in a proxy validation with a non-linear relationship between the variables. In such a case, the user can consider transforming the data. This is especially crucial for correlation, which assumes linearity. The remaining components, histo match and cv ratio, are less dependent on linearity, as the first is based on z score normalization and the second on mean normalization.\n\nAs introduced earlier, human perception is considered a reliable benchmark for the evaluation of spatial performance metrics. More precisely, a metric can be regarded as reliable if it is able to emulate human vision. In order to establish a reliable benchmark dataset, Koch and Stisen (2017) conducted a citizen science project with the aim of quantifying spatial similarity scores based on human perception. Their study was based on over 6000 comparisons of simulated spatial patterns of land surface variables in the Skjern catchment. When compared to human perception, SPAEF provides a satisfying coefficient of determination of 0.73. In comparison, the coefficients of determination for connectivity, FSS and correlation are 0.48, 0.60 and 0.76, respectively.\n\nFigure 7. Tracking of the three SPAEF components throughout the 2500 conducted runs of four calibrations (SPAEF, correlation, cv ratio and histo match). 
The envelopes represent the 10th and 90th percentiles of a 100-run moving window; the line shows the median.\n\nFigure 7 highlights the evolution of the three SPAEF components by tracking their scores during the 2500 runs of four calibrations: SPAEF, correlation, cv ratio and histo match. Convergence can be observed for all components when calibrated against themselves or SPAEF. This underlines the fact that the choice to limit the optimizer to 2500 runs was reasonable for this study, but it may differ for other modelling studies. The results underline consistency because SPAEF provides the second best score for all components right after being calibrated against itself. Furthermore, the three components can be considered independent because optimizing against one component does not automatically lead to the improvement of another. This is especially the case for the cv ratio calibration, in which correlation stagnates and histo match decreases throughout the course of the 2500 runs.\n\nUncertainty in the observations should ideally be an integral part of model evaluation. The proposed calibration framework in this study deals implicitly with the issue of uncertainty. First, the daily snapshots of midday ET are averaged to a more robust monthly map, and second, the bias insensitivity of SPAEF alleviates the effect of uncertainties in the observations. Instead of assessing the exact values at the grid scale, SPAEF evaluates global characteristics such as distribution and variability, which are less affected by data uncertainty. For some applications, the bias insensitivity may be a hurdle when the model is expected to be unbiased. In such a case the SPAEF formulation (Eq. 2) could easily be extended by a fourth component, such as the bias term (γQ) from the KGE formulation (Eq. 1). Discharge observations are most commonly available for hydrological modelling studies. 
Such data can provide reliable information on the overall water balance, and when accompanied by spatial observations, the catchment internal variability of hydrological processes can be constrained as well.\n\n4 Conclusions\n\nThe complexity of spatially distributed hydrological models is currently increasing, as is the availability of satellite-based remote sensing observations. In light of the vast amount of existing remote sensing products in combination with recent developments, such as the promising Copernicus programme with its multi-satellite Sentinel missions (McCabe et al., 2017), the incorporation of detailed spatial data retrieved from remote sensing platforms will continue to enable grand opportunities for hydrological modelling in the near future.\n\nThis study aimed to contribute to that course by rigorously testing SPAEF, a simple and novel spatial performance metric which has the potential to advance the spatial-pattern-oriented validation and calibration of spatially distributed models. The applicability of SPAEF was tested in a hydrological context; however, its versatility makes it potentially beneficial throughout many disciplines of earth system modelling.\n\nWe applied SPAEF alongside its three components and two other spatial performance metrics (connectivity and FSS) in a calibration experiment of a mesoscale catchment (∼2500 km2) in Denmark. A satellite-retrieved map of latent heat, which represents the average evapotranspiration pattern of cloud-free days in June, was utilized beside discharge time series as the reference dataset. We draw the following main conclusions from this work.\n\nQuantifying spatial similarity is a non-trivial task and it requires taking several dimensions of spatial information simultaneously into consideration. The formulation of SPAEF is therefore based on three equally weighted components, i.e. 
correlation, ratio of the coefficient of variation and z-score histogram overlap between a simulated and an observed pattern. SPAEF reflects the Euclidean distance of the three components from the optimum, which is equivalent to the concept of a three-dimensional Pareto front. The components are bias insensitive and allow for the assessment of two variables that differ in units. Further, we could infer independent information content on the three components, which complement each other when used jointly as SPAEF.

SPAEF is straightforward to compute and has a predefined range between −∞ and 1, which simplifies communication with the scientific community and stakeholders. Nevertheless, more rigorous testing is required to further establish familiarity. The relationship between SPAEF and spatial variability has to be investigated in more detail for the purpose of putting the metric into context, i.e. comparing different catchments or models.

The right spatial performance metric alone is not enough to improve the spatial predictability of a distributed model through calibration. The metric has to be accompanied by an agile model structure and flexible parameterization, such as regionalization techniques by means of transfer functions, allowing the simulated pattern to adjust in a meaningful way. Naturally, this has to be further supported by high-quality forcing data, detailed catchment morphology and trustworthy spatial observations at an adequate scale.

The calibration exercise of the Skjern catchment highlighted the importance of incorporating spatial observations in the calibration of hydrological models, since the six conducted calibrations yielded strikingly different ET patterns while simulating similar discharge dynamics.
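To make the formulation concrete, the three components and their combination into SPAEF can be sketched as below. This is a minimal reading of the description above, not the authors' reference implementation (that is available in the linked SPAEF and SEEM repositories); the bin count for the z-score histograms is an assumption.

```python
import numpy as np

def spaef(obs, sim, n_bins=100):
    """Sketch of the SPAtial EFficiency metric for two spatial patterns."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    # alpha: Pearson correlation between observed and simulated pattern
    alpha = np.corrcoef(obs, sim)[0, 1]
    # beta: ratio of the coefficients of variation (spread, bias insensitive)
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # gamma: overlap of the z-score histograms (histogram intersection)
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    edges = np.histogram_bin_edges(np.concatenate([z_obs, z_sim]), bins=n_bins)
    k, _ = np.histogram(z_obs, bins=edges)
    l, _ = np.histogram(z_sim, bins=edges)
    gamma = np.minimum(k, l).sum() / k.sum()
    # Euclidean distance of (alpha, beta, gamma) from the ideal point (1, 1, 1)
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

A perfect pattern match gives SPAEF = 1, and a pattern that is merely rescaled by a constant factor still scores close to 1, reflecting the bias insensitivity discussed above.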
Based on our findings, bias-insensitive spatial metrics are ideally accompanied by bias-sensitive discharge metrics that secure the overall robustness in terms of water balance closure.

With this contribution we hope to encourage the modelling community to rethink paradigms when formulating calibration or validation experiments by choosing appropriate metrics that focus on spatial patterns representing earth system processes.

Code and data availability

The code for the applied spatial performance metrics is made available by Demirel et al. (2018b) at https://github.com/cuneyd/spaef and Koch (2018) at https://github.com/JulKoch/SEEM. The mHM code is freely accessible via GitHub at https://github.com/mhm-ufz/mhm (Samaniego et al., 2017a). All data used to produce the results of this paper will be provided upon request by contacting Julian Koch.

Competing interests

The authors declare that they have no conflict of interest.

Acknowledgements

The scientific work has been carried out under the SPACE (SPAtial Calibration and Evaluation in distributed hydrological modelling using satellite remote sensing data) project (grant VKR023443), which is funded by the Villum Foundation.

Edited by: Tomomichi Kato
Reviewed by: Naoki Mizukami and one anonymous referee

References

Alexandrov, G. A., Ames, D., Bellocchi, G., Bruen, M., Crout, N., Erechtchoukova, M., Hildebrandt, A., Hoffman, F., Jackisch, C., Khaiter, P., Mannina, G., Matsunaga, T., Purucker, S. T., Rivington, M., and Samaniego, L.: Technical assessment and evaluation of environmental models and software: Letter to the Editor, Environ. Model. Softw., 26, 328–336, https://doi.org/10.1016/j.envsoft.2010.08.004, 2011.

Bennett, N. D., Croke, B. F. W., Guariso, G., Guillaume, J. H. A., Hamilton, S. H., Jakeman, A. J., Marsili-Libelli, S., Newham, L. T. H., Norton, J. P., Perrin, C., Pierce, S.
A., Robson, B., Seppelt, R., Voinov, A. A., Fath, B. D., and Andreassian, V.: Characterising performance of environmental models, Environ. Model. Softw., 40, 1–20, https://doi.org/10.1016/j.envsoft.2012.09.011, 2013.

Brown, B. G., Gotway, J. H., Bullock, R., Gilleland, E., Fowler, T., Ahijevych, D., and Jensen, T.: The Model Evaluation Tools (MET): Community tools for forecast evaluation, in: Preprints, 25th Conf. on International Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Phoenix, AZ, Amer. Meteor. Soc. A, Vol. 9, 2009.

Clark, M. P., Kavetski, D., and Fenicia, F.: Pursuing the method of multiple working hypotheses for hydrological modeling, Water Resour. Res., 47, W09301, https://doi.org/10.1029/2010WR009827, 2011.

Cloke, H. L. and Pappenberger, F.: Evaluating forecasts of extreme events for hydrological applications: An approach for screening unfamiliar performance measures, Meteorol. Appl., 15, 181–197, 2008.

Corbari, C. and Mancini, M.: Calibration and Validation of a Distributed Energy–Water Balance Model Using Satellite Data of Land Surface Temperature and Ground Discharge Measurements, J. Hydrometeorol., 15, 376–392, https://doi.org/10.1175/JHM-D-12-0173.1, 2014.

Cuntz, M., Mai, J., Zink, M., Thober, S., Kumar, R., Schäfer, D., Schrön, M., Craven, J., Rakovec, O., Spieler, D., Prykhodko, V., Dalmasso, G., Musuuza, J., Langenberg, B., Attinger, S., and Samaniego, L.: Computationally inexpensive identification of noninformative model parameters by sequential screening, Water Resour. Res., 51, 6417–6441, https://doi.org/10.1002/2015WR016907, 2015.

Dawson, C. W., Abrahart, R. J., and See, L. M.: HydroTest: A web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts, Environ. Modell. Softw., 22, 1034–1052, https://doi.org/10.1016/j.envsoft.2006.06.008, 2007.

Demirel, M. C., Mai, J., Mendiguren, G., Koch, J., Samaniego, L., and Stisen, S.: Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model, Hydrol. Earth Syst. Sci., 22, 1299–1315, https://doi.org/10.5194/hess-22-1299-2018, 2018a.

Demirel, M. C., Stisen, S., and Koch, J.: SPAEF: SPAtial EFficiency, https://doi.org/10.5281/ZENODO.1158890, 2018b.

Doherty, J.: PEST: Model Independent Parameter Estimation. Fifth Edition of User Manual, Watermark Numerical Computing, Brisbane, 2005.

Dorninger, M., Mittermaier, M. P., Gilleland, E., Ebert, E. E., Brown, B. G., and Wilson, L. J.: MesoVICT: Mesoscale Verification Inter-Comparison over Complex Terrain, NCAR Technical Note NCAR/TN-505+STR, 23 pp., https://doi.org/10.5065/D6416V21, 2013.

Duan, Q. Y., Gupta, V. K., and Sorooshian, S.: Shuffled complex evolution approach for effective and efficient global minimization, J. Optimiz. Theory App., 76, 501–521, https://doi.org/10.1007/BF00939380, 1993.

Gilleland, E., Ahijevych, D., Brown, B. G., Casati, B., and Ebert, E. E.: Intercomparison of Spatial Forecast Verification Methods, Weather Forecast., 24, 1416–1430, 2009.

Gilleland, E., Bukovsky, M., Williams, C. L., McGinnis, S., Ammann, C. M., Brown, B. G., and Mearns, L. O.: Evaluating NARCCAP model performance for frequencies of severe-storm environments, Adv. Stat. Clim. Meteorol. Oceanogr., 2, 137–153, https://doi.org/10.5194/ascmo-2-137-2016, 2016.

Glaser, B., Klaus, J., Frei, S., Frentress, J., Pfister, L., and Hopp, L.: On the value of surface saturated area dynamics mapped with thermal infrared imagery for modeling the hillslope-riparian-stream continuum, Water Resour. Res., 52, 8317–8342, https://doi.org/10.1002/2015WR018414, 2016.

Grayson, R. and Blöschl, G.: Spatial patterns in catchment hydrology: observations and modelling, Cambridge University Press, 2001.

Grayson, R. B., Blöschl, G., Western, A. W., and McMahon, T.
A.: Advances in the use of observed spatial patterns of catchment hydrological response, Adv. Water Resour., 25, 1313–1334, https://doi.org/10.1016/s0309-1708(02)00060-x, 2002.

Gupta, H. V., Wagener, T., and Liu, Y. Q.: Reconciling theory with observations: elements of a diagnostic approach to model evaluation, Hydrol. Process., 22, 3802–3813, https://doi.org/10.1002/Hyp.6989, 2008.

Gupta, H. V., Clark, M. P., Vrugt, J. A., Abramowitz, G., and Ye, M.: Towards a comprehensive assessment of model structural adequacy, Water Resour. Res., 48, W08301, https://doi.org/10.1029/2011WR011044, 2012.

Hagen, A.: Fuzzy set approach to assessing similarity of categorical maps, Int. J. Geogr. Inf. Sci., 17, 235–249, https://doi.org/10.1080/13658810210157822, 2003.

Hagen, A. and Martens, P.: Map comparison methods for comprehensive assessment of geosimulation models, International Conference on Computational Science and Its Applications, Springer, Berlin, Heidelberg, 2008.

Herrera-Estrada, J. E., Satoh, Y., and Sheffield, J.: Spatiotemporal dynamics of global drought, Geophys. Res. Lett., 44, 2254–2263, https://doi.org/10.1002/2016GL071768, 2017.

Hovadik, J. M. and Larue, D. K.: Static characterizations of reservoirs: refining the concepts of connectivity and continuity, Petrol. Geosci., 13, 195–211, 2007.

Immerzeel, W. W. and Droogers, P.: Calibration of a distributed hydrological model based on satellite evapotranspiration, J. Hydrol., 349, 411–424, https://doi.org/10.1016/j.jhydrol.2007.11.017, 2008.

Jensen, K. H. and Illangasekare, T. H.: HOBE: A Hydrological Observatory, Vadose Zone J., 10, 1–7, https://doi.org/10.2136/vzj2011.0006, 2011.

Kling, H. and Gupta, H.: On the development of regionalization relationships for lumped watershed models: The impact of ignoring sub-basin scale variability, J. Hydrol., 373, 337–351, https://doi.org/10.1016/j.jhydrol.2009.04.031, 2009.

Kling, H., Fuchs, M., and Paulin, M.: Runoff conditions in the upper Danube basin under an ensemble of climate change scenarios, J. Hydrol., 424–425, 264–277, https://doi.org/10.1016/J.JHYDROL.2012.01.011, 2012.

Koch, J.: SEEM: Spatial Evaluation of Environmental Models, https://doi.org/10.5281/zenodo.1154614, 2018.

Koch, J. and Stisen, S.: Citizen science: A new perspective to advance spatial pattern evaluation in hydrology, PLoS One, 12, 1–20, https://doi.org/10.1371/journal.pone.0178165, 2017.

Koch, J., He, X., Jensen, K. H., and Refsgaard, J. C.: Challenges in conditioning a stochastic geological model of a heterogeneous glacial aquifer to a comprehensive soft data set, Hydrol. Earth Syst. Sci., 18, 2907–2923, https://doi.org/10.5194/hess-18-2907-2014, 2014.

Koch, J., Jensen, K. H., and Stisen, S.: Toward a true spatial model evaluation in distributed hydrological modeling: Kappa statistics, Fuzzy theory, and EOF-analysis benchmarked by the human perception and evaluated against a modeling case study, Water Resour. Res., 51, 1225–1246, https://doi.org/10.1002/2014WR016607, 2015.

Koch, J., Cornelissen, T., Fang, Z., Bogena, H., Diekkrüger, B., Kollet, S., and Stisen, S.: Inter-comparison of three distributed hydrological models with respect to seasonal variability of soil moisture patterns at a small forested catchment, J. Hydrol., 533, 234–249, https://doi.org/10.1016/j.jhydrol.2015.12.002, 2016a.

Koch, J., Siemann, A., Stisen, S., and Sheffield, J.: Spatial validation of large scale land surface models against monthly land surface temperature patterns using innovative performance metrics, J. Geophys. Res.-Atmos., 121, 5430–5452, https://doi.org/10.1002/2015JD024482, 2016b.

Koch, J., Mendiguren, G., Mariethoz, G., and Stisen, S.: Spatial sensitivity analysis of simulated land-surface patterns in a catchment model using a set of innovative spatial performance metrics, J. Hydrometeorol., 18, 1121–1142, https://doi.org/10.1175/JHM-D-16-0148.1, 2017.

Krause, P., Boyle, D. P., and Bäse, F.: Comparison of different efficiency criteria for hydrological model assessment, Adv. Geosci., 5, 89–97, https://doi.org/10.5194/adgeo-5-89-2005, 2005.

Kuhnert, M., Voinov, A., and Seppelt, R.: Comparing raster map comparison algorithms for spatial modeling and analysis, Photogramm. Eng. Remote Sensing, 71, 975–984, 2005.

Kumar, R., Samaniego, L., and Attinger, S.: The effects of spatial discretization and model parameterization on the prediction of extreme runoff characteristics, J. Hydrol., 392, 54–69, https://doi.org/10.1016/j.jhydrol.2010.07.047, 2010.

Kumar, R., Samaniego, L., and Attinger, S.: Implications of distributed hydrologic model parameterization on water fluxes at multiple scales and locations, Water Resour. Res., 49, 360–379, https://doi.org/10.1029/2012WR012195, 2013.

Kumar, S. V., Peters-Lidard, C. D., Santanello, J., Harrison, K., Liu, Y., and Shaw, M.: Land surface Verification Toolkit (LVT) – a generalized framework for land surface model evaluation, Geosci. Model Dev., 5, 869–886, https://doi.org/10.5194/gmd-5-869-2012, 2012.

McCabe, M. F., Wood, E. F., Wójcik, R., Pan, M., Sheffield, J., Gao, H., and Su, H.: Hydrological consistency using multi-sensor remote sensing data for water and energy cycle studies, Remote Sens. Environ., 112, 430–444, https://doi.org/10.1016/j.rse.2007.03.027, 2008.

McCabe, M. F., Rodell, M., Alsdorf, D. E., Miralles, D. G., Uijlenhoet, R., Wagner, W., Lucieer, A., Houborg, R., Verhoest, N. E. C., Franz, T. E., Shi, J., Gao, H., and Wood, E.
F.: The future of Earth observation in hydrology, Hydrol. Earth Syst. Sci., 21, 3879–3914, https://doi.org/10.5194/hess-21-3879-2017, 2017.

Mendiguren, G., Koch, J., and Stisen, S.: Spatial pattern evaluation of a calibrated national hydrological model – a remote-sensing-based diagnostic approach, Hydrol. Earth Syst. Sci., 21, 5987–6005, https://doi.org/10.5194/hess-21-5987-2017, 2017.

Mendoza, P. A., Clark, M. P., Barlage, M., Rajagopalan, B., Samaniego, L., Abramowitz, G., and Gupta, H.: Are we unnecessarily constraining the agility of complex process-based models?, Water Resour. Res., 51, 716–728, https://doi.org/10.1002/2014WR015820, 2015.

Mittermaier, M., Roberts, N., and Thompson, S. A.: A long-term assessment of precipitation forecast skill using the Fractions Skill Score, Meteorol. Appl., 20, 176–186, https://doi.org/10.1002/met.296, 2013.

Mizukami, N., Clark, M. P., Newman, A. J., Wood, A. W., Gutmann, E. D., Nijssen, B., Rakovec, O., and Samaniego, L.: Towards seamless large-domain parameter estimation for hydrologic models, Water Resour. Res., 53, 8020–8040, https://doi.org/10.1002/2017WR020401, 2017.

Moriasi, D. N., Arnold, J. G., Van Liew, M. W., Bingner, R. L., Harmel, R. D., and Veith, T. L.: Model Evaluation Guidelines for Systematic Quantification of Accuracy in Watershed Simulations, T. ASABE, 50, 885–900, https://doi.org/10.13031/2013.23153, 2007.

Norman, J. M., Kustas, W. P., and Humes, K. S.: Source approach for estimating soil and vegetation energy fluxes in observations of directional radiometric surface temperature, Agr. Forest Meteorol., 77, 263–293, https://doi.org/10.1016/0168-1923(95)02265-Y, 1995.

Orth, R., Dutra, E., Trigo, I. F., and Balsamo, G.: Advancing land surface model development with satellite-based Earth observations, Hydrol. Earth Syst. Sci., 21, 2483–2495, https://doi.org/10.5194/hess-21-2483-2017, 2017.

Pokhrel, P. and Gupta, H. V.: On the ability to infer spatial catchment variability using streamflow hydrographs, Water Resour. Res., 47, W08534, https://doi.org/10.1029/2010wr009873, 2011.

Refsgaard, J. C. and Henriksen, H. J.: Modelling guidelines – Terminology and guiding principles, Adv. Water Resour., 27, 71–82, https://doi.org/10.1016/j.advwatres.2003.08.006, 2004.

Refsgaard, J. C., Auken, E., Bamberg, C. A., Christensen, B. S. B., Clausen, T., Dalgaard, E., Effersø, F., Ernstsen, V., Gertz, F., Hansen, A. L., He, X., Jacobsen, B. H., Jensen, K. H., Jørgensen, F., Jørgensen, L. F., Koch, J., Nilsson, B., Petersen, C., De Schepper, G., Schamper, C., Sørensen, K. I., Therrien, R., Thirup, C., and Viezzoli, A.: Nitrate reduction in geologically heterogeneous catchments – A framework for assessing the scale of predictive capability of hydrological models, Sci. Total Environ., 468–469, 1278–1288, https://doi.org/10.1016/j.scitotenv.2013.07.042, 2014.

Renard, P. and Allard, D.: Connectivity metrics for subsurface flow and transport, Adv. Water Resour., 51, 168–196, https://doi.org/10.1016/j.advwatres.2011.12.001, 2013.

Roberts, N.: Assessing the spatial and temporal variation in the skill of precipitation forecasts from an NWP model, Meteorol. Appl., 15, 163–169, 2008.

Roberts, N. M. and Lean, H. W.: Scale-Selective Verification of Rainfall Accumulations from High-Resolution Forecasts of Convective Events, Mon. Weather Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1, 2008.

Rongier, G., Collon, P., Renard, P., Straubhaar, J., and Sausse, J.: Comparing connected structures in ensemble of random fields, Adv. Water Resour., 96, 145–169, https://doi.org/10.1016/j.advwatres.2016.07.008, 2016.

Ruiz-Pérez, G., González-Sanchis, M., Del Campo, A. D., and Francés, F.: Can a parsimonious model implemented with satellite data be used for modelling the vegetation dynamics and water cycle in water-controlled environments?, Ecol. Modell., 324, 45–53, https://doi.org/10.1016/j.ecolmodel.2016.01.002, 2016.

Samaniego, L., Kumar, R., and Attinger, S.: Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., 46, W05523, https://doi.org/10.1029/2008wr007327, 2010a.

Samaniego, L., Bardossy, A., and Kumar, R.: Streamflow prediction in ungauged catchments using copula-based dissimilarity measures, Water Resour. Res., 46, W02506, https://doi.org/10.1029/2008WR007695, 2010b.

Samaniego, L., Kumar, R., Mai, J., Zink, M., Thober, S., Cuntz, M., Rakovec, O., Schäfer, D., Schrön, M., Brenner, J., Demirel, C. M., Kaluza, M., Langenberg, B., Stisen, S., and Attinger, S.: mesoscale Hydrologic Model, https://doi.org/10.5281/ZENODO.1069203, 2017a.

Samaniego, L., Kumar, R., Thober, S., Rakovec, O., Zink, M., Wanders, N., Eisner, S., Müller Schmied, H., Sutanudjaja, E. H., Warrach-Sagi, K., and Attinger, S.: Toward seamless hydrologic predictions across spatial scales, Hydrol. Earth Syst. Sci., 21, 4323–4346, https://doi.org/10.5194/hess-21-4323-2017, 2017b.

Schaefli, B. and Gupta, H. V.: Do Nash values have value?, Hydrol. Process., 21, 2075–2080, 2007.

Schalge, B., Rihani, J., Baroni, G., Erdal, D., Geppert, G., Haefliger, V., Haese, B., Saavedra, P., Neuweiler, I., Hendricks Franssen, H.-J., Ament, F., Attinger, S., Cirpka, O. A., Kollet, S., Kunstmann, H., Vereecken, H., and Simmer, C.: High-Resolution Virtual Catchment Simulations of the Subsurface-Land Surface-Atmosphere System, Hydrol. Earth Syst. Sci. Discuss., https://doi.org/10.5194/hess-2016-557, 2016.

Schuurmans, J. M., van Geer, F. C., and Bierkens, M. F. P.: Remotely sensed latent heat fluxes for model error diagnosis: a case study, Hydrol. Earth Syst. Sci., 15, 759–769, https://doi.org/10.5194/hess-15-759-2011, 2011.

Stisen, S., McCabe, M. F., Refsgaard, J. C., Lerer, S., and Butts, M.
B.: Model parameter analysis using remotely sensed pattern information in a multi-constraint framework, J. Hydrol., 409, 337–349, https://doi.org/10.1016/j.jhydrol.2011.08.030, 2011.

Stisen, S., Sonnenborg, T. O., Refsgaard, J. C., Koch, J., Bircher, S., and Jensen, K. H.: Moving beyond runoff calibration – Multi-constraint optimization of a surface-subsurface-atmosphere model, Hydrol. Process., in revision, 2018.

Swain, M. J. and Ballard, D. H.: Color indexing, Int. J. Comput. Vis., 7, 11–32, https://doi.org/10.1007/BF00130487, 1991.

Terink, W., Lutz, A. F., Simons, G. W. H., Immerzeel, W. W., and Droogers, P.: SPHY v2.0: Spatial Processes in HYdrology, Geosci. Model Dev., 8, 2009–2034, https://doi.org/10.5194/gmd-8-2009-2015, 2015.

van Griensven, A., Meixner, T., Grunwald, S., Bishop, T., Diluzio, M., and Srinivasan, R.: A global sensitivity analysis tool for the parameters of multi-variable catchment models, J. Hydrol., 324, 10–23, https://doi.org/10.1016/j.jhydrol.2005.09.008, 2006.

Vereecken, H., Pachepsky, Y., Simmer, C., Rihani, J., Kunoth, A., Korres, W., Graf, A., Franssen, H. J.-H., Thiele-Eich, I., and Shao, Y.: On the role of patterns in understanding the functioning of soil-vegetation-atmosphere systems, J. Hydrol., 542, 63–86, https://doi.org/10.1016/j.jhydrol.2016.08.053, 2016.

Wealands, S. R., Grayson, R. B., and Walker, J. P.: Quantitative comparison of spatial fields for hydrological model assessment – some promising approaches, Adv. Water Resour., 28, 15–32, https://doi.org/10.1016/j.advwatres.2004.10.001, 2005.

Western, A. W., Blöschl, G., and Grayson, R. B.: Toward capturing hydrologically significant connectivity in spatial patterns, Water Resour. Res., 37, 83–97, 2001.

Wolff, J. K., Harrold, M., Fowler, T., Gotway, J. H., Nance, L., and Brown, B. G.: Beyond the Basics: Evaluating Model-Based Precipitation Forecasts Using Traditional, Spatial, and Object-Based Methods, Weather Forecast., 29, 1451–1472, https://doi.org/10.1175/WAF-D-13-00135.1, 2014.

Short summary

Our work addresses a key challenge in earth system modelling: how to optimally exploit the information contained in satellite remote sensing observations in the calibration of such models. For this we thoroughly test a number of measures that quantify the fit between an observed and a simulated spatial pattern. We acknowledge the difficulties associated with such a comparison and suggest using measures that regard multiple aspects of spatial information, i.e. magnitude and variability.
# CE 170: Environmental Engineering

In adsorption, molecules dissolved in a fluid preferentially accumulate at a solid surface. The dissolved substance is called the adsorbate; the solid is called the adsorbent. The fluid can be either liquid or gaseous. The cause of the preferential accumulation is thought to be weak physical and chemical bonds between the adsorbate and the adsorbent. Although the exact nature of these bonds isn't known, the end result is that adsorbate molecules are in a lower energy state on the surface than they are in the fluid.

## Equilibrium Isotherms

Adsorption is a reversible process. As some molecules attach themselves to the surface, others are kicked off back into the fluid. At equilibrium, the rate of movement of molecules onto the surface (adsorption) and the rate of movement off the surface (desorption) are equal. Both rates are dependent on concentrations. The concentration in the fluid is usually denoted by Ce, the mass concentration in mg/L or other such units. The concentration on the surface is denoted by x/m, where x is the mass of adsorbate adsorbed onto mass m of solid.

Several equations have been derived to describe the concentrations at equilibrium. The Langmuir isotherm is a theoretical equation:

$$\frac{x}{m} = \frac{k\,(x/m)_{\max}\,C_e}{1 + k C_e}$$

where k is an empirical coefficient and (x/m)max is the amount of adsorbate that would form a monomolecular layer on the solid adsorbent. In essence, it is the maximum mass of adsorbate that could be retained. Freundlich developed the following empirical isotherm:

$$\frac{x}{m} = K C_e^{1/n}$$

where K and 1/n are empirical constants. A third isotherm, which is commonly used in groundwater studies, is the Linear Partitioning isotherm:

$$\frac{x}{m} = K_p C_e$$

where Kp is an empirical coefficient. Although the three equations look different, they can be thought of as curve fits to the same phenomenon.
In the graph below, all three isotherms are plotted together. The coefficients used are listed in the table below.

| Isotherm   | Coefficient | Value |
|------------|-------------|-------|
| Langmuir   | k           | 0.1   |
| Langmuir   | (x/m)max    | 100   |
| Freundlich | K           | 14    |
| Freundlich | 1/n         | 0.55  |
| Linear     | Kp          | 8     |

As can be seen in this example, all three equations describe the region below Ce = 5 about equally well. Below Ce = 14, the Freundlich and the Langmuir isotherms give very similar results. For larger Ce values, kCe becomes larger than 1 and the denominator of the Langmuir isotherm approaches kCe. When it reaches kCe, the top and bottom kCe terms in the equation cancel and (x/m) = (x/m)max. You can see the Langmuir plot in the graph above start to lean over toward (x/m)max. Note that neither the Freundlich nor the Linear isotherm has an upper boundary. These isotherms should only be used in the Ce ranges for which they were calibrated.

The coefficients of the Freundlich isotherm can be determined from a fairly simple set of jar-test experiments in which equilibrium conditions are established. The concentration of adsorbate is measured before the addition of the solid adsorbent and again after equilibrium is established. Ce is then known directly, and x/m can be calculated from the drop in concentration resulting from the addition of the solid. With a set of Ce and x/m data pairs, the linearized form of the Freundlich equation can be plotted:

$$\frac{x}{m} = K C_e^{1/n} \quad\Rightarrow\quad \ln(x/m) = \ln K + \frac{1}{n}\ln C_e$$

Linear regression gives the slope of the line (= 1/n) and the y-intercept (= lnK).

## Applications

Adsorption is a common phenomenon. In the natural environment, it is a means by which compounds are immobilized. For example, nutrients (nitrogen and phosphorus compounds) adsorb to soil particles, and thus are available to plants. Without adsorption, many of these compounds would leach away from surface soils. In natural water bodies, adsorbed compounds move with the particles to which they are attached.
If the particles are large and heavy and settle easily, adsorbed pollutants can accumulate in sediments on the bottoms of lakes, rivers, and bays. By this mechanism, some pollutants are separated from swimming organisms. On the other hand, bottom-dwelling organisms receive a larger dose. Filter feeders (animals like shellfish whose diet consists of small particles filtered from the water) are put at particular risk. Pollutants also adsorb and desorb from soil particles in aquifers, complicating efforts to track and remove ground water pollution.

As a treatment process, adsorption is used to remove pollutants, particularly organic compounds, from both water and air. In these systems, the adsorbent of choice is mainly activated carbon. Activated carbon is charcoal that has been treated to increase its surface area and its affinity for target compounds. The activation process is a closely held trade secret in most cases. In drinking water treatment plants, activated carbon is used to remove compounds which cause tastes and odors. In ground water remediation plants, activated carbon is used to remove contaminants such as pesticides and solvents. As described below, there are two major methods of applying activated carbon in treatment systems.

## Powdered Activated Carbon (PAC)

When activated carbon (or any adsorbent) is added to a solution, the adsorbate moves from the dissolved phase to the solid surface. The concentration in the solution decreases until it reaches Ce. Adding more carbon causes the concentration in solution to decrease further. In a treatment plant, some additional process, such as settling or filtration, is needed to remove the PAC from the water after the adsorption has taken place.

When treating a water stream to meet a specified limit, operators just keep adding PAC until the desired effluent concentration (Ceff) is achieved. At the end, the PAC is in equilibrium with the desired effluent concentration.
This fact can be used to calculate the amount of PAC needed to treat a given volume (or flow) of water:

Mass of adsorbate to be removed (X) = (Cinf − Ceff)V
x/m = KCe^(1/n), where Ce is set at the effluent concentration, Ceff
PAC needed = X / (x/m)

## Granular Activated Carbon in Columns

The alternative method of treating a water (or air) stream is by passing the stream through a stationary bed of carbon. To limit head losses to a reasonable value, the particle size of the carbon is made larger. Accordingly, this is often called granular activated carbon (GAC). The most common configuration is a multi-stage column.

[Diagram: three GAC columns in series, labelled 1, 2, 3]

Imagine passing a continuous water stream with a given influent concentration Cinf through Column #1 above. Let's break the stream into a series of incremental volumes, dV. As the first dV passes through the carbon, much of the dissolved constituent moves to the carbon to establish equilibrium. That volume of treated water moves downstream, eventually to Column #2, and a new volume with concentration Cinf is brought into Column #1. As before, the carbon attempts to come into equilibrium with the higher dissolved concentration. Again, the treated water is replaced with water containing Cinf. Eventually, the carbon in Column #1 comes into equilibrium with Cinf. At this point, Column #1 isn't removing any more pollutant and it is replaced with fresh carbon. The amount of carbon used to treat a given volume (or flow) is calculated as before (see below):

Mass of adsorbate to be removed (X) = (Cinf − Ceff)V
x/m = KCe^(1/n), where Ce is set at the influent concentration, Cinf
GAC needed = X / (x/m)

Because Cinf is larger than Ceff, x/m for a column is larger than the x/m resulting from directly adding the carbon to the water. Larger x/m values are desired because less carbon is needed to treat a given volume of water.
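The carbon-requirement calculations above can be sketched numerically. The Freundlich coefficients are the ones from the earlier table (K = 14, 1/n = 0.55); the volume, concentrations, and units (mg, L, g) are made-up illustrative numbers, not values from the handout.

```python
# Freundlich coefficients from the table above (assumed units: x/m in mg/g, Ce in mg/L)
K, one_over_n = 14.0, 0.55

# Hypothetical treatment problem: 1000 m^3 (1e6 L) of water,
# influent 2.0 mg/L, required effluent 0.1 mg/L
V = 1.0e6                            # L
C_inf, C_eff = 2.0, 0.1              # mg/L

X = (C_inf - C_eff) * V              # mg of adsorbate to be removed

# Direct PAC addition: the carbon ends up in equilibrium with the EFFLUENT
xm_pac = K * C_eff ** one_over_n     # mg adsorbate per g carbon
m_pac = X / xm_pac                   # g of PAC

# GAC column: the spent carbon ends up in equilibrium with the INFLUENT
xm_gac = K * C_inf ** one_over_n
m_gac = X / xm_gac                   # g of GAC

print(f"PAC: {m_pac/1e3:.0f} kg, GAC column: {m_gac/1e3:.0f} kg")
```

Because the column carbon equilibrates with Cinf rather than Ceff, the computed GAC mass comes out several times smaller for these numbers, which is the advantage discussed above.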
In actuality, the economic advantage of columns in terms of carbon use is somewhat lessened by the fact that the K value for PAC is larger than that for GAC because of PAC's larger surface area. Nevertheless, GAC in columns has the operational advantage of not needing a separate step to take the PAC out of the water. For these reasons, columns are a more popular treatment method than direct addition of PAC.
https://en.wikipedia.org/wiki/Bohr_model
# Bohr model

Figure: The cake model of the hydrogen atom (Z = 1) or a hydrogen-like ion (Z > 1), where the negatively charged electron confined to an atomic shell encircles a small, positively charged atomic nucleus, and where an electron's jump between orbits is accompanied by an emitted or absorbed amount of electromagnetic energy (hν). The orbits in which the electron may travel are shown as grey circles; their radius increases as n², where n is the principal quantum number. The 3 → 2 transition depicted here produces the first line of the Balmer series, and for hydrogen (Z = 1) it results in a photon of wavelength 656 nm (red light).

In atomic physics, the Bohr model or Rutherford–Bohr model of the atom, presented by Niels Bohr and Ernest Rutherford in 1913, consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity. In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's solar system model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics.

The model's key success lay in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced.
Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.

The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress, where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory.

## Origin

In the early 20th century, experiments by Ernest Rutherford established that atoms consisted of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. Given this experimental data, Rutherford naturally considered a planetary model of the atom, the Rutherford model of 1911. This had electrons orbiting a solar nucleus, but involved a technical difficulty: the laws of classical mechanics (i.e. the Larmor formula) predict that the electron will release electromagnetic radiation while orbiting a nucleus. Because the electron would lose energy, it would rapidly spiral inwards, collapsing into the nucleus on a timescale of around 16 picoseconds. Rutherford's atom model is disastrous because it predicts that all atoms are unstable.
Also, as the electron spirals inward, the emission would rapidly increase in frequency due to the orbital period becoming shorter, resulting in electromagnetic radiation with a continuous spectrum. However, late 19th-century experiments with electric discharges had shown that atoms will only emit light (that is, electromagnetic radiation) at certain discrete frequencies. By the early twentieth century, it was expected that the atom would account for the spectral lines. In 1897, Lord Rayleigh analyzed the problem. By 1906, Rayleigh said, "the frequencies observed in the spectrum may not be frequencies of disturbance or of oscillation in the ordinary sense at all, but rather form an essential part of the original constitution of the atom as determined by conditions of stability."

The outline of Bohr's atom came during the proceedings of the first Solvay Conference in 1911 on the subject of Radiation and Quanta, at which Bohr's mentor, Rutherford, was present. Max Planck's lecture ended with this remark: "... atoms or electrons subject to the molecular bond would obey the laws of quantum theory". Hendrik Lorentz in the discussion of Planck's lecture raised the question of the composition of the atom based on Thomson's model, with a great portion of the discussion around the atomic model developed by Arthur Erich Haas. Lorentz explained that Planck's constant could be taken as determining the size of atoms, or that the size of atoms could be taken to determine Planck's constant.
Lorentz included comments regarding the emission and absorption of radiation, concluding that "A stationary state will be established in which the number of electrons entering their spheres is equal to the number of those leaving them." In the discussion of what could regulate energy differences between atoms, Max Planck simply stated: "The intermediaries could be the electrons." The discussions outlined the need for the quantum theory to be included in the atom and the difficulties in an atomic theory. Planck in his talk said explicitly: "In order for an oscillator [molecule or atom] to be able to provide radiation in accordance with the equation, it is necessary to introduce into the laws of its operation, as we have already said at the beginning of this Report, a particular physical hypothesis which is, on a fundamental point, in contradiction with classical Mechanics, explicitly or tacitly." Bohr's first paper on his atomic model quotes Planck almost word for word, saying: "Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i.e. Planck's constant, or as it often is called the elementary quantum of action." Bohr's footnote at the bottom of the page is to the French translation of the 1911 Solvay Congress, proving he patterned his model directly on the proceedings and fundamental principles laid down by Planck, Lorentz, and the quantized Arthur Haas model of the atom, which was mentioned seventeen times. Lorentz ended the discussion of Einstein's talk explaining: "The assumption that this energy must be a multiple of $h\nu$ leads to the following formula, where $n$ is an integer: $qv^2 = nh\nu$." Rutherford could have outlined these points to Bohr or given him a copy of the proceedings since he quoted from them and used them as a reference.
In a later interview, Bohr said it was very interesting to hear Rutherford's remarks about the Solvay Congress. But Bohr said, "I saw the actual reports" of the Solvay Congress.

Then in 1912, Bohr came across the John William Nicholson theory of the atom model that quantized angular momentum as h/2π. According to a centennial celebration of the Bohr atom in Nature magazine, it was Nicholson who discovered that electrons radiate the spectral lines as they descend towards the nucleus, and his theory was both nuclear and quantum. Niels Bohr quoted him in his 1913 paper of the Bohr model of the atom. The importance of the work of Nicholson's nuclear quantum atomic model on Bohr's model has been emphasized by many historians.

Next, Bohr was told by his friend, Hans Hansen, that the Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885 that described wavelengths of some spectral lines of hydrogen. This was further generalized by Johannes Rydberg in 1888, resulting in what is now known as the Rydberg formula. After this, Bohr declared, "everything became clear".

To overcome the problems of Rutherford's atom, in 1913 Niels Bohr put forth three postulates that sum up most of his model:

1. The electron is able to revolve in certain stable orbits around the nucleus without radiating any energy, contrary to what classical electromagnetism suggests. These stable orbits are called stationary orbits and are attained at certain discrete distances from the nucleus. The electron cannot have any other orbit in between the discrete ones.
2. The stationary orbits are attained at distances for which the angular momentum of the revolving electron is an integer multiple of the reduced Planck constant: $m_\mathrm{e} v r = n\hbar$, where n = 1, 2, 3, ... is called the principal quantum number, and ħ = h/2π.
The lowest value of n is 1; this gives the smallest possible orbital radius of 0.0529 nm, known as the Bohr radius. Once an electron is in this lowest orbit, it can get no closer to the nucleus. Starting from the angular momentum quantum rule, which as Bohr admits was previously given by Nicholson in his 1912 paper, Bohr was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation.

3. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation: $\Delta E = E_2 - E_1 = h\nu$, where h is Planck's constant.

Other points are:

1. Like Einstein's theory of the photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons.
2. According to the Maxwell theory, the frequency ν of classical radiation is equal to the rotation frequency ν_rot of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels $E_n$ and $E_{n-k}$ when k is much smaller than n. These jumps reproduce the frequency of the k-th harmonic of orbit n.
For sufficiently large values of n (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n (or large k), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers.

3. The Bohr–Kramers–Slater theory (BKS theory) is a failed attempt to extend the Bohr model, which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average.

Bohr's condition, that the angular momentum is an integer multiple of ħ, was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:

$$n\lambda = 2\pi r.$$

According to de Broglie's hypothesis, matter particles such as the electron behave as waves. The de Broglie wavelength of an electron is

$$\lambda = \frac{h}{mv},$$

which implies that

$$\frac{nh}{mv} = 2\pi r,$$

or

$$\frac{nh}{2\pi} = mvr,$$

where $mvr$ is the angular momentum of the orbiting electron. Writing $\ell$ for this angular momentum, the previous equation becomes

$$\ell = \frac{nh}{2\pi},$$

which is Bohr's second postulate.

Bohr described the angular momentum of the electron orbit as an integer multiple of h/2π, while de Broglie's wavelength λ = h/p described h divided by the electron momentum. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation.
In 1913, the wave behavior of matter particles such as the electron was not suspected.

In 1925, a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge.

## Electron energy levels

The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only involves one-electron systems such as the hydrogen atom, singly ionized helium, and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons.

Calculation of the orbits requires two assumptions.

- Classical mechanics: the electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force,

$$\frac{m_\mathrm{e} v^2}{r} = \frac{Z k_\mathrm{e} e^2}{r^2},$$

where $m_\mathrm{e}$ is the electron's mass, e is the elementary charge, $k_\mathrm{e}$ is the Coulomb constant and Z is the atom's atomic number. It is assumed here that the mass of the nucleus is much larger than the electron mass (which is a good assumption).
This equation determines the electron's speed at any radius:

$$v = \sqrt{\frac{Z k_\mathrm{e} e^2}{m_\mathrm{e} r}}.$$

It also determines the electron's total energy at any radius:

$$E = -\frac{1}{2} m_\mathrm{e} v^2.$$

The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, the difference being the kinetic energy of the electron. This is also true for noncircular orbits by the virial theorem.

- A quantum rule: the angular momentum $L = m_\mathrm{e} v r$ is an integer multiple of ħ,

$$m_\mathrm{e} v r = n\hbar.$$

### Derivation

If an electron in an atom is moving on an orbit with period T, classically the electromagnetic radiation will repeat itself every orbital period. If the coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, the radiation will be emitted in a pattern which repeats every period, so that the Fourier transform will have frequencies which are only multiples of 1/T. This is the classical radiation law: the frequencies emitted are integer multiples of 1/T.

In quantum mechanics, this emission must be in quanta of light, of frequencies consisting of integer multiples of 1/T, so that classical mechanics is an approximate description at large quantum numbers.
This means that the energy level corresponding to a classical orbit of period 1/T must have nearby energy levels which differ in energy by h/T, and they should be equally spaced near that level:

$$\Delta E_n = \frac{h}{T(E_n)}.$$

Bohr worried whether the energy spacing 1/T should be best calculated with the period of the energy state $E_n$, or $E_{n+1}$, or some average; in hindsight, this model is only the leading semiclassical approximation.

Bohr considered circular orbits. Classically, these orbits must decay to smaller circles when photons are emitted. The level spacing between circular orbits can be calculated with the correspondence formula. For a hydrogen atom, the classical orbits have a period T determined by Kepler's third law to scale as $r^{3/2}$. The energy scales as 1/r, so the level spacing formula amounts to

$$\Delta E \propto \frac{1}{r^{3/2}} \propto E^{3/2}.$$

It is possible to determine the energy levels by recursively stepping down orbit by orbit, but there is a shortcut.

The angular momentum L of the circular orbit scales as $\sqrt{r}$. The energy in terms of the angular momentum is then

$$E \propto \frac{1}{r} \propto \frac{1}{L^2}.$$

Assuming, with Bohr, that quantized values of L are equally spaced, the spacing between neighboring energies is

$$\Delta E \propto \frac{1}{(L+\hbar)^2} - \frac{1}{L^2} \approx -\frac{2\hbar}{L^3} \propto -E^{3/2}.$$

This is as desired for equally spaced angular momenta.
If one kept track of the constants, the spacing would be ħ, so the angular momentum should be an integer multiple of ħ:

$$L = \frac{nh}{2\pi} = n\hbar.$$

This is how Bohr arrived at his model.

Substituting the expression for the velocity gives an equation for r in terms of n:

$$m_\mathrm{e} \sqrt{\frac{k_\mathrm{e} Z e^2}{m_\mathrm{e} r}}\, r = n\hbar,$$

so that the allowed orbit radius at any n is

$$r_n = \frac{n^2 \hbar^2}{Z k_\mathrm{e} e^2 m_\mathrm{e}}.$$

The smallest possible value of r in the hydrogen atom (Z = 1) is called the Bohr radius and is equal to

$$r_1 = \frac{\hbar^2}{k_\mathrm{e} e^2 m_\mathrm{e}} \approx 5.29 \times 10^{-11}~\mathrm{m}.$$

The energy of the n-th level for any atom is determined by the radius and quantum number:

$$E = -\frac{Z k_\mathrm{e} e^2}{2 r_n} = -\frac{Z^2 (k_\mathrm{e} e^2)^2 m_\mathrm{e}}{2\hbar^2 n^2} \approx \frac{-13.6 Z^2}{n^2}~\mathrm{eV}.$$

An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom.
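The radius and energy formulas are easy to check numerically; a minimal sketch using CODATA constant values (helper names are mine):

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
ME   = 9.1093837015e-31  # electron mass, kg
KE   = 8.9875517923e9    # Coulomb constant, N*m^2/C^2
QE   = 1.602176634e-19   # elementary charge, C (also joules per eV)

def bohr_radius_m(n=1, Z=1):
    """r_n = n^2 hbar^2 / (Z k_e e^2 m_e), in metres."""
    return n**2 * HBAR**2 / (Z * KE * QE**2 * ME)

def bohr_energy_eV(n, Z=1):
    """E_n = -Z^2 (k_e e^2)^2 m_e / (2 hbar^2 n^2), converted to eV."""
    e_joules = -Z**2 * (KE * QE**2)**2 * ME / (2 * HBAR**2 * n**2)
    return e_joules / QE
```

Running this reproduces the quoted values: `bohr_radius_m()` is about 5.29 × 10⁻¹¹ m, and `bohr_energy_eV(1)`, `bohr_energy_eV(2)`, `bohr_energy_eV(3)` come out near −13.6, −3.4, and −1.51 eV.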
The hydrogen formula also coincides with the Wallis product.

The combination of natural constants in the energy formula is called the Rydberg energy (R_E):

$$R_\mathrm{E} = \frac{(k_\mathrm{e} e^2)^2 m_\mathrm{e}}{2\hbar^2}.$$

This expression is clarified by interpreting it in combinations that form more natural units:

- $m_\mathrm{e} c^2$ is the rest mass energy of the electron (511 keV),
- $\frac{k_\mathrm{e} e^2}{\hbar c} = \alpha \approx \frac{1}{137}$ is the fine-structure constant,
- $R_\mathrm{E} = \frac{1}{2}(m_\mathrm{e} c^2)\alpha^2$.

Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Ze, where Z is the atomic number. This will now give us energy levels for hydrogenic (hydrogen-like) atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So for nuclei with Z protons, the energy levels are (to a rough approximation):

$$E_n = -\frac{Z^2 R_\mathrm{E}}{n^2}.$$

The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb force.

When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z² cancels the α² in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge.
Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei.

The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron:

$$m_\text{red} = \frac{m_\mathrm{e} m_\mathrm{p}}{m_\mathrm{e} + m_\mathrm{p}} = m_\mathrm{e} \frac{1}{1 + m_\mathrm{e}/m_\mathrm{p}}.$$

However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1+1836.1) = 0.99946. This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4.

For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus:

$$E_n = -\frac{R_\mathrm{E}}{2n^2} \quad \text{(positronium)}.$$

## Rydberg formula

The Rydberg formula, which was known empirically before Bohr's formula, is seen in Bohr's theory as describing the energies of transitions or quantum jumps between orbital energy levels.
Bohr's formula gives the numerical value of the already-known and measured Rydberg constant, but in terms of more fundamental constants of nature, including the electron's charge and the Planck constant.

When the electron gets moved from its original energy level to a higher one, it then jumps back each level until it comes to the original position, which results in a photon being emitted. Using the derived formula for the different energy levels of hydrogen, one may determine the wavelengths of light that a hydrogen atom can emit.

The energy of a photon emitted by a hydrogen atom is given by the difference of two hydrogen energy levels:

$$E = E_i - E_f = R_\mathrm{E}\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right),$$

where $n_f$ is the final energy level, and $n_i$ is the initial energy level.

Since the energy of a photon is

$$E = \frac{hc}{\lambda},$$

the wavelength of the photon given off is given by

$$\frac{1}{\lambda} = R\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right).$$

This is known as the Rydberg formula, and the Rydberg constant R is $R_\mathrm{E}/hc$, or $R_\mathrm{E}/2\pi$ in natural units. This formula was known in the nineteenth century to scientists studying spectroscopy, but there was no theoretical explanation for this form or a theoretical prediction for the value of R, until Bohr.
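Putting R = R_E/hc into the Rydberg formula reproduces the hydrogen lines, e.g. the 656 nm Balmer line mentioned in the figure caption. A sketch with CODATA constant values, including the reduced-mass correction discussed earlier (function name is mine):

```python
H    = 6.62607015e-34     # Planck constant, J*s
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
C    = 2.99792458e8       # speed of light, m/s
ME   = 9.1093837015e-31   # electron mass, kg
MP   = 1.67262192369e-27  # proton mass, kg
KE   = 8.9875517923e9     # Coulomb constant, N*m^2/C^2
QE   = 1.602176634e-19    # elementary charge, C

def hydrogen_line_nm(n_i, n_f):
    """Vacuum wavelength (nm) of the hydrogen n_i -> n_f emission line."""
    m_red = ME * MP / (ME + MP)                     # reduced e-p mass
    r_e = (KE * QE**2)**2 * m_red / (2 * HBAR**2)   # Rydberg energy, J
    r = r_e / (H * C)                               # Rydberg constant, 1/m
    inv_lam = r * (1.0 / n_f**2 - 1.0 / n_i**2)
    return 1e9 / inv_lam
```

`hydrogen_line_nm(3, 2)` gives roughly 656 nm (the Balmer Hα line) and `hydrogen_line_nm(2, 1)` roughly 122 nm (Lyman α).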
In fact, Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman (n_f = 1), Balmer (n_f = 2), and Paschen (n_f = 3) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted.

To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing Z with Z − b or n with n − b, where b is a constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model.

## Shell model (heavier atoms)

Bohr's original three papers in 1913 described mainly the electron configuration in lighter elements. Bohr called his electron shells "rings" in 1913. Atomic orbitals within shells did not exist at the time of his planetary model. Bohr explains in Part 3 of his famous 1913 paper that the maximum number of electrons in a shell is eight, writing: "We see, further, that a ring of n electrons cannot rotate in a single ring round a nucleus of charge ne unless n < 8." For smaller atoms, the electron shells would be filled as follows: "rings of electrons will only join together if they contain equal numbers of electrons; and that accordingly the numbers of electrons on inner rings will only be 2, 4, 8". However, in larger atoms the innermost shell would contain eight electrons: "on the other hand, the periodic system of the elements strongly suggests that already in neon N = 10 an inner ring of eight electrons will occur".
Bohr wrote \"From the above we are led to the following possible scheme for the arrangement of the electrons in light atoms:\"\n\nBohr's 1913 proposed configurations\nElement Electrons per shell Element Electrons per shell Element Electrons per shell\n1 1 9 4, 4, 1 17 8, 4, 4, 1\n2 2 10 8, 2 18 8, 8, 2\n3 2, 1 11 8, 2, 1 19 8, 8, 2, 1\n4 2, 2 12 8, 2, 2 20 8, 8, 2, 2\n5 2, 3 13 8, 2, 3 21 8, 8, 2, 3\n6 2, 4 14 8, 2, 4 22 8, 8, 2, 4\n7 4, 3 15 8, 4, 3 23 8, 8, 4, 3\n8 4, 2, 2 16 8, 4, 2, 2 24 8, 8, 4, 2, 2\n\nIn Bohr's third 1913 paper Part III called \"Systems Containing Several Nuclei\", he says that two atoms form molecules on a symmetrical plane and he reverts to describing hydrogen. The 1913 Bohr model did not discuss higher elements in detail and John William Nicholson was one of the first to prove in 1914 that it couldn't work for lithium, but was an attractive theory for hydrogen and ionized helium.\n\nIn 1921, following the work of chemists and others involved in work on the periodic table, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture that reproduced many known atomic properties for the first time although these properties were proposed contemporarily with the identical work of chemist Charles Rugeley Bury\n\nBohr's partner in research during 1914 to 1916 was Walther Kossel who corrected Bohr's work to show that electrons interacted through the outer rings, and Kossel called the rings: “shells.” Irving Langmuir is credited with the first viable arrangement of electrons in shells with only two in the first shell and going up to eight in the next according to the octet rule of 1904, although Kossel had already predicted a maximum of eight per shell in 1916. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr took from these chemists the idea that each discrete orbit could only hold a certain number of electrons. 
Per Kossel, once the orbit is full, the next level would have to be used. This gives the atom a shell structure designed by Kossel, Langmuir, and Bury, in which each shell corresponds to a Bohr orbit.

This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also move around the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit.

For example, the lithium atom has two electrons in the lowest 1s orbit, and these orbit at Z = 2. Each one sees the nuclear charge of Z = 3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/2 the Bohr radius. The outermost electron in lithium orbits at roughly the Bohr radius, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. But Moseley's law experimentally probes the innermost pair of electrons, and shows that they do see a nuclear charge of approximately Z − 1, while the outermost electron in an atom or ion with only one electron in the outermost shell orbits a core with effective charge Z − k, where k is the total number of electrons in the inner shells.

The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements.
One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas).

In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, which explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models and which are difficult to calculate even in the modern treatment.

## Moseley's law and calculation (K-alpha X-ray emission lines)

Niels Bohr said in 1962: "You see actually the Rutherford work was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place.
The great change came from Moseley.\"\n\nIn 1913, Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line), and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg's formula and later Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models, as these had been published before Moseley's work, and Moseley's 1913 paper was published the same month as the first Bohr model paper). The derivation required two additional assumptions: that this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z − 1)².\n\nMoseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost \"K\" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation.\n\nIt was Walther Kossel in 1914 and in 1916 who explained that in the periodic table new elements would be created as electrons were added to the outer shell. In Kossel's paper, he writes: “This leads to the conclusion that the electrons, which are added further, should be put into concentric rings or shells, on each of which ... only a certain number of electrons—namely, eight in our case—should be arranged. 
As soon as one ring or shell is completed, a new one has to be started for the next element; the number of electrons, which are most easily accessible, and lie at the outermost periphery, increases again from element to element and, therefore, in the formation of each new shell the chemical periodicity is repeated.” Later, chemist Langmuir realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In his 1919 paper, Irving Langmuir postulated the existence of \"cells\" which could each contain only two electrons, and these were arranged in \"equidistant layers”.\n\nIn the Moseley experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus, when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z, and lower it by −1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines,\n\n$E=h\nu =E_{i}-E_{f}=R_{\mathrm {E} }(Z-1)^{2}\left({\frac {1}{1^{2}}}-{\frac {1}{2^{2}}}\right),$\n\nor\n\n$f=\nu =R_{\mathrm {v} }\left({\frac {3}{4}}\right)(Z-1)^{2}=(2.46\times 10^{15}~{\text{Hz}})(Z-1)^{2}.$\n\nHere, R_v = R_E/h is the Rydberg constant, in terms of frequency equal to 3.28 × 10^15 Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). 
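As a quick numerical illustration of the frequency form of Moseley's law above (the choice of copper, Z = 29, is an arbitrary example, not taken from the article):

```python
def k_alpha_frequency(z, rydberg_hz=3.28e15):
    """K-alpha frequency from Moseley's law: f = R_v * (3/4) * (Z - 1)**2."""
    return rydberg_hz * 0.75 * (z - 1) ** 2

f_cu = k_alpha_frequency(29)  # copper, Z = 29
print(f"{f_cu:.2e} Hz")       # on the order of 1.9e18 Hz: hard X-rays
```

The square-root-of-frequency vs. Z plot Moseley used is linear precisely because f scales with (Z − 1)².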
Notwithstanding its restricted validity, Moseley's law not only established the objective meaning of atomic number, but as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. Van den Broek had published his model in January 1913 showing the periodic table was arranged according to charge, while Bohr's atomic model was not published until July 1913.\n\nThe K-alpha line of Moseley's time is now known to be a pair of close lines, written as Kα1 and Kα2 in Siegbahn notation.\n\n## Shortcomings\n\nThe Bohr model gives an incorrect value L=ħ for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero from experiment. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern \"orbital\" with no orbital momentum may be thought of as not rotating \"around\" the nucleus at all, but merely as going tightly around it in an ellipse with zero area (this may be pictured as \"back and forth\", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric – it doesn't point in any particular direction.\n\nNevertheless, in the modern fully quantum treatment in phase space, the proper deformation (careful full extension) of the semi-classical result adjusts the angular momentum value to the correct effective one. 
As a consequence, the physical ground state expression is obtained through a shift of the vanishing quantum angular momentum expression, which corresponds to spherical symmetry.\n\nIn modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability that grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree is considered a \"coincidence\". (However, many such coincidental agreements are found between the semiclassical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom and the derivation of a fine-structure constant, which arises from the relativistic Bohr–Sommerfeld model (see below) and which happens to be equal to an entirely different concept, in full modern quantum mechanics).\n\nThe Bohr model also has difficulty with, or else fails to explain:\n\n• Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made. Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron–nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz–Rydberg combination principles (see Rydberg formula). 
All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom.\n• The relative intensities of spectral lines; although in some simple cases, Bohr's formula, or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect).\n• The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin.\n• The Zeeman effect – changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields.\n• The model also violates the uncertainty principle in that it considers electrons to have known orbits and locations, two things which cannot be measured simultaneously.\n• Doublets and triplets appear in the spectra of some atoms as very close pairs of lines. Bohr's model cannot say why some energy levels should be very close together.\n• Multi-electron atoms do not have energy levels predicted by the model. It does not work for (neutral) helium.\n\n## Refinements\n\nSeveral enhancements to the Bohr model were proposed, most notably the Sommerfeld or Bohr–Sommerfeld models, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Wilson–Sommerfeld quantization condition\n\n$\int _{0}^{T}p_{r}\,dq_{r}=nh,$\n\nwhere p_r is the radial momentum canonically conjugate to the coordinate q_r, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. 
This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.\n\nThe Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.\n\nHowever, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. 
The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.\n\nThe Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.\n\nBohr also updated his model in 1922, assuming that certain numbers of electrons (for example, 2, 8, and 18) correspond to stable \"closed shells\".\n\n## Model of the chemical bond\n\nNiels Bohr proposed a model of the atom and a model of the chemical bond. According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance of forces between the forces of attraction of nuclei to the plane of the ring of electrons and the forces of mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion – the electrons in the ring are at the maximum distance from each other." ]
https://forums.wolfram.com/mathgroup/archive/1999/Jan/msg00326.html
[ "Transc. Eqn - Symb. Iterative Sol'n.?\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg15639] Transc. Eqn - Symb. Iterative Sol'n.?\n• From: Eric Strobel <EStrobel at schafercorp.com>\n• Date: Sat, 30 Jan 1999 04:28:37 -0500 (EST)\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Sorry for the abbreviations...\n\nI think I'm just being dense and when I see the answer, I'll emit a hearty \"DOH!!!\", but here goes:\n\nProblem: I *think* it should be relatively straightforward to write a Block/Module to do a symbolic Newton-Raphson (or similar such thing) solution of a transcendental equation. But I just can't seem to figure out how to write it. [I've been using Mathematica for years, but never got into programming...] I'd like to be able to have the Block serve as a function, and have an input specifying the number of iterations to do.\n\nExample: The prototypical example would be Kepler's problem, M = E - e Sin[E], where M = mean anomaly, e = eccentricity and E = eccentric anomaly. One will sometimes run across approximate solutions in powers of e, for e small -- these are usually out to the e^2 or e^3 terms and appear to have been done by my suggested process (doing the first few iterations of a Newton sol'n. symbolically).\n\nI'm interested in being able to take this to higher orders, and for my particular equation (of course). I bring up Kepler's problem both because it is reminiscent of mine, and because it might be of more general interest, particularly to the college students among us.\n\nThanks.\n\n- Eric.\n\n```" ]
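The iterative symbolic scheme the poster describes can be sketched as follows, shown here in Python with SymPy rather than Mathematica (the variable names are mine): repeatedly substitute the current approximation into the right-hand side of E = M + e·sin(E) and truncate at a fixed order in the eccentricity e.

```python
import sympy as sp

M, e = sp.symbols('M e')

# Kepler's equation: M = E - e*sin(E).  Fixed-point iteration
# E <- M + e*sin(E), truncated at order e^3; each pass gains one
# correct order in the small eccentricity e.
E = M
for _ in range(4):
    E = sp.series(M + e * sp.sin(E), e, 0, 4).removeO()
    E = sp.expand(E)

print(E)
```

After the loop, the e and e² terms reproduce, up to trig identities, the standard low-order expansion E ≈ M + e sin M + (e²/2) sin 2M that the post mentions; raising the truncation order and iteration count takes the series higher.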
https://www.rdocumentation.org/packages/grDevices/versions/3.6.2/topics/xy.coords
[ "grDevices (version 3.6.2)\n\n# xy.coords: Extracting Plotting Structures\n\n## Description\n\n`xy.coords` is used by many functions to obtain x and y coordinates for plotting. The use of this common mechanism across all relevant R functions produces a measure of consistency.\n\n## Usage\n\n```xy.coords(x, y = NULL, xlab = NULL, ylab = NULL, log = NULL,\nrecycle = FALSE, setLab = TRUE)```\n\n## Arguments\n\nx, y\n\nthe x and y coordinates of a set of points. Alternatively, a single argument `x` can be provided.\n\nxlab, ylab\n\nnames for the x and y variables to be extracted.\n\nlog\n\ncharacter, `\"x\"`, `\"y\"` or both, as for `plot`. Sets negative values to `NA` and gives a warning.\n\nrecycle\n\nlogical; if `TRUE`, recycle (`rep`) the shorter of `x` or `y` if their lengths differ.\n\nsetLab\n\nlogical indicating if the resulting `xlab` and `ylab` should be constructed from the “kind” of `(x,y)`; otherwise, the arguments `xlab` and `ylab` are used.\n\n## Value\n\nA list with the components\n\nx\n\nnumeric (i.e., `\"double\"`) vector of abscissa values.\n\ny\n\nnumeric vector of the same length as `x`.\n\nxlab\n\n`character(1)` or `NULL`, the ‘label’ of `x`.\n\nylab\n\n`character(1)` or `NULL`, the ‘label’ of `y`.\n\n## Details\n\nAn attempt is made to interpret the arguments `x` and `y` in a way suitable for bivariate plotting (or other bivariate procedures).\n\nIf `y` is `NULL` and `x` is a\n\nformula:\n\nof the form `yvar ~ xvar`. `xvar` and `yvar` are used as x and y variables.\n\nlist:\n\ncontaining components `x` and `y`, these are used to define plotting coordinates.\n\ntime series:\n\nthe x values are taken to be `time(x)` and the y values to be the time series.\n\nmatrix or `data.frame` with two or more columns:\n\nthe first is assumed to contain the x values and the second the y values. 
Note that this is also true if `x` has columns named \"x\" and \"y\"; these names will be irrelevant here.\n\nIn any other case, the `x` argument is coerced to a vector and returned as the y component, where the resulting `x` is just the index vector `1:n`. In this case, the resulting `xlab` component is set to `\"Index\"` (if `setLab` is true, as by default).\n\nIf `x` (after transformation as above) inherits from class `\"POSIXt\"` it is coerced to class `\"POSIXct\"`.\n\n`plot.default`, `lines`, `points` and `lowess` are examples of functions which use this mechanism.\n\n## Examples\n\nRun this code\n```# NOT RUN {\nff <- stats::fft(1:9)\nxy.coords(ff)\nxy.coords(ff, xlab = \"fft\") # labels \"Re(fft)\", \"Im(fft)\"\n# }\n# NOT RUN {\nwith(cars, xy.coords(dist ~ speed, NULL)$xlab ) # = \"speed\"\n\nxy.coords(1:3, 1:2, recycle = TRUE) # otherwise error \"lengths differ\"\nxy.coords(-2:10, log = \"y\")\n##> xlab: \"Index\" \\\\ warning: 3 y values <= 0 omitted ..\n# }\n```" ]
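For comparison, the dispatch rules described above can be mimicked in a few lines of Python. This is only an analogy to the R behaviour; the helper name and return structure are invented for illustration and belong to no library:

```python
def xy_coords(x, y=None):
    """Loose Python analogue of R's xy.coords dispatch (illustrative only)."""
    if y is not None:                                   # explicit x and y
        return {"x": list(x), "y": list(y), "xlab": None, "ylab": None}
    if isinstance(x, dict) and {"x", "y"} <= x.keys():  # list with x/y components
        return {"x": list(x["x"]), "y": list(x["y"]), "xlab": None, "ylab": None}
    if isinstance(x, (list, tuple)) and x and isinstance(x[0], (list, tuple)):
        xs, ys = zip(*(row[:2] for row in x))           # first two matrix columns
        return {"x": list(xs), "y": list(ys), "xlab": None, "ylab": None}
    # any other case: x supplies the y values, indexed 1..n ("Index" label)
    return {"x": list(range(1, len(x) + 1)), "y": list(x),
            "xlab": "Index", "ylab": None}

print(xy_coords([3, 1, 4]))   # x becomes the index vector 1:3, xlab "Index"
```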
https://kr.mathworks.com/matlabcentral/cody/problems/2868-matlab-basics-y-as-a-function-of-x/solutions/1976063
[ "Cody\n\n# Problem 2868. Matlab Basics - y as a function of x\n\nSolution 1976063\n\nSubmitted on 14 Oct 2019 by Nguyen manh Duy\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 1; y_correct = 9; assert(isequal(y_fun_1(x),y_correct))\n\n2   Pass\nx = 3; y_correct = 67; assert(isequal(y_fun_1(x),y_correct))\n\n3   Pass\nx = 3.2; y_correct = 75.440000000000010; assert(isequal(y_fun_1(x),y_correct))" ]
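The three test cases above pin down a unique quadratic: they are consistent with y = 6x² + 5x − 2. This is inferred from the test values alone, since the official problem statement is not shown here. A quick check in Python:

```python
def y_fun_1(x):
    # Quadratic inferred from the three Cody test cases (not the official text).
    return 6 * x**2 + 5 * x - 2

assert y_fun_1(1) == 9
assert y_fun_1(3) == 67
assert abs(y_fun_1(3.2) - 75.44) < 1e-9   # floating-point case
```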
https://helpdesk.homeseer.com/exportword?pageId=17467044
[ "Hubs\n\n# Hubs\n\nHere you will find information regarding our current lineup of HomeTrollers: the Pi, Plus, and Pro Hubs.\n\n## Browse by Controller\n\n### HomeTroller Pi\n\nSales and support information pertaining to the HomeTroller Pi Hub. The current model is HT-PI-G1.\n\n### HomeTroller Pro\n\nSales and support information pertaining to the HomeTroller Pro Hub. The current model is HT-PRO-G1." ]
https://www-formula.com/geometry/surface-area/surface-area-regular-pyramid
[ "P - perimeter of the base\n\nA_b - area of the base\n\nl - slant height of the pyramid\n\na - equal sides of the base\n\nCalculate the lateral surface area of a regular pyramid if given base perimeter and slant height ( A_lat ):\n\nA_lat = (1/2) · P · l\n\nCalculate the total surface area of a regular pyramid if given base perimeter, slant height and base area ( A ):\n\nA = A_lat + A_b = (1/2) · P · l + A_b" ]
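The same two formulas in code, checked on a square pyramid with base side 6 and slant height 5 (arbitrary example values):

```python
def lateral_area(perimeter, slant_height):
    # A_lat = (1/2) * P * l
    return 0.5 * perimeter * slant_height

def total_area(perimeter, slant_height, base_area):
    # A = A_lat + A_base
    return lateral_area(perimeter, slant_height) + base_area

P, l, A_b = 4 * 6, 5, 6 * 6          # square base, side 6; slant height 5
print(lateral_area(P, l))            # 60.0
print(total_area(P, l, A_b))         # 96.0
```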
https://conklinfangmanbuickgmckc.com/auto-parts/quick-answer-how-is-motor-speed-rating-calculated.html
[ "# Quick Answer: How is motor speed rating calculated?\n\nContents\n\nIn most cases, you can look inside the motor and count the number of poles in the winding; they are distinct bundles of wire evenly spaced around the stator core. The number of poles, combined with the ac line frequency (Hertz, Hz), are all that determine the no-load revolutions per minute (rpm) of the motor.\n\n## How do you calculate motor speed?\n\nTo calculate RPM for an AC induction motor, you multiply the frequency in Hertz (Hz) by 60 — for the number of seconds in a minute — by two for the negative and positive pulses in a cycle. You then divide by the number of poles the motor has: (Hz x 60 x 2) / number of poles = no-load RPM.\n\n## What is the rated speed of a motor?\n\nThis value will be slightly less than the synchronous speed of the motor due to the decrease in speed from adding the load. The nameplate shown indicates a rated speed of 1460 RPM for this 4-pole, 50 Hz motor.\n\nIT IS INTERESTING:  Can an engine be too cold to start?\n\n## How do you calculate RPM from motor voltage?\n\nTake the Rated Voltage of the motor and divide it by the speed. To calculate the Speed Constant read the no-load speed (rpm) and convert it to radians per second. Divide this number by the Rated Voltage.\n\n## What is the formula for motor efficiency?\n\nYou can use the relationship ​​η​ = ​P​o/​P​i​, where ​P​o is output power, to determine efficiency in such cases, because ​P​i is given by ​I​ × ​V​, or current times voltage, whereas ​P​o is equal to torque ​​τ​ ​times rotational velocity ​​ω​​.\n\n## What is 120 in motor speed formula?\n\nThe equation for calculating synchronous speed is: S = 120 f/P speed = constant (120) times frequency of power source (60 Hz) divided by number of poles used in the motor (P).\n\naround 1800 rpm\n\n## How do you make a motor faster?\n\nOne easy way to make the motor run faster is to add another magnet. Hold a magnet over the top of the motor while it is running. 
As you move the magnet closer to the spinning coil, one of two things will happen. Either the motor will stop, or it will run faster.\n\n## What is the rpm of a 1hp motor?\n\n1HP 750 W 1 HP 3000 RPM Three Phase AC Induction Motor - Power: 1HP 750 W; Speed (RPM): 1440, 2880; Voltage: 230\n\n## How do you slow down an AC motor?\n\nIf it’s a small fan motor or even a ceiling fan, this is often done by reducing voltage to the motor with a solid state control or a series inductance. This just increases the slip speed. A large AC motor can be slowed down with a Variable Frequency Drive.\n\n## Does current affect motor speed?\n\nThe speed of a motor is determined by the voltage and the torque by the current. If a motor is running at a certain speed with a constant torque and the load increases, the current will increase and so also the torque to maintain the same speed.\n\n## How do you calculate RPM speed?\n\nTo do this, use the formula: revolutions per minute = speed in meters per minute / circumference in meters. Following the example, the number of revolutions per minute is equal to: 1,877 / 1.89 = 993 revolutions per minute.\n\n## What is the efficiency of motor?\n\nFor an electric motor, efficiency is the ratio of mechanical power delivered by the motor (output) to the electrical power supplied to the motor (input). Thus, a motor that is 85 percent efficient converts 85 percent of the electrical energy input into mechanical energy.\n\n## What is the equation for efficiency?\n\nEfficiency is often measured as the ratio of useful output to total input, which can be expressed with the mathematical formula r=P/C, where P is the amount of useful output (“product”) produced per the amount C (“cost”) of resources consumed.\n\n## What is standard motor efficiency?\n\nFor these motors, the IEC 60034-30-1 standard defines four efficiency classes: IE1: Standard Efficiency. IE2: High Efficiency. IE3: Premium Efficiency. 
IE4: Super-premium Efficiency.", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20256%20256'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8729114,"math_prob":0.97310287,"size":3725,"snap":"2021-31-2021-39","text_gpt3_token_len":857,"char_repetition_ratio":0.14969094,"word_repetition_ratio":0.0059435363,"special_character_ratio":0.23973155,"punctuation_ratio":0.10381077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99627537,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T23:47:50Z\",\"WARC-Record-ID\":\"<urn:uuid:2799b74f-4b4a-4908-8e32-44534e37bcb5>\",\"Content-Length\":\"72934\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b98cc2ca-47f6-4713-872d-7652468549aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:752dbccf-d7d1-4c2b-a8a3-5e173376230b>\",\"WARC-IP-Address\":\"207.244.241.49\",\"WARC-Target-URI\":\"https://conklinfangmanbuickgmckc.com/auto-parts/quick-answer-how-is-motor-speed-rating-calculated.html\",\"WARC-Payload-Digest\":\"sha1:UAK6XLZAORZWAK2MOXDF7DTXK3HMHD3A\",\"WARC-Block-Digest\":\"sha1:JWDXRTMA3GG2ZKJZ47FLXAGDUWIMOYGE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056578.5_warc_CC-MAIN-20210918214805-20210919004805-00422.warc.gz\"}"}
https://www.powershow.com/viewht/43f75c-MDRhZ/Teaching_Mathematics_through_Problem_Solving_powerpoint_ppt_presentation
[ "# Teaching Mathematics through Problem Solving - PowerPoint PPT Presentation\n\nView by Category\nTitle:\n\n## Teaching Mathematics through Problem Solving\n\nDescription:\n\n### A problem-centered approach to teaching mathematics uses interesting and well ... Math Trailblazers (www.math.uic ... SPEAKING POINTS Elementary ... – PowerPoint PPT presentation\n\nNumber of Views:2542\nAvg rating:3.0/5.0\nSlides: 60\nProvided by: Messe8\nCategory:\nTags:\nTranscript and Presenter's Notes\n\nTitle: Teaching Mathematics through Problem Solving\n\n1\n\nTeaching Mathematics through Problem Solving\nEmma Ames Jim Fey Mary Jo Messenger Hal\nSchoen\n1\n2\nProblem Solving\n• Problem solving . . . can serve as a vehicle for\nlearning new mathematical ideas and skills. . . .\nA problem-centered approach to teaching\nmathematics uses interesting and well-selected\nproblems to launch mathematical lessons and\nengage students. In this way, new ideas,\ntechniques, and mathematical relationships emerge\nand become the focus of discussion. Good problems\ncan inspire the exploration of important\nmathematical ideas, nurture persistence, and\nreinforce the need to understand and use various\nstrategies, mathematical properties, and\nrelationships.\n• (Principles and Standards for School Mathematics,\nNational Council of Teachers of Mathematics 2000,\np. 182)\n\n3\n\n3\n4\nSelecting Classroom Tasks - Basic Questions\nR. Marcus J. T. Fey\n• Will working on the tasks foster students\nunderstanding of important mathematical ideas and\ntechniques?\n\n5\nSelecting Classroom Tasks - Basic Questions\nR. Marcus J. T. Fey\n• Will the selected tasks be engaging and\nproblematic, yet accessible, for many students in\nthe target classes?\n\n6\nSelecting Classroom Tasks - Basic Questions\nR. Marcus J. T. 
Fey\n• Will work on the tasks help students develop\ntheir mathematical thinking: their ability and\ndisposition to explore, to conjecture, to prove,\nto represent, and to communicate their\nunderstanding?\n\n7\nSelecting Classroom Tasks - Basic Questions\nR. Marcus J. T. Fey\n• Will the collection of tasks in a curriculum\nbuild coherent understanding and connections\namong important mathematical topics?\n\n8\nInteresting Variations on a Basic\nProblem Goldenberg Walter\n• Find the mean of 7, 4, 7, 6, 3, 8, and 7.\n• What if only five of the seven data are given? Can\nwe determine the missing data if we know the mean\nof the original seven?\n• What if we compute the mean of each possible\ncombination of only five of the given seven\nnumbers? (How many such combinations are\npossible?) What could we learn from, say, a\nhistogram of those means?\n\n9\nInteresting Variations on a Basic\nProblem Goldenberg Walter\n• Find the mean of 7, 4, 7, 6, 3, 8, and 7.\n• What if the original seven numbers are sampled\nfrom a population consisting of eight numbers?\nWhat might we reasonably infer about the eighth\nnumber? Do ideas from problem 2 help answer that\nquestion?\n• What if we know the mean but none of the data?\nWhat, if anything, could we say about the data?\nWhat possible sets of data would fit?\n\n10\nSome Questions That Promote Understanding - D. A.\nGrouws\n• ? How did you decide on a solution method to\ntry?\n• ? How did you solve the problem?\n• ? Did anyone solve it in a different way?\n• ? How would you compare these solution methods?\n\n11\nSome Questions That Promote Understanding - D. A.\nGrouws\n• ? Which of the solution methods do you like\nbest? Why?\n• ? Can you tell me how you solved the problem?\n• ? Does this remind you of any other problems you\nhave solved?\n\n12\nTeaching Mathematics through Problem Solving\nResearch Perspectives M. K. Stein, J. Boaler,\nE. A. 
Silver\n• The research on TMPTS and on curricula designed\nto support it suggests both the feasibility and\nefficacy of this approach.\n• When TMPTS is implemented effectively, students\n(compared to those taught traditionally) are\nlikely to better understand mathematical\nconcepts, to be willing to tackle challenging\nproblems, and to see themselves as capable of\nlearning mathematics.\n\n13\nTeaching Mathematics through Problem Solving\nResearch Perspectives M. K. Stein, J. Boaler,\nE. A. Silver\n• TMPTS is challenging and to do it well teachers\nneed support, including good curriculum materials\nand strong professional development.\n• TMPTS can work with a wide range of students, but\nthe level of student support required may differ\ndepending on the students' mathematical\nbackground and interest.\n\n14\nTeaching Mathematics through Problem Solving\nResearch Perspectives M. K. Stein, J. Boaler,\nE. A. Silver\n• ? Which of the solution methods do you like\nbest? Why?\n• ? Can you tell me how you solved the problem?\n• ? Does this remind you of any other problems you\nhave solved?\n\n15\nSome Questions That Promote Understanding - D. A.\nGrouws\n• How can we change the problem to get another\ninteresting problem?\n• ? What mistakes do you think some students might\nmake in solving this problem?\n\n16\nWhat Happens in the Classroom When Mathematics\nis Taught Through Problem Solving?\nIn addition to learning mathematics, students\nlearn to be good problem solvers.\n17\nWhat Happens in the Classroom When Mathematics is\nTaught Through Problem Solving?\n• Thinking and problem solving are the fundamental\npart of our lessons.\n18\nWhat Happens in the Classroom When Mathematics is\nTaught Through Problem Solving? 
Technical\nemphasized.\nJust look at this work young man.\nYou've got some explaining to do.\nEinstein as a boy\n19\nTeam Work\n20\nWhat Happens in the Classroom When Mathematics is\nTaught Through Problem Solving?\n• Real-world problems are used frequently and\nanswers are given in terms of what makes sense\nfor any given situation.\n• What is a Problem?\n\n21\nProblems must have meaning for students.\n22\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Cable TV (CPMP Year 1)\n\n5 + 2.5X = 75 - 2.5X\n23\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Cable TV (CPMP Year 1)\n\n30 = 5 + 2.5X\n24\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Cable TV (CPMP Year 1)\n\n75 - 2.5X > 40\n25\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Cable TV\n\nOne way to solve the equations or inequality is\nto make tables and graphs of (time, share) data\nfor the two models and look for key points in\neach.\n26\n\nTables and Graphs: 30 = 5 + 2.5X,\n5 + 2.5X = 75 - 2.5X\n\nX Y1 Y2\n0 75 5\n1 72.5 7.5\n2 70 10\n3 67.5 12.5\n4 65 15\n5 62.5 17.5\n6 60 20\n7 57.5 22.5\n8 55 25\n9 52.5 27.5\n10 50 30\n11 47.5 32.5\n12 45 35\n13 42.5 37.5\n14 40 40\n15 37.5 42.5\n\n26\n27\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\nLines (CPMP Year 1) The next diagram shows linear\nmodels from four rubber band experiments, all\nplotted on the same grid. What does the pattern\nof those graphs suggest about the similarities\nand differences in the experiments?\n28\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Lines (CPMP Year 1)\n\n29\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\nLines (CPMP Year 1)\n• (a). 
Sharing the work among your group members,\nmake four tables of (weight, length) pairs, one\ntable for each linear model, for weights from\n0 to 10 ounces.\n• (b). According to the tables, how long were the\ndifferent rubber bands without any weight\nattached? How is that information shown on the\ngraphs?\n• (c). Looking at data in the tables, estimate the\nrates of change in length for the four rubber\nbands as weight is added. How are those patterns\nshown on the graphs?\n\n30\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Lines (CPMP Year 1)\n\n31\nTeaching Equation Solving and Inequalities\nThrough Problem Solving\n• Lines (CPMP Year 1)\n\n32\nThe Bears Problem\n33\nThe Bears Problem\n• Various Levels\n• Middle School\n• Algebra\n• Precalculus\n\n34\nThe Bears Problem\n35\nThe Bears Problem\n36\nThe Bears Problem\n37\nThe Bears Problem\n38\nThe Bears Problem\n39\nThe Bears Problem\n40\nThe Bears Problem\n41\nThe Bears Problem\n42\n(No Transcript)\n43\nThe Bears Problem\n44\nLearning Through Problem Solving\n• Students Actively Participate, Reason, and\nExplain to Others\n\n45\nTeaching Through Problem Solving\n• Establish the norms that students' responses\nshould include a rationale, students should\nstrive to make sense of their own methods and\nthose of their classmates, and students should\nask questions and raise challenges when they do\nnot understand.\n\n46\nTime to Reflect\n47\nFrustration is Part of a Real Problem\n48\nThe Satisfaction of Solving the Problem\naverage, THEN you can choose your own wallpaper.\n49\nTeaching Through Problem Solving\n• Always be aware of who is doing the thinking, the\nteacher or the student.\n\n50\nByproducts\n• Self esteem\n• Motivation\n• Better Understanding\n\n51\nMaterials to Support Teaching Mathematics Through\nProblem Solving\nProjects at All Levels The K-12 Mathematics\nCurriculum Center (www.edc.org/mcc) Elementary\nProjects The ARC Center\n(www.arccenter.comap.com) Everyday\nMathematics\n(http://everydaymath.uchicago.edu)\nInvestigations in Number, Data, and Space\nTERC (www.terc.edu/investigations) Math\nTrailblazers (www.math.uic.edu/IMSE/timsmath.html)\n51\n52\nMaterials to Support Teaching Mathematics Through\nProblem Solving\nMiddle School Projects The ShowMe Center\n(www.showmecenter.Missouri.edu/) Connected\nMathematics Project (www.math.msu.edu/cmp)\nMathematics in Context (www.ebmic.com)\nMathScape Curriculum Center (www.edc.org/mathscape)\nMATHThematics Project\n(www.mcdougallittell.com/bookspots/math_thematics.cfm)\nPathways/MMAP Curriculum\n(www.mmap.wested.org)\n52\n53\nMaterials to Support Teaching Mathematics Through\nProblem Solving\nHigh School Projects COMPASS (www.ithaca.edu/compass)\nCore-Plus Mathematics Project\n(www.wmich.edu.cpmp) Interactive Mathematics\nProject (www.mathimp.org) MATH Connections\n(www.mathconnections.com) Applications/Reform\nin Secondary Education (www.comap.com/highschool/projects)\nSIMMS Integrated Mathematics\n(www.montana.edu/wwwsimms/Materials20.htm)\n53\n54\nWeb Resources\n54\n55\nWeb Resources\n55\n56\nWeb Resources\n56\n57\nWeb Resources\n57\n58\nWeb Resources\n58\n59" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8262303,"math_prob":0.7168565,"size":9513,"snap":"2020-24-2020-29","text_gpt3_token_len":2388,"char_repetition_ratio":0.16405511,"word_repetition_ratio":0.175,"special_character_ratio":0.22642699,"punctuation_ratio":0.1417953,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9638722,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T11:50:30Z\",\"WARC-Record-ID\":\"<urn:uuid:b991b107-0b01-4765-acf0-65d33c76931e>\",\"Content-Length\":\"104625\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44cc4319-1e8e-42eb-85c8-1ab1578677ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:01059627-6627-4cb7-b017-aefe34945ca9>\",\"WARC-IP-Address\":\"209.128.81.248\",\"WARC-Target-URI\":\"https://www.powershow.com/viewht/43f75c-MDRhZ/Teaching_Mathematics_through_Problem_Solving_powerpoint_ppt_presentation\",\"WARC-Payload-Digest\":\"sha1:KNGPUOVZL6XIQ6RL5CNRODJ2PZ2RIEP2\",\"WARC-Block-Digest\":\"sha1:LIX3VU33KNHHFVWRLSNKTZTSGW2MBWDI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655899931.31_warc_CC-MAIN-20200709100539-20200709130539-00221.warc.gz\"}"}
https://www.programming-helper.com/snippet/0pLbF6TumVVtOmqYJy5A
[ "Generation\n\ncode to explanationThu, 06 Apr 2023\n\n# def binary_search(arr, low, high, x): # Check base case if high >= low: mid = (high + low) // 2 # If element is present at the middle itself if arr[mid] == x: return mid # If element is smaller than mid, then it can only # be present in left subarray elif arr[mid] > x: return binary_search(arr, low, mid - 1, x) # Else the element can only be present in right subarray else: return binary_search(arr, mid + 1, high, x) else: # Element is not present in the array return -1 # Test array arr = [ 2, 3, 4, 10, 40 ] x = 10 # Function call result = binary_search(arr, 0, len(arr)-1, x) if result != -1: print(\"Element is present at index\", str(result)) else: print(\"Element is not present in array\")\n\n``````Here are a few things to note about the above code :\n\n1. The function is recursive and has four parameters : arr for the given array, x for the element to be searched, low and high for low and high indexes of the array.\n2. The base case for recursion is when high is less than low, i.e. the subarray is exhausted without a match. In such a case, the element is not present in the array.\n3. The binary search works by choosing a pivot element from the given array, comparing it with the element to be found, and then splitting the array into two halves according to whether the element to be found is less than or greater than the pivot element.\n4. If the element to be found is less than the pivot element, then it can only be present in the left half of the array. Else, it can only be present in the right half of the array.``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8349208,"math_prob":0.93627673,"size":1521,"snap":"2023-14-2023-23","text_gpt3_token_len":390,"char_repetition_ratio":0.1727093,"word_repetition_ratio":0.05782313,"special_character_ratio":0.27876398,"punctuation_ratio":0.14626865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9713766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T06:48:29Z\",\"WARC-Record-ID\":\"<urn:uuid:b9475a30-f75c-45f0-9085-46037ee236c0>\",\"Content-Length\":\"34362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9123cbc-4e43-45ea-a87d-d5ac78f2397b>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe4cad30-0a1e-4197-aaca-f43cadaa2a53>\",\"WARC-IP-Address\":\"76.76.21.241\",\"WARC-Target-URI\":\"https://www.programming-helper.com/snippet/0pLbF6TumVVtOmqYJy5A\",\"WARC-Payload-Digest\":\"sha1:76EAO4IE647TDFNWWTJTRDQDSHAFXZJF\",\"WARC-Block-Digest\":\"sha1:AW76PTDZHPLGKDMK5F73FTITLKG6Q44Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224651325.38_warc_CC-MAIN-20230605053432-20230605083432-00468.warc.gz\"}"}
https://huaweicloud.csdn.net/63a56f62b878a54545946faf.html
[ "## 1. 多线程\n\nC++11提供了语言层面上的多线程,包含在头文件<thread>中。它解决了跨平台的问题,提供了`管理线程、保护共享数据、线程间同步操作、原子操作等类`。C++11 新标准中引入了5个头文件来支持多线程编程,如下图所示:", null, "### 1.1 多进程与多线程\n\n• 多进程并发\n\n1. 在进程间的通信,无论是使用信号、套接字,还是文件、管道等方式,其使用要么`比较复杂`,要么就是`速度较慢`或者两者兼而有之。\n2. 运行多个线程的`开销很大`,操作系统要`分配很多的资源来对这些进程进行管理`\n\n• 多线程并发\n\n### 1.2 多线程理解\n\n• 单CPU内核的多个线程。", null, "• 多个cpu或者多个内核", null, "### 1.3 创建线程\n\n• 形式1:\n``````std::thread myThread ( thread_fun);//函数形式为void thread_fun()\n//同一个函数可以代码复用,创建多个线程\n``````\n• 形式2:\n``````std::thread myThread ( thread_fun(100));\n//同一个函数可以代码复用,创建多个线程\n``````\n• 形式3:\n``````std::thread (thread_fun,1).detach();//直接创建线程,没有名字\n``````\n• 代码举例\n\n``````#include <iostream>\nusing namespace std;\n{\ncout<<\"子线程1\"<<endl;\n}\n{\ncout<<\"x:\"<<x<<endl;\ncout<<\"子线程2\"<<endl;\n}\nint main()\n{\nstd::cout << \"主线程\\n\";\n\nfirst.join(); //必须说明添加线程的方式\nsecond.join();\nstd::cout << \"子线程结束.\\n\";//必须join完成\nreturn 0;\n}\n``````\n\n### 1.4 join与detach方式\n\n• detach方式,启动的线程自主在后台运行,当前的代码继续往下执行,不等待新线程结束。\n• join方式,等待启动的线程完成,才会继续往下执行。\n\n``````if (myThread.joinable()) foo.join();\n``````\n\n#### (1)join举例\n\n``````#include <iostream>\nusing namespace std;\n{\nwhile(1)\n{\n//cout<<\"子线程1111\"<<endl;\n}\n}\n{\nwhile(1)\n{\n//cout<<\"子线程2222\"<<endl;\n}\n}\nint main()\n{\n\nfirst.join(); // pauses until first finishes 这个操作完了之后才能destroyed\nsecond.join(); // pauses until second finishes//join完了之后,才能往下执行。\nwhile(1)\n{\nstd::cout << \"主线程\\n\";\n}\nreturn 0;\n}\n``````\n\n#### (2)detach举例\n\n``````#include <iostream>\nusing namespace std;\n{\nwhile(1)\n{\ncout<<\"子线程1111\"<<endl;\n}\n}\n{\nwhile(1)\n{\ncout<<\"子线程2222\"<<endl;\n}\n}\nint main()\n{\n\nfirst.detach();\nsecond.detach();\nfor(int i = 0; i < 10; i++)\n{\nstd::cout << \"主线程\\n\";\n}\nreturn 0;\n}\n``````\n\nsleep_until如下一分钟后执行吗,如下\n``````using std::chrono::system_clock;\nstd::time_t tt = system_clock::to_time_t(system_clock::now());\n\nstruct std::tm * ptm = std::localtime(&tt);\ncout << \"Waiting for the next minute to 
begin...\\n\";\n++ptm->tm_min; //加一分钟\nptm->tm_sec = 0; //秒数设置为0\n//暂停执行,到下一整分执行\n``````\n\n## 2. mutex\n\nmutex头文件主要声明了与互斥量(mutex)相关的类。mutex提供了4种互斥类型,如下表所示。\n\nstd::mutex最基本的 Mutex 类。\nstd::recursive_mutex递归 Mutex 类。\nstd::time_mutex定时 Mutex 类。\nstd::recursive_timed_mutex定时递归 Mutex 类。\n\nstd::mutex 是C++11 中最基本的互斥量,std::mutex 对象提供了独占所有权的特性——即不支持递归地对 std::mutex 对象上锁,而 std::recursive_lock 则可以递归地对互斥量对象上锁。\n\n### 2.1 lock与unlock\n\nmutex常用操作:\n\n• lock():资源上锁\n• unlock():解锁资源\n• trylock():查看是否上锁,它有下列3种类情况:\n\n(1)未上锁返回false,并锁住;\n(2)其他线程已经上锁,返回true;\n(3)同一个线程已经对它上锁,将会产生死锁。\n\n``````#include <iostream> // std::cout\n#include <mutex> // std::mutex\n\nstd::mutex mtx; // mutex for critical section\n\nvoid print_block (int n, char c) {\nmtx.lock();\nfor (int i=0; i<n; ++i) { std::cout << c; }\nstd::cout << '\\n';\nmtx.unlock();\n}\n\nint main ()\n{\n\nth1.join();\nth2.join();\n\nreturn 0;\n}\n``````\n\n``````**************************************************\n\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\\$\n``````\n\n``````#include <iostream> // std::cout\n#include <mutex> // std::mutex\n\nstd::mutex mtx_1; // mutex for critical section\nstd::mutex mtx_2; // mutex for critical section\n\nint test_num = 1;\n\nvoid print_block_1 (int n, char c) {\nmtx_1.lock();\nfor (int i=0; i<n; ++i) {\n//std::cout << c;\ntest_num = 1;\nstd::cout<<test_num<<std::endl;\n}\nstd::cout << '\\n';\nmtx_1.unlock();\n}\nvoid print_block_2 (int n, char c) {\nmtx_2.lock();\ntest_num = 2;\nfor (int i=0; i<n; ++i) {\n//std::cout << c;\ntest_num = 2;\nstd::cout<<test_num<<std::endl;\n}\nmtx_2.unlock();\n}\n\nint main ()\n{\n\nth1.join();\nth2.join();\n\nreturn 0;\n}\n``````\n\n### 2.2 lock_guard\n\nlock_guard的特点:\n\n• 创建即加锁,作用域结束自动析构并解锁,无需手工解锁\n• 不能中途解锁,必须等作用域结束才解锁\n• 不能复制\n\n``````#include <thread>\n#include <mutex>\n#include <iostream>\n\nint g_i = 0;\nstd::mutex g_i_mutex; // protects 
g_i,用来保护g_i\n\nvoid safe_increment()\n{\nconst std::lock_guard<std::mutex> lock(g_i_mutex);\n++g_i;\nstd::cout << std::this_thread::get_id() << \": \" << g_i << '\\n';\n// g_i_mutex自动解锁\n}\n\nint main()\n{\nstd::cout << \"main id: \" <<std::this_thread::get_id()<<std::endl;\nstd::cout << \"main: \" << g_i << '\\n';\n\nt1.join();\nt2.join();\n\nstd::cout << \"main: \" << g_i << '\\n';\n}\n``````\n\n1. 该程序的功能为,每经过一个线程,g_i 加1。\n2. 因为涉及到共同资源g_i ,所以需要一个共同mutex:g_i_mutex。\n3. main线程的id为1,所以下次的线程id依次加1。\n\n### 2.3 unique_lock\n\nunique_lock的特点:\n\n• 创建时可以不锁定(通过指定第二个参数为std::defer_lock),而在需要时再锁定\n• 可以随时加锁解锁\n• 作用域规则同 lock_grard,析构时自动释放锁\n• 不可复制,可移动\n• 条件变量需要该类型的锁作为参数(此时必须使用unique_lock)\n\n``````#include <mutex>\n#include <iostream>\nstruct Box {\nexplicit Box(int num) : num_things{num} {}\n\nint num_things;\nstd::mutex m;\n};\n\nvoid transfer(Box &from, Box &to, int num)\n{\n// defer_lock表示暂时unlock,默认自动加锁\nstd::unique_lock<std::mutex> lock1(from.m, std::defer_lock);\nstd::unique_lock<std::mutex> lock2(to.m, std::defer_lock);\n\n//两个同时加锁\nstd::lock(lock1, lock2);//或者使用lock1.lock()\n\nfrom.num_things -= num;\nto.num_things += num;\n//作用域结束自动解锁,也可以使用lock1.unlock()手动解锁\n}\n\nint main()\n{\nBox acc1(100);\nBox acc2(50);\n\nt1.join();\nt2.join();\nstd::cout << \"acc1 num_things: \" << acc1.num_things << std::endl;\nstd::cout << \"acc2 num_things: \" << acc2.num_things << std::endl;\n}\n``````\n\n1. 该函数的作用是,从一个结构体中的变量减去一个num,加载到另一个结构体的变量中去。\n2. std::mutex m;在结构体中,mutex不是共享的。但是只需要一把锁也能锁住,因为引用传递后,同一把锁传给了两个函数。\n3. cout需要在join后面进行,要不然cout的结果不一定是最终算出来的结果。\n4. std::ref 用于包装按引用传递的值。\n5. std::cref 用于包装按const引用传递的值。\n\n## 3. 
condition_variable\n\ncondition_variable的头文件有两个variable类,一个是condition_variable,另一个是condition_variable_any。condition_variable必须结合unique_lock使用。condition_variable_any可以使用任何的锁。下面以condition_variable为例进行介绍。\n\ncondition_variable条件变量可以阻塞(wait、wait_for、wait_until)调用的线程直到使用(notify_one或notify_all)通知恢复为止。condition_variable是一个类,这个类既有构造函数也有析构函数,使用时需要构造对应的condition_variable对象,调用对象相应的函数来实现上面的功能。\n\ncondition_variable构建对象\n\nwaitWait until notified\nwait_forWait for timeout or until notified\nwait_untilWait until notified or time point\nnotify_one解锁一个线程,如果有多个,则未知哪个线程执行\nnotify_all解锁所有线程\ncv_status这是一个类,表示variable 的状态,如下所示\n``````enum class cv_status { no_timeout, timeout };\n``````\n\n### 3.1 wait\n\n``````#include <iostream> // std::cout\n#include <mutex> // std::mutex, std::unique_lock\n#include <condition_variable> // std::condition_variable\n\nstd::mutex mtx;\nstd::condition_variable cv;\n\nint cargo = 0;\nbool shipment_available() {return cargo!=0;}\n\nvoid consume (int n) {\nfor (int i=0; i<n; ++i) {\nstd::unique_lock<std::mutex> lck(mtx);//自动上锁\n//第二个参数为false才阻塞(wait),阻塞完即unlock,给其它线程资源\ncv.wait(lck,shipment_available);\n// consume:\nstd::cout << cargo << '\\n';\ncargo=0;\n}\n}\n\nint main ()\n{\n\nfor (int i=0; i<10; ++i) {\n//每次cargo每次为0才运行。\nstd::unique_lock<std::mutex> lck(mtx);\ncargo = i+1;\ncv.notify_one();\n}\n\nreturn 0;\n}\n``````\n\n1. 主线程中的while,每次在cargo=0才运行。\n2. 每次cargo被置为0,会通知子线程unblock(非阻塞),也就是子线程可以继续往下执行。\n3. 
After `cargo` is set to 0 in the worker thread, `wait` starts waiting again. In other words, while `shipment_available` is false, the consumer waits.

### 3.2 wait_for

```cpp
template <class Rep, class Period>
cv_status wait_for (unique_lock<mutex>& lck,
                    const chrono::duration<Rep,Period>& rel_time);
```

```cpp
template <class Rep, class Period, class Predicate>
bool wait_for (unique_lock<mutex>& lck,
               const chrono::duration<Rep,Period>& rel_time, Predicate pred);
```

```cpp
#include <iostream>           // std::cout
#include <thread>             // std::thread
#include <chrono>             // std::chrono::seconds
#include <mutex>              // std::mutex, std::unique_lock
#include <condition_variable> // std::condition_variable, std::cv_status

std::condition_variable cv;

int value;

void read_value ()
{
  std::cin >> value;
  cv.notify_one();
}

int main ()
{
  std::cout << "Please, enter an integer (I'll be printing dots): \n";
  std::thread th (read_value);

  std::mutex mtx;
  std::unique_lock<std::mutex> lck(mtx);
  while (cv.wait_for(lck,std::chrono::seconds(1))==std::cv_status::timeout) {
    std::cout << '.' << std::endl;
  }
  std::cout << "You entered: " << value << '\n';

  th.join();

  return 0;
}
```

1. Both a notification and a timeout make `wait_for` return (re-acquiring the lock), so the main thread keeps printing until input arrives.
2. In this example, a dot is printed every time one second elapses without input.

## 4. Thread pools

### 4.1 Concept

A thread pool keeps a fixed set of worker threads alive and feeds them tasks, because:

- Creating too many threads wastes resources; some threads end up underused.
- Destroying too many threads wastes time re-creating them later.
- Creating threads too slowly causes long waits and poor performance.
- Destroying threads too slowly starves other threads of resources.

### 4.2 Implementing a thread pool

4. append: the interface used to add tasks to the pool

```cpp
#ifndef _THREADPOOL_H
#define _THREADPOOL_H
#include <vector>
#include <queue>
#include <thread>
#include <iostream>
#include <stdexcept>
#include <mutex>
#include <condition_variable>
#include <memory> //unique_ptr
#include <assert.h>

const int MAX_THREADS = 1000; // maximum number of threads

template <typename T>
class threadpool
{
public:
    threadpool(int number = 1); // constructor: create the threads
    ~threadpool();
    bool append(T *request);    // interface for adding a task

private:
    // the function run by each worker thread: repeatedly take a task
    // from the task queue and execute it
    static void *worker(void *arg);
    void run();

private:
    std::vector<std::thread> work_threads; // the worker threads
    std::queue<T *> tasks_queue;           // the task queue

    std::mutex queue_mutex;
    std::condition_variable condition; // must be used together with unique_lock
    bool stop;
}; // end class

// constructor: create the threads
template <typename T>
threadpool<T>::threadpool(int number) : stop(false)
{
    if (number <= 0 || number > MAX_THREADS)
        throw std::exception();
    for (int i = 0; i < number; i++)
    {
        std::cout << "created Thread num is : " << i << std::endl;
        // emplace_back constructs the element in place at the end of the
        // container, avoiding a copy or move
        work_threads.emplace_back(worker, this);
    }
}

template <typename T>
threadpool<T>::~threadpool()
{
    {
        std::unique_lock<std::mutex> lock(queue_mutex);
        stop = true;
    }
    condition.notify_all();
    for (auto &ww : work_threads)
        ww.join(); // join in the destructor
}

// add a task
template <typename T>
bool threadpool<T>::append(T *request)
{
    /* always lock when touching the work queue: it is shared by all threads */
    queue_mutex.lock(); // the lock shared by the whole class
    tasks_queue.push(request);
    queue_mutex.unlock();
    condition.notify_one(); // a task was added to the pool, so notify a waiting thread
    return true;
}

// a single worker thread
template <typename T>
void *threadpool<T>::worker(void *arg)
{
    threadpool *pool = (threadpool *)arg;
    pool->run(); // run the worker loop
    return pool;
}

template <typename T>
void threadpool<T>::run()
{
    while (!stop)
    {
        std::unique_lock<std::mutex> lk(this->queue_mutex);
        /* unique_lock unlocks automatically when it goes out of scope */
        this->condition.wait(lk,
            [this] { return this->stop || !this->tasks_queue.empty(); });
        // if the queue is empty, wait here until woken up:
        // a thread only starts working when there is a task, otherwise it sleeps
        if (this->tasks_queue.empty())
        {
            continue; // woken up to stop (or a spurious wakeup) with no work
        }
        else
        {
            T *request = this->tasks_queue.front();
            this->tasks_queue.pop();
            if (request) // a task arrived: execute it
                request->process();
        }
    }
}
#endif
```

- The constructor creates the requested number of threads.
- One thread runs one task at a time; a task may finish at any moment and a thread may then go idle, so tasks are kept in a queue (the number of threads is fixed) and the threads use the wait mechanism.
- Tasks keep being added and may outnumber the threads; the task at the front of the queue runs first.
- Only after a task is appended does the pool call condition.notify_one() to wake a thread.
- wait means: when the task queue is empty, the thread sleeps, waiting for a new task to be added.
- Appending a task requires taking the lock, because the queue is a shared resource.

```cpp
#include "mythread.h"
#include <string>
#include <math.h>
using namespace std;

class Task
{
public:
    void process()
    {
        // cout << "run........." << endl;
        // busy-work used to test task throughput
        long i = 1000000;
        while (i != 0)
        {
            int j = sqrt(i);
            i--;
        }
    }
};

int main(void)
{
    threadpool<Task> pool(4); // a fixed number of worker threads
    std::string str;
    while (1)
    {
        // ownership passes to the pool; a smart pointer could manage this.
        // Deleting the task here before a worker runs it would be a
        // use-after-free, so the pool must finish with it first.
        Task *tt = new Task();
        pool.append(tt); // keep adding tasks; tasks queue up because the
                         // number of threads is fixed
    }
}
```
https://www.programming-books.io/essential/cpp/static-c687012b9ce14269a9af709dc0a09d2f
# static

The `static` storage class specifier has three different meanings.

1. Gives internal linkage to a variable or function declared at namespace scope.

```cpp
// internal function; can't be linked to
static double semiperimeter(double a, double b, double c) {
  return (a + b + c)/2.0;
}
// exported to client
double area(double a, double b, double c) {
  const double s = semiperimeter(a, b, c);
  return sqrt(s*(s-a)*(s-b)*(s-c));
}
```

2. Declares a variable to have static storage duration (unless it is `thread_local`). Namespace-scope variables are implicitly static. A static local variable is initialized only once, the first time control passes through its definition, and is not destroyed every time its scope is exited.

```cpp
void f() {
  static int count = 0;
  std::cout << "f has been called " << ++count << " times so far\n";
}
```

3. When applied to the declaration of a class member, declares that member to be a static member.

```cpp
struct S {
  static S* create() {
    return new S;
  }
};
int main() {
  S* s = S::create();
}
```

Note that in the case of a static data member of a class, both 2 and 3 apply simultaneously: the `static` keyword both makes the member into a static data member and makes it into a variable with static storage duration.
https://openturns.github.io/openturns/latest/auto_data_analysis/distribution_fitting/plot_smoothing_mixture.html
# Bandwidth sensitivity in kernel smoothing

## Introduction

We consider the distribution

$$f(x) = w_1 f_1(x) + w_2 f_2(x)$$

for any $x \in \mathbb{R}$, where $f_1$ is the density of the Normal distribution $\mathcal{N}(0, 1)$, $f_2$ is the density of the Normal distribution $\mathcal{N}(3/2, (1/3)^2)$ and the weights are $w_1 = 3/4$ and $w_2 = 1/4$.

This is a mixture of two Normal distributions: 1/4th of the observations have the $\mathcal{N}(3/2, (1/3)^2)$ distribution and 3/4th of the observations have the $\mathcal{N}(0, 1)$ distribution. This example is considered in (Wand, Jones, 1994), page 14, Figure 2.3.

We consider a sample generated from independent realizations of $f$ and want to approximate the distribution from kernel smoothing. More precisely, we want to observe the sensitivity of the resulting density to the bandwidth.

## Generate the mixture by merging two samples

In this section, we show that a mixture of two Normal distributions is nothing more than the merged sample of two independent Normal distributions. In order to generate a sample with size $n$, we sample $w_1 n$ points from the first Normal distribution $f_1$ and $w_2 n$ points from the second Normal distribution $f_2$. Then we merge the two samples.

```python
import openturns as ot
import openturns.viewer as otv
import pylab as pl
import numpy as np
```

We choose a rather large sample size: $n = 1000$.

```python
n = 1000
```

Then we define the two Normal distributions and their parameters.

```python
w1 = 0.75
w2 = 1.0 - w1
distribution1 = ot.Normal(0.0, 1.0)
distribution2 = ot.Normal(1.5, 1.0 / 3.0)
```

We generate two independent sub-samples from the two Normal distributions.

```python
sample1 = distribution1.getSample(int(w1 * n))
sample2 = distribution2.getSample(int(w2 * n))
```

Then we merge the sub-samples into a larger one with the add method of the Sample class.

```python
sample = ot.Sample(sample1)
sample.add(sample2)
sample.getSize()
```

Out:

```
1000
```

In order to see the result, we build a kernel smoothing approximation on the sample. In order to keep it simple, let us use the default bandwidth selection rule.

```python
factory = ot.KernelSmoothing()
fit = factory.build(sample)

graph = fit.drawPDF()
view = otv.View(graph)
```

We see that the distribution of the merged sample has two modes. However, these modes are not clearly distinct. To distinguish them, we could increase the sample size. However, it might be interesting to see if the bandwidth selection rule can be better chosen: this is the purpose of the next section.

## Simulation based on a mixture

Since the distribution that we approximate is a mixture, it will be more convenient to create it from the Mixture class. It takes as input argument a list of distributions and a list of weights.

```python
distribution = ot.Mixture([distribution1, distribution2], [w1, w2])
```

Then we generate a sample from it.

```python
sample = distribution.getSample(n)

factory = ot.KernelSmoothing()
fit = factory.build(sample)

factory.getBandwidth()
```

Out:

```
[0.208514]
```

We see that the default bandwidth is close to 0.21.

```python
graph = distribution.drawPDF()
curve = fit.drawPDF()
graph.add(curve)
graph.setColors(["dodgerblue3", "darkorange1"])
graph.setLegends(["Mixture", "Kernel smoothing"])
graph.setLegendPosition("topleft")
view = otv.View(graph)
```

We see that the result of the kernel smoothing approximation of the mixture is similar to the previous one and could be improved.

## Sensitivity to the bandwidth

In this section, we observe the sensitivity of the kernel smoothing to the bandwidth. We consider the three following bandwidths: the small bandwidth 0.05, the large bandwidth 0.54 and 0.18, which is in-between. For each bandwidth, we use the second optional argument of the build method in order to select a specific bandwidth value.

```python
hArray = [0.05, 0.54, 0.18]
nLen = len(hArray)
fig = pl.figure(figsize=(10, 8))
for i in range(nLen):
    ax = fig.add_subplot(2, 2, i + 1)
    fit = factory.build(sample, [hArray[i]])
    graph = fit.drawPDF()
    graph.setColors(["dodgerblue3"])
    graph.setLegends(["h=%.4f" % (hArray[i])])
    exact = distribution.drawPDF()
    curve = exact.getDrawable(0)
    curve.setColor("darkorange1")
    curve.setLegend("Mixture")
    curve.setLineStyle("dashed")
    graph.add(curve)
    graph.setLegendPosition("topleft")
    graph.setXTitle("X")
    view = otv.View(graph, figure=fig, axes=[ax])
    pl.ylim(top=0.5)  # Common y-range

view = otv.View(graph)
```

We see that when the bandwidth is too small, the resulting kernel smoothing has many more modes than the distribution it is supposed to approximate. When the bandwidth is too large, the approximated distribution is too smooth and has only one mode instead of the expected two modes which are in the mixture distribution. When the bandwidth is equal to 0.18, the two modes are correctly represented.

## Sensitivity to the bandwidth rule

The library provides three different rules to compute the bandwidth. In this section, we compare the results that we can get with them.

```python
h1 = factory.computeSilvermanBandwidth(sample)[0]
h1
```

Out:

```
0.3445636453391276
```

```python
h2 = factory.computePluginBandwidth(sample)[0]
h2
```

Out:

```
0.2021709523195656
```

```python
h3 = factory.computeMixedBandwidth(sample)[0]
h3
```

Out:

```
0.20851397168332242
```

```python
factory.getBandwidth()[0]
```

Out:

```
0.18
```

We see that the default rule is the "Mixed" rule. This is because the sample is in dimension 1 and the sample size is quite large. For a small sample in 1 dimension, the "Plugin" rule would have been used.

The following script compares the results produced by the three rules.

```python
hArray = [h1, h2, h3]
legends = ["Silverman", "Plugin", "Mixed"]
nLen = len(hArray)
fig = pl.figure(figsize=(10, 8))
for i in range(nLen):
    ax = fig.add_subplot(2, 2, i + 1)
    fit = factory.build(sample, [hArray[i]])
    graph = fit.drawPDF()
    graph.setColors(["dodgerblue3"])
    graph.setLegends(["h=%.4f, %s" % (hArray[i], legends[i])])
    exact = distribution.drawPDF()
    curve = exact.getDrawable(0)
    curve.setColor("darkorange1")
    curve.setLegend("Mixture")
    curve.setLineStyle("dashed")
    graph.add(curve)
    graph.setLegendPosition("topleft")
    graph.setXTitle("X")
    if i > 0:
        graph.setYTitle("")
    view = otv.View(graph, figure=fig, axes=[ax])
    pl.ylim(top=0.5)  # Common y-range

view = otv.View(graph)

otv.View.ShowAll()
```

We see that the bandwidth produced by Silverman's rule is too large, leading to an oversmoothed distribution. The results produced by the Plugin and Mixed rules are comparable in this case.

Total running time of the script: ( 0 minutes 0.834 seconds)

Gallery generated by Sphinx-Gallery
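As a point of comparison with the rules above, the classic Silverman rule of thumb for a one-dimensional Gaussian kernel can be computed by hand. This is a sketch using the textbook constant 1.06; OpenTURNS's `computeSilvermanBandwidth` may use slightly different constants and scale estimates, so the value will only roughly match the output above.

```python
import math

def silverman_bandwidth(sample):
    """Textbook Silverman rule of thumb for a 1-D Gaussian kernel:
    h = 1.06 * sigma_hat * n**(-1/5)."""
    n = len(sample)
    mean = sum(sample) / n
    # unbiased sample standard deviation
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return 1.06 * math.sqrt(var) * n ** (-0.2)

# For a standard-normal-like sample of size 1000 with sigma_hat close to 1,
# the rule gives roughly 1.06 * 1000**(-0.2), i.e. about 0.27; the mixture
# sample above has a larger spread, which is why Silverman's value is larger.
```

The n^(-1/5) rate is what makes the bandwidth shrink slowly as the sample grows, which is why increasing the sample size only gradually sharpens the two modes.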
https://unangelic.org/what-is-acid-test-ratio-in-accounting/
# What is acid test ratio in accounting?

The acid test ratio, which is also known as the quick ratio, compares the total of a company's cash, temporary marketable securities, and accounts receivable to the total amount of the company's current liabilities.

In finance, the quick ratio, also known as the acid-test ratio, is a type of liquidity ratio which measures the ability of a company to use its near-cash or quick assets to extinguish or retire its current liabilities immediately.

Similarly, what is a quick ratio in accounting? The quick ratio is a measure of how well a company can meet its short-term financial liabilities. Also known as the acid-test ratio, it can be calculated as follows: (Cash + Marketable Securities + Accounts Receivable) / Current Liabilities.

People also ask, what is the acid test ratio formula in accounting?

Acid test ratio is a measure of the short-term liquidity of the firm and is calculated by dividing the sum of the most liquid assets — cash, cash equivalents, marketable securities or short-term investments, and current accounts receivable — by the total current liabilities. The ratio is also known as the Quick Ratio.

How do you find the acid test ratio?

To obtain the company's liquid current assets, add cash and cash equivalents, short-term marketable securities, accounts receivable and vendor non-trade receivables. Then divide liquid current assets by total current liabilities to calculate the acid-test ratio.

### What is a good debt ratio?

Generally, a ratio of 0.4 (40 percent) or lower is considered a good debt ratio. A ratio above 0.6 is generally considered to be a poor ratio, since there's a risk that the business will not generate enough cash flow to service its debt.

### What is a good cash ratio?

A ratio above 1 means that all the current liabilities can be paid with cash and equivalents. A ratio below 1 means that the company needs more than just its cash reserves to pay off its current debt. Any ratio above 1 is considered to be a good liquidity measure.

### What is the ideal acid test ratio?

Acid Test Ratio = (Current assets – Inventory) / Current liabilities. Ideally, the acid test ratio should be 1:1 or higher; however, this varies widely by industry. In general, the higher the ratio, the greater the company's liquidity.

### What is a good inventory turnover ratio?

For many ecommerce businesses, the ideal inventory turnover ratio is about 4 to 6. All businesses are different, of course, but in general a ratio between 4 and 6 usually means that the rate at which you restock items is well balanced with your sales.

### Is a high acid test ratio good?

The higher the ratio, the more financially secure a company is in the short term. A high or increasing acid-test ratio generally indicates that a company is experiencing solid top-line growth, quickly converting receivables into cash, and easily able to cover its financial obligations.

### What is a good gearing ratio?

A gearing ratio higher than 50% is typically considered highly levered or geared. A gearing ratio lower than 25% is typically considered low-risk by both investors and lenders. A gearing ratio between 25% and 50% is typically considered optimal or normal for well-established companies.

### Is inventory included in acid test ratio?

The key elements of current assets that are included in the ratio are cash, marketable securities, and accounts receivable. Inventory is not included in the ratio, since it can be quite difficult to sell off in the short term, and possibly at a loss.

### What does debt ratio mean?

The debt ratio is a financial ratio that measures the extent of a company's leverage. The debt ratio is defined as the ratio of total debt to total assets, expressed as a decimal or percentage. It can be interpreted as the proportion of a company's assets that are financed by debt.

### What is the formula of quick ratio?

Quick ratio is calculated by dividing liquid current assets by total current liabilities. Liquid current assets include cash, marketable securities and receivables. Cash includes cash in hand and cash at bank.

### What is the formula for gross profit?

Gross profit margin is calculated by subtracting cost of goods sold (COGS) from total revenue and dividing that number by total revenue. The top number in the equation, known as gross profit or gross margin, is the total revenue minus the direct costs of producing that good or service.

### Why is it called the acid test ratio?

The acid-test ratio is a strong indicator as to whether a company has enough short-term assets on hand to cover its immediate liabilities. Also known as the quick ratio, the acid-test ratio is a liquidity ratio that measures a company's ability to pay its current liabilities with its quick or current assets.

### How is debt ratio calculated?

To calculate your debt-to-income ratio: add up your monthly bills, which may include your monthly rent or house payment. Divide the total by your gross monthly income, which is your income before taxes. The result is your DTI, which will be in the form of a percentage. The lower the DTI, the less risky you are to lenders.

### How is cash ratio calculated?

The cash ratio is usually calculated by dividing a company's cash and cash equivalents by its current liabilities. Occasionally, people will calculate the cash ratio by dividing the sum of a company's cash and cash equivalents and its marketable securities by its current liabilities.

### What is the formula for current ratio?

Using the Balance Sheet, the current ratio is calculated by dividing current assets by current liabilities. For example, if a company's current assets are $5,000 and its current liabilities are $2,000, then its current ratio is 2.5.
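The acid-test formula above is simple enough to capture in a few lines. The figures below are made-up illustrative numbers, not data from the article.

```python
def acid_test_ratio(cash, marketable_securities, accounts_receivable,
                    current_liabilities):
    """(Cash + Marketable Securities + Accounts Receivable) / Current Liabilities."""
    quick_assets = cash + marketable_securities + accounts_receivable
    return quick_assets / current_liabilities

# Hypothetical example: $40,000 cash, $10,000 marketable securities,
# $30,000 receivables against $50,000 of current liabilities.
ratio = acid_test_ratio(40_000, 10_000, 30_000, 50_000)
print(ratio)  # 1.6 -> above 1, so quick assets cover short-term obligations
```

Note that inventory deliberately never appears as an input, matching the point made above about it being hard to liquidate quickly.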
https://dmoj.ca/problem/aac5p4/editorial
## Editorial for An Animal Contest 5 P4 - Number Game

Remember to use this editorial only when stuck, and not to copy-paste code from it. Please be respectful to the problem author and editorialist.
Submitting an official solution before solving the problem yourself is a bannable offence.

Authors: WilliamWu277, ThingExplainer

After some observation, we notice that for the bamboo to be stuck, $\max(p - 1, n - p)$ needs to be less than any move size in the set of possible moves remaining. The optimal spot is therefore the centre, as $\max(p - 1, n - p)$ is minimized there, where $p$ is the location of the bamboo after some number of moves.

So what are the central squares?

If $n$ is even, there are two spots we can move into which are optimal, $\frac{n}{2}$ and $\frac{n}{2} + 1$, which are both central squares; the fact that both these squares are equally optimal will be important later on.

If $n$ is odd, only the square $\frac{n + 1}{2}$ is an optimal central square.

It is always optimal to use up the smaller moves before the larger moves, as they provide a larger range of motion. Moreover, we always need to use at least the smallest $m$ moves, as we can move the bamboo from any position using those moves.

How many steps can we make with the smallest $m$ moves? We take a total of $\frac{m(m+1)}{2}$. This is the sum of all the natural numbers up to $m$.

Now, we can consider the problem as having two sets and needing to partition $\{1, 2, \ldots, m\}$ into those two sets. One set holds all the moves that go backwards and the other holds all the moves that go forwards. If $\frac{m(m+1)}{2}$ is odd, we can move any odd number of steps in either direction. If $\frac{m(m+1)}{2}$ is even, we can move any even number of steps in either direction, using all $m$ moves. The proof for this is left as an exercise to the reader.

Thus, for even $n$, no matter the parity of our starting square $s$, we can always move into one of the two equally optimal centre positions using the smallest $m$ moves; therefore for even $n$, the answer is always $m$.

For odd $n$, the answer is only $m$ if the distance to the central square from the starting square is the same parity as the steps we can take using the first $m$ moves. In other words, $m$ is the answer iff:

$$\left|s - \frac{n + 1}{2}\right| \equiv \frac{m(m+1)}{2} \pmod{2}$$

Otherwise, the answer would be $m + 1$. Notice that if we use the first $m + 1$ segments, we can get the object stuck if we move it to positions $\frac{n + 1}{2} - 1$, $\frac{n + 1}{2}$, or $\frac{n + 1}{2} + 1$. Because these three target positions span both parities, we can always make it to one of these three positions.

Now onto the construction. After determining the number of moves we need, we can greedily construct a sequence of moves that works. We will focus on the construction that requires only $m$ moves, which can be easily extended to include $m + 1$ moves.

If each move is in the positive direction, our net change in position is $\frac{m(m+1)}{2}$ steps forward. If we only want to move $k$ steps forward, we can go through the configuration of moves in the order $1, 2, \ldots, m$, keeping track of our net change of position and, if changing the current move $i$ to $-i$ still allows a net final position change of at least $k$, we perform the negation and update the final net change of position accordingly. If we instead need to move backwards, we can employ a similar method.

Now that we have a sequence of moves that will land us in the target position, we need to order the moves so that we do not go out of bounds. One way of doing this is creating vectors that store moves forward and moves backwards. We sort both vectors in terms of absolute value and then perform a simulation: we first check if we can make either the biggest move forward or backward, perform a legal move, and update our current position. This solution works in $\mathcal{O}(N \log N)$.

It is left as an exercise to the reader to prove why we do not need to sort the vectors, leading to an $\mathcal{O}(N)$ solution.

Time Complexity: $\mathcal{O}(N \log N)$ or $\mathcal{O}(N)$
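The parity claim above — that with moves $1, \ldots, m$, each used once either forwards or backwards, the reachable net displacements are exactly the values between $-\frac{m(m+1)}{2}$ and $\frac{m(m+1)}{2}$ with the same parity as the total — can be checked by brute force for small $m$. This sketch is only an illustration of that claim, not the contest solution.

```python
from itertools import combinations

def achievable_nets(m):
    """Enumerate every net displacement reachable by assigning a sign
    to each of the moves 1..m (each move used exactly once)."""
    moves = list(range(1, m + 1))
    total = sum(moves)  # m*(m+1)/2
    nets = set()
    # choosing a subset to point backwards flips its contribution:
    # net = total - 2 * sum(backwards subset)
    for r in range(len(moves) + 1):
        for backwards in combinations(moves, r):
            nets.add(total - 2 * sum(backwards))
    return total, nets

total, nets = achievable_nets(4)
expected = {d for d in range(-total, total + 1) if (d - total) % 2 == 0}
print(nets == expected)  # True: every same-parity value in [-10, 10]
```

The identity `net = total - 2 * sum(backwards)` also makes the parity argument transparent: the net always has the same parity as the total, since they differ by an even number.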
http://nce.ads.uga.edu/wiki/doku.php?id=how_to_compute_vanraden_s_deregressed_proof
[ "#", null, "BLUPF90\n\nYutaka Masuda\n\n## Overview\n\nYou need the output of accf90, which is a program available under an agreement with UGA. A small script can compute VanRaden's (2009) deregressed proof based on PA, EBV, reliability of PA, and reliability of EBV.\n\n### Procedure\n\nFirst, run blupf90 or other IOD programs to save solutions. Then, run accf90 with the following option in the parameter file. It additionally calculates PA and its reliability.\n\nOPTION parent_avg yes\n\nThe resulting sol_and_acc file has 10 columns.\n\n1. Trait code\n2. Effect code\n3. Level code\n4. EBV\n5. Accuracy or reliability of EBV\n6. Parent average (PA)\n7. Unknown parent flag (1=both known; 2=sire unknown; 3=dam unknown; 4=both unknown)\n8. Sire code\n9. Dam code\n10. Accuracy or reliability of PA\n\nVanRaden et al. (2009) showed that a deregressed proof for a sire can be obtained with the following steps, which build on previous studies, e.g. VanRaden and Wiggans (1991). Note that the procedure below is an approximation; the strict computation excludes the contribution of a daughter to its parent in the parent's EBV.\n\n1. Compute $k_{d}=(4-2h^2)/h^2$ (VanRaden and Wiggans 1991).\n2. Compute the daughter equivalent of EBV: $\\mathrm{DE}_{\\mathrm{EBV}}=k_{d}\\mathrm{REL}_{\\mathrm{EBV}}/(1-\\mathrm{REL}_{\\mathrm{EBV}})$.\n3. Compute the daughter equivalent of PA: $\\mathrm{DE}_{\\mathrm{PA}}=k_{d}\\mathrm{REL}_{\\mathrm{PA}}/(1-\\mathrm{REL}_{\\mathrm{PA}})$.\n4. Compute the daughter equivalent of the daughter contribution (i.e. EBV excluding PA): $\\mathrm{DE}_{\\mathrm{R}}=\\mathrm{DE}_{\\mathrm{EBV}}-\\mathrm{DE}_{\\mathrm{PA}}$.\n5. Compute the reliability of the daughter contribution: $R=\\mathrm{DE}_{\\mathrm{R}}/(\\mathrm{DE}_{\\mathrm{R}}+k_{d})$.\n6. 
Compute the deregressed proof for this animal: $\\mathrm{DRP}=\\mathrm{PA}+(\\mathrm{EBV}-\\mathrm{PA})/R$.\n\nThe following AWK script computes the deregressed proof with the above procedure.\n\npvr_drp.awk\n#\n# Computation of deregressed proof for sires.\n#\n# usage: awk -v h2=0.25 -f pvr_drp.awk sol_and_acc > drp.txt\n#\n# You can change the heritability with -v h2=value.\n# The default heritability is 0.25.\n#\nBEGIN{\n# default h2=0.25 equiv. kd=14\nif(h2<=0){ h2=0.25 }\nkd=(4-2*h2)/h2\nprint \"h2=\",h2,\"; kd=\",kd > \"/dev/stderr\"\n}\nNR>1{\nDE_EBV = kd*$5/(1 - $5)\nDE_PA = kd*$10/(1 - $10)\nDE_R = DE_EBV - DE_PA\nR = DE_R/(DE_R + kd)\nif(R>0){\nDRP = $6 + ($4-$6)/R\n} else {\nR = 0.0\nDRP = 0.0\n}\nprint $0, DRP, R\n}\n\nIf you need to compute a cow's deregressed proof, please consult Wiggans et al. (2012; JDS).", null, "" ]
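For readers who prefer Python, the six steps can be sketched as follows. This is a minimal translation of the AWK logic above, not part of the original wiki page; reading the columns out of sol_and_acc is omitted:

```python
def deregressed_proof(ebv, pa, rel_ebv, rel_pa, h2=0.25):
    """VanRaden (2009) deregression for a sire, following the steps above.

    ebv, pa         : EBV and parent average of the animal
    rel_ebv, rel_pa : reliabilities of EBV and PA (0 <= rel < 1)
    h2              : heritability (default 0.25, i.e. kd = 14)
    Returns (DRP, R), where R is the reliability of the daughter contribution.
    """
    kd = (4 - 2 * h2) / h2                 # VanRaden and Wiggans (1991)
    de_ebv = kd * rel_ebv / (1 - rel_ebv)  # daughter equivalents of EBV
    de_pa = kd * rel_pa / (1 - rel_pa)     # daughter equivalents of PA
    de_r = de_ebv - de_pa                  # daughter contribution only
    r = de_r / (de_r + kd)
    if r <= 0:
        return 0.0, 0.0                    # no daughter information
    return pa + (ebv - pa) / r, r
```

As in the AWK script, animals with no daughter contribution (R ≤ 0) get DRP = 0.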
[ null, "http://nce.ads.uga.edu/wiki/lib/tpl/dokuwiki/images/logo.png", null, "http://nce.ads.uga.edu/wiki/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82138723,"math_prob":0.9917391,"size":1469,"snap":"2022-27-2022-33","text_gpt3_token_len":456,"char_repetition_ratio":0.10784983,"word_repetition_ratio":0.0,"special_character_ratio":0.31858408,"punctuation_ratio":0.14237288,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99886274,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T22:33:37Z\",\"WARC-Record-ID\":\"<urn:uuid:5007a4fc-abe5-40be-88c9-9f3b37273c3a>\",\"Content-Length\":\"17636\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e6c4ab4-3120-4a71-94c7-ca9fa44ef7f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:36c3c9eb-415f-42ca-a43d-a028d166bed5>\",\"WARC-IP-Address\":\"128.192.176.6\",\"WARC-Target-URI\":\"http://nce.ads.uga.edu/wiki/doku.php?id=how_to_compute_vanraden_s_deregressed_proof\",\"WARC-Payload-Digest\":\"sha1:AC5ZNDN7ODGFIMJNPINPR5JOTESZBYTF\",\"WARC-Block-Digest\":\"sha1:DC7B5GYYBMQLQGIVJIS23FFEIFSVMGXO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104678225.97_warc_CC-MAIN-20220706212428-20220707002428-00027.warc.gz\"}"}
https://cstheory.stackexchange.com/questions/1454/exhausting-simulator-of-zero-knowledge-protocols-in-the-random-oracle-model
[ "# Exhausting Simulator of Zero-Knowledge Protocols in the Random Oracle Model\n\nIn a paper titled \"On Deniability in the Common Reference String and Random Oracle Model,\" Rafael Pass writes:\n\nWe note that when proving security according to the standard zero-knowledge definition in the RO [Random Oracle] model, the simulator has two advantages over a plain model simulator, namely,\n\n1. The simulator can see what values parties query the oracle on.\n2. The simulator can answer these queries in whatever way it chooses as long as the answers \"look\" OK.\n\nThe first technique, namely the ability to \"monitor\" queries to the RO, is very common in all papers referring to the concept of zero-knowledge in the RO model.\n\nNow, consider the definition of black-box zero-knowledge (PPT stands for probabilistic, polynomial-time Turing machine):\n\n$\\exists$ a PPT simulator $S$, such that $\\forall$ (possibly cheating) PPT verifier $V^*$, $\\forall$ common input $x\\in L$, and $\\forall$ randomness $r$, the following are indistinguishable:\n\n• the view of $V^*$ while interacting with the prover $P$ on input $x$ and using randomness $r$;\n• the output of $S$ on inputs $x$ and $r$, when $S$ is given black-box access to $V^*$.\n\nHere, I want to exhibit a cheating verifier $V'$, whose job is to exhaust any simulator which tries to monitor RO queries:\n\nLet $S$ be the simulator guaranteed by the existential quantifier in the definition of black-box zero-knowledge, and let $q(|x|)$ be a polynomial which upper-bounds the running time of $S$ on input $x$. Assume that $S$ tries to monitor the queries of $V^*$ to the RO.\n\nNow, consider a cheating $V'$, which first queries the RO for $q(|x|)+1$ times (on arbitrary inputs of its choice), and then acts arbitrarily maliciously.\n\nObviously, $V'$ exhausts the simulator $S$. 
A simple workaround for $S$ is to reject such malicious behavior, yet that way, a distinguisher can easily distinguish the real interaction from the simulated one. (Since in the real interaction, the prover $P$ cannot monitor $V'$'s queries, and thus won't reject based on the mere fact that $V'$ is querying too much.)\n\nWhat is the workaround for the above problem?\n\n## Edit:\n\nA good source for studying ZK in the RO model is:\n\nMartin Gagné, A Study of the Random Oracle Model, Ph.D. Thesis, University of California, Davis, 2008, 109 pages. Available on ProQuest: http://gradworks.umi.com/33/36/3336254.html\n\nParticularly, it gives definitions of black-box ZK in the RO Model in section 3.3 (page 20), attributed to Yung and Zhao:\n\nMoti Yung and Yunlei Zhao. Interactive Zero-Knowledge with Restricted Random Oracles. In Theory of Cryptography - TCC 2006, LNCS 3876, pp. 21-40, 2006.\n\n• I think you might mean \"exhaustive\" instead of \"exhausting\". – Dave Clarke Sep 20 '10 at 15:56\n• I beg to differ. I meant I found a way for \"exhausting\" the simulator of ZK protocols... There's no such thing as an \"exhaustive\" simulator. – M.S. Dousti Sep 20 '10 at 16:51\n• My bad. I read exhausting as an adjective, not a verb. – Dave Clarke Sep 20 '10 at 17:51\n\nThere is a question of why one would want to define black-box ZK in the random oracle model. There are at least two reasons why people suggested the definition of black-box zero knowledge:\n\n1) For a positive result, when you say that a simulator is \"black-box zero knowledge\" it automatically gives you a nice bound on its running time (i.e., $poly(|x|) \\cdot time(V*)$ as opposed to $poly(time(V*))$), and it also may be useful to know that the simulator doesn't \"look at the guts\" of $V*$ and doesn't care if $V*$ is implemented using RAM, circuit, etc... 
While a random-oracle model simulator may be efficient, it's obviously not black-box, because it's supposed to somehow look at the execution of $V*$ and understand from it when $V*$ is evaluating a hash function. For this reason, there is a sense in which it doesn't make sense to say that a random-oracle model simulator is \"black-box\".\n\n2) For a negative result, people use \"black-box simulator\" to capture a large class of proof techniques. In this case you can define black-box simulator also in the random oracle model and the definition that makes sense is what David said. In fact, for a negative result even not in the random oracle model, it's best if the result holds even if you allow the simulator $poly(time(V*))$ running time. Indeed, although it's not always stated, the negative results I'm aware of all have this property, since the cheating verifier $V*$ is typically a fixed polynomial algorithm that runs some pseudorandom functions, while the simulator can have any polynomial running time.\n\n• Does the same hold for \"universal simulation\" ZK? After all, black-box ZK is a type of universal-simulation ZK, whose running time is fixed before $V*$ is determined. (However, non-black-box ZK is a type of universal-simulation ZK, in which S can look at the \"guts\" of V*) – M.S. Dousti Sep 22 '10 at 6:18\n• Please see the edited question for some references. – M.S. Dousti Sep 23 '10 at 12:34\n• For a universal (non-black-box) simulator, one must allow running time polynomial in the running time of $V^{*}$ since otherwise the simulator doesn't have time to invoke $V^{*}$. But generally the point I was making is that \"black-box zero knowledge\" is not a canonical definition but rather a tool, and that tool can be used differently in the context of positive or negative results to make the results more meaningful. – Boaz Barak Sep 23 '10 at 17:31\n• I delayed replying to your comment since I wanted to read more. 
In particular, I read Yung and Zhao's paper (cited above), and noted that they used black-box ZK in the RO model for a positive result, while you said \"it doesn't make sense to say that a random-oracle model simulator is 'black-box'.\" Is their result philosophically problematic, or should we relax the definition of black-box? – M.S. Dousti Sep 26 '10 at 20:33\n\nHere is my take on the problem. I have not recently read any papers that deal with black-box zero-knowledge in the random oracle (RO) model, so I'm just guessing at what they mean and not at what is written there. The short answer (guess) is that BB-ZK in the RO model should let the simulator run in time polynomial in |x| and the number of RO queries issued by V*, the cheating verifier.\n\nLet's try to justify that guess. An initial observation is that the term \"black-box zero-knowledge proofs in the random oracle model\" needs a closer look to properly define. Black-box simulators are defined to work with any oracle (i.e., the cheating verifier as a black-box), and their only interface is through the oracle input/output. If we just augment this model to give a RO to all parties (perhaps by allowing their circuits to have RO gates), then we get a model where the simulator cannot program the RO - on an oracle query, everything (including RO queries) just happens \"inside\" of the V* oracle, and then it returns its next message. If we want to allow RO programming, then we need to modify the interfaces: The simulator now gets an input/output oracle for V* and no random oracle. On each call to the V* oracle, instead of producing the next message, the oracle may instead produce the next query to the RO, and the simulator can give it the RO response by calling the oracle again. Now this allows RO programming, and we can also allow the simulator's running time to depend on the number of queries to the RO.\n\nAny further exploration of the meaning of these definitions is left to the reader. 
I'm thinking syntactically.\n\n• Thanks for the answer, David. Regardless of the ability of the simulator to program the RO, it should be able to \"monitor\" them. So, every oracle query from V* wastes M's time by at least one step. Your big idea is to change the model to \"let the simulator run in time polynomial in |x| and the number of RO queries issued by V*.\" That is not the standard model, but I see it as a reasonable solution. Yet I think the \"giants\" in the community must acknowledge the authenticity of such a model first... – M.S. Dousti Sep 20 '10 at 15:43\n• Can you cite a source that precisely defines \"the standard model\"? (That term is often used as a synonym for \"no random oracles or other such modifications are present in the model of computation,\" but I don't think that this is what you meant.) My expectation is that I have sketched the definition of what would be considered standard, and if not, then we can figure that out without any \"giants\" actively certifying our reasoning. – David Cash Sep 20 '10 at 16:48\n• Sure, by \"standard model\" I meant the \"standard definition\" of ZK under the RO model. You may refer to Rafael Pass's paper (cited in the question), or his MSc thesis (titled \"Alternative Variants of Zero-Knowledge Proofs\"), or Wee's paper in AsiaCrypt 2009 (\"Zero Knowledge in the Random Oracle Model, Revisited\"). None of them defined \"black-box\" ZK in the RO model (they all mentioned auxiliary input ZK), though none referred to \"run in time polynomial in |x| and the number of RO queries made by V*\". Hence, I think you are putting forward a new definition (Google it!) – M.S. Dousti Sep 20 '10 at 17:17\n• Please see the edited question for some references. – M.S. Dousti Sep 23 '10 at 12:33" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.860939,"math_prob":0.89572424,"size":2586,"snap":"2019-51-2020-05","text_gpt3_token_len":644,"char_repetition_ratio":0.11115415,"word_repetition_ratio":0.0,"special_character_ratio":0.2536736,"punctuation_ratio":0.13241106,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9764913,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T02:03:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b857f8eb-172a-4e5d-8649-4d6a763c80d6>\",\"Content-Length\":\"161793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9677a07-6951-4569-b0e6-0bd2692237e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d38a6d2-1f4c-4b6c-8063-04e8e9ced3dd>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/1454/exhausting-simulator-of-zero-knowledge-protocols-in-the-random-oracle-model\",\"WARC-Payload-Digest\":\"sha1:N3BSCOLZX3UKNOUZ46VAAZSDZTH5AH3D\",\"WARC-Block-Digest\":\"sha1:UGZO3JAFQ3UK7LBB2QKFJA3TKB5RCNIE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250608062.57_warc_CC-MAIN-20200123011418-20200123040418-00493.warc.gz\"}"}
https://www.physicsforums.com/threads/multi-ferroics-intro-help.1004125/
[ "# Multi-Ferroics Intro Help\n\nHomework Statement:\n1D mechanical bar. Two different materials that clamped at x = 0. both are isotropic and perfectly bonded together. Young's modulus E1 = 10GPa and E2 = 50GPa. displacement of 2um at x = 2mm\n\nFind the stresses, strains and displacements in each of the bars.\nPlot the energy per unit volume in material 1 as a function of the Young's modulus in material 1, for five cases. E = 1, ,5, 10, 25 and 50\nRelevant Equations:\nEpsilon (stress) = sigma (strain) / E (young's modulus)\ndu (displacement)/ dx = * E = strain\nI have a TA that is ignoring me, so I have to resort to this online forum. Also, a very unhelpful example from lecture (just 1).\nI have some other equations, but I dont really know how they all tie together.\n\nI find epsilon = 10um/1mm (both materials are 1 mm in length) = 1E-2mm/mm\nsigma 2 = E1*epsilon = 50E7Pa\nsigma 1 = E2*epsilon = 10E7Pa\n\nAs far as the second part, I thought I did something reasonable, but after rereading it, I realized I missed a crucial part of the problem.\nI can provide anything else that you may want to help me understand this homework (in context) since I don't have anyone to explain it to me. Other than the lectures which are very unhelpful.\n\ncaz\nGold Member\nWhat does the fact that your calculated stresses are different imply?\n\nWhat does the fact that your calculated stresses are different imply?\nThat my assumption that both materials receive the same displacement, may be wrong?\n\ncaz\nGold Member\nYes. Think about a hard spring and a soft spring in series. If you pull it, the soft spring will extend more.\n\nIf the stresses are different at the boundary between the materials, what happens to the boundary?\n\nLast edited:\nStress times cross sectional area is force. So at the boundary between the two materials, the forces are different. 
What does this mean physically?\nThat the stress is different between each material?\n\ncaz\nGold Member\nI edited my previous comment.\n\nWhat happens to the boundary if the stresses are different?\n\nI edited my previous comment.\n\nWhat happens to the boundary if the stresses are different?\noh. ok. now i am starting to catch on. (i am way better at programming than i ever will be at mechanics).\nso the total displacement is 10um, but that is distributed between both materials?\n\ncaz\nGold Member\nYes, the displacements are not equally distributed.\n\ncaz\nGold Member\nThe system is at equilibrium. What does that say about the forces of the boundary?\n\nThe system is at equilibrium. What does that say about the forces of the boundary?\nWouldnt that imply that the forces are the same? Or would that be the strain that is the same?\n\ncaz\nGold Member\nAt equilibrium, the boundary does not move. This means the net force is 0. This means that the stresses are identical in both materials.\n\nUsing this condition, do you see what you need to do?\n\nAt equilibrium, the boundary does not move. This means the net force is 0. This means that the stresses are identical in both materials.\ndidnt we say that the stresses are different? oh nvm. it was more of a question.\n\ncaz\nGold Member\nIf you assume that the stresses are identical, do you see how to do the calculation?\nDo you understand why they are identical?\n\nIf you assume that the stresses are identical, do you see how to do the calculation?\nDo you understand why they are identical?\nI am confused on how you know they are in equilibrium\n\ncaz\nGold Member\nNothing is moving. Therefore there are no accelerations. Therefore there are no net forces.\n\nAt the boundary between the materials, each material is pulling the boundary towards itself with a force equal to the stress times the cross-sectional area. (Check your text on how to get the directions correct). 
This means that the stresses are identical.\n\nDo you understand?\n\n•", null, "Nickpga\nAlright. That makes sense. I suppose I had an imaginary force that caused the displacement. Like someone pulling it\n\ncaz\nGold Member\nYes.\n\nSo dtotal=dmat1+dmat2 with the condition that the stresses are identical for mat1 and mat2. Do you see where to go from here?\n\ncaz\nGold Member\nBtw, your relevant equation for stress is wrong and you use two different lengths for the total displacement in your first post.\n\nBtw, your relevant equation for stress is wrong and you use two different lengths for the total displacement in your first post.\nreally? stress = strain/ Y_modulus? Thats the equation the professor boxed and starred\n\ncaz\nGold Member\nStress(sigma)=strain(epsilon)*Y\n(Y and stress have the same units)\n\nStress(sigma)=strain(epsilon)*Y\n(Y and stress have the same units)\nYou know. Thats what happens when the professor cant be be bothered to write legibly. I thought stress was epsilon. and strain sigma.\n\ncaz\nGold Member\nThe other equation you need is strain=dl/l\n\nThe other equation you need is strain=dl/l\nI have that one too. I just had it wrong. Thanks a lot for your time and effort in asking me questions to make my mind work.\nI wouldn't say I am dumb, I just don't have a mind that thinks 'mechanically'.\n\n•", null, "Delta2\ncaz\nGold Member\nYour welcome. New concepts take time to absorb. Be patient.\n\nYou seem to know what to do now so I’ll be signing off.\n\n•", null, "Delta2 and Nickpga" ]
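The equilibrium argument worked out in the thread (equal stress at the bonded interface, so the displacement splits in proportion to L/E) can be checked with a few lines of Python. This sketch uses the homework statement's numbers (1 mm bars, E1 = 10 GPa, E2 = 50 GPa, 2 um end displacement); the thread itself also mentions 10 um at one point, so treat the values as assumptions:

```python
# Two bars in series, clamped at x = 0, pulled at the free end.
# Equilibrium at the bonded interface => the stress sigma is the same
# in both bars; each bar then stretches by d_i = (sigma / E_i) * L_i.
L1 = L2 = 1e-3            # bar lengths, m
E1, E2 = 10e9, 50e9       # Young's moduli, Pa
d_total = 2e-6            # imposed end displacement, m

sigma = d_total / (L1 / E1 + L2 / E2)   # common stress, Pa
eps1, eps2 = sigma / E1, sigma / E2     # strains (dimensionless)
d1, d2 = eps1 * L1, eps2 * L2           # displacement of each bar, m

print(f"sigma = {sigma:.3e} Pa, d1 = {d1:.3e} m, d2 = {d2:.3e} m")
```

The softer bar (material 1) takes five times the strain of the stiffer one, which is exactly the "soft spring extends more" picture used in the thread.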
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9529256,"math_prob":0.9262604,"size":1770,"snap":"2021-31-2021-39","text_gpt3_token_len":502,"char_repetition_ratio":0.09229898,"word_repetition_ratio":0.5868263,"special_character_ratio":0.27853107,"punctuation_ratio":0.09659091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945304,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T05:27:36Z\",\"WARC-Record-ID\":\"<urn:uuid:394d52ea-5d8a-41b1-a4c3-150ddc106ccb>\",\"Content-Length\":\"137493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9336fb74-a093-4421-9d89-1e593bcd3ea2>\",\"WARC-Concurrent-To\":\"<urn:uuid:089b6d99-c8c8-472f-aeb8-650cd9d2454a>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/multi-ferroics-intro-help.1004125/\",\"WARC-Payload-Digest\":\"sha1:Q4VKJHTUJ4TUDKNB5CO5AEJRNPXMG2MQ\",\"WARC-Block-Digest\":\"sha1:JAQFFKUGTULMHFSHWIWKGL2SGTRJLBWE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151638.93_warc_CC-MAIN-20210725045638-20210725075638-00330.warc.gz\"}"}
https://reference.wolfram.com/language/ref/Transpose.html
[ "# Transpose\n\nTranspose[list]\n\ntransposes the first two levels in list.\n\nTranspose[list,{n1,n2,…}]\n\ntransposes list so that the k", null, "level in list is the nk", null, "level in the result.\n\nTranspose[list,m<->n]\n\ntransposes levels m and n in list, leaving all other levels unchanged.\n\n# Details and Options", null, "• Transpose[m] gives the usual transpose of a matrix m.\n• Transpose[m] can be input as m^T.\n• ^T can be entered as", null, "tr", null, "or \\[Transpose].\n• For a matrix m, Transpose[m] is equivalent to Transpose[m,{2,1}].\n• For an array a of depth r≥3, Transpose[a] is equivalent to Transpose[a,{2,1,3,…,r}], only transposing the first two levels. »\n• The ni in Transpose[a,{n1,n2,…}] or Transpose[a,n1<->n2] must be positive integers no larger than ArrayDepth[a].\n• If {n1,n2,…} is a permutation list, then the element at position {i1,i2,…} of Transpose[a,{n1,n2,…}] is the element at position {in1,in2,…} of the array a.\n• For a permutation perm, the dimensions of Transpose[a,perm] are Permute[Dimensions[a],perm].\n• A permutation list perm in Transpose[a,perm] can also be given in Cycles form, as returned by PermutationCycles[perm]. »\n• Transpose[a,m<->n] or Transpose[a,TwoWayRule[m,n]] is equivalent to Transpose[a,Cycles[{{m,n}}]]. »\n• Transpose allows the ni to be repeated, computing diagonals of the subarrays determined by the repeated levels. The result is therefore an array of smaller depth.\n• For a square matrix m, Transpose[m,{1,1}] returns the main diagonal of m, as given by Diagonal[m]. 
»\n• In general, if np=nq then the operation Transpose[a,{n1,n2,}] is possible for an array a of dimensions {d1,d2,} if dp=dq.\n• Transpose works on SparseArray and structured array objects.\n\n# Examples\n\n## Basic Examples(3)\n\nTranspose a 3×3 numerical matrix:\n\nVisualize the transposition operation:\n\nTranspose a 2×3 symbolic matrix:\n\nUse", null, "followed by", null, "tr", null, "to enter the transposition operator:\n\n## Scope(12)\n\n### Matrices(6)\n\nEnter a matrix as a grid:\n\nTranspose the matrix and format the result:\n\nTranspose a row matrix into a column matrix:\n\nFormat the input and output:\n\nTranspose the column matrix back into a row matrix:\n\nTransposition of a vector leaves it unchanged:\n\nTranspose leaves the identity matrix unchanged:\n\ns is a sparse matrix:\n\nTranspose[s] is also sparse:\n\nThe indices have, in effect, just been reversed:\n\nTranspose a SymmetrizedArray object:\n\nThe result equals the negative of the original array, due to its antisymmetry:\n\nFormat a symbolic transpose in TraditionalForm:\n\n### Arrays(6)\n\nTranspose the first two levels of a rank-3 array, effectively transposing it as a matrix of vectors:\n\nTranspose an array of depth 3 using different permutations:\n\nPerform transpositions using TwoWayRule notation:\n\nPerform transpositions using Cycles notation:\n\nTranspose levels 2 and 3 of a depth-4 array:\n\nThe second and third dimensions have been exchanged:\n\nGet the leading diagonal by transposing two identical levels:\n\n## Applications(13)\n\n### Matrix Decompositions(4)", null, "is a random real matrix:\n\nFind the QRDecomposition of", null, ":", null, "is orthogonal, so its inverse is", null, ":\n\nReconstruct", null, "from the decomposition:\n\nCompute the SchurDecomposition of a matrix", null, ":\n\nThe matrix", null, "is orthogonal, so its inverse is", null, ":\n\nReconstruct", null, "from the decomposition:\n\nCompute the SingularValueDecomposition of a matrix", 
null, ":\n\nThe matrices", null, "and", null, "are orthogonal, so their inverses are their transposes:\n\nReconstruct", null, "from the decomposition:\n\nConstruct the singular value decomposition of", null, ", a random", null, "matrix:\n\nFirst compute the eigensystem of", null, ":\n\nThe singular values are the square roots of the nonzero eigenvalues:\n\nThe", null, "matrix is a diagonal matrix of singular values with the same shape as", null, ":\n\nThe", null, "matrix has the eigenvectors as its columns:\n\nThe", null, "matrix has columns of the form", null, "for each of the nonzero eigenvalues:\n\nVerify that", null, "and", null, "are orthogonal:\n\nVerify the decomposition:\n\n### Special Matrices(6)\n\nA symmetric matrix obeys", null, ", an antisymmetric matrix", null, ". This matrix is symmetric:\n\nConfirm with SymmetricMatrixQ:\n\nThis matrix is antisymmetric:\n\nConfirm with AntisymmetricMatrixQ:\n\nA matrix is orthogonal if", null, ". Check if the matrix", null, "is orthogonal:\n\nConfirm that it is orthogonal using OrthogonalMatrixQ:\n\nA real-valued symmetric matrix is orthogonally diagonalizable as", null, ", with", null, "diagonal and real valued and", null, "orthogonal. Verify that the following matrix is symmetric and then diagonalize it:\n\nTo diagonalize, first compute", null, "'s eigenvalues and place them in a diagonal matrix:\n\nNext, compute the unit eigenvectors:\n\nThen", null, "can be diagonalized with", null, "as previously, and", null, ":\n\nA matrix is unitary if", null, ". Show that the matrix", null, "is unitary:\n\nConfirm with UnitaryMatrixQ:\n\nA real-valued matrix", null, "is called normal if", null, ". Normal matrices are the most general kind of matrix that can be unitarily diagonalized as", null, "with", null, "diagonal and", null, "unitary. 
All real symmetric matrices", null, "are normal because both sides of the equality are simply", null, ":\n\nShow that the following matrix is normal and then diagonalize it:\n\nConfirm using NormalMatrixQ:\n\nA normal matrix like", null, "can be unitarily diagonalized using Eigensystem:\n\nUnlike the case of a symmetric matrix, the diagonal matrix here is complex valued:\n\nNormalizing the eigenvectors and putting them in columns gives a unitary matrix:\n\nConfirm the diagonalization", null, ":\n\nShow that real antisymmetric matrices and orthogonal matrices are normal and thus can be unitarily diagonalized. For orthogonal matrices, simply substitute in the definition", null, "to get the identity matrix on both sides:\n\nFor an antisymmetric matrix, both sides are simply", null, ":\n\nOrthogonal matrices have eigenvalues that lie on the unit circle:\n\nAntisymmetric matrices have pure imaginary eigenvalues:\n\n### Visualization(3)\n\nUse Transpose to change data grouping in BarChart:\n\nUse Transpose to swap the", null, "and", null, "axes in ListPlot3D:\n\nThis has the effect of reflecting the data across the line", null, ":\n\nMultidimensionalize (in the tensor product sense) a one-dimensional list command:\n\nFor example, accumulate at all levels of an array:\n\nReverse at all levels of an array:\n\nImport an RGB image:\n\nReverse the data at all levels, reflecting across the line", null, "and swapping red and blue channels:\n\n## Properties & Relations(18)\n\nTranspose obeys", null, ":\n\nFor compatible matrices", null, "and", null, ", Transpose obeys", null, ":\n\nMatrix inversion commutes with Transpose, i.e.", null, ":\n\nConjugate[Transpose[m]] can be done in a single step with ConjugateTranspose:\n\nMany special matrices are defined by their properties under Transpose. 
A symmetric matrix has", null, ":\n\nAn orthogonal matrix satisfies", null, ":\n\nThe product of a matrix and its transpose is symmetric:", null, "is the matrix product of", null, "and", null, ":", null, ", so", null, "is symmetric:\n\nThe sum of a square matrix and its transpose is symmetric:", null, "is the matrix sum of", null, "and", null, ":", null, ", so", null, "is symmetric:\n\nThe difference is antisymmetric:\n\nTransposition of {{}} returns {}:\n\nThe result cannot be {{}} again because the permutation of the dimensions {1,0} is {0,1} and no expression can have dimensions {0,1}:\n\nTranspose[a] transposes the first two levels of an array:\n\nTranspose[a,perm] returns an array of dimensions Permute[Dimensions[a],perm]:\n\nTake an array with dimensions {2,3,4}:\n\nTransposing by a permutation σ transposes the element positions by σ^-1:\n\nTranspose[a,Cycles[{{m,n}}]] and Transpose[a,m<->n] are equivalent:\n\nBoth forms are equivalent to using PermutationList[Cycles[{{m,n}}]]:\n\nComposition of transpositions is equivalent to a product of their permutations, in the same order:\n\nTranspositions do not commute, in general:\n\nTranspose[a,σ] is equivalent to Flatten[a,List/@InversePermutation[σ]]:\n\nTranspose and TensorTranspose coincide on explicit arrays:\n\nTensorTranspose further supports symbolic operations that Transpose does not:\n\nTransposition of a matrix can also be performed with Thread:\n\nTranspose[m,{1,1}] is equivalent to Diagonal[m]:\n\nTranspose[a,{1,…,1,2,3,…}] is equivalent to tracing the levels being transposed to level 1:\n\n## Possible Issues(1)\n\nTranspose only works for rectangular arrays:", null, "" ]
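For readers outside the Wolfram Language, the level-permutation semantics described above can be imitated with NumPy (an illustration, not part of the original reference page). One caveat: numpy.transpose takes the inverse of the Wolfram-style permutation — Wolfram's perm sends level k of the input to level perm[k] of the result, while NumPy's axes[i] names which input axis becomes result axis i:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)      # depth-3 array, dims {2,3,4}

# Transpose[a]: swap only the first two levels
assert np.transpose(a, (1, 0, 2)).shape == (3, 2, 4)

# Transpose[a, {3, 1, 2}]: level k of a becomes level perm[k] of the result,
# so result dims = Permute[{2,3,4}, {3,1,2}] = {3,4,2}
perm = (3, 1, 2)
axes = np.argsort(np.array(perm) - 1)   # inverse permutation, 0-indexed
b = np.transpose(a, axes)

# Transpose[m, {1, 1}]: repeating a level extracts the main diagonal
m = np.arange(9).reshape(3, 3)
diag = np.einsum("ii->i", m)            # same as Diagonal[m]
```

This also matches the element-position rule quoted above: the element at position {i1,i2,i3} of the result is the element at position {i3,i1,i2} of a.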
https://discourse.julialang.org/t/put-an-underscore-before-all-digits-in-a-string/50195
# Put an underscore before all digits in a string

I’ve tried reading these docs https://en.wikibooks.org/wiki/Introducing_Julia/Strings_and_characters and these docs https://docs.julialang.org/en/v1/base/strings/#Base.@s_str and I’ve tried several things, such as:

```
replace("x1", r"\d+" => s"_\1")
```

which returns:

```
PCRE error: unknown substring
```

And:

```
replace("x1", r"[0-9]+" => s"_\1")
```

which returns:

```
PCRE error: unknown substring
```

I’ve also tried:

```
replace("x1", r"[0-9]+" => s"_\g")
```

which returns:

```
Bad replacement string: _\g
```

And:

```
replace("x1", r"\d+" => SubstitutionString("_\\1"))
```

which returns:

```
PCRE error: unknown substring
```

And:

```
replace("x1", r"\d+" => SubstitutionString("_\\g"))
```

which returns:

```
Bad replacement string: _\g
```

I must admit I’m kind of stumped about what to do. Any ideas? In case my question is unclear, what I’m expecting is the output “x_1”.

I think what you’re missing is that `\1` refers to the first capture in the regex match, and to create a capture you need to use parentheses in the regex:

```
julia> replace("x123y456", r"(\d+)" => s"_\1")
"x_123y_456"
```

(note the parentheses around `\d+`)
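The same capture-group rule applies in other regex engines. For comparison, a Python sketch of the accepted fix: parentheses create group 1, and `\1` in the replacement refers back to it.

```python
import re

# Without parentheses there is no group 1 to refer to;
# (\d+) captures each digit run so \1 can re-insert it after the underscore.
print(re.sub(r"(\d+)", r"_\1", "x123y456"))  # x_123y_456
```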
https://www.numberempire.com/177
# Number 177

one hundred seventy seven

### Properties of the number 177

Factorization: 3 * 59
Divisors: 1, 3, 59, 177
Count of divisors: 4
Sum of divisors: 240
Previous integer: 176
Next integer: 178
Is prime? NO
Previous prime: 173
Next prime: 179
177th prime: 1051
Is a Fibonacci number? NO
Is a Bell number? NO
Is a Catalan number? NO
Is a factorial? NO
Is a regular number? NO
Is a perfect number? NO
Polygonal number (s < 11)? NO
Binary: 10110001
Octal: 261
Duodecimal: 129
Hexadecimal: b1
Square: 31329
Square root: 13.30413469565
Natural logarithm: 5.1761497325738
Decimal logarithm: 2.2479732663618
Sine: 0.87758978777712
Cosine: 0.47941231147032
Tangent: 1.8305532978192

Number 177 is pronounced one hundred seventy seven. Number 177 is a composite number. Factors of 177 are 3 * 59. Number 177 has 4 divisors: 1, 3, 59, 177. Sum of the divisors is 240. Number 177 is not a Fibonacci number. It is not a Bell number. Number 177 is not a Catalan number. Number 177 is not a regular number (Hamming number). It is not a factorial of any number. Number 177 is a deficient number and therefore is not a perfect number. Binary numeral for number 177 is 10110001. Octal numeral is 261. Duodecimal value is 129. Hexadecimal representation is b1. Square of the number 177 is 31329. Square root of the number 177 is 13.30413469565. Natural logarithm of 177 is 5.1761497325738. Decimal logarithm of the number 177 is 2.2479732663618. Sine of 177 is 0.87758978777712. Cosine of the number 177 is 0.47941231147032. Tangent of the number 177 is 1.8305532978192.
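Most of the tabulated properties can be reproduced in a few lines of Python. This sketch checks the divisor, deficiency, and base-representation claims for 177:

```python
n = 177

# Divisors by trial division (fine for a number this small)
divisors = [d for d in range(1, n + 1) if n % d == 0]
assert divisors == [1, 3, 59, 177]   # 4 divisors
assert sum(divisors) == 240          # sum of divisors
assert len(divisors) > 2             # composite, hence not prime
assert sum(divisors) - n < n         # proper-divisor sum 63 < 177: deficient

# Base representations
assert format(n, "b") == "10110001"  # binary
assert format(n, "o") == "261"       # octal
assert format(n, "x") == "b1"        # hexadecimal

# Duodecimal (base 12) by repeated division
digits, k = [], n
while k:
    digits.append(k % 12)
    k //= 12
assert "".join(map(str, reversed(digits))) == "129"
```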
https://www.iacr.org/cryptodb/data/paper.php?pubkey=31097
## CryptoDB

### Paper: Linear Cryptanalysis of FF3-1 and FEA

Authors: Tim Beyne, imec-COSIC, ESAT, KU Leuven
DOI: 10.1007/978-3-030-84242-0_3 (login may be required)
CRYPTO 2021

Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South Korean standards FEA-1 and FEA-2. The data complexity of the proposed attacks on FF3-1 and FEA-1 is $O(N^{r/2 - 1.5})$, where $N^2$ is the domain size and $r$ is the number of rounds. For example, FF3-1 with $N = 10^3$ can be distinguished from an ideal tweakable block cipher with advantage $\ge 1/10$ using $2^{23}$ encryption queries. Recovering the left half of a message with similar advantage requires $2^{24}$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $\mathbb{Z}/N\mathbb{Z}$.

##### BibTeX

@inproceedings{crypto-2021-31097,
title={Linear Cryptanalysis of FF3-1 and FEA},
publisher={Springer-Verlag},
doi={10.1007/978-3-030-84242-0_3},
author={Tim Beyne},
year=2021
}
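As a rough sanity check of the quoted data complexity $O(N^{r/2-1.5})$, one can plug in the abstract's parameters. This sketch assumes $r = 8$ rounds for FF3-1 (the round count is not stated in the abstract itself), and ignores the constants hidden by the big-O:

```python
import math

N = 10**3            # N^2 is the domain size
r = 8                # assumed round count for FF3-1
queries = N ** (r / 2 - 1.5)          # N^2.5 for r = 8
print(round(math.log2(queries), 1))   # ~24.9, same ballpark as the quoted 2^23-2^24
```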
https://cryptolisting.org/coin/funx/
### $0.225 0.00%

## Function X (FUNX)

Rank: 4349
Market Cap: $0.000
Volume 24H: 0 FUNX
Open 24H: $0.225
Low/High: $0.225 - $0.225
Market share: 0%
Total Supply: 378.00 FUNX
Proof type: N/A

# What does "function of" mean?

The function in part (b) shows a relationship that is a one-to-one function, because each input is associated with a single output. However, some functions have only one input value for each output value, as well as having only one output for each input.

This definition of "graph" refers to a set of pairs of objects. Graphs, in the sense of diagrams, are most applicable to functions from the real numbers to themselves.

Functions can be written as ordered pairs, tables, or graphs. The set of input values is called the domain, and the set of output values is called the range. The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus (see History of the function concept).

Each state can be matched with two individuals who have been elected to serve as senator. In turn, each senator can be matched with one specific state that he or she represents. Both of these are real-life examples of relations.

In the case of a circle, one input can give you two outputs - one on each side of the circle. Thus, the equation for a circle is not a function, and you cannot write it in function form. As we just saw, the difference between a relation that is a function and a relation that is not a function is that a relation that is a function has inputs relating to one and only one output. When a relation is not a function, this is not the case.

A function is a specific type of relation in which each input value has one and only one output value. An input is the independent value, and the output value is the dependent value, as it depends on the value of the input.
Relations are simply correspondences between sets of values or information. A function has only one output value for each input value. These functions produce more interesting graphs with more curves. The highest power of the function tells you how many curves or ups and downs the graph may have. Linear functions have variables to the first degree and have two constants that determine the location of the graph. The constant m determines whether the line slopes down or up.

## A Benefit of Ordered Pairs

Think about members of your family and their ages. The pairing of each member of your family with their age is a relation. Each family member can be paired with an age in the set of ages of your family members. Another example of a relation is the pairing of a state with its United States senators.

## What is an algebraic function, with an example?

Using the Vertical Line Test: when both the independent quantity (input) and the dependent quantity (output) are real numbers, a function can be represented by a graph in the coordinate plane. The independent value is plotted on the x-axis and the dependent value is plotted on the y-axis. Note that inputs $q$ and $r$ both give output $n$. In this case, each input is associated with a single output.

## What is a void function?

A function is a relation in which each input has only one output. In the relation, y is a function of x, because for each input x (1, 2, 3, or 0), there is only one output y. By contrast: y is not a function of x when x = 1 has multiple outputs, and x is not a function of y when y = 2 has multiple outputs.

A standard function notation is one representation that makes it easier to work with functions. Let’s begin by considering the input as the items on the menu. The output values are then the prices. Each item on the menu has only one price, so the price is a function of the item.

### Specialized notations

A characteristic function is a special case of a simple function.
The set of elements that get pointed to in Y (the actual values produced by the function) is called the Range. When both the independent quantity (input) and the dependent quantity (output) are real numbers, a function can be represented by a graph in the coordinate plane. The independent value is plotted on the x-axis and the dependent value is plotted on the y-axis. The fact that each input value has exactly one output value means graphs of functions have certain characteristics.

## Functions and linear equations

Remember, we can use any letter to name the function; we can use the notation $h\left(a\right)$ to show that $h$ depends on $a$. The input value $a$ must be put into the function $h$ to get an output value.

## What are the characteristics of a function?

Values in the range are also known as output values, or values of the dependent variable, and are often labeled with the lowercase letter y. A function f is a relation that assigns a single value in the range to each value in the domain. In other words, no x-values are used more than once.

• This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number.
• This section describes general properties of functions that are independent of specific properties of the domain and the codomain.
• The key here is to notice the letter that is in front of the parenthesis.
• A function is a relation for which each value from the set of first components of the ordered pairs is associated with exactly one value from the set of second components of the ordered pair.
• When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function.

### Example: A function with two pieces

The areas that the graph avoids are where division by zero happens. Power graphs are produced by functions with only one term and a power. The power can be positive, negative, or even a fraction.
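In code, function notation reads the same way: a name, an input in parentheses, and a single output. A minimal Python sketch of the $h(a)$ notation above (the rule 2a + 1 is an arbitrary illustration, not taken from the text):

```python
# h(a): put the input value a into the function h to get one output value
def h(a):
    return 2 * a + 1   # an arbitrary example rule

assert h(3) == 7       # the input 3 yields exactly one output, 7
assert h(0) == 1
```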
In this lesson, learn how you can differentiate between the eight most common types of functions and their graphs. Learn the distinct look of each so you can easily distinguish them from each other.

The red curve is the graph of a function, because any vertical line has exactly one crossing point with the curve. While we are on the subject of function evaluation, we should now talk about piecewise functions. We’ve actually already seen an example of a piecewise function, even if we didn’t call it a function (or a piecewise function) at the time. To see why this relation is a function, simply pick any value from the set of first components. Now, go back up to the relation and find every ordered pair in which this number is the first component, and list all the second components from those ordered pairs. The list of second components will consist of exactly one value.

### Identify Functions Using Graphs

## What is a function table?

The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite. It is non-vanishing in a region around zero: φ(0) = 1. There is a bijection between probability distributions and characteristic functions.

Then analytic continuation allows enlarging the domain further, to include almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number.

The set of these input-output pairs is called the graph of the function. When the domain and the codomain are sets of real numbers, each such pair may be considered as the Cartesian coordinates of a point in the plane.

Suppose you are making a blanket by sewing together swatches of fabric. You go to the store, and there is a sale on these swatches.
You get three swatches for $4.00, regardless of whether you buy one, two, or three, and each swatch after that costs an additional $2.00.

That's because there are many different types of functions, and the more you continue learning math, the more you will get exposed to. What you have learned in this lesson is a good beginning framework for the types of graphs you will see.

## What is a function in algebra?

A function is a group of statements that together perform a task. A function declaration tells the compiler about a function's name, return type, and parameters. A function definition provides the actual body of the function. The C++ standard library provides numerous built-in functions that your program can call.

This is one of the more common mistakes people make when they first deal with functions. It is very important to note that $$f\left( x \right)$$ is really nothing more than a really fancy way of writing $$y$$. If you keep that in mind, you may find that dealing with function notation becomes a little easier. We now need to move on to something called function notation.

Formally speaking, the graph may be identified with the function, but this hides the usual interpretation of a function as a process. Therefore, in common usage, the function is generally distinguished from its graph. Functions are also called maps or mappings, though some authors make some distinction between "maps" and "functions" (see section #Map). In order to really get a feel for what the definition of a function is telling us, we should probably also check out an example of a relation that is not a function.

In general, we can determine whether a relation is a function by looking at its inputs and outputs. If an input has more than one output, the relation is not a function.
If every input has exactly one output, then the relation is a function.

If we can draw any horizontal line that intersects a graph more than once, then the graph does not represent a one-to-one function, because that $y$ value has more than one input. The vertical line test can be used to determine whether a graph represents a function. A vertical line includes all points with a particular $x$ value. The $y$ value of a point where a vertical line intersects a graph represents an output for that input $x$ value. If we can draw any vertical line that intersects a graph more than once, then the graph does not define a function, because that $x$ value has more than one output.

Every C++ program has at least one function, which is main(), and all the most trivial programs can define additional functions. If m, the slope, is negative, the function's value decreases with increasing x, and the opposite if we have a positive slope. Functions have been used in mathematics for a very long time, and lots of different names and ways of writing functions have come about. So, a function takes elements of a set, and gives back elements of a set.
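The input/output criterion in the last paragraph is easy to mechanize: collect the pairs of a relation and reject it as soon as one input shows two different outputs. A Python sketch (the function name is my own, not from the article):

```python
def is_function(pairs):
    """A relation is a function iff no input is paired with two different outputs."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False   # one input, two outputs: fails the vertical line test
        seen[x] = y
    return True

# Sample points of y = x^2: every input has exactly one output
assert is_function([(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)])

# Sample points of the circle x^2 + y^2 = 1: the input x = 0 has two outputs
assert not is_function([(0, 1), (0, -1), (1, 0), (-1, 0)])
```

This is the discrete analogue of the vertical line test: a repeated x with differing y is exactly a vertical line crossing the graph twice.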
http://docs.ros.org/en/fuerte/api/hai_sandbox/html/namespacehai__sandbox_1_1recognize__3d.html
hai_sandbox::recognize_3d Namespace Reference

## Classes

class  DataScale
class  FiducialPicker
class  ImagePublisher
class  InterestPointDataset
class  NarrowTextureFeatureExtractor
class  PCAIntensities
class  Recognize3DParam
class  ScanLabeler
class  SVM
class  SVMPCA_ActiveLearner

## Functions

def confusion_matrix
def dataset_to_libsvm
def draw_dataset
def draw_labeled_points
def draw_points
def find_max_in_density
def insert_folder_name
def instance_to_image
def instances_to_image
def inverse_indices
def make_point_exclusion_test_set
def preprocess_data_in_dir
def preprocess_scan_extract_features

## Variables

tuple current_scan_pred = InterestPointDataset(xs, results, locs2d, locs3d, None)
string dest = 'mode'
list dset = locations['data']
tuple fname = raw_input('pick a file name')
tuple fp = FiducialPicker(args)
tuple fpfh = rospy.ServiceProxy('fpfh', fsrv.FPFHCalc)
string help = 'fiducialpicker, preprocess, or label'
tuple histogram = np.matrix(res.hist.histograms)
tuple img = cv.CloneMat(cdisp['cv'])
tuple ip = ImagePublisher('active_learn')
list keys = locations['data']
tuple kfe = KinectFeatureExtractor()
tuple learner = SVMPCA_ActiveLearner(use_pca=True)
mode = opt.mode
tuple neg_to_pos_ratio = float(nneg)
int NEGATIVE = 0
tuple nneg = np.sum(dataset.outputs == NEGATIVE)
tuple npos = np.sum(dataset.outputs == POSITIVE)
tuple p = optparse.OptionParser()
tuple picked_i = int(raw_input('pick a key to use'))
tuple points3d = np.matrix(res.hist.points3d)
float POSITIVE = 1.0
tuple req = fsrv.FPFHCalcRequest()
tuple res = fpfh(req)
tuple results = np.matrix(learner.classify(sdset))
tuple s
list seed_dset = keys[i]
trained = False
float UNLABELED = 2.0
string weight_balance = ' -w0 1 -w1 %.2f'

## Function Documentation

 def hai_sandbox.recognize_3d.confusion_matrix ( true_labels, predicted )

Definition at line 44 of file recognize_3d.py.

 def
hai_sandbox.recognize_3d.dataset_to_libsvm ( dataset, filename )\n\nDefinition at line 125 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.draw_dataset ( dataset, img, scale = `1.`, size = `2`, scan_id = `None` )\n\nDefinition at line 153 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.draw_labeled_points ( image, dataset, pos_color = `[255`, neg_color = `[0`, scale = `1.` )\n\nDefinition at line 140 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.draw_points ( img, img_pts, color, size = `1`, thickness = `-1` )\n\nDefinition at line 148 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.find_max_in_density ( locs2d )\n\nDefinition at line 700 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.insert_folder_name ( apath, folder_name )\n\nDefinition at line 98 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.instance_to_image ( win_size, instance, min_val, max_val )\n\nDefinition at line 79 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.instances_to_image ( win_size, instances, min_val, max_val )\n\nDefinition at line 73 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.inverse_indices ( indices_exclude, num_elements )\n\nDefinition at line 170 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.load_data_from_file2 ( fname, rec_param )\n\nDefinition at line 102 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.make_point_exclusion_test_set ( training_dataset, all_data_dir, ext )\n\nDefinition at line 217 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.preprocess_data_in_dir ( dirname, ext )\n\nDefinition at line 205 of file recognize_3d.py.\n\n def hai_sandbox.recognize_3d.preprocess_scan_extract_features ( raw_data_fname, ext )\n\nDefinition at line 176 of file recognize_3d.py.\n\n## Variable Documentation\n\n tuple hai_sandbox::recognize_3d::current_scan_pred = InterestPointDataset(xs, results, locs2d, locs3d, None)\n\nDefinition at line 2108 of file 
recognize_3d.py.\n\nDefinition at line 2085 of file recognize_3d.py.\n\n string hai_sandbox::recognize_3d::dest = 'mode'\n\nDefinition at line 1962 of file recognize_3d.py.\n\nDefinition at line 2010 of file recognize_3d.py.\n\n list hai_sandbox::recognize_3d::fname = raw_input('pick a file name')\n\nDefinition at line 2009 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::fp = FiducialPicker(args)\n\nDefinition at line 1983 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::fpfh = rospy.ServiceProxy('fpfh', fsrv.FPFHCalc)\n\nDefinition at line 2054 of file recognize_3d.py.\n\n string hai_sandbox::recognize_3d::help = 'fiducialpicker, preprocess, or label'\n\nDefinition at line 1963 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::histogram = np.matrix(res.hist.histograms)\n\nDefinition at line 2069 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::img = cv.CloneMat(cdisp['cv'])\n\nDefinition at line 2111 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::ip = ImagePublisher('active_learn')\n\nDefinition at line 2096 of file recognize_3d.py.\n\nDefinition at line 2004 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::kfe = KinectFeatureExtractor()\n\nDefinition at line 2082 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::learner = SVMPCA_ActiveLearner(use_pca=True)\n\nDefinition at line 2094 of file recognize_3d.py.\n\nDefinition at line 2003 of file recognize_3d.py.\n\nDefinition at line 1975 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::neg_to_pos_ratio = float(nneg)\n\nDefinition at line 2092 of file recognize_3d.py.\n\nDefinition at line 42 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::nneg = np.sum(dataset.outputs == NEGATIVE)\n\nDefinition at line 2086 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::npos = np.sum(dataset.outputs == POSITIVE)\n\nDefinition at line 2087 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::p 
= optparse.OptionParser()\n\nDefinition at line 1960 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::picked_i = int(raw_input('pick a key to use'))\n\nDefinition at line 2007 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::points3d = np.matrix(res.hist.points3d)\n\nDefinition at line 2070 of file recognize_3d.py.\n\n float hai_sandbox::recognize_3d::POSITIVE = 1.0\n\nDefinition at line 41 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::req = fsrv.FPFHCalcRequest()\n\nDefinition at line 2059 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::res = fpfh(req)\n\nDefinition at line 2064 of file recognize_3d.py.\n\n tuple hai_sandbox::recognize_3d::results = np.matrix(learner.classify(sdset))\n\nDefinition at line 2107 of file recognize_3d.py.\n\nInitial value:\n```00001 ScanLabeler(args, ext='_features_df2_dict.pkl', scan_to_train_on=opt.train,\n00002 seed_dset=opt.seed, features_to_use=opt.feature)\n```\n\nDefinition at line 2020 of file recognize_3d.py.\n\nDefinition at line 2008 of file recognize_3d.py.\n\nDefinition at line 2095 of file recognize_3d.py.\n\n float hai_sandbox::recognize_3d::UNLABELED = 2.0\n\nDefinition at line 40 of file recognize_3d.py.\n\n string hai_sandbox::recognize_3d::weight_balance = ' -w0 1 -w1 %.2f'\n\nDefinition at line 2093 of file recognize_3d.py.\n\nhai_sandbox\nAuthor(s): Hai Nguyen\nautogenerated on Wed Nov 27 2013 11:46:56" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5480068,"math_prob":0.51553303,"size":7183,"snap":"2022-27-2022-33","text_gpt3_token_len":2258,"char_repetition_ratio":0.3318011,"word_repetition_ratio":0.049006622,"special_character_ratio":0.2895726,"punctuation_ratio":0.26105088,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849128,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T23:36:57Z\",\"WARC-Record-ID\":\"<urn:uuid:0fae2c0a-5761-4f6d-8f3f-a200b43be30b>\",\"Content-Length\":\"68860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ea9b8a8-44e3-4b50-b2e9-5b780d63a370>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3bf437b-8822-4f0a-8709-66a1791398f2>\",\"WARC-IP-Address\":\"140.211.9.98\",\"WARC-Target-URI\":\"http://docs.ros.org/en/fuerte/api/hai_sandbox/html/namespacehai__sandbox_1_1recognize__3d.html\",\"WARC-Payload-Digest\":\"sha1:737G22HUEJDKSPVRGLHBDIPGXUGOJXER\",\"WARC-Block-Digest\":\"sha1:ORQSHHXG534S4PFP7DDESZHIFLHSWHX4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571847.45_warc_CC-MAIN-20220812230927-20220813020927-00473.warc.gz\"}"}
https://forums.developer.nvidia.com/t/is-the-following-feasible-using-cuda-and-a-gpu/57315
[ "", null, "# Is the following feasible using CUDA and a GPU\n\nI’m unfamiliar with CUDA but have a problem which I think may be suited to it. Before spending a lot of time reading documentation and spending money on hardware, please could you tell me if the following is feasible and sensible.\n\n``````load 5000 1000 * 1000 matrices of bytes from the host onto the GPU // assume there is sufficient memory on the GPU\nrepeat {\non the host calculate the index of a matrix on the GPU\nselect the matrix on the GPU\nform a matrix of floats on the GPU from the selected matrix of bytes using float = toInt(byte) * 256.0\nrepeat {\ncalculate a vector on host\ncopy vector to GPU\nmultiply vector by float matrix\ncopy result back to host\n} until some condition is satisfied on the host\n} until some condition is satisfied on the host\n``````\n\nI’d prefer to avoid C/C++ on the host if possible. Python would be fine.\n\nIt should be feasible. It’s hard to avoid some amount of C/C++ if you want fast CUDA code, but pycuda allows you to write most of the host code in python while just writing a kernel in CUDA C++. numba would allow you to actually write the kernel code in python, but it will have more limited flexibility compared to CUDA C++.\n\nThanks!" ]
[ null, "https://aws1.discourse-cdn.com/nvidia/original/3X/a/1/a1ef6e0c1fbd3fad5bf82538b78dfaa9c5fa1a61.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9083846,"math_prob":0.79388976,"size":823,"snap":"2021-04-2021-17","text_gpt3_token_len":188,"char_repetition_ratio":0.15262516,"word_repetition_ratio":0.077922076,"special_character_ratio":0.2345079,"punctuation_ratio":0.025157232,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.957584,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-15T14:23:31Z\",\"WARC-Record-ID\":\"<urn:uuid:8a377115-9a21-4c97-8a25-29d6d006a8dd>\",\"Content-Length\":\"22203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54802336-71f9-4673-8f65-23cfe96cb869>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f32fc55-9bc8-4841-8877-8fcff1ca0041>\",\"WARC-IP-Address\":\"64.62.250.111\",\"WARC-Target-URI\":\"https://forums.developer.nvidia.com/t/is-the-following-feasible-using-cuda-and-a-gpu/57315\",\"WARC-Payload-Digest\":\"sha1:LB72X66CCF3YBOKMBDOYCZE5F2AEJCD3\",\"WARC-Block-Digest\":\"sha1:QLP2HHIW3ERVFAUIFQK7Z7TAZAHBBD5N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703495901.0_warc_CC-MAIN-20210115134101-20210115164101-00246.warc.gz\"}"}
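The pseudocode in the question above maps naturally onto a host loop around a device matrix-vector product. Purely to make that control flow concrete, here is a CPU-only NumPy mock-up (tiny made-up sizes, fixed trip counts standing in for the stop conditions, and no real GPU work; with pycuda the byte-to-float conversion and the matrix-vector product would run as kernels on the device):

```python
import numpy as np

# Stand-in sizes for the real problem (5000 matrices of 1000x1000 bytes).
N_MATS, DIM = 4, 8
rng = np.random.default_rng(0)

# The "GPU-resident" byte matrices, loaded once up front.
byte_mats = rng.integers(0, 256, size=(N_MATS, DIM, DIM), dtype=np.uint8)

def select_and_convert(index):
    # float = toInt(byte) * 256.0, as in the pseudocode
    return byte_mats[index].astype(np.float64) * 256.0

results = []
for index in range(N_MATS):             # host picks which matrix to use
    fmat = select_and_convert(index)    # would be a small kernel on the GPU
    for _ in range(3):                  # stand-in for "until some condition"
        vec = rng.standard_normal(DIM)  # vector calculated on the host
        results.append(fmat @ vec)      # device mat-vec, result copied back

print(len(results), results[0].shape)
```

The per-iteration host-device copies of the vector and result are small, so as the reply suggests, only the matrix product itself really needs to be a CUDA kernel.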
https://failuretoconverge.com/2013/02/14/picking-one-observation-per-subject-based-on-max-or-min/
[ "# Picking one observation per ‘subject’ based on max (or min)…there can be only one!", null, "Today, I came across a post from the ‘What you’re doing is rather desperate’ blog that dealt with a common issue, and something that I deal with on (almost) a daily basis. It is, in fact, so common an issue that I have a script that does all the work for me and it was good diving back in for a refresh of something I wrote quite a bit ago.\n\nN.Saunders posts a much cleaner solution than mine, but mine avoids the issues that can arise when you have non-unique values as maximums (or minimums). Plus my solution avoids the use of the merge() function which, in my experience, can sometimes be a memory and time hog. See below for my take on solving his issue.\n\n```## First let's create some data (and inject some gremlins)\ndf.orig <- data.frame(vars = rep(LETTERS[1:5], 2), obs1 = c(1:10), obs2 = c(11:20))\ndf.orig <- rbind(df.orig, data.frame(vars = 'A', obs1 = '6', obs2 = '15')) ## create some ties\ndf.orig <- rbind(df.orig, data.frame(vars = 'A', obs1 = '6', obs2 = '16')) ## more ties\n\ndf.orig <- df.orig[order(df.orig\\$vars, df.orig\\$obs1, df.orig\\$obs2),] ## my solution requires that you order your data first\nrow.names(df.orig) <- seq(1,nrow(df.orig)) ## since the row.names get scrambled by the order() function we need to re-establish some neatness\nx1 <- match(df.orig\\$vars, df.orig\\$vars)\nindex <- as.numeric(tapply(row.names(df.orig), x1, FUN=tail, n=1)) ## here's where the magic happens\ndf.max <- df.orig[index,]\n```" ]
[ null, "https://failuretoconverge.files.wordpress.com/2013/02/thehighlander.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8527407,"math_prob":0.9463125,"size":1552,"snap":"2021-43-2021-49","text_gpt3_token_len":414,"char_repetition_ratio":0.12209302,"word_repetition_ratio":0.056,"special_character_ratio":0.29123712,"punctuation_ratio":0.16954023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9806041,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T21:29:14Z\",\"WARC-Record-ID\":\"<urn:uuid:6930570b-b01d-4b38-9a4e-b784d12eee19>\",\"Content-Length\":\"96410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5daa2d2b-d311-4c95-aaf9-4669f8684d2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8b90154-e896-4a73-8342-cc118db1bf55>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://failuretoconverge.com/2013/02/14/picking-one-observation-per-subject-based-on-max-or-min/\",\"WARC-Payload-Digest\":\"sha1:H322WZMKA26UH3LI3HTHEZHZSHIIHE4N\",\"WARC-Block-Digest\":\"sha1:MJQEFEH36SPX7PVONYWBGM532T3OMQDY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585183.47_warc_CC-MAIN-20211017210244-20211018000244-00533.warc.gz\"}"}
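For comparison only (this translation is mine, not part of the original post), the same idea (sort first, then keep the last row of each group so that ties collapse to a single row) carries over directly to Python/pandas:

```python
import pandas as pd

# Rebuild the example data, including the injected ties for subject 'A'
df = pd.DataFrame({
    "vars": list("ABCDE") * 2 + ["A", "A"],
    "obs1": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 6, 6],
    "obs2": [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 15, 16],
})

# Order the data, then take the last row within each group: one row per
# subject, with ties on obs1 broken by obs2.
df_max = (df.sort_values(["vars", "obs1", "obs2"])
            .groupby("vars")
            .tail(1)
            .reset_index(drop=True))
print(df_max)
```

As in the R version, a tied maximum resolves to exactly one row (the last one in sort order), which is the whole point of the post's title.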
https://kr.mathworks.com/help/comm/ref/comm.dpd-system-object.html
[ "# comm.DPD\n\nDigital predistorter\n\n## Description\n\nThe `comm.DPD` System object™ applies digital predistortion (DPD) to a complex baseband signal by using a memory polynomial to compensate for nonlinearities in a power amplifier. For more information, see Digital Predistortion.\n\nTo predistort signals:\n\n1. Create the `comm.DPD` object and set its properties.\n\n2. Call the object with arguments, as if it were a function.\n\n## Creation\n\n### Syntax\n\n``dpd = comm.DPD``\n``dpd = comm.DPD(Name,Value)``\n\n### Description\n\n````dpd = comm.DPD` creates a digital predistorter System object to predistort a signal.```\n\nexample\n\n````dpd = comm.DPD(Name,Value)` sets properties using one or more name-value pairs. For example, `comm.DPD('PolynomialType','Cross-term memory polynomial')` configures the predistorter System object to predistort the input signal by using a memory polynomial with cross terms. Enclose each property name in quotes.```\n\n## Properties\n\nexpand all\n\nUnless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them.\n\nIf a property is tunable, you can change its value at any time.\n\nPolynomial type used for predistortion, specified as one of these values:\n\n• `'Memory polynomial'` — Predistorts the input signal by using a memory polynomial without cross terms.\n\n• `'Cross-term memory polynomial'` — Predistorts the input signal by using a memory polynomial with cross terms.\n\nMemory-polynomial coefficients, specified as a matrix. The number of rows in the matrix must equal the memory depth of the memory polynomial.\n\n• If `PolynomialType` is ```'Memory polynomial'```, the number of columns in the matrix is the degree of the memory polynomial.\n\n• If `PolynomialType` is ```'Cross-term memory polynomial'```, the number of columns in the matrix must equal m(n-1)+1. 
m is the memory depth of the polynomial, and n is the degree of the memory polynomial.\n\nData Types: `double`\nComplex Number Support: Yes\n\n## Usage\n\n### Syntax\n\n``out = dpd(in)``\n\n### Description\n\nexample\n\n````out = dpd(in)` predistorts a complex baseband signal by using a memory polynomial to compensate for nonlinearities in a power amplifier.```\n\n### Input Arguments\n\nexpand all\n\nInput baseband signal, specified as a column vector.\n\nData Types: `double`\nComplex Number Support: Yes\n\n### Output Arguments\n\nexpand all\n\nPredistorted baseband signal, returned as a column vector of the same length as the input signal.\n\n## Object Functions\n\nTo use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax:\n\n`release(obj)`\n\nexpand all\n\n `step` Run System object algorithm `release` Release resources and allow changes to System object property values and input characteristics `reset` Reset internal states of System object\n\n## Examples\n\ncollapse all\n\nApply digital predistortion (DPD) to a power amplifier input signal. The DPD coefficient estimator System object uses a captured signal containing power amplifier input and output signals to determine the predistortion coefficient matrix.\n\nLoad a file containing the input and output signals for the power amplifier.\n\n`load('commpowamp_dpd_data.mat','PA_input','PA_output')`\n\nGenerate a DPD coefficient estimator System object and a raised cosine transmit filter System object.\n\n```estimator = comm.DPDCoefficientEstimator( ... 'DesiredAmplitudeGaindB',10, ... 'PolynomialType','Memory polynomial', ... 
'Degree',5,'MemoryDepth',3,'Algorithm','Least squares'); rctFilt = comm.RaisedCosineTransmitFilter('OutputSamplesPerSymbol',2);```\n\nEstimate the digital predistortion memory-polynomial coefficients.\n\n`coef = estimator(PA_input,PA_output);`\n\nGenerate a DPD System object using `coef`, the estimated coefficients output from the DPD coefficient estimator, as the coefficient matrix.\n\n```dpd = comm.DPD('PolynomialType','Memory polynomial', ... 'Coefficients',coef);```\n\nGenerate 2000 random symbols and apply 16-QAM modulation to the signal. Apply raised cosine transmit filtering to the modulated signal.\n\n```s = randi([0,15],2000,1); u = qammod(s,16); x = rctFilt(u);```\n\nApply digital predistortion to the data. The DPD System object returns a predistorted signal to provide as input to the power amplifier.\n\n`y = dpd(x);`\n\nThis example shows the format of the coefficient matrix for the DPD memory polynomial by using a randomly generated coefficient matrix. The example:\n\n• Creates a digital predistortion System object configured using a memory polynomial coefficient matrix with the memory depth set to `3` and the polynomial degree set to `5`, consisting of random values.\n\n• Predistorts a signal using the memory-polynomial coefficient matrix.\n\n• Compares one predistorted output element to the value computed manually from the corresponding input elements and the memory-polynomial coefficient matrix.\n\nCreate a coefficient matrix representing a predistorter with the output equal to the input by generating a 3-by-5 coefficient matrix of zeros and setting the `coef(1,1)` element to `1`.
Add small random complex nonlinear terms to the coefficient matrix.\n\n```coef = zeros(3,5); coef(1,1) = 1; coef = coef + 0.01*(randn(3,5)+1j*randn(3,5));```\n\nCreate a DPD System object using the memory polynomial coefficient matrix, `coef`.\n\n`dpd = comm.DPD('PolynomialType','Memory polynomial','Coefficients',coef);`\n\nGenerate an input signal and predistort it using the `dpd` System object.\n\n```x = randn(20,1) + 1j*randn(20,1); y = dpd(x);```\n\nCompare the predistorted output element `y(18)` to the value computed manually from the input and the coefficient matrix, to show how the coefficient matrix is used to calculate that particular output value.\n\n```u = x(18:-1:(18-3+1)); isequal(y(18),sum(sum(coef.*[u u.*abs(u) u.*(abs(u).^2) u.*(abs(u).^3) u.*(abs(u).^4)])))```\n```ans = logical 1 ```\n\nThis example shows the format of the coefficient matrix for the DPD cross-term memory polynomial by using a randomly generated coefficient matrix. The example:\n\n• Creates a digital predistorter System object configured using a cross-term memory polynomial coefficient matrix with the memory depth set to `3` and the polynomial degree set to `5`, consisting of random values.\n\n• Predistorts a signal using the cross-term memory polynomial coefficient matrix.\n\n• Compares one predistorted output element to the value computed manually from the corresponding input elements and the cross-term memory polynomial coefficient matrix.\n\nCreate a coefficient matrix representing a predistorter with the output equal to the input by generating a 3-by-13 coefficient matrix of zeros and setting the `coef(1,1)` element to `1`.
Add small random complex nonlinear terms to the coefficient matrix.\n\n```coef = zeros(3,3*(5-1)+1); coef(1,1) = 1; coef = coef + 0.01*(randn(3,13) + 1j*randn(3,13));```\n\nCreate a DPD System object using the cross-term memory polynomial coefficient matrix, `coef`.\n\n`dpd = comm.DPD('PolynomialType','Cross-term memory polynomial','Coefficients',coef);`\n\nGenerate an input signal and predistort it using the `dpd` System object.\n\n```x = randn(20,1) + 1j*randn(20,1); y = dpd(x);```\n\nCompare the predistorted output element `y(18)` to the value computed manually from the input and the coefficient matrix, to show how the coefficient matrix is used to calculate that particular output value.\n\n```u = x(18:-1:(18-3+1)); isequal(y(18),sum(sum(coef.*[u u*abs(u.') u*(abs(u.').^2) u*(abs(u.').^3) u*(abs(u.').^4)])))```\n```ans = logical 1 ```\n\nexpand all\n\n Morgan, Dennis R., Zhengxiang Ma, Jaehyeong Kim, Michael G. Zierdt, and John Pastalan. \"A Generalized Memory Polynomial Model for Digital Predistortion of Power Amplifiers.\" IEEE® Transactions on Signal Processing. Vol. 54, Number 10, October 2006, pp. 3852–3860.\n\n M. Schetzen. The Volterra and Wiener Theories of Nonlinear Systems. New York: Wiley, 1980." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67170745,"math_prob":0.9734768,"size":1391,"snap":"2021-31-2021-39","text_gpt3_token_len":349,"char_repetition_ratio":0.15068494,"word_repetition_ratio":0.029850746,"special_character_ratio":0.21567218,"punctuation_ratio":0.18490566,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99774706,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T16:14:28Z\",\"WARC-Record-ID\":\"<urn:uuid:9313b613-0bea-4542-921d-bd52c21695d8>\",\"Content-Length\":\"114761\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78a77806-ed6e-438f-a707-12db8d7fce60>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c4c2047-1c3e-4d65-8fd4-ffd0f1b4a4af>\",\"WARC-IP-Address\":\"23.219.12.52\",\"WARC-Target-URI\":\"https://kr.mathworks.com/help/comm/ref/comm.dpd-system-object.html\",\"WARC-Payload-Digest\":\"sha1:JXNGHN4QEJ663J6DFSZ5CEYIE5YUW6X7\",\"WARC-Block-Digest\":\"sha1:TXGBKPIOCGIFHFDGOQ2DXEZEUEYO55NW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046149929.88_warc_CC-MAIN-20210723143921-20210723173921-00345.warc.gz\"}"}
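As a reading aid for the identity checks above (my sketch, not MathWorks code): a memory polynomial without cross terms computes y(n) as a sum over memory depth m and degree k of coef(m+1,k+1) * x(n-m) * |x(n-m)|^k, so a coefficient matrix that is zero except for coef(1,1)=1 passes the signal through unchanged:

```python
import numpy as np

def memory_polynomial(x, coef):
    """y[n] = sum_m sum_k coef[m,k] * x[n-m] * |x[n-m]|**k (no cross terms)."""
    depth, degree = coef.shape
    y = np.zeros(len(x), dtype=complex)
    for n in range(len(x)):
        for m in range(min(depth, n + 1)):  # skip taps before the signal start
            u = x[n - m]
            for k in range(degree):
                y[n] += coef[m, k] * u * abs(u) ** k
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(20) + 1j * rng.standard_normal(20)

coef = np.zeros((3, 5), dtype=complex)
coef[0, 0] = 1.0                     # identity predistorter, as in the examples
y = memory_polynomial(x, coef)
print(np.allclose(y, x))             # True
```

Perturbing `coef` with small complex entries, as the MATLAB examples do, then adds weak nonlinear and memory terms on top of this pass-through.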
http://medical-site.info/physics/giancoli-physics-principles-with-applications-7th-edition-pdf-25273.php
[ "# Giancoli physics principles with applications 7th edition pdf\n\nGiancoli – Physics Principles with Applications 7th c txtbk. 1, Pages· · MB·44, Physics Solutions Douglas C Giancoli 6Th medical-site.info Giancoli – Physics Principles with Applications 7th c txtbk. 1, Pages· · Electronic Principles 7th edition by Albert Malvino and David Bates . physics. Giancoli's text is a trusted classic, known for its elegant writing, clear Physics: Principles with Applications 7th Edition Pdf By Douglas C. Giancoli.\n\n Author: JANEY THORNER Language: English, Spanish, Arabic Country: Uzbekistan Genre: Environment Pages: 763 Published (Last): 14.03.2016 ISBN: 672-8-65157-884-4 Distribution: Free* [*Register to download] Uploaded by: TINISHA", null, "physics. Giancoli's text is a trusted classic, known for its elegant writing, clear Physics: Principles with Applications 7th Edition Pdf By Douglas C. Giancoli. by. Physics: Principles with Applications 7th Edition Pdf By Douglas C. Giancoli Using concrete observations and adventures you can relate to, the text includes a . Physics: Principles with Applications (7th Edition) Pdf Physics: Principles with Applications. - Kindle edition by Douglas C. Giancoli. Download it once and read it.\n\nSkip to main content. Chapter 2 Describing Motion: Kinematics in One Dimension 59 solutions. Chapter 3 Kinematics in Two Dimensions; Vectors 51 solutions. Chapter 4 Dynamics:\n\nChapter 9 Static Equilibrium; Elasticity and Fracture 58 solutions. Chapter 10 Fluids 72 solutions. Chapter 11 Vibration and Waves 61 solutions. Chapter 12 Sound 70 solutions. Chapter 13 Temperature and Kinetic Theory 69 solutions.", null, "Chapter 14 Heat 45 solutions. Chapter 15 The Laws of Thermodynamics 55 solutions. Chapter 16 Electric Charge and Electric Field 43 solutions. Chapter 17 Electric Potential 67 solutions. Chapter 18 Electric Currents 60 solutions. Chapter 19 DC Circuits 66 solutions. Chapter 20 Magnetism 63 solutions. 
Chapter 21 Electromagnetic Induction and Faraday's Law 73 solutions. Chapter 22 Electromagnetic Waves 43 solutions.\n\nChapter 23 Light: Geometric Optics 68 solutions. Chapter 24 The Wave Nature of Light 72 solutions. C is traveling at 2. D is decreasing its velocity by 2. Which statement concerning its acceleration must be correct? A Its acceleration is in the -x direction.\n\nB Its acceleration is zero. C Its acceleration is decreasing in magnitude as the car slows down. A Its acceleration is positive.\n\nB Its acceleration is decreasing in magnitude as the car slows down. C Its acceleration is negative. D Its acceleration is zero. A The acceleration is constantly increasing. B The acceleration is a constant non-zero value. C The acceleration is constantly decreasing. D The acceleration is equal to zero. A The acceleration could be positive. B The acceleration could be negative. C The acceleration must be zero.\n\nD The acceleration could be zero. A This can only occur if there is no acceleration. B This can occur only when the velocity is zero. C The acceleration must be constantly increasing.\n\nD The acceleration is constant.\n\nE The acceleration must be constantly decreasing. This track has markers spaced at equal distances along it from the start, as shown in the figure.\n\n## Choose a 7th Edition chapter\n\nWhich one of the following statements about this rock while it is in the air is correct? A Throughout the motion, the acceleration is downward, and the velocity is always in the same direction as the acceleration. B On the way up, its acceleration is downward and its velocity is upward, and at the highest point both its velocity and acceleration are zero. C On the way down, both its velocity and acceleration are downward, and at the highest point both its velocity and acceleration are zero.\n\nD The acceleration is downward at all points in the motion except that is zero at the highest point. E The acceleration is downward at all points in the motion. 
What is its acceleration just before it reaches its highest point? A slightly less than g B zero C exactly g D slightly greater than g. Which of the following statements about the direction of the velocity and acceleration of the ball as it is going up is correct?\n\nA Both its velocity and its acceleration points downward. B Its velocity points downward and its acceleration points upward. C Its velocity points upward and its acceleration points downward. D Both its velocity and its acceleration point upward. After it has been released, which statement s concerning its acceleration is correct? A Its acceleration is zero. B Its acceleration is constantly increasing. C Its acceleration is constant.\n\nD Its acceleration is constantly decreasing. E Its acceleration is greater than g. If the kg rock reaches a maximum height h, what maximum height will the kg ball reach? If it takes the kg rock a time T to reach the ground, what time will it take the kg rock to reach the ground?\n\n## Giancoli Books\n\nIf the kg rock falls with acceleration a, what is the acceleration of the kg rock? Air resistance is negligible. During the time that both objects continue to fall, their separation A decreases. B decreases at first, but then stays constant. C increases at first, but then stays constant. D stays constant. E increases. When they reach the ground below A the green ball will be moving faster than the blue ball.\n\nB the two balls will have the same speed. C the blue ball will be moving faster than the green ball. One second later, ball B is dropped from the same building. Neglect air resistance. As time progresses, the difference in their speeds. A decreases. B remains constant. C increases. D cannot be determined from the information given.\n\nOne is thrown up, and the other is thrown down, both with the same initial speed. What are their speeds when they hit the street? A The one thrown down is traveling faster. B They are traveling at the same speed. 
C The one thrown up is traveling faster. D It is impossible to tell because the height of the building is not given. Brick B is thrown straight down from the same building, and neither one experiences appreciable air resistance.\n\nWhich statement about their accelerations is correct? A The acceleration of A is greater than the acceleration of B.\n\nB The acceleration of B is greater than the acceleration of A. C Neither brick has any acceleration once it is released. D The two bricks have exactly the same acceleration. The position versus time graph of this object is A a horizontal straight line. B a vertical straight line. C a straight line making an angle with the time axis. D a parabolic curve. The velocity versus time graph of this object is A a horizontal straight line.\n\nB moving with constant non-zero acceleration. C at rest. D moving with increasing speed. D moving with increasing acceleration. A The truck will not have moved. B They will have traveled the same distance. C The car will have travelled further than the truck.\n\nD The truck will have travelled further than the car. A only graph a B only graph b C graphs b and c D graphs a and b E graphs c and d 46 The figure shows a graph of the position x of two cars, C and D, as a function of time t. According to this graph, which statements about these cars must be true?\n\nA The magnitude of the acceleration of car C is greater than the magnitude of the acceleration of car D. B The magnitude of the acceleration of car C is less than the magnitude of the acceleration of car D. E Both cars have the same acceleration. Write the word or phrase that best completes each statement or answers the question.\n\nThe letters 47 H-L represent particular moments of time. If we take upward as the positive direction, which of the graphs shown below best represents the velocity of the stone as a function of time?
If we take upward as the positive direction, which of the graphs shown below best represents the acceleration of the stone as a function of time?\n\nOver the nine-second interval shown, we can say that the speed of the particle A only decreases. B decreases and then increases. C remains constant. D increases and then decreases. E only increases. Find both the distance it has traveled and 53 the magnitude of its displacement.\n\nThe speed of light is 3. How many miles are there in one light-year? It travels 1. This trip takes 45 min. What was the bear's average speed? What was the bear's average velocity? You arrive home after driving 4 hours and 15 minutes. How far is your hometown from school? What is the average speed of the motorist for this trip? What is her average speed for the trip? What is the average speed for the trip?\n\n## Choose a 7th Edition chapter\n\nUsing SI units 65 a what is its average speed for the ten laps?", null, "How many milliseconds after emitting the shriek does the bat hear the reflected echo from the wall? Choose the one alternative that best completes the statement or answers the question. She completes one lap in seconds. What is her average velocity? What is her average speed? Along the way you plan to stop for dinner. For the first 90 miles she drives at a constant speed of 30 mph. At what constant speed must she drive the remaining distance if her average speed for the total trip is to be 40 mph?\n\nHow much further in feet would a drunk driver's car travel before he hits the brakes than a sober driver's car?\n\n## Choose a 7th Edition chapter | Giancoli Answers\n\nAssume that both are initially traveling at Arthur has a speed of 3. How long does it take for them to.", null, "What is its average acceleration? Light travels at 3. Find 78 the magnitude and direction of the car's average acceleration. The collision takes 20 ms. What is the average acceleration of the ball during the collision with the wall? It travels a distance of 1.
The same car can come to a full stop from 85 that speed in 4. What is the ratio of the magnitude of the starting acceleration to the stopping acceleration?\n\nIt next maintains the velocity it has reached for 10 s. Then it slows down at a steady rate of 2. What is. It then speeds up with a constant acceleration of 2. At the end of this time, what is its velocity? What is the cart's displacement during the first 6.\n\nAssuming the acceleration is constant, how far did it travel during those 2. It then travels with constant speed it has achieved for another 10 s." ]
[ null, "https://cv01.twirpx.net/0942/0942068.jpg", null, "https://imgv2-1-f.scribdassets.com/img/document/15685931/149x198/879bf9b734/1538207182", null, "https://i.ebayimg.com/images/g/l-kAAOSwF2Fbw5U1/s-l300.jpg", null, "https://afghanhistory.info/wp-content/uploads/2018/12/douglas-c-giancoli-physics-principles-with-applications-pdf-fresh-physics-for-scientists-amp-engineers-with-modern-physics-pdf-images-of-douglas-c-giancoli-physics-principles-with-applicatio.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9328567,"math_prob":0.93482864,"size":9911,"snap":"2019-51-2020-05","text_gpt3_token_len":2129,"char_repetition_ratio":0.18613102,"word_repetition_ratio":0.12609456,"special_character_ratio":0.2158208,"punctuation_ratio":0.1172555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98719496,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T22:39:58Z\",\"WARC-Record-ID\":\"<urn:uuid:66fabf4e-120d-477e-82cd-81dc3e681437>\",\"Content-Length\":\"33822\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:24fab1d9-050a-4f1a-981d-7768d0071cb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4c09caf-c452-4f00-b438-038b1f775994>\",\"WARC-IP-Address\":\"104.31.86.183\",\"WARC-Target-URI\":\"http://medical-site.info/physics/giancoli-physics-principles-with-applications-7th-edition-pdf-25273.php\",\"WARC-Payload-Digest\":\"sha1:OFKENIXSXPN56VRM3VFGBVEJZKKJ3TYQ\",\"WARC-Block-Digest\":\"sha1:W2XNTC5KQXAB2L3P6ZHDLJIPPB4QLJIB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540491491.18_warc_CC-MAIN-20191206222837-20191207010837-00364.warc.gz\"}"}
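Several of the multiple-choice questions above rest on two facts: the free-fall acceleration g does not depend on mass, and a ball thrown upward at speed v0 passes the roof on the way down at that same v0, so balls thrown up and down with equal speeds strike the street equally fast. A quick numeric check (the height and speed values here are made up, not taken from the book):

```python
import math

g = 9.8     # m/s^2, magnitude of free-fall acceleration (mass-independent)
h = 45.0    # building height in m (made-up value)
v0 = 12.0   # initial throwing speed in m/s (made-up value)

# Speed at the ground from energy conservation: (1/2)v0^2 + g*h = (1/2)vf^2,
# the same whether the ball was thrown up or thrown down.
vf = math.sqrt(v0**2 + 2 * g * h)

# Kinematics cross-check for the ball thrown *up*: it rises v0^2/(2g) above
# the roof, momentarily stops, then falls h + v0^2/(2g) from rest.
rise = v0**2 / (2 * g)
vf_up = math.sqrt(2 * g * (h + rise))

print(round(vf, 3), round(vf_up, 3))  # same landing speed either way
```

Since neither expression contains a mass, the "kg rock" questions about fall time and landing speed have the same answer for any mass once air resistance is neglected.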
https://answers.everydaycalculation.com/add-fractions/75-60-plus-98-30
[ "Solutions by everydaycalculation.com\n\n1st number: 1 15/60, 2nd number: 3 8/30\n\n75/60 + 98/30 is 271/60.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 60 and 30 is 60\n2. For the 1st fraction, since 60 × 1 = 60,\n75/60 = 75 × 1/60 × 1 = 75/60\n3. Likewise, for the 2nd fraction, since 30 × 2 = 60,\n98/30 = 98 × 2/30 × 2 = 196/60\n4. Add the two like fractions:\n75/60 + 196/60 = (75 + 196)/60 = 271/60\n5. So, 75/60 + 98/30 = 271/60\nIn mixed form: 4 31/60\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84638286,"math_prob":0.9991135,"size":657,"snap":"2020-45-2020-50","text_gpt3_token_len":265,"char_repetition_ratio":0.14088821,"word_repetition_ratio":0.0,"special_character_ratio":0.51141554,"punctuation_ratio":0.105960265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971837,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T17:36:52Z\",\"WARC-Record-ID\":\"<urn:uuid:2751673f-a450-4df3-8c6c-c412863699eb>\",\"Content-Length\":\"7029\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a7eda4a-2302-4865-8919-c237d8a2d38a>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c0edf14-9ad5-4aef-b346-58f3109b8999>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/75-60-plus-98-30\",\"WARC-Payload-Digest\":\"sha1:JJT7GPBLGSDD5YUWVXDXYLK3NBLDE7IU\",\"WARC-Block-Digest\":\"sha1:5YKIK5G3YPIIRS5WIGKGFTADD5U3TG7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141176922.14_warc_CC-MAIN-20201124170142-20201124200142-00195.warc.gz\"}"}
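The worked steps above are easy to double-check with Python's standard fractions module (this check is mine, not part of the original page):

```python
from fractions import Fraction

# 75/60 + 98/30; Fraction normalizes automatically, so the sum comes out
# already reduced to lowest terms.
total = Fraction(75, 60) + Fraction(98, 30)
print(total)  # 271/60

# Mixed form: whole part plus the remaining proper fraction
whole, rem = divmod(total.numerator, total.denominator)
print(f"{whole} {rem}/{total.denominator}")  # 4 31/60
```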
https://autarkaw.org/2009/04/11/how-do-i-solve-a-nonlinear-equation-in-matlab/?shared=email&msg=fail
# How do I solve a nonlinear equation in MATLAB?

Many students ask me how do I do this or that in MATLAB. So I thought why not have a small series of my next few blogs do that. In this blog, I show you how to solve a nonlinear equation.

The MATLAB program link is here.

The HTML version of the MATLAB program is here.

%% HOW DO I DO THAT IN MATLAB SERIES?
% In this series, I am answering questions that students have asked
% me about MATLAB.  Most of the questions relate to a mathematical
% procedure.

%% TOPIC
% How do I solve a nonlinear equation?

%% SUMMARY
% Language : Matlab 2008a;
% Authors : Autar Kaw;
% Mfile available at
% http://numericalmethods.eng.usf.edu/blog/integration.m;
% Last Revised : March 28, 2009;
% Abstract: This program shows you how to solve a nonlinear equation.
clc
clear all

%% INTRODUCTION

disp('ABSTRACT')
disp('   This program shows you how to solve')
disp('   a nonlinear equation')
disp(' ')
disp('AUTHOR')
disp('   Autar K Kaw of https://autarkaw.wordpress.com')
disp(' ')
disp('MFILE SOURCE')
disp('   http://numericalmethods.eng.usf.edu/blog/nonlinearequation.m')
disp(' ')
disp('LAST REVISED')
disp('   April 11, 2009')
disp(' ')

%% INPUTS
% Solve the nonlinear equation x^3-15*x^2+47*x-33=0
% Define x as a symbol
syms x
% Assigning the left hand side of the equation f(x)=0
f=x^3-15*x^2+47*x-33;

%% DISPLAYING INPUTS

disp('INPUTS')
func=['  The equation to be solved is ' char(f), '=0'];
disp(func)
disp('  ')

%% THE CODE

% Finding the solution of the nonlinear equation
soln=solve(f,x);
solnvalue=double(soln);

%% DISPLAYING OUTPUTS

disp('OUTPUTS')
for i=1:1:length(solnvalue)
fprintf('\nThe solution# %g is %g',i,solnvalue(i))
end
disp('  ')

This post is brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://numericalmethods.eng.usf.edu, the textbook on Numerical Methods with Applications available from the lulu storefront, and the YouTube video lectures available at http://numericalmethods.eng.usf.edu/videos

## Author: Autar Kaw

Autar Kaw (http://autarkaw.com) is a Professor of Mechanical Engineering at the University of South Florida. He has been at USF since 1987, the same year in which he received his Ph. D. in Engineering Mechanics from Clemson University. He is a recipient of the 2012 U.S. Professor of the Year Award. With major funding from NSF, he is the principal and managing contributor in developing the multiple award-winning online open courseware for an undergraduate course in Numerical Methods. The OpenCourseWare (nm.MathForCollege.com) annually receives 1,000,000+ page views, 1,000,000+ views of the YouTube audiovisual lectures, and 150,000+ page views at the NumericalMethodsGuy blog. His current research interests include engineering education research methods, adaptive learning, open courseware, massive open online courses, flipped classrooms, and learning strategies. He has written four textbooks and 80 refereed technical papers, and his opinion editorials have appeared in the St. Petersburg Times and Tampa Tribune.

## 19 thoughts on “How do I solve a nonlinear equation in MATLAB?”

1. chris says:
   Nice …. and thnx….

2. chris says:
   Can u please tell me how i can solve a nonlinear equation with a set of constraints in matlab….
   thnx.
   chris

3. Vahid says:
   Hi
   Can you help me to solve this equation?
   h (d2h/dx2) = 4.822h - 1.5h^3 - 3.322
   Boundary conditions:
   X=0 h=0.73
   X=-2 dh/dx=0
   Regards,
   Vahid

4. siamak says:
   i need help with solving nonlinear system of three equations that repeated over 30000000 nodes see the number of nodes a lot and due to saving the function problem i cant use the fsolve please help me

5. siamak says:
   i need to solve system of three equations over a 30000000 node my problem is how to save them to use in fsolve you know the node number

6. somayeh says:
   ydot1 = 20*y(1)*((y(2)-30.6176)/30.6176)-0.03*y(1);
   ydot2 = -0.07*y(2)-(28.7776)*((1.4873)^t)*(y(2)^2)/y(1)^0.38-10*y(2)^2*(y(2)-30.6176)^2/30.6176;
   and initial conditions are y1(0)=1706115.9, y2(0)=30.75819596
   thanx alot

7. somayeh says:
   i used ode45 for solving it but the answer was NaN!!!!!

   1. Nnc says:
      I have a similar problem, I want to solve a non linear 2nd order system of 3 equations & when i use ode45 the answer is NAN :s. Have you solved yours ??

8. Nice blog!

9. to get detail knowledge about mathlab and soln with their quastions

10. Den says:
    Thanks man!! That really helped my work.

11. roshan says:
    i want to solve this four equations;
    x(3) - ((log(x(1)/ 65792043000 )) - ( 0.07+(0.5*(x(2)^2))))/x(2);
    x(4) - x(3) + x(2);
    8507000337 - x(1)*normcdf(x(3)) + 65792043000*exp(-0.07)*normcdf(x(4));
    (8507000337* 0.347089018056744) - x(1)*x(2)*normcdf(x(3))];
    can u solve?
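For readers without MATLAB's Symbolic Math Toolbox (which `solve` relies on), the cubic from the post can also be found numerically. The following plain-Python sketch is an editorial addition, with hand-picked sign-change brackets; the exact roots of x^3-15x^2+47x-33=0 happen to be 1, 3 and 11:

```python
def f(x):
    # left hand side of the equation from the post: x^3 - 15x^2 + 47x - 33 = 0
    return x**3 - 15*x**2 + 47*x - 33

def bisect(func, lo, hi, tol=1e-12):
    """Simple bisection; assumes func(lo) and func(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if func(lo) * func(mid) <= 0:
            hi = mid   # root lies in [lo, mid]
        else:
            lo = mid   # root lies in [mid, hi]
    return (lo + hi) / 2

# brackets chosen by checking signs: f(0)<0<f(2), f(2)>0>f(4), f(10)<0<f(12)
roots = [bisect(f, a, b) for a, b in [(0, 2), (2, 4), (10, 12)]]
print([round(r, 6) for r in roots])  # [1.0, 3.0, 11.0]
```

Bisection needs a bracketing interval per root, which is the price of avoiding symbolic computation; a polynomial-specific root finder would not need the brackets.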
https://aiida-kkr.readthedocs.io/_/downloads/en/latest/htmlzip/
# AiiDA plugin for the Jülich KKRcode

## Welcome to the documentation of the AiiDA plugin for the Jülich KKRcode!

The plugin is available at https://github.com/JuDFTteam/aiida-kkr

If you use this plugin for your research, please cite the following work:

Philipp Rüßmann, Fabian Bertoldo, and Stefan Blügel, The AiiDA-KKR plugin and its application to high-throughput impurity embedding into a topological insulator, arXiv:2003.08315 [cond-mat.mtrl-sci] (2020); https://arxiv.org/abs/2003.08315

Also please cite the original AiiDA paper:

Giovanni Pizzi, Andrea Cepellotti, Riccardo Sabatini, Nicola Marzari, and Boris Kozinsky, AiiDA: automated interactive infrastructure and database for computational science, Comp. Mat. Sci 111, 218-230 (2016); http://dx.doi.org/10.1016/j.commatsci.2015.09.013; http://www.aiida.net.

### Requirements

Once all requirements are installed you need to set up the computers and codes before you can submit KKR calculations using the aiida-kkr plugin.

#### User's guide

##### Calculations

Here the calculations of the aiida-kkr plugin are presented. It is assumed that the user already has basic knowledge of python, aiida (e.g. database structure, verdi commands, structure nodes) and KKR (e.g. LMAX cutoff, energy contour integration). Also, aiida-kkr should be installed and the Voronoi, KKR and KKRimp codes should already be configured.

In practice, the use of the workflows is more convenient, but here the most basic calculations, which are used underneath in the workflows, are introduced step by step.

In the following, the calculation plugins provided by aiida-kkr are introduced using the example of bulk Cu.

Note

```
from aiida import load_profile
load_profile()
```

This ensures that the aiida database is properly integrated.

Voronoi starting potential generator

The Voronoi code creates starting potentials for a KKR calculation and sets up the atom-centered division of space into voronoi cells. Corresponding shape functions are also created, which are needed for full-potential corrections.

The voronoi plugin is called `kkr.voro` and it has the following input and output nodes:

Three input nodes:
• `parameters` KKR parameter set for the Voronoi calculation (Dict)
• `structure` structure data node describing the crystal lattice (StructureData)
• `code` Voronoi code node (code)

Three output nodes:
• `remote_folder` (RemoteData)
• `retrieved` (FolderData)
• `output_parameters` (Dict)

Additional optional input nodes that trigger special behavior of a Voronoi calculation are:
• `parent_KKR` (RemoteData of a KKR calculation)
• `potential_overwrite` (SingleFileData)

Now the basic usage of the voronoi plugin is demonstrated for the example of bulk Cu, for which first the aiida structure node and the parameter node containing KKR-specific parameters (LMAX cutoff etc.) are created before a voronoi calculation is set up and submitted.

Input structure node

First we get the aiida StructureData class:

```
# get aiida StructureData class:
from aiida.plugins import DataFactory
StructureData = DataFactory('structure')
```

Then we create the aiida StructureData node (here for bulk Cu):

```
alat = 3.61 # lattice constant in Angstroem
bravais = [[0.5*alat, 0.5*alat, 0], [0.5*alat, 0, 0.5*alat], [0, 0.5*alat, 0.5*alat]] # Bravais matrix in Ang. units
# now create StructureData instance and set Bravais matrix and atom in unit cell
Cu = StructureData(cell=bravais)
Cu.append_atom(position=[0,0,0], symbols='Cu')
```

Input parameter node

Next we create an empty set of KKR parameters (LMAX cutoff etc.) for the voronoi code:

```
# load the kkrparams class, a useful tool to create the set of input parameters for the KKR family of calculations
from masci_tools.io.kkr_params import kkrparams
params = kkrparams(params_type='voronoi')
```

Note

We can find out which parameters are mandatory using `missing_params = params.get_missing_keys(use_aiida=True)`

and set at least the mandatory parameters:

```
params.set_multiple_values(LMAX=2, NSPIN=1, RCLUSTZ=2.3)
```

Finally we create an aiida Dict node and fill it with the dictionary of parameters:

```
Dict = DataFactory('dict') # use DataFactory to get the Dict class
ParaNode = Dict(dict=params.get_dict())
```

Submit calculation

Now we get the voronoi code:

```
from aiida.orm import Code # load aiida 'Code' class

codename = 'voronoi@localhost'
code = Code.get_from_string(codename)
```

Note

Make sure that the voronoi code is installed: `verdi code list` should give you a list of installed codes in which codename should appear.

and create a new process builder for a VoronoiCalculation:

```
builder = code.get_builder()
```

Note

This will already set `builder.code` to the voronoi code which we loaded above.

Then we set the resources that will be
used (here serial job) in the options dict of the metadata:

```
builder.metadata.options = {'resources': {'num_machines': 1, 'tot_num_mpiprocs': 1}}
```

Note

If you use a computer without a default queue you need to set the name of the queue as well: `builder.metadata.options['queue_name'] = 'th1'`

Then we set the structure and input parameters:

```
builder.structure = Cu
builder.parameters = ParaNode
```

Note

Additionally you could set the `parent_KKR` and `potential_overwrite` input nodes which trigger the special run modes of the voronoi code discussed below.

Now we are ready to submit the calculation:

```
from aiida.engine import submit
voro_calc = submit(builder)
```

Note

Check the calculation state (or use `verdi calculation list -a -p1`) using `voro_calc.process_state`

Voronoi calculation with the `parent_KKR` input node

To come …

Voronoi calculation with the `potential_overwrite` input node

To come …

KKR calculation for bulk and interfaces

A KKR calculation is provided by the `kkr.kkr` plugin, which has the following input and output nodes.

Three input nodes:
• `parameters` KKR parameter set fitting the requirements for a KKR calculation (Dict)
• `parent_folder` parent calculation remote folder node (RemoteData)
• `code` KKR code node (code)

Three output nodes:
• `remote_folder` (RemoteData)
• `retrieved` (FolderData)
• `output_parameters` (Dict)

Note

The parent calculation can be one of the following:

1. a Voronoi calculation, i.e. an initial calculation starting from a structure
2. a previous KKR calculation, e.g. a preconverged calculation

The necessary structure information is always extracted from the voronoi parent calculation. In case of a continued calculation the voronoi parent is recursively searched for.

Special features exist where a fourth input node is present and triggers special behavior of the KKR calculation:
• `impurity_info` node specifying the impurity cluster (Dict)
• `kpoints` node specifying the kpoints for which the bandstructure is supposed to be calculated (KpointsData)

The different possible modes to run a kkr calculation (start from a Voronoi calculation, continue from a previous KKR calculation, host Green function writeout) are demonstrated in the following.

Start KKR calculation from voronoi parent

Reuse the settings from the voronoi calculation:

```
voronoi_calc_folder = voro_calc.outputs.remote_folder
voro_params = voro_calc.inputs.parameters
```

Now we update the KKR parameter set to meet the requirements for a KKR calculation (slightly different from a voronoi calculation). Thus, we create a new set of parameters for a KKR calculation and fill in the values already set in the previous voronoi calculation:

```
# new kkrparams instance for KKR calculation
params = kkrparams(params_type='kkr', **voro_params.get_dict())

# set the missing values
params.set_multiple_values(RMAX=7., GMAX=65.)

# choose 20 simple mixing iterations first to preconverge potential (here 5% simple mixing)
params.set_multiple_values(NSTEPS=20, IMIX=0, STRMIX=0.05)

# create aiida Dict node from the KKR parameters
ParaNode = Dict(dict=params.get_dict())
```

Note

You can find out which parameters are missing for the KKR calculation using `params.get_missing_keys()`

Now we can get the KKR code, create a new calculation instance and set the input nodes accordingly:

```
code = Code.get_from_string('KKRcode@localhost')
builder = code.get_builder()

# set input parameters, parent calculation (previous voronoi calculation) and computer resources
builder.parameters = ParaNode
builder.parent_folder = voronoi_calc_folder
builder.metadata.options = {'resources': {'num_machines': 1, 'num_mpiprocs_per_machine': 1}}
```

We can then run the KKR calculation:

```
kkr_calc = submit(builder)
```

Continue KKR calculation from KKR parent calculation

First we create a new KKR calculation instance to continue KKR on top of a previous KKR calculation:

```
builder = code.get_builder()
```

Next we reuse the old KKR parameters and update the scf settings (default is NSTEPS=1, IMIX=0):

```
params.set_multiple_values(NSTEPS=50, IMIX=5)
```

and create the aiida Dict node:

```
ParaNode = Dict(dict=params.get_dict())
```

Then we set the input nodes for the calculation:

```
builder.parameters = ParaNode
kkr_calc_parent_folder = kkr_calc.outputs.remote_folder # parent remote folder of previous calculation
builder.parent_folder = kkr_calc_parent_folder
builder.metadata.options = {'resources': {'num_machines': 1, 'num_mpiprocs_per_machine': 1}}
```

store the input nodes and submit the calculation:

```
kkr_calc_continued = submit(builder)
```

The finished calculation should have an output node that can be accessed within python using `kkr_calc_continued.outputs.output_parameters.get_dict()`.
An excerpt of the output dictionary may look like this:

```
{u'alat_internal': 4.82381975,
 u'alat_internal_unit': u'a_Bohr',
 u'convergence_group': {
     u'calculation_converged': True,
     u'charge_neutrality': -1.1e-05,
     u'nsteps_exhausted': False,
     u'number_of_iterations': 47,
     u'rms': 6.4012e-08,
     ...},
 u'energy': -44965.5181266111,
 u'energy_unit': u'eV',
 u'fermi_energy': 0.6285993399,
 u'fermi_energy_units': u'Ry',
 u'nspin': 1,
 u'number_of_atoms_in_unit_cell': 1,
 u'parser_errors': [],
 ...
 u'warnings_group': {u'number_of_warnings': 0, u'warnings_list': []}}
```

Special run modes: host GF writeout (for KKRimp)

Here we take the remote folder of the converged calculation to reuse its settings and write out the Green function and t-matrix of the crystalline host system:

```
kkr_converged_parent_folder = kkr_calc_continued.outputs.remote_folder
```

Now we extract the parameters of the kkr calculation and add the `KKRFLEX` run option:

```
kkrcalc_converged = kkr_converged_parent_folder.get_incoming().first().node
kkr_params_dict = kkrcalc_converged.inputs.parameters.get_dict()
kkr_params_dict['RUNOPT'] = ['KKRFLEX']
```

The parameters dictionary is then passed to an aiida Dict node:

```
ParaNode = Dict(dict=kkr_params_dict)
```

Now we create a new KKR calculation and set the input nodes:

```
code = kkrcalc_converged.inputs.code # take the same code as in the calculation before
builder = code.get_builder()
resources = kkrcalc_converged.attributes['resources']
builder.metadata.options = {'resources': resources}
builder.parameters = ParaNode
builder.parent_folder = kkr_converged_parent_folder
# prepare impurity_info node containing the information about the impurity cluster
imp_info = Dict(dict={'Rcut': 1.01, 'ilayer_center': 0, 'Zimp': [79.]})
# set impurity info node to calculation
builder.impurity_info = imp_info
```

Note

The `impurity_info` node should be a Dict node and its dictionary should describe the impurity cluster using the following parameters:

• `ilayer_center` (int) layer index of the position in the unit cell that describes the center of the impurity cluster
• `Rcut` (float) cluster radius of the impurity cluster in units of the lattice constant
• `hcut` (float, optional) height of a cylindrical cluster with radius `Rcut`; if not given, a spherical cluster is taken
• `cylinder_orient` (list of 3 float values, optional) orientation vector of the cylinder axis
• `Zimp` (list of Nimp float entries) atomic charges of the substitutional impurities on the positions defined by `Rimp_rel`
• `Rimp_rel` (list of Nimp [float, float, float] entries, optional, defaults to [0,0,0] for a single impurity) cartesian positions of all Nimp impurities, relative to the center of the cluster (i.e. the position defined by `ilayer_center`)
• `imp_cls` (list of [float, float, float, int] entries, optional) full list of impurity cluster positions and layer indices (x, y, z, ilayer); overwrites the auto-generation using the `Rcut` and `hcut` settings

Warning

`imp_cls` functionality is not implemented yet

The calculation can then be submitted:

```
# submit calculation
GF_host_calc = submit(builder)
```

Once the calculation has finished, the retrieved folder should contain the `kkrflex_*` files needed for the impurity calculation.

Special run modes: bandstructure

Here we take the remote folder of the converged calculation and compute the bandstructure of the Cu bulk system. We reuse the DOS settings for the energy interval in which the bandstructure is computed from a previous calculation:

```
from aiida.orm import load_node
# kkr_calc_converged: the converged KKR calculation from above (e.g. kkr_calc_continued, or loaded via load_node)
# host_dos_calc: a previously finished host DOS calculation, loaded e.g. via load_node
```

Now we need to generate the kpoints node for the bandstructure calculation. This is done using aiida's `get_explicit_kpoints_path` function that extracts the kpoints along high-symmetry lines from a structure:

```
# first extract the structure node from the KKR parent calculation
from aiida_kkr.calculations.voro import VoronoiCalculation
struc, voro_parent = VoronoiCalculation.find_parent_structure(kkr_calc_converged.outputs.remote_folder)
# then create the KpointsData node
from aiida.tools.data.array.kpoints import get_explicit_kpoints_path
kpts = get_explicit_kpoints_path(struc).get('explicit_kpoints')
```

Warning

Note that the `get_explicit_kpoints_path` function returns kpoints for the primitive structure. In this example the input structure is already the primitive cell, but in general this may not always be the case.

Then we set the `kpoints` input node of a new KKR calculation and change some settings of the input parameters accordingly (i.e. an energy contour like in a DOS run):

```
# create bandstructure calculation reusing old settings (including the same computer and resources in this example)
kkrcode = kkr_calc_converged.inputs.code
builder = kkrcode.get_builder()
builder.kpoints = kpts # pass kpoints as input
builder.parent_folder = kkr_calc_converged.outputs.remote_folder
# change parameters to qdos settings (E range and number of points)
from masci_tools.io.kkr_params import kkrparams
qdos_params = kkrparams(**kkr_calc_converged.inputs.parameters.get_dict()) # reuse old settings
# reuse the same emin/emax settings as in the DOS run (extracted from its input parameter node)
qdos_params.set_multiple_values(EMIN=host_dos_calc.inputs.parameters.get_dict().get('EMIN'),
                                EMAX=host_dos_calc.inputs.parameters.get_dict().get('EMAX'),
                                NPT2=100)
builder.parameters = Dict(dict=qdos_params.get_dict())
```

The calculation is then ready to be submitted:

```
# submit calculation
kkrcalc = submit(builder)
```

The result of the calculation will then contain the `qdos.aa.s.dat` files in the retrieved node, where `aa` is
the atom index and `s` the spin index of all atoms in the unit cell. The resulting bandstructure (for the Cu bulk test system considered here) should look like this (see here for the plotting script):

[figure: bandstructure of bulk Cu]

Special run modes: Jij extraction

The extraction of exchange coupling parameters is triggered with the `XCPL` run option and needs at least the `JIJRAD` parameter to be set. Here we take the remote folder of the converged calculation and compute the exchange parameters:

```
from aiida.orm import load_node
```

Then we set the `XCPL` run option and the `JIJRAD` parameter (the `JIJRADXY`, `JIJSITEI` and `JIJSITEJ` parameters are not mandatory and are omitted in this example) in the input node of a new KKR calculation:

```
# create Jij calculation reusing old settings (including the same computer and resources in this example)
kkrcode = kkr_calc_converged.inputs.code
builder = kkrcode.get_builder()
builder.parent_folder = kkr_calc_converged.outputs.remote_folder
# change parameters to Jij settings ('XCPL' runopt and JIJRAD parameter)
from masci_tools.io.kkr_params import kkrparams
Jij_params = kkrparams(**kkr_calc_converged.inputs.parameters.get_dict()) # reuse old settings
# add 'XCPL' runopt to the list of runopts
runopts = Jij_params.get_value('RUNOPT')
runopts.append('XCPL ')
Jij_params.set_value('RUNOPT', runopts)
Jij_params.set_value('JIJRAD', 1.5)  # Jij cutoff radius in units of the lattice constant (example value)
# now use the updated parameters
builder.parameters = Dict(dict=Jij_params.get_dict())
```

The calculation is then ready to be submitted:

```
# submit calculation
kkrcalc = submit(builder)
```

The result of the calculation will then contain the `Jijatom.*` files in the retrieved node, as well as the `shells.dat` file which allows mapping the values of the exchange interaction to equivalent positions in the different shells.

KKR impurity calculation

Plugin: `kkr.kkrimp`

Four input nodes:
• `parameters`, optional: KKR parameter set fitting the requirements for a KKRimp calculation (Dict)
• Only one of:
  1. `impurity_potential`: starting potential for the impurity run (SingleFileData)
  2. `parent_folder`: previous KKRimp parent calculation folder (RemoteData)
• `code`: KKRimp code node (code)
• `host_Greenfunction_folder`: KKR parent calculation folder containing the writeout of the host's Green function files (RemoteData)

Note

If no `parameters` node is given, the default values are extracted from the `host_Greenfunction` calculation.

Three output nodes:
• `remote_folder` (RemoteData)
• `retrieved` (FolderData)
• `output_parameters` (Dict)

Create impurity potential

Now the starting potential for the impurity calculation needs to be generated. This means that we need to create an auxiliary structure which contains the impurity in the system where we want to embed it. Then we run a Voronoi calculation to create the starting potential. Here we use the example of an Au impurity embedded into bulk Cu.

The impurity code expects an aiida SingleFileData object that contains the impurity potential. This is finally constructed using the `neworder_potential_wf` workfunction from `aiida_kkr.tools.common_workfunctions`.

```
# use an aiida workfunction to keep track of the provenance
from aiida.engine import workfunction as wf

@wf
def change_struc_imp_aux_wf(struc, imp_info): # Note: works for single imp at center only!
    from aiida.common.constants import elements as PeriodicTableElements
    _atomic_numbers = {data['symbol']: num for num, data in PeriodicTableElements.items()}

    new_struc = StructureData(cell=struc.cell)
    isite = 0
    for site in struc.sites:
        sname = site.kind_name
        kind = struc.get_kind(sname)
        pos = site.position
        zatom = _atomic_numbers[kind.get_symbols_string()]
        if isite == imp_info.get_dict().get('ilayer_center'):
            zatom = imp_info.get_dict().get('Zimp')[0]  # single impurity: take the first entry
        symbol = PeriodicTableElements.get(zatom).get('symbol')
        new_struc.append_atom(position=pos, symbols=symbol)
        isite += 1

    return new_struc

new_struc = change_struc_imp_aux_wf(voro_calc.inputs.structure, imp_info)
```

Note

This functionality is already incorporated in the `kkr_imp_wc` workflow.

Then we run the Voronoi calculation for the auxiliary structure to create the impurity starting potential:

```
codename = 'voronoi@localhost'
code = Code.get_from_string(codename)

builder = code.get_builder()
builder.structure = new_struc
builder.parameters = kkrcalc_converged.inputs.parameters

voro_calc_aux = submit(builder)
```

Now we create the impurity starting potential using the converged host potential for the surrounding of the impurity and the new Au impurity startpot:

```
from aiida_kkr.tools.common_workfunctions import neworder_potential_wf
from numpy import loadtxt

potname_converged = kkrcalc_converged._POTENTIAL
potname_imp = 'potential_imp'
neworder_pot1 = [int(i) for i in loadtxt(GF_host_calc.outputs.retrieved.get_abs_path('scoef'), skiprows=1)[:,3]-1]
potname_impvorostart = voro_calc_aux._OUT_POTENTIAL_voronoi
replacelist_pot2 = [[0,0]]

settings_dict = {'pot1':
                 potname_converged, 'out_pot': potname_imp, 'neworder': neworder_pot1,
                 'pot2': potname_impvorostart, 'replace_newpos': replacelist_pot2, 'label': 'startpot_KKRimp',
                 'description': 'starting potential for Au impurity in bulk Cu'}
settings = Dict(dict=settings_dict)

startpot_Au_imp_sfd = neworder_potential_wf(settings_node=settings,
                                            parent_calc_folder=kkrcalc_converged.outputs.remote_folder,
                                            parent_calc_folder2=voro_calc_aux.outputs.remote_folder)
```

Create and submit the initial KKRimp calculation

Now we create a new impurity calculation, set all input nodes and submit the calculation to preconverge the impurity potential (Au embedded into the bulk Cu host as described in the `impurity_info` node):

```
# needed to link to the host GF writeout calculation
GF_host_output_folder = GF_host_calc.outputs.remote_folder

# create a process builder for a new KKRimp calculation
kkrimp_code = Code.get_from_string('KKRimp@my_mac')
builder = kkrimp_code.get_builder()

builder.host_Greenfunction_folder = GF_host_output_folder
builder.impurity_potential = startpot_Au_imp_sfd
builder.metadata.options = {'resources': resources}

# first set 20 simple mixing steps
kkrimp_params = kkrparams(params_type='kkrimp')
kkrimp_params.set_multiple_values(SCFSTEPS=20, IMIX=0, MIXFAC=0.05)
ParamsKKRimp = Dict(dict=kkrimp_params.get_dict())
builder.parameters = ParamsKKRimp

# submit calculation
kkrimp_calc = submit(builder)
```

Restart KKRimp calculation from KKRimp parent

Here we demonstrate how to restart a KKRimp calculation from a parent calculation, from which the starting potential is extracted automatically. This is used to compute the converged impurity potential starting from the previous preconvergence step:

```
builder = kkrimp_code.get_builder()
builder.parent_calc_folder = kkrimp_calc.outputs.remote_folder
builder.host_Greenfunction_folder = kkrimp_calc.inputs.GFhost_folder

kkrimp_params = kkrparams(params_type='kkrimp', **kkrimp_calc.inputs.parameters.get_dict())
kkrimp_params.set_multiple_values(SCFSTEPS=99, IMIX=5, MIXFAC=0.05)
ParamsKKRimp = Dict(dict=kkrimp_params.get_dict())
builder.parameters = ParamsKKRimp

# submit
kkrimp_calc_converge = submit(builder)
```

Impurity DOS

Create the final impurity DOS (a new host GF for the DOS contour, then a KKRimp calculation using the converged potential).

First prepare the host GF with the DOS contour:

```
params = kkrparams(**GF_host_calc.inputs.parameters.get_dict())
params.set_multiple_values(EMIN=-0.2, EMAX=GF_host_calc.res.fermi_energy+0.1, NPOL=0, NPT1=0, NPT2=101, NPT3=0)
ParaNode = Dict(dict=params.get_dict())

code = GF_host_calc.inputs.code # take the same code as in the calculation before
builder = code.get_builder()
builder.metadata.options = {'resources': GF_host_calc.attributes['resources']}
builder.parameters = ParaNode
builder.parent_folder = kkr_converged_parent_folder
builder.impurity_info = GF_host_calc.inputs.impurity_info

GF_host_doscalc = submit(builder)
```

Then we run the KKRimp step using the converged potential (via the `parent_calc_folder` node) and the host GF which contains the DOS contour information (via `host_Greenfunction_folder`):

```
builder = kkrimp_calc_converge.inputs.code.get_builder()
builder.host_Greenfunction_folder = GF_host_doscalc.outputs.remote_folder
builder.parent_calc_folder = kkrimp_calc_converge.outputs.remote_folder
builder.metadata.options = {'resources': kkrimp_calc_converge.attributes['resources']}

params = kkrparams(params_type='kkrimp', **kkrimp_calc_converge.inputs.parameters.get_dict())
params.set_multiple_values(RUNFLAG=['lmdos'], SCFSTEPS=1)
ParaNode = Dict(dict=params.get_dict())

builder.parameters = ParaNode

kkrimp_doscalc = submit(builder)
```

Finally we plot the DOS:

```
# get the interpolated DOS from the GF_host_doscalc calculation:
from masci_tools.io.common_functions import interpolate_dos
dospath_host = GF_host_doscalc.outputs.retrieved.get_abs_path('')
ef, dos, dos_interpol = interpolate_dos(dospath_host, return_original=True)

# impdos0, impdos1: interpolated impurity DOS arrays (impurity atom and first Cu neighbor),
# loaded from the files retrieved by the kkrimp_doscalc calculation

# sum over spins:
impdos0[:,1:] = impdos0[:,1:]*2
impdos1[:,1:] = impdos1[:,1:]*2

# plot bulk and impurity DOS
from matplotlib.pyplot import figure, fill_between, plot, legend, title, axhline, axvline, xlim, ylim, ylabel, xlabel, show
figure()
fill_between((dos_interpol[:,0]-ef)*13.6, dos_interpol[:,1]/13.6, color='lightgrey', lw=0, label='bulk Cu')
plot((impdos0[:,0]-ef)*13.6, impdos0[:,1]/13.6, label='Au imp')
plot((impdos0[:,0]-ef)*13.6, impdos1[:,1]/13.6, label='1st Cu neighbor')
plot((impdos0[:,0]-ef)*13.6, (impdos1[:,1]-dos_interpol[:,1])/dos_interpol[:,1], '--', label='relative difference in 1st Cu neighbor')
legend()
title('DOS of Au impurity embedded into bulk Cu')
axhline(0, lw=1, color='grey')
axvline(0, lw=1, color='grey')
xlim(-8, 1)
ylim(-0.5, 8.5)
xlabel('E-E_F (eV)')
ylabel('DOS (states/eV)')
show()
```

Which should look like this:

[figure: DOS of an Au impurity embedded into bulk Cu]

KKR calculation importer

Only functional in versions below 1.0

Plugin: `kkr.kkrimporter`

The calculation importer can be used to import an already finished KKR calculation into the aiida database.
The KKRimporterCalculation takes the inputs\n\n• `code`: KKR code installation on the computer from which the calculation is imported\n• `computer`: computer on which the calculation has been performed\n• `resources`: resources used in the calculation\n• `remote_workdir`: remote absolute path on `computer` to the path where the calculation has been performed\n• `input_file_names`: dictionary of input file names\n• `output_file_names`, optional: dictionary of output file names\n\nand mimics a KKR calculation (i.e. stores the KKR parameter set in node `parameters` and the extracted aiida StructureData node `structure` as inputs and creates `remote_folder`, `retrieved` and `output_parameters` output nodes). A KKRimporter calculation can then be used like a KKR calculation to continue calculations with correct provenance tracking in the database.\n\nNote\n\n• At least `input_file` and `potential_file` need to be given in `input_file_names`.\n• Works also if the output was a Jij calculation; then `Jijatom.*` and `shells.dat` files are retrieved as well.\n\nExample on how to use the calculation importer:\n\n```# Load the KKRimporter class\nfrom aiida.orm import CalculationFactory\nKkrImporter = CalculationFactory('kkr.kkrimporter')\n\n# Load the Code node representative of the one used to perform the calculations\nfrom aiida.orm.code import Code\ncode = Code.get_from_string('KKRcode@my_mac')\n\n# Get the Computer node representative of the one the calculations were run on\ncomputer = code.get_remote_computer()\n\n# Define the computation resources used for the calculations\nresources = {'num_machines': 1, 'num_mpiprocs_per_machine': 1}\n\n# Create calculation\ncalc1 = KkrImporter(computer=computer,\nresources=resources,\nremote_workdir='<absolute-remote-path-to-calculation>',\ninput_file_names={'input_file':'inputcard', 'potential_file':'potential', 'shapefun_file':'shapefun'},\noutput_file_names={'out_potential_file':'potential'})\n\n# Link the code that was used to run the 
calculations.\ncalc1.use_code(code)\n\n# Get the computer's transport and create an instance.\nfrom aiida.backends.utils import get_authinfo, get_automatic_user\nauthinfo = get_authinfo(computer=computer, aiidauser=get_automatic_user())\ntransport = authinfo.get_transport()\n\n# Open the transport for the duration of the immigrations, so it's not\n# reopened for each one. This is best performed using the transport's\n# context guard through the ``with`` statement.\nwith transport as open_transport:\n    # Parse the calculations' input files to automatically generate and link the\n    # calculations' input nodes.\n    calc1.create_input_nodes(open_transport)\n\n    # Store the calculations and their input nodes and tell the daemon the output\n    # is ready to be retrieved and parsed.\n    calc1.prepare_for_retrieval_and_parsing(open_transport)\n```\n\nAfter the calculation has finished the following nodes should appear in the aiida database:\n\n```\$ verdi calculation show <pk-to-imported-calculation>\n----------- ------------------------------------\ntype KkrImporterCalculation\npk 22121\nuuid 848c2185-8c82-44cd-ab67-213c20aaa414\nlabel\ndescription\nctime 2018-04-24 15:29:42.136154+00:00\nmtime 2018-04-24 15:29:48.496421+00:00\ncomputer my_mac\ncode KKRcode\n----------- ------------------------------------\n##### INPUTS:\n------------ ----- -------------\nparameters 22120 Dict\nstructure 22119 StructureData\n##### OUTPUTS:\n----------------- ----- -------------\nremote_folder 22122 RemoteData\nretrieved 22123 FolderData\noutput_parameters 22124 Dict\n##### LOGS:\nThere are 1 log messages for this calculation\nRun 'verdi calculation logshow 22121' to see them\n```\nExample scripts\n\nHere is a small collection of example scripts.\n\nScripts need to be updated for new version (>1.0)\n\nFull example Voronoi-KKR-KKRimp\n\nCompact script starting with structure setup, then voronoi calculation, followed by an initial KKR calculation which is then continued for convergence. 
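Each example script below defines a small `wait_for_it` helper that polls the calculation status every two seconds until it finishes or a timeout is reached. Stripped of the aiida-specific calls, its core logic is just a bounded polling loop; the sketch below is illustrative (the function and argument names are not part of aiida-kkr):

```python
import time

def wait_for(is_done, maxwait=300., interval=2.):
    """Poll is_done() until it returns True or maxwait seconds have passed.

    Mirrors the logic of the wait_for_it helper in the example scripts,
    which polls calc.has_finished() every 2 seconds up to maxwait seconds.
    """
    waited = 0.
    while not is_done() and waited < maxwait:
        time.sleep(interval)
        waited += interval
    # report the final state (in the scripts: calc.has_finished())
    return is_done()
```

In the scripts, `is_done` corresponds to `calc.has_finished`, and after the loop the scripts additionally print `calc.has_finished_ok()` to show whether the calculation ended successfully.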
The converged calculation is then used to write out the host GF and a simple impurity calculation is performed.\n\nDownload: `this example script`\n\n```#!/usr/bin/env python\n\n# connect to aiida db\nfrom aiida.orm import Code\nfrom aiida.orm import DataFactory\nStructureData = DataFactory('structure')\nDict = DataFactory('parameter')\n\n# load the kkrparams class which is a useful tool to create the set of input parameters for the KKR-family of calculations\nfrom aiida_kkr.tools.kkr_params import kkrparams\n\nfrom numpy import array\n\n# helper function\ndef wait_for_it(calc, maxwait=300):\n    from time import sleep\n    N = 0\n    print 'start waiting for calculation to finish'\n    while not calc.has_finished() and N<(maxwait/2.):\n        N += 1\n        if N%5==0:\n            print('.')\n        sleep(2.)\n    print('waiting done after {} seconds: {} {}'.format(N*2, calc.has_finished(), calc.has_finished_ok()))\n\n###################################################\n# initial structure\n###################################################\n\n# create Copper bulk aiida Structure\nalat = 3.61 # lattice constant in Angstroem\nbravais = alat*array([[0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]) # Bravais matrix in Ang. units\nCu = StructureData(cell=bravais)\nCu.append_atom(position=[0,0,0], symbols='Cu')\n\n###################################################\n# Voronoi step (preparation of starting potential)\n###################################################\n\n# create empty set of KKR parameters (LMAX cutoff etc. 
) for voronoi code\nparams = kkrparams(params_type='voronoi')\n\n# and set at least the mandatory parameters\nparams.set_multiple_values(LMAX=2, NSPIN=1, RCLUSTZ=2.3)\n\n# finally create an aiida Dict node and fill with the dictionary of parameters\nParaNode = Dict(dict=params.get_dict())\n\n# choose a valid installation of the voronoi code\ncodename = 'voronoi@my_mac'\ncode = Code.get_from_string(codename)\n\n# create new instance of a VoronoiCalculation\nvoro_calc = code.new_calc()\n\n# and set resources that will be used (here serial job)\nvoro_calc.set_resources({'num_machines':1, 'tot_num_mpiprocs':1})\n\n### !!! use queue name if necessary !!! ###\n# voro_calc.set_queue_name('<queue_name>')\n\n# then set structure and input parameter\nvoro_calc.use_structure(Cu)\nvoro_calc.use_parameters(ParaNode)\n\n# store all nodes and submit the calculation\nvoro_calc.store_all()\nvoro_calc.submit()\n\nwait_for_it(voro_calc)\n\n# for future reference\nvoronoi_calc_folder = voro_calc.outputs.remote_folder\nvoro_params = voro_calc.inputs.parameters\n\n###################################################\n# KKR step (20 iterations simple mixing)\n###################################################\n\n# create new set of parameters for a KKR calculation and fill with values from the previous voronoi calculation\nparams = kkrparams(params_type='kkr', **voro_params.get_dict())\n\n# and set the missing values\nparams.set_multiple_values(RMAX=7., GMAX=65.)\n\n# choose 20 simple mixing iterations first to preconverge potential (here 5% simple mixing)\nparams.set_multiple_values(NSTEPS=20, IMIX=0, STRMIX=0.05)\n\n# create aiida Dict node from the KKR parameters\nParaNode = Dict(dict=params.get_dict())\n\n# get KKR code and create new calculation instance\n### !!! use your code name !!! 
###\ncode = Code.get_from_string('KKRcode@my_mac')\nkkr_calc = code.new_calc()\n\n# set input Parameter, parent calculation (previous voronoi calculation), computer resources\nkkr_calc.use_parameters(ParaNode)\nkkr_calc.use_parent_folder(voronoi_calc_folder)\nkkr_calc.set_resources({'num_machines': 1, 'num_mpiprocs_per_machine':1})\n\n### !!! use queue name if necessary !!! ###\n# kkr_calc.set_queue_name('<queue_name>')\n\n# store nodes and submit calculation\nkkr_calc.store_all()\nkkr_calc.submit()\n\n# wait for calculation to finish\nwait_for_it(kkr_calc)\n\n###################################################\n# 2nd KKR step (continued from previous KKR calc)\n###################################################\n\n# create new KKR calculation instance to continue KKR on top of a previous KKR calculation\nkkr_calc_continued = code.new_calc()\n\n# reuse old KKR parameters and update scf settings (default is NSTEPS=1, IMIX=0)\nparams.set_multiple_values(NSTEPS=50, IMIX=5)\n# and create aiida Dict node\nParaNode = Dict(dict=params.get_dict())\n\n# then set input nodes for calculation\nkkr_calc_continued.use_code(code)\nkkr_calc_continued.use_parameters(ParaNode)\nkkr_calc_parent_folder = kkr_calc.outputs.remote_folder # parent remote folder of previous calculation\nkkr_calc_continued.use_parent_folder(kkr_calc_parent_folder)\nkkr_calc_continued.set_resources({'num_machines': 1, 'num_mpiprocs_per_machine':1})\n\n### !!! use queue name if necessary !!! 
###\n# kkr_calc_continued.set_queue_name('<queue_name>')\n\n# store input nodes and submit calculation\nkkr_calc_continued.store_all()\nkkr_calc_continued.submit()\n\n# wait for calculation to finish\nwait_for_it(kkr_calc_continued)\n\n###################################################\n# writeout host GF (using converged calculation)\n###################################################\n\n# take remote folder of converged calculation to reuse settings and write out Green function and tmat of the crystalline host system\nkkr_converged_parent_folder = kkr_calc_continued.outputs.remote_folder\n\n# extract kkr calculation from parent calculation folder\nkkrcalc_converged = kkr_converged_parent_folder.get_inputs()\n\n# extract parameters from parent calculation and update RUNOPT for KKRFLEX option\nkkr_params_dict = kkrcalc_converged.inputs.parameters.get_dict()\nkkr_params_dict['RUNOPT'] = ['KKRFLEX']\n\n# create aiida Dict node with set parameters that are updated compared to converged parent kkr calculation\nParaNode = Dict(dict=kkr_params_dict)\n\n# create new KKR calculation\ncode = kkrcalc_converged.get_code() # take the same code as in the calculation before\nGF_host_calc = code.new_calc()\n\n# set resources, Parameter Node and parent calculation\nresources = kkrcalc_converged.get_resources()\nGF_host_calc.set_resources(resources)\nGF_host_calc.use_parameters(ParaNode)\nGF_host_calc.use_parent_folder(kkr_converged_parent_folder)\n\n### !!! use queue name if necessary !!! 
###\n# GF_host_calc.set_queue_name('<queue_name>')\n\n# prepare impurity_info node containing the information about the impurity cluster\nimp_info = Dict(dict={'Rcut':1.01, 'ilayer_center':0, 'Zimp':[79.]})\n# set impurity info node to calculation\nGF_host_calc.use_impurity_info(imp_info)\n\n# store input nodes and submit calculation\nGF_host_calc.store_all()\nGF_host_calc.submit()\n\n# wait for calculation to finish\nwait_for_it(GF_host_calc)\n\n######################################################################\n# KKRimp calculation (20 simple mixing iterations for preconvergence)\n######################################################################\n\n# first create impurity start pot using auxiliary voronoi calculation\n\n# creation of the auxiliary structure:\n# use an aiida workfunction to keep track of the provenance\nfrom aiida.work import workfunction as wf\n@wf\ndef change_struc_imp_aux_wf(struc, imp_info): # Note: works for single imp at center only!\n    from aiida.common.constants import elements as PeriodicTableElements\n    _atomic_numbers = {data['symbol']: num for num, data in PeriodicTableElements.iteritems()}\n\n    new_struc = StructureData(cell=struc.cell)\n    isite = 0\n    for site in struc.sites:\n        sname = site.kind_name\n        kind = struc.get_kind(sname)\n        pos = site.position\n        zatom = _atomic_numbers[kind.get_symbols_string()]\n        if isite == imp_info.get_dict().get('ilayer_center'):\n            zatom = int(imp_info.get_dict().get('Zimp')[0]) # single impurity only\n        symbol = PeriodicTableElements.get(zatom).get('symbol')\n        new_struc.append_atom(position=pos, symbols=symbol)\n        isite += 1\n\n    return new_struc\n\nnew_struc = change_struc_imp_aux_wf(voro_calc.inputs.structure, imp_info)\n\n# then Voronoi calculation for auxiliary structure\n### !!! use your code name !!! 
###\ncodename = 'voronoi@my_mac'\ncode = Code.get_from_string(codename)\nvoro_calc_aux = code.new_calc()\nvoro_calc_aux.set_resources({'num_machines':1, 'tot_num_mpiprocs':1})\nvoro_calc_aux.use_structure(new_struc)\nvoro_calc_aux.use_parameters(kkrcalc_converged.inputs.parameters)\nvoro_calc_aux.store_all()\nvoro_calc_aux.submit()\n### !!! use queue name if necessary !!! ###\n# voro_calc_aux.set_queue_name('<queue_name>')\n\n# wait for calculation to finish\nwait_for_it(voro_calc_aux)\n\n# then create impurity startpot using auxiliary voronoi calc and converged host potential\n\nfrom numpy import loadtxt\nfrom aiida_kkr.tools.common_workfunctions import neworder_potential_wf\n\npotname_converged = kkrcalc_converged._POTENTIAL\npotname_imp = 'potential_imp'\nneworder_pot1 = [int(i) for i in loadtxt(GF_host_calc.outputs.retrieved.get_abs_path('scoef'), skiprows=1)[:,3]-1]\npotname_impvorostart = voro_calc_aux._OUT_POTENTIAL_voronoi\nreplacelist_pot2 = [[0,0]]\n\nsettings_dict = {'pot1': potname_converged, 'out_pot': potname_imp, 'neworder': neworder_pot1,\n'pot2': potname_impvorostart, 'replace_newpos': replacelist_pot2, 'label': 'startpot_KKRimp',\n'description': 'starting potential for Au impurity in bulk Cu'}\nsettings = Dict(dict=settings_dict)\n\nstartpot_Au_imp_sfd = neworder_potential_wf(settings_node=settings,\nparent_calc_folder=kkrcalc_converged.out.remote_folder,\nparent_calc_folder2=voro_calc_aux.out.remote_folder)\n\n# now create KKRimp calculation and run first (some simple mixing steps) calculation\n\n# needed to link to host GF writeout calculation\nGF_host_output_folder = GF_host_calc.out.remote_folder\n\n# create new KKRimp calculation\nfrom aiida_kkr.calculations.kkrimp import KkrimpCalculation\nkkrimp_calc = KkrimpCalculation()\n\n### !!! use your code name !!! 
###\nkkrimp_code = Code.get_from_string('KKRimp@my_mac')\n\nkkrimp_calc.use_code(kkrimp_code)\nkkrimp_calc.use_host_Greenfunction_folder(GF_host_output_folder)\nkkrimp_calc.use_impurity_potential(startpot_Au_imp_sfd)\nkkrimp_calc.set_resources(resources)\nkkrimp_calc.set_computer(kkrimp_code.get_computer())\n\n# first set 20 simple mixing steps\nkkrimp_params = kkrparams(params_type='kkrimp')\nkkrimp_params.set_multiple_values(SCFSTEPS=20, IMIX=0, MIXFAC=0.05)\nParamsKKRimp = Dict(dict=kkrimp_params.get_dict())\nkkrimp_calc.use_parameters(ParamsKKRimp)\n\n# store and submit\nkkrimp_calc.store_all()\nkkrimp_calc.submit()\n\n# wait for calculation to finish\nwait_for_it(kkrimp_calc)\n\n###################################################\n# continued KKRimp calculation until convergence\n###################################################\n\nkkrimp_calc_converge = kkrimp_code.new_calc()\nkkrimp_calc_converge.use_parent_calc_folder(kkrimp_calc.out.remote_folder)\nkkrimp_calc_converge.set_resources(resources)\nkkrimp_calc_converge.use_host_Greenfunction_folder(kkrimp_calc.inputs.GFhost_folder)\n\nkkrimp_params = kkrparams(params_type='kkrimp', **kkrimp_calc.inputs.parameters.get_dict())\nkkrimp_params.set_multiple_values(SCFSTEPS=99, IMIX=5, MIXFAC=0.05)\nParamsKKRimp = Dict(dict=kkrimp_params.get_dict())\nkkrimp_calc_converge.use_parameters(ParamsKKRimp)\n\n### !!! use queue name if necessary !!! 
###\n# kkrimp_calc_converge.set_queue_name('<queue_name>')\n\n# store and submit\nkkrimp_calc_converge.store_all()\nkkrimp_calc_converge.submit()\n\nwait_for_it(kkrimp_calc_converge)\n```\nKKRimp DOS (starting from converged parent KKRimp calculation)\n\nScript running the host GF step for the DOS contour first, before running the KKRimp step and plotting.\n\nDownload: `this example script`\n\n```#!/usr/bin/env python\n\n# connect to aiida db\nfrom aiida.orm import DataFactory\nDict = DataFactory('parameter')\n\n# some settings:\n#DOS contour (in Ry units), emax=EF+dE_emax:\nemin, dE_emax, npt = -0.2, 0.1, 101\n# kkrimp parent (converged imp pot, needs to be a KKRimp calculation node)\n\n# derived quantities:\nGF_host_calc = kkrimp_calc_converge.inputs.GFhost_folder.inputs.remote_folder\nkkr_converged_parent_folder = GF_host_calc.inputs.parent_calc_folder\n\n# helper function\ndef wait_for_it(calc, maxwait=300):\n    from time import sleep\n    N = 0\n    print 'start waiting for calculation to finish'\n    while not calc.has_finished() and N<(maxwait/2.):\n        N += 1\n        if N%5==0:\n            print('.')\n        sleep(2.)\n    print('waiting done after {} seconds: {} {}'.format(N*2, calc.has_finished(), calc.has_finished_ok()))\n\n################################################################################################\n\n# first host GF with DOS contour\nfrom aiida_kkr.tools.kkr_params import kkrparams\nparams = kkrparams(**GF_host_calc.inputs.parameters.get_dict())\nparams.set_multiple_values(EMIN=emin, EMAX=GF_host_calc.res.fermi_energy+dE_emax, NPOL=0, NPT1=0, NPT2=npt, NPT3=0)\nParaNode = Dict(dict=params.get_dict())\n\ncode = GF_host_calc.get_code() # take the same code as in the calculation before\nGF_host_doscalc = code.new_calc()\nresources = GF_host_calc.get_resources()\nGF_host_doscalc.set_resources(resources)\nGF_host_doscalc.use_parameters(ParaNode)\nGF_host_doscalc.use_parent_folder(kkr_converged_parent_folder)\nGF_host_doscalc.use_impurity_info(GF_host_calc.inputs.impurity_info)\n\n# store and 
submit\nGF_host_doscalc.store_all()\nGF_host_doscalc.submit()\n\n# wait for calculation to finish\nprint 'host GF calc for DOS contour'\nwait_for_it(GF_host_doscalc)\n\n# then KKRimp step using the converged potential\n\nkkrimp_doscalc = kkrimp_calc_converge.get_code().new_calc()\nkkrimp_doscalc.use_host_Greenfunction_folder(GF_host_doscalc.out.remote_folder)\nkkrimp_doscalc.use_parent_calc_folder(kkrimp_calc_converge.out.remote_folder)\nkkrimp_doscalc.set_resources(kkrimp_calc_converge.get_resources())\n\n# set to DOS settings\nparams = kkrparams(params_type='kkrimp', **kkrimp_calc_converge.inputs.parameters.get_dict())\nparams.set_multiple_values(RUNFLAG=['lmdos'], SCFSTEPS=1)\nParaNode = Dict(dict=params.get_dict())\n\nkkrimp_doscalc.use_parameters(ParaNode)\n\n# store and submit calculation\nkkrimp_doscalc.store_all()\nkkrimp_doscalc.submit()\n\n# wait for calculation to finish\n\nprint 'KKRimp calc DOS'\nwait_for_it(kkrimp_doscalc)\n\n# Finally plot the DOS:\n\n# get interpolated DOS from GF_host_doscalc calculation:\nfrom masci_tools.io.common_functions import interpolate_dos\ndospath_host = GF_host_doscalc.out.retrieved.get_abs_path('')\nef, dos, dos_interpol = interpolate_dos(dospath_host, return_original=True)\n\n# Note: impdos0 and impdos1 are the impurity-site and first-neighbor DOS arrays,\n# loaded beforehand from the output files retrieved by kkrimp_doscalc\n\n# sum over spins:\nimpdos0[:,1:] = impdos0[:,1:]*2\nimpdos1[:,1:] = impdos1[:,1:]*2\n\n# plot bulk and impurity DOS\nfrom matplotlib.pyplot import figure, fill_between, plot, legend, title, axhline, axvline, xlim, ylim, ylabel, xlabel, show\nfigure()\nfill_between((dos_interpol[:,0]-ef)*13.6, dos_interpol[:,1]/13.6, color='lightgrey', lw=0, label='bulk Cu')\nplot((impdos0[:,0]-ef)*13.6, impdos0[:,1]/13.6, label='Au imp')\nplot((impdos0[:,0]-ef)*13.6, impdos1[:,1]/13.6, label='1st Cu neighbor')\nplot((impdos0[:,0]-ef)*13.6, (impdos1[:,1]-dos_interpol[:,1])/dos_interpol[:,1], '--', label='relative difference in 1st Cu neighbor')\nlegend()\ntitle('DOS of Au impurity embedded into bulk Cu')\naxhline(0, 
lw=1, color='grey')\naxvline(0, lw=1, color='grey')\nxlim(-8, 1)\nylim(-0.5,8.5)\nxlabel('E-E_F (eV)')\nylabel('DOS (states/eV)')\nshow()\n```\nKKR bandstructure\n\nScript running a bandstructure calculation: first the k-points along the high-symmetry lines are extracted from the structure node, then the bandstructure (i.e. `qdos`) calculation is started. Finally the results are plotted together with the DOS data (taken from the KKRimp DOS preparation step).\n\nDownload: `this example script`\n\n```#!/usr/bin/env python\n\n# connect to aiida db\nfrom aiida.orm import Code, DataFactory, load_node\nStructureData = DataFactory('structure')\nDict = DataFactory('parameter')\n\n# helper function:\ndef wait_for_it(calc, maxwait=300):\n    from time import sleep\n    N = 0\n    print 'start waiting for calculation to finish'\n    while not calc.has_finished() and N<(maxwait/2.):\n        N += 1\n        if N%5==0:\n            print('.')\n        sleep(2.)\n    print('waiting done after {} seconds: {} {}'.format(N*2, calc.has_finished(), calc.has_finished_ok()))\n\n# some settings (parent calculations):\n\n# converged KKR calculation (taken from bulk Cu KKR example)\n# previous DOS calculation started from converged KKR calc (taken from KKRimp DOS example, i.e. 
GF host calculation with DOS contour)\n\n# generate kpoints for bandstructure calculation\n\nfrom aiida_kkr.calculations.voro import VoronoiCalculation\nstruc, voro_parent = VoronoiCalculation.find_parent_structure(kkr_calc_converged.out.remote_folder)\n\nfrom aiida.tools.data.array.kpoints import get_explicit_kpoints_path\nkpts = get_explicit_kpoints_path(struc).get('explicit_kpoints')\n\n# run bandstructure calculation\n\n# create bandstructure calculation reusing old settings (including same computer and resources in this example)\nkkrcode = kkr_calc_converged.get_code()\nkkrcalc = kkrcode.new_calc()\nkkrcalc.use_kpoints(kpts) # pass kpoints as input\nkkrcalc.use_parent_folder(kkr_calc_converged.out.remote_folder)\nkkrcalc.set_resources(kkr_calc_converged.get_resources())\n# change parameters to qdos settings (E range and number of points)\nfrom aiida_kkr.tools.kkr_params import kkrparams\nqdos_params = kkrparams(**kkr_calc_converged.inputs.parameters.get_dict()) # reuse old settings\n# reuse the same emin/emax settings as in DOS run (extracted from input parameter node)\nqdos_params.set_multiple_values(EMIN=host_dos_calc.inputs.parameters.get_dict().get('EMIN'),\nEMAX=host_dos_calc.inputs.parameters.get_dict().get('EMAX'),\nNPT2=100)\nkkrcalc.use_parameters(Dict(dict=qdos_params.get_dict()))\n\n# store and submit calculation\nkkrcalc.store_all()\nkkrcalc.submit()\n\nwait_for_it(kkrcalc, maxwait=600)\n\n# plot results\n\n# extract kpoint labels\nklbl = kpts.labels\n# fix overlapping labels (nicer plotting)\ntmp = klbl[0]\ntmp = (tmp[0], '\n'+tmp[1]+' ')\nklbl[0] = tmp\ntmp = klbl[1]\ntmp = (tmp[0], ' '+tmp[1])\nklbl[1] = tmp\n\n#plotting of bandstructure and previously calculated DOS data\n\nfrom masci_tools.io.common_functions import interpolate_dos\ndospath_host = host_dos_calc.out.retrieved.get_abs_path('')\nef, dos, dos_interpol = interpolate_dos(dospath_host, return_original=True)\n\n# load qdos file and reshape\nfrom numpy import loadtxt, 
sum, log\nqdos_file = kkrcalc.out.retrieved.get_abs_path('qdos.01.1.dat')\nq = loadtxt(qdos_file)\nnepts = len(set(q[:,0]))\ndata = q[:,5:].reshape(nepts, len(q)/nepts, -1)\ne = (q[::len(q)/nepts, 0]-ef)*13.6\n\n# plot bandstructure\nfrom matplotlib.pyplot import figure, pcolormesh, show, xticks, ylabel, axhline, axvline, gca, title, plot, ylim, xlabel, suptitle\nfigure(figsize=((8, 4.8)))\npcolormesh(range(len(q)/nepts), e, log(sum(abs(data), axis=2)), lw=0)\nxticks([i[0] for i in klbl], [i[1] for i in klbl])\nylabel('E-E_F (eV)')\naxhline(0, color='lightgrey', lw=1)\ntitle('band structure')\n\n# plot DOS on right hand side of bandstructure plot\naxBand = gca()\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\ndivider = make_axes_locatable(axBand)\naxDOS = divider.append_axes(\"right\", 1.2, pad=0.1, sharey=axBand)\n\nplot(dos_interpol[:,1]/13.6, (dos_interpol[:,0]-ef)*13.6)\n\nylim(e.min(), e.max())\n\naxhline(0, color='grey', lw=1)\naxvline(0, color='grey', lw=1)\n\naxDOS.yaxis.set_tick_params(labelleft=False, labelright=True, right=True, left=False)\nxlabel('states/eV')\n\ntitle('DOS')\nsuptitle(struc.get_formula(), fontsize=16)\n\nshow()\n```\n###### Workflows\n\nThis page contains a short introduction to the workflows provided by `aiida-kkr`.\n\nDensity of states\n\nThe density of states (DOS) workflow `kkr_dos_wc` automatically sets the right parameters in the input of a KKR calculation to perform a DOS calculation. The specifics of the DOS energy contour are set via the `wf_parameters` input node which contains default values if no user input is given.\n\nNote\n\nThe default values of the `wf_parameters` input node can be extracted using `kkr_dos_wc.get_wf_defaults()`.\n\nInputs:\n• `kkr` (aiida.orm.Code): KKRcode using the `kkr.kkr` plugin\n• `remote_data` (RemoteData): The remote folder of the (converged) calculation whose output potential is used as input for the DOS run\n• `wf_parameters` (ParameterData, optional): Some settings of the workflow behavior (e.g. 
number of energy points in DOS contour etc.)\n• `options` (ParameterData, optional): Some settings for the computer you want to use (e.g. queue_name, use_mpi, resources, …)\n• `label` (str, optional): Label of the workflow\n• `description` (str, optional): Longer description of the workflow\nReturns nodes:\n• `dos_data` (XyData): The DOS data on the DOS energy contour (i.e. at some finite temperature)\n• `dos_data_interpol` (XyData): The interpolated DOS from the line parallel to the real axis down onto the real axis\n• `results_wf` (ParameterData): The output node of the workflow containing some information on the DOS run\n\nNote\n\nThe x and y arrays of the `dos_data` output nodes can easily be accessed using:\n\n```x = dos_data_node.get_x()\ny = dos_data_node.get_y()\n```\n\nwhere the returned list is of the form `[label, numpy-array-of-data, unit]` and the y-array contains entries for total DOS, s-, p-, d-, …, and non-spherical contributions to the DOS, e.g.:\n\n```[(u'interpolated dos tot', array([[...]]), u'states/eV'),\n(u'interpolated dos s', array([[...]]), u'states/eV'),\n(u'interpolated dos p', array([[...]]), u'states/eV'),\n(u'interpolated dos d', array([[...]]), u'states/eV'),\n(u'interpolated dos ns', array([[...]]), u'states/eV')]\n```\n\nNote that the output data are 2D arrays containing the atom resolved DOS, i.e. 
the DOS values for all atoms in the unit cell.\n\nExample Usage\n\nWe start by getting an installation of the KKRcode:\n\n```from aiida.orm import Code\nkkrcode = Code.get_from_string('KKRcode@my_mac')\n```\n\nNext load the remote folder node of the previous calculation (here the converged calculation of the Cu bulk test case) from which we want to start the following DOS calculation:\n\n```# import old KKR remote folder\n```\n\nThen we set some settings of the workflow parameters (this step is optional):\n\n```# create workflow settings\nfrom aiida.orm import DataFactory\nParameterData = DataFactory('parameter')\nworkflow_settings = ParameterData(dict={'dos_params':{'emax': 1, 'tempr': 200, 'emin': -1,\n'kmesh': [20, 20, 20], 'nepts': 81}})\n```\n\nFinally we run the workflow:\n\n```from aiida_kkr.workflows.dos import kkr_dos_wc\nfrom aiida.work import run\nrun(kkr_dos_wc, _label='test_doscal', _description='My test dos calculation.',\nkkr=kkrcode, remote_data=kkr_remote_folder, wf_parameters=workflow_settings)\n```\n\nThe following script can be used to plot the total interpolated DOS (in the `dos_data_interpol` output node that can for example be accessed using `dos_data_interpol = <kkr_dos_wc-node>.out.dos_data_interpol` where `<kkr_dos_wc-node>` is the workflow node) of the calculation above:\n\n```def plot_dos(dos_data_node):\n    x = dos_data_node.get_x()\n    y_all = dos_data_node.get_y()\n\n    from matplotlib.pylab import figure, xlabel, ylabel, axhline, axvline, plot, legend, title\n\n    figure()\n\n    # loop over contributions (tot, s, p, d, ns)\n    for y in y_all:\n        if y==y_all[0]: # special line formatting for total DOS\n            style = 'x-'\n            lw = 3\n        else:\n            style = '--'\n            lw = 2\n        plot(x[1][0], y[1][0], style, lw=lw, ms=6, label=y[0].split('dos ')[1])\n\n    xlabel(x[0]+' ({})'.format(x[-1]))\n    ylabel(y[0].replace(' ns','')+' ({})'.format(y[-1]))\n    axhline(0, color='grey', linestyle='dotted', zorder=-100)\n    axvline(0, color='grey', linestyle='dotted', zorder=-100)\n    legend(loc=2)\n    title('DOS of bulk 
Cu')\n\nplot_dos(dos_data_interpol)\n```\n\nwhich will produce the following plot:\n\nBandstructure\n\nThe bandstructure calculation, using the workchain `kkr_bs_wc`, yields the band structure in terms of the Bloch spectral function. To run the bandstructure calculation all the required parameters are taken from the parent (converged) KkrCalculation and the user-defined `wf_parameters`.\n\nNote\n\nUse `kkr_bs_wc.get_wf_defaults()` to get the default values for the `wf_parameters` input.\n\nInputs:\n\n• `wf_parameters` (Dict, optional): Workchain specifications, contains `nepts` (int), `tempr` (float), `emin` (eV), `emax` (eV), `rclustz` (float, in units of the lattice constant). The energy range given by `emin` and `emax` is given relative to the Fermi level.\n• `options` (Dict, optional): Computer specifications, scheduler command, parallelization, walltime etc.\n• `kpoints` (KpointsData, optional): k-point path used in the bandstructure calculation. If it is not given, it is extracted from the structure. (Note that the k-points should come from the primitive structure; this will be handled internally in a future version.)\n• `remote_data` (RemoteData, mandatory): Parent folder of a converged KkrCalculation.\n• `kkr` (Code, mandatory): KKRhost code (i.e. using the `kkr.kkr` plugin).\n• `label` (Str, optional): label for the bandstructure WorkChainNode. Can also be found in the `result_wf` output Dict as the `BS_wf_label` key.\n• `description` (Str, optional): description for the bandstructure WorkChainNode. 
Can be found in the `result_wf` output Dict as the `BS_wf_description` key\nReturns nodes:\n• `BS_Data` (ArrayData): Consists of (BlochSpectralFunction, numpy array), (k_points, numpy array), (energy_points, numpy array), (special_kpoints, dict)\n• `result_wf` (Dict): workchain specifications (such as `successful`, `list_of_errors`, `BS_params` etc.) and information on the BS data (`BlochSpectralFunction`, `Kpts`, `energy_points`, `k-labels`)\n\nTo access the data:\n\n```BS_Data = <WC_NODE>.outputs.BS_Data\nbsf = BS_Data.get_array('BlochSpectralFunction')\nkpts = BS_Data.get_array('Kpts')\neng_pts = BS_Data.get_array('energy_points')\nk_label = BS_Data.extras['k-labels']\n```\n\nThe `bsf` array is a 2D numpy array containing the Bloch spectral function (k- and energy-resolved density), and `k_label` gives the python dict archiving the high-symmetry points as `index:label` in `kpts`.\n\nExample Usage:\n\nTo start the bandstructure calculation:\n\n```from aiida.orm import load_node, Str, Code, Dict\n\n# setup the code and computer\nkkrcode = Code.get_from_string('KKRcode@COMPUTERNAME')\n\n# import the remote folder from the old converged kkr calculation\n\n# create workflow parameter settings\nworkflow_parameters = Dict(dict={'emax': 5, # in eV, relative to EF\n'tempr': 50.0, # in K\n'emin': -10, # in eV\n'rclustz' : 2.3, # alat units\n'nepts': 6})\n\n# Computer configuration\noptions = Dict(dict={'max_wallclock_seconds': 36000,\n'resources': {'tot_num_mpiprocs': 48, 'num_machines': 1},\n'custom_scheduler_commands':\n'#SBATCH --account=jara0191\\n\\nulimit -s unlimited; export OMP_STACKSIZE=2g',\n'withmpi': True})\n\nlabel = Str('testing_the_kkr_bs_wc')\n\n# collect the workchain inputs (kkr_remote_folder: the RemoteData node imported above)\ninputs = {'kkr': kkrcode, 'remote_data': kkr_remote_folder, 'wf_parameters': workflow_parameters,\n'options': options, 'label': label}\n\nfrom aiida_kkr.workflows.bs import kkr_bs_wc\nfrom aiida.engine import run\nrun(kkr_bs_wc, **inputs)\n```\nTo plot:\n\nTo plot one or more kkr_bs_wc nodes:\n\n```from aiida import load_profile\nload_profile()\nNODE = <single node or list of nodes>\nfrom aiida_kkr.tools import plot_kkr\nplot_kkr( NODE, strucplot=False, logscale=True, silent=True, 
noshow=True)\n```\n\nFor bulk Cu this results in a plot like this:\n\nGenerate KKR start potential\n\nWorkflow: `kkr_startpot_wc`\n\nInputs:\n• `structure` (StructureData):\n• `voronoi` (Code):\n• `kkr` (Code):\n• `wf_parameters` (ParameterData, optional):\n• `options` (ParameterData, optional): Some settings for the computer you want to use (e.g. queue_name, use_mpi, resources, …)\n• `calc_parameters` (ParameterData, optional):\n• `label` (str, optional):\n• `description` (str, optional):\n\nNote\n\nThe default values of the `wf_parameters` input node can be extracted using `kkr_startpot_wc.get_wf_defaults()` and it should contain the following entries:\n\nGeneral settings:\n• `r_cls` (float):\n• `natom_in_cls_min` (int):\n• `fac_cls_increase` (float):\n• `num_rerun` (int):\nComputer settings:\n• `walltime_sec` (int):\n• `custom_scheduler_commands` (str):\n• `use_mpi` (bool):\n• `queue_name` (str):\n• `resources` (dict): `{'num_machines': 1}`\nSettings for DOS check of starting potential:\n• `check_dos` (bool):\n• `threshold_dos_zero` (float):\n• `delta_e_min` (float):\n• `delta_e_min_core_states` (float):\n• `dos_params` (dict): with the keys\n• `emax` (float):\n• `tempr` (float):\n• `emin` (float):\n• `kmesh` ([int, int, int]):\n• `nepts` (int):\nOutput nodes:\n• `last_doscal_dosdata` (XyData):\n• `last_doscal_dosdata_interpol` (XyData):\n• `last_doscal_results` (ParameterData):\n• `last_params_voronoi` (ParameterData):\n• `last_voronoi_remote` (RemoteData):\n• `last_voronoi_results` (ParameterData):\n• `results_vorostart_wc` (ParameterData):\nExample Usage\n\nFirst load KKRcode and Voronoi codes:\n\n```from aiida.orm import Code\nkkrcode = Code.get_from_string('KKRcode@my_mac')\nvorocode = Code.get_from_string('voronoi@my_mac')\n```\n\nThen choose some settings for the KKR specific parameters (LMAX cutoff etc.):\n\n```from aiida_kkr.tools.kkr_params import kkrparams\nkkr_settings = kkrparams(NSPIN=1, LMAX=2)\n```\n\nNow we create a structure node for the system 
we want to calculate:\n\n```# create Copper bulk aiida Structure\nfrom numpy import array\nfrom aiida.orm import DataFactory\nStructureData = DataFactory('structure')\nalat = 3.61 # lattice constant in Angstroem\nbravais = alat*array([[0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]) # Bravais matrix in Ang. units\nCu = StructureData(cell=bravais)\nCu.append_atom(position=[0,0,0], symbols='Cu')\n```\n\nFinally we run the `kkr_startpot_wc` workflow (here using the defaults for the workflow settings):\n\n```from aiida_kkr.workflows.voro_start import kkr_startpot_wc\nfrom aiida.work import run\nParameterData = DataFactory('parameter')\nrun(kkr_startpot_wc, structure=Cu, voronoi=vorocode, kkr=kkrcode, calc_parameters=ParameterData(dict=kkr_settings.get_dict()))\n```\nKKR scf cycle\n\nWorkflow: `kkr_scf_wc`\n\nInputs:\n\n```{'strmix': 0.03, 'brymix': 0.05, 'init_pos': None, 'convergence_criterion': 1e-08,\n'custom_scheduler_commands': '', 'convergence_setting_coarse': {'npol': 7, 'tempr': 1000.0,\n'n1': 3, 'n2': 11, 'n3': 3,\n'kmesh': [10, 10, 10]},\n'mixreduce': 0.5, 'mag_init': False, 'retreive_dos_data_scf_run': False,\n'dos_params': {'emax': 0.6, 'tempr': 200, 'nepts': 81, 'kmesh': [40, 40, 40], 'emin': -1},\n'hfield': 0.02, 'queue_name': '', 'threshold_aggressive_mixing': 0.008,\n'convergence_setting_fine': {'npol': 5, 'tempr': 600.0, 'n1': 7, 'n2': 29, 'n3': 7,\n'kmesh': [30, 30, 30]},\n'use_mpi': False, 'nsteps': 50, 'resources': {'num_machines': 1}, 'delta_e_min': 1.0,\n'walltime_sec': 3600, 'check_dos': True, 'threshold_switch_high_accuracy': 0.001,\n'kkr_runmax': 5, 'threshold_dos_zero': 0.001}\n\n_WorkChainSpecInputs({'_label': None, '_description': None, '_store_provenance': True,\n'dynamic': None, 'calc_parameters': None, 'kkr': None, 'voronoi': None,\n'remote_data': None, 'wf_parameters': <ParameterData: uuid: b132dfc4-3b7c-42e7-af27-4083802aff40 (unstored)>,\n'structure': None})\n```\n\nOutputs:\n\n```{'final_dosdata_interpol': <XyData: uuid: 0c14146d-90aa-4eb8-834d-74a706e500bb (pk: 
22872)>,\n'last_InputParameters': <ParameterData: uuid: 28a277ad-8998-4728-8296-75fd3b0c4eb4 (pk: 22875)>,\n'last_RemoteData': <RemoteData: uuid: d24cdfc1-938a-4308-b273-e0aa8697c975 (pk: 22876)>,\n'last_calc_out': <ParameterData: uuid: 1c8fab2d-e596-4874-9516-c1387bf7db7c (pk: 22874)>,\n'output_kkr_scf_wc_ParameterResults': <ParameterData: uuid: 0f21ac18-e556-49f8-aa26-55260d013fac (pk: 22878)>,\n'results_vorostart': <ParameterData: uuid: 93831550-8775-493a-907b-27a470b52dc8 (pk: 22877)>,\n'starting_dosdata_interpol': <XyData: uuid: 54fa57ad-f559-4837-ba1e-7db4ed67d5b0 (pk: 22873)>}\n```\nExample Usage\nCase 1: Start from previous calculation\n```from aiida.orm import Code\nkkrcode = Code.get_from_string('KKRcode@my_mac')\nvorocode = Code.get_from_string('voronoi@my_mac')\n```\n```from aiida_kkr.tools.kkr_params import kkrparams\nkkr_settings = kkrparams(NSPIN=1, LMAX=2)\n```\n```from aiida.orm import load_node\n# load a previously run kkr_startpot_wc workflow\nkkr_startpot = load_node(<pid of kkr_startpot_wc workflow>)\nlast_voronoi_remote = kkr_startpot.get_outputs_dict().get('last_voronoi_remote')\n```\n```from aiida.orm import DataFactory\nfrom aiida_kkr.workflows.kkr_scf import kkr_scf_wc\nfrom aiida.work import run\nParameterData = DataFactory('parameter')\nrun(kkr_scf_wc, kkr=kkrcode, calc_parameters=ParameterData(dict=kkr_settings.get_dict()), remote_data=last_voronoi_remote)\n```\nCase 2: Start from structure and run voronoi calculation first\n```# create Copper bulk aiida Structure\nfrom numpy import array\nfrom aiida.orm import DataFactory\nStructureData = DataFactory('structure')\nalat = 3.61 # lattice constant in Angstroem\nbravais = alat*array([[0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]) # Bravais matrix in Ang. units\nCu = StructureData(cell=bravais)\nCu.append_atom(position=[0,0,0], symbols='Cu')\n```\n```run(kkr_scf_wc, structure=Cu, kkr=kkrcode, voronoi=vorocode, calc_parameters=ParameterData(dict=kkr_settings.get_dict()))\n```\nKKR flex (GF calculation)\n\nThe Green's function writeout workflow performs a KKR calculation with runoption `KKRFLEX` to write out the `kkr_flexfiles`. 
Those are needed for a `kkrimp` calculation.\n\nInputs:\n• `kkr` (aiida.orm.Code): KKRcode using the `kkr.kkr` plugin\n• `remote_data` (RemoteData): The remote folder of the (converged) kkr calculation\n• `imp_info` (ParameterData): ParameterData node containing the information of the desired impurities (needed to write out the `kkr_flexfiles` and the `scoef` file)\n• `options` (ParameterData, optional): Some settings for the computer (e.g. queue_name, walltime_sec, resources, …)\n• `wf_parameters` (ParameterData, optional): Some settings for the workflow behaviour\n• `label` (str, optional): Label of the workflow\n• `description` (str, optional): Longer description of the workflow\nReturns nodes:\n• `workflow_info` (ParameterData): Node containing general information about the workflow (e.g. errors, computer information, …)\n• `GF_host_remote` (RemoteData): RemoteFolder with all of the `kkrflexfiles` and further output of the workflow\nExample Usage\n\nWe start by getting an installation of the KKRcode:\n\n```from aiida.orm import Code\nkkrcode = Code.get_from_string('KKRcode@my_mac')\n```\n\nNext load the remote folder node of the previous calculation (here the converged calculation of the Cu bulk test case) from which we want to start the following KKRFLEX calculation:\n\n```# import old KKR remote folder\nfrom aiida.orm import load_node\nkkr_remote_folder = load_node(<pid of converged calc>).out.remote_folder\n```\n\nAfterwards, the information regarding the impurity has to be given (in this example, we use a Au impurity with a cutoff radius of 2 alat which is placed in the first labelled lattice point of the unit cell). 
Further keywords for the `impurity_info` node can be found in the respective part of the documentation:\n\n```# set up impurity info node\nimps = ParameterData(dict={'ilayer_center':0, 'Rcut':2, 'Zimp':[79.]})\n```\n\nThen we choose some settings for the options parameters (this step is optional):\n\n```# create workflow settings\nfrom aiida.orm import DataFactory\nParameterData = DataFactory('parameter')\noptions = ParameterData(dict={'use_mpi':'false', 'queue_name':'viti_node', 'walltime_sec' : 60*60*2,\n'resources':{'num_machines':1, 'num_mpiprocs_per_machine':1}})\n```\n\nFinally we run the workflow:\n\n```from aiida_kkr.workflows.gf_writeout import kkr_flex_wc\nfrom aiida.work import run\nrun(kkr_flex_wc, label='test_gf_writeout', description='My test KKRflex calculation.',\nkkr=kkrcode, remote_data=kkr_remote_folder, options=options)\n```\nKKR impurity self consistency\n\nThis workflow performs a KKRimp self consistency calculation starting from a given host-impurity startpotential and converges it.\n\nNote\n\nThis workflow only works for a non-magnetic calculation without spin-orbit-coupling. Those two features will be added at a later stage. This is also just a sub workflow, meaning that it only converges an already given host-impurity potential. 
The whole kkrimp workflow starting from scratch will also be added at a later stage.\n\nInputs:\n• `kkrimp` (aiida.orm.Code): KKRimpcode using the `kkr.kkrimp` plugin\n• `host_imp_startpot` (SinglefileData, optional): File containing the host impurity potential (potential file with the whole cluster with all host and impurity potentials)\n• `remote_data` (RemoteData, optional): Output from a KKRflex calculation (can be extracted from the output of the GF writeout workflow)\n• `kkrimp_remote` (RemoteData, optional): RemoteData output from previous kkrimp calculation (if given, `host_imp_startpot` is not needed as input)\n• `impurity_info` (ParameterData, optional): Node containing information about the impurity cluster (has to be chosen consistently with `imp_info` from GF writeout step)\n• `options` (ParameterData, optional): Some general settings for the workflow (e.g. computer settings, queue, …)\n• `wf_parameters` (ParameterData, optional) : Settings for the behavior of the workflow (e.g. convergence settings, physical properties, …)\n• `label` (str, optional): Label of the workflow\n• `description` (str, optional): Longer description of the workflow\nReturns nodes:\n• `workflow_info` (ParameterData): Node containing general information about the workflow (e.g. 
errors, computer information, …)\n• `host_imp_pot` (SinglefileData): Converged host impurity potential that can be used for further calculations (DOS calc, new input for different KKRimp calculation)\nExample Usage\n\nWe start by getting an installation of the KKRimpcode:\n\n```from aiida.orm import Code\nkkrimpcode = Code.get_from_string('KKRimpcode@my_mac')\n```\n\nNext, either load the remote folder node of the previous calculation (here the KKRflex calculation that writes out the GF and KKRflexfiles) or the output node of the gf_writeout workflow from which we want to start the following KKRimp calculation:\n\n```# import old KKRFLEX remote folder\nfrom aiida.orm import load_node\nGF_host_output_folder = load_node(<pid of KKRFLEX calc>).out.remote_folder # 1st possibility\n# GF_host_output_folder = load_node(<pid of gf_writeout wf output node>) # 2nd possibility: take ``GF_host_remote`` output node from gf_writeout workflow\n```\n\nNow, load a converged calculation of the host system (here Cu bulk) as well as an auxiliary voronoi calculation (here Au) for the desired impurity:\n\n```# load converged KKRcalc\nkkrcalc_converged = load_node(<pid of converged KKRcalc (Cu bulk)>)\nvoro_calc_aux = load_node(<pid of voronoi calculation for the impurity (Au)>)\n```\n\nUsing those, one can construct the host-impurity potential that is needed as input for the workflow. 
Therefore, we use the `neworder_potential_wf` workfunction which is able to generate the startpot:\n\n```# load the necessary function\nfrom aiida_kkr.tools.common_workfunctions import neworder_potential_wf\nimport numpy as np\n\n# extract the name of the converged host potential\npotname_converged = kkrcalc_converged._POTENTIAL\n# set the name for the potential of the desired impurity (here Au)\npotname_imp = 'potential_imp'\n\n# load the KKRFLEX calculation (parent of ``GF_host_output_folder``) to access its retrieved files\nGF_host_calc = load_node(<pid of KKRFLEX calc>)\nneworder_pot1 = [int(i) for i in np.loadtxt(GF_host_calc.out.retrieved.get_abs_path('scoef'), skiprows=1)[:,3]-1]\npotname_impvorostart = voro_calc_aux._OUT_POTENTIAL_voronoi\nreplacelist_pot2 = [[0,0]]\n\n# set up settings node to use as argument for the neworder_potential function\nsettings_dict = {'pot1': potname_converged, 'out_pot': potname_imp, 'neworder': neworder_pot1,\n'pot2': potname_impvorostart, 'replace_newpos': replacelist_pot2, 'label': 'startpot_KKRimp',\n'description': 'starting potential for Au impurity in bulk Cu'}\nsettings = ParameterData(dict=settings_dict)\n\n# finally create the host-impurity potential (here ``startpot_Au_imp_sfd``) using the settings node as well as\n# the previously loaded converged KKR calculation and auxiliary voronoi calculation:\nstartpot_Au_imp_sfd = neworder_potential_wf(settings_node=settings,\nparent_calc_folder=kkrcalc_converged.out.remote_folder,\nparent_calc_folder2=voro_calc_aux.out.remote_folder)\n```\n\nNote\n\nFurther information on how the neworder potential workfunction works can be found in the respective part of this documentation.\n\nAfterwards, the information regarding the impurity has to be given (in this example, we use a Au impurity with a cutoff radius of 2 alat which is placed in the first labelled lattice point of the unit cell). 
Further keywords for the `impurity_info` node can be found in the respective part of the documentation:\n\n```# set up impurity info node\nimps = ParameterData(dict={'ilayer_center':0, 'Rcut':2, 'Zimp':[79.]})\n```\n\nThen, we choose the options parameters as well as specific wf_parameters controlling the convergence behaviour:\n\n```options = ParameterData(dict={'use_mpi':'false', 'queue_name':'viti_node', 'walltime_sec' : 60*60*2,\n'resources':{'num_machines':1, 'num_mpiprocs_per_machine':20}})\nkkrimp_params = ParameterData(dict={'nsteps': 50, 'convergence_criterion': 1*10**-8, 'strmix': 0.1,\n'threshold_aggressive_mixing': 3*10**-2, 'aggressive_mix': 3,\n'aggrmix': 0.1, 'kkr_runmax': 5})\n```\n\nFinally we run the workflow:\n\n```from aiida_kkr.workflows.kkr_imp_sub import kkr_imp_sub_wc\nfrom aiida.work import run\nrun(kkr_imp_sub_wc, label='kkr_imp_sub test (CuAu)', description='test of the kkr_imp_sub workflow for Cu, Au system',\nkkrimp=kkrimpcode, options=options, host_imp_startpot=startpot_Au_imp_sfd,\nremote_data=GF_host_output_folder, wf_parameters=kkrimp_params)\n```\nKKR impurity workflow\n\nThis workflow performs a KKR impurity calculation starting from an `impurity_info` node as well as either from a converged calculation remote for the host system (1) or from a GF writeout remote (2). In the two cases the following is done:\n\n• (1): First, the host system will be converged using the `kkr_scf` workflow. Then, the GF will be calculated using the `gf_writeout` workflow before calculating the auxiliary startpotential of the impurity. Now, the total impurity-host startpotential will be generated and then converged using the `kkr_imp_sub` workflow.\n• (2): In this case the two first steps from above will be skipped and the workflow starts by calculating the auxiliary startpotential.\n\nNote\n\nThis workflow is different from the `kkr_imp_sub` workflow that only converges a given impurity host potential. 
Here, the whole process of a KKR impurity calculation is done automatically.\n\nInputs:\n• `kkrimp` (aiida.orm.Code): KKRimpcode using the `kkr.kkrimp` plugin\n• `voronoi` (aiida.orm.Code): Voronoi code using the `kkr.voro` plugin\n• `kkr` (aiida.orm.Code): KKRhost code using the `kkr.kkr` plugin\n• `impurity_info` (ParameterData): Node containing information about the impurity cluster\n• `remote_data_host` (RemoteData, optional): RemoteData of a converged host calculation if you want to start the workflow from scratch\n• `remote_data_gf` (RemoteData, optional): RemoteData of a GF writeout step (if you want to skip the convergence of the host and the GF writeout step)\n• `options` (ParameterData, optional): Some general settings for the workflow (e.g. computer settings, queue, …)\n• `wf_parameters` (ParameterData, optional) : Settings for the behavior of the workflow (e.g. convergence settings, physical properties, …)\n• `voro_aux_parameters` (ParameterData, optional): Settings for the usage of the `kkr_startpot` sub workflow needed for the auxiliary voronoi potentials\n• `label` (str, optional): Label of the workflow\n• `description` (str, optional): Longer description of the workflow\nReturns nodes:\n• `workflow_info` (ParameterData): Node containing general information about the workflow\n• `last_calc_info` (ParameterData): Node containing information about the last used calculation of the workflow\n• `last_calc_output_parameters` (ParameterData): Node with all of the output parameters from the last calculation of the workflow\nExample Usage\n\nWe start by getting an installation of the codes:\n\n```from aiida.orm import Code\nkkrimpcode = Code.get_from_string('KKRimpcode@my_mac')\nkkrcode = Code.get_from_string('KKRcode@my_mac')\nvorocode = Code.get_from_string('vorocode@my_mac')\n```\n\nThen, set up an appropriate `impurity_info` node for your calculation:\n\n```# set up impurity info node\nimps = ParameterData(dict={'ilayer_center':0, 'Rcut':2, 
'Zimp':[79.]})\n```\n\nNext, load either a `gf_writeout_remote` or a `converged_host_remote`:\n\n```from aiida.orm import load_node\n# for case (1): remote of a converged host calculation\nconverged_host_remote = load_node(<pid of converged host remote>)\n# for case (2): remote of a previous GF writeout step\ngf_writeout_remote = load_node(<pid of GF writeout remote>)\n```\n\nSet up some more input parameter nodes for your workflow:\n\n```# node for general workflow options\noptions = ParameterData(dict={'use_mpi': False, 'walltime_sec' : 60*60*2,\n'resources':{'num_machines':1, 'num_mpiprocs_per_machine':1}})\n# node for convergence behaviour of the workflow\nkkrimp_params = ParameterData(dict={'nsteps': 99, 'convergence_criterion': 1*10**-8, 'strmix': 0.02,\n'threshold_aggressive_mixing': 8*10**-2, 'aggressive_mix': 3,\n'aggrmix': 0.04, 'kkr_runmax': 5, 'calc_orbmom': False, 'spinorbit': False,\n'newsol': False, 'mag_init': False, 'hfield': [0.05, 10],\n'non_spherical': 1, 'nspin': 2})\n# node for parameters needed for the auxiliary voronoi workflow\nvoro_aux_params = ParameterData(dict={'num_rerun' : 4, 'fac_cls_increase' : 1.5, 'check_dos': False,\n'lmax': 3, 'gmax': 65., 'rmax': 7., 'rclustz': 2.})\n```\n\nFinally, we run the workflow (for the two cases depicted above):\n\n```from aiida_kkr.workflows.kkr_scf import kkr_scf_wc\nfrom aiida_kkr.workflows.voro_start import kkr_startpot_wc\nfrom aiida_kkr.workflows.kkr_imp_sub import kkr_imp_sub_wc\nfrom aiida_kkr.workflows.gf_writeout import kkr_flex_wc\nfrom aiida_kkr.workflows.kkr_imp import kkr_imp_wc\nfrom aiida.work.launch import run, submit\n\n# don't forget to set a label and description for your workflow\n\n# case (2): start from a GF writeout remote\nwf_run = submit(kkr_imp_wc, label=label, description=description, voronoi=vorocode, kkrimp=kkrimpcode,\nkkr=kkrcode, options=options, impurity_info=imps, wf_parameters=kkrimp_params,\nvoro_aux_parameters=voro_aux_params, remote_data_gf=gf_writeout_remote)\n\n# case (1): start from a converged host calculation remote\nwf_run = submit(kkr_imp_wc, label=label, description=description, voronoi=vorocode, kkrimp=kkrimpcode,\nkkr=kkrcode, options=options, impurity_info=imps, wf_parameters=kkrimp_params,\nvoro_aux_parameters=voro_aux_params, 
remote_data_host=converged_host_remote)\n```\nKKR impurity density of states\n\nThis workflow calculates the density of states for a given host impurity input potential.\n\nInputs:\n• `kkrimp` (aiida.orm.Code): KKRimpcode using the `kkr.kkrimp` plugin\n• `kkr` (aiida.orm.Code): KKRhost code using the `kkr.kkr` plugin\n• `host_imp_pot` (SinglefileData): converged host impurity potential from impurity workflow\n• `options` (ParameterData, optional): Some general settings for the workflow (e.g. computer settings, queue, …)\n• `wf_parameters` (ParameterData, optional): Settings for the behavior of the workflow (e.g. convergence settings, physical properties, …)\n• `label` (str, optional): Label of the workflow\n• `description` (str, optional): Longer description of the workflow\nReturns nodes:\n• `workflow_info` (ParameterData): Node containing general information about the workflow\n• `last_calc_info` (ParameterData): Node containing information about the last used calculation of the workflow\n• `last_calc_output_parameters` (ParameterData): Node with all of the output parameters from the last calculation of the workflow\nExample Usage\n\nWe start by getting an installation of the codes:\n\n```from aiida.orm import Code\nkkrimpcode = Code.get_from_string('KKRimpcode@my_mac')\nkkrcode = Code.get_from_string('KKRcode@my_mac')\n```\n\nNext, load the converged host impurity potential:\n\n```from aiida.orm import load_node\nstartpot = load_node(<pid or uuid of SinglefileData>)\n```\n\nSet up some more input parameter nodes for your workflow:\n\n```# node for general workflow options\noptions = ParameterData(dict={'use_mpi': False, 'walltime_sec' : 60*60*2,\n'resources':{'num_machines':1, 'num_mpiprocs_per_machine':1}})\n# node for convergence behaviour of the workflow\nwf_params = ParameterData(dict={'ef_shift': 0. 
,\n'dos_params': {'nepts': 61,\n'tempr': 200,\n'emin': -1,\n'emax': 1,\n'kmesh': [30, 30, 30]},\n'non_spherical': 1,\n'born_iter': 2,\n'init_pos' : None,\n'newsol' : False})\n```\n\nFinally, we run the workflow:\n\n```from aiida_kkr.workflows.kkr_imp_dos import kkr_imp_dos_wc\nfrom aiida.work.launch import run, submit\n\n# don't forget to set a label and description for your workflow\nwf_run = submit(kkr_imp_dos_wc, label=label, description=description, kkrimp=kkrimpcode,\nkkr=kkrcode, host_imp_pot=startpot, options=options, wf_parameters=wf_params)\n```\nEquation of states\n\nWorkflow: `aiida_kkr.workflows.eos`\n\nWarning\n\nNot implemented yet!\n\nCheck KKR parameter convergence\n\nWorkflow: `aiida_kkr.workflows.check_para_convergence`\n\nWarning\n\nNot implemented yet!\n\nThe idea is to run checks after convergence for the following parameters:\n• RMAX\n• GMAX\n• energy contour\n• kmesh\nFind magnetic ground state\n\nWorkflow: `aiida_kkr.workflows.check_magnetic_state`\n\nWarning\n\nNot implemented yet!\n\nThe idea is to run a Jij calculation to estimate if the ferromagnetic state is the ground state or not. Then the unit cell could be doubled to compute the antiferromagnetic state. In case of noncollinear magnetism the full Jij tensor should be analyzed.\n\n###### Workfunctions\n\nHere the workfunctions provided by the aiida-kkr plugin are presented. The workfunctions are small tools for tasks performed on aiida nodes that keep the provenance in the database.\n\nupdate_params_wf\n\nThe workfunction `aiida_kkr.tools.common_workfunctions.update_params_wf` takes as an input a ParameterData node (`parameternode`) containing a KKR parameter set (i.e. 
created using the `kkrparams` class) and updates the parameter node with new values given in the dictionary of the second ParameterData input node (`updatenode`).\n\nInput nodes:\n• `parameternode` (ParameterData): aiida node of a KKR parameter set\n• `updatenode` (ParameterData): aiida node containing parameter names with new values\nOutput node:\n• `updated_parameter_node` (ParameterData): new parameter node with updated values\n\nNote\n\nIf the `updatenode` contains the keys `nodename` and/or `nodedesc` then the label and/or description of the output node will be set accordingly.\n\nExample Usage:\n\n```# initial KKR parameter node\ninput_node = ParameterData(dict=kkrparams(LMAX=3, EMIN=0).get_dict())\ninput_node.store()\n# update some values (e.g. change EMIN)\nupdated_params = ParameterData(dict={'nodename': 'my_changed_name', 'nodedesc': 'My description text', 'EMIN': -1, 'RMAX': 10.})\nnew_params_node = update_params_wf(input_node, updated_params)\n```\nneworder_potential_wf\n\nThe workfunction `aiida_kkr.tools.common_workfunctions.neworder_potential_wf` creates a SinglefileData node that contains a new potential based on a potential file in the RemoteData input node (`parent_calc_folder`), which is brought to a new order according to the workfunction settings in the ParameterData input node (`settings_node`).\n\nInput nodes:\n• `settings_node` (ParameterData): Settings like filenames and neworder-list\n• `parent_calc_folder` (RemoteData): folder where initial potential file is found\n• `parent_calc_folder2` (RemoteData, optional): folder where second potential is found\nOutput node:\n• `potential_file` (SinglefileData): output potential in new order\n\nNote\n\nThe settings_dict should contain the following keys:\n• `pot1`, mandatory: <filename_input_potential>\n• `out_pot`, mandatory: <filename_output_potential>\n• `neworder`, mandatory: [list of intended order in output potential]\n• `pot2`, mandatory if `parent_calc_folder2` is given as input node: 
<filename_second_input_file>\n• `replace_newpos`, mandatory if `parent_calc_folder2` is given as input node: [[position in neworder list which is replaced with potential from pot2, position in pot2 that is chosen for replacement]]\n• `label`, optional: label_for_output_node\n• `description`, optional: longer_description_for_output_node\nprepare_VCA_structure_wf\n\nWarning\n\nNot implemented yet!\n\nprepare_2Dcalc_wf\n\nWarning\n\nNot implemented yet!\n\n###### Tools\n\nHere the tools provided by `aiida-kkr` are described.\n\nPlotting tools\n\nVisualize typical nodes using `plot_kkr` from `aiida_kkr.tools.plot_kkr`. The `plot_kkr` function takes a node reference (can be a pk, uuid or the node itself or a list of these) and creates common plots for a quick visualization of the results obtained with the `aiida-kkr` plugin.\n\nUsage example:\n\n```from aiida_kkr.tools.plot_kkr import plot_kkr\n# use pk:\nplot_kkr(999999)\n# use uuid:\nplot_kkr('xxxxx-xxxxx')\n# use actual aiida node:\nplot_kkr(node)\n# give a list of nodes, which groups the plots together:\nplot_kkr([node1, node2])\n```\n\nThe behavior of `plot_kkr` can be controlled using keyword arguments:\n\n```plot_kkr(99999, strucplot=False) # do not call ase's view function to visualize structure\nplot_kkr(99999, silent=True) # plots only (no printout of inputs/outputs to node)\n```\n\nList of `plot_kkr` specific keyword arguments:\n\n• `silent` (bool, default: `False`): suppress printing of information about the input node (inputs and outputs)\n• `strucplot` (bool, default: `True`): plot structure using ase's `view` function\n• `interpol` (bool, default: `True`): use interpolated data for DOS plots\n• `all_atoms` (bool, default: `False`): plot all atoms in DOS plots (default: plot total DOS only)\n• `l_channels` (bool, default: `True`): plot l-channels in addition to total DOS\n• `logscale` (bool, default: `True`): plot rms and charge neutrality curves on a log-scale\n\nOther keyword arguments are passed onto plotting functions, e.g. to modify line properties etc. 
(see matplotlib documentation for a reference of possible keywords to modify line properties):\n\n```plot_kkr(99999, marker='o', color='r') # red lines with 'o' markers\n```\nExamples\nPlot structure node\n\nVisualize a structure node (this also happens as a sub-part of workflow plots if the workflow has a structure as input and `strucplot` is not set to `False`). Shown is a screenshot of the output produced by ase's `view`.\n\nPlot output of a KKR calculation\n\nVisualize the output of a `KkrCalculation`.\n\nPlot output of `kkr_dos_wc` workflow\n\nVisualize the output of a `kkr_dos_wc` workflow.\n\nPlot output of `kkr_startpot_wc` workflow\n\nVisualize the output of a `kkr_startpot_wc` workflow. The starting DOS is shown and the vertical lines indicate the position of the highest core states, the start of the energy contour and the Fermi level.\n\nPlot output of `kkr_scf_wc` workflow\n\nVisualize the output of an unfinished `kkr_scf_wc` workflow. The vertical lines indicate where individual calculations have started and ended.\n\nPlot output of `kkr_eos_wc` workflow\n\nVisualize the output of a `kkr_eos_wc` workflow.\n\nPlot multiple KKR calculations at once in the same plot\n```plot_kkr([34157, 31962, 31974], silent=True, strucplot=False, logscale=False)\n```\n\nVisualize the output of multiple `kkr_scf_wc` workflows without plotting structure.\n\n#### Modules provided with aiida-kkr (API reference)\n###### Calculations\nVoronoi\n\nInput plug-in for a voronoi calculation.\n\nclass `aiida_kkr.calculations.voro.``VoronoiCalculation`(*args, **kwargs)[source]\n\nAiiDA calculation plugin for a voronoi calculation (creation of starting potential and shapefun).\n\n`_check_valid_parent`(parent_calc_folder)[source]\n\nCheck that calc is a valid parent for a VoronoiCalculation. 
It can be a VoronoiCalculation or a KkrCalculation.\n\nclassmethod `_get_parent`(input_folder)[source]\n\nget the parent folder of the calculation. If no parent was found, return the input folder\n\nclassmethod `_get_remote`(parent_folder)[source]\n\nget remote_folder from input if parent_folder is not already a remote folder\n\nclassmethod `_get_struc`(parent_calc)[source]\n\nGet structure from a parent_folder (result of a calculation, typically a remote folder)\n\nclassmethod `_has_struc`(parent_folder)[source]\n\nCheck if parent_folder has structure information in its input\n\n`_is_KkrCalc`(calc)[source]\n\ncheck if calc contains the file out_potential\n\nclassmethod `define`(spec)[source]\n\ndefine internals and inputs / outputs of calculation\n\nclassmethod `find_parent_structure`(parent_folder)[source]\n\nFind the Structure node recursively in the chain of parent calculations (structure node is input to voronoi calculation)\n\n`prepare_for_submission`(tempfolder)[source]\n\nCreate the input files from the input nodes passed to this instance of the CalcJob.\n\nParameters: tempfolder – an aiida.common.folders.Folder to temporarily write files on disk\nReturns: aiida.common.datastructures.CalcInfo instance\nKKRcode\n\nInput plug-in for a KKR calculation.\n\nclass `aiida_kkr.calculations.kkr.``KkrCalculation`(*args, **kwargs)[source]\n\nAiiDA calculation plugin for a KKR calculation.\n\n`_kick_out_corestates_kkrhost`(local_copy_list, tempfolder)[source]\n\nCompare value of core states from potential file in local_copy_list with EMIN and kick core states out of the potential if they lie inside the energy contour.\n\n`_prepare_qdos_calc`(parameters, kpath, structure, tempfolder, use_alat_input)[source]\n\nprepare a qdos (i.e. 
bandstructure) calculation, can only be done if k-points are given in the input. Note: this changes some settings in the parameters to ensure a DOS contour and low smearing temperature. Also, the qvec.dat file is written here.\n\n`_set_ef_value_potential`(ef_set, local_copy_list, tempfolder)[source]\n\nSet EF value ef_set in the potential file.\n\n`_set_parent_remotedata`(remotedata)[source]\n\nUsed to set a parent remotefolder in a restart.\n\n`_use_decimation`(parameters, tempfolder)[source]\n\nActivate decimation mode and copy decifile from output of deciout_parent calculation\n\n`_use_initial_noco_angles`(parameters, structure, tempfolder)[source]\n\nSet starting values for non-collinear calculation (writes nonco_angle.dat to tempfolder). Adapt FIXMOM runopt according to fix_dir input in initial_noco_angle input node\n\nclassmethod `define`(spec)[source]\n\nInit internal parameters at class load time\n\n`prepare_for_submission`(tempfolder)[source]\n\nCreate input files.\n\nparam tempfolder:\naiida.common.folders.Folder subclass where the plugin should put all its files.\nparam inputdict:\ndictionary of the input nodes as they would be returned by get_inputs_dict\n`aiida_kkr.calculations.kkr.``_update_params`(parameters, change_values)[source]\n\nChange the parameters node according to the change_values list of key-value pairs. Return the input parameter node if the change_values list is empty\n\nKKRcode - calculation importer\n\nPlug-in to import a KKR calculation. This is based on the PwImmigrantCalculation of the aiida-quantumespresso plugin.\n\nclass `aiida_kkr.calculations.kkrimporter.``KkrImporterCalculation`(*args, **kwargs)[source]\n\nImporter dummy calculation for a previous KKR run\n\nParameters: remote_workdir (str) – Absolute path to the directory where the job was run. The transport of the computer you link as input to the calculation is the transport that will be used to retrieve the calculation's files. 
Therefore, `remote_workdir` should be the absolute path to the job's directory on that computer. input_file_names – The file names of the job's input files. output_file_name (dict with str entries) – The file names of the job's output file (i.e. the file containing the stdout of kkr.x).\n`_init_internal_params`()[source]\n\nInit internal parameters at class load time\n\nKKRimp\n\nInput plug-in for a KKRimp calculation.\n\nclass `aiida_kkr.calculations.kkrimp.``KkrimpCalculation`(*args, **kwargs)[source]\n\nAiiDA calculation plugin for a KKRimp calculation.\n\n`_change_atominfo`(imp_info, kkrflex_file_paths, tempfolder)[source]\n\nchange kkrflex_atominfo to match impurity case\n\n`_check_and_extract_input_nodes`(tempfolder)[source]\n\nExtract input nodes from inputdict and check consistency of input nodes :param inputdict: dict of inputnodes :returns:\n\n• parameters (aiida_kkr.tools.kkr_params.kkrparams), optional: parameters of KKRimp that end up in config.cfg\n• code (KKRimpCodeNode): code of KKRimp on some machine\n• imp_info (DictNode): parameter node of the impurity information, extracted from host_parent_calc\n• kkrflex_file_paths (dict): dictionary of {filenames: absolute_path_to_file} for the kkrflex-files\n• shapefun_path (str): absolute path of the shapefunction of the host parent calculation\n• host_parent_calc (KkrCalculation): node of the parent host calculation where the kkrflex-files were created\n• impurity_potential (SinglefileData): single file data node containing the starting potential for the impurity calculation\n• parent_calc_folder (RemoteData): remote directory of a parent KKRimp calculation\n`_check_key_setting_consistency`(params_kkrimp, key, val)[source]\n\nCheck if key/value pair that is supposed to be set is not in conflict with previous settings of parameters in params_kkrimp\n\n`_extract_and_write_config`(parent_calc_folder, params_host, parameters, tempfolder, GFhost_folder)[source]\n\nfill kkr params for KKRimp and write config file 
also writes kkrflex_llyfac file if Lloyd is used in the host system\n\n`_get_and_verify_hostfiles`(tempfolder)[source]\n\nCheck inputdict for host_Greenfunction_folder and extract impurity_info, paths to kkrflex-files and path of shapefun file\n\nParameters: inputdict – input dictionary containing all input nodes to KkrimpCalculation imp_info: Dict node containing impurity information like position, Z_imp, cluster size, etc. kkrflex_file_paths: dict of absolute file paths for the kkrflex files shapefun_path: absolute path of the shapefunction file in the host calculation (needed to construct shapefun_imp) shapes: mapping array of atoms to shapes ( input) shapefun_path is None if host_Greenfunction calculation was not full-potential InputValidationError, if inputdict does not contain ‘host_Greenfunction’ InputValidationError, if host_Greenfunction_folder not of right type UniquenessError, if host_Greenfunction_folder does not have exactly one parent InputValidationError, if host_Greenfunction does not have an input node impurity_info InputValidationError, if host_Greenfunction was not a KKRFLEX calculation\n`_get_pot_and_shape`(imp_info, shapefun, shapes, impurity_potential, parent_calc_folder, tempfolder, structure)[source]\n\nwrite shapefun from impurity info and host shapefun and copy imp. 
potential\n\nreturns: file handle to potential file\n\n`adapt_retrieve_tmatnew`(tempfolder, allopts, retrieve_list)[source]\n\nAdd out_magneticmoments and orbitalmoments files to retrieve list\n\n`add_jij_files`(tempfolder, retrieve_list)[source]\n\ncheck if KkrimpCalculation is in Jij mode and add OUT_JIJMAT to retrieve list if needed\n\n`add_lmdos_files_to_retrieve`(tempfolder, allopts, retrieve_list, kkrflex_file_paths)[source]\n\nAdd DOS files to retrieve list\n\n`create_or_update_ldaupot`(parent_calc_folder, tempfolder)[source]\n\nWrites ldaupot to tempfolder.\n\nIf parent_calc_folder is found and it contains an old ldaupot, we reuse the values for wldau, uldau and phi from there.\n\nclassmethod `define`(spec)[source]\n\nInit internal parameters at class load time\n\nclassmethod `get_ldaupot_from_retrieved`(retrieved, tempfolder)[source]\n\nExtract ldaupot from output of KKRimp retrieved to tempfolder. The extracted file in tempfolder will be named ldaupot_old.\n\nreturns True if ldaupot was found, otherwise returns False\n\n`get_old_ldaupot`(parent_calc_folder, tempfolder)[source]\n\nCopy old ldaupot from retrieved of parent or extract from tarball. If no parent_calc_folder is present this step is skipped.\n\nCheck if host GF is found on remote machine and reuse from there\n\n`get_run_test_opts`(parameters)[source]\n\nExtract run and test options from input parameters\n\n`init_ldau`(tempfolder, retrieve_list, parent_calc_folder)[source]\n\nCheck if settings_LDAU is in input and set up LDA+U calculation.
Reuse old ldaupot if parent_folder contains a file ldaupot.\n\n`prepare_for_submission`(tempfolder)[source]\n\nCreate input files.\n\nparam tempfolder:\naiida.common.folders.Folder subclass where the plugin should put all its files.\nparam inputdict:\ndictionary of the input nodes as they would be returned by get_inputs_dict\n`aiida_kkr.calculations.kkrimp.``get_ldaupot_text`(ldau_settings, ef_Ry, natom, initialize=True)[source]\n\ncreate the text for the ldaupot file\n\n###### Workflows¶\n\nThis section describes the aiida-kkr workflows.\n\nGenerate KKR start potential\n\nIn this module you find the base workflow for a dos calculation and some helper methods to do so with AiiDA\n\nclass `aiida_kkr.workflows.voro_start.``kkr_startpot_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain to create the starting potential for a KKR calculation by running voronoi and getting the starting DOS for first checks on the validity of the input setting. Starts from a structure together with a KKR parameter node.\n\nParameters: Return result_kkr_startpot_wc: wf_parameters – (Dict), Workchain specifications options – (Dict), specifications for the computer structure – (StructureData), aiida structure node to begin calculation from (needs to contain vacancies, if KKR needs empty spheres) kkr – (Code) voronoi – (Code) calc_parameters – (Dict), KKR parameter set, passed on to voronoi run. (Dict), Information of workflow results like Success, last result node, dos array data\n`check_dos`()[source]\n\nchecks if dos of starting potential is ok\n\n`check_voronoi`()[source]\n\ncheck voronoi output. return True/False if voronoi output is ok/problematic. If output is problematic, try to increase some parameters (e.g.
cluster radius) and rerun up to N_rerun_max times. Initializes with returning True\n\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow.\n\n`do_iteration_check`()[source]\n\ncheck if another iteration should be done\n\n`error_handler`()[source]\n\nCapture errors raised in validate_input\n\n`find_cluster_radius_alat`()[source]\n\nFind an estimate for the cluster radius that comes close to having nclsmin atoms in the cluster.\n\n`get_dos`()[source]\n\ncall to dos sub workflow passing the appropriate input and submitting the calculation\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_defaults dictionary. Can be used to easily create set of wf_parameters. returns _wf_defaults\n\n`return_results`()[source]\n\nreturn the results of the dos calculations. This should run through and produce output nodes even if everything failed, therefore it only uses results from context.\n\n`run_voronoi`()[source]\n\nrun voronoi calculation with parameters from input\n\n`start`()[source]\n\ninit context and some parameters\n\n`aiida_kkr.workflows.voro_start.``update_voro_input`(params_old, updatenode, voro_output)[source]\n\nPseudo wf used to keep track of updated parameters in voronoi calculation. voro_output only enters as dummy argument for correct connection but logic using this value is done somewhere else.\n\nKKR scf cycle\n\nIn this module you find the base workflow for converging a kkr calculation and some helper methods to do so with AiiDA\n\n`aiida_kkr.workflows.kkr_scf.``create_scf_result_node`(**kwargs)[source]\n\nThis is a pseudo wf, to create the right graph structure of AiiDA. This workfunction will create the output node in the database. It also connects the output_node to all nodes the information comes from.
So far it is just also passed in as an argument, because so far we are too lazy to move most of the code reworked from return_results in here.\n\n`aiida_kkr.workflows.kkr_scf.``extract_noco_angles`(**kwargs)[source]\n\nExtract noco angles from retrieved nonco_angles_out.dat files and save as Dict node which can be used as initial values for the next KkrCalculation. New angles are compared to old angles and if they are closer than fix_dir_threshold they are not allowed to change anymore\n\n`aiida_kkr.workflows.kkr_scf.``get_site_symbols`(structure)[source]\n\nextract the site number taking into account a possible CPA structure\n\nclass `aiida_kkr.workflows.kkr_scf.``kkr_scf_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain for converging a KKR calculation (SCF).\n\nIt converges the charge potential. Two paths are possible:\n\n(1) Start from a structure and run a voronoi calculation first, optionally with calc_parameters (2) Start from an existing Voronoi or KKR calculation, with a remoteData\n\nParameters: Return output_kkr_scf_wc_para: wf_parameters – (Dict), Workchain Specifications options – (Dict); specifications for the computer structure – (StructureData), Crystal structure calc_parameters – (Dict), Voronoi/Kkr Parameters remote_data – (RemoteData), from a KKR, or Voronoi calculation voronoi – (Code) kkr – (Code) (Dict), Information of workflow results like Success, last result node, list with convergence behavior\n\nminimum input example: 1. Code1, Code2, Structure, (Parameters), (wf_parameters) 2. Code2, remote_data, (Parameters), (wf_parameters)\n\nmaximum input example: 1. Code1, Code2, Structure, Parameters\n\nwf_parameters: {‘queue_name’ : String,\n‘resources’ : dict({“num_machines”: int, “num_mpiprocs_per_machine” : int}) ‘walltime’ : int}\n1. Code2, (remote-data), wf_parameters as in 1.\n\nHints: 1.
This workflow does not work with local codes!\n\n`_get_new_noco_angles`()[source]\n\nextract nonco angles from output of calculation, if fix_dir is True we skip this and leave the initial angles unchanged. Here we update self.ctx.initial_noco_angles with the new values\n\n`check_dos`()[source]\n\nchecks if dos of final potential is ok\n\n`check_input_params`(params, is_voronoi=False)[source]\n\nChecks input parameter consistency and aborts wf if check fails.\n\n`check_voronoi`()[source]\n\ncheck output of kkr_startpot_wc workflow that creates starting potential, shapefun etc.\n\n`condition`()[source]\n\ncheck convergence condition\n\n`convergence_on_track`()[source]\n\nCheck if convergence behavior of the last calculation is on track (i.e. going down)\n\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow.\n\n`get_dos`()[source]\n\ncall to dos sub workflow passing the appropriate input and submitting the calculation\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_default dictionary. Can be used to easily create set of wf_parameters.
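The `convergence_on_track` check above can be illustrated as a simple monotonicity test on the rms error history. This is only a minimal sketch under that assumption — the real method inspects calculation outputs stored in the workchain context, and the function signature here is illustrative:

```python
def convergence_on_track(rms_history, n_check=3):
    """Hypothetical re-implementation of the on-track check:
    convergence is 'on track' if the last n_check rms values
    are (weakly) decreasing, i.e. the error keeps going down."""
    tail = rms_history[-n_check:]
    return all(b <= a for a, b in zip(tail, tail[1:]))

print(convergence_on_track([1.0, 0.5, 0.2, 0.1]))  # True: rms decreasing
print(convergence_on_track([1.0, 0.5, 0.2, 0.4]))  # False: rms went back up
```

If the check fails, the workflow reacts by adapting the mixing strategy (see `update_kkr_params` below) rather than aborting immediately.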
returns _wf_default, _options_default\n\n`inspect_kkr`()[source]\n\ncheck for convergence and store some of the results of the last calculation to context\n\n`return_results`()[source]\n\nreturn the results of the calculations. This should run through and produce output nodes even if everything failed, therefore it only uses results from context.\n\n`run_kkr`()[source]\n\nsubmit a KKR calculation\n\n`run_voronoi`()[source]\n\nrun the voronoi step calling voro_start workflow\n\n`start`()[source]\n\ninit context and some parameters\n\n`update_kkr_params`()[source]\n\nupdate set of KKR parameters (check for reduced mixing, change of mixing strategy, change of accuracy setting)\n\n`validate_input`()[source]\n\nValidate input and find out which path (1 or 2) to take. Return True means run voronoi, if False run kkr directly\n\nDensity of states\n\nIn this module you find the base workflow for a dos calculation and some helper methods to do so with AiiDA\n\nclass `aiida_kkr.workflows.dos.``kkr_dos_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain of a DOS calculation with KKR starting from the remoteData node of a previous calculation (either Voronoi or KKR).\n\nParameters: Return result_kkr_dos_wc: wf_parameters – (Dict); Workchain specifications options – (Dict); specifications for the computer remote_data – (RemoteData), mandatory; from a KKR or Voronoi calculation kkr – (Code), mandatory; KKR code running the dos calculation (Dict), Information of workflow results like Success, last result node, list with convergence behavior\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow.\n\n`get_dos`()[source]\n\nsubmit a dos calculation and interpolate result if returns complete\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_defaults dictionary. Can be used to easily create set of wf_parameters.
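In practice, `get_wf_defaults` is used to fetch the defaults and override a few entries before building the `wf_parameters` Dict node. A minimal sketch of that merge step on plain dicts (the `aiida.orm.Dict` node construction is omitted, and both the helper name and the keys shown are placeholders, not the real defaults):

```python
def build_wf_parameters(defaults, overrides):
    """Hypothetical helper: merge user overrides into the workflow
    defaults, rejecting keys that are not known default parameters."""
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise KeyError(f'unknown wf_parameters keys: {sorted(unknown)}')
    return {**defaults, **overrides}

defaults = {'nepts': 61, 'tempr': 200.0}  # placeholder values, not the real defaults
params = build_wf_parameters(defaults, {'tempr': 300.0})
print(params)  # {'nepts': 61, 'tempr': 300.0}
```

Rejecting unknown keys catches typos in user input early, before a calculation is submitted with silently ignored settings.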
returns _wf_defaults\n\n`return_results`()[source]\n\nCollect results, parse DOS output and link output nodes to workflow node\n\n`set_params_dos`()[source]\n\ntake input parameter node and change to DOS contour according to input from wf_parameter input. Internally calls the update_params work function to keep track of provenance\n\n`start`()[source]\n\ninit context and some parameters\n\n`validate_input`()[source]\n\nValidate input and find out which path (1 or 2) to take. Return True means run voronoi, if False run kkr directly\n\n`aiida_kkr.workflows.dos.``parse_dosfiles`(dos_retrieved)[source]\n\nparse dos files to XyData nodes\n\nBandstructure\n\nThis module contains the band structure workflow for KKR which is done by calculating the k-resolved spectral density also known as Bloch spectral function.\n\nclass `aiida_kkr.workflows.bs.``kkr_bs_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain for BandStructure calculation, starting from the RemoteData folder of the previous converged KKR calculation\n\ninputs: :param wf_parameters: (Dict), (optional); Workchain Specifications, contains nepts, tempr, emin (in eV relative to EF), emax (in eV),\n\nand RCLUSTZ (can be used to increase the screening cluster radius) keys.\nParameters: options – (Dict), (optional); Computer Specifications, scheduler command, parallel or serial kpoints – (KpointsData),(optional); Kpoints data type from the structure, but not mandatory as it can be extracted from structure internally from the remote data remote_data – (RemoteData)(mandatory); From the previous kkr-converged calculation.
kkr – (Code)(mandatory); KKR code specification label – (Str) (optional) ; label for WC but will be found in the “result_wf” output Dict as ‘BS_wf_label’ key description – (Str) (optional) : description for WC but will be found in the “result_wf” output Dict as ‘BS_wf_description’ key\n\nreturns: :out BS_Data : (ArrayData) ; Consists of BlochSpectralFunction, k_points (list), energy_points (list), special_kpoints(dict) :out result_wf: (Dict); work_chain_specifications node, BS_data node, remote_folder node\n\nclassmethod `define`(spec)[source]\n\nLayout of the workflow, defines the input nodes and the outline of the workchain\n\n`get_BS`()[source]\n\nsubmit the KkrCalculation with the qdos settings for a bandstructure calculation\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nReturn the default values of the workflow parameters (wf_parameters input node)\n\n`return_results`()[source]\n\nCollect results, parse BS_calc output and link output nodes to workflow node\n\n`set_params_BS`()[source]\n\nset kkr parameters for the bandstructure (i.e.
qdos) calculation\n\n`start`()[source]\n\nset up context of the workflow\n\n`validate_input`()[source]\n\nValidate input and find out which path (converged kkr calc or wf) to take. Return True means run voronoi, if False run kkr directly\n\n`aiida_kkr.workflows.bs.``parse_BS_data`(retrieved_folder, fermi_level, kpoints)[source]\n\nparse the qdos files from the retrieved folder and save as ArrayData\n\n`aiida_kkr.workflows.bs.``set_energy_params`(econt_new, ef, para_check)[source]\n\nset energy contour values to para_check. Internally converts from relative eV units to absolute Ry units\n\nEquation of states\n\nIn this module you find the base workflow for an EOS calculation and some helper methods to do so with AiiDA\n\n`aiida_kkr.workflows.eos.``get_primitive_structure`(structure, return_all)[source]\n\ncalls get_explicit_kpoints_path which gives the primitive structure. Auxiliary workfunction to keep provenance\n\nclass `aiida_kkr.workflows.eos.``kkr_eos_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain of an equation of states calculation with KKR.\n\nLayout of the workflow:\n1. determine V0, scale_range, etc. from input\n2. run voro_start for V0 and smallest volume\n2.1 get minimum for RMTCORE (needs to be fixed for all calculations to be able to compare total energies)\n3. submit kkr_scf calculations for all volumes using RMTCORE setting determined in step 2\n4. collect results\n`check_voro_out`()[source]\n\ncheck output of vorostart workflow and create input for rest of calculations (rmtcore setting etc.)\n\n`collect_data_and_fit`()[source]\n\ncollect output of KKR calculations and perform eos fitting to collect results\n\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow.\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_defaults dictionary. Can be used to easily create set of wf_parameters.
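The unit handling done by `set_energy_params` (eV relative to EF in, absolute Ry out) boils down to a single conversion. A sketch of that step — the function name and signature are illustrative, only the Ry/eV constant is standard:

```python
RY_TO_EV = 13.605693122994  # 1 Ry in eV (CODATA value)

def ev_rel_to_ry_abs(energy_ev_rel, ef_ry):
    """Convert an energy given in eV relative to the Fermi level
    into an absolute energy in Ry, given the Fermi level in Ry."""
    return ef_ry + energy_ev_rel / RY_TO_EV

# e.g. emin = -5 eV below EF, with EF at 0.4 Ry:
print(ev_rel_to_ry_abs(-5.0, 0.4))
```

Keeping the user-facing inputs in eV relative to EF while the code works in absolute Ry avoids forcing users to know the Fermi level of the parent calculation.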
returns _wf_defaults, _options_default\n\n`prepare_strucs`()[source]\n\ncreate new set of scaled structures using the ‘rescale’ workfunction (see end of the workflow)\n\n`return_results`()[source]\n\ncreate output dictionary and run output node generation\n\n`run_kkr_steps`()[source]\n\nsubmit KKR calculations for all structures, skip vorostart step for smallest structure\n\n`run_vorostart`()[source]\n\nrun vorostart workflow for smallest structure to determine rmtcore setting for all others\n\n`start`()[source]\n\ninitialize context and check input nodes\n\n`aiida_kkr.workflows.eos.``rescale`(inp_structure, scale)[source]\n\nRescales a crystal structure. Keeps the provenance in the database.\n\n:param inp_structure, a StructureData node (pk, or uuid) :param scale, float scaling factor for the cell\n\nReturns: New StructureData node with rescaled structure, which is linked to input Structure and None if inp_structure was not a StructureData\n\ncopied and modified from aiida_fleur.tools.StructureData_util\n\n`aiida_kkr.workflows.eos.``rescale_no_wf`(structure, scale)[source]\n\nRescales a crystal structure.
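The cell operation behind these rescale helpers can be sketched in a few lines. This assumes `scale` is a volume scaling factor applied as `scale**(1/3)` to each lattice vector (an assumption here — check the implementation copied from `aiida_fleur.tools.StructureData_util`); the StructureData handling and provenance links are omitted:

```python
def rescale_cell(cell, scale):
    """Hypothetical core of rescale: scale the 3x3 lattice-vector
    matrix so the cell volume changes by the factor `scale`."""
    factor = scale ** (1.0 / 3.0)
    return [[component * factor for component in vector] for vector in cell]

# expand a cubic cell to 110% of its volume:
new_cell = rescale_cell([[4.0, 0, 0], [0, 4.0, 0], [0, 0, 4.0]], scale=1.1)
```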
DOES NOT keep the provenance in the database.\n\n:param structure, a StructureData node (pk, or uuid) :param scale, float scaling factor for the cell\n\nReturns: New StructureData node with rescaled structure, which is linked to input Structure and None if inp_structure was not a StructureData\n\ncopied and modified from aiida_fleur.tools.StructureData_util\n\nFind Green Function writeout for KKRimp\n\nIn this module you find the base workflow for writing out the kkr_flexfiles and some helper methods to do so with AiiDA\n\nclass `aiida_kkr.workflows.gf_writeout.``kkr_flex_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain of a kkr_flex calculation to calculate the Green function with KKR starting from the RemoteData node of a previous calculation (either Voronoi or KKR).\n\nParameters: Return workflow_info: options – (Dict), Workchain specifications wf_parameters – (Dict), Workflow parameters that deviate from previous KKR RemoteData remote_data – (RemoteData), mandatory; from a converged KKR calculation kkr – (Code), mandatory; KKR code running the flexfile writeout impurity_info – Dict, mandatory: node specifying information of the impurities in the system (Dict), Information of workflow results like success, last result node, list with convergence behavior (RemoteData), host GF of the system\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow\n\n`get_flex`()[source]\n\nSubmit a KKRFLEX calculation\n\nclassmethod `get_wf_defaults`()[source]\n\nPrint and return _wf_defaults dictionary. Can be used to easily create set of wf_parameters. returns _wf_defaults\n\n`move_kkrflex_files`()[source]\n\nMove the kkrflex files from the remote folder to KkrimpCalculation._DIRNAME_GF_UPLOAD on the remote computer’s working dir. This skips retrieval to the file repository and reduces cluttering the database.\n\n`return_results`()[source]\n\nReturn the results of the KKRFLEX calculation.
This should run through and produce output nodes even if everything failed, therefore it only uses results from context.\n\n`set_params_flex`()[source]\n\nTake input parameter node and change to input from wf_parameter and options\n\n`start`()[source]\n\ninit context and some parameters\n\n`validate_input`()[source]\n\nValidate input\n\nKKRimp self-consistency\n\nIn this module you find the sub workflow for the kkrimp self consistency cycle and some helper methods to do so with AiiDA\n\n`aiida_kkr.workflows.kkr_imp_sub.``clean_raw_input`(successful, pks_calcs, dry_run=False)[source]\n\nClean raw_input directories that contain copies of shapefun and potential files This however breaks provenance (strictly speaking) and therefore should only be done for the calculations of a successfully finished workflow (see email on mailing list from 25.11.2019).\n\n`aiida_kkr.workflows.kkr_imp_sub.``clean_sfd`(sfd_to_clean, nkeep=30)[source]\n\nClean up potential file (keep only header) to save space in the repository WARNING: this breaks cachability!\n\n`aiida_kkr.workflows.kkr_imp_sub.``extract_imp_pot_sfd`(retrieved_folder)[source]\n\nExtract potential file from retrieved folder and save as SingleFileData\n\nclass `aiida_kkr.workflows.kkr_imp_sub.``kkr_imp_sub_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain of a kkrimp self consistency calculation starting from the host-impurity potential of the system. 
(Not the entire kkr_imp workflow!)\n\nParameters: Return workflow_info: options – (Dict), Workchain specifications wf_parameters – (Dict), specifications for the calculation host_imp_startpot – (RemoteData), mandatory; input host-impurity potential kkrimp – (Code), mandatory; KKRimp code converging the host-imp-potential remote_data – (RemoteData), mandatory; remote folder of a previous kkrflex calculation containing the flexfiles … kkrimp_remote – (RemoteData), remote folder of a previous kkrimp calculation impurity_info – (Dict), Parameter node with information about the impurity cluster (Dict), Information of workflow results like success, last result node, list with convergence behavior (SinglefileData), output potential of the system\n`condition`()[source]\n\ncheck convergence condition\n\n`convergence_on_track`()[source]\n\nCheck if convergence behavior of the last calculation is on track (i.e. going down)\n\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow\n\n`error_handler`()[source]\n\nCapture errors raised in validate_input\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_defaults dictionary.
Can be used to easily create set of wf_parameters.\n\nreturns _wf_defaults\n\n`inspect_kkrimp`()[source]\n\ncheck for convergence and store some of the results of the last calculation to context\n\n`return_results`()[source]\n\nReturn the results of the calculations This should run through and produce output nodes even if everything failed, therefore it only uses results from context.\n\n`run_kkrimp`()[source]\n\nsubmit a KKR impurity calculation\n\n`start`()[source]\n\ninit context and some parameters\n\n`update_kkrimp_params`()[source]\n\nupdate set of KKR parameters (check for reduced mixing, change of mixing strategy, change of accuracy setting)\n\n`validate_input`()[source]\n\nvalidate input and catch possible errors from the input\n\n`aiida_kkr.workflows.kkr_imp_sub.``remove_out_pot_impcalcs`(successful, pks_all_calcs, dry_run=False)[source]\n\nRemove out_potential file from all but the last KKRimp calculation if workflow was successful\n\nUsage:\n\n```imp_wf = load_node(266885) # maybe start with outer workflow\npk_imp_scf = imp_wf.outputs.workflow_info['used_subworkflows'].get('kkr_imp_sub')\nimp_scf_wf = load_node(pk_imp_scf) # this is now the imp scf sub workflow\nsuccessful = imp_scf_wf.outputs.workflow_info['successful']\npks_all_calcs = imp_scf_wf.outputs.workflow_info['pks_all_calcs']\n```\nKKRimp complete calculation\n\nIn this module you find the total workflow for a kkr impurity calculation and some helper methods to do so with AiiDA\n\nclass `aiida_kkr.workflows.kkr_imp.``kkr_imp_wc`(inputs=None, logger=None, runner=None, enable_persistence=True)[source]\n\nWorkchain of a kkrimp calculation starting either from scratch (with a structure and impurity_info node), or with a converged host potential and impurity startpotentials, … to calculate the converged host-impurity potential of the system.\n\nParameters: Return workflow_info: options – (Dict), Workchain specifications wf_parameters – (Dict), specifications for the kkr impurity workflow 
voro_aux_parameters – (Dict), specification for the auxiliary voronoi calculation for the impurity kkrimp – (Code), mandatory: KKRimp code converging the host-imp-potential kkr – (Code), mandatory: KKR code for calculating the host potential voronoi – (Code), mandatory: Voronoi code to generate the impurity startpot remote_data_gf – (RemoteData): remote folder of a previous kkrflex calculation containing the flexfiles … remote_data_host – (RemoteData): remote folder of a converged KKR host calculation (Dict), Information of workflow results (Dict), output parameters of the last called calculation (Dict), information of the last called calculation\n`construct_startpot`()[source]\n\nTake the output of GF writeout and the converged host potential as well as the auxiliary startpotentials for the impurity to construct the startpotential for the KKR impurity sub workflow\n\nclassmethod `define`(spec)[source]\n\nDefines the outline of the workflow\n\n`final_cleanup`()[source]\n\nRemove unneeded files to save space\n\n`get_ef_from_parent`()[source]\n\nExtract Fermi level in Ry to which starting potential is set\n\nclassmethod `get_wf_defaults`(silent=False)[source]\n\nPrint and return _wf_defaults dictionary.
Can be used to easily create set of wf_parameters.\n\nreturns _wf_defaults\n\n`has_starting_potential_input`()[source]\n\ncheck whether or not a starting potential needs to be created\n\n`return_results`()[source]\n\nReturn the results and create all of the output nodes\n\n`run_gf_writeout`()[source]\n\nRun the gf_writeout workflow to calculate the host Green’s function and the KKR flexfiles using the converged host remote folder and the impurity info node\n\n`run_kkrimp_scf`()[source]\n\nUses both the previously generated host-impurity startpotential and the output from the GF writeout workflow as inputs to run the kkrimp_sub workflow in order to converge the host-impurity potential\n\n`run_voroaux`()[source]\n\nPerform a voronoi calculation for every impurity charge using the structure from the converged KKR host calculation\n\n`start`()[source]\n\nInit context and some parameters\n\n`validate_input`()[source]\n\nValidate the input and catch possible errors from the input\n\n###### Calculation parsers¶\n\nThis section describes the different parsers classes for calculations.\n\nVoronoi Parser\nclass `aiida_kkr.parsers.voro.``VoronoiParser`(calc)[source]\n\nParser class for parsing output of voronoi code..\n\n`__init__`(calc)[source]\n\nInitialize the instance of Voronoi_Parser\n\n`parse`(debug=False, **kwargs)[source]\n\nParse output data folder, store results in database.\n\nParameters: retrieved – a dictionary of retrieved nodes, where the key is the link name nothing if everything is fine or an exit code defined in the voronoi calculation class\nKKRcode Parser\n\nParser for the KKR Code. 
The parser should never fail, but it should catch all errors and warnings and show them to the user.\n\nclass `aiida_kkr.parsers.kkr.``KkrParser`(calc)[source]\n\nParser class for parsing output of KKR code.\n\n`__init__`(calc)[source]\n\nInitialize the instance of KkrParser\n\n`parse`(debug=False, **kwargs)[source]\n\nParse output data folder, store results in database.\n\nParameters: retrieved – a dictionary of retrieved nodes, where the key is the link name a tuple with two values `(bool, node_list)`, where: `bool`: variable to tell if the parsing succeeded `node_list`: list of new nodes to be stored in the db (as a list of tuples `(link_name, node)`)\n`remove_unnecessary_files`()[source]\n\nRemove files that are not needed anymore after parsing. The information is completely parsed (i.e. in outdict of calculation) and keeping the file would just be a duplication.\n\nKKRcode - calculation importer Parser\n\nParser for the KKR importer, slight modification to the KKR parser (dealing with missing output files). The parser should never fail, but it should catch all errors and warnings and show them to the user.\n\nclass `aiida_kkr.parsers.kkrimporter.``KkrImporterParser`(calc)[source]\n\nParser class for parsing output of KKR code after import\n\n`__init__`(calc)[source]\n\nInitialize the instance of KkrParser\n\nKKRimp Parser\n\nParser for the KKR-impurity Code.
The parser should never fail, but it should catch all errors and warnings and show them to the user.\n\nclass `aiida_kkr.parsers.kkrimp.``KkrimpParser`(calc)[source]\n\nParser class for parsing output of the KKRimp code.\n\n`__init__`(calc)[source]\n\nInitialize the instance of KkrimpParser\n\n`cleanup_outfiles`(fileidentifier, keyslist)[source]\n\nopen file and remove unneeded output\n\n`final_cleanup`()[source]\n\nCreate a tarball of the rest.\n\n`parse`(debug=False, **kwargs)[source]\n\nParse output data folder, store results in database.\n\nParameters: retrieved – a dictionary of retrieved nodes, where the key is the link name\n`remove_unnecessary_files`()[source]\n\nRemove files that are not needed anymore after parsing. The information is completely parsed (i.e. in outdict of calculation) and keeping the file would just be a duplication.\n\n###### Tools¶\n\nHere the tools provided by `aiida_kkr` are described.\n\nCommon (work)functions that need aiida\n\nHere workfunctions and normal functions using aiida-stuff (typically used within workfunctions) are collected.\n\n`aiida_kkr.tools.common_workfunctions.``check_2Dinput_consistency`(structure, parameters)[source]\n\nCheck if structure and parameter data are complete and matching.\n\nParameters: input – structure, needs to be a valid aiida StructureData node input – parameters, needs to be valid aiida Dict node\n\nreturns (False, errormessage) if an inconsistency has been found, otherwise return (True, ‘2D consistency check
complete’)\n\n`aiida_kkr.tools.common_workfunctions.``extract_potname_from_remote`(parent_calc_folder)[source]\n\nextract the bname of the output potential from a RemoteData folder\n\n`aiida_kkr.tools.common_workfunctions.``find_cluster_radius`(structure, nclsmin, n_max_box=50, nbins=100)[source]\n\nTakes structure information (cell and site positions) and computes the minimal cluster radius needed such that all clusters around all atoms contain more than nclsmin atoms.\n\nNote: Here we assume spherical clusters around the atoms! structure – input structure for which the clusters are analyzed nclsmin – minimal number of atoms in the screening cluster n_max_box – maximal number of supercells in 3D volume nbins – number of bins in which the cluster number is analyzed minimal cluster radius needed in Angstroem minimal cluster radius needed in units of the lattice constant\n`aiida_kkr.tools.common_workfunctions.``generate_inputcard_from_structure`(parameters, structure, input_filename, parent_calc=None, shapes=None, isvoronoi=False, use_input_alat=False, vca_structure=False)[source]\n\nTakes information from parameter and structure data and writes input file ‘input_filename’\n\nParameters: parameters – input parameters node containing KKR-related input parameter structure – input structure node containing lattice information input_filename – input filename, typically called ‘inputcard’\n\noptional arguments :param parent_calc: input parent calculation node used to determine if EMIN\n\nparameter is automatically overwritten (from voronoi output) or not\nParameters: shapes – input shapes array (set automatically by aiida_kkr.calculations.Kkrcalculation and shall not be overwritten) isvoronoi – tell whether or not the parameter set is for a voronoi calculation or kkr calculation (have different lists of mandatory keys) use_input_alat – True/False, determines whether the input alat value is taken or the new alat is computed from the Bravais vectors assumes valid structure 
and parameters, i.e. for 2D case all necessary information has to be given. This is checked with function ‘check_2D_input’ called in aiida_kkr.calculations.Kkrcalculation\n`aiida_kkr.tools.common_workfunctions.``get_inputs_common`(calculation, code, remote, structure, options, label, description, params, serial, imp_info=None, host_GF=None, imp_pot=None, kkrimp_remote=None, host_GF_Efshift=None, **kwargs)[source]\n\nBase function common in get_inputs_* functions for different codes\n\n`aiida_kkr.tools.common_workfunctions.``get_inputs_kkr`(code, remote, options, label='', description='', parameters=None, serial=False, imp_info=None)[source]\n\nGet the input for a voronoi calc. Wrapper for KkrProcess setting structure, code, options, label, description etc. :param code: a valid KKRcode installation (e.g. input from Code.get_from_string(‘codename@computername’)) :param remote: remote directory of parent calculation (Voronoi or previous KKR calculation)\n\n`aiida_kkr.tools.common_workfunctions.``get_inputs_kkrimp`(code, options, label='', description='', parameters=None, serial=False, imp_info=None, host_GF=None, imp_pot=None, kkrimp_remote=None, host_GF_Efshift=None)[source]\n\nGet the input for a kkrimp calc. Wrapper for KkrimpProcess setting structure, code, options, label, description etc. :param code: a valid KKRimpcode installation (e.g. input from Code.get_from_string(‘codename@computername’)) TBD\n\n`aiida_kkr.tools.common_workfunctions.``get_inputs_kkrimporter`(code, remote, options, label='', description='', parameters=None, serial=False)[source]\n\nGet the input for a voronoi calc. Wrapper for KkrProcess setting structure, code, options, label, description etc.\n\n`aiida_kkr.tools.common_workfunctions.``get_inputs_voronoi`(code, structure, options, label='', description='', params=None, serial=True, parent_KKR=None)[source]\n\nGet the input for a voronoi calc. 
Wrapper for VoronoiProcess setting structure, code, options, label, description etc.\n\n`aiida_kkr.tools.common_workfunctions.``get_parent_paranode`(remote_data)[source]\n\nReturn the input parameter of the parent calculation giving the remote_data node\n\n`aiida_kkr.tools.common_workfunctions.``kick_out_corestates`(potfile, potfile_out, emin)[source]\n\nRead potential file and kick out all core states that lie higher than emin. If no core state lies higher than emin then the output potential will be the same as the input potential :param potfile: input potential :param potfile_out: output potential where some core states are kicked out :param emin: minimal energy above which all core states are kicked out from potential :returns: number of lines that have been deleted\n\n`aiida_kkr.tools.common_workfunctions.``kick_out_corestates_wf`(potential_sfd, emin)[source]\n\nWorkfunction that kicks out all core states from single file data potential that are higher than emin. :param potential_sfd: SinglefileData type of potential :param emin: Energy threshold above which all core states are removed from potential (Float) :returns: potential without core states higher than emin (SinglefileData)\n\n`aiida_kkr.tools.common_workfunctions.``neworder_potential_wf`(settings_node, parent_calc_folder, **kwargs)[source]\n\nWorkfunction to create database structure for aiida_kkr.tools.modify_potential.neworder_potential function A temporary file is written in a Sandbox folder on the computer specified via the input computer node before the output potential is stored as SinglefileData in the Database.\n\nParameters: settings_node – settings for the neworder_potential function (Dict) parent_calc_folder – parent calculation remote folder node where the input potential is retreived from (RemoteData) parent_calc_folder2 – optional, parent calculation remote folder node where the second input potential is retreived from in case ‘pot2’ and ‘replace_newpos’ are also set in settings_node 
(RemoteData); debug – optional, control whether or not debug information is written out (aiida.orm.Bool)

Returns: output_potential node (SinglefileData)

Note

The settings_node dictionary needs to be of the following form:

```settings_dict = {'neworder': [list of intended order in output potential]}
```

Optional entries are:

```'out_pot': '<filename_output_potential>' name of the output potential file, defaults to 'potential_neworder' if not specified
'pot1': '<filename_input_potential>' if not given we will try to find it from the type of the parent remote folder
'pot2': '<filename_second_input_file>'
'replace_newpos': [[position in neworder list which is replaced with potential from pot2, position in pot2 that is chosen for replacement]]
'switch_spins': [indices of atoms for which spins are exchanged] (indices refer to position in neworder input list)
'label': 'label_for_output_node'
'description': 'longer_description_for_output_node'
```

`aiida_kkr.tools.common_workfunctions.structure_from_params`(parameters)[source]

Construct an aiida structure out of a kkr parameter set (if ALATBASIS, RBASIS, ZATOM etc. are given).

Parameters: parameters – kkrparams object with structure information set (e.g. extracted from the read_inputcard function)

Returns: success, boolean to determine if structure creation was successful; structure, an aiida StructureData object

`aiida_kkr.tools.common_workfunctions.test_and_get_codenode`(codenode, expected_code_type, use_exceptions=False)[source]

Pass a code node and an expected code (plugin) type. Check that the code exists, is unique, and return the Code object.

Parameters: codenode – the name of the code to load (in the form label@machine); expected_code_type – a string with the plugin that is expected to be loaded.
In case no plugins exist with the given name, show all existing plugins of that type use_exceptions – if True, raise a ValueError exception instead of calling sys.exit(1) a Code object from kkr_scf workflow: if ‘voronoi’ in inputs: try: test_and_get_codenode(inputs.voronoi, ‘kkr.voro’, use_exceptions=True) except ValueError: error = (“The code you provided for voronoi does not “ “use the plugin kkr.voro”) self.control_end_wc(error)\n`aiida_kkr.tools.common_workfunctions.``update_params`(node, nodename=None, nodedesc=None, **kwargs)[source]\n\nUpdate parameter node given with the values given as kwargs. Returns new node.\n\nParameters: node – Input parameter node (needs to be valid KKR input parameter node). **kwargs – Input keys with values as in kkrparams. linkname – Input linkname string. Give link from old to new node a name . If no linkname is given linkname defaults to ‘updated parameters’ parameter node OutputNode = KkrCalculation.update_params(InputNode, EMIN=-1, NSTEPS=30) Keys are set as in kkrparams class. Check documentation of kkrparams for further information. If kwargs contain the key add_direct, then no kkrparams instance is used and no checks are performed but the dictionary is filled directly! By default nodename is ‘updated KKR parameters’ and description contains list of changed\n`aiida_kkr.tools.common_workfunctions.``update_params_wf`(parameternode, updatenode, **link_inputs)[source]\n\nWork function to update a KKR input parameter node. Stores new node in database and creates a link from old parameter node to new node Returns updated parameter node using update_params function\n\nNote: Input nodes need to be valid aiida Dict objects. parameternode – Input aiida Dict node cotaining KKR specific parameters updatenode – Input aiida Dict node containing a dictionary with the parameters that are supposed to be changed. If ‘nodename’ is contained in dict of updatenode the string corresponding to this key will be used as nodename for the new node. 
Otherwise a default name is used Similar for ‘nodedesc’ which gives new node a description updated_params = Dict(dict={‘nodename’: ‘my_changed_name’, ‘nodedesc’: ‘My description text’, ‘EMIN’: -1, ‘RMAX’: 10.}) new_params_node = update_params_wf(input_node, updated_params)\n`aiida_kkr.tools.common_workfunctions.``vca_check`(structure, parameters)[source]\nKKRimp tools\n\nTools for the impurity caluclation plugin and its workflows\n\n`aiida_kkr.tools.tools_kkrimp.``create_scoef_array`(structure, radius, h=-1, vector=[0.0, 0.0, 1.0], i=0, alat_input=None)[source]\n\nCreates the arrays that should be written into the ‘scoef’ file for a certain structure. Needed to conduct an impurity KKR calculation.\n\nParameters: structure – input structure of the StructureData type. radius – input cutoff radius in Ang. units. h – height of the cutoff cylinder (negative for spherical cluster shape). For negative values, clust_shape will be automatically assumed as ‘spherical’. If there will be given a h > 0, the clust_shape will be ‘cylindrical’. vector – orientation vector of the cylinder (just for clust_shape=’cylindrical’). i – atom index around which the cluster should be centered. Default: 0 (first atom in the structure). alat_input – input lattice constant in Ang. If None use the lattice constant that is automatically found. Otherwise rescale everything.\n`aiida_kkr.tools.tools_kkrimp.``find_neighbors`(structure, structure_array, i, radius, clust_shape='spherical', h=0.0, vector=[0.0, 0.0, 1.0])[source]\n\nApplies periodic boundary conditions and obtains the distances between the selected atom i in the cell and all other atoms that lie within a cutoff radius r_cut. 
Afterwards an numpy array with all those atoms including atom i (x_res) will be returned.\n\nParameters: structure – input parameter of the StructureData type containing the three bravais lattice cell vectors structure_array – input numpy structure array containing all the structure related data i – centered atom at which the origin lies (same one as in select_reference) radius – Specifies the radius of the cylinder or of the sphere, depending on clust_shape. Input in units of the lattice constant. clust_shape – specifies the shape of the cluster that is used to determine the neighbors for the ‘scoef’ file. Default value is ‘spherical’. Other possible forms are ‘cylindrical’ (‘h’ and ‘orient’ needed), … . h – needed for a cylindrical cluster shape. Specifies the height of the cylinder. Default=0. Input in units of the lattice constant. vector – needed for a cylindrical cluster shape. Specifies the orientation vector of the cylinder. Default: z-direction. array with all the atoms within the cutoff (x_res) dynamical box construction (r_cut determines which values n1, n2, n3 have) different cluster forms (spherical, cylinder, …), add default parameters, better solution for ‘orient’\n`aiida_kkr.tools.tools_kkrimp.``get_distance`(structure_array, i, j)[source]\n\nCalculates and returns the distances between to atoms i and j in the given structure_array\n\nParameters: structure_array – input numpy array of the cell containing all the atoms ((# of atoms) x 6-matrix) indices of the atoms for which the distance should be calculated (indices again start at 0) distance between atoms i and j in units of alat\n`aiida_kkr.tools.tools_kkrimp.``get_structure_data`(structure)[source]\n\nFunction to take data from AiiDA’s StructureData type and store it into a single numpy array of the following form: a = [[x-Position 1st atom, y-Position 1st atom, z-Position 1st atom, index 1st atom, charge 1st atom, 0.],\n\n[x-Position 2nd atom, y-Position 2nd atom, z-Position 2nd atom, index 2nd 
atom, charge 1st atom, 0.], […, …, …, …, …, …], … ]\nParameters: structure – input structure of the type StructureData numpy array a[# of atoms in the unit cell] containing the structure related data (positions in units of the unit cell length)\n`aiida_kkr.tools.tools_kkrimp.``make_scoef`(structure, radius, path, h=-1.0, vector=[0.0, 0.0, 1.0], i=0, alat_input=None)[source]\n\nCreates the ‘scoef’ file for a certain structure. Needed to conduct an impurity KKR calculation.\n\nParameters: structure – input structure of the StructureData type. radius – input cutoff radius in Ang. units. h – height of the cutoff cylinder (negative for spherical cluster shape). For negative values, clust_shape will be automatically assumed as ‘spherical’. If there will be given a h > 0, the clust_shape will be ‘cylindrical’. vector – orientation vector of the cylinder (just for clust_shape=’cylindrical’). i – atom index around which the cluster should be centered. Default: 0 (first atom in the structure). alat_input – input lattice constant in Ang. If None use the lattice constant that is automatically found. Otherwise rescale everything.\nclass `aiida_kkr.tools.tools_kkrimp.``modify_potential`[source]\n\nClass for old modify potential script, ported from modify_potential script, initially by D. Bauer\n\n`__weakref__`\n\nlist of weak references to the object (if defined)\n\n`neworder_potential`(potfile_in, potfile_out, neworder, potfile_2=None, replace_from_pot2=None, debug=False)[source]\n\nRead potential file and new potential using a list describing the order of the new potential. 
If a second potential is given as input together with an index list, then the corresponding of the output potential are overwritten with positions from the second input potential.\n\nParameters: potfile_in (str) – absolute path to input potential potfile_out (str) – absolute path to output potential neworder (list) – list after which output potential is constructed from input potential potfile_2 (str) – optional, absolute path to second potential file if positions in new list of potentials shall be replaced by positions of second potential, requires replace_from_pot to be given as well replace_from_pot (list) – optional, list containing tuples of (position in newlist that is to be replaced, position in pot2 with which position is replaced) modify_potential().neworder_potential(, , [])\n`shapefun_from_scoef`(scoefpath, shapefun_path, atom2shapes, shapefun_new)[source]\n\nRead shapefun and create impurity shapefun using scoef info and shapes array\n\nParameters: scoefpath – absolute path to scoef file shapefun_path – absolute path to input shapefun file shapes – shapes array for mapping between atom index and shapefunction index shapefun_new – absolute path to output shapefun file to which the new shapefunction will be written\n`aiida_kkr.tools.tools_kkrimp.``rotate_onto_z`(structure, structure_array, vector)[source]\n\nRotates all positions of a structure array of orientation ‘orient’ onto the z-axis. Needed to implement the cylindrical cutoff shape.\n\nParameters: structure – input structure of the type StructureData structure_array – input structure array, obtained by select_reference for the referenced system. vector – reference vector that has to be mapped onto the z-axis. 
rotated system, now the ‘orient’-axis is aligned with the z-axis\n`aiida_kkr.tools.tools_kkrimp.``select_reference`(structure_array, i)[source]\n\nFunction that references all of the atoms in the cell to one particular atom i in the cell and calculates the distance from the different atoms to atom i. New numpy array will have the form: x = [[x-Position 1st atom, y-Position 1st atom, z-Position 1st atom, index 1st atom, charge 1st atom,\n\ndistance 1st atom to atom i],\n[x-Position 2nd atom, y-Position 2nd atom, z-Position 2nd atom, index 2nd atom, charge 1st atom,\ndistance 1st atom to atom i],\n\n[…, …, …, …, …, …], … ]\n\nParameters: structure_array – input array of the cell containing all the atoms (obtained from get_structure_data) i – index of the atom which should be the new reference new structure array with the origin at the selected atom i (for KKRimp: impurity atom) the first atom in the structure_array is labelled with 0, the second with 1, …\n`aiida_kkr.tools.tools_kkrimp.``write_scoef`(x_res, path)[source]\n\nSorts the data from find_neighbors with respect to the distance to the selected atom and writes the data correctly formatted into the file ‘scoef’. 
Additionally the total number of atoms in the list is written out in the first line of the file.

Parameters: x_res – array of atoms within the cutoff radius obtained by find_neighbors (unsorted)

Returns: scoef file with the total number of atoms in the first line, then with the formatted positions, indices, charges and distances in the subsequent lines.

`aiida_kkr.tools.tools_kkrimp.write_scoef_full_imp_cls`(imp_info_node, path, rescale_alat=None)[source]

Write scoef file from imp_cls info in imp_info_node.

Plotting tools

Contains the plot_kkr class for node visualization.

`aiida_kkr.tools.plot_kkr._check_tk_gui`(static)[source]

Check if a tk gui can be opened, otherwise reset static to False. This is only needed if we are not inside a notebook.

`aiida_kkr.tools.plot_kkr._has_ase_notebook`()[source]

Helper function to check if ase_notebook is installed.

`aiida_kkr.tools.plot_kkr._in_notebook`()[source]

Helper function to check if code is executed from within a jupyter notebook; this is used to change to a different default visualization.

`aiida_kkr.tools.plot_kkr.plot_imp_cluster`(kkrimp_calc_node, **kwargs)[source]

Plot impurity cluster from a KkrimpCalculation node.

These kwargs can be used to control the behavior of the plotting tool:

kwargs = {
static = False, # make gui or static (svg) images
canvas_size = (300, 300), # size of the canvas
zoom = 1.0, # zoom, set to >1 (<1) to zoom in (out)
atom_opacity = 0.95, # set opacity level of the atoms, useful for overlapping atoms
rotations = "-80x,-20y,-5z", # rotation in degrees around x,y,z axes
show_unit_cell = True, # show the unit cell of the host
filename = 'plot_kkr_out_impstruc.svg' # filename used for the export of a static svg image
}

class `aiida_kkr.tools.plot_kkr.plot_kkr`(nodes=None, **kwargs)[source]

Class grouping all functionality to plot typical nodes (calculations, workflows, …) of the aiida-kkr plugin.

Parameters: nodes – node identifier which is
to be visualized\n\noptional arguments:\n\nParameters: silent (bool) – print information about input node including inputs and outputs (default: False) strucplot (bool) – plot structure using ase’s view function (default: False) interpol (bool) – use interpolated data for DOS plots (default: True) all_atoms (bool) – plot all atoms in DOS plots (default: False, i.e. plot total DOS only) l_channels (bool) – plot l-channels in addition to total DOS (default: True, i.e. plot all l-channels) sum_spins (bool) – sum up both spin channels or plot both? (default: False, i.e. plot both spin channels) logscale – plot rms and charge neutrality curves on a log-scale (default: True) switch_xy (bool) – (default: False) iatom (list) – list of atom indices which are supposed to be plotted (default: [], i.e. show all atoms)\n\nadditional keyword arguments are passed onto the plotting function which allows, for example, to change the markers used in a DOS plot to crosses via marker=’x’\n\nUsage: plot_kkr(nodes, **kwargs)\n\nwhere nodes is a node identifier (the node itself, it’s pk or uuid) or a list of node identifiers.\n\nNote: If nodes is a list of nodes then the plots are grouped together if possible.\n`__init__`(nodes=None, **kwargs)[source]\n\nInitialize self. See help(type(self)) for accurate signature.\n\n`__weakref__`\n\nlist of weak references to the object (if defined)\n\n`classify_and_plot_node`(node, return_name_only=False, **kwargs)[source]\n\nFind class of the node and call plotting function.\n\n`dosplot`(d, natoms, nofig, all_atoms, l_channels, sum_spins, switch_xy, switch_sign_spin2, **kwargs)[source]\n\nplot dos from xydata node\n\n`get_node`(node)[source]\n\nGet node from pk or uuid\n\n`get_rms_kkrcalc`(node, title=None)[source]\n\nextract rms etc from kkr Calculation. 
Works for both finished and still running Calculations.\n\n`group_nodes`(nodes)[source]\n\nGo through list of nodes and group them together.\n\n`make_kkrimp_rmsplot`(rms_all, stot_all, pks_all, rms_goal, ptitle, **kwargs)[source]\n\nplot rms and total spin moment of kkrimp calculation or series of kkrimp calculations\n\n`plot_group`(groupname, nodesgroups, **kwargs)[source]\n\nVisualize all nodes of one group.\n\n`plot_kkr_calc`(node, **kwargs)[source]\n\nplot things for a kkr Calculation node\n\n`plot_kkr_dos`(node, **kwargs)[source]\n\nplot outputs of a kkr_dos_wc workflow\n\n`plot_kkr_eos`(node, **kwargs)[source]\n\nplot outputs of a kkr_eos workflow\n\n`plot_kkr_scf`(node, **kwargs)[source]\n\nplot outputs of a kkr_scf_wc workflow\n\n`plot_kkr_single_node`(node, **kwargs)[source]\n\nTODO docstring\n\n`plot_kkr_startpot`(node, **kwargs)[source]\n\nplot output of kkr_startpot_wc workflow\n\n`plot_kkrimp_calc`(node, return_rms=False, return_stot=False, plot_rms=True, **kwargs)[source]\n\nplot things from a kkrimp Calculation node\n\n`plot_kkrimp_dos_wc`(node, **kwargs)[source]\n\nplot things from a kkrimp_dos workflow node\n\n`plot_kkrimp_sub_wc`(node, **kwargs)[source]\n\nplot things from a kkrimp_sub_wc workflow\n\n`plot_kkrimp_wc`(node, **kwargs)[source]\n\nplot things from a kkrimp_wc workflow\n\n`plot_struc`(node, **kwargs)[source]\n\nvisualize structure using ase’s view function\n\n`plot_voro_calc`(node, **kwargs)[source]\n\nplot things for a voro Calculation node\n\n`print_clean_inouts`(node)[source]\n\nprint inputs and outputs of nodes without showing ‘CALL’ and ‘CREATE’ links in workflows.\n\n`rmsplot`(rms, neutr, nofig, ptitle, logscale, only=None, rename_second=None, **kwargs)[source]\n\nplot rms and charge neutrality\n\n`aiida_kkr.tools.plot_kkr.``save_fig_to_file`(kwargs, filename0='plot_kkr_out.png')[source]\n\nsave the figure as a png file look for filename and static in kwargs save only if static is True after _check_tk_gui check to make it work in 
the command line script

`aiida_kkr.tools.plot_kkr.strucplot_ase_notebook`(struc, **kwargs)[source]

Plotting function for an aiida structure using ase_notebook visualization.
https://www.factiverse.com/java-programming-for-complete-beginners/ch9-1/
## Introduction

Arrays are collections of values. Whereas a regular variable can only hold one value at a time, an array can hold multiple values. As a comparison, this is what a regular `int` variable looks like:

`int x = 10;`

And this is what an `int` array looks like:

`int[] x = {7, 319, 29, 107, 480};`

Whereas `x` in the first example contains only one value (10), `x` in the second example contains five values (7, 319, 29, 107, 480). The square brackets (`[]`) after the data type are how you declare an array variable instead of a normal one. The curly brackets (`{}`) contain the values to be stored in the array, separated by commas. Values in arrays are called elements and an array can hold as few or as many elements as needed. The number of elements an array contains is called the array’s length or size (once an array is created, its size cannot be changed). Every element in an array is numbered sequentially, starting at 0. This is called the index number. The figure below shows how you can imagine array `x`.

You access elements via their index number in square brackets. For example, `x[2]` means “index 2 of array `x`”, which is 29. This is what it looks like in a program:

```
public class Arrays1 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480};
System.out.println(x[2]);
}
}
```

```
[Run]
29
```

Line 3 creates an int array called `x`. It contains the five elements from above. Line 4 prints `x[2]` (29).

In general, we can do with array elements what can be done with regular variables. See if you can follow the program below.

```
public class Arrays2 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480};
System.out.println(x[3]);
x[3] = 89;
int a = x[0];
System.out.print(a + x[4] - x[3]);
}
}
```

```
[Run]
107
398
```

Line 3 is the same as before. Line 4 prints index 3 (107). On line 5, index 3 is reassigned (107 is overwritten by 89). Line 6 copies index 0 into variable `a` (`x[0]` is 7 so `a` is also 7). Line 7 calculates and prints `a + x[4] - x[3]`, which is 7 + 480 - 89, resulting in 398.

What happens if you try accessing an index that doesn’t exist, like index 5?

```
public class Arrays3 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480};
System.out.println(x[5]);
}
}
```

In this case you get an “index out of bounds” error:

```
[Run]
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Index 5 out of bounds for length 5
at Arrays3.main(Arrays3.java:4)
```

### Array Types

You can create arrays of any data type. The figure below shows regular variables and their array counterparts, both with arbitrary values.

As you can see, arrays can hold multiple values of their respective type. All arrays use the same indexing scheme starting at 0. For example, in row five, the String array `z` holds three strings, and an index expression (for example `z[1]`) refers to a single one of those strings, such as “Howdy” in the figure. The arrays are created using an initialiser list. That is, values between curly brackets (`{}`). Conversely, the figure below shows how to create the same type of arrays the classic way. The Array Definition column shows how to create arrays of different types using the `new` keyword. For example, in row 1, the code `new int[5]` creates an `int` array with 5 elements. Like previous examples, the array is stored in a variable (`v`). This variable’s type is `int[]`, meaning it can hold an `int` array. The Default Values column shows what the array actually looks like when created in this way. Every element is given a default value depending on its type.

Moving down the rows, we can see that the default value for an `int` array is 0, a `double` array is 0.0, a `char` array is `'\0'`, a `boolean` array is `false`, and a `String` array is `null`. The value `null` basically means no data (`'\0'` is the `char`’s equivalent of `null`).
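To see those defaults in action, here is a small test program (the class and variable names are mine, not from the figures above). Each array is created with `new` and an element is printed without ever being assigned:

```java
public class DefaultValues {
    public static void main(String[] args) {
        int[] v = new int[5];         // every element defaults to 0
        double[] w = new double[3];   // every element defaults to 0.0
        boolean[] y = new boolean[2]; // every element defaults to false
        String[] z = new String[2];   // every element defaults to null

        System.out.println(v[0]); // prints 0
        System.out.println(w[2]); // prints 0.0
        System.out.println(y[1]); // prints false
        System.out.println(z[0]); // prints null
    }
}
```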
We’ll look into `null` a little later.

## Looping Through Arrays

Often, we want to perform some action on each element in an array. For example, we may want to print each element. We could do this manually like so:

```
public class Arrays1 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480};
System.out.println(x[0]);
System.out.println(x[1]);
System.out.println(x[2]);
System.out.println(x[3]);
System.out.println(x[4]);
}
}
```

```
[Run]
7
319
29
107
480
```

Lines 4-8 print out each element individually. But what if we had hundreds or thousands of elements in the array? We certainly don’t want to write thousands of print statements! This is where loops come in. We can write a loop that prints the next index on each iteration. Remember in the previous chapter how we used `i` in loops to count up or down (0 to 9 for instance). Since array indexes (or indices) are also numbered, we can use `i` in place of the index number to access every array element automatically. For example:

```
public class Arrays3 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480};
for (int i = 0; i < 5; i++) {
System.out.println(x[i]);
}
}
}
```

```
[Run]
7
319
29
107
480
```

This program does the same thing as the previous one but using a loop. Lines 4-6 form a for loop. Line 4 tells us that the loop starts with `i` at 0 and stops when `i` is 5. This means `i` will go through numbers 0, 1, 2, 3, 4. Thus, `x[i]` (line 5) will give each subsequent element in the array. Just so it’s clear, let’s go through it manually:

1. Line 3 creates an array of five elements.
2. Line 4 is the start of the for loop. In it, variable `i` is declared and set to 0. The condition is checked (`i < 5`), which is true. Line 5 prints `x[i]`. Since `i` is 0, this is the same as `x[0]`, so 7 is printed.
3. `i++` increases `i` (so `i` = 1). `i < 5` is true. Line 5 prints `x[i]` (`x[1]`) so 319 is printed.
4. `i++` increases `i` (so `i` = 2). `i < 5` is true. Line 5 prints `x[i]` (`x[2]`) so 29 is printed.
5. `i++` increases `i` (so `i` = 3). `i < 5` is true. Line 5 prints `x[i]` (`x[3]`) so 107 is printed.
6. `i++` increases `i` (so `i` = 4). `i < 5` is true. Line 5 prints `x[i]` (`x[4]`) so 480 is printed.
7. `i++` increases `i` (so `i` = 5). `i < 5` is false. The loop ends and so does the program.

The above program works but not only is it inflexible, it’s also brittle. If the array contained more than five elements, we’d have to remember to change the loop condition otherwise not all elements will be printed. Even worse, if the array contained fewer than five elements and we forgot to amend the condition then the program would crash since the loop would go out of bounds of the array. We need a way to tell the loop to repeat the same number of times as the number of elements in the array. This is actually quite easy. Arrays have an internal `length` variable that keeps track of the number of elements. We can read this variable directly as demonstrated in the following program.

```
public class Arrays1 {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480, 54, 1833};
for (int i = 0; i < x.length; i++) {
System.out.println(x[i]);
}
}
}
```

The condition has been changed to `i < x.length` so the loop will always repeat the appropriate number of times. I’ve added two more elements to `x` making a total of seven elements. Therefore, the loop condition is basically `i < 7`, ergo the loop repeats seven times printing indexes 0 to 6.

```
[Run]
7
319
29
107
480
54
1833
```

## Now in Reverse

Now let’s loop through the array backwards. In order to do so, we need `i` to start at the last index, decrease by 1 each iteration, and stop when it reaches index 0.
For instance:\n\n```public class Arrays1 {\npublic static void main(String[] args) {\nint[] x = {7, 319, 29, 107, 480, 54, 1833};\nfor (int i = x.length - 1; i >= 0; i--) {\nSystem.out.println(x[i]);\n}\n}\n}```\n```[Run]\n1833\n54\n480\n107\n29\n319\n7```\n\nAll three parts of the for loop need to change in order to loop through the array backwards. Let’s look at the first part, `int i = x.length – 1`. This tells us that `i` will start at `x.length – 1`, which is 7 – 1, which is 6 (the last index). Note that an array’s length minus one will always give the last index in an array, of any size. The condition `i >= 0` tells us that the loop will repeat as long as `i` is greater than or equal to 0. The third part `i--` will decrease `i` every iteration so that `i` goes through 6, 5, 4, 3, 2, 1, 0. It also decreases once more to -1 but this causes the condition to be false, so the loop ends (otherwise line 5 would give an out of bounds error).\n\n## Simple Array Processing\n\nIterating over arrays is very useful in programming because it allows us to write common code for all elements. For example, the program below only prints elements that are greater than 100.\n\n```public class GreaterThan100 {\npublic static void main(String[] args) {\nint[] x = {7, 319, 29, 107, 480, 54, 1833};\nfor (int i = 0; i < x.length; i++) {\nif (x[i] > 100) {\nSystem.out.println(x[i]);\n}\n}\n}\n}```\n```[Run]\n319\n107\n480\n1833```\n\nAs we can see, values 319, 107, 480, and 1833 are printed whereas 7, 29, and 54 are not. This is due to the if statement on line 5 that checks if `x[i]` is greater than 100 and therefore only prints the number if true. Here is a rundown of the steps the program takes:\n\n1. Line 4: `int i = 0``i < x.length` is true.\n2. Line 5: `x[i] > 100` (7 > 100) is false.\n3. Line 4: `i++` (`i` is 1) → `i < x.length` is true.\n4. Line 5: `x[i] > 100` (319 > 100) is true.\n5. Line 6: Print `x[i]` (319).\n6. Line 4: `i++` (`i` is 2) → `i < x.length` is true.\n7. 
Line 5: `x[i] > 100` (29 > 100) is false.\n8. Line 4: `i++` (`i` is 3) → `i < x.length` is true.\n9. Line 5: `x[i] > 100` (107 > 100) is true.\n10. Line 6: Print `x[i]` (107).\n11. And so on…\n\nIf you’re having trouble untangling the program in your head then think about it this way: at this point we can basically ignore the loop because all it does is cause `i` to go from 0 to 7. So, let’s focus purely on the if statement (line 5). `x[i]` is going to be `x[0]` (7), then `x[1]` (319), then `x[2]` (29), then `x[3]` (107), etc. Therefore, the if statement goes through every element in the array to see if it’s greater than 100. If true, it gets printed (line 6); if false, it doesn’t. That’s it, really. Another way to think about it is that `x[i]` is like a placeholder for each element in the array, where each element is substituted one at a time.\n\n## Arrays Are Reference Types\n\nThere are two categories of (data) types: primitive types and reference types. Recall that there are only eight primitive types: `byte`, `short`, `int`, `long`, `float`, `double`, `char`, and `boolean`. All other types are reference types—these include all the reference types we’ve looked at so far such as `String`, `Scanner`, and `DecimalFormat`, as well as arrays themselves. The key difference between the two types is that variables of primitive types contain their values directly whereas variables of reference types contain their values indirectly. What does this mean in practice? Take the following code:\n\n```int a = 5;\nint b = a;```\n\nFirst, `a` is set to 5. Then, `b` is set to `a`. This means the value of `a` gets copied into `b`, i.e. 5 gets copied into `b`. Both `a` and `b` end up being 5. We can picture `a` and `b` as two separate boxes, each holding its own copy of 5.\n\nLet’s now try the same thing with arrays:\n\n```int[] c = {5, 14, 11};\nint[] d = c;```\n\nArray `c` contains arbitrary values 5, 14, and 11. Underneath, `d` is set to `c`.
It’s reasonable to think that the array {5, 14, 11} gets copied into `d`, so both `c` and `d` end up being {5, 14, 11}. But this isn’t exactly what happens because array variables do not contain arrays directly. In reality, the array {5, 14, 11} is stored in some memory location and the variable `c` contains a reference to it (a reference is akin to a memory address). In other words, `c` contains the location of the array, not the array itself. This means when `d` is set to `c`, the reference in `c` is copied into `d` so both variables end up referencing the same array.\n\nAgain, what `c` contains is a reference (say, 0x5A41) to the array. When `d` is set to `c`, this reference is copied into `d`. Therefore, both variables end up referencing the same array. This means any changes made to the array in `d` will be reflected in `c` and vice versa, because they are one and the same. Let’s look at a program that demonstrates this:\n\n```import java.util.Arrays;\n\npublic class ArrayReference {\npublic static void main(String[] args) {\nint[] c = {5, 14, 11};\nint[] d = c;\n\nd[1] = 75;\n\nSystem.out.println(Arrays.toString(c));\nSystem.out.println(Arrays.toString(d));\n}\n}```\n```[Run]\n[5, 75, 11]\n[5, 75, 11]```\n\nLines 5 and 6 recreate the two arrays from the example. Then, line 8 changes index 1 in `d` to 75 (originally 14). Lines 10 and 11 print out the contents of both arrays. As we can see from the output, both `c` and `d` show 75 even though it was only `d` that was altered on line 8. Again, it’s because there’s only one array and both `c` and `d` hold a reference to it, so a change to one is a change to the other. And, by the way, for printing the two arrays on lines 10 and 11, I used a handy method from the `Arrays` class called `toString` (which is why the program imports `java.util.Arrays` at the top). This method takes an array and returns a human-readable string of its contents i.e. “[5, 75, 11]”.
This saves us from having to manually loop through both arrays just to print out their elements.\n\n## Multidimensional Arrays\n\nMultidimensional arrays are arrays of more than 1 dimension. So far, we’ve only looked at 1-dimensional (1D) arrays, which can be thought of as a sequence of values.\n\n### Two-dimensional Arrays\n\nA 2-dimensional array is defined by using two pairs of square brackets:\n\n`int[][] a2d = new int[4][5];`\n\n`a2d` is a 4 by 5 array. 2D arrays can be imagined as a grid of values with rows and columns. By default, all values are 0, so `a2d` is a grid of twenty zeros (4 rows of 5).\n\n2D arrays aren’t that much more complicated than 1D arrays; we just need to remember that we’re dealing with a 2D grid and not a 1D line, thus we need two pairs of square brackets instead of one. For example, the following line changes the element at row index 1, column index 2 to 5:\n\n`a2d[1][2] = 5;`\n\nWe can also think of 2D arrays as an array of arrays (of values). We get a good visualisation of this by creating the same 4 x 5 array using an initialiser list:\n\n`int[][] a2d = { {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0} };`\n\n`a2d` contains 4 arrays. Each array itself contains 5 values (all zeros).\n\nLet’s change the element at `[2][2]` to a different value:\n\n`a2d[2][2] = 91;`\n\nThe first index specifies an array. The second index specifies a value in that array. Now `a2d` looks like this:\n\n`{ {0, 0, 0, 0, 0}, {0, 0, 0, 0, 0}, {0, 0, 91, 0, 0}, {0, 0, 0, 0, 0} }`\n\nRemember that array indexing starts at 0, so `[2][2]` really means the 3rd value of the 3rd array.\n\nHow about changing the element at `[0][3]`:\n\n`a2d[0][3] = -23;`\n`{ {0, 0, 0, -23, 0}, {0, 0, 0, 0, 0}, {0, 0, 91, 0, 0}, {0, 0, 0, 0, 0} }`\n\n### Three-dimensional Arrays\n\nArrays can be of any number of dimensions—3D, 4D, 5D, 10D, etc. The more dimensions, the more square brackets.
A 3D array is defined with 3 pairs of square brackets:\n\n`int[][][] a3d = new int[2][4][3];`\n\nAnd this is the same array created using an initialiser list:\n\n`int[][][] a3d = { { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} }, { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} } };`\n\nYou can imagine a 3D array as a cube of values, whereby you go along the x, y, and z axes. Or you can imagine it as an array of arrays of arrays of values (yikes). This isn’t so bad if we break it down like before.\n\nThis is the whole array:\n\n`{ { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} }, { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} } }`\n\nInside it are 2 arrays:\n\n`{ { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} }, { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} } }`\n\nInside each of those are 4 arrays:\n\n`{ { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} }, { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} } }`\n\nInside each of those are 3 values:\n\n`{ { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} }, { {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0} } }`\n\nTherefore element `[1][2][0]`, for example, refers to the 1st value of the 3rd array inside the 2nd outer array.\n\n## Foreach Loop\n\nA foreach loop can be used in place of a for loop when you want to loop through an array. A foreach loop simplifies the syntax and can make code easier to read. As an example, the following program uses a classic for loop to output the contents of an array.\n\n```public class ClassicForLoop {\npublic static void main(String[] args) {\ndouble[] someArray = {25.92, 1.34, -76.24, 833.11, 76.084};\n\nfor (int i = 0; i < someArray.length; i++) {\nSystem.out.println(someArray[i]);\n}\n}\n}```\n```[Run]\n25.92\n1.34\n-76.24\n833.11\n76.084```\n\nLine 3 creates an array called `someArray`, which contains five elements.
The for loop underneath iterates over (loops through) the array, where line 6 simply prints out each element.\n\nNow let’s do the same thing but with a foreach loop:\n\n```public class ForEachLoop {\npublic static void main(String[] args) {\ndouble[] someArray = {25.92, 1.34, -76.24, 833.11, 76.084};\n\nfor (double e : someArray) {\nSystem.out.println(e);\n}\n}\n}```\n\nHere, `someArray` is the array to iterate over and `e` is each element in the array. A colon separates the two. You can read it as “For each `e` (element) in `someArray`, do whatever’s in the body”. To elaborate, `e` is simply a variable. Every time the loop repeats, `e` will contain the next element from `someArray`. So, on the first iteration, `e` is 25.92, the second iteration `e` is 1.34, the third iteration `e` is -76.24, and so on, until the end. On each iteration, `e` is printed on line 6. A foreach loop always repeats the same number of times as the number of elements in the array, so you don’t have to worry about it going out of bounds. Furthermore, `e` is a `double` because that’s what the array contains, a set of `double`s. If `someArray` were a `String` array, you would have to make `e` a `String` variable to hold each element. Also, you don’t have to call it `e`; you can call it anything you like, with it being a variable and all."
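To round the chapter off, here’s one more example of my own (not from the original text): a foreach loop that sums the array instead of printing it. It also illustrates a design point worth knowing: `e` receives a copy of each element, so assigning to `e` inside the loop would not modify the array itself.

```java
public class ForEachSum {
public static void main(String[] args) {
int[] x = {7, 319, 29, 107, 480, 54, 1833};

// Add every element to a running total.
int sum = 0;
for (int e : x) {
sum += e; // e holds a copy of the current element
}
System.out.println(sum); // prints 2829
}
}
```

If you instead wanted to, say, double every element, you’d need a classic for loop with `x[i] = x[i] * 2;`, because writing to `e` only changes the copy.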
https://www.factiverse.com/java-programming-for-complete-beginners/ch9-1/
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapters-1-7-cumulative-review-exercises-page-529/10
[ "## Basic College Mathematics (10th Edition)\n\n$\\frac{381}{50000}$\n$2.54*0.003 = \\frac{254}{100} * \\frac{3}{1000} = \\frac{762}{100000} = \\frac{381}{50000}$" ]
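The arithmetic above can be sanity-checked numerically; a quick sketch (not part of the textbook solution):

```java
public class FractionCheck {
public static void main(String[] args) {
double product = 2.54 * 0.003; // 0.00762
double fraction = 381.0 / 50000.0; // the simplified fraction
// Allow for floating-point rounding when comparing.
System.out.println(Math.abs(product - fraction) < 1e-12); // prints true
}
}
```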
https://algebra-calculators.com/difference-between-area-and-volume/
[ "# Difference Between Area and Volume\n\nArea vs Volume\n\nAs we know, geometry is the study of shapes. It deals with plane shapes and solid shapes. We calculate different quantities associated with shapes, like length, width, height, area, perimeter, volume, etc. Area and volume are two important concepts used in our daily life. We see many shapes around us, like squares, rectangles, circles, polygons, etc. Every shape has its unique properties and measurements. Hence every shape has a different area and volume, based on its measurements. So here on this page, we will study the difference between area and volume in math and the formulas associated with different shapes.\n\nArea\n\nThe area is the measurement of the region covered by any two-dimensional geometric shape. The area of any shape depends upon its dimensions. Different shapes have different areas. For instance, the area of a square differs from the area of a rectangle. The area of a shape is calculated in square units (sq units).\n\nSuppose you want to paint the rectangular wall of your house. You need to know the area of the wall to calculate the quantity of paint required and the cost of painting.\n\nIf two figures have a similar shape, it is not necessary that they have equal areas unless their dimensions are equal. Suppose two squares have sides s and s1; the areas of the two squares will be equal only if s = s1.\n\nVolume\n\nThe space occupied by a three-dimensional object is measured in terms of the volume of that object. The volume of a solid shape is measured along three dimensions, so volume is expressed in cubic units. For example, the volume of a cuboid is the product of its length, width, and height.\n\nThe interior of a hollow object can be filled with air or some liquid that takes the shape of the object. In such cases, the volume of the substance that the interior of the object can accommodate is called the capacity of the hollow object.
Thus we may say that the volume of an object is the measure of the space it occupies and the capacity of an object is the volume of the substance its interior can accommodate.\n\nArea vs Volume Definition\n\nThe area refers to the region covered by the object, and volume refers to the quantity or capacity of the object. Area is a two-dimensional measure whereas volume is a three-dimensional measure. Area applies to plane figures while volume applies to solid figures. The area covers the outer space and volume covers the inner capacity. The area is measured in square units and volume is measured in cubic units.\n\nGenerally, the area is calculated for two-dimensional objects, while volume is calculated for three-dimensional objects.\n\nLet us try to understand the relation between area and volume and the difference between area and volume in detail.\n\n## Area Formula Chart for 2D shapes\n\n| Name of Geometric Shape | Area Formula | Variables |\n|---|---|---|\n| Rectangle | Area = l × w | l = length, w = width |\n| Square | Area = a² | a = side of the square |\n| Triangle | Area = ½ × b × h | b = base, h = height |\n| Trapezoid | Area = ½ (a + b) h | a = base 1, b = base 2, h = vertical height |\n| Parallelogram | Area = b × h | b = base, h = vertical height |\n| Rhombus | Area = a × h | a = side of rhombus, h = height |\n| Circle | Area = πr² | r = radius of the circle, π = 22/7 or 3.1416 |\n| Semicircle | Area = ½ πr² | r = radius of the circle |\n\n## Volume Formula Chart for 3D Shapes (Solid Shapes)\n\n| Name of Geometric Shape | Volume Formula | Variables |\n|---|---|---|\n| Cuboid | l × b × h | l = length, b = breadth, h = height |\n| Cube | a³ | a = length of the sides |\n| Right Prism | Area of Base × Height | |\n| Right Circular Cylinder | πr²h | r = radius, h = height |\n| Right Pyramid | ⅓ (Area of the Base) × Height | |\n| Right Circular Cone | ⅓ (πr²h) | r = radius, h = height |\n| Sphere | 4/3 πr³ | r = radius |\n| Hemisphere | ⅔ (πr³) | r = radius |\n\nDifference Between Area and Volume\n\nSome of the key differences between area and volume in math are:\n\n| Area | Volume |\n|---|---|\n| The area is the measurement of the region covered by any two-dimensional geometric shape. | The volume is the space occupied by a three-dimensional object. |\n| The area is measured for plane figures. | Volume is measured for 3D (solid) figures. |\n| The area is measured in two dimensions, i.e. length and breadth. | Volume is measured in three dimensions, i.e. length, breadth, and height. |\n| The area is measured in square units. | Volume is measured in cubic units. |\n| The area covers the outer space of an object. | Volume is the capacity of an object. |\n| Example: square, rectangle, circle, etc. | Example: cube, cuboid, sphere, etc. |\n\nThese differences show the relation between area and volume. As the difference between area and volume in math is now clear, let us solve some examples.\n\nSolved Examples\n\nExample 1:\n\nThe side of a square plot is 9 m. Find the area of the square plot.\n\nSolution:\n\nGiven, side = a = 9 m\n\nBy the formula for the area of a square, we know that\n\nArea = a²\n\nA = 9 × 9\n\nA = 81 sq. m or 81 m²\n\nExample 2: The side of a cubic box is 9 m. Find the volume of the cubic box.\n\nSolution:\n\nGiven, side = a = 9 m\n\nBy the formula for the volume of a cube, we know that\n\nV = a³\n\nV = 9 × 9 × 9\n\nV = 729 cubic m or 729 m³\n\n1. What is the Difference Between Area and Perimeter?\n\nAnswer: Area is defined as the space occupied by the shape, while perimeter is defined as the distance around the shape (the boundary of the shape).\n\nShapes with the same area can have different perimeters and shapes with the same perimeter can have different areas. The area is measured in square units and the perimeter is measured in linear units. The area is measured for 2-dimensional shapes while the perimeter is measured along one-dimensional boundaries.\n\n2.
Is Cube a Square?\n\nAnswer: The basic difference between a cube and a square is one of dimensions. A square is a two-dimensional figure with two dimensions, length and breadth, while a cube is a three-dimensional figure with three dimensions, length, breadth, and height.\n\nThe side faces of a cube are formed by squares. A square has four sides and four vertices, whereas a cube has 12 edges (sides) and 8 vertices.\n\nFrom these properties we can say the cube is a 3-dimensional figure formed by square-shaped faces." ]
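The two solved examples can be verified with a few lines of code (an illustrative sketch; the class and method names are my own):

```java
public class AreaVolume {
// Area of a square: A = a^2 (square units)
static double squareArea(double a) {
return a * a;
}

// Volume of a cube: V = a^3 (cubic units)
static double cubeVolume(double a) {
return a * a * a;
}

public static void main(String[] args) {
System.out.println(squareArea(9)); // prints 81.0 (sq. m)
System.out.println(cubeVolume(9)); // prints 729.0 (cubic m)
}
}
```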
https://www.thejournal.club/c/paper/336509/
[ "#### Engineering Nearly Linear-Time Algorithms for Small Vertex Connectivity\n\n##### Max Franck, Sorrachai Yingchareonthawornchai\n\nVertex connectivity is a well-studied concept in graph theory with numerous applications. A graph is $k$-connected if it remains connected after removing any $k-1$ vertices. The vertex connectivity of a graph is the maximum $k$ such that the graph is $k$-connected. There is a long history of algorithmic development for efficiently computing vertex connectivity. Recently, two near linear-time algorithms for small $k$ were introduced by [Forster et al. SODA 2020]. Prior to that, the best known algorithm was one by [Henzinger et al. FOCS'96] with quadratic running time when $k$ is small. In this paper, we study the practical performance of the algorithms by Forster et al. In addition, we introduce a new heuristic on a key subroutine called local cut detection, which we call degree counting. We prove that the new heuristic improves space-efficiency (which can be good for caching purposes) and allows the subroutine to terminate earlier. According to experimental results on random graphs with planted vertex cuts, random hyperbolic graphs, and real world graphs with vertex connectivity between 4 and 15, the degree counting heuristic offers a factor of 2-4 speedup over the original non-degree counting version for most of our data. It also outperforms the previous state-of-the-art algorithm by Henzinger et al. even on relatively small graphs." ]
https://www.eureka.im/926.html
[ "# creating multiple point monitors\n\n Client needs to define multiple point surfaces. At each point surface, create a monitor for u,v,w and p. Thus for x points, there will be 4x point monitors. Each monitor is written to a uniquely-named file.In the scheme file there are three lists: x, y and z, which all must be of the same length. The components of the list represent the x,y and z coords of the point surfaces. The only requirement is that the length of x = length y = length z.;;;; SCHEME FILE ;;;; x, y, and z data(define x (list 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 ))(define y (list 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4))(define z (list 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 0.1 0.2 0.3 0.4 ));;;;;;; Don't edit below this line... ;;; QUIET OUTPUT(define ti-menu-load-string-quiet (lambda (string) (let ( (old-current *current-output-port*) (port (open-output-string)) ) (set! *current-output-port* port) (ti-menu-load-string string) (set! 
*current-output-port* old-current) (close-output-port port) \"\" ) ));;; Pad string with zeros(define (pad-zeros name pad-length) (if (< (string-length name) pad-length) (begin (pad-zeros (string-append \"0\" name) pad-length)) name));;; Create a point and set 4 monitors on that point(define assign-point-monitor (lambda (ip xp yp zp) (let ((i ip) (x xp) (y yp) (z zp)) (ti-menu-load-string-quiet (string-append (string-append (string-append (string-append (string-append (string-append (string-append \"/surface point-surface \" (string-append \"point-\" (pad-zeros (number->string ip) 4))) \" \") (number->string x)) \" \") (number->string y)) \" \") (number->string z))) (ti-menu-load-string-quiet (string-append (string-append (string-append (string-append (string-append (string-append \"/solve monitors surface set-monitor \" (string-append \"mon-p-\" (pad-zeros (number->string ip) 4))) \" pressure \") (string-append \"(point-\" (pad-zeros (number->string ip) 4))) \") n n y \") (string-append \"mon-p-\" (pad-zeros (number->string ip) 4))) \".out y \"Vertex Average\"\") ) (ti-menu-load-string-quiet (string-append (string-append (string-append (string-append (string-append (string-append \"/solve monitors surface set-monitor \" (string-append \"mon-u-\" (pad-zeros (number->string ip) 4))) \" x-velocity \") (string-append \"(point-\" (pad-zeros (number->string ip) 4))) \") n n y \") (string-append \"mon-u-\" (pad-zeros (number->string ip) 4))) \".out y \"Vertex Average\"\") ) (ti-menu-load-string-quiet (string-append (string-append (string-append (string-append (string-append (string-append \"/solve monitors surface set-monitor \" (string-append \"mon-v-\" (pad-zeros (number->string ip) 4))) \" y-velocity \") (string-append \"(point-\" (pad-zeros (number->string ip) 4))) \") n n y \") (string-append \"mon-v-\" (pad-zeros (number->string ip) 4))) \".out y \"Vertex Average\"\") ) (ti-menu-load-string-quiet (string-append (string-append (string-append (string-append 
(string-append (string-append \"/solve monitors surface set-monitor \" (string-append \"mon-w-\" (pad-zeros (number->string ip) 4))) \" z-velocity \") (string-append \"(point-\" (pad-zeros (number->string ip) 4))) \") n n y \") (string-append \"mon-w-\" (pad-zeros (number->string ip) 4))) \".out y \"Vertex Average\"\") ) )));;; Loop over all points (main loop)(define def-mons (lambda (first last xdata ydata zdata) (let countdown ((i first) (xval xdata) (yval ydata) (zval zdata)) (if (< i last) (begin (assign-point-monitor i (car xval) (car yval) (car zval)) (countdown (+ i 1) (cdr xval) (cdr yval) (cdr zval)) ) (begin (assign-point-monitor i (car xval) (car yval) (car zval)) (display 'Done.) )))));;; Check arrays, then make call to main loop(define make-uns-mons (lambda (xd yd zd) (let ((first 0) (lenx (- (length xd) 1)) (leny (- (length yd) 1)) (lenz (- (length zd) 1)) ) (if (and (= lenx leny) (= lenx lenz)) (def-mons first lenx xd yd zd) (display \"x y and z are not the same lengthn\") ) )))\n\n No comments yet. Be the first to add a comment!" ]
https://stacks.math.columbia.edu/tag/06YQ
[ "Lemma 21.17.8. Let $(\mathcal{C}, \mathcal{O})$ be a ringed site. A bounded above complex of flat $\mathcal{O}$-modules is K-flat.\n\nProof. Let $\mathcal{K}^\bullet$ be a bounded above complex of flat $\mathcal{O}$-modules. Let $\mathcal{L}^\bullet$ be an acyclic complex of $\mathcal{O}$-modules. Note that $\mathcal{L}^\bullet = \mathop{\mathrm{colim}}\nolimits_m \tau_{\leq m}\mathcal{L}^\bullet$ where we take termwise colimits. Hence also\n\n$\text{Tot}(\mathcal{K}^\bullet \otimes_\mathcal{O} \mathcal{L}^\bullet) = \mathop{\mathrm{colim}}\nolimits_m \text{Tot}( \mathcal{K}^\bullet \otimes_\mathcal{O} \tau_{\leq m}\mathcal{L}^\bullet )$\n\ntermwise. Hence to prove the complex on the left is acyclic it suffices to show each of the complexes on the right is acyclic. Since $\tau_{\leq m}\mathcal{L}^\bullet$ is acyclic this reduces us to the case where $\mathcal{L}^\bullet$ is bounded above. In this case the spectral sequence of Homology, Lemma 12.25.3 has\n\n${}'E_1^{p, q} = H^p(\mathcal{L}^\bullet \otimes_\mathcal{O} \mathcal{K}^q)$\n\nwhich is zero as $\mathcal{K}^q$ is flat and $\mathcal{L}^\bullet$ acyclic. Hence we win. $\square$" ]
https://edufixers.com/unit-converter/hour-to-any-time-units/
[ "# Convert Hour to Minutes & Other Time Units\n\n## 🔁 Hours Conversion Examples\n\n### Convert hours to years\n\n• 1 h / 8766 = 0,00011408 y\n• 10 h / 8766 = 0,0011408 y\n\n### Convert hours to months\n\n• 1 h / 730,5 = 0,001369 m\n• 10 h / 730,5 = 0,01369 m\n\n### Convert hours to weeks\n\n• 1 h / 168 = 0,005952 w\n• 10 h / 168 = 0,05952 w\n\n### Convert hours to days\n\n• 1 h / 24 = 0,04167 d\n• 10 h / 24 = 0,4167 d\n\n### Convert hours to minutes\n\n• 1 h × 60 = 60 min\n• 10 h × 60 = 600 min\n\n### Convert hours to seconds\n\n• 1 h × 3600 = 3600 s\n• 10 h × 3600 = 36 000 s\n\n### Convert hours to milliseconds\n\n• 1 h × 3600000 = 3 600 000 ms\n• 10 h × 3600000 = 36 000 000 ms\n\n## ✅ Hour to Minute Converter FAQ\n\n### ✔️ How to convert h to min?\n\nIf you need to convert hours to minutes, you can use the following formula: min = h × 60. That is the easiest way to make an h to min conversion.\n\n### ✔️ What is the formula to change h into d?\n\nTo convert hours to days, you can use a simple formula: d = h / 24. For example, 2 hours equals 0,08333 days.\n\n### ✔️ How much is 1 hour?\n\n1 hour is 60 minutes, 0,04167 days, and 3600 seconds." ]
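The FAQ formulas translate directly into code. A small sketch (the class and variable names are my own) applying the minute, day, and second conversions to 2 hours:

```java
public class HourConversions {
public static void main(String[] args) {
double hours = 2;

double minutes = hours * 60; // min = h × 60
double days = hours / 24; // d = h / 24
double seconds = hours * 3600; // s = h × 3600

System.out.println(minutes); // prints 120.0
System.out.println(days); // about 0,08333
System.out.println(seconds); // prints 7200.0
}
}
```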
https://www.analystforum.com/t/dollar-duration-help-vol-1-3pm-exam/17020
[ "", null, "# Dollar Duration Help (Vol 1 3PM exam)\n\nAfter doing this exam today I ran into a couple of problems on the following questions. Question 15.1 For the CTD bond, do we always assume the par value is \\$1000 if it is not given? Also, when they calculate the DD for the CTD bond they do not include the conversion factor (this ties into the next question) Question 18.4 For the DD of the CTD bond, we include the conversion factor in the calculation. But then when we calculate the # of contracts, they divide by the conversion factor again (seeming to cancel the effect out) Can anyone help me out?\n\nIf anyone has done this exam and has a quick minute to explain it would be much appreciated. Thanks\n\nsomeone brought this up recently - check out http://www.analystforum.com/phorums/read.php?13,752054,752535#msg-752535 i think schweser messed up somewhere\n\nI think it was right, but the steps weren’t necessary. All you have to do, generally, is divide by the DD of the futures contract which = CTD/conversion factor. They first calculated the DD of the CTD bond (from the futures contract), then divided to get the futures price…they could have just used the future price w/o the other 2 steps…\n\nQuestion, but in this case, they multiplied the price by the conversion factor and then they divide by it after, (making the factor drop out) So if we used DDf = CTD/ conv factor we can’t get to their answer. I checked on the other thread and did not get any clarification on the issue." ]
[ null, "https://analystforum-uploads.s3.dualstack.us-east-1.amazonaws.com/original/2X/8/8e7be8e6512cde25d070f18d332292fb5a3804d9.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.950823,"math_prob":0.7482587,"size":1598,"snap":"2021-31-2021-39","text_gpt3_token_len":387,"char_repetition_ratio":0.120451696,"word_repetition_ratio":0.007380074,"special_character_ratio":0.24780977,"punctuation_ratio":0.112426035,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9745776,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T12:00:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a8625b56-1c01-4e22-9167-e114e75965ca>\",\"Content-Length\":\"23471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cebfb489-779f-412c-a648-5d45a68c7f7e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c02fa776-a3fb-4fbd-b522-26d822bb615b>\",\"WARC-IP-Address\":\"45.79.51.137\",\"WARC-Target-URI\":\"https://www.analystforum.com/t/dollar-duration-help-vol-1-3pm-exam/17020\",\"WARC-Payload-Digest\":\"sha1:SXPJ6PSBR6PTOZAON5GUNAHNOX6RQLSU\",\"WARC-Block-Digest\":\"sha1:FWK2ZEH7UTWZ7MQEIXDIEKV2RXSTKCQ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056392.79_warc_CC-MAIN-20210918093220-20210918123220-00574.warc.gz\"}"}
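The cancellation the poster describes can be written out directly: the dollar duration (DD) of the futures contract is the CTD bond's DD divided by its conversion factor, so multiplying by the factor in one step and dividing by it in the next leaves the contract count unchanged. A sketch with illustrative numbers (not taken from the Schweser exam):

```python
def futures_dollar_duration(dd_ctd, conversion_factor):
    # DD of the futures contract = DD of the cheapest-to-deliver bond / conversion factor.
    return dd_ctd / conversion_factor

def contracts_needed(dd_target, dd_ctd, conversion_factor):
    # Contracts = target DD / futures DD. Any conversion factor multiplied into
    # dd_ctd and divided out again at this step cancels, as noted in the thread.
    return dd_target / futures_dollar_duration(dd_ctd, conversion_factor)

# Illustrative values only:
print(futures_dollar_duration(100.0, 1.25))   # 80.0
print(contracts_needed(50_000, 100.0, 1.25))  # 625.0
```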
https://mathoverflow.net/questions/116178/cubical-complexes-and-bass-serre-theory
[ "# Cubical Complexes and Bass-Serre theory\n\nI have been reading through Wise's lecture notes on cubical complexes, which summarise the proof of the virtual Haken conjecture and the proof that all one-relator groups with torsion are residually finite.\n\nMy understanding of this stuff seems to have hit a wall.\n\nSpecifically, my understanding of (special) cubical complexes is contradicted by a result of Antolín-Minasyan (Tits alternative for graph products), which says that every non-abelian subgroup of a right-angled Artin group (RAAG) surjects onto $F_2$ (and so is \"very large\"). According to my understanding of special cube complexes, every two-generated group which acts on a tree without fixed point should be a subgroup of a RAAG. Clearly, these are incompatible (and my understanding is by far and away the most likely thing to be wrong here!).\n\nI will explain what I understand it to mean for a group to be the fundamental group of a (special) cubical complex, and where I think my problem lies. However, if my problem is where I think it is, I do not know how to fill the gap! Any amendments to my understanding would be much appreciated (my question is basically \"where have I gone wrong?\").\n\nSo, cubical complexes were studied by Sageev in his 1994 paper \"Ends of group pairs and non-positively curved cube complexes\", where he motivated their study as being a generalisation of Bass-Serre theory. A non-positively curved cube complex $X$ corresponds to graphs of groups, and the associated $\\operatorname{CAT}(0)$ cube complex $\\widetilde{X}$ corresponds to the Bass-Serre tree.\n\nI think my problem is simply: what are the $n$-cell stabilisers?\n\nNow, graphs are non-positively curved cube complexes, and their corresponding $\\operatorname{CAT}(0)$ cube complex is a tree. Moreover, graphs are special, because their hyperplanes are simply the midpoints of edges. 
So, the fundamental group of a graph of groups is always special, so should embed into a RAAG. A contradiction.\n\nI hope that all makes sense.\n\nI wonder if my problem is simply that we are not dealing with $\\pi_1$ of a graph of groups, but simply of a graph. Then this fits in with the Antolín-Minasyan result, but not with the theory being a generalisation of Bass-Serre theory (anyway, the group acts on the $\\operatorname{CAT}(0)$ cube complex $\\widetilde{X}$ and the non-positively curved complex is the quotient by the action $X=\\widetilde{X}/G$, so just being the normal fundamental group doesn't make sense).\n\n• (cont'd)... The example to bear in mind is an immersed, non-embedded, curve $\\gamma$ on a surface $\\Sigma$. It generates a codimension-one subgroup $\\langle\\gamma\\rangle$ (in the sense that $\\langle\\gamma\\rangle$ coarsely separates $\\pi_1\\Sigma$), but you can't cut along it, so you can't realize it as the stabilizer of an edge in an action on a tree. But you can realize it as the stabilizer of a hyperplane in an action on a square complex. – HJRW Dec 12 '12 at 17:38" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94283956,"math_prob":0.9617465,"size":2447,"snap":"2020-24-2020-29","text_gpt3_token_len":568,"char_repetition_ratio":0.11952517,"word_repetition_ratio":0.010362694,"special_character_ratio":0.21577442,"punctuation_ratio":0.09190372,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99336785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T21:41:26Z\",\"WARC-Record-ID\":\"<urn:uuid:e7cf41e1-d73d-4228-b748-89304720ac10>\",\"Content-Length\":\"123181\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bde05b3-d452-469b-ab85-a99f8a62bf25>\",\"WARC-Concurrent-To\":\"<urn:uuid:12afd0ff-2245-4452-bc09-526ca87eea52>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/116178/cubical-complexes-and-bass-serre-theory\",\"WARC-Payload-Digest\":\"sha1:KJG4ZUMXRYVM4RG4KL2GA7HCJXAN6DJD\",\"WARC-Block-Digest\":\"sha1:NFIPJ72AIHPBWBAEQRT64CXLFURCP3ZT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655901509.58_warc_CC-MAIN-20200709193741-20200709223741-00205.warc.gz\"}"}
https://examlord.com/waec-2020-practical-chemistry-answers-2/
[ "# WAEC 2020 Practical chemistry answers\n\n++++++++++++++++++++++++++++++++\n\nVERY IMPORTANT INFORMATION!!!\nPLEASE REMEMBER TO USE YOUR SCHOOL'S AVERAGE TITRE VALUE (AVERAGE VOLUME OF ACID). WE USED 15.26cm3, TRY TO KNOW YOUR SCHOOL'S AVERAGE TITRE VALUE. ASK YOUR CHEMISTRY TEACHER. THEN, ANYWHERE YOU SEE 15.26cm3 IN MY CALCULATION, PUT YOUR SCHOOL'S OWN AND RE-CALCULATE. THIS IS VERY IMPORTANT!!!\n\n(1a)\nIn a tabular form (burette readings in cm³)\n\nFinal |15.25|30.53|45.79|\nInitial |0.00|15.25|30.53|\nVolume of acid used |15.25|15.28|15.26|\n\nAverage volume of acid used = (15.25 + 15.26)/2\n= 15.255cm³\n≈ 15.26cm³\n\n(1bi)\nGiven: Conc of A = 5g/500cm³ = 5g/0.5dm³\nCa = 10g/dm³\n\nA is HNO3\nTherefore; Molar mass = 1+14+(16×3)\n= 15+48\n= 63g/mol\n\nMolarity of A = gram conc/molar mass\nCa = 10/63 = 0.1587mol/dm³\n\n(1bii)\nUsing CaVa/CbVb = nA/nB\nWith reacting equation:\nHNO3 + NaOH –> NaNO3 + H2O\nnA = 1, nB = 1\n(0.1587×15.26)/(Cb×25.00) = 1/1\n25Cb = 0.1587×15.26\nCb = 0.1587×15.26/25\nCb = 0.09687mol/dm³\n\n(1biii)\nB is NaOH\nMolar mass = 23+16+1\n= 40g/mol\nConc of B in g/dm³ = molarity×molar mass\n= 0.09687×40\n= 3.8748g/dm³\n\n(1biv)\nNo of moles present in 250cm³ of NaOH is\n= molar conc. × volume\n= 0.09687 × 250/1000\n= 0.0242 moles\n\nMole ratio of NaOH and NaNO3 is 1:1\nNo of moles of NaNO3 which reacted is 0.0242\nMass of NaNO3 formed = molar mass × no of moles\n= 85 × 0.0242\n= 2.057grams\n\n(1)", null, "(2)\n\n(2a)\nTEST\nC + burning splint\n\nOBSERVATION\nSample C bursts into flame\nIt burns with a non-smoking blue flame, without soot.\nA colourless gas that turns wet blue litmus paper faint red and turns lime water milky is present\n\nINFERENCE\nC is volatile and flammable. 
The gas is CO2 from combustion of a saturated organic compound.\n\n(2bi)\nTEST\nC + distilled water + Shake\n\nOBSERVATION\nClear or colourless solution is observed\n\nINFERENCE\nC is miscible with water\n\n(2bii)\nTEST\nC + Acidified K2Cr2O7\n\nOBSERVATION\nOrange colour of K2Cr2O7 solution turns pale green and eventually pale blue on cooling\n\nINFERENCE\nC is a reducing agent\n\n(2c)\nTEST\nD + C\n+ 10% NaOH\n+ shake\n\nOBSERVATION\nD dissolves slowly in C and produces a reddish brown solution\nThe reddish brown solution forms a yellow precipitate. The precipitate has an antiseptic odour\n\nINFERENCE\nD is soluble in organic solvents\n\n(2d)\nEthanol, ethanal or a secondary alkanol is present", null, "(3ai)\nZinc trioxonitrate(V) – Zn(NO3)2\n\n(3aii)\n2Zn(NO3)2(s) –> 2ZnO(s) + 4NO2(g) + O2(g)\n\n(3aiii)\nThe residue is yellow when hot and turns white on cooling\n\n(3b)\nGiven; M1 = 1.0mol/dm³\nV1 = ?\nM2 = 0.2mol/dm³\nV2 = 250cm³\nUsing M1V1 = M2V2\n1 × V1 = 0.2×250\nV1 = 50cm³\n\nProcedure: Measure out 50cm³ of the stock solution, dilute it to 0.2mol/dm³ by adding 200cm³ of water.\n\n(3c)\nAl2(SO4)3 will turn blue litmus paper red", null, "#### COMPLETED", null, "", null, "" ]
[ null, "https://res.cloudinary.com/theshedman/image/upload/v1597741939/ztwyhxjymjuizzhyvlfd.jpg", null, "https://res.cloudinary.com/theshedman/image/upload/v1597743789/uung3uydves6f2oebx3t.jpg", null, "https://res.cloudinary.com/theshedman/image/upload/v1597745845/qtyl4tckb706w3c4f86s.jpg", null, "https://secure.gravatar.com/avatar/07bb8466f29f556e4adaa721e617d11c", null, "https://secure.gravatar.com/avatar/c14fa3d3459584ad4bf22dbf38f1e8d6", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.721622,"math_prob":0.99282044,"size":2754,"snap":"2021-43-2021-49","text_gpt3_token_len":1065,"char_repetition_ratio":0.085454546,"word_repetition_ratio":0.0,"special_character_ratio":0.37436455,"punctuation_ratio":0.11611785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98332626,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T10:28:56Z\",\"WARC-Record-ID\":\"<urn:uuid:7c397ec3-f39d-4958-b29b-5a29bd14fafc>\",\"Content-Length\":\"42916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2aec281f-921a-4746-8e8b-7a3fe32242f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:ffdb8545-afc9-4679-9353-ea4219e9753e>\",\"WARC-IP-Address\":\"104.21.29.248\",\"WARC-Target-URI\":\"https://examlord.com/waec-2020-practical-chemistry-answers-2/\",\"WARC-Payload-Digest\":\"sha1:DTUSGX7VTTO6K7NE4V6BDXD4BJ6QLG6S\",\"WARC-Block-Digest\":\"sha1:LXQZ6FFG7D76G5PZ5JWDWHMQDJTUJVEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588284.71_warc_CC-MAIN-20211028100619-20211028130619-00586.warc.gz\"}"}
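The arithmetic in question 1 can be replayed in a few lines, rounding at the same steps as the worked solution (substitute your school's average titre for 15.26):

```python
# Question 1 titration arithmetic, with the solution's intermediate rounding.
molar_mass_HNO3 = 1 + 14 + 16 * 3          # 63 g/mol
Ca = round(10 / molar_mass_HNO3, 4)        # 0.1587 mol/dm3 from 10 g/dm3
Va, Vb = 15.26, 25.00                      # cm3 (acid titre, base volume)
Cb = round(Ca * Va / Vb, 5)                # 0.09687 mol/dm3 (1:1 mole ratio)
conc_B = round(Cb * (23 + 16 + 1), 4)      # 3.8748 g/dm3 (NaOH, M = 40)
moles_NaNO3 = round(Cb * 250 / 1000, 4)    # 0.0242 mol in 250 cm3 (1:1 ratio)
mass_NaNO3 = round(85 * moles_NaNO3, 3)    # 2.057 g (NaNO3, M = 85)
print(Cb, conc_B, mass_NaNO3)
```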
https://www.colorhexa.com/cd40b0
[ "# #cd40b0 Color Information\n\nIn an RGB color space, hex #cd40b0 is composed of 80.4% red, 25.1% green and 69% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 68.8% magenta, 14.1% yellow and 19.6% black. It has a hue angle of 312.3 degrees, a saturation of 58.5% and a lightness of 52.7%. #cd40b0 color hex could be obtained by blending #ff80ff with #9b0061. Closest websafe color is: #cc3399.\n\n• R 80\n• G 25\n• B 69\nRGB color chart\n• C 0\n• M 69\n• Y 14\n• K 20\nCMYK color chart\n\n#cd40b0 color description : Moderate magenta.\n\n# #cd40b0 Color Conversion\n\nThe hexadecimal color #cd40b0 has RGB values of R:205, G:64, B:176 and CMYK values of C:0, M:0.69, Y:0.14, K:0.2. Its decimal value is 13451440.\n\nHex triplet: cd40b0 `#cd40b0`\nRGB decimal: 205, 64, 176 `rgb(205,64,176)`\nRGB percent: 80.4, 25.1, 69 `rgb(80.4%,25.1%,69%)`\nCMYK: 0, 69, 14, 20\nHSL: 312.3°, 58.5, 52.7 `hsl(312.3,58.5%,52.7%)`\nHSV (HSB): 312.3°, 68.8, 80.4\nWeb safe: cc3399 `#cc3399`\nCIE-LAB: 51.591, 66.517, -30.261\nXYZ: 34.847, 19.783, 43.055\nxyY: 0.357, 0.203, 19.783\nCIE-LCH: 51.591, 73.077, 335.537\nCIE-LUV: 51.591, 70.203, -54.938\nHunter-Lab: 44.478, 62.009, -26.258\nBinary: 11001101, 01000000, 10110000\n\n# Color Schemes with #cd40b0\n\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #40cd5d\n``#40cd5d` `rgb(64,205,93)``\nComplementary Color\n• #a440cd\n``#a440cd` `rgb(164,64,205)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #cd406a\n``#cd406a` `rgb(205,64,106)``\nAnalogous Color\n• #40cda4\n``#40cda4` `rgb(64,205,164)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #6acd40\n``#6acd40` `rgb(106,205,64)``\nSplit Complementary Color\n• #40b0cd\n``#40b0cd` `rgb(64,176,205)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #b0cd40\n``#b0cd40` `rgb(176,205,64)``\n• #5d40cd\n``#5d40cd` `rgb(93,64,205)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #b0cd40\n``#b0cd40` `rgb(176,205,64)``\n• #40cd5d\n``#40cd5d` `rgb(64,205,93)``\n• #992881\n``#992881` `rgb(153,40,129)``\n• #ad2d93\n``#ad2d93` `rgb(173,45,147)``\n• #c133a4\n``#c133a4` `rgb(193,51,164)``\n• #cd40b0\n``#cd40b0` 
`rgb(205,64,176)``\n• #d254b8\n``#d254b8` `rgb(210,84,184)``\n• #d868c1\n``#d868c1` `rgb(216,104,193)``\n• #dd7dc9\n``#dd7dc9` `rgb(221,125,201)``\nMonochromatic Color\n\n# Alternatives to #cd40b0\n\nBelow, you can see some colors close to #cd40b0. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #c740cd\n``#c740cd` `rgb(199,64,205)``\n• #cd40c8\n``#cd40c8` `rgb(205,64,200)``\n• #cd40bc\n``#cd40bc` `rgb(205,64,188)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #cd40a4\n``#cd40a4` `rgb(205,64,164)``\n• #cd4099\n``#cd4099` `rgb(205,64,153)``\n• #cd408d\n``#cd408d` `rgb(205,64,141)``\nSimilar Colors\n\n# #cd40b0 Preview\n\nThis text has a font color of #cd40b0.\n\n``<span style=\"color:#cd40b0;\">Text here</span>``\n#cd40b0 background color\n\nThis paragraph has a background color of #cd40b0.\n\n``<p style=\"background-color:#cd40b0;\">Content here</p>``\n#cd40b0 border color\n\nThis element has a border color of #cd40b0.\n\n``<div style=\"border:1px solid #cd40b0;\">Content here</div>``\nCSS codes\n``.text {color:#cd40b0;}``\n``.background {background-color:#cd40b0;}``\n``.border {border:1px solid #cd40b0;}``\n\n# Shades and Tints of #cd40b0\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #0b0309 is the darkest color, while #fefbfd is the lightest one.\n\n• #0b0309\n``#0b0309` `rgb(11,3,9)``\n• #1b0717\n``#1b0717` `rgb(27,7,23)``\n• #2a0b24\n``#2a0b24` `rgb(42,11,36)``\n• #3a0f31\n``#3a0f31` `rgb(58,15,49)``\n• #49133e\n``#49133e` `rgb(73,19,62)``\n• #59174b\n``#59174b` `rgb(89,23,75)``\n• #681b59\n``#681b59` `rgb(104,27,89)``\n• #781f66\n``#781f66` `rgb(120,31,102)``\n• #872373\n``#872373` `rgb(135,35,115)``\n• #972880\n``#972880` `rgb(151,40,128)``\n• #a72c8d\n``#a72c8d` `rgb(167,44,141)``\n• #b6309a\n``#b6309a` `rgb(182,48,154)``\n• #c634a8\n``#c634a8` `rgb(198,52,168)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #d150b6\n``#d150b6` `rgb(209,80,182)``\n• #d55fbd\n``#d55fbd` `rgb(213,95,189)``\n• #d96fc3\n``#d96fc3` `rgb(217,111,195)``\n• #dd7eca\n``#dd7eca` `rgb(221,126,202)``\n• #e18ed0\n``#e18ed0` `rgb(225,142,208)``\n• #e59dd7\n``#e59dd7` `rgb(229,157,215)``\n• #e9addd\n``#e9addd` `rgb(233,173,221)``\n• #eebce3\n``#eebce3` `rgb(238,188,227)``\n• #f2ccea\n``#f2ccea` `rgb(242,204,234)``\n• #f6dbf0\n``#f6dbf0` `rgb(246,219,240)``\n• #faebf7\n``#faebf7` `rgb(250,235,247)``\n• #fefbfd\n``#fefbfd` `rgb(254,251,253)``\nTint Color Variation\n\n# Tones of #cd40b0\n\nA tone is produced by adding gray to any pure hue. 
In this case, #8c818a is the less saturated color, while #fb12cb is the most saturated one.\n\n• #8c818a\n``#8c818a` `rgb(140,129,138)``\n• #95788f\n``#95788f` `rgb(149,120,143)``\n• #9f6e95\n``#9f6e95` `rgb(159,110,149)``\n• #a8659a\n``#a8659a` `rgb(168,101,154)``\n• #b15ca0\n``#b15ca0` `rgb(177,92,160)``\n• #ba53a5\n``#ba53a5` `rgb(186,83,165)``\n• #c449ab\n``#c449ab` `rgb(196,73,171)``\n• #cd40b0\n``#cd40b0` `rgb(205,64,176)``\n• #d637b5\n``#d637b5` `rgb(214,55,181)``\n• #e02dbb\n``#e02dbb` `rgb(224,45,187)``\n• #e924c0\n``#e924c0` `rgb(233,36,192)``\n• #f21bc6\n``#f21bc6` `rgb(242,27,198)``\n• #fb12cb\n``#fb12cb` `rgb(251,18,203)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #cd40b0 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5421506,"math_prob":0.5930977,"size":3720,"snap":"2019-51-2020-05","text_gpt3_token_len":1669,"char_repetition_ratio":0.12513456,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5405914,"punctuation_ratio":0.23634337,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.964182,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T12:03:32Z\",\"WARC-Record-ID\":\"<urn:uuid:2f3a1222-70e3-4396-aee8-585330f2a1bb>\",\"Content-Length\":\"36325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eff273e1-b884-4049-95e2-710886ad541c>\",\"WARC-Concurrent-To\":\"<urn:uuid:81314943-9f5b-4dac-9557-04add9213da8>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/cd40b0\",\"WARC-Payload-Digest\":\"sha1:GQLJLTE2HAPEY4V554GMQFXLTXF7B3CW\",\"WARC-Block-Digest\":\"sha1:QBWAOSDDV2JKIWVBG3GH6NGO2P3CKP4L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251688806.91_warc_CC-MAIN-20200126104828-20200126134828-00309.warc.gz\"}"}
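The RGB and HSL figures in the record above can be reproduced from the hex code alone; this is a generic conversion sketch (it assumes a non-gray color, since the saturation formula divides by max minus min):

```python
def hex_to_rgb(h):
    """'#cd40b0' -> (205, 64, 176)."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hsl(r, g, b):
    """Return (hue in degrees, saturation %, lightness %), rounded to one decimal."""
    r, g, b = r / 255, g / 255, b / 255
    mx, mn = max(r, g, b), min(r, g, b)
    l = (mx + mn) / 2
    d = mx - mn                       # assumed > 0 (non-gray color)
    s = d / (1 - abs(2 * l - 1))
    if mx == r:
        h = 60 * (((g - b) / d) % 6)
    elif mx == g:
        h = 60 * ((b - r) / d + 2)
    else:
        h = 60 * ((r - g) / d + 4)
    return round(h, 1), round(s * 100, 1), round(l * 100, 1)

print(hex_to_rgb("#cd40b0"))     # (205, 64, 176)
print(rgb_to_hsl(205, 64, 176))  # (312.3, 58.5, 52.7)
```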
https://www.oschina.net/question/3306142_2245351
[ "# After Learning the Python Basics, Do You Really Know Python?\n\n## 1. Generator Expressions\n\n### Description\n\n```class A(object):\n    x = 1\n    gen = (x for _ in xrange(10))  # gen=(x for _ in range(10))\nif __name__ == \"__main__\":\n    print(list(A.gen))```\n\nThis raises NameError: name 'x' is not defined, because a class body is not an enclosing scope for the generator expression.\n\n### Answer\n\n```class A(object):\n    x = 1\n    gen = (lambda x: (x for _ in xrange(10)))(x)  # gen=(x for _ in range(10))\nif __name__ == \"__main__\":\n    print(list(A.gen))```\n\n## 2. Decorators\n\n### Description\n\n```import time\nclass Timeit(object):\n    def __init__(self, func):\n        self._wrapped = func\n    def __call__(self, *args, **kws):\n        start_time = time.time()\n        result = self._wrapped(*args, **kws)\n        print(\"elapsed time is %s \" % (time.time() - start_time))\n        return result```\n\nDecorating a plain function works:\n\n```@Timeit\ndef func():\n    time.sleep(1)\n    return \"invoking function func\"\nif __name__ == '__main__':\n    func()  # output: elapsed time is 1.00044410133```\n\nBut decorating a method fails:\n\n```class A(object):\n    @Timeit\n    def func(self):\n        time.sleep(1)\n        return 'invoking method func'\nif __name__ == '__main__':\n    a = A()\n    a.func()  # Boom!```\n\n### Answer\n\nMake the decorator a descriptor, so that attribute access through an instance rebinds the wrapped function to that instance:\n\n```class Timeit(object):\n    def __init__(self, func):\n        self.func = func\n    def __call__(self, *args, **kwargs):\n        print('invoking Timer')\n    def __get__(self, instance, owner):\n        return lambda *args, **kwargs: self.func(instance, *args, **kwargs)```\n\n## 3. Python's Calling Mechanism\n\n### Description\n\n```class A(object):\n    def __call__(self):\n        print(\"invoking __call__ from A!\")\nif __name__ == \"__main__\":\n    a = A()\n    a()  # output: invoking __call__ from A```\n\nNow attach a __call__ to the instance:\n\n```a.__call__ = lambda: \"invoking __call__ from lambda\"\na.__call__()\n# output: invoking __call__ from lambda\na()\n# output: invoking __call__ from A!```\n\nWhy does a() still use the class's __call__?\n\n### Answer\n\nAs the Python language reference explains: for new-style classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object's type, not in the object's instance dictionary. 
That behaviour is the reason why the following code raises an exception (unlike the equivalent example with old-style classes):\n\n```class C(object):\n    pass\nc = C()\nc.__len__ = lambda: 5\nlen(c)\n# Traceback (most recent call last):\n#   File \"<stdin>\", line 1, in <module>\n# TypeError: object of type 'C' has no len()```\n\n## 4. Descriptors\n\n### Description\n\n```class Grade(object):\n    def __init__(self):\n        self._score = 0\n    def __get__(self, instance, owner):\n        return self._score\n    def __set__(self, instance, value):\n        if 0 <= value <= 100:\n            self._score = value\n        else:\n            raise ValueError('grade must be between 0 and 100')\nclass Exam(object):\n    math = Grade()\n    def __init__(self, math):\n        self.math = math\nif __name__ == '__main__':\n    niche = Exam(math=90)\n    print(niche.math)\n    # output : 90\n    snake = Exam(math=75)\n    print(snake.math)\n    # output : 75\n    snake.math = 120\n    # output: ValueError:grade must be between 0 and 100!```\n\nAn \"improved\" version binds each value to its instance through a dict:\n\n```class Grad(object):\n    def __init__(self):\n        self._grad = {}\n    def __get__(self, instance, owner):\n        return self._grad[instance]\n    def __set__(self, instance, value):\n        if 0 <= value <= 100:\n            self._grad[instance] = value\n        else:\n            raise ValueError(\"fuck\")```\n\nWhat is wrong with each version?\n\n### Answer\n\n1. The first problem is actually quite simple. If you run print(niche.math) again, you will find the output is 75. Why? Start from Python's attribute lookup order: an attribute access first searches the instance's __dict__; if nothing is found there, it moves on to the class dict, then the parent classes' dicts, until the lookup finally fails. Back to our problem: the lookup of self.math first checks the instance's __dict__, finds nothing, moves one level up to the class Exam, finds math there, and returns it. That means every operation on self.math is really an operation on the single class-level Grade object, so the instances pollute each other's value. How to fix it? Many people would say: well, just have __set__ store the value on the specific instance.\n\n2. The improved dict-based version binds each value to its instance by exploiting the uniqueness of dict keys, but it introduces a memory-leak problem. 
Why a memory leak? First recall the properties of dict: its most important property is that any hashable object can be a key, and dict guarantees key uniqueness through the uniqueness of hash values (strictly speaking not truly unique, but the collision probability is so small that it is treated as unique). At the same time (attention, here comes the key point), a dict holds strong references to its keys, which increases the reference count of the key object, can prevent the object from ever being garbage collected, and therefore leaks memory. How do we solve that? Two ways.\n\nThe first way uses weak references:\n\n```class Grad(object):\n    def __init__(self):\n        import weakref\n        self._grad = weakref.WeakKeyDictionary()\n    def __get__(self, instance, owner):\n        return self._grad[instance]\n    def __set__(self, instance, value):\n        if 0 <= value <= 100:\n            self._grad[instance] = value\n        else:\n            raise ValueError(\"fuck\")```\n\nA dictionary produced by WeakKeyDictionary from the weakref library holds only weak references to the objects used as keys; it does not increase their reference counts, so no memory is leaked. By the same token, if we want to avoid a strong reference from a value to an object, we can use WeakValueDictionary.\n\nThe second way (Python 3.6+, via the __set_name__ protocol) stores the value in the instance's own __dict__:\n\n```class Grad(object):\n    def __get__(self, instance, owner):\n        return instance.__dict__[self.key]\n    def __set__(self, instance, value):\n        if 0 <= value <= 100:\n            instance.__dict__[self.key] = value\n        else:\n            raise ValueError(\"fuck\")\n    def __set_name__(self, owner, name):\n        self.key = name```\n\n## 5. Python's Inheritance Mechanism\n\n### Description\n\nWhat does the following print?\n\n```class Init(object):\n    def __init__(self, value):\n        self.val = value\nclass Add2(Init):\n    def __init__(self, val):\n        super(Add2, self).__init__(val)\n        self.val += 2\nclass Mul5(Init):\n    def __init__(self, val):\n        super(Mul5, self).__init__(val)\n        self.val *= 5\nclass Pro(Mul5, Add2):\n    pass\nclass Incr(Pro):\n    csup = super(Pro)\n    def __init__(self, val):\n        self.csup.__init__(val)\n        self.val += 1\np = Incr(5)\nprint(p.val)```\n\n## 6. Python Special Methods\n\n### Description\n\n```class Singleton(object):\n    _instance = 
None\n    def __new__(cls, *args, **kwargs):\n        if cls._instance:\n            return cls._instance\n        cls._instance = cv = object.__new__(cls, *args, **kwargs)\n        return cv\nsin1 = Singleton()\nsin2 = Singleton()\nprint(sin1 is sin2)\n# output: True```\n\nNow try to pull the singleton logic into a metaclass:\n\n```class SingleMeta(type):\n    def __init__(cls, name, bases, dict):\n        cls._instance = None\n        __new__o = cls.__new__\n        def __new__(cls, *args, **kwargs):\n            if cls._instance:\n                return cls._instance\n            cls._instance = cv = __new__o(cls, *args, **kwargs)\n            return cv\n        cls.__new__ = __new__\nclass A(object):\n    __metaclass__ = SingleMeta\na1 = A()  # what's the fuck```\n\nYet the same trick does work for tracing attribute access:\n\n```class TraceAttribute(type):\n    def __init__(cls, name, bases, dict):\n        __getattribute__o = cls.__getattribute__\n        def __getattribute__(self, *args, **kwargs):\n            print('__getattribute__:', args, kwargs)\n            return __getattribute__o(self, *args, **kwargs)\n        cls.__getattribute__ = __getattribute__\nclass A(object):  # Python 3: class A(object, metaclass=TraceAttribute):\n    __metaclass__ = TraceAttribute\n    a = 1\n    b = 2\na = A()\na.a\n# output: __getattribute__:('a',){}\na.b\n# output: __getattribute__:('b',){}```\n\nWhy does the SingleMeta version blow up while TraceAttribute works?\n\n### Answer\n\nIn Python 2, a plain function assigned to cls.__new__ after class creation is retrieved as an unbound method, so calling it through the class fails; declare it a staticmethod first:\n\n```class SingleMeta(type):\n    def __init__(cls, name, bases, dict):\n        cls._instance = None\n        __new__o = cls.__new__\n        @staticmethod\n        def __new__(cls, *args, **kwargs):\n            if cls._instance:\n                return cls._instance\n            cls._instance = cv = __new__o(cls, *args, **kwargs)\n            return cv\n        cls.__new__ = __new__\nclass A(object):\n    __metaclass__ = SingleMeta\nprint(A() is A())  # output: True```" ]
[ null, "https://static.oschina.net/new-osc/img/icon/back-to-top.svg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.54918754,"math_prob":0.94428885,"size":8376,"snap":"2020-45-2020-50","text_gpt3_token_len":4038,"char_repetition_ratio":0.14154324,"word_repetition_ratio":0.16328709,"special_character_ratio":0.29309934,"punctuation_ratio":0.18458419,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9904623,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T23:44:44Z\",\"WARC-Record-ID\":\"<urn:uuid:66bb1a93-df57-4a5b-973c-3926deb3e96b>\",\"Content-Length\":\"67719\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:801257c4-1ae7-4b0e-acfc-25f4d2617a7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a46e3f6f-c2aa-4a8e-a438-616fb73b3989>\",\"WARC-IP-Address\":\"212.64.62.183\",\"WARC-Target-URI\":\"https://www.oschina.net/question/3306142_2245351\",\"WARC-Payload-Digest\":\"sha1:BW3U57EGA22JUCKTYKGAGAMTIXG363VA\",\"WARC-Block-Digest\":\"sha1:U6FYXUGNEIZBW7FZ6H2EDNXNV7N6MA75\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911792.65_warc_CC-MAIN-20201030212708-20201031002708-00515.warc.gz\"}"}
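The multiple-inheritance puzzle above can be checked by running it; here is a Python 3 transcription, assuming `Pro` is declared as `class Pro(Mul5, Add2): pass` and `Add2` delegates upward before adding 2, which the snippet's stray `pass` and the references to `Pro` imply. The chain runs Init, Add2, Mul5, Incr along the MRO, giving ((5 + 2) * 5) + 1:

```python
class Init:
    def __init__(self, value):
        self.val = value

class Add2(Init):
    def __init__(self, val):
        super().__init__(val)
        self.val += 2

class Mul5(Init):
    def __init__(self, val):
        super().__init__(val)
        self.val *= 5

class Pro(Mul5, Add2):
    pass

class Incr(Pro):
    # An unbound super(); attribute access through `self` rebinds it to the
    # instance via the descriptor protocol, starting the lookup *after* Pro.
    csup = super(Pro)

    def __init__(self, val):
        self.csup.__init__(val)   # -> Mul5 -> Add2 -> Init along Incr's MRO
        self.val += 1

p = Incr(5)
print(p.val)  # 36
print([c.__name__ for c in Incr.__mro__])  # ['Incr', 'Pro', 'Mul5', 'Add2', 'Init', 'object']
```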
https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/f14bcdd9fee3e9e79ef3a6e70dc63d1e.html?version=5.0.0
[ "In this tutorial project the fundamental propagation modes of a graded index fiber are computed. In a graded index fiber, the refractive index of the core is not constant in space but depends on the radial distance from the optical axis of the fiber. The functional dependence of the refractive index in the core region is defined as a Python expression in the file materials.jcm. In this case the permittivity \\(\\varepsilon\\)", null, "depends on the radial distance \\(r\\)", null, "as follows:", null, "\\(\\varepsilon(r) = n_1^2\\,(1 - 2\\Delta\\,(r/R)^{g})\\) with \\(\\Delta = (n_1^2 - n_2^2)/(2 n_1^2)\\), with core radius", null, "\\(R = 9.5\\,\\mu m\\),", null, "\\(n_1 = 1.45\\),", null, "\\(n_2 = 1.44\\), and", null, "\\(g = 1.9\\).\n\nCompare the syntax in the file materials.jcm:\n\n• materials.jcm [ASCII]\n\nMaterial {\n  Name = \"Cladding\"\n  DomainId = 1\n  RelPermeability = 1.0\n  RelPermittivity = 2.0736\n}\nMaterial {\n  Name = \"Core\"\n  DomainId = 2\n  RelPermeability = 1.0\n  RelPermittivity {\n    Python {\n      Expression = \"this_radius = power(power(X, 2) + power(Y, 2), 0.5); delta = (power(n1, 2) - power(n2, 2))/(2*power(n1, 2)); value = power(n1, 2)*(1 - 2*delta*power((this_radius/radius), exponent_g)); value = value*eye(3, 3)\"\n      Parameter {\n        Name = \"n1\"\n        VectorValue = 1.45\n      }\n      Parameter {\n        Name = \"n2\"\n        VectorValue = 1.44\n      }\n      Parameter {\n        Name = \"exponent_g\"\n        VectorValue = 1.9\n      }\n      Parameter {\n        Name = \"radius\"\n        VectorValue = 9.5e-6\n      }\n    }\n  }\n}\n\nDefinition of the geometry:\n\n• layout.jcm [ASCII]\n\nLayout2D {\n  UnitOfLength = 1.0e-6\n  MeshOptions {\n    MaximumSideLength = 15\n  }\n  Objects {\n    Circle {\n      Name = \"Cladding\"\n      DomainId = 1\n      Priority = -1\n      Radius = 50.0\n      RefineAll = 2\n      Boundary {\n        BoundaryId = 1\n        Class = Domain\n      }\n    }\n    Circle {\n      Name = \"Core\"\n      DomainId = 2\n      Priority = 1\n      Radius = 9.5\n      MeshOptions {\n        CurvilinearDegree = 2\n        MaximumSideLength = 4\n      }\n    }\n  }\n}\n\nThe tangential electric field components of the fields are expected to decay to zero at the boundaries of the computational domain:\n\n• boundary_conditions.jcm [ASCII]\n\nBoundaryCondition {\n  BoundaryId = 1\n  Electromagnetic = TangentialElectric\n}\n\nAccuracy settings and post-process definitions (here, also the spatially dependent permittivity field is exported for visualization/cross-checking purposes):\n\n• project.jcmp [ASCII]\n\nProject {\n  InfoLevel = 3\n  Electromagnetics {\n    TimeHarmonic {\n      PropagatingMode {\n        Lambda0 = 1.55e-06\n        FieldComponents = ElectricXYZ\n        SelectionCriterion {\n          NearGuess {\n            Guess = 1.45\n            NumberEigenvalues = 2\n          }\n        }\n        Accuracy {\n          FiniteElementDegree = 3\n          Precision = 1e-4\n          Refinement {\n            MaxNumberSteps = 1\n          }\n        }\n      }\n    }\n  }\n}\nPostProcess {\n  ExportFields {\n    FieldBagPath = \"project_results/fieldbag.jcm\"\n    OutputFileName = \"project_results/permittivity_field.jcm\"\n    OutputQuantity = RelPermittivity\n    Cartesian {\n      NGridPointsX = 150\n      NGridPointsY = 150\n    }\n  }\n}\nPostProcess {\n  ExportFields {\n    FieldBagPath = \"project_results/fieldbag.jcm\"\n    OutputFileName = \"project_results/e_field_cartesian.jcm\"\n    OutputQuantity = ElectricFieldStrength\n    Cartesian {\n      NGridPointsX = 150\n      NGridPointsY = 150\n    }\n  }\n}\n\nA visualization of the field intensity distribution of a computed mode is shown in the figure below." ]
[ null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/5fd92ac684e9e44f799e23c003c2808b39baa241.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/3e0c2436e9fc63b3fb706b6fde5c44cdfa79508d.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/85d80ffcefeafcb78aa01141023c28ced692d8f0.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/c9bc11aab3388de4901b1dab593c1014a84d8c67.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/cc84a8773eeef4b8d07b8931325511ce6e6b1e06.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/b903fecdad883cb5ebb0ee447882a2bd1a69040c.png", null, "https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/_images/math/55af95b63d78c2e45382746b45a9d8c1639116ee.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5468418,"math_prob":0.97932744,"size":3133,"snap":"2022-05-2022-21","text_gpt3_token_len":1003,"char_repetition_ratio":0.10067114,"word_repetition_ratio":0.25830257,"special_character_ratio":0.3511012,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9827635,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T07:32:38Z\",\"WARC-Record-ID\":\"<urn:uuid:636154fa-7930-4e54-83f6-55569f126f11>\",\"Content-Length\":\"15297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6727094-7100-422d-83c2-4bb1b93e57aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:574339fb-747c-4ca8-b1d0-5d9a7a15401b>\",\"WARC-IP-Address\":\"85.13.128.127\",\"WARC-Target-URI\":\"https://www.docs.jcmwave.com/JCMsuite/html/EMTutorial/f14bcdd9fee3e9e79ef3a6e70dc63d1e.html?version=5.0.0\",\"WARC-Payload-Digest\":\"sha1:ODY7GWODQWO6PY6MRKDHBXOJVV3MYW3T\",\"WARC-Block-Digest\":\"sha1:45HC6ERCBPSTD7XZ5ZDD4WBWXHYO6JM5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662531762.30_warc_CC-MAIN-20220520061824-20220520091824-00427.warc.gz\"}"}
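The graded-index profile defined in materials.jcm can be evaluated directly. A small check, with the parameter values copied from the tutorial's Parameter blocks, confirms that the permittivity equals n1 squared on the fiber axis and drops to the cladding value n2 squared = 2.0736 at the core radius, so the profile is continuous across the core-cladding boundary:

```python
from math import isclose

# Parameters from the tutorial's materials.jcm.
n1, n2, g, R = 1.45, 1.44, 1.9, 9.5e-6
delta = (n1 ** 2 - n2 ** 2) / (2 * n1 ** 2)

def rel_permittivity(r):
    """epsilon(r) = n1^2 * (1 - 2*delta*(r/R)**g) inside the core (0 <= r <= R)."""
    return n1 ** 2 * (1 - 2 * delta * (r / R) ** g)

print(rel_permittivity(0.0))  # ~2.1025, i.e. n1^2 on the axis
print(rel_permittivity(R))    # ~2.0736, i.e. n2^2, the cladding permittivity
```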
https://profdoc.um.ac.ir/paper-abstract-1085353.html
[ "Soft Computing ( ISI ), Volume (25), No (15), Year (2021-8), Pages (9789-9810)

#### Title : ( A risk index to find the optimal uncertain random portfolio )

#### Abstract

In a stock exchange, some candidate securities may possess sufficient transaction data while others are newly listed and lack it. If an investor wants to choose a portfolio containing both types of securities, neither probability theory nor uncertainty theory alone can be applied; in this case, chance theory is useful. Accordingly, this paper discusses the uncertain random portfolio selection problem, where the portfolio contains some candidate securities with sufficient transaction data and some newly listed ones without it. The paper introduces a new risk criterion and proposes a new type of mean-risk model based on this criterion to find the optimal uncertain random portfolio. Finally, a numerical example is presented for the sake of illustration.

#### Keywords

Uncertain random variable, Risk index, Optimal uncertain random portfolio, Mean-risk model, Optimization, Sensitivity analysis", "@article{paperid:1085353,
title = {A risk index to find the optimal uncertain random portfolio},
journal = {Soft Computing},
year = {2021},
volume = {25},
number = {15},
month = {August},
issn = {1432-7643},
pages = {9789--9810},
numpages = {21},
keywords = {Uncertain random variable; Risk index; Optimal uncertain random portfolio; Mean-risk model; Optimization; Sensitivity analysis},
}" ]
https://scikit-survival.readthedocs.io/en/latest/api/generated/sksurv.datasets.load_arff_files_standardized.html
[ "sksurv.datasets.load_arff_files_standardized(path_training, attr_labels, pos_label=None, path_testing=None, survival=True, standardize_numeric=True, to_numeric=True)[source]#

Parameters:
• path_training (str) – Path to ARFF file containing data.

• attr_labels (sequence of str) – Names of attributes denoting dependent variables. If `survival` is set, it must be a sequence with two items: the name of the event indicator and the name of the survival/censoring time.

• pos_label (any type, optional) – Value corresponding to an event in survival analysis. Only considered if `survival` is `True`.

• path_testing (str, optional) – Path to ARFF file containing hold-out data. Only columns that are available in both training and testing are considered (excluding dependent variables). If `standardize_numeric` is set, data is normalized by considering both training and testing data.

• survival (bool, optional, default: True) – Whether the dependent variables denote event indicator and survival/censoring time.

• standardize_numeric (bool, optional, default: True) – Whether to standardize data to zero mean and unit variance. See `sksurv.column.standardize()`.

• to_numeric (bool, optional, default: True) – Whether to convert categorical variables to numeric values. See `sksurv.column.categorical_to_numeric()`.

Returns:

• x_train (pandas.DataFrame, shape = (n_train, n_features)) – Training data.

• y_train (pandas.DataFrame, shape = (n_train, n_labels)) – Dependent variables of training data.

• x_test (None or pandas.DataFrame, shape = (n_test, n_features)) – Testing data if path_testing was provided.

• y_test (None or pandas.DataFrame, shape = (n_test, n_labels)) – Dependent variables of testing data if path_testing was provided." ]
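The `standardize_numeric=True` option rescales each numeric column to zero mean and unit variance. As a rough illustration of that transform, here is a minimal plain-Python sketch (the `standardize` helper below is hypothetical and assumes population variance; it is not sksurv's actual implementation, which lives in `sksurv.column.standardize()` and operates on pandas objects):

```python
def standardize(values):
    """Rescale a numeric column to zero mean and unit variance.

    Illustrative sketch only; assumes population variance (divide by n)
    and a non-constant column (std > 0).
    """
    n = len(values)
    mean = sum(values) / n
    # Population standard deviation of the column.
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]


scaled = standardize([2.0, 4.0, 6.0, 8.0])
```

Note that when `path_testing` is given, the documented behavior is to compute this normalization over training and testing data combined, so the two sets are scaled consistently.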