URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://www.appsapk.com/mathematics-app/ | [
"",
null,
"# Mathematics APK\n\n– formula calculation with variables, sums, products and sequences\n– solve equations (linear, quadratic, cubic, transpose)\n– plot functions, derivate, integrate, calculate roots and extremas (polynomial, rational, exponential, etc)\n– calculate tangent, asymptote, interception\n– reconstruction: calculate f(x) by given roots, points or extremas\n– f(x,y) plot in 3D\n– factoring (Bézout’s identity, Euler’s totient function, least common multiple, greatest common divisor)\n– modulo calculation with big integer and fractions\n– differential and integral calculus\n– curve sketching, limes, minima, maxima\n– linear algebra, vectors, matrices\n– unit conversion\n– number systems, binary/oct/decimal/hex calculator\n– complex number calculation\n– statistics (gaussian test, t test, binominal and normal distribution, standard deviation, average, sum, median)Supported languages:\nEnglish, German, Francais, Espanol, Italian, Portuguese\n\nMathematics is a powerful calculation software for your android smartphone.\n\nFEATURES\n\nCalculate any formula you want and show them in a 2d or 3d plot. The natural display shows fractions, roots and exponents as you would expect it from mathematics.\n\nIn a few seconds you derivate or integrate your desired function, calculate the zero points of your function und show them in the function plot. See all maxima, minima or inflection points in one view.\n\nThe easy way of use allows you to solve linear equations in just a moment. Or transform your mathematical, physical or chemical equation to any unknown variable.\n\nYou often needs to calculate with binary, octal or hexadecimal number systems? No problem! You can mix them together in one calculation even by using decimal digits. But that’s not enough! You can also calculate with any other number system with base 2 to 18.\n\nFrom time to time you may need to convert units to another one, like Celsius to Fahrenheit, miles to kilometre, inches to foot and so on.\n\nYou will also be able to calculate with vectors, matrices and determinants.\n\nAll this features are combined in this app and will make your mathematical life a lot easier.\n\n#### What’s New\n\n– share your calculations via Bluetooth\n– time calculator\n\nName\nMathematics\nPackage\nde.daboapps.mathematics\nVersion\n3.3\nSize\n1.91 MB\nInstalls\nDeveloped By\ndaboApps\n\n• pashuram chaudhary says:\n\nexcellent apps for st.\n\n• sunil says:\n\nBeautiful app!\n\n• computer repair hackney says:\n\nI would have loved to have had this app when I was in school. Although found it to be a little slow.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20120%20120%22%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8450863,"math_prob":0.9799419,"size":2358,"snap":"2020-45-2020-50","text_gpt3_token_len":529,"char_repetition_ratio":0.11852167,"word_repetition_ratio":0.0,"special_character_ratio":0.20653096,"punctuation_ratio":0.15898618,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868209,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T10:35:58Z\",\"WARC-Record-ID\":\"<urn:uuid:f4dd8cd4-96a9-42a8-aaa7-42126eb6a57b>\",\"Content-Length\":\"450722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:591b26e5-c720-46b5-9f87-8d87703541dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bf1c47d-ef45-4a83-9cd7-91f337b71514>\",\"WARC-IP-Address\":\"104.26.2.18\",\"WARC-Target-URI\":\"https://www.appsapk.com/mathematics-app/\",\"WARC-Payload-Digest\":\"sha1:RKRT7SJLQUA7V24Z3MB3XNBVHIMSU7HZ\",\"WARC-Block-Digest\":\"sha1:YQ24YF2YVOBMRIZP3BOOVWPOXY46DTBX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141197593.33_warc_CC-MAIN-20201129093434-20201129123434-00142.warc.gz\"}"} |
https://www.enotes.com/homework-help/use-integration-by-parts-integrate-integration-350459?en_action=hh-question_click&en_label=hh-sidebar&en_category=internal_campaign | [
"# Use integration by parts to integrate Integration sign, x^5 ln (x) dx\n\njeew-m",
null,
"| Certified Educator\n\ncalendarEducator since 2012\n\nstarTop subjects are Math, Science, and Social Sciences\n\nUsing integration by parts;\n\nintudv = uv-intvdu\n\nLet v=x^6 and u=lnx\n\nThen;\n\ndv = 6x^5 dx\n\ndu = 1/x * dx\n\n`intudv = int(lnx*6x^5)dx`\n\n`int(lnx*6x^5)dx= (lnx)(x^6)-int(x^6)*1/xdx`\n\n`int(6*lnx*x^5)dx = (lnx)(x^6)-intx^5dx`\n\n`int(x^5*lnx) = (1/6)[(lnx)(x^6)-x^6/6]+C`\n\ncheck Approved by eNotes Editorial\n\nbeckden",
null,
"| Certified Educator\n\ncalendarEducator since 2011\n\nstarTop subjects are Math, Science, and Business\n\nUse LIPET (ln, inv trig, polynomial, exponential, trig) to find u, we use `u=ln(x)` , so `dv=x^5.dx`\n\n`du = 1/x dx` , `v = x^6/6 `\n\nwe use `int u dv = uv - int v du`\n\n`int x^5 ln(x) dx = ln(x) x^6/6 - int x^6/6 1/x dx`\n\n`int x^5 ln(x) dx = (x^6 ln(x))/6 - int x^5/6 dx`\n\n`int x^5 ln(x) dx = (x^6 ln(x))/6 - 1/6 x^6/6 + C`"
] | [
null,
"https://static.enotescdn.net/images/core/educator-indicator_thumb.png",
null,
"https://static.enotescdn.net/images/core/educator-indicator_thumb.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.66288084,"math_prob":0.9994716,"size":1512,"snap":"2019-51-2020-05","text_gpt3_token_len":562,"char_repetition_ratio":0.15782493,"word_repetition_ratio":0.11764706,"special_character_ratio":0.36640212,"punctuation_ratio":0.07079646,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990344,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T14:09:22Z\",\"WARC-Record-ID\":\"<urn:uuid:cc777452-01a5-42be-af58-f9f360bdaa8b>\",\"Content-Length\":\"46434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8c84c52-0e4d-4fbb-a2b2-52aeae24d97a>\",\"WARC-Concurrent-To\":\"<urn:uuid:94b96eda-c432-4a0d-b333-172ec1c48090>\",\"WARC-IP-Address\":\"104.26.4.75\",\"WARC-Target-URI\":\"https://www.enotes.com/homework-help/use-integration-by-parts-integrate-integration-350459?en_action=hh-question_click&en_label=hh-sidebar&en_category=internal_campaign\",\"WARC-Payload-Digest\":\"sha1:L3GOST77PLSGPYPJX75LDHQS2IXKILJU\",\"WARC-Block-Digest\":\"sha1:3PRJH7ROJ5SXAGVJMCQTTCDGV6OB7ACZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594603.8_warc_CC-MAIN-20200119122744-20200119150744-00400.warc.gz\"}"} |
https://cyberleninka.org/article/n/3035 | [
"# Two Mesh Deformation Methods Coupled with a Changing-connectivity Moving Mesh Method for CFD ApplicationsAcademic research paper on \"Medical engineering\"",
null,
"CC BY-NC-ND",
null,
"0",
null,
"0\nShare paper\nProcedia Engineering\nOECD Field of science\nKeywords\n{\"Moving mesh\" / \"Inverse Distance Weighted interpolation\" / \"Elasticity-based mesh deformation\" / \"ALE simulations\"}\n\n## Abstract of research paper on Medical engineering, author of scientific article — Nicolas Barral, Edward Luke, Frédéric Alauzet\n\nAbstract Three-dimensional real-life simulations are generally unsteady and involve moving geometries. Industry is currently very far from performing such body-fitted simulations on a daily basis, mainly due to the robustness of the moving mesh algorithm and their extensive computational cost. A moving mesh algorithm coupled to local mesh optimizations has proved its efficiency in dealing with large deformation of the mesh without re-meshing. In this paper, the coupling of this algorithm with two mesh deformation techniques is studied: an elasticity PDE-based one and an explicit Inverse Distance Weighted interpolation one, and both techniques are compared. The efficiency of this method is demonstrated on challenging test cases, involving large body deformations, boundary layers and large displacements with shearing. Finally, the moving mesh algorithm is coupled to a CFD flow solver.\n\n## Academic research paper on topic \"Two Mesh Deformation Methods Coupled with a Changing-connectivity Moving Mesh Method for CFD Applications\"\n\n(I)\n\nCrossMark\n\nAvailable online at www.sciencedirect.com\n\nScienceDirect\n\nProcedía Engineering 82 (2014) 213-227\n\nProcedía Engineering\n\nwww.elsevier.com/locate/procedia\n\n23rd International Meshing Roundtable (IMR23)\n\nTwo mesh deformation methods coupled with a changing-connectivity moving mesh method for CFD applications\n\nNicolas Barrala, Edward Lukeb, Frederic Alauzeta*\n\naGamma3 Team, INRIA Paris-Roquencourt, Domaine de Voluceau, Rocquencourt, BP 105, 78153, Le Chesnay Cedex, France bDepartment of Computer Science and Engineering, Box 9637, Mississippi State University, MS 39762, United States\n\nAbstract\n\nThree-dimensional real-life simulations are generally unsteady and involve moving geometries. Industry is currently very far from performing such body-fitted simulations on a daily basis, mainly due to the robustness of the moving mesh algorithm and their extensive computational cost. A moving mesh algorithm coupled to local mesh optimizations has proved its efficiency in dealing with large deformation of the mesh without re-meshing. In this paper, the coupling of this algorithm with two mesh deformation techniques is studied: an elasticity PDE-based one and an explicit Inverse Distance Weighted interpolation one, and both techniques are compared. The efficiency of this method is demonstrated on challenging test cases, involving large body deformations, boundary layers and large displacements with shearing. Finally, the moving mesh algorithm is coupled to a CFD flow solver.\n\nPeer-review under responsibility of organizing committee of the 23rd International Meshing Roundtable (IMR23) Keywords: Moving mesh; Inverse Distance Weighted interpolation; Elasticity-based mesh deformation; ALE simulations\n\n1. Introduction\n\nThe growing expectations of the industrial world for simulations involving moving geometries have given a boost to this research field for the last decades. Moving mesh simulations are currently used in many research fields: ballistics, biomedical, aeronautics, transports... These simulations combine the difficulties associated with unsteadiness, mesh movement and fluid-structure or structure-structure coupling, and are generally hard to perform and very costly in terms of CPU time.\n\nThree leading methodologies have been designed in the literature to handle geometry displacements during numerical simulations: body-fitted approach [6,7,10,17,23], Chimera methods [8,9,21] and immersed/embedded boundary methods [18,22]. Each of them has its own strengths and weaknesses. In this work, we consider the first class of methods where the time-evolving computational domain is treated with a body-fitted approach, which means that the computational mesh follows time-evolving geometries in their movement. Only simplicial unstructured meshes are considered mainly because several fully-automatic software packages are available to generate such meshes [16,20].\n\n* Corresponding author. Tel.: +33 1 39 63 57 93. E-mail address: [email protected]\n\nPeer-review under responsibility of organizing committee of the 23rd International Meshing Roundtable (IMR23) doi: 10.1016/j.proeng.2014.10.385\n\nAnother reason is that our final objective is to use highly anisotropic metric-based mesh adaptation [2,3] on moving mesh simulations, and simplicial elements still remain the most flexible to handle this technology.\n\nUnfortunately, the fixed-connectivity constraint imposed by the classical flow solver numerical scheme Arbitrary-Lagrangian-Eulerian (ALE) framework considerably limits the efficiency of body-fitted moving mesh techniques. If the problem induces large displacements of the geometry, the mesh distortion can adversely affect the accuracy and stability of the numerical schemes and often make the ALE approach unpractical.\n\nTwo different methods have been proposed to handle large displacements moving mesh simulations. The first one consists of moving the mesh as much as possible, keeping the connectivity fixed, and solving the equations in a fully ALE manner, until the mesh becomes too bad and a global or local remeshing is performed, the solution being interpolated on the new mesh, see for instance . In the case of displacements with large shearing, a large number of remeshing is performed, which is prohibitive in terms of CPU time. The second approach aims at maintaining the best possible mesh quality while moving using several local mesh optimizations such as vertex addition or collapsing and connectivity changes [10,13]. This strategy is extremely robust and maintains a good mesh quality throughout the simulation, but involves a large number of solution interpolations after each mesh modification, and the numerical method does not fully comply with the ALE framework.\n\nA new approach for moving mesh simulations has been proposed in , which is compatible with the ALE framework and able to handle anisotropic adapted meshes. First, the number of solutions required for mesh deformation -achieved through an elasticity-based PDE method- is significantly reduced. Second, the mesh deformation algorithm is coupled with local mesh quality optimizations[12,15] using only vertex smoothing and edge/face swapping. This mesh-connectivity-change operator is especially powerful in handling shear and large deformation movement, since no re-meshing is required.\n\nIn this paper, further demonstration of the robustness of this algorithm is provided: we propose to use an explicit interpolation method to compute the mesh deformation, and compare this method to the elasticity-based one on new challenging cases. Finally, this algorithm is successfully coupled to a three-dimensional CFD ALE solver.\n\nThis paper begins with a reminder of the changing-connectivity moving mesh algorithm. The two methods used to compute the mesh deformation are then described and compared on some challenging examples. The paper ends with CFD applications, where an ALE solver has been coupled to the moving mesh method.\n\n2. Mesh-connectivity-change moving mesh strategy\n\nThe strategy developed to move the inner mesh following the moving boundaries involves two main parts. First, the computation of the mesh deformation: inner vertices are assigned a trajectory, and thus a position for future time steps. Second, the optimization of the mesh, in which the positions computed in the mesh deformation phase are corrected and connectivity changes are performed to ensure a good mesh quality. This strategy has proven to be very powerful : large displacement of complex geometries can be performed preserving a good mesh quality without any remeshing. Note that, in the scope of this paper, the surface meshes are not optimized (the surface vertices are not moved on the surface).\n\n2.1. Mesh deformation phase\n\nDuring the first phase of the algorithm, the displacement of all volume vertices, i.e., the mesh deformation, is computed. To this end, two methods are considered in this paper: an elasticity-based PDE method or an explicit Inverse Distance Weighted (IDW) interpolation method. Both will be discussed more in depth in Section 3. In either cases, the cost of the mesh deformation solutions can be reduced by: (i) using a dedicated coarser mesh to compute the mesh deformation (ii) rigidifying some regions around tiny complex details of the geometries.\n\n2.2. Improving mesh deformation algorithm efficiency\n\nMesh deformation algorithms, are known to be an expensive part of dynamic mesh simulations, as their solution is generally required at each solver time step (or each few solver time steps). To reduce the number of such solutions, we\n\npropose to solve the mesh deformation problem for a large time frame of length At instead of doing it at each solver time step St. While there is a risk of a less effective mesh displacement solution, it is a worthwhile strategy if our methodology is able to preserve the mesh quality along large displacements. Solving the mesh deformation problem once for large time frame is problematic in the case of curved or accelerated trajectories of the bodies. To enhance the mesh deformation prescription, accelerated-velocity curved, i.e., high-order, vertex trajectories are computed.\n\nThe paths of inner vertices can be improved by providing a constant acceleration a to each vertex, which results in an accelerated and curved trajectory. During time frame [t, t + At], the position and the velocity of a vertex are updated as follow:\n\nx(t + St) = x(t) + St v(t) + — a and v(t + St) = v(t) + St a.\n\nPrescribing a velocity and an acceleration vectors to each vertex requires solving two mesh deformation problems. If inner vertex displacement is sought for time frame [t, t + At], boundary conditions are imposed by the location of the body at time t + At/2 and t + At. These locations are computed using body velocity and acceleration. Now, to define the trajectory of each vertex, the velocity and acceleration are deduced from evaluated middle and final positions:\n\nAt v(t) = -3 x(t) + 4 x(t + At/2) - x(t + At)\n\n—a = 2 x(t) - 4x(t + At/2) + 2x(t + At) .\n\nIn this context, it is mandatory to certify that the mesh motion remains valid for the whole time frame [t, t + At], which is done computing the sign of the volumes of the elements all along their path .\n\n2.3. Local mesh optimization\n\nIt has been proposed to couple mesh deformation with local mesh optimization using smoothing and generalized swapping to achieve efficiently large displacement in moving mesh applications. Connectivity changes are really effective in handling shear and removing highly skewed elements. Here, we briefly recall the mesh optimization procedure in the generalized context of metric-based mesh adaptive optimization.\n\nFor 3D adapted meshes, an element's quality is measured in terms of element's shape by the quality function:\n\nV3 (Z 4(e0)2\n\nQm(K) = - e [1, +~] , (1)\n\n216 \\K\\m\n\nwhere £M(e) and \\K\\M are edge length and element volume in metric M. Metric M is a 3 x 3 symmetric positive definite tensor prescribing element sizes, anisotropy and orientations to the mesh generator. QM(K) = 1 corresponds to a perfectly regular element and QM(K) < 2 correspond to excellent quality elements, while a high value of QM(K) indicates a nearly degenerated element. For non-adapted meshes, the identity matrix I3 is chosen as metric tensor.\n\nThe first mesh optimization tool is vertex smoothing which consists of relocating each vertex inside its ball of elements, i.e., the set of elements having Pi as vertex. For each tetrahedron Kj of the ball of Pi, a new optimal position Popt for Pi can be proposed to form a regular tetrahedron in metric space. The final optimal position P0iI\" is computed as a weighted average of all these optimal positions. This way, an element of the ball is all the more dominant if its quality in the original mesh is bad.\n\nThe second mesh optimization tool to improve mesh quality is generalized swapping/local-reconnection. Let a and P be the tetrahedra's vertices opposite to the common face P1P2P3. Face swapping consists of suppressing this face and creating the edge e = a/P. In this case, the two original tetrahedra are deleted and three new tetrahedra are created. A generalization of this operation exists and consists in reconnecting the inside of shells of tetrahedra [1,12,15]. The different edge swaps are generally denoted n ^ m where m is the number of new tetrahedra. In this work, edge swaps 3 ^ 2, 4 ^ 4, 5 ^ 6, 6 ^ 8 and 7 ^ 10 have been implemented.\n\n2.4. Moving mesh algorithm\n\nThe changing-connectivity moving mesh algorithm (MMA) is described in Algorithm 1, where the different phases described above are put together.\n\nAlgorithm 1 Changing-Connectivity Moving Mesh Algorithm\n\nWhile (t < Tend)\n\n1. Solve mesh deformation: compute vertices trajectories\n\n(a) {dOy(t + A)} = Compute body vertex displacement from current translation speed vbody, rotation speed ebody and acceleration abody for [t, t + f ]\n\nd(t + f) = Solve mesh deformation problem (db^^t + f), ft)\n\n(b) {d^t + At)} = Compute body vertex displacement from current translation speed vbody, rotation speed 0body and acceleration abody for [t, t + At]\n\nd(t + At) = Solve mesh deformation problem (d^^t + At), At)\n\n(c) {v, a} = Deduce inner vertex speed and acceleration from both displacements jd(t + A), d(t + At)J\n\n(d) If predicted mesh motion is invalid then At = At/2n and goto 1. Else Tels = t + At\n\n2. Moving mesh stage with mesh optimizations While (t < Tels)\n\n(a) 6t = Get moving mesh time step (Wk, v, CFLgeom)\n\n(b) Hk = Swaps optimization (Hk, QstWragpet)\n\n(c) v°P' = Vertices smoothing (Hk, Qmax)\n\n(d) Hk+1 = Move the mesh and update vertices speed (Hk, St, v, vopt, a)\n\n(e) Check mesh quality: stop if too distorted.\n\n(f) t = t + 6t\n\nEndWhile EndWhile\n\nIn this algorithm, two time steps appear: a large one At for the mesh deformation computation, and a smaller one 5topt corresponding to the steps where the mesh is optimized. The first one is set manually at the beginning of the computation, but can be automatically reduced if the mesh quality degrades. The second one is computed automatically, using the CFLgeom parameter as described below.\n\nA good restriction to be imposed on the mesh movement is that vertices cannot cross too many elements on a single move between two mesh optimizations. Therefore, a geometric parameter CFLgeom is introduced to control the number of stages used to perform the mesh displacement between t and t + At. If CFLgeom is greater than one, the mesh is authorized to cross more than one element in a single move. The moving geometric time step is given by:\n\nSt = CFLgeom max — , (2)\n\nP v(x)\n\nwhere h(xi) is the smallest altitude of all the elements in the ball of vertex P,, and v(x,) its velocity.\n\nAfter each mesh deformation solution, the quality of the mesh in the future is analyzed: if the quality is too low, the mesh deformation problem is solved again with a smaller time step.\n\n3. Comparison of two mesh deformation methods\n\n3.1. Elasticity-based PDE\n\nThe first method to compute the displacement of all vertices, i.e., the mesh deformation, is a PDE-based method using a linear-elasticity analogy [1,4].More precisely, the inner vertices movement is obtained by solving an elasticity-\n\nlike equation with a P1 Finite Element Method (FEM):\n\nVd + rV d\n\ndiv(<(6)) = 0, with 6 =-2-, (3)\n\nwhere < and 6 are respectively the Cauchy stress and strain tensors, and d is the Lagrangian displacement of the vertices. The Cauchy stress tensor follows Hooke's law for isotropic homogeneous medium , where v is the Poisson ratio, E the Young modulus of the material and A, p are the Lame coefficients:\n\n1 + v v\n\n<(6) = A trace(6) I d + 2 p 6 or 6(<) = < - — trace(<) Id .\n\nIn our context, v is typically chosen of the order of 0.48. This corresponds to a very soft, nearly incompressible material. Actually, this value for v corresponds to a nearly ill-posed problem. Note that the closer v is to 0.5, the harder it is to converge the Finite-Element linear system. The chosen value appears to be a good trade-off between material softness and the preservation of the iterative linear system solving algorithm efficiency. The FEM system is then solved by a Conjugate Gradient algorithm coupled with an LU-SGS pre-conditioner.\n\nTwo kinds of boundary conditions are considered. Dirichlet conditions are used to enforce strongly the displacement of the vertices located on moving boundaries. They are also used for fixed boundaries far from the moving bodies. These Dirichlet conditions can be relaxed if we want some vertices to move on a specific plane, typically on faces of a surrounding box close to the moving bodies, or even intersecting one (in the case of symmetry planes). In that case, if the plane is orthogonal to an axis of the Cartesian frame, only the displacement along this axis is enforced as a Dirichlet condition. The displacement in the two other directions are considered as degrees of freedom and is added in the FEM matrix as for normal volume points.\n\nAn advantage of elasticity-based methods is the opportunity they offer to adapt the local material properties of the mesh, especially its stiffness, according to the distortion and efforts born by each element: the way the Jacobian of the transformation from the reference element to the current element is accounted for in the FEM matrix assembly is modified. The classical P1 FEM formulation of the linear elasticity matrix leads to the evaluation of quantities of the form:\n\nfs ^ ^ dx = s\\K\\ddJ ^ , Jk dxk dxl dxk dxl\n\nwhere s is either A, p or A + 2p and \\K\\ is the volume of tetrahedron K. The above quantity is replaced by:\n\nX s ( K f ^ ^ dx = s \\ K \\ ( K № ^\n\ndxk d xl \\\\ K \\/ dxk dxl\n\nwhere x > 0 is the stiffening power and K is the reference element. This technique locally multiplies A and p by a factor proportional to \\K\\x determines how stiffer than large elements small elements are. In this work, we chose X = 1.\n\nThe main drawback of this method is that it is difficult to parallelize efficiently because of the pre-conditioner. Moreover, the system can be slow to converge if the system is too stiff.\n\n3.2. Inverse Weighted Distance method\n\nAn alternative approach to computing the displacement of the volume vertices is by means of an interpolation approach. In the interpolation approach the displacement is defined by an algebraic interpolation function which provides a smooth function to blend displacements between boundary surfaces. Algebraic methods may either utilize implicit formulations such as those used in the popular Radial Basis Function (RBF) interpolation method , or explicit formulations such as transfinite interpolation used for structured mesh generation. Generally explicit interpolation functions have performance advantages since they avoid problems associated with solving large stiff linear systems, however care must be taken to design interpolation functions that can sustain large deformations without introducing folded volume cells. For this evaluation we use the robust and fast explicit deformation method that is an algebraic technique that utilizes a reciprocal distance weighted sum of deformation functions to interpolate deformations to the volume vertices. In this algebraic deformation method it is assumed that every vertex of the boundary\n\nsurfaces have a displacement field that can be approximated by a rigid body motion (displacement plus rotation about the node). This rigid body motion is computed from the displacements of neighbor vertices by solving a least squares problem fitting a quaternion to the relative displacements of neighboring vertices. The nodal displacement field in the neighborhood of vertex i can then be represented by the function denoted si(r), that is written as\n\n3(7) = Mir + bi -f, (4)\n\nwhere Mi is a rotation matrix, b, is a displacement vector associated with the ith node, and 7 is a coordinate vector in the original mesh. The displacement field in the volume mesh is then described through a weighted average of all boundary node displacement fields as given by\n\n7(7) = ^f^. (5)\n\n£ wi(r)\n\nThe weighting function used to blend these displacements is based on a two-term inverse distance weighting function that is designed to preserve near-boundary mesh orthogonality while providing a smooth transition to blend between more distant surfaces. The nodal weight employed is defined by the function\n\nwi(r) = A, *\n\nLdef I aLdef ^\n\niir -m \\r -m\n\nwhere ri is the position of point i, A, is the area weight assigned to node i, Ldef is an estimated length of the deformation region, a is an estimated fraction of Ldef that is reserved for stiffer near body deformation, and a and b are user-defined exponents where a controls the smooth blending of deformations between surfaces while b controls the more stiff deformation close to deforming surfaces. Numerical experimentation suggests that a = 3 and b = 5 provide the best-quality for three-dimensional cases. The a parameter can be a fixed parameter or it can be dynamically computed based on the amount of deformation in the problem. Further details are described in a recent paper and not discussed here.\n\nNote, a naive implementation of the algorithm directly from Equation (5) will result in an O(n2) algorithm that will be too slow for practical applications. However, the localization provided by the reciprocal weights provides a mechanism for approximating the contributions of more distant points. Therefore the displacement function can be evaluated with a suitable tolerance utilizing a fast O(n log n) tree-code based algorithm that approximates the contributions of collections of distant surface nodes using a multi-pole expansion technique (as described in detail in ). When the algorithm is optimized using these techniques it can be very fast and highly parallelizable.\n\nThis algorithm has the advantage that it can be applied to meshes with arbitrary connectivity including adapted meshes with hanging nodes without difficulty. It is able to handle highly anisotropic mesh elements near viscous walls without difficulty and is free of numerical stiffness issues that can degrade the robustness of implicit methods for mapping deformations. However, since it is a global method that is based on surface mesh deformations only, it lacks an ability to dynamically adapt deformation to mesh conditions. For example, it is unable to adjust deformations locally to adapt to local mesh resolution requirements as can be easily accomplished with adjustments to element stiffness in PDE-based method.\n\n3.3. Numerical Examples\n\nWe now study on challenging numerical examples how those two mesh deformation methods behave when used in the changing-connectivity moving mesh algorithm. The next examples are purely moving mesh simulations and we will focus our analysis on mesh quality criteria. In particular, we show that the MMA applies successfully to rigid body with large shearing and deformable bodies and we give an insight into a strategy to move boundary layers on deformable bodies.\n\n3.3.1. Interpenetrating cylinders\n\nThe first example example is a challenging academic test case with rigid bodies, and has been presented in . Two cylinders interpenetrate each other and there is only one layer of elements between both cylinders, i.e. internal edges\n\nTable 1. Interpenetrating cylinders test case. Total number of mesh deformation solutions and mesh quality optimization steps to achieve the displacement, final mesh statistics and total number of swaps with the changing-connectivity MMA.\n\n# mesh deform. # mesh optim. Q™™ 1 < Qeni < 2 QZZn # swaps\n\nElasticity 50 429 1.5 97.1% 79 852,514\n\nIDW 50 570 1.4 98.2% 30 1,408,680\n\nconnect both cylinders. The geometry of the domain is shown in Figure 1. To make this test case more complicated, we add rotation to the top cylinder:\n\n= (0,0,5), vtop = (0,0, -0.1) and &ottom = (0,0,0), vbo\"om = (0,0,0.1).\n\nThe initial mesh is composed of 34567 vertices and 188 847 tetrahedra.\n\nA shear layer appears between the two cylinders, that is only one element wide. Therefore, the swapping optimization has just a little of freedom to act. Moreover, it is obvious that too large At will lead to an invalid mesh: such a simulation requires a relatively large number of mesh deformation solutions. For both mesh deformation methods, 50 mesh deformation steps are set, i.e. At = 1, and we set CFLgeom = 2.\n\nThe changing-connectivity MMA proves to be extremely robust in both cases, keeping the mesh worst element quality below 100 and ensuring a final excellent mean quality of 1.5 . Because of the shear, a large number of swaps (around 1 million) have been performed. For that test case, results with both mesh deformation methods are very similar. Moreover, Figure 1 (center) points out a major difference between elasticity and IDW: elasticity creates rotational displacements in the mesh above and under the cylinder, that push vertices in front, attract vertices behind the moving inner cylinder, and thus move vertices to empty areas instead of crushing them, while the IDW method tends to produce straighter displacements. Table 1 provides a comparison of the MMA with the two mesh deformation methods.\n\n3.3.2. Squeezing can\n\nThe next test case deals with deformable geometries. It represents a can which is squeezed. The can is a cylinder of length 5 units and diameter 2 units inside a spherical domain of radius 10 units. The can is centered at the origin and is aligned along the z-axis. The simulation runs up to Tend = 1 and the temporal deformation function is given by:\n\ni x(t) = X(0) i , , ,,\n\nif |z(0)| < 2 then x(t) = I y(t) = (a + (1 - a)(1 - t2)) y(0) with a = 2;^ (1.2 - cos glz(0)|)) . {z(t) = z(0) .\n\nThe can deformation is shown in Figure 2. For this case, the moving mesh simulation is done for the inside mesh of the can geometry. The mesh is composed of 7 018 vertices and 36 407 tetrahedra. Four mesh deformation steps are initially prescribed, and CFLgeom is fixed to 1. Statistics for the case are given in Table 2.\n\nThe mesh is successfully moved,as shown on Figure 2. We clearly see that the elasticity methods has moved vertices away from the squeezed region while keeping an excellent mesh quality, whereas the IDW tends to let them crush. For this case, the swap optimization are essential to preserve the quality of the mesh all along the movement.\n\n3.3.3. Bending beam\n\nThis test case has been proposed in . It is a general deformation test on a three-dimensional bending beam. The beam is 8 units long, 2 units wide and 0.1 unit thick inside a spherical domain of radius 25 units. The beam is deformed such that the beam center-line is mapped onto a circular arc:\n\nr x(t) = (R - z(0)) sin(e) x(0) „ _ ,\n\nx(t) = r y(t) = y(0) with e and R = Rmin + (Rmax - Rmin)\"\n\n—— — — —111111 • \\ nitiA - -111111/ -, _\n\nz(t) = z(0) + (R - z(0))(1 - cos(fl)) R 1 - e\n\nFig. 1. Interpenetrating cylinders test case. Evolution of the mesh with the elasticity-based (top) and IDW (bottom) mesh deformation methods. Between the initial (left) and final (right) meshes, the displacements given by the mesh deformation steps at times 12, 18 and 25 is shown.\n\nTable 2. Squeezing can test case. Total number of mesh deformation solutions and mesh quality optimization steps to achieve the displacement, final mesh statistics and total number of swaps with the changing-connectivity MMA.\n\n# mesh deform. # mesh optim. Qmean ^end 1 < Qend < 2 QWOrSt ^overall # swaps\n\nElasticity (in.) 6 27 1.5 93.0% 11 16.255\n\nIDW (in.) 4 27 1.48 92.5% 11 11,940\n\nFig. 2. Squeezing can test cases. From left to right: initial mesh, displacement given by the elasticity-based and IDW mesh deformation problems at times 0.75, and final meshes with elasticity and IDW.\n\nwhere Rmin and Rmax are the radii of curvature at t = 0 and t = 1, respectively. k is a parameter that controls how rapidly the radius and center of curvature move from the initial to final values. For this test case, Rmin = 2, Rmax = 500 and k = 10. The initial mesh is composed of 58 315 vertices and 322 971 tetrahedra. We set At = 0.2 and CFLgeom = 1.\n\nAgain, this large deformation problem is solved without any difficulty, requiring only a few mesh deformation steps, the final mesh is excellent as shown in Table 3 and no skewed element appears inside the mesh. There again, we can see in Figure 3 that the elasticity-based method tends to move points farther from the body than the IDW method, thus requiring significantly more swaps. However, the performances of the elasticity-based and IDW methods are very close, cf. Table 3.\n\nTable 3. Bending beam test case. Total number of mesh deformation solutions and mesh quality optimization steps to achieve the displacement, final mesh statistics and total number of swaps with the changing-connectivity MMA.\n\n# mesh deform. # mesh optim. Qmean ^end 1 < Qend < 2 QWOrSt ^overall # swaps\n\nElasticity IDW 5 4 211 40 1.3 1.3 99.9% 99.8% 4.4 5.9 479,182 149,536\n\nFig. 3. Bending beam test case. From left to right: initial mesh, displacement given by the elasticity-based and IDW mesh deformation problems at time 0.8, and final meshes with elasticity and IDW.\n\n3.3.4. Boundary layers on deformable geometries\n\nDealing with boundary layers (BL) meshes for rigid bodies is rather easy: the whole mesh layers are rigidified and moved together with the body. However, it is a lot more difficult with deformable bodies, since the mesh structured layers must both follow the deformation of the body and keep their structure. It was a priori not certain that the movement provided by the elasticity or IDW step would be precise enough to keep that structure. In the sequel we demonstrate that both methods can move one thick layer with a good accuracy, and that such layers can also be moved correctly with the elasticity method.\n\nOne boundary layer. For this case, we consider the bending beam studied previously, to which we add one thin structured mesh layer of height a quarter of the height of the beam, see Figure 4. The movement of the beam is the same and no swap or smoothing optimizations are done in the BL. At the end of the simulation, we observe that the structured aspect of the BL has been well preserved. In Table 4, we gather quality indicators depending on different moving mesh parameters. We consider the variation of the distance of the layer vertices to the body, that is expected to remain constant in ideal cases. We can see that taking acceleration into account in the trajectories is very important: indeed, as the movement of the beam is accelerated, the layer under the beam tends to thicken while the layer above it tends to be crushed. We can also see that less IDW steps are required to reach an equivalent mean quality, but the maximal distance variation is less good.\n\nThis result is very promising, because the structured aspect of the BL mesh is well preserved. This lets us consider a clever strategy to move BL meshes with deformable geometries.\n\nTable 4. Bending beam with one BL. Mean and maximal relative distance variations ¿dx, in the case of 10 elasticity solutions, 50 elasticity solutions with linear (lin.) trajectories (without taking acceleration into account), 50 elasticity solutions with quadratic (quad.) trajectories.\n\nMean relative ¿dx (in %) 27 9.9 5.7 6.4\n\nMax relative ¿dx (in %) 143 66 19 60\n\nFig. 4. One BL test case. Initial mesh (a) and close up on the BL (b), final mesh with elasticity (c) and IDW (d), and close-ups on the final BL meshes with elasticity (e) and IDW (f).\n\nSeveral boundary layers. The elasticity mesh deformation method is also actually capable of moving several thinner BL, provided the mesh deformation time step is small enough to handle the displacement of the smallest elements against the body. The simulation was run with a boundary layer containing 10 thin layers (the smallest height is 0.0017 units): the structure inside the layer is still well preserved all along the movement, and the quality of the layer looks good even at the end of the deformation of the beam. Close-ups on the BL meshes are show in Figure 5. We were not able to move these layers with the IDW mesh deformation, which needs to be further investigated.\n\n4. Moving-mesh ALE computations\n\nThe changing-connectivity moving mesh algorithm described above is coupled to our in-house flow solver to run CFD simulations with moving geometries. The compressible Euler equations are solved with a Finite Volume method. The displacements of the vertices require the use of the ALE formulation of these equations. As regards the temporal accuracy, the considered SSPRK schemes are based on the strict application of the Discrete Geometrical Conservation Law (DGCL). Details about the ALE solver are given in . The absence of remeshing preserves the quality of the solution, as it is free of most interpolation errors due to solutions transfers from one mesh to another.\n\nThe flow solver is integrated to Algorithm 1 as follows. The solver iterations are embedded in the optimization phase, so that the mesh is moved after each solver iteration, but optimizations are performed only when the optimization time (defined by the CFLgeom parameter) is reached. If necessary, the solver time step is truncated to perform optimizations at the correct time.\n\nThe efficiency of the method is demonstrated on two examples involving complex moving geometries. Both simulations were run using the IDW interpolation and the elasticity-based mesh deformation methods.\n\n4.1. Vortical wake of a F117 aircraft nosing up\n\nThe second example is a subsonic F117 aircraft nosing up, that creates a vortical wake. An inflow of air at Mach 0.4 arrives in front of the aircraft, initially in horizontal position that noses up, stays up for a while, then noses down. In this example, the aicraft rotates around its center of gravity. Let T = 1 s be the characteristic time of the movement and 6max = 20o the maximal angle reached, the movement is defined by its angle of rotation, of which the evolution is divided in 7 phases. Phase (i) (0 < t < T/2) is an initialization phase, during which the flow around the aircraft is established. Phase (ii) (T < t < T) and (iii) (T < t < 3r) are respectively phases of accelerated and decelerated ascension. Vortices start to grow behind the aircraft, and they expand during phase (iv) (3T < t < TT), where the aircraft stays in upward position. Phase (v) (TT < t < 4T) and (vi) (4T < t < 9r) are phases of accelerated and decelerated descent, the vortices start to move away and they slowly disappear in phase (vii) (9rt < 5T). Wall conditions are imposed on the faces of the surrounding box and slipping conditions on the aircraft.\n\nThe initial mesh has 501,859 vertices and 3,012,099 tetrahedra. The mesh used and snapshots of the solutions are shown in Figures 6 and 7. A difficulty of this test case is the uneven movement of the aircraft, with strong acceleration and deceleration during the phases of ascension and descent, and the inversion of the acceleration in the middle of these phases. It is necessary to solve enough mesh deformation problems to avoid that the body boundaries cross small elements close to it. Statistics of the moving mesh characteristics are show in Table 5. We can see that the two proposed mesh deformation methods produce rather similar results in terms of mesh quality, but the case requires only half as many mesh deformation steps if the IDW method is used.\n\nTable 5. Nosing up F117 test case. Total number of mesh deformation solutions and mesh quality optimization steps to achieve the displacement, final mesh statistics and total number of swaps with the changing-connectivity MMA.\n\n# mesh deform. # mesh optim. Qmean 1 < Qend < 2 QWorSt ^overall # swaps\n\nElasticity 24 66 1.4 99.8% 19 698 755\n\nIDW 12 37 1.4 99.8% 19 713 247\n\nFig. 6. Test case of the nosing up F117. semi-structured meshes in initial horizontal position, and upward position.\n\n4.2. Two F117 aircraft flight paths crossing\n\nThe first example is modeling two F117 aircrafts having crossing flight paths. This problem illustrates the efficiency of the connectivity-change moving mesh algorithm in handling large displacements of complex geometries without any remeshing. When both aircrafts cross each other, the mesh deformation encounters a large shearing due to the opposite flight directions. The connectivity-change mesh deformation algorithm handles easily this complex displacement thanks to the mesh local reconnection. Therefore, the mesh quality remains very good during the whole displacement.\n\nAs concerns the fluid simulation, the aircrafts are moved at a speed of Mach 0.8, in an initially inert uniform fluid: at t = 0 the speed of the air is null everywhere. The rotation speed of the aircrafts is set to a tenth of their translation speed. Transmitting boundary conditions are used on the sides of the surrounding box, and slipping conditions are imposed on the two F117 bodies. After a phase of initialization, the flow is established when the two F117s pass each other.\n\nThe initial mesh has 586,610 vertices and 3,420,332 tetrahedra. Some statistics on the moving mesh aspect of the simulation are shown in Table 6. We can see that both methods keep an excellent mesh quality all along the simulation. We used the minimal number of mesh deformations possible, and the IDW method again requires half as many mesh deformation steps, but then requires almost five times more swaps for the whole simulation. This is due to the fact that with the elasticty-based approach, vertices tend to bypass the moving bodies, while their trajectories are straighter with the IDW, thus creating a lot of shearing. In Figure 8, we show both the moving mesh aspect of the simulation and the flow solution at different time steps.\n\nTable 6. Two F117s test case. Total number of mesh deformation solutions and mesh quality optimization steps to achieve the displacement, final mesh statistics and total number of swaps with the changing-connectivity MMA.\n\n# mesh deform. # mesh optim. Qmean x^end 1 < Qend < 2 QWOrSt '¿overall # swaps\n\nElasticity 43 1563 1.4 99.8% 27 2,176,850\n\nIDW 22 1717 1.4 99.7% 26 10,396,127\n\n5. Conclusion\n\nIn this paper, new numerical evidence has been given that 3D large displacements are possible with a mesh deformation algorithm coupled with mesh optimization using only swaps and vertex movements. No re-meshing is required. The possibility to compute the mesh deformation using an explicit Inverse Distance Weighted interpolation has been added to the algorithm. Moreover, we have been dealing with deformable geometries, and have started to\n\naddress the issue of boundary layers. Finally, the moving mesh algorithm has been coupled to a CFD ALE flow solver. Several examples, both purely moving-mesh and CFD, have been exhibited.\n\nThe comparison of the two mesh deformation methods shows that both are very robust coupled with mesh optimizations. They both produce meshes of equivalent excellent qualities.It seems that in general, less IDW solutions are necessary, but the elasticity-based approach requires significantly less swap in the cases with a lot of shearing. These results need to be confirmed and refined on more cases, but they confirm the advantage of having both tools at our disposal.\n\nHowever some problematics remain. The moving of boundary layers need to be further developed. The implementation of the IDW method and the parallelization of the flow solver must be improved, in order to run CPU time comparisons of both methods. Other issues are surface optimizations, simulations of deformable geometries with possible connectivity changes and the efficient treatment of contact problems.\n\n6. Acknowledgments\n\nThis work was partially funded by the Airbus Group Foundation.\n\nReferences\n\n F. Alauzet. A changing-topology moving mesh technique for large displacements. Engineering with Computers, 30(2):175-200, 2014.\n\n F. Alauzet and A. Loseille. High-order sonic boom prediction by utilizing mesh adaptive methods. In 48th AIAA Aerospace Sciences Meeting and Exhibit, AIAA-2010-1390, Orlando, FL, USA, Jan 2010.\n\n F. Alauzet and G. Olivier. Extension of metric-based anisotropic mesh adaptation to time-dependent problems involving moving geometries. In 49th AIAA Aerospace Sciences Meeting, AIAA Paper 2011-0896, Orlando, FL, USA, Jan 2011.\n\n T.J. Baker and P. Cavallo. Dynamic adaptation for deforming tetrahedral meshes. AIAA Journal, 19:2699-3253, 1999.\n\n N. Barral and F. Alauzet. Large displacement body-fitted FSI simulations using a mesh-connectivity-change moving mesh strategy. In 44th AIAA Fluid Dynamics Conference, AIAA-2014, Atlanta, GA, USA, June 2014.\n\n J.D. Baum, R. Lohner, T.J Marquette, and H. Luo. Numerical simulation of aircraft canopy trajectory. In 35th AIAA Aerospace Sciences Meeting, AIAA Paper 1997-0166, Reno, NV, USA, Jan 1997.\n\n J.D. Baum, H. Luo, and R. Lohner. A new ALE adaptive unstructured methodology for the simulation of moving bodies. In 32th AIAA Aerospace Sciences Meeting, AIAA Paper 1994-0414, Reno, NV, USA, Jan 1994.\n\n J.A. Benek, P.G. Buning, and J.L. Steger. A 3D chimera grid embedding technique. In 7th AIAA Computational Fluid Dynamics Conference, AIAA Paper 1985-1523, Cincinnati, OH, USA, Jul 1985.\n\n F. Brezzi, J.L. Lions, and O. Pironneau. Analysis of a Chimera method. C.R. Acad. Sci. ParisSer. I, 332(7):655-660, 2001.\n\n G. Compere, J-F. Remacle, J. Jansson, and J. Hoffman. A mesh adaptation framework for dealing with large deforming meshes. Int. J. Numer. Meth. Engng, 82(7):843-867, 2010.\n\n A. de Boer, M. van der Schoot, and H. Bijl. Mesh deformation based on radial basis function interpolation. Comput. & Struct., 85:784-795, 2007.\n\n E. Briere de l'Isle and P.L. George. Optimization of tetrahedral meshes. IMA Volumes in Mathematics and its Applications, 75:97-128, 1995.\n\n C. Dobrzynski and P.J. Frey. Anisotropic Delaunay mesh adaptation for unsteady simulations. In Proceedings of the 17th International Meshing Roundtable, pages 177-194. Springer, 2008.\n\n L. E. Eriksson. Generation of boundary conforming grids around wing-body configurations using transfinite interpolation. AIAA Journal, 20:1313-1320, 1982.\n\n P.J. Frey and P.L. George. Mesh generation. Application to finite elements. ISTE Ltd and John Wiley & Sons, 2nd edition, 2008.\n\n P.L. George. Tet meshing: construction, optimization and adaptation. In Proceedings of the 8th International Meshing Roundtable, South Lake Tao, CA, USA, 1999.\n\n O. Hassan, K.A. S0rensen, K. Morgan, and N. P. Weatherill. A method for time accurate turbulent compressible fluid flow simulation with moving boundary components employing local remeshing. Int. J. Numer. Meth. Fluids, 53(8):1243-1266, 2007.\n\n R. Lohner, J.D. Baum, E. Mestreau, D. Sharov, C. Charman, and D. Pelessone. Adaptive embedded unstructured grid methods. Int. J. Numer. Meth. Engng, 60:641-660, 2004.\n\n E. Luke, E. Collins, andE. Blades. A fast mesh deformation method using explicit interpolation. J. Comp. Phys., 231:586-601, 2012.\n\n D. Marcum and N. Weatherill. Unstructured grid generation using iterative point insertion and local reconnection. AIAA Journal, 33(9):1619-1625, 1982.\n\n S. Murman, M. Aftosmis, and M. Berger. Simulation of 6-DOF motion with cartesian method. In 41th AIAA Aerospace Sciences Meeting and Exhibit, AIAA-2003-1246, Reno, NV, USA, Jan 2003.\n\n C.S. Peskin. Flow patterns around heart valves: a numerical method. J. Comp. Phys., 10:252-271, 1972.\n\n M. L. Staten, S. J. Owen, S. M. Shontz, A. G. Salinger, and T. S. Coffey. A comparison of mesh morphing methods for 3D shape optimization.\n\nIn Proceedings of the 20th International Meshing Roundtable, pages 293-310. Springer, 2011.\n\nFig. 7. Test case of the nosing up F117. Isolines on several cutting planes of the mach field at different time steps ( from left to right and up to down t = 0.49, 0.85, 1.22, 1.47, 2.08, 2.82, 3.31, 3.80, 4.28 and 4.90).\n\nFig. 8. Test case of the two F117s: Snapshots of the moving geometries and the mesh (left) and density (right)."
] | [
null,
"https://cyberleninka.org/images/tsvg/cc-label.svg",
null,
"https://cyberleninka.org/images/tsvg/view.svg",
null,
"https://cyberleninka.org/images/tsvg/download.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8911466,"math_prob":0.9398423,"size":43380,"snap":"2022-05-2022-21","text_gpt3_token_len":10103,"char_repetition_ratio":0.16313168,"word_repetition_ratio":0.04925479,"special_character_ratio":0.23241125,"punctuation_ratio":0.13171494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9783436,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T13:51:22Z\",\"WARC-Record-ID\":\"<urn:uuid:f2c27fb4-f2b5-4d91-a6b9-01e2d42892d5>\",\"Content-Length\":\"75446\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89abc429-fbf4-495b-bbd3-357f716ae7c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7f9c52d-f948-4f19-9f10-d78d8d70af2c>\",\"WARC-IP-Address\":\"159.69.2.174\",\"WARC-Target-URI\":\"https://cyberleninka.org/article/n/3035\",\"WARC-Payload-Digest\":\"sha1:4HPZNKIL5YGH62FLQFPR3HCUEDXDFR37\",\"WARC-Block-Digest\":\"sha1:TLNP6BTWZUX7TWADP27J2LWL35NLHHIS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016853.88_warc_CC-MAIN-20220528123744-20220528153744-00555.warc.gz\"}"} |
https://support.scisports.com/en/articles/4719875-aggregated-variables-and-calculations | [
"## Understand The Averages\n\n### 95' Field Players-Averages\n\nComparison between teams can be done with normalized field player averages. For the calculation of the match averages, we have excluded goalkeepers and corrected for substitutes and red cards. The values are constructed from a value per minute times 95. 95 minutes because this is on average the duration of a match.\n\nequation 1:\n\n``95 minute Fieldplayers Average = SUM(variable) / (SUM(time on field)/match time)*95``\n\nFor example:",
null,
"95' Fieldplayers Average Total Distance\n= SUM(Total_Distance) / (SUM(time on field)/actual match time)*95\n\n95' Fieldplayers Avg. Total Distance = 108.508 / (930/93)*95\n\n95' Fieldplayers Avg. Total Distance = 10.308 m\n\n### Multiple Matches Average\n\nFor the calculation of a multi-match average first, the 95' field player's average is calculated for each match. Consecutively the average of all (selected) matches is calculated.",
null,
"Averages over multiple matches are being used in our League Analysis.\n\nIf you have any questions or suggestions, please send us your feedback.\n\nUse the chat icon on the right to directly get in touch with us."
] | [
null,
"https://downloads.intercomcdn.com/i/o/277118115/a21338a5634bb82afe43cd99/image.png",
null,
"https://downloads.intercomcdn.com/i/o/277130101/51386a0378d8324c0700e8b1/image.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8777325,"math_prob":0.9808268,"size":1103,"snap":"2021-43-2021-49","text_gpt3_token_len":252,"char_repetition_ratio":0.1410373,"word_repetition_ratio":0.024096385,"special_character_ratio":0.23934723,"punctuation_ratio":0.092307694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97501236,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T16:43:09Z\",\"WARC-Record-ID\":\"<urn:uuid:04453464-669a-47a7-bdd9-5e3079251588>\",\"Content-Length\":\"13701\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f55a881-b376-492d-b261-a926e344bd9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:5be0b9d3-a56b-4b8c-be0f-52552efe6ecd>\",\"WARC-IP-Address\":\"104.45.68.100\",\"WARC-Target-URI\":\"https://support.scisports.com/en/articles/4719875-aggregated-variables-and-calculations\",\"WARC-Payload-Digest\":\"sha1:5UI4RSDQ4PAJJFBS4IJZTVLRN2QXJLEL\",\"WARC-Block-Digest\":\"sha1:V4JHBYYWYA2O34DWXURQ7APVXLA2C6TX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588216.48_warc_CC-MAIN-20211027150823-20211027180823-00189.warc.gz\"}"} |
https://www.brandenbyers.com/slides/test-them-puzzles/ | [
"# Puzzles",
null,
"",
null,
"",
null,
"",
null,
"# Coding Puzzles\n\n## Next to Last Item in a List\n\nReturn the next-to-last item from an array.\n\n## The Gas Station Problem\n\nThere is a truck driving clockwise on a circular route;\nthe truck has a gas tank with infinite capacity, initially empty.\n\nAlong the route are gas stations with varying amounts of gas available:\nyou are given a list of gas stations\nwith the amounts of gas they have available\nand the amounts of gas required to drive to the next gas station.\n\nYou must find a gas station that,\nfor a trip starting from that gas station,\nwill be able to return to that gas station.\n\n### Investigating the Torricelli point of a triangle\n\nLet ABC be a triangle with all interior angles being less than 120 degrees.\nLet X be any point inside the triangle and let XA = p, XC = q, and XB = r.\n\nFermat challenged Torricelli to find the position of X such that p + q + r was minimised.\n\nTorricelli was able to prove that if equilateral triangles AOB, BNC and AMC are constructed on each side of triangle ABC, the circumscribed circles of AOB, BNC, and AMC will intersect at a single point, T, inside the triangle.\nMoreover he proved that T, called the Torricelli/Fermat point, minimises p + q + r. Even more remarkable,\nit can be shown that when the sum is minimised, AN = BM = CO = p + q + r and that AN, BM and CO also intersect at T.\n\nIf the sum is minimised and a, b, c, p, q and r are all positive integers we shall call triangle ABC a Torricelli triangle. For example, a = 399, b = 455, c = 511 is an example of a Torricelli triangle, with p + q + r = 784.\n\nFind the sum of all distinct values of p + q + r ≤ 120000 for Torricelli triangles.\n\n# Why write puzzles?\n\n## Programmer Practice\n\n• Athletes Train\n• Musicians Rehearse\n• Lawyers & Doctors Practice\n\nTest-Driven Learning\n\n## Puzzles Are",
null,
"# Aha!\n\n#### Dopamine drip",
null,
"",
null,
"",
null,
"",
null,
"rosettacode.org/wiki/towers_of_hanoi\n\n# Parrots\n\n• Inquisitive\n• Destructive exploration\n• Playful learners\n• Pull, tear, scratch, probe\n• Haphazardly dig for food",
null,
"# Crows\n\n• Curious\n• Explore from a distance\n• Methodical learners\n• Graceful tool users\n• Precisely prod for food",
null,
"## Code like a...\n\n### PARROT\n\n• Inquisitive\n• Destructive exploration\n• Playful learners\n• Pull, tear, scratch probe\n• Haphazardly dig for food\n\n### CROW\n\n• Curious\n• Explores from a distance\n• Methodical learners\n• Graceful tool users\n• Precisely prod for food",
null,
"Flexibility in Problem Solving and Tool Use of Kea and New Caledonian Crows in a Multi Access Box Paradigm\n\n## Code Challenge Websites",
null,
"",
null,
"# How-to\n\n### Let's make some sausage!\n\n``````var add = function (a, b) {\nreturn a + b;\n};``````\nexpect(add(2, 2).to.equal(4));````expect(add(2, '2').to.equal('22'));``expect(add('con','cat').to.equal('concat'));````\n\n# Assumptions",
null,
"``````var add = function (a, b) {\nif(typeof a + b === 'string') {\nreturn 'Go fish';\n}\n...\n};``````\nexpect(add('con','cat')).to.equal('Go fish');````expect(add(2, 2)).to.be.a('number')````\n``````var add = function (a, b) {\nreturn parseInt(a) + parseInt(b);\n};``````\nexpect(add(2, '2').to.equal(4));````expect(add(2.22, 2.22).to.equal(4.44));````\n// AssertionError: expected 4 to equal 4.44``````",
null,
"# Chai.js\n\n### assert()\n\n``````expect('something').to.contain('thing');\n\nexpect('Hello World').to.be.a('string');``````\n\n# Draft Ideas\n\n### Inspired by Fiction",
null,
"",
null,
"",
null,
"# Libraries\n\n## _.shuffle\n\n``````test('shuffle', function() {\ndeepEqual(_.shuffle(), , 'behaves correctly on size 1 arrays');\nvar numbers = _.range(20);\nvar shuffled = _.shuffle(numbers);\nnotDeepEqual(numbers, shuffled, 'does change the order'); // Chance of false negative: 1 in ~2.4*10^18\nnotStrictEqual(numbers, shuffled, 'original object is unmodified');\ndeepEqual(numbers, _.sortBy(shuffled), 'contains the same members before and after shuffle');\n\nshuffled = _.shuffle({a: 1, b: 2, c: 3, d: 4});\nequal(shuffled.length, 4);\ndeepEqual(shuffled.sort(), [1, 2, 3, 4], 'works on objects');\n});``````\n\nrosettacode.org\n\n# Slasher Film\n\nReturn the surviving elements of an array\nafter chopping off n elements from the head.\n\n``````\nslasher([1, 2, 3], 2);``````\n``expect(slasher([1,2,3],2)).to.equal();``expect(slasher([1,2,3],0)).to.equal([1,2,3]);``expect(slasher([1,2,3],9)).to.equal([]);``\n``` `expect(slasher([1,2,3],2)).to.equal(['Three, you're lucky to still be.']);`\n```\n``` `expect(slasher([1,2,3],0)).to.equal(['All numbers live another day.']);`\n```\n``` `expect(slasher([1,2,3],9)).to.equal(['It is a sad day.']);`\n```\n``expect(slasher(['1,354','22','4900'],1)).to.equal(['Four thousand nine hundred and twenty-two; you are amongst the lucky few.']);``\n\nReturn a string of the the English word equivalents\nof the surviving elements (numbers) in an array\nafter chopping off n elements from the head.\nThe numbers must be listed from last to first.\nThe string should be finished with a phrase\nthat rhymes with the last number listed.\n\n## Rhyming Numbers Poem\n\nYou will be given an array of numbers.\nEach number will be between 0 and 9.\nThese numbers may be integers or strings.\nIf a number is in a string, it may be a digit or word.\nReturn a nonsense poem,\nwhere each line starts with a number from the array\nfollowed by a phrase ending in a word that rhymes with that number.\n\nNine, you smell like wine\nFive, chicken dumplings make me feel alive\nThree, you be lost deep in the sea\nTwo, please stop sniffing glue\nOne, this poem is done\n\n````var rhymingNumberPoem = function(arr) {`` // this is a poet but didn't know it```` return str;\n};``````\n``````var shortArr = [9,5,3,1];\nvar longArr = [9,5,3,1,5,4,3,5,6,3,2,3,4,7,5,2,3,4,7,10,23,4,2];\n``````\n````describe('Rhyming Number Poem', function() {```` it('should return a string', function() {\nexpect(rhymingNumberPoem(shortArr)).to.be.a('string');\n});`````` it('should return correct number of lines from short array', function() {\nexpect(rhymingNumberPoem(shortArr).split('\\n').length).to.equal(shortArr.length);\n});`````` it('should create a short rhyming poem', function() {\nexpect(rhymeChecker(rhymingNumberPoem(shortArr))).to.equal(true);\n});`````` it('should create a long rhyming poem', function() {\nexpect(rhymeChecker(rhymingNumberPoem(longArr))).to.equal(true);\n});````});````\n``````var rhymeChecker = function(str) {\nvar arr = str.split(/,|\\./).join('').split(\"\\n\");\nif(str.length < 1 || arr.split(' ').length < 3) {\nreturn false;\n}\nvar count = arr.reduce(function(a, b) {\nvar words = b.split(' '),\nfirstWord = words.toLowerCase(),\nlastWord = words[words.length - 1].toLowerCase();\nif(numberRhymes[firstWord].match(lastWord)) {\nreturn a + 1;\n}\nreturn a;\n}, 0);\nif(count < arr.length) {\nreturn false;\n}\nreturn true;\n};``````\n``````\nzero: 'hero giro tiro nonzero subzero',\none: 'won done run son none sun gun fun tion ton nun shun sion bun donne dun fon pun rien bn cen gon hun jun tonne tun ven yean naan spun fron seisin stun toucan anyone begun undone outdone outrun spline rerun redone everyone overrun overdone cretonne misdone twentyone'``````\n\nbrandenbyers.com/slides/test-them-puzzles/poem.js\n\nnpm install\n\nmocha poem\n\n## Bonfires\n\nfreecodecamp/seed/challenges/basic-bonfires.json\n\n``````\n{\n\"name\": \"Bonfire: Find the Longest Word in a String\",\n\"dashedName\": \"bonfire-find-the-longest-word-in-a-string\",\n\"difficulty\": \"1.04\",\n\"description\": [\n\"Return the length of the longest word in the provided sentence.\",\n],\n\"challengeSeed\": [\n\"function findLongestWord(str) {\",\n\" return str.length;\",\n\"}\",\n\"\",\n\"findLongestWord('The quick brown fox jumped over the lazy dog');\"\n],\n\"tests\": [\n\"expect(findLongestWord('The quick brown fox jumped over the lazy dog')).to.be.a('Number');\",\n\"expect(findLongestWord('The quick brown fox jumped over the lazy dog')).to.equal(6);\",\n\"expect(findLongestWord('May the force be with you')).to.equal(5);\",\n\"expect(findLongestWord('Google do a barrel roll')).to.equal(6);\",\n\"expect(findLongestWord('What is the average airspeed velocity of an unladen swallow')).to.equal(8);\"\n],\n\"String.split()\",\n\"String.length\"\n]\n}``````\n\n## Write Kickass Puzzles\n\n1. Devise a description\n2. Generate tests\n3. Double, triple, and quadrupal check for edge cases\n4. Solve and bathe in dopamine\n5. Share/repeat",
null,
"## @brandenbyers\n\nInteractive Intelligence\n\nFree Code Camp\n\nBalance Boards\n\nGoats\n\nmidwestjs.com/feedback"
] | [
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/drawing-sudoku.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/drawing-rubiks.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/drawing-crossword.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/puzzlejs.png",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/universal.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/dopamine-drip.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/stomachion.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/go-problem.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/tower-of-hanoi.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/parrot-box-2.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/crow-box-2.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/crow-box-2.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/code-wars-1.jpg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/free-code-camp-1.jpg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/peach-emoji.png",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/peach-emoji.png",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/fifty-shades-of-repetition.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/harry-potter-puzzle.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/enders-puzzle.svg",
null,
"https://www.brandenbyers.com/slides/test-them-puzzles/img/floating-branden-head-no-shadow.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63635683,"math_prob":0.95888,"size":6692,"snap":"2019-35-2019-39","text_gpt3_token_len":1890,"char_repetition_ratio":0.1270933,"word_repetition_ratio":0.04004215,"special_character_ratio":0.30573818,"punctuation_ratio":0.2304054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614958,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null,4,null,2,null,2,null,4,null,4,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T14:32:54Z\",\"WARC-Record-ID\":\"<urn:uuid:ef66a6a5-f316-4389-8788-3eabd75c7194>\",\"Content-Length\":\"23046\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c758df3-ab6c-4769-b0a2-e9bdc305ae78>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef8f0052-3ff1-41b8-ad6e-b9403e73d490>\",\"WARC-IP-Address\":\"167.99.4.63\",\"WARC-Target-URI\":\"https://www.brandenbyers.com/slides/test-them-puzzles/\",\"WARC-Payload-Digest\":\"sha1:GZ6TTRIHMQ263FURDI5RZPJLPA5336PW\",\"WARC-Block-Digest\":\"sha1:FDDSRAT7C5BBOY5ZIXB7EFF5GQMUVKJ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574039.24_warc_CC-MAIN-20190920134548-20190920160548-00359.warc.gz\"}"} |
https://codereview.stackexchange.com/questions/251910/c-dice-game-using-random-numbers | [
"# c++ dice game using random numbers\n\nI'm fairly new to c++ programming. It would be helpful if I could get a feedback on a dice game I wrote. I would really appreciate some tips as well as your opinions.\n\nThis program will begin by asking the user to enter an amount to bet (from $1-$1000). Next, the user is asked to guess the total that will occur when two dice are rolled. The program then “rolls two dice” (using random numbers) and visually displays the result. Then, if the player’s guess was correct, the player wins 8 times their bet divided by the number of possible way to get that roll. If the player’s guess was wrong, they lose their bet times the number of possible ways to get their guess.\n\n\n\n#include <iostream>\n#include <cstdlib>\n#include <time.h>\nusing namespace std;\n\n//main program\nint main()\n{\nint guess;//user guess for the value of the dice\nfloat bet;//the bet placed by the user\nfloat possibility;//number of combinations each number can come\nint dice1, dice2;//the random value between 1-6 for each dice\nint total;//the total number after the value of dice1 and dice2 is added\nfloat earned_or_lost_money;//earned or lost money depending on whether you win/lose\n\nsrand(time(0));//random number seed\n\ncout << endl;//blank line\n\ncout << \" What do you guess the result will be?(2-12): \";\ncin >> guess;//user input for guess\n\ncout << \" Enter amount of money to bet ($1-$1000): \";\ncin >> bet;//user input for bet\n\ncout << endl;//blank line\n\ndice1 = rand() % 6 + 1;//value of first dice(1-6)\ndice2 = rand() % 6 + 1;//value of second dice(1-6)\n\ntotal = dice1 + dice2;//value of dice1+dice2\n\nswitch (guess)// values for possible combinations for the total value of the dices added\n{\n\ncase 2: possibility = 1; break;\ncase 3: possibility = 2; break;\ncase 4: possibility = 3; break;\ncase 5: possibility = 4; break;\ncase 6: possibility = 5; break;\ncase 7: possibility = 6; break;\ncase 8: possibility = 5; break;\ncase 9: possibility = 4; break;\ncase 10: possibility = 3; break;\ncase 11: possibility = 2; break;\ncase 12: possibility = 1; break;\n}//switch\n\ncout << \" For your $\" << bet << \" bet, the roll is\" << endl; cout << endl;//blank line if (dice1 == 1)//drawing if the value of dice1 is 1 { cout << \"=============\" << endl; cout << \"| |\" << endl; cout << \"| 1 |\" << endl; cout << \"| |\" << endl; cout << \"=============\" << endl; }//if else if (dice1 == 2)//drawing if the value of dice1 is 2 { cout << \"=============\" << endl; cout << \"| 2 |\" << endl; cout << \"| |\" << endl; cout << \"| 2 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice1 == 3)//drawing if the value of dice1 is 3 { cout << \"=============\" << endl; cout << \"| 3 |\" << endl; cout << \"| 3 |\" << endl; cout << \"| 3 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice1 == 4)//drawing if the value of dice1 is 4 { cout << \"=============\" << endl; cout << \"| 4 4 |\" << endl; cout << \"| |\" << endl; cout << \"| 4 4 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice1 == 5)//drawing if the value of dice1 is 5 { cout << \"=============\" << endl; cout << \"| 5 5 |\" << endl; cout << \"| 5 |\" << endl; cout << \"| 5 5 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice1 == 6)//drawing if the value of dice1 is 6 { cout << \"=============\" << endl; cout << \"| 6 6 |\" << endl; cout << \"| 6 6 |\" << endl; cout << \"| 6 6 |\" << endl; cout << \"=============\" << endl; }//else if cout << endl;//blank line if (dice2 == 1)//drawing if the value of dice2 is 1 { cout << \"=============\" << endl; cout << \"| |\" << endl; cout << \"| 1 |\" << endl; cout << \"| |\" << endl; cout << \"=============\" << endl; }//if else if (dice2 == 2)//drawing if the value of dice2 is 2 { cout << \"=============\" << endl; cout << \"| 2 |\" << endl; cout << \"| |\" << endl; cout << \"| 2 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice2 == 3)//drawing if the value of dice2 is 3 { cout << \"=============\" << endl; cout << \"| 3 |\" << endl; cout << \"| 3 |\" << endl; cout << \"| 3 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice2 == 4)//drawing if the value of dice2 is 4 { cout << \"=============\" << endl; cout << \"| 4 4 |\" << endl; cout << \"| |\" << endl; cout << \"| 4 4 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice2 == 5)//drawing if the value of dice2 is 5 { cout << \"=============\" << endl; cout << \"| 5 5 |\" << endl; cout << \"| 5 |\" << endl; cout << \"| 5 5 |\" << endl; cout << \"=============\" << endl; }//else if else if (dice2 == 6)//drawing if the value of dice2 is 6 { cout << \"=============\" << endl; cout << \"| 6 6 |\" << endl; cout << \"| 6 6 |\" << endl; cout << \"| 6 6 |\" << endl; cout << \"=============\" << endl; }//else if cout << endl;//blank line cout << \" For a total of \" << total << endl; cout << endl;//blank line if (guess == total)//if the user guessed correctly { earned_or_lost_money = bet * 8 / possibility;//calculation for money earned cout << \" You were correct... since \" << guess << \" can come up \" << possibility << \" ways,\" << endl; cout << \"You win$\" << bet << \"*8/\" << possibility << \"= $\" << earned_or_lost_money << endl; }//if else//if the user guessed wrong { earned_or_lost_money = possibility * bet;//calculation for lost money cout << \" You were wrong... since \" << guess << \" can come up \" << possibility << \" ways,\" << endl; cout << \" You lost \" << possibility << \"*$\" << bet << \"= \\$\" << earned_or_lost_money << \" dollars!!!!!!\" << endl;\ncout << \" Thanks for your donation :)\" << endl;\n}//else\nreturn 0;\n}//end\n\n\n## 3 Answers\n\nThe 2 previous answers provide a lot of good points, here are a few more suggestions:\n\n## Generating Random Numbers\n\nAs of C++11 there are better random number generators to use that the C standard library rand() function. This stackoverflow question provides the following:\n\n#include <random>\n#include <iostream>\n\nint main() {\nstd::random_device rd;\nstd::mt19937 mt(rd());\nstd::uniform_real_distribution<double> dist(1.0, 10.0);\n\nfor (int i=0; i<16; ++i)\nstd::cout << dist(mt) << \"\\n\";\n}\n\n\nThis provides a better distribution than the C function rand().\n\n## Representing Money in Software\n\nBanks don't like to lose money due to floating point error, nor can they legally cheat the costumers so they don't use floating point numbers to represent money, they use 2 integer values, one for dollars and one for cents. This prevents floating point errors from affecting the cash values involved.\n\n## Complexity\n\nAny function larger than one screen in most IDEs or editors is to large, it is very hard to write, read, debug and maintain functions this large. You can create a class as one of the other answers suggested, you can also just create a number of functions called from main(). If you continue learning software you will learn principles on why this is important.\n\n• Two integers? Too wasteful and cumbersome. Better a single integer with twice the bits. – Deduplicator Nov 10 '20 at 20:46\n• Or for JPY just the yen - or more likely some multi didgit library or if old enough BCD rather than binary integers – mmmmmm Nov 11 '20 at 10:37\n\nFor a beginner, this is a nice attempt.\n\n# Avoid \"using namespace std\"\n\nThough this is trivial for a ten lines of code project, it immediately starts leading to issues when code size increases. Avoid it now. You can alternatively type std::cout or rather using std::cout, here the effect is localized to just cout. Check here for more info on why \"using namespace std\" is consider bad.\n\n# Prefer double to floating-point values\n\nDouble has 2x precision than floating-point values, thereore, provides more accuracy when performing calculations. Double on most machines is 8bytes whereas float is 4bytes, but the precision gain is worth it.\n\n# Make use of random header\n\nrand requires a seed to generate its pseudorandom numbers, if this seed is the same at each run, it produces the same output. This leads to some sort of dependency on the seed. rand also uses srand making it impossible to use with other random engine\n\n# Consider using a class\n\nYour main function is cluttered with code here and there, move them to separate class, you could have a class that handles everything concerning the dice and the rest other functionalities.\n\n# Prefer \\n to endl\n\nWhenever you need a newline but don't explicitly need to flush the buffer, use \\n. endl adds a newline and flushes the buffer, this would result to a slight drop in performance.\n\n• Actually for accuracy reasons backs use 2 integers for money values, one for dollars and one for cents. This prevents floating point errors. – pacmaninbw Nov 10 '20 at 17:54\n• @pacmaninbw I totally agree. – theProgrammer Nov 10 '20 at 18:19\n• Integers for money, sure. But why break it down into two, when handling just one is more efficient and convenient? And if it has twice the bits, you can store even bigger totals. – Deduplicator Nov 10 '20 at 20:48\n\nDepending of what you want, there is some things you can do:\n\n• refactor dice drawing function as there is a lot of redounding code\n• a check for all inputs (nothing prevent your user to input bet out of 2-12 and money out of 0-1000, even negative bet or non numeric ones\n• add a loop allowing player to play multiples games (with money carried between)\n• i probably would have used a table to store possibilities of bet\n\nWithout further informations about what you want to improve, it's hard to be more specific"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72092843,"math_prob":0.9504387,"size":5468,"snap":"2021-04-2021-17","text_gpt3_token_len":1647,"char_repetition_ratio":0.3545022,"word_repetition_ratio":0.5046296,"special_character_ratio":0.46762985,"punctuation_ratio":0.16613756,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981133,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T23:21:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6bf07289-77d2-475b-a01f-32d2d9b41d1f>\",\"Content-Length\":\"197217\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5cb7812c-ebae-40ef-a034-18d9a0b024fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:6704a825-da2c-49b6-a78e-fef30c0e52fe>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/251910/c-dice-game-using-random-numbers\",\"WARC-Payload-Digest\":\"sha1:WEAEUVIPUE4KDI4NN6DQVOR2PDTGBXEZ\",\"WARC-Block-Digest\":\"sha1:IV33D47E2T67VW43UCF5NMFXQB4QRGC5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038464065.57_warc_CC-MAIN-20210417222733-20210418012733-00033.warc.gz\"}"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=10&t=45865 | [
"## G 25\n\nNorman Dis4C\nPosts: 101\nJoined: Sat Sep 28, 2019 12:16 am\n\n### G 25\n\n\"Practitioners of the branch of alternative medicine known as homeopathy claim that very dilute solutions of substances can have an effect. Is the claim plausible? To explore this question, suppose that you prepare a solution of a supposedly active substance, X, with a molarity of 0.10 mol/L. Then\nyou dilute 10. mL of that solution by doubling the volume, doubling it again, and so on, for 90 doublings in all. How many molecules of X will be present in 10. mL of the final solution? Comment on the possible health benefits of the solution.\"\n\nThe correct answer is no X remaining\n\nShouldn't the amount of solute, which is substance X remain the same?\n\nAnna Wu 1H\nPosts: 51\nJoined: Sat Aug 17, 2019 12:16 am\n\n### Re: G 25\n\nThe amount of solute remains the same, but the volume changes. Since the solution is being diluted so dramatically, the 10 mL sample taken at the end has close to zero solute in it.\n\nCharisse Vu 1H\nPosts: 101\nJoined: Thu Jul 25, 2019 12:17 am\n\n### Re: G 25\n\nSince you are doubling the volume of the solution 90 times, the final volume would be much greater than the initial volume. The question asks how many molecules would be present in only 10. mL of the final solution. Since the volume has increased so dramatically, 10. mL of the final solution would contain a very, very minute amount of the molecule X.\n\nManav Govil 1B\nPosts: 104\nJoined: Sat Sep 07, 2019 12:19 am\n\n### Re: G 25\n\nThe thing is that after 69 doublings, less than one molecule would remain in the 10 mL of the diluted solution. At that point, the solution does not have any health benefits whatsoever. When a solution has less than one molecule of a specific substance (especially if that substance is intended to aid the issue), the solution is ineffective, as the substance is too small to do anything. If you want exact numbers, I found that after 90 doublings, there are 4.9 * 10^-7 molecules in the solution - which is almost nothing.\n\nAmir Bayat\nPosts: 115\nJoined: Sat Sep 07, 2019 12:16 am\n\n### Re: G 25\n\nI understand your guys' reasoning as to why there would be a very, very minute amount of X left in the substance. However, what is the fastest and quickest way to approach a question like this and show work?\n\nI understand the conceptual and visualization aspect of it, but doubling until 69 times would take too long on a test. Have any of you guys found a way to approach this that would take less time?\n\nBrian Tangsombatvisit 1C\nPosts: 119\nJoined: Sat Aug 17, 2019 12:15 am\n\n### Re: G 25\n\nI think once you see an absurd number that is above 50+, you can assume that you don't have to do all the math and just approach the question conceptually. If you use the concept of limits from algebra, if a number keeps getting smaller and smaller, you can safely make the assumption that it approaches 0 since it's so tiny.\n\nEdmund Zhi 2B\nPosts: 118\nJoined: Sat Jul 20, 2019 12:16 am\n\n### Re: G 25\n\nIn 20 mL of the 0.1M solution, there will be 0.1 = x/0.01, x = 0.001 mol of X (the solute). Every time you double this volume, there will be half as many moles in a 10 ml solution. The number of molecules in the final solution is therefore 0.001 mol * (1/2)^90 * avagadro's number, which is around 4.9 * 10^-7 molecules every 10ml, which is basically 0. There are no health benefits if the molecule is completely missing in the solution.\n\nAmir Bayat\nPosts: 115\nJoined: Sat Sep 07, 2019 12:16 am\n\n### Re: G 25\n\nUpdate:\n\nThere is a much faster way to approach this and it is with using the formula below, once we have solved for the amount of molecules of X.\n\nnumber of molecules of X solved for --> 6.0 * 10^20 molecules X + (1/2)^n = 1 molecule.\n\nWe then take the log of both sides and solve for n... getting 69 as the number of doubles to have one molecule of X left, and thus none left after 90 doubles."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94525826,"math_prob":0.8752754,"size":4475,"snap":"2020-45-2020-50","text_gpt3_token_len":1182,"char_repetition_ratio":0.13330351,"word_repetition_ratio":0.31111112,"special_character_ratio":0.28648046,"punctuation_ratio":0.1353001,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96786517,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T07:33:30Z\",\"WARC-Record-ID\":\"<urn:uuid:393a686e-4ade-4ba8-98a0-96134b3fdd71>\",\"Content-Length\":\"67533\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d00a9c6-ae75-456a-be7c-5c6fa2d50241>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8dab58d-4f1d-4697-aa95-9cd8e7d660a4>\",\"WARC-IP-Address\":\"169.232.134.130\",\"WARC-Target-URI\":\"https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=10&t=45865\",\"WARC-Payload-Digest\":\"sha1:XM4DEQ5NNTNDIH6PYYIFC3FYAPR6ULLF\",\"WARC-Block-Digest\":\"sha1:TFE43YMG2QMXELZOPUCJ6OMF4AJ6CZ5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171126.6_warc_CC-MAIN-20201124053841-20201124083841-00486.warc.gz\"}"} |
http://nicadd.niu.edu/~piot/phys_630/Syllabus.html | [
"# PHYS 630: Advanced Optics (Fall 2008)\n\n## Syllabus of Tuesdays and Thursdays lectures\n\ná Introduction (1 lessons)\n\no Course organization,\n\no Ray Optics\n\no Helmholtz equation\n\ná Beam Optics (3 lessons)\n\no Introduction\n\no Gaussian Beams\n\no Other solution of Helmholtz equation\n\no Short duration beams\n\no Alternate method for describing a beam: covariance matrix and M2\n\ná Fourier Optics (4 lessons)\n\no Hamonic analysis of a signal\n\no Amplitude and phase modulations\n\no Transfer function of free space\n\no Optical Fourier transform\n\no Diffraction & Interference\n\no Image shaping\n\no Holography\n\ná Electromagnetic description of light & propagation in matter (7 lessons)\n\no Light in vacuum\n\no Theory of electromagnetic beams\n\no Light guiding\n\no Absorption of light & Dispersion\n\no Optical phenomena in nonisotropic media\n\n¤ Dichroism and birefringence\n\n¤ E-field effects\n\n¤ Acousto-optics effects\n\n¤ B-field effects\n\ná Lasers (5 lessons)\n\no Interaction of light with matter\n\no Laser dynamics\n\no Pulsed laser beam\n\no Amplifiers\n\no Example of laser systems\n\ná Other light sources (1 lesson)\n\no Radiatio from moving charged particle\n\no Free-eletron laser\n\no Thomson scattering\n\ná Nonlinear Optics (4 lessons)\n\no Nonlinear optical media\n\no 2nd order optics\n\no 3rd order optics\n\no wave mixing\n\no high harmonic generation\n\no self focusing and phase modulation\n\ná Introduction to Statistical optics (2 lessons)\n\no Statistical properties of random light\n\no Interference of partially polarized coherent light\n\no Transmission of partially coherent light through optical system\n\no Partial polarization\n\ná Introduction to Quantum Optics (TBD)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60622585,"math_prob":0.6175571,"size":1547,"snap":"2021-31-2021-39","text_gpt3_token_len":459,"char_repetition_ratio":0.13739468,"word_repetition_ratio":0.0,"special_character_ratio":0.18552037,"punctuation_ratio":0.012244898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97659665,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T23:56:47Z\",\"WARC-Record-ID\":\"<urn:uuid:1b1c27b9-3215-4a0e-a2c3-cc75cd660ba5>\",\"Content-Length\":\"22125\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9e964de2-eb62-4a27-8e53-77165439d748>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca6c7cae-2bb6-4bb9-ba83-a5d0b381d0cc>\",\"WARC-IP-Address\":\"131.156.85.19\",\"WARC-Target-URI\":\"http://nicadd.niu.edu/~piot/phys_630/Syllabus.html\",\"WARC-Payload-Digest\":\"sha1:Q3GLC5F5TO2JKAY5QWUUB4XUON3K7MSV\",\"WARC-Block-Digest\":\"sha1:ZOGDPLR6PHQOSHHWP2SR25AHF6SBVXAU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060908.47_warc_CC-MAIN-20210928214438-20210929004438-00095.warc.gz\"}"} |
https://answers.everydaycalculation.com/add-fractions/8-28-plus-2-80 | [
"Solutions by everydaycalculation.com\n\n8/28 + 2/80 is 87/280.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 28 and 80 is 560\n2. For the 1st fraction, since 28 × 20 = 560,\n8/28 = 8 × 20/28 × 20 = 160/560\n3. Likewise, for the 2nd fraction, since 80 × 7 = 560,\n2/80 = 2 × 7/80 × 7 = 14/560",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.70663285,"math_prob":0.99679506,"size":322,"snap":"2019-51-2020-05","text_gpt3_token_len":132,"char_repetition_ratio":0.18238993,"word_repetition_ratio":0.0,"special_character_ratio":0.49068323,"punctuation_ratio":0.064935066,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99656934,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T21:18:07Z\",\"WARC-Record-ID\":\"<urn:uuid:1c7d9fb2-43f1-43a4-94ea-870bb9242a17>\",\"Content-Length\":\"7916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56727e45-b549-47d0-879d-9f4c3f29b268>\",\"WARC-Concurrent-To\":\"<urn:uuid:888f7df4-e3b8-49c6-bc56-8948385ecd5c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/8-28-plus-2-80\",\"WARC-Payload-Digest\":\"sha1:5QGWAVYKHAQ53VJQUVRNMLRCA4Y2EEIA\",\"WARC-Block-Digest\":\"sha1:7QT6OFDYDWHNLUIXSZNPCIT5IVPY3JU2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540502120.37_warc_CC-MAIN-20191207210620-20191207234620-00299.warc.gz\"}"} |
https://ir.cwi.nl/pub/30210 | [
"We study the resource augmented version of the k-server problem, also known as the k-server problem against weak adversaries or the (h,k)-server problem. In this setting, an online algorithm using k servers is compared to an offline algorithm using h servers, where h ≤ k. For uniform metrics, it has been known since the seminal work of Sleator and Tarjan (1985) that for any \">0, the competitive ratio drops to a constant if k=(1+\") · h. This result was later generalized to weighted stars (Young 1994) and trees of bounded depth (Bansal et al. 2017). The main open problem for this setting is whether a similar phenomenon occurs on general metrics. We resolve this question negatively. With a simple recursive construction, we show that the competitive ratio is at least ω(loglogh), even as k→∞. Our lower bound holds for both deterministic and randomized algorithms. It also disproves the existence of a competitive algorithm for the infinite server problem on general metrics.\n\n, , ,\ndoi.org/10.1145/3357713.3384306\nAnnual ACM SIGACT Symposium on Theory of Computing\nCentrum Wiskunde & Informatica, Amsterdam, The Netherlands\n\nBienkowski, M, Byrka, J, Coester, C.E, & Jeż, L. (2020). Unbounded lower bound for k-server against weak adversaries. In Proceedings of the Annual ACM SIGACT Symposium on Theory of Computing (pp. 1165–1169). doi:10.1145/3357713.3384306"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85534716,"math_prob":0.898485,"size":1511,"snap":"2021-43-2021-49","text_gpt3_token_len":362,"char_repetition_ratio":0.09820836,"word_repetition_ratio":0.053097345,"special_character_ratio":0.25876904,"punctuation_ratio":0.14736842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96277225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-15T21:41:08Z\",\"WARC-Record-ID\":\"<urn:uuid:c38bac86-e085-4fb0-8b9f-490a5db04ae6>\",\"Content-Length\":\"23192\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7db3a192-617e-475c-b2df-7d766d0d25dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c9e7611-7830-4e40-bfee-4bf8ec5f0473>\",\"WARC-IP-Address\":\"5.79.73.6\",\"WARC-Target-URI\":\"https://ir.cwi.nl/pub/30210\",\"WARC-Payload-Digest\":\"sha1:OLQT574GER7PC2F2BWXJGAUCL5JZEOUK\",\"WARC-Block-Digest\":\"sha1:QJ4IOGIUOMXRJ4SYYL7VEJKO5XGIIXZ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323583083.92_warc_CC-MAIN-20211015192439-20211015222439-00703.warc.gz\"}"} |
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Solubilty/An_Introduction_to_Solubility_Products | [
"# An Introduction to Solubility Products\n\n•",
null,
"• Contributed by Jim Clark\n• Former Head of Chemistry and Head of Science at Truro School in Cornwall\n\nThis page discusses how solubility products are defined, including their units units. It also explores the relationship between the solubility product of an ionic compound and its solubility.\n\n## Solubility products are equilibrium constants\n\nBarium sulfate is almost insoluble in water. It is not totally insoluble—very small amounts do dissolve. This is true of any \"insoluble\" ionic compound. If solid barium sulfate is shaken with water, a small amount of barium ions and sulfate ions break away from the surface of the solid and go into solution. Over time, some of these return from solution to stick onto the solid again.\n\nAn equilibrium is established when the rate at which some ions are breaking away from the solid lattice is exactly matched by the rate at which others are returning. Consider the balanced equation for the barium sulfate reaction:\n\n$BaSO_4(s) \\rightleftharpoons Ba^{2+}(aq) + SO_4^{2-}(aq)$\n\nThe position of this equilibrium lies very far to the left. The great majority of the barium sulfate is present as solid—there are no visible changes to the solid. However, the equilibrium does exist, and an equilibrium constant can be written. The equilibrium constant is called the solubility product, and is given the symbol Ksp:\n\n$K_{sp} = [Ba^{2+} (aq)] [SO_4^{2-}(aq)]$\n\nFor simplicity, solubility product expressions are often written without the state symbols:\n\n$K_{sp} = [Ba^{2+}] [SO_4^{2-}]$\n\nNotice that there is no solid barium sulfate term. For many simple equilibria, the equilibrium constant expression has terms for the right side of the equation divided by terms for the left side. But in this case, there is no term for the concentration of the solid barium sulfate. This is a heterogeneous equilibrium, one which contains substances in more than one state. In a heterogeneous equilibrium, concentration terms for solids are left out of the expression.\n\n## Solubility products for more complicated solids\n\nHere is the corresponding equilibrium for calcium phosphate, Ca3(PO4)2:\n\n$Ca_3(PO_4)_2(s) \\rightleftharpoons 3Ca^{2+}(aq) + 2PO_4^{3-}(aq)$\n\nBelow is the solubility product expression:\n\n$K_{sp} = [Ca^{2+}]^3[PO_4^{3-}]^2$\n\nAs with any other equilibrium constant, the concentrations are raised to the power of their respective stoichiometric coefficients in the equilibrium equation. Solubility products only apply to sparingly soluble ionic compounds, not to normally soluble compounds such as sodium chloride. Interactions between the ions in the solution interfere with the simple equilibrium.\n\n## The units for solubility products\n\nThe units for solubility products differ depending on the solubility product expression, and you need to be able to work them out each time.\n\nExample 1: BaSO4\n\nBelow is the solubility product expression for barium sulfate:\n\n$K_{sp} = [Ba^{2+}] [SO_4^{2-}]$\n\nEach concentration has the unit mol dm-3. In this case, the units for the solubility product in this case are the following:",
null,
"(mol dm-3) x (mol dm-3)",
null,
"= mol2 dm-6\n\nExample 2: CaPO4\n\nRecall the solubility product expression for calcium phosphate:\n\n$K_{sp} = [Ca^{2+}]^3[PO_4^{3-}]^2$\n\nThe units this time will be:",
null,
"(mol dm-3)3 x (mol dm-3)2",
null,
"= (mol dm-3)5",
null,
"= mol5 dm-15\n\n## Solubility products apply only to saturated solutions\n\nRecall the barium sulfate equation:\n\n$BaSO_4(s) \\rightleftharpoons Ba^{2+}(aq) + SO_4^{2-}(aq)$\n\nThe corresponding solubility product expression is the following:\n\n$K_{sp}= [Ba^{2+}][SO_4^{2-}]$\n\nKsp for barium sulfate at 298 K is 1.1 x 10-10 mol2 dm-6.\n\nIn order for this equilibrium constant (the solubility product) to apply, solid barium sulfate must be present in a saturated solution of barium sulfate. This is indicated by the equilibrium equation. If barium ions and sulfate ions exist in solution in the presence of some solid barium sulfate at 298 K, and multiply the concentrations of the ions together, the answer will be 1.1 x 10-10 mol2 dm-6. It is possible to multiply ionic concentrations and obtain a value less than this solubility product; in such cases the solution is too dilute, and no equilibrium exists. If the concentrations are lowered enough, no precipitate can form.\n\nHowever, it is impossible to calculate a product greater than the solubility product. If solutions containing barium ions and sulfate ions are mixed such that the product of the concentrations would exceed Ksp, a precipitate forms. Enough solid is produced to reduce the concentrations of the barium and sulfate ions down to the value of the solubility product.\n\n## Summary\n\n• The value of a solubility product relates only to a saturated solution.\n• If the ionic concentrations give a value less than the solubility product, the solution is not saturated. No precipitate would be formed in such a case.\n• If the ionic concentrations give a value more than the solubility product, enough precipitate would be formed to reduce the concentrations to give an answer equal to the solubility product.\n\n## Contributors\n\nJim Clark (Chemguide.co.uk)"
] | [
null,
"https://chem.libretexts.org/@api/deki/files/123564/clark.jpg",
null,
"http://www.chemguide.co.uk/physical/ksp/padding.gif",
null,
"http://www.chemguide.co.uk/physical/ksp/padding.gif",
null,
"http://www.chemguide.co.uk/physical/ksp/padding.gif",
null,
"http://www.chemguide.co.uk/physical/ksp/padding.gif",
null,
"http://www.chemguide.co.uk/physical/ksp/padding.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8545495,"math_prob":0.98989314,"size":5061,"snap":"2020-10-2020-16","text_gpt3_token_len":1253,"char_repetition_ratio":0.19398853,"word_repetition_ratio":0.03448276,"special_character_ratio":0.23710729,"punctuation_ratio":0.087100334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949585,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-31T20:02:37Z\",\"WARC-Record-ID\":\"<urn:uuid:718b29dd-5e3c-4807-83d8-05c9e441bdfa>\",\"Content-Length\":\"105598\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cff09ae-28c3-44a5-ba77-2724ac74d1b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb199549-a62c-456d-8605-43a689d4e968>\",\"WARC-IP-Address\":\"99.84.181.53\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Solubilty/An_Introduction_to_Solubility_Products\",\"WARC-Payload-Digest\":\"sha1:XGG5AQWF7P6LAC7RCWMCFWEZWXCQBKV2\",\"WARC-Block-Digest\":\"sha1:BBPEACZGBN63U3J6NEVRUE6WIB4ZWR55\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370503664.38_warc_CC-MAIN-20200331181930-20200331211930-00121.warc.gz\"}"} |
https://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Achievement-objectives/AOs-by-level/AO-M8-7 | [
"Te Kete Ipurangi\nCommunities\nSchools\n\n### Te Kete Ipurangi user options:\n\nAchievement objectives\n\nWhat has changed:\n\n# Achievement objective M8-7\n\nIn a range of meaningful contexts, students will be engaged in thinking mathematically and statistically. They will solve problems and model situations that require them to:\n\n• form and use trigonometric, polynomial, and other non-linear equations.\n\n## Indicators\n\n• Solves contextual problems by forming and solving equations.\n• Solves quadratic and cubic equations with complex roots. These equations are likely to involve real coefficients so that solutions are complex conjugate pairs.\n• Analyses the existence of solutions in the context of the situation.\n• Non-linear equations include:\n• exponential\n• logarithmic\n• rational\n• surd\n• hyperbolic.\n• Makes links to graphs of functions M8-2, curve fitting M8-4, manipulating expressions M7-6 and M8-6, and complex numbers M8-9.\n• See key mathematical ideas on NZmaths.\n\n## Progression\n\nM8-7 links from M7-4, M7-7 and to M8-9.\n\n## Possible context elaborations\n\n• Natural phenomena that result in periodic behaviours (for example, tidal patterns, sound waves, circular motion, harmonics …).\n• Explore growth and decay situations.\n• Completely factor a third degree polynomial, using long division and/or reminder and factor theorems.\n• Given general solution for a trig equation, find the solutions that satisfy a particular context.\n• Although numerical solutions (bisection and Newton-Raphson) do not need to be included, they could form part of an inquiry into dealing with non-linear equations that can’t be solved with algebraic methods.\n• Activity: Big Bang Theory – Exact values\n• Graphing projectile paths\n• Investigating algebra: Exploring graphs\n• Practical situations involving trigonometric graphs: Includes pedal height of a bicycle, height of loaded oscillating spring.\n• Tidal estuaries: Investigating the times yachts can enter and leave tidal estuaries.\n• Use of remainder and factor theorems.\n\n## Assessment for qualifications\n\nNCEA achievement standards at levels 1, 2 and 3 have been aligned to the New Zealand Curriculum. Please ensure that you are using the correct version of the standards by going to the NZQA website.\n\nThe NZQA subject-specific resources pages are very helpful. From there, you can find all the achievement standards and links to assessment resources, both internal and external.\n\nThe following achievement standard(s) could assess learning outcomes from this AO:\n\nLast updated September 17, 2018"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8592751,"math_prob":0.8887712,"size":2398,"snap":"2021-31-2021-39","text_gpt3_token_len":530,"char_repetition_ratio":0.10568087,"word_repetition_ratio":0.0,"special_character_ratio":0.20183486,"punctuation_ratio":0.119700745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9780174,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T23:52:50Z\",\"WARC-Record-ID\":\"<urn:uuid:bc671cf2-8d07-4cff-8a64-183628ba5bb6>\",\"Content-Length\":\"249749\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74080a94-c8ea-4926-b67f-8ed1e8c08693>\",\"WARC-Concurrent-To\":\"<urn:uuid:feda8fb4-6156-4820-86a9-51834330db77>\",\"WARC-IP-Address\":\"185.125.86.67\",\"WARC-Target-URI\":\"https://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Achievement-objectives/AOs-by-level/AO-M8-7\",\"WARC-Payload-Digest\":\"sha1:OCMBTQPLEQIXCGNOQSWC4X3JWNTNVDVM\",\"WARC-Block-Digest\":\"sha1:HRUOM7NKVQKPJQ3DXGTCLTE3RFX43GVE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151531.67_warc_CC-MAIN-20210724223025-20210725013025-00630.warc.gz\"}"} |
https://all-learning.com/swift-closure/ | [
"# Swift Closure With Examples [Latest]",
null,
"In this tutorial, we’ll be discussing and seeing the usages (why, how and where) of Swift Closure.\n\nIt’s noteworthy to mention here that Swift Function is also a form of closure.\n\n## Swift Closure\n\nAccording to the Apple Documentation, “Closures are self-contained blocks of functionality that can be passed around and used in your code”.\n\nClosures can capture and store references to any constants and variables from the context in which they are defined.\n\nThe concept of Swift Closure is similar to blocks in C. Closures are nameless functions and without the keyword `func`.\n\n### Forms of Closure in Swift\n\nClosures can take one of the three forms.\n\n1. Global functions\n2. Nested functions\n3. Closures expression\n\nGlobal and nested functions were discussed at length in the Functions in Swift tutorial.\n\nWhen we mention closures we generally mean the third form i.e. Closure Expressions. Before we get on with the syntax and use cases, let’s look at the benefits that Closures bring to us.\n\n### Advantages of Closures in Swift\n\n1. Closures are much less verbose when compared to Functions. Swift can infer the parameter types and return type from the context in which the closure is defined thereby making it convenient to define and pass to a function\n2. Swift Closure can be used to capture and store the state of some variable at a particular point in time and use it later(We’ll be discussing this at length later in the tutorial)\n3. Closures allow us to run a piece of code even after the function has returned\n\n### Swift Closure Syntax\n\nLet’s recall the syntax of Functions in swift first.\n\n``````\nfunc name(parameters) -> (return type){\n//body of function\n}\n``````\n\nFunctions require a `func` keyword followed by name of function, arguments and return type after `->`\n\nSyntax for Closures is given below.\n\n``````\n{ (parameters)->(return type) in\n//body of the closure\n}\n``````\n\nClosures are defined within curly braces. The `in` keyword separates the arguments and return type from the body.\n\nLike functions, closures can also be referenced as types. Let’s dive into playground in XCode and start coding.\n\n### Declaring Swift Closures as variables\n\n``````\nvar myClosure: (ParameterTypes) -> ReturnType\n``````\n\nLet’s create a basic function and a closure that’ll print “Hello World”.\n\n``````\nfunc helloWorldFunc()\n{\nprint(\"Hello World\")\n}\nvar helloWorldClosure = { () -> () in print(\"Hello World\") }\nhelloWorldFunc() //prints Hello World\nhelloWorldClosure() //prints Hello World\n``````\n\nIn the above code, we’ve defined a closure with no parameters and no return type and set it to a variable `helloWorldClosure`.\n\nCalling the closure is similar to calling a function as shown above.\n\nTip 1: If the closure does not return any value or the type can be inferred you can omit the arrow (`->`) and the return type. We can shorten the above code as shown below.\n\n``````\nvar helloWorldClosure = { () in print(\"Hello World\") }\n``````\n\nTip 2: If the closure doesn’t require any parameter or can be inferred, remove the argument parenthesis. Now we don’t need the `in` keyword too. This will further shorten the code as shown below.\n\n``````\nvar helloWorldClosure = { print(\"Hello World\") }\n``````\n\nWe’ve seen one use case now where closures made the code look less verbose compared to a function. Let’s create a closure with parameters and see how it fares when compared to closures.\n\n### Swift Closures with arguments and return type\n\nLet’s create a function and closure that accepts two strings, appends them and return.\n\n``````\nfunc appendString(_ a: String, with b: String) -> String\n{\nreturn a + \" : \" + b\n}\nprint(appendString(\"Swift\", with: \"Functions\")) //print \"Swift : Functionsn\"\n``````\n\nSo far so good. Lets do the same using a closure.\n\n``````\nvar appendStringClosure = { (a:String,b:String) -> (String) in return a + \" : \" + b }\nprint(appendStringClosure(\"Swift\", \"Closures\")) //prints \"Swift : Closuresn\"\n``````\n\nThanks to type inference as we’d seen earlier, the above closure definition is the same as the following.\n\n``````\nvar appendStringClosure = { (a:String,b:String) in return a + \" : \" + b } // return type inferred\nvar appendStringClosure : (String, String) -> String = { (a,b) in return a + \" : \" + b } //Closure type declared on the variable itself\nvar appendStringClosure : (String, String) -> String = { (a,b) in a + \" : \" + b } // omit return keyword\n``````\n\nThere’s even a shortened version for the above.\n\nSwift allows us to refer to arguments passed to closure using shorthand argument names : \\$0, \\$1, \\$2 etc. for the first, second, third etc. parameters respectively.\n\n``````\nvar appendStringClosure : (String, String) -> String = { \\$0 + \" : \" + \\$1 }\nprint(appendStringClosure(\"Swift\", \"Closures\")) //prints \"Swift : Closuresn\"\n``````\n\nLet’s take it one more step ahead:\n\n``````\nvar appendStringClosure = { \\$0 + \" : \" + \\$1 + \" \" + \\$2 }\nprint(appendStringClosure(\"Swift\", \"Closures\", \"Awesome\")) //prints \"Swift : Closures Awesomen\"\n``````\n\nAdd as many arguments as you can.\n\nNote: It’s recommended to set the closure type.\n\nIt’s time to use Closures inside functions.\n\n### Closures inside Functions\n\nLet’s create a function that takes a function/closure as parameter.\n\n``````\nfunc operationsSq(a: Int, b:Int, myFunction: (Int, Int)->Int)->Int\n{\nreturn myFunction(a,b)\n}\n``````\n\nThis function expects a parameter of type `(Int, Int)->Int` as the third argument.\nLets pass a function in there as shown below\n\n``````\nfunc addSquareFunc(_ a: Int, _ b:Int)->Int\n{\nreturn a*a + b*b\n}\nprint(operationsSq(a:2,b:2, myFunction: addSquareFunc)) //a^2 + b^2 prints 8\n``````\n\nFair enough. But creating a different function for another operation (let’s say subtracting squares etc) would keep increasing the boilerplate code.\n\nThis is where closure comes to our rescue. We’ve defined the following 4 types of expressions that we’ll be passing one by one in the `operationsSq` function.\n\n``````\nvar addSq : (Int, Int) -> Int = {\\$0*\\$0 + \\$1*\\$1 }\nvar subtractSq: (Int, Int) -> Int = { \\$0*\\$0 - \\$1*\\$1 }\nvar multiplySq: (Int, Int) -> Int = { \\$0*\\$0 * \\$1*\\$1 }\nvar divideSq: (Int, Int) -> Int = { (\\$0*\\$0) / (\\$1*\\$1) }\nvar remainderSq: (Int, Int) -> Int = { (\\$0*\\$0) % (\\$1*\\$1) }\nprint(operationsSq(a:4,b:5, myFunction: subtractSq)) //prints -9\nprint(operationsSq(a:5,b:5, myFunction: multiplySq)) //prints 625\nprint(operationsSq(a:10,b:5, myFunction: divideSq)) //prints 4\nprint(operationsSq(a:7,b:5, myFunction: remainderSq)) //prints 24\n``````\n\nMuch shorter than what we achieved with a function as a parameter. In fact, we can directly pass the closure expressions as shown below and it’ll achieve the same result:\n\n``````\noperationsSq(a:2,b:2, myFunction: { \\$0*\\$0 + \\$1*\\$1 })\noperationsSq(a:4,b:5, myFunction: { \\$0*\\$0 - \\$1*\\$1 })\noperationsSq(a:5,b:5, myFunction: { \\$0*\\$0 * \\$1*\\$1 })\noperationsSq(a:10,b:5, myFunction: { (\\$0*\\$0) / (\\$1*\\$1) })\noperationsSq(a:7,b:5, myFunction: { (\\$0*\\$0) % (\\$1*\\$1) })\n``````\n\n### Trailing Closures\n\nWhen the closure is being passed as the final argument to a function we can pass it as a trailing closure.\n\nA trailing closure is written after the function call’s parentheses, even though it is still an argument to the function.\n\nWhen you use the trailing closure syntax, you don’t write the argument label for the closure as part of the function call.\n\nIf closure argument is the sole argument of the function you can remove the function parentheses. Hence the previous closures can be written as the following and it would still function the same way.\n\n``````\noperationsSq(a:2,b:2){ \\$0*\\$0 + \\$1*\\$1 }\n``````\n\nLet’s look at another example where we’ll be adding the sum of exponentials of numbers using closure expressions.\n\n``````\nfunc sumOfExponentials(from: Int, to: Int, myClosure: (Int) -> Int)->Int\n{\nvar sum = 0\nfor i in from...to{\nsum = sum + myClosure(i)\n}\nprint(sum)\nreturn sum\n}\n//Trailing closures\nsumOfExponentials(from:0,to:5){ \\$0 } //sum of numbers\nsumOfExponentials(from:0,to:5){ \\$0*\\$0 } //sum of squares\nsumOfExponentials(from:0,to:5){ \\$0*\\$0*\\$0 } //sum of cubes\n``````\n\nConvert a numbers array to strings array\nAnother use case of trailing closure is given below.\n\n``````\nvar numbers = [1,2,3,4,5,6]\nprint(numbers)\nvar strings = numbers.map{\"(\\$0)\"}\nprint(strings) //prints [\"1\", \"2\", \"3\", \"4\", \"5\", \"6\"]n\n``````\n\nmap is a high order function for transforming an array.\n\nSort numbers in descending order using trailing closures\n\n``````\nvar randomNumbers = [5,4,10,45,76,11,0,9]\nrandomNumbers = randomNumbers.sorted{\\$0>\\$1}\nprint(randomNumbers) //\"[76, 45, 11, 10, 9, 5, 4, 0]n\"\n``````\n\n### Capture Lists in Closures\n\nBy default, closures can capture and store references to any constants and variables from the context in which they are defined (Hence the name closures).\n\nCapturing references to variables can cause our closures to behave differently than it’s supposed to. Let’s see this through an example.\n\n``````\nvar x = 0\nvar myClosure = { print(\"The value of x at start is (x)\") }\nmyClosure() //prints 0 as desired.\n``````\n\nSo far so good. The above example looks pretty straightforward until we do this:\n\n``````\nvar x = 0\nvar myClosure = { print(\"The value of x at start is (x)\") }\nmyClosure() //The value of x at start is 0\nx = 5\nmyClosure() //The value of x at start is 5\n``````\n\nNow `myClosure` was defined and initialized before changing the value of `x` to 5. Why does it print 5 in the end then?\n\nThe reason is that closure captures the reference (memory address) of `x`. Any changes made to value at that memory address would be displayed by the closure when it’s invoked.\n\nTo make x behave as a value type instead we need to make use of Capture Lists. Capture Lists is an array `[]` that holds local copies of the variables. Thereby capturing the variables by value types and NOT reference types.\n\nThe array with the local variables is displayed before the `in` keyword as shown below.\n\n``````\nvar x = 0\nvar myClosure = { [x] in print(\"The value of x at start is (x)\") }\nmyClosure() // The value of x at start is 0\nx = 5\nmyClosure() //The value of x at start is 0\n``````\n\nCapturing reference types can be destructive when used in Classes since the closure can hold a strong reference to an instance variable and cause memory leaks. Let’s see an example of this.\n\n``````\nclass Person {\nvar x: Int\nvar myClosure: ()->() = {print(\"Hey there\")}\ninit(x: Int)\n{\nself.x = x\n}\nfunc initClosure()\n{\nmyClosure = { print(\"Initial value is not defined yet\")}\n}\ndeinit{\nprint(\"(self) escaped\")\n}\n}\nvar a:Person? = Person(x: 0)\na?.initClosure()\na?.x = 5\na?.myClosure()\na = nil\n``````\n\nDeinitializers are called automatically, just before instance deallocation takes place. In the above code, it’ll be called when `a` is set to `nil` and `self` doesn’t have any references attached to it.\n\nThe above code would print:\n`\"Initial value is not defined yet\"` followed by ” escaped” (“__lldb_expr_26.Person escaped” for me). Awesome there are no memory leaks. Lets set the value inside `myClosure` to `x` as shown below:\n\n``````\nclass Person {\nvar x: Int\nvar myClosure: ()->() = {print(\"Hey there\")}\ninit(x: Int)\n{\nself.x = x\n}\nfunc initClosure()\n{\nmyClosure = { print(\"Intial value is (self.x)\")}\n}\ndeinit{\nprint(\"(self) escaped\")\n}\n}\nvar a:Person? = Person(x: 0)\na?.initClosure()\na?.x = 5\na?.myClosure()\na = nil\n``````\n\nWhoops! the print statement within `deinit` is not printed. The closure holds a strong reference to `self`. This will eventually lead to memory leaks. A remedy for this is to place a weak or unknown reference to self in a capture list inside the closure as seen below.\n\n• A weak reference is a reference that does not keep a strong hold on the instance it refers to and so does not stop ARC from disposing of the referenced instance.\n• An unowned reference does not keep a strong hold on the instance it refers to. Unlike a weak reference, however, an unowned reference is used when the other instance has the same lifetime or a longer lifetime.\n\nThe code for capture list with `self` reference is given below:\n\n``````\nclass Person {\nvar x: Int\nvar myClosure: ()->() = {print(\"Hey there\")}\ninit(x: Int)\n{\nself.x = x\n}\nfunc initClosure()\n{\nmyClosure = {[weak self] in guard let weakSelf = self else { return }\nprint(\"Intial value is (weakSelf.x)\")}\n}\ndeinit{\nprint(\"(self) escaped\")\n}\n}\nvar a:Person? = Person(x: 0)\na?.initClosure()\na?.x = 5\na?.myClosure()\na = nil\n``````\n\nNo memory leaks with the above code!\n\n### Escaping Closures\n\nThere are two other types of closures too:\n\n• An escaping closure is a closure that’s called after the function it was passed to returns. In other words, it outlives the function it was passed to. An escaping closure is generally used in completion handlers since they are called after the function is over.\n• A non-escaping closure is a closure that’s called within the function it was passed into, i.e. before it returns. Closures are non-escaping by default\n``````\nvar completionHandlers: [() -> Void] = []\nfunc someFunctionWithNonescapingClosure(closure: () -> Void) {\nclosure()\n}\nfunc someFunctionWithEscapingClosure(completionHandler: @escaping () -> Void) {\ncompletionHandlers.append(completionHandler)\n}\nclass SomeClass {\nvar x = 10\nfunc doSomething() {\nsomeFunctionWithEscapingClosure {[weak self] in guard let weakSelf = self else { return }\nweakSelf.x = 100 }\nsomeFunctionWithNonescapingClosure { x = 200 }\n}\ndeinit{\nprint(\"deinitalised\")\n}\n}\nvar s:SomeClass? = SomeClass()\ns?.doSomething()\nprint(s?.x ?? -1) //prints 200\ncompletionHandlers.first?()\nprint(s?.x ?? -1) //prints 100\ns = nil\n``````\n\ncompletionHandlers.first() returns and invokes the first element of the array which is an escaping closure. Since we’d set a value type inside it using capture lists, it prints 100.\nWe’ll be discussing escaping closures at length in a later tutorial.\n\nHere’s a screenshot from my playground.",
null,
"That’s all about closure in swift."
] | [
null,
"https://all-learning.com/wp-content/uploads/2017/08/Swift-Closure-With-Examples.png",
null,
"https://all-learning.com/wp-content/uploads/2017/08/ios-closures-playground-swift-450x27-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78707844,"math_prob":0.96947545,"size":13348,"snap":"2023-14-2023-23","text_gpt3_token_len":3419,"char_repetition_ratio":0.15714929,"word_repetition_ratio":0.15213442,"special_character_ratio":0.27232546,"punctuation_ratio":0.13718553,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9661183,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T20:57:15Z\",\"WARC-Record-ID\":\"<urn:uuid:8cffc759-4472-469e-a2ef-d3771ca2b836>\",\"Content-Length\":\"98934\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a15bdb6-fdd7-4523-a169-51d3e77e15eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1245698-1ca2-40bc-96e7-4bbbe03af27c>\",\"WARC-IP-Address\":\"172.67.139.202\",\"WARC-Target-URI\":\"https://all-learning.com/swift-closure/\",\"WARC-Payload-Digest\":\"sha1:OQ6HWGDJURI5QT7W37XKTJWBVB2HW66L\",\"WARC-Block-Digest\":\"sha1:BYNSJD2MBEMNORUUKWR3ZJXASKJ7LTCM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948684.19_warc_CC-MAIN-20230327185741-20230327215741-00528.warc.gz\"}"} |
https://hlearner.com/financial/home-loan.html | [
" H-learner | Home Loan Calculator",
null,
"",
null,
"%\n\nFormula\n\nHouse Value = monthly *((1+I /(100*12)years*12 -1)/ (I/100*12)*(( I/100*12))+1) years*12)\n\nWhere,\nMonthly = (monthly income + other income – monthly expenditure) *28/100\nI = Interest"
] | [
null,
"https://hlearner.com/assets/media/image/logo/logo.png",
null,
"https://hlearner.com/assets/media/image/logo/dark-logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9452703,"math_prob":0.9965459,"size":648,"snap":"2021-04-2021-17","text_gpt3_token_len":155,"char_repetition_ratio":0.12732919,"word_repetition_ratio":0.0,"special_character_ratio":0.27623457,"punctuation_ratio":0.040983606,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9741524,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T05:27:10Z\",\"WARC-Record-ID\":\"<urn:uuid:696b7711-d5f6-4858-a739-ff8d24155be5>\",\"Content-Length\":\"55425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16e42d6c-de69-4e78-b7fd-9bc90abb7208>\",\"WARC-Concurrent-To\":\"<urn:uuid:3cfea0c6-3628-42d6-8402-85ea411ac82b>\",\"WARC-IP-Address\":\"172.67.188.88\",\"WARC-Target-URI\":\"https://hlearner.com/financial/home-loan.html\",\"WARC-Payload-Digest\":\"sha1:SUZKBBP67HBRFQ5AUVAUQVTU2U7LYXUB\",\"WARC-Block-Digest\":\"sha1:LRWKP3OU4NBXN26766WNIZJW466UI3YK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704798089.76_warc_CC-MAIN-20210126042704-20210126072704-00740.warc.gz\"}"} |
https://java.meritcampus.com/core-java-questions/Calculate-the-largest-angle-of-the-given-triangle | [
"...\n\nWrite a program to get the largest angle of a triangle by using the given points. Round the resultant angles to the nearest integer value.\n\nInput (Point, Point, Point) Output (double)\n`[0, 0], [5, 0], [0, 5]` `90.0`\n`[90, 0], [0, 0], [45, 77.94]` `60.0`\n`[0, 2], [5, 0], [0, 5]` `112.0`\n`[0, 0], [10, 0], [5, 7.666]` `66.0`\n`[25, 43], [50, 0], [50, 43]` `90.0`\n\n``` class CalculateTheLargestAngle { public static void main(String s[]) { Point firstPoint = new Point(0, 0); Point secondPoint = new Point(5, 0); Point thirdPoint = new Point(0, 5); System.out.println(\"The largest angle of the triangle : \" + getLargestAngle(firstPoint, secondPoint, thirdPoint)); } public static double getLargestAngle(Point firstPoint, Point secondPoint, Point thirdPoint) { //Write code here to get the largest angle of the triangle } //If required write any additional methods here } class Point { double x; double y; Point(double x, double y) { this.x = x; this.y = y; } } ```\n0\nWrong\nScore more than 2 points"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5538736,"math_prob":0.99857736,"size":2643,"snap":"2021-04-2021-17","text_gpt3_token_len":745,"char_repetition_ratio":0.18794999,"word_repetition_ratio":0.06912442,"special_character_ratio":0.35035944,"punctuation_ratio":0.11728395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974079,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T23:51:36Z\",\"WARC-Record-ID\":\"<urn:uuid:78e71526-a680-4b66-8000-77eaa26b4a4d>\",\"Content-Length\":\"203187\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afb924ec-2665-4886-b00a-fea76532b295>\",\"WARC-Concurrent-To\":\"<urn:uuid:85efb3cd-6086-4edc-8401-8f4a015102d9>\",\"WARC-IP-Address\":\"138.197.232.42\",\"WARC-Target-URI\":\"https://java.meritcampus.com/core-java-questions/Calculate-the-largest-angle-of-the-given-triangle\",\"WARC-Payload-Digest\":\"sha1:7DQGEQULMVGZO5FLIP4MR4BGN5ZAIBJT\",\"WARC-Block-Digest\":\"sha1:F5VOWCGJZUSF65IRH3XIK2EFL7ESRI26\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703517159.7_warc_CC-MAIN-20210118220236-20210119010236-00361.warc.gz\"}"} |
https://answers.everydaycalculation.com/add-fractions/42-18-plus-25-42 | [
"Solutions by everydaycalculation.com\n\nAdd 42/18 and 25/42\n\n1st number: 2 6/18, 2nd number: 25/42\n\n42/18 + 25/42 is 41/14.\n\nSteps for adding fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 18 and 42 is 126\n2. For the 1st fraction, since 18 × 7 = 126,\n42/18 = 42 × 7/18 × 7 = 294/126\n3. Likewise, for the 2nd fraction, since 42 × 3 = 126,\n25/42 = 25 × 3/42 × 3 = 75/126\n4. Add the two fractions:\n294/126 + 75/126 = 294 + 75/126 = 369/126\n5. After reducing the fraction, the answer is 41/14\n6. In mixed form: 213/14"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.668233,"math_prob":0.99920374,"size":313,"snap":"2019-43-2019-47","text_gpt3_token_len":123,"char_repetition_ratio":0.18446602,"word_repetition_ratio":0.0,"special_character_ratio":0.485623,"punctuation_ratio":0.12328767,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99933267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T20:46:14Z\",\"WARC-Record-ID\":\"<urn:uuid:10c26855-d7bc-440a-9f16-e791c760fe0b>\",\"Content-Length\":\"7879\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb79cd76-2b7d-4028-92ee-baac8b8a882f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e605be41-22a5-4d2b-9fe2-fc3fa01b5d89>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/42-18-plus-25-42\",\"WARC-Payload-Digest\":\"sha1:5RSGYWGSUOL42IJU4Z3YXWFD3C3WXLI4\",\"WARC-Block-Digest\":\"sha1:ECT2JUN76K2GVURPNJI45M44FNHMLCLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986655310.17_warc_CC-MAIN-20191014200522-20191014224022-00339.warc.gz\"}"} |
https://admin.clutchprep.com/physics/practice-problems/38901/an-equilateral-triangle-160-abc-a-positive-point-charge-q-is-located-at-each-of- | [
"# Problem: An equilateral triangle ABC. A positive point charge +q is located at each of the three vertices A, B, and C. Each side of the triangle is of length a. A point charge Q (that may be positive or negative) is placed at the mid-point between B and C. Is it possible to choose the value of Q (that is non-zero) such that the force on Q is zero? A) Yes, because the forces on Q are vectors and three vectors can add to zero. B) No, because the forces on Q are vectors and three vectors can never add to zero. C) Yes, because the electric force at the mid-point between B and C is zero whether a charge is placed there or not. D) No, because the forces on Q due to the charges at B and C point in the same direction. E) No, because a fourth charge would be needed to cancel the force on Q due to the charge at A.\n\n🤓 Based on our data, we think this question is relevant for Professor Schulte's class at UCF.\n\n###### Problem Details\n\nAn equilateral triangle ABC. A positive point charge +q is located at each of the three vertices A, B, and C. Each side of the triangle is of length a. A point charge Q (that may be positive or negative) is placed at the mid-point between B and C. Is it possible to choose the value of Q (that is non-zero) such that the force on Q is zero?\n\nA) Yes, because the forces on Q are vectors and three vectors can add to zero.\n\nB) No, because the forces on Q are vectors and three vectors can never add to zero.\n\nC) Yes, because the electric force at the mid-point between B and C is zero whether a charge is placed there or not.\n\nD) No, because the forces on Q due to the charges at B and C point in the same direction.\n\nE) No, because a fourth charge would be needed to cancel the force on Q due to the charge at A."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9600427,"math_prob":0.9924697,"size":900,"snap":"2020-24-2020-29","text_gpt3_token_len":222,"char_repetition_ratio":0.14955357,"word_repetition_ratio":0.12849163,"special_character_ratio":0.23888889,"punctuation_ratio":0.09313726,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969217,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T03:10:43Z\",\"WARC-Record-ID\":\"<urn:uuid:ad995984-87e9-4515-89eb-8a94cd984f25>\",\"Content-Length\":\"168977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e509521-e9ff-48bb-912f-ed04779e34cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:85414e8e-eda4-4752-abf3-8510ebb9f461>\",\"WARC-IP-Address\":\"18.210.187.7\",\"WARC-Target-URI\":\"https://admin.clutchprep.com/physics/practice-problems/38901/an-equilateral-triangle-160-abc-a-positive-point-charge-q-is-located-at-each-of-\",\"WARC-Payload-Digest\":\"sha1:ZXTHNELQEKTRSNSLSN4RWDQQ4XYNMOMC\",\"WARC-Block-Digest\":\"sha1:4MDCUE7PD5T2JCZYRC7EDMU7C6OAXAPN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348492427.71_warc_CC-MAIN-20200605014501-20200605044501-00368.warc.gz\"}"} |
https://www.osti.gov/etdeweb/biblio/21217733 | [
"You need JavaScript to view this\n\nClosed-form solution of a two-dimensional fuel temperature model for TRIGA-type reactors\n\n## Abstract\n\nIf azimuthal power density variations are ignored, the steady-state temperature distribution within a TRIGA-type fuel element is given by the solution of the Poisson equation in two dimensions (r and z) . This paper presents a closed-form solution of this equation as a function of the axial and radial power density profiles, the conductivity of the U-ZrH, the inlet temperature, specific heat and flow rate of the coolant, and the overall heat transfer coefficient. The method begins with the development of a system of linear ordinary differential equations describing mass and energy balances in the fuel and coolant. From the solution of this system, an expression for the second derivative of the fuel temperature distribution in the axial (z) direction is found. Substitution of this expression into the Poisson equation for T(r,z) reduces it from a partial differential equation to an ordinary differential equation in r, which is subsequently solved in closed-form. The results of typical calculations using the model are presented. (author)\nAuthors:\nRivard, J B \n1. Sandia Laboratories (United States)\nPublication Date:\nJul 01, 1974\nProduct Type:\nConference\nReport Number:\nINIS-US-09N0328; TOC-5\nResource Relation:\nConference: 3. TRIGA owners' conference, Albuquerque, NM (United States), 25-27 Feb 1974; Other Information: Country of input: International Atomic Energy Agency (IAEA); 1 fig; Related Information: In: 3. TRIGA owners' conference. Papers and abstracts, 432 pages.\nSubject:\n21 SPECIFIC NUCLEAR REACTORS AND ASSOCIATED PLANTS; COOLANTS; ENERGY BALANCE; FLOW RATE; HEAT TRANSFER; NUCLEAR FUELS; POISSON EQUATION; POWER DENSITY; SPECIFIC HEAT; STEADY-STATE CONDITIONS; TEMPERATURE DISTRIBUTION; TRIGA TYPE REACTORS; TWO-DIMENSIONAL CALCULATIONS\nOSTI ID:\n21217733\nResearch Organizations:\nGeneral Atomic Co., San Diego, CA (United States)\nCountry of Origin:\nUnited States\nLanguage:\nEnglish\nOther Identifying Numbers:\nTRN: US09N0340086517\nAvailability:\nAvailable from INIS in electronic form\nSubmitting Site:\nINIS\nSize:\npage(s) 2.43-2.49\nAnnouncement Date:\nOct 19, 2009\n\n## Citation Formats\n\nRivard, J B. Closed-form solution of a two-dimensional fuel temperature model for TRIGA-type reactors. United States: N. p., 1974. Web.\nRivard, J B. Closed-form solution of a two-dimensional fuel temperature model for TRIGA-type reactors. United States.\nRivard, J B. 1974. \"Closed-form solution of a two-dimensional fuel temperature model for TRIGA-type reactors.\" United States.\n@misc{etde_21217733,\ntitle = {Closed-form solution of a two-dimensional fuel temperature model for TRIGA-type reactors}\nauthor = {Rivard, J B}\nabstractNote = {If azimuthal power density variations are ignored, the steady-state temperature distribution within a TRIGA-type fuel element is given by the solution of the Poisson equation in two dimensions (r and z) . This paper presents a closed-form solution of this equation as a function of the axial and radial power density profiles, the conductivity of the U-ZrH, the inlet temperature, specific heat and flow rate of the coolant, and the overall heat transfer coefficient. The method begins with the development of a system of linear ordinary differential equations describing mass and energy balances in the fuel and coolant. From the solution of this system, an expression for the second derivative of the fuel temperature distribution in the axial (z) direction is found. Substitution of this expression into the Poisson equation for T(r,z) reduces it from a partial differential equation to an ordinary differential equation in r, which is subsequently solved in closed-form. The results of typical calculations using the model are presented. (author)}\nplace = {United States}\nyear = {1974}\nmonth = {Jul}\n}"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.901989,"math_prob":0.8869553,"size":3965,"snap":"2020-45-2020-50","text_gpt3_token_len":805,"char_repetition_ratio":0.14718506,"word_repetition_ratio":0.9337748,"special_character_ratio":0.19949558,"punctuation_ratio":0.09117221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97757244,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T03:05:04Z\",\"WARC-Record-ID\":\"<urn:uuid:f786ec79-c2f8-4e09-9da4-db444a1a6412>\",\"Content-Length\":\"259168\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cfb43419-3f44-4037-9c59-4a394943b325>\",\"WARC-Concurrent-To\":\"<urn:uuid:d50974c2-b2fb-4c5d-8878-0188ec1c0664>\",\"WARC-IP-Address\":\"192.107.175.222\",\"WARC-Target-URI\":\"https://www.osti.gov/etdeweb/biblio/21217733\",\"WARC-Payload-Digest\":\"sha1:UM7S5OIWYD7BXOLXCY7MOXNDTHF2RERB\",\"WARC-Block-Digest\":\"sha1:GULMP7JHVU6E7A3DPJI6VFFZN4KP25IW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107885126.36_warc_CC-MAIN-20201025012538-20201025042538-00140.warc.gz\"}"} |
https://docs.bunifuframework.com/en/articles/2281887-stacked-column-100-chart | [
"Stacked column 100 chart is derived from stacked column chart. The difference is that the sum of data series add up to 100%.\n\nStacked column 100 chart is useful when showing part-to-whole proportions over a period of time.\n\nHow to display stacked bar chart using Bunifu Dataviz\n\nSimply locate Bunifu Dataviz control in your toolbox and drag it to the location on your form where you would like to display it.\n\nWe will use button click event handler to display our chart called render_stacked_column_100\n\nC# code\n\n``private void render(){ var canvas = new Bunifu.DataViz.WinForms.Canvas(); var datapoint = new Bunifu.DataViz.WinForms.DataPoint(Bunifu.DataViz.WinForms.BunifuDataViz._type.Bunifu_stackedColumn100); datapoint.addxy(\"new Date (2002,11,10)\", new JArray(16.23, 17.99).ToString()); datapoint.addxy(\"new Date (2002, 11, 9)\", new JArray(15.95, 19.25).ToString()); datapoint.addxy(\"new Date (2002, 11, 8)\", new JArray(11.30, 16.88).ToString()); datapoint.addxy(\"new Date (2002, 11, 7)\", new JArray(13.29, 14.28).ToString()); datapoint.addxy(\"new Date (2002, 11, 6)\", new JArray(15.23, 16.45).ToString()); datapoint.addxy(\"new Date (2002, 11, 5)\", new JArray(13.70, 16.50).ToString()); datapoint.addxy(\"new Date (2002, 11, 4)\", new JArray(17.50, 19.00).ToString()); datapoint.addxy(\"new Date (2002, 11, 3)\", new JArray(19.50, 20.85).ToString()); datapoint.addxy(\"new Date (2002, 11, 2)\", new JArray(20.07, 21.44).ToString()); datapoint.addxy(\"new Date (2002, 11, 1)\", new JArray(25.00, 26.70).ToString()); canvas.addData(datapoint); bunifuDataViz1.Render(canvas);}``\n\nVB.NET code\n\n``Private Sub render() Dim canvas = New Bunifu.DataViz.WinForms.Canvas() Dim datapoint = New Bunifu.DataViz.WinForms.DataPoint(Bunifu.DataViz.WinForms.BunifuDataViz._type.Bunifu_stackedColumn100) datapoint.addxy(\"new Date (2002,11,10)\", New JArray(16.23, 17.99).ToString()) datapoint.addxy(\"new Date (2002, 11, 9)\", New JArray(15.95, 19.25).ToString()) datapoint.addxy(\"new Date (2002, 11, 8)\", New JArray(11.30, 16.88).ToString()) datapoint.addxy(\"new Date (2002, 11, 7)\", New JArray(13.29, 14.28).ToString()) datapoint.addxy(\"new Date (2002, 11, 6)\", New JArray(15.23, 16.45).ToString()) datapoint.addxy(\"new Date (2002, 11, 5)\", New JArray(13.70, 16.50).ToString()) datapoint.addxy(\"new Date (2002, 11, 4)\", New JArray(17.50, 19.00).ToString()) datapoint.addxy(\"new Date (2002, 11, 3)\", New JArray(19.50, 20.85).ToString()) datapoint.addxy(\"new Date (2002, 11, 2)\", New JArray(20.07, 21.44).ToString()) datapoint.addxy(\"new Date (2002, 11, 1)\", New JArray(25.00, 26.70).ToString()) canvas.addData(datapoint) bunifuDataViz1.Render(canvas)End Sub``\n\nIn order to display a Bunifu Stacked Column 100 we need the following controls:\n\n• Bunifu Data Viz - This is the container for our chart\n• Bunifu Canvas - This is the layer between the data viz (container) and the dataset\n• Bunifu Data Point - This contains the data that we want to represent, as pairs of X and Y coordinates\n\nBunifu Stacked Column 100 simply works by creating 2 data point objects, one for the “low” set of points and one for the “high” set of points. The control will know automatically to adjust the width of the lines to match the specified data points and also to fill the entire width of the graph.\n\nOn running the code you should see something like this:",
null,
"That's it!"
] | [
null,
"https://downloads.intercomcdn.com/i/o/73765044/6f00bcd742b73f18cf9f7e17/stacked-column-chart-100%5B1%5D.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6296301,"math_prob":0.916581,"size":3526,"snap":"2019-26-2019-30","text_gpt3_token_len":1076,"char_repetition_ratio":0.22827938,"word_repetition_ratio":0.04524887,"special_character_ratio":0.35110608,"punctuation_ratio":0.26517966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9765401,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T08:04:53Z\",\"WARC-Record-ID\":\"<urn:uuid:c54a80e6-cd72-4f29-a0dd-d0761d37ea63>\",\"Content-Length\":\"18849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:405c2256-fcef-4003-8f68-fc0f5f05b643>\",\"WARC-Concurrent-To\":\"<urn:uuid:a445e8e5-f2cf-465a-a0c4-0876a96a3003>\",\"WARC-IP-Address\":\"45.63.64.75\",\"WARC-Target-URI\":\"https://docs.bunifuframework.com/en/articles/2281887-stacked-column-100-chart\",\"WARC-Payload-Digest\":\"sha1:56NOAVE27QTA7PI7E64QAIGTG3HYERLG\",\"WARC-Block-Digest\":\"sha1:GLKXEBNBWO6ESR6DB4BYU4EMCAEQAYSG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527828.69_warc_CC-MAIN-20190722072309-20190722094309-00298.warc.gz\"}"} |
https://it.mathworks.com/help/optim/ug/output-functions.html | [
"# Output Functions for Optimization Toolbox™\n\n### What Is an Output Function?\n\nFor some problems, you might want output from an optimization algorithm at each iteration. For example, you might want to find the sequence of points that the algorithm computes and plot those points. To do this, create an output function that the optimization function calls at each iteration. See Output Function and Plot Function Syntax for details and syntax.\n\nThis section shows the solver-based approach to output functions. For the problem-based approach, see Output Function for Problem-Based Optimization.\n\nGenerally, the solvers that can employ an output function are the ones that can take nonlinear functions as inputs. You can determine which solvers can have an output function by looking in the Options section of function reference pages.\n\n### Example: Use an Output Function\n\nThis example shows how to use an output function to monitor the `fmincon` solution process for solving a constrained nonlinear optimization problem. The output function does the following at the end of each `fmincon` iteration:\n\n• Plot the current point.\n\n• Store the current point and its corresponding objective function value in a variable called `history`, and store the current search direction in a variable called `searchdir`. The search direction is a vector that points in the direction from the current point to the next one.\n\nAdditionally, to make the history available outside of the `fmincon` function, perform the optimization inside a nested function that calls `fmincon` and returns the output function variables. For more information about this method of passing information, see Passing Extra Parameters. The `runfmincon` function at the end of this example contains the nested function call.\n\n### Objective and Constraint Functions\n\nThe problem is to minimize the function\n\n`$f\\left(x\\right)=\\mathrm{exp}\\left({x}_{1}\\right)\\left(4{x}_{1}^{2}+2{x}_{2}^{2}+4{x}_{1}{x}_{2}+2{x}_{2}+1\\right)$`\n\nsubject to the nonlinear inequality constraints\n\n`$\\begin{array}{l}{x}_{1}+{x}_{2}-{x}_{1}{x}_{2}\\ge 3/2\\\\ {x}_{1}{x}_{2}\\ge -10.\\end{array}$`\n\nThe `objfun` function nested in `runfmincon` implements the objective function. The `confun` function nested in `runfmincon` implements the constraint function.\n\n### Call Solver\n\nTo obtain the solution to the problem and see the history of the `fmincon` iterations, call the `runfmincon` function.\n\n`[xsol,fval,history,searchdir] = runfmincon;`\n``` Max Line search Directional First-order Iter F-count f(x) constraint steplength derivative optimality Procedure 0 3 1.8394 0.5 Infeasible start point 1 6 1.85127 -0.09197 1 0.109 0.778 ```\n``` 2 9 0.300167 9.33 1 -0.117 0.313 Hessian modified twice ```\n``` 3 12 0.529835 0.9209 1 0.12 0.232 ```\n``` 4 16 0.186965 -1.517 0.5 -0.224 0.13 ```\n``` 5 19 0.0729085 0.3313 1 -0.121 0.054 ```\n``` 6 22 0.0353323 -0.03303 1 -0.0542 0.0271 ```\n``` 7 25 0.0235566 0.003184 1 -0.0271 0.00587 ```\n``` 8 28 0.0235504 9.035e-08 1 -0.0146 8.51e-07 ```\n```Active inequalities (to within options.ConstraintTolerance = 1e-06): lower upper ineqlin ineqnonlin 1 2 ```",
null,
"```Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance. ```\n\nThe output function creates a plot of the points that `fmincon` evaluates. Each point is labeled by its iteration number. The optimal point occurs at the eighth iteration. The last two points in the sequence are so close that they overlap.\n\nThe output `history` is a structure that contains two fields.\n\n`disp(history)`\n``` x: [9x2 double] fval: [9x1 double] ```\n\nThe `fval` field in `history` contains the objective function values corresponding to the sequence of points `fmincon` computes.\n\n`disp(history.fval)`\n``` 1.8394 1.8513 0.3002 0.5298 0.1870 0.0729 0.0353 0.0236 0.0236 ```\n\nThese are the same values displayed in the iterative output in the column with header `f(x)`.\n\nThe `x` field of `history` contains the sequence of points that `fmincon` computes.\n\n`disp(history.x)`\n``` -1.0000 1.0000 -1.3679 1.2500 -5.5708 3.4699 -4.8000 2.2752 -6.7054 1.2618 -8.0679 1.0186 -9.0230 1.0532 -9.5471 1.0471 -9.5474 1.0474 ```\n\nThe second output argument, `searchdir`, contains the search directions for `fmincon` at each iteration. The search direction is a vector pointing from the point computed at the current iteration to the point computed at the next iteration.\n\n`disp(searchdir)`\n``` -0.3679 0.2500 -4.2029 2.2199 0.7708 -1.1947 -3.8108 -2.0268 -1.3625 -0.2432 -0.9552 0.0346 -0.5241 -0.0061 -0.0003 0.0003 ```\n\n### Helper Functions\n\nThe following code creates the `runfmincon` function, containing the `outfun` output function, `objfun` objective function, and `confun` nonlinear constraint function as nested functions,\n\n```function [xsol,fval,history,searchdir] = runfmincon % Set up shared variables with OUTFUN history.x = []; history.fval = []; searchdir = []; % call optimization x0 = [-1 1]; options = optimoptions(@fmincon,'OutputFcn',@outfun,... 'Display','iter','Algorithm','active-set'); [xsol,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],@confun,options); function stop = outfun(x,optimValues,state) stop = false; switch state case 'init' hold on case 'iter' % Concatenate current point and objective function % value with history. x must be a row vector. history.fval = [history.fval; optimValues.fval]; history.x = [history.x; x]; % Concatenate current search direction with % searchdir. searchdir = [searchdir;... optimValues.searchdirection']; plot(x(1),x(2),'o'); % Label points with iteration number and add title. % Add .15 to x(1) to separate label from plotted 'o' text(x(1)+.15,x(2),... num2str(optimValues.iteration)); title('Sequence of Points Computed by fmincon'); case 'done' hold off otherwise end end function f = objfun(x) f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) +... 2*x(2) + 1); end function [c, ceq] = confun(x) % Nonlinear inequality constraints c = [1.5 + x(1)*x(2) - x(1) - x(2); -x(1)*x(2) - 10]; % Nonlinear equality constraints ceq = []; end end```"
] | [
null,
"https://it.mathworks.com/help/examples/optim/win64/OutputFunctionsForOptimizationToolboxExample_01.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.73550516,"math_prob":0.9910715,"size":5600,"snap":"2021-04-2021-17","text_gpt3_token_len":1576,"char_repetition_ratio":0.16136526,"word_repetition_ratio":0.01953602,"special_character_ratio":0.3092857,"punctuation_ratio":0.18284228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9884072,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T16:04:06Z\",\"WARC-Record-ID\":\"<urn:uuid:c104cc7f-bd5a-4782-b064-2ccb416e510c>\",\"Content-Length\":\"80046\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06b9498f-6429-46cc-ae26-3e58ce4d5758>\",\"WARC-Concurrent-To\":\"<urn:uuid:cafd6e56-c12a-496d-97c5-5dd40ce8b8cd>\",\"WARC-IP-Address\":\"23.10.129.243\",\"WARC-Target-URI\":\"https://it.mathworks.com/help/optim/ug/output-functions.html\",\"WARC-Payload-Digest\":\"sha1:NPC7FSXZ2CXF5KA4CCGZ7AOBP2XNZVMW\",\"WARC-Block-Digest\":\"sha1:QKTFEU7ISBUXK4PB66YYWPE2C2ZWKZBY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703515075.32_warc_CC-MAIN-20210118154332-20210118184332-00318.warc.gz\"}"} |
http://slideplayer.com/slide/4164461/ | [
"",
null,
"# Force Force is a push or pull on an object The object is called the System Force on a system in motion causes change in velocity = acceleration Force is.\n\n## Presentation on theme: \"Force Force is a push or pull on an object The object is called the System Force on a system in motion causes change in velocity = acceleration Force is.\"— Presentation transcript:\n\nForce Force is a push or pull on an object The object is called the System Force on a system in motion causes change in velocity = acceleration Force is a vector, it has direction External World = everything around the system that exerts a force\n\nTypes of Forces Forces Contact When the external world is in contact with the system and exerts a force ex. Hand that pushes, Rope that pulls Field Forces A force exerted on a system without touching the system ex. Gravity, Magnetic force, Interaction between charged particles\n\nAgent An agent is the item in the external world that causes the force to act on the system ex. The hand that holds The rope that pulls The earth The magnet\n\nFree Body Diagram A diagram representing the System, Agent and the Forces acting on the system System is represented by a dot Forces represented by arrows pointing in the direction of the force, away from system Label each force with a subscript Choose direction of +ve usually towards stronger force\n\nFt gF Ball String (force of tension) (force of gravity) Positive Direction\n\nCombining Forces As forces are vectors they can be added just as vectors ex.1 Fa + Fb = F net ex.2 Fa + Fb = Fnet Nonlinear non perpendicular forces are added by adding x and y components of the force vectors\n\nNewton’s First Law An object at rest will remain at rest and an object in motion will remain in motion unless acted on by an external net force Also known as the Law of Inertia Inertia is the tendency of an object to resist change in motion Equilibrium = if the net force on a system is zero the system is in equilibrium speed and direction is unchanged.\n\nCommon Types of Forces Friction F f = Contact force opposing motion of two surfaces, parallel to the surface and opposite to the direction of motion Normal F N = Contact force exerted by system’s surface perpendicular to and away from surface Spring F sp = A restoring force- push or pull of spring exerted on system opposite to displacement\n\nTension F T = pull exerted by a rope on a system, away from the system and parallel to the rope Thrust F THRUST = a force that moves a system in the same direction as the acceleration Weight F g = a field force on a system due to gravity directed to the center of the earth\n\nNewton’s Second Law The acceleration of an object is equal to the net force acting on it divided by the mass of the object a = F/m or F = ma The larger the force the greater the acceleration The greater the mass with the same force the lower the acceleration Force of 1N = 1kg * 1m/s.s\n\nNewton’s Third Law All forces come in pairs equal and opposite in direction ex. A ball on a table and the table on the earth. Forces-ball on table / table on ball -table on earth/ earth on table - ball on earth/ earth on ball\n\nNet free body diagram ball table earth F table on ball F earth on ball\n\nDrag Force and Terminal Velocity Drag Force is the force exerted on an object as it moves the a fluid ex. Air and water as the speed of the object through the fluid increases so does the drag Drag is effected by the fluids viscosity and temperature Terminal velocity is when the drag force equals the force due to gravity no acceleration constant velocity app.(60m/s)\n\nTension Is a force exerted by a string or rope on a system It is assumed the rope has no mass Tension within all points of the rope is equal and opposite to the force exerted by the system’s weight\n\nExample of bucket on rope FTFT F g =mg\n\nNormal Force Is a contact force exerted by a surface on an object. This force is perpendicular to the surface of contact FgFg N\n\nGravitational Force Gravitational force is the mutual attraction between any two bodies in the universe Newton’s Law of Universal Gravitation =every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them F g = Gm 1 m 2 /r 2 where G = the universal gravitational constant = 6.67*10 -11 N.m 2 /kg 2\n\nGravity As weight = mg then w = GM E m/r 2 Then g = GM E /r 2 Therefore the larger the mass of an object the larger the effective gravity it generates\n\nForces of Friction f s and f k An object moving on a surface or passing through a fluid experiences resistance to motion = friction F f s = forces of static friction = the force that prevents movement of an object that is being subjected to an external force When movement is about to occur f s is at max.\n\nf s and f k When a force F exceeds F f s max then movement will occur and the new friction force is less. This new friction force is called the Force of kinetic friction f k When F-f k = positive value there is acceleration Both f s and f k are proportional to the normal force; f s =u s n and f k =u k n u s is the coefficient of static friction U k is the coefficient of kinetic friction\n\nSolving Friction Problems Draw a free body diagram making sure to label all forces\n\nDownload ppt \"Force Force is a push or pull on an object The object is called the System Force on a system in motion causes change in velocity = acceleration Force is.\"\n\nSimilar presentations"
] | [
null,
"http://slideplayer.com/static/blue_design/img/slide-loader4.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90303874,"math_prob":0.9857138,"size":5159,"snap":"2019-35-2019-39","text_gpt3_token_len":1160,"char_repetition_ratio":0.15344326,"word_repetition_ratio":0.0030549897,"special_character_ratio":0.2201977,"punctuation_ratio":0.029821074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99858,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-22T03:58:30Z\",\"WARC-Record-ID\":\"<urn:uuid:b85eaf08-c764-4a74-9ca9-f0194bb3e9b2>\",\"Content-Length\":\"172489\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55b3da37-0753-4cfe-912d-e91f6dc09de1>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6bf5bda-7a47-43e0-a539-51162ca72d3c>\",\"WARC-IP-Address\":\"138.201.58.10\",\"WARC-Target-URI\":\"http://slideplayer.com/slide/4164461/\",\"WARC-Payload-Digest\":\"sha1:4GZ2P55B3OUSGQGWJOTVQZ6SWWCJVIYT\",\"WARC-Block-Digest\":\"sha1:NQHR6WTJCAORR4IA2Y7GYU5U2D2KAMBL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575076.30_warc_CC-MAIN-20190922032904-20190922054904-00228.warc.gz\"}"} |
https://pyneng.readthedocs.io/en/latest/book/09_functions/func_args_var.html | [
"# Variable length arguments#\n\nSometimes it is necessary to make function accept not a fixed number of arguments, but any number. For such a case, in Python it is possible to create a function with a special parameter that accepts variable length arguments. This parameter can be both keyword and positional.\n\nNote\n\nEven if you don’t use it in your scripts there’s a good chance you’ll find it in someone else’s code.\n\n## Variable length positional arguments#\n\nParameter that takes positional variable length arguments is created by adding an asterisk before parameter name. Parameter can have any name but by agreement `*args` is the most common name.\n\nExample of a function:\n\n```In : def sum_arg(a, *args):\n....: print(a, args)\n....: return a + sum(args)\n....:\n```\n\nFunction `sum_arg` is created with two parameters:\n\n• parameter `a`\n\n• if passed as positional argument, should be first\n\n• if passed as a keyword argument, the order does not matter\n\n• parameter `*args` - expects variable length arguments\n\n• all other arguments as a tuple\n\n• these arguments may be missed\n\nCall a function with different number of arguments:\n\n```In : sum_arg(1, 10, 20, 30)\n1 (10, 20, 30)\nOut: 61\n\nIn : sum_arg(1, 10)\n1 (10,)\nOut: 11\n\nIn : sum_arg(1)\n1 ()\nOut: 1\n```\n\nYou can also create such a function:\n\n```In : def sum_arg(*args):\n....: print(args)\n....: return sum(args)\n....:\n\nIn : sum_arg(1, 10, 20, 30)\n(1, 10, 20, 30)\nOut: 61\n\nIn : sum_arg()\n()\nOut: 0\n```\n\n## Keyword variable length arguments#\n\nParameter that accepts keyword variable length arguments is created by adding two asterisk in front of parameter name. Name of parameter can be any but by agreement most commonly use name `**kwargs` (from keyword arguments).\n\nExample of a function:\n\n```In : def sum_arg(a, **kwargs):\n....: print(a, kwargs)\n....: return a + sum(kwargs.values())\n....:\n```\n\nFunction `sum_arg` is created with two parameters:\n\n• parameter `a`\n\n• if passed as positional argument, should be first\n\n• if passed as a keyword argument, the order does not matter\n\n• parameter `**kwargs` - expects keyword variable length arguments\n\n• all other keyword arguments as a dictionary\n\n• these arguments may be missed\n\nCalling a function with different number of keyword arguments:\n\n```In : sum_arg(a=10, b=10, c=20, d=30)\n10 {'c': 20, 'b': 10, 'd': 30}\nOut: 70\n\nIn : sum_arg(b=10, c=20, d=30, a=10)\n10 {'c': 20, 'b': 10, 'd': 30}\nOut: 70\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56058854,"math_prob":0.9587207,"size":2340,"snap":"2022-40-2023-06","text_gpt3_token_len":637,"char_repetition_ratio":0.16523972,"word_repetition_ratio":0.20472442,"special_character_ratio":0.31794873,"punctuation_ratio":0.2294455,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.983943,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T16:43:52Z\",\"WARC-Record-ID\":\"<urn:uuid:8a704436-1891-4936-8730-a93197150a4a>\",\"Content-Length\":\"77583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1361b7e-12c5-47d5-907c-d606a9d2fd9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2961475-6478-48a2-8c86-4da8e5b6fda3>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"https://pyneng.readthedocs.io/en/latest/book/09_functions/func_args_var.html\",\"WARC-Payload-Digest\":\"sha1:F3CGT3CZHSWS3PZAYVGAVH7EOMTDAZHR\",\"WARC-Block-Digest\":\"sha1:P2JYFILBFMMMUTHQHAVHKPAX4PNIZYMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499744.74_warc_CC-MAIN-20230129144110-20230129174110-00539.warc.gz\"}"} |
http://www.stats.bris.ac.uk/R/web/packages/energy/news/news.html | [
"energy 1.7-5\n\n• User level changes:\n• kgroups: (new) implements energy clustering for a specified number k classes by energy distance criterion, analogous to the k classes of the k-means algorithm.\n• dcov2d and dcor2d: (new) O(n log n) methods to compute the squared U or V statistics for real x and y\n• sortrank() function added (a utility)\n• Internal changes:\n• B-tree.cpp: Btree_sum and other internal functions implement binary tree search for faster O(n log n) calculation of paired distances in dcov2d\n• kgroups.cpp: Rcpp implementation of k-groups algorithm\n• energy.hclust implementation: replaced C++ code with call to stats::hclust; since R > 3.0.3 it is now equivalent for alpha = 1 with method = “ward.D”. Input and return value unchanged except heights from hclust are half.\n\nenergy 1.7-4\n\n• User level changes\n• disco: handle the case when the user argument x is dist with conflicting argument distance=FALSE\n• dcor.t and dcor.ttest: handle the cases when class of argument x or y conflicts with the distance argument\n• Split manual page of dcovU into two files.\n• indep.etest and indep.e removed now Defunct (were Deprecated since Version 1.1-0, 2008-04-07; replaced by indep.test).\n• Internal changes\n• BCDCOR: handle the cases when class of argument x or y conflicts with the distance argument\n\nenergy 1.7-2\n\n• User level changes\n• Provided new dcor.test function, similar to dcov.test but using the distance correlation as the test statistic.\n• Number of replicates R for Monte Carlo and permutation tests now matches the argument of the boot::boot function (no default value, user must specify).\n• If user runs a test with 0 replicates, p-value printed is NA\n• Internal changes\n• energy_init.c added for registering routines\n\nenergy 1.7-0\n\n• Partial Distance Correlation statistics and tests added\n• pdcov, pdcor, pdcov.test, pdcor.test\n• dcovU: unbiased estimator of distance covariance\n• bcdcor: bias corrected distance correlation\n• Ucenter, Dcenter, U_center, D_center: double-centering and U-centering utilities\n• U_product: inner product in U-centered Hilbert space\n• updated NAMESPACE and DESCRIPTION imports, etc.\n• revised package Title and Description in DESCRIPTION\n• package now links to Rcpp\n• mvnorm c code ported to c++ (mvnorm.cpp); corresponding changes in Emvnorm.R\n• syntax for bcdcor: “distance” argument removed, now argument can optionally be a dist object\n• syntax for energy.hclust: first argument must now be a dist object\n• default number of replicates R in tests: for all tests, R now defaults to 0 or R has no default value.\n\nenergy 1.6.2\n\n• inserted GetRNGstate() .. PutRNGState around repl. loop in dcov.c.\n\nenergy 1.6.1\n\n• replace Depends with Imports in DESCRIPTION file\n\nenergy 1.6.0\n\n• implementation of high-dim distance correlation t-test introduced in JMVA Volume 117, pp. 193-213 (2013).\n• new functions dcor.t, dcor.ttest in dcorT.R\n• minor changes to tidy other code in dcov.R\n• removed unused internal function .dcov.test\n\nenergy 1.5.0\n\n• NAMESPACE: insert UseDynLib; remove zzz.R, .First.Lib()\n\nenergy 1.4-0\n\n• (dcov.c, Eindep.c) Unused N was removed.\n• (dcov.c) In case dcov=0, bypass the unnecessary loop that generates replicates (in dCOVtest and dCovTest). In this case dcor=0 and test is not significant. (dcov=0 if one of the samples is constant.)\n• (Eqdist.R) in eqdist.e and eqdist.etest, method=“disco” is replaced by two options: “discoB” (between sample components) and “discoF” (disco F ratio).\n• (disco.R) Added disco.between and internal functions that compute the disco between-sample component and corresponding test.\n• (utilities.c) In permute function replaced rand_unif with runif.\n• (energy.c) In ksampleEtest the pval computation changed from ek/B to (ek+1)/(B+1) as it should be for a permutation test, and unneeded int* n removed.\n\nenergy 1.3-0\n\n• In distance correlation, distance covariance functions (dcov, dcor, DCOR) and dcov.test, arguments x and y can now optionally be distance objects (result of dist function or as.dist). Matrices x and y will always be treated as data.\n\n• Functions in dcov.c and utilities.c were modified to support arguments that are distances rather than data. In utilities.c the index_distance function changed. In dcov.c there are many changes. Most importantly for the exported objects, there is now an extra required parameter in the dims argument passed from R. In dCOVtest dims must be a vector c(n, p, q, dst, R) where n is sample size, p and q are dimensions of x and y, dst is logical (TRUE if distances) and R is number of replicates. For dCOV dims must be c(n, p, q, dst).\n\nenergy 1.2-0\n\n• disco (distance components) added for one-way layout.\n• A method argument was added to ksample.e, eqdist.e, and eqdist.etest, method = c(“original”, “disco”).\n• A method argument was added to edist, which summarizes cluster distances in a table: method = c(“cluster”,“discoB”,“discoF”))"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.871029,"math_prob":0.8390565,"size":763,"snap":"2019-26-2019-30","text_gpt3_token_len":182,"char_repetition_ratio":0.12648222,"word_repetition_ratio":0.0,"special_character_ratio":0.22280473,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.974283,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T18:59:18Z\",\"WARC-Record-ID\":\"<urn:uuid:7d969e9c-bbb2-483a-9270-3bce64a9f82a>\",\"Content-Length\":\"6894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ed90067-2f7a-4133-887c-0a3003010c73>\",\"WARC-Concurrent-To\":\"<urn:uuid:19aa37ad-2f88-48dd-ab55-4f086d479d1c>\",\"WARC-IP-Address\":\"137.222.10.189\",\"WARC-Target-URI\":\"http://www.stats.bris.ac.uk/R/web/packages/energy/news/news.html\",\"WARC-Payload-Digest\":\"sha1:Y2ENK3NHYEWUV35HGEWXRY6CDLOH6DO3\",\"WARC-Block-Digest\":\"sha1:BBEGWOOTRW6O4GNPNAZMGW7D2DWQTH7M\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998558.51_warc_CC-MAIN-20190617183209-20190617205209-00321.warc.gz\"}"} |
http://www.vbaexpress.com/forum/showthread.php?66460-Dynamic-Function&s=ec18fa000b3797713e6d93b168b53bbe&p=397577 | [
"1. ## Dynamic Function\n\nHello! I want to create a dynamic function in VBA wherein I provide the name range, column number and row number. As per the values provided, the range should be hard pasted. This is what I did but it is giving an error:\n\n```Public Function HardPaste(Table_Range As Variant, Col_Num As Integer, Row_Num As Integer)\nTable_Range(Row_Num, Col_Num).Select\nSelection.PasteSpecial Paste:=xlPasteValues\nEnd Function```\nThis is how I am using it:\n\n```Sub Test()\nHardPaste(CQ_Table_Range,1,4)\nEnd Sub```\n\nThe error I am getting is Compile Error: Expected :=",
null,
"",
null,
"Reply With Quote\n\n2. I tried another this and it worked:\n\n```Public Function HardPaste(Table_Range As Range, Col_Num As Integer, Row_Num As Integer)\nRange(Table_Range(Row_Num, Col_Num), Table_Range(Table_Range.Rows.Count, Table_Range.Columns.Count)).Copy\nTable_Range(Row_Num, Col_Num).PasteSpecial Paste:=xlPasteValues\nApplication.CutCopyMode = False\nEnd Function```\nThanks!",
null,
"",
null,
"Reply With Quote\n\n####",
null,
"Posting Permissions\n\n• You may not post new threads\n• You may not post replies\n• You may not post attachments\n• You may not edit your posts\n•"
] | [
null,
"http://www.vbaexpress.com/forum/images/misc/progress.gif",
null,
"http://www.vbaexpress.com/forum/clear.gif",
null,
"http://www.vbaexpress.com/forum/images/misc/progress.gif",
null,
"http://www.vbaexpress.com/forum/clear.gif",
null,
"http://www.vbaexpress.com/forum/images/buttons/collapse_40b.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63373816,"math_prob":0.4933194,"size":693,"snap":"2023-40-2023-50","text_gpt3_token_len":178,"char_repetition_ratio":0.11030479,"word_repetition_ratio":0.0,"special_character_ratio":0.22799423,"punctuation_ratio":0.19548872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9812579,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T03:46:43Z\",\"WARC-Record-ID\":\"<urn:uuid:6d7e2da1-44b8-474c-9ee1-9fd6b1cf58e1>\",\"Content-Length\":\"41872\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ee2953f-4ef1-4762-bf1a-2e19f233c1e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2c721a1-29b9-4158-9d94-22fee00886f4>\",\"WARC-IP-Address\":\"67.212.188.2\",\"WARC-Target-URI\":\"http://www.vbaexpress.com/forum/showthread.php?66460-Dynamic-Function&s=ec18fa000b3797713e6d93b168b53bbe&p=397577\",\"WARC-Payload-Digest\":\"sha1:KTVBC7JHXUNKJBVTXFW4ANTXZREZKFYC\",\"WARC-Block-Digest\":\"sha1:OOXZMB7B2KAGTKCTKRP3R5HQSZHJ63QO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100264.9_warc_CC-MAIN-20231201021234-20231201051234-00277.warc.gz\"}"} |
https://brilliant.org/problems/turn-it-this-way/ | [
"# Turn it this way\n\nAlgebra Level 1\n\nThe point $\\left(4+7\\sqrt{3}\\ ,\\ 7-4\\sqrt{3}\\right)$ is rotated $\\dfrac{\\pi}{3}$ radians counterclockwise about the origin. If the resulting image is $(a,b)$, then what is $a+b$?\n\n×\n\nProblem Loading...\n\nNote Loading...\n\nSet Loading..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7752435,"math_prob":0.9998455,"size":270,"snap":"2021-21-2021-25","text_gpt3_token_len":62,"char_repetition_ratio":0.15037593,"word_repetition_ratio":0.29268292,"special_character_ratio":0.22962964,"punctuation_ratio":0.29508197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98810893,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T20:46:35Z\",\"WARC-Record-ID\":\"<urn:uuid:dac4c274-526c-4035-8c0f-2243a47df25a>\",\"Content-Length\":\"41392\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cff3a1d3-aba3-4f26-a295-4c3ac519e201>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2249c3d-2cff-4f12-b1db-4ca1119502c8>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/problems/turn-it-this-way/\",\"WARC-Payload-Digest\":\"sha1:5LXCGF2OGQEZPBVMKA6RJZO276GUKFDN\",\"WARC-Block-Digest\":\"sha1:UR5FQZ5A4AO4MXSPAGK2L6KSA6OFVFAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488559139.95_warc_CC-MAIN-20210624202437-20210624232437-00500.warc.gz\"}"} |
https://studylib.net/doc/10408833/february-21--2007-lecturer-dmitri-zaitsev-hilary-term-2007 | [
"# February 21, 2007 Lecturer Dmitri Zaitsev Hilary Term 2007",
null,
"```February 21, 2007\nLecturer Dmitri Zaitsev\nHilary Term 2007\nCourse 2E1 2006-07 (SF Engineers & MSISS & MEMS)\nS h e e t 16\nDue: at the end of the tutorial\nExercise 2(ii)\nFind an equation for the plane spanned by the vectors: u = (1, −1, 1), v = (−1, 0, 1);\nSolution The span is the set of all linear combinations of u and v, i.e. the set of\nall expressions k1 u + k2 v, hence the set of all vectors (x, y, z) that can be written as\nk1 u+k2 v, hence the set of all (x, y, z) for which the following vector equation is solvable\nin (k1 , k2 ):\n\n\n\n \n1\n−1\nx\n\n\n\n\n\nk1 −1 + k2\n0\n= y .\n1\n1\nz\nThis vector equation is equivalent to the system\n(\nk1 − k2 = x\n−k1 = y\nk1 + k2 = z\nfrom which we can eliminate both k1 and then k2 :\n(\nk1 = −y\nk2 = k1 − x\n−y + k2 = z\n(\nk1 = −y\nk2 = −y − x\n−2y − x = z\nor\nThe system is solvable in (k1 , k2 ) precisely when −2y − x = z or\nx + 2y + z = 0,\nwhich is the desired equation.\nIt is a good idea to check the correctness of this equation by substituting into it\n(x, y, z) = u = (1, −1, 1) and (x, y, z) = v = (−1, 0, 1) (the given vectors):\n1 + 2 · (−1) + 1 = 0,\n(−1) + 2 · 0 + 1 = 0.\n```"
] | [
null,
"https://s2.studylib.net/store/data/010408833_1-be7b2e18dcd37bf63ba2b257ad0b631d-768x994.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8701454,"math_prob":0.9999212,"size":1095,"snap":"2021-43-2021-49","text_gpt3_token_len":421,"char_repetition_ratio":0.12190651,"word_repetition_ratio":0.05860806,"special_character_ratio":0.43561643,"punctuation_ratio":0.133829,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999713,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T19:20:46Z\",\"WARC-Record-ID\":\"<urn:uuid:5dccd0a8-fa1d-495f-aaee-fae3b7483a74>\",\"Content-Length\":\"48872\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cd9186e-0eed-4c3b-a97b-5ac53f787803>\",\"WARC-Concurrent-To\":\"<urn:uuid:409bead5-cc73-4941-aee0-893450af3d5a>\",\"WARC-IP-Address\":\"172.67.175.240\",\"WARC-Target-URI\":\"https://studylib.net/doc/10408833/february-21--2007-lecturer-dmitri-zaitsev-hilary-term-2007\",\"WARC-Payload-Digest\":\"sha1:X5NQ3BF7OWTJDHKZEGVO2P57K6DNY4VH\",\"WARC-Block-Digest\":\"sha1:VU3AZMWA65JMBC4XLEZJSLDS2CBDKTGX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363405.77_warc_CC-MAIN-20211207170825-20211207200825-00065.warc.gz\"}"} |
https://math.stackexchange.com/questions/65531/what-is-the-motivation-for-defining-both-homogeneous-and-inhomogeneous-cochains | [
"# What is the motivation for defining both homogeneous and inhomogeneous cochains?\n\nIn my few months of studying group cohomology, I've seen two \"standard\" complexes that are introduced:\n\n• We let $X_r$ be the free $\\mathbb{Z}[G]$-module on $G^r$ (so, it has as a $\\mathbb{Z}[G]$-basis the $r$-tuples $(g_1,\\ldots,g_r)$ of elements of $G$). The $G$-module structure of $X_r$ comes by virtue of being a $\\mathbb{Z}[G]$-module.\nThe boundary maps $\\partial_r:X_r\\to X_{r-1}$ are\n\n$$\\partial_{r}(g_1,\\ldots,g_r)=g_1(g_2,\\ldots,g_r)+\\sum_{j=1}^r(-1)^j(g_1,\\ldots,g_jg_{j+1},\\ldots,g_r)+(-1)^r(g_1,\\ldots,g_r)$$\n\n• We let $E_r$ be the free $\\mathbb{Z}$-module on $G^{r+1}$ (so, it has as a $\\mathbb{Z}$-basis the $(r+1)$-tuples $(g_0,\\ldots,g_r)$ of elements of $G$). The $G$-module structure of $E_r$ is defined by $g(g_0,\\ldots,g_r)=(gg_0,\\ldots,gg_r)$. The boundary maps $d_r:E_r\\to E_{r-1}$ are\n$$d_{r}(g_0,\\ldots,g_r)=\\sum_{j=0}^{r}(-1)^j(g_0,\\ldots,\\widehat{g_j},\\ldots,g_{r})$$\n\nWe may then proceed to compute the cohomology of $G$ with coefficients in a $G$-module $A$ using $$0\\to \\text{Hom}_G(X_0,A)\\to\\text{Hom}_G(X_1,A)\\to\\cdots$$ or by using $$0\\to \\text{Hom}_G(E_0,A)\\to\\text{Hom}_G(E_1,A)\\to\\cdots$$ Elements of $\\text{Hom}_G(X_r,A)$ are \"inhomogeneous cochains\" and elements of $\\text{Hom}_G(E_r,A)$ are \"homogeneous cochains\". In either case, all that matters is what happens to the basis elements, so really we can say that an \"inhomogeneous cochain\" is a function $f:G^r\\to A$, and that a \"homogeneous cochain\" is a function $f:G^{r+1}\\to A$ that satisfies $f(gg_0,\\ldots,gg_r)=g\\cdot f(g_0,\\ldots,g_r)$.\n\nLang defines them both in his Topics in cohomology of groups, and says\n\n... we have a $\\mathbb{Z}[G]$-isomorphism $X\\xrightarrow{\\approx}E$ between the non-homogeneous and the homogeneous complex uniquely determined by the value on basis elements such that $$(\\sigma_1,\\ldots,\\sigma_r)\\mapsto (e,\\sigma_1,\\sigma_1\\sigma_2,\\ldots,\\sigma_1\\sigma_2\\ldots \\sigma_r)$$\n\nbut Serre defines the only the homogenous cochains in Local Fields and then says that a cochain\n\n... is uniquely determined by its restriction to systems of the form $(1,g_1,g_1g_2,\\ldots,g_1\\cdots g_i)$. That leads us to interpret the elements of $\\text{Hom}_G(E_r,A)$ as \"inhomogeneous cochains\", i.e. as functions $f(g_1,\\ldots,g_i)$ of $i$ arguments, with values in $A$, whose coboundary is given by ...\n\nTo put it bluntly, my question is: Why are we doing this? I can think of some possible reasons:\n\n• Historical - perhaps one way was defined first, now the other is more popular, but the older definition is still included out of tradition.\n\n• Practical - perhaps there are important computations that are significantly easier to see or do using one or the other approach, or where it is useful to switch between them for some reason.\n\n• Big picture - perhaps there is a high-level interpretation of one or both approaches that ties in with some other field where (co)homology plays a role.\n\nSo, what's the real motivation for defining both \"homogeneous\" and \"inhomogeneous\" cochains?\n\n• I don't know what the real answer is, but I'll just point out that the inhomogeneous definition has the advantage that it's defined on a tuple of smaller size (r instead of r+1), while the homogeneous definition has the advantage that the boundary map is \"simpler\" (omitting a term rather than combining via a product). – Ted Sep 18 '11 at 17:22\n• The second complex makes it obvious that we are dealing with the homology of a simplicial set. Cycles and cocycles on the first one, on the other hand, are what we usually find in nature (derivations, factor sets in extensions, etc) – Mariano Suárez-Álvarez Sep 18 '11 at 18:29\n• @Mariano: I'm afraid I don't really understand your explanation of the first complex; Serre, for example, talks about factor sets using homogeneous cochains. Could expand a bit more on the places we see inhomogeneous cochains \"in nature\" (in an answer, if you want)? – Zev Chonoles Sep 18 '11 at 22:55\n• By the way, one should keep in mind the fact that the two complexes are in fact isomorphic! – Mariano Suárez-Álvarez Sep 23 '11 at 3:27\n• @Mariano: Indeed - that was one of the reasons I found the situation a bit perplexing. – Zev Chonoles Sep 23 '11 at 3:38\n\n## 2 Answers\n\nLet me give you three examples where nature picks your first complex:\n\nFirst: Let $G$ be a group, let $A$ be a $G$-module, and let $A\\rtimes G$ be the direct product (as a set, this is $A\\times G$, and it becomes a group with multiplication such that $(a,g)\\cdot(b,h)=(a+g\\cdot b,gh)$ for all $a$, $b\\in A$ and all $g$, $h\\in G$) Consider the projection map $p:A\\rtimes G\\to G$, which is a group homomorphism. A section of $p$ is a group homomorphism $s:G\\to A\\rtimes G$ such that $f\\circ s=\\mathrm{id}_G$. It is immediate to check that a section determines uniquely and is determined uniquely by a function $\\sigma:G\\to A$ such that $$g\\cdot\\sigma(h)+\\sigma(g)=\\sigma(gh)$$ for all $g$, $h\\in G$; indeed, the relation between $s$ and $\\sigma$ is that $s(g)=(\\sigma(g),g)$ for all $g\\in G$. The function $\\sigma$ is then a $1$-cocycle defined on your first complex.\n\nSecond: Consider an extension\n\n i f\n0 ---> A ---> E ---> G ---> 1\n\n\nof a group $G$ by an abelian group $A$ (whose operation I'll write $+$). Let $\\sigma:G\\to E$ be a set-theoretic section of $f$. For all $g$, $h\\in G$ we have $f(\\sigma(g)\\sigma(h))=f(\\sigma(g))f(\\sigma(h))=gh=f(\\sigma(gh))$, so that there exists a unique element $\\alpha(g,h)\\in A$ such that $$\\sigma(g)\\sigma(h)=\\iota(\\alpha(g,h))\\sigma(gh).$$\n\nThere is an action of $G$ on $A$ such that $$\\iota(g\\cdot a)=\\sigma(g)\\iota(a)\\sigma(g)^{-1}$$ for all $g\\in G$ and all $a\\in A$. It is eeasy to check that this is indeed an action of $G$ on $A$ by group automorphisms (here it is where we need that $A$ be abelian) In other words, $A$ is a $G$-module.\n\nNow, whenever $g$, $h$, $k$ are in $G$ we have $$(\\sigma(g)\\sigma(h))\\sigma(k)=\\iota(\\alpha(g,h))\\sigma(gh)\\sigma(k)=\\iota(\\alpha(g,h)+\\alpha(gh,k))\\sigma(ghk)$$ and $$\\sigma(g)(\\sigma(h)\\sigma(k))=\\sigma(g)\\iota(\\alpha(h,k))\\sigma(hk)=\\iota(g\\cdot\\alpha(h,k))\\sigma(g)\\sigma(hk)=\\iota(g\\cdot\\alpha(h,k)+\\alpha(g,hk))\\sigma(ghk).$$ Since multiplication in $G$ is associative, the left-hand sides in these last two equations are equal, so so are their right hand sides---and since $\\iota$ is injective, we see that $$g\\cdot\\alpha(h,k)+\\alpha(g,hk)=\\alpha(g,h)+\\alpha(gh,k)$$ or, equivalently, that $$g\\cdot\\alpha(h,k)-\\alpha(gh,k)+\\alpha(g,hk)-\\alpha(g,h)=0.$$ This means that $\\alpha$ determines a $2$-cocyle on your first the complex.\n\nThird: If $G$ is a group, the category of $G$-modules over a field $k$ is a monoidal category $\\mathscr M_G$ with respect to the tensor product of representations. If $\\alpha:G\\times G\\times G\\to k^\\times$ is a $3$-cocycle defined on your first complex and with values in the multiplicative group of $k$, then one can \"twist\" the associativity isomorphisms of $\\mathscr M_G$ using $\\alpha$ to obtain a different, slightly more fun monoidal category $\\mathscr M_G(\\alpha)$, and if you work this out in detail, you will see that again the cocycle condition with respect to your first complex is precisely the pentagon condition for a monoidal structure.\n\nThese are just three instances where nature picks inhomopgenous cochains.\n\nThe inhomogeneous cochain construction is a standard free resolution of $\\mathbb Z$ (the trivial $G$-module) as a $\\mathbb Z[G]$-module, and it is explicitly constructed to be such. Since taking $G$-invariants is the same as forming $Hom_G(\\mathbb Z, A)$, this is what is needed to compute derived functors of this operation (which is what group cohomology is, from a derived functor point of view).\n\nOn the other hand, homogenous cochains are what you get if you compute the cohomology of local systems (twisted coefficients) on the classifying space for $G$, which is how group cohomology first arose (explicitly --- there were implicit examples of group cohomology classes much earlier) in the literature. The \"homogeneity\" reflects the fact that we are computing with a certain $G$-equivariant simplicial complex.\n\nLoosely, and roughly, speaking, the inhomogeneous picture is more algebraic, and the homogeneous picture is more topological.\n\n• The homogeneous cocycles are very similar to the forming of a boundary of a topological CW-complex. It therefore gives intuition that the homogeneous approach is more useful in algebraic topology. In fact, the homogeneous cocyles formula is the same as Cech cohomology formula when considering cohomology of sheaves. – LinAlgMan Aug 5 '13 at 13:08"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87894183,"math_prob":0.99957436,"size":2972,"snap":"2021-21-2021-25","text_gpt3_token_len":986,"char_repetition_ratio":0.13443395,"word_repetition_ratio":0.026041666,"special_character_ratio":0.3129206,"punctuation_ratio":0.1446541,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997497,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T03:24:34Z\",\"WARC-Record-ID\":\"<urn:uuid:13dc86eb-0bd6-45bf-b177-ae8eecf6fbf7>\",\"Content-Length\":\"185685\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3d36a48-c237-4c2f-acd1-1440a0fdc62e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac49a988-7a85-4024-8f3e-35fc7560dd8e>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/65531/what-is-the-motivation-for-defining-both-homogeneous-and-inhomogeneous-cochains\",\"WARC-Payload-Digest\":\"sha1:SLW2OQF6N2EQBMCCDRZXR6OEEHJRJ3VI\",\"WARC-Block-Digest\":\"sha1:AFPRR773MHNCBXIHADYQ55BXQILO3SLQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488550571.96_warc_CC-MAIN-20210624015641-20210624045641-00205.warc.gz\"}"} |
https://azurecloudai.blog/2021/04/29/how-to-get-ueba-costs-for-azure-sentinel/ | [
"# How to Get UEBA Costs for Microsoft Sentinel\n\nThe cost for UEBA is nominal and based on the amount of data that is analyzed. Your costs will vary depending on several factors. However, the following KQL query can be used to get the estimated cost of the solution.\n\n``````union withsource=TableName1 *\n| where TimeGenerated > ago(30d) //In the last 30 days\n| summarize Entries = count(), Size = sumif(_BilledSize, _IsBillable == true), last_log = datetime_diff(\"second\", now(), max(TimeGenerated)), estimate = sumif(_BilledSize, _IsBillable == true) by TableName1, _IsBillable\n| project ['Table Name'] = TableName1, ['Table Size'] = Size, ['% of Total GiB'] = (Size / (1024 * 1024 * 1024)) / 10.04 * 100, ['IsBillable'] = _IsBillable, ['Last Record Received'] = last_log, ['Estimated Table Price'] = (estimate / (1024 * 1024 * 1024)) * 4.0 //Cost may be different. Then, alter the 4.0\n| where ['Table Name'] == \"BehaviorAnalytics\" or ['Table Name'] == \"IdentityInfo\" or ['Table Name'] == \"UserAccessAnalytics\" or ['Table Name'] == \"UserPeerAnalytics\" //The four data tables utilized by UEBA\n| serialize TotalCost=row_cumsum(['Estimated Table Price']) //This starts at the bottom result row and adds all table prices for a final TotalCost value at the top\n| order by ['Table Size'] desc``````\n\nThis is something I slapped together quickly and is a work in progress.\n\nThe latest version of this KQL query will always be located at: https://github.com/rod-trent/Azure-Sentinel-Cost-Troubleshooting-Kit/blob/main/KQL-Queries/UEBACosts.yaml\n\nThis query takes the billable results of the four UEBA tables (BehaviorAnalytics, IdentityInfo, UserAccessAnalytics, and UserPeerAnalytics) and then also adds a Total Cost column. The top value in the results will show the sum of costs.\n\n•",
null,
""
] | [
null,
"https://i0.wp.com/azurecloudai.blog/wp-content/uploads/2022/09/6mdm.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.77961576,"math_prob":0.79937303,"size":1930,"snap":"2022-40-2023-06","text_gpt3_token_len":498,"char_repetition_ratio":0.124610595,"word_repetition_ratio":0.013986014,"special_character_ratio":0.28704664,"punctuation_ratio":0.10759494,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98551077,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T05:48:54Z\",\"WARC-Record-ID\":\"<urn:uuid:fc65ee1f-b4ca-48c0-a819-3d3ce54a0cdc>\",\"Content-Length\":\"89950\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34704512-a201-4c23-b5f1-44050950af33>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca756cc2-b9b8-4c28-9ef9-700a5058cc71>\",\"WARC-IP-Address\":\"192.0.78.249\",\"WARC-Target-URI\":\"https://azurecloudai.blog/2021/04/29/how-to-get-ueba-costs-for-azure-sentinel/\",\"WARC-Payload-Digest\":\"sha1:HKEUSLHMOP6EIHN4WYNPYIW7LDDWIQTS\",\"WARC-Block-Digest\":\"sha1:EV3YYPJ5XPU2OME4HYT7ARDQZJGNMQY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764501407.6_warc_CC-MAIN-20230209045525-20230209075525-00766.warc.gz\"}"} |
https://picos-api.gitlab.io/picos/api/picos.expressions.cone_psd.html | [
"# picos.expressions.cone_psd¶\n\nImplements the positive semidefinite cone.\n\nClasses\n\nclass picos.expressions.cone_psd.PositiveSemidefiniteCone(dim=None)[source]\n\nThe positive semidefinite cone.\n\nUnlike other cones which are defined only on",
null,
", this cone accepts both symmetric and hermitian matrices as well as their special vectorization as members.\n\nExample\n\n>>> from picos import Constant, PositiveSemidefiniteCone\n>>> R = Constant(\"R\", range(16), (4, 4))\n>>> S = R + R.T\n>>> S.shape\n(4, 4)\n>>> S.svec.shape\n(10, 1)\n>>> S.svec << PositiveSemidefiniteCone() # Constrain the matrix via svec().\n<4×4 LMI Constraint: R + Rᵀ ≽ 0>\n>>> C = S << PositiveSemidefiniteCone(); C # Constrain the matrix directly.\n<4×4 LMI Constraint: R + Rᵀ ≽ 0>\n>>> C.conic_membership_form # The conic form still refers to svec().\n<10×1 Real Constant: svec(R + Rᵀ)>\n>>> C.conic_membership_form\n<10-dim. Positive Semidefinite Cone: {svec(A) : xᵀ·A·x ≥ 0 ∀ x}>\n\n__init__(dim=None)[source]\n\nConstruct a PositiveSemidefiniteCone.\n\nIf a fixed dimensionality is given, this must be the dimensiona of the special vectorization. For a",
null,
"matrix, this is",
null,
".\n\nproperty dual_cone\n\nImplement cone.Cone.dual_cone."
] | [
null,
"https://picos-api.gitlab.io/picos/_images/math/28f7a0ae3445b6714ccd5b31a1e270a080048fc5.svg",
null,
"https://picos-api.gitlab.io/picos/_images/math/6eb743c8bde71d3860a46dbd197ee7674c4961ea.svg",
null,
"https://picos-api.gitlab.io/picos/_images/math/28f5cc1e342341a448780aaf7f6ee383fc019e34.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6077443,"math_prob":0.8239579,"size":1204,"snap":"2022-27-2022-33","text_gpt3_token_len":375,"char_repetition_ratio":0.14916667,"word_repetition_ratio":0.061728396,"special_character_ratio":0.2857143,"punctuation_ratio":0.20657277,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9743908,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T19:58:03Z\",\"WARC-Record-ID\":\"<urn:uuid:5c6f3f02-7b92-4191-b419-0380b5b68996>\",\"Content-Length\":\"20380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d7d5280-65ce-4b20-a128-b914001d0f4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0501cc67-b254-4460-b2b5-f0605173dd13>\",\"WARC-IP-Address\":\"35.185.44.232\",\"WARC-Target-URI\":\"https://picos-api.gitlab.io/picos/api/picos.expressions.cone_psd.html\",\"WARC-Payload-Digest\":\"sha1:AGSQ3LGPGDHWIPGCRAA6N747K5WLTLOX\",\"WARC-Block-Digest\":\"sha1:L5ZNUZHWTNAVO2XNQ4UL3DMPZ3YNCOY4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103945490.54_warc_CC-MAIN-20220701185955-20220701215955-00082.warc.gz\"}"} |
https://www.scholars.northwestern.edu/en/publications/on-the-taylor-expansion-of-value-functions | [
"# On the taylor expansion of value functions\n\nResearch output: Contribution to journalArticlepeer-review\n\n7 Scopus citations\n\n## Abstract\n\nWe introduce a framework for approximate dynamic programming that we apply to discrete-time chains on ℤd+ with countable action sets. The framework is grounded in the approximation of the (controlled) chain's generator by that of another Markov process. In simple terms, our approach stipulates applying a second-order Taylor expansion to the value function, replacing the Bellman equation with one in continuous space and time in which the transition matrix is reduced to its first and second moments. In some cases, the resulting equation can be interpreted as a Hamilton-Jacobi-Bellman equation for a Brownian control problem. When tractable, the \"Taylored\"equation serves as a useful modeling tool. More generally, it is a starting point for approximation algorithms. We develop bounds on the optimality gap-the suboptimality introduced by using the control produced by the Taylored equation. These bounds can be viewed as a conceptual underpinning, analytical rather than relying on weak convergence arguments, for the good performance of controls derived from Brownian approximations. We prove that under suitable conditions and for suitably \"large\"initial states, (1) the optimality gap is smaller than a 1 - α fraction of the optimal value, with which α ∈ (0, 1) is the discount factor, and (2) the gap can be further expressed as the infinite-horizon discounted value with a \"lowerorder\"per-period reward. Computationally, our framework leads to an \"aggregation\"approach with performance guarantees. Although the guarantees are grounded in partial differential equation theory, the practical use of this approach requires no knowledge of that theory.\n\nOriginal language English (US) 631-654 24 Operations Research 68 2 https://doi.org/10.1287/opre.2019.1903 Published - Mar 2020\n\n## Keywords\n\n• Approximate dynamic programming\n• Approximate policy iteration\n• Hamilton-Jacobi-Bellman equation\n• Markov decision process\n• Taylor expansion\n\n## ASJC Scopus subject areas\n\n• Computer Science Applications\n• Management Science and Operations Research\n\n## Fingerprint\n\nDive into the research topics of 'On the taylor expansion of value functions'. Together they form a unique fingerprint."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8827255,"math_prob":0.6976453,"size":4244,"snap":"2023-40-2023-50","text_gpt3_token_len":998,"char_repetition_ratio":0.10283019,"word_repetition_ratio":0.7851373,"special_character_ratio":0.22266729,"punctuation_ratio":0.108991824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9601079,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T20:15:20Z\",\"WARC-Record-ID\":\"<urn:uuid:d5c54ec0-4c70-4c8c-bcb0-4dd586c0835c>\",\"Content-Length\":\"53830\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b2a6bcd-017b-4413-92a7-a433cfba2130>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7f39b72-7271-4896-98c3-523404dd4589>\",\"WARC-IP-Address\":\"18.210.30.88\",\"WARC-Target-URI\":\"https://www.scholars.northwestern.edu/en/publications/on-the-taylor-expansion-of-value-functions\",\"WARC-Payload-Digest\":\"sha1:WBPWEYRQTCWAKVP3BC6F34SNVQM3PGHY\",\"WARC-Block-Digest\":\"sha1:DNGETLSX4TEMMJ6CBZOEZRYWCZSRBHEC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100555.27_warc_CC-MAIN-20231205172745-20231205202745-00189.warc.gz\"}"} |
https://www.viajeabariloche.com/2019/06/solving-algebraic-fractions-worksheet.html | [
"# Solving Algebraic Fractions Worksheet\n\n## Wednesday, June 5, 2019\n\nMath worksheet maths word problems worksheets ks3 2016 rio olympics ision fractional equations tes algebra 5 1 solve gcse questions rtf problem solving linear. Simplify write them as a single.",
null,
"Algebraic Fractions Add Equations Ks4 Higher By Hassan2008\n\n### Gcse igcse maths mathematics algebraic fractions add subtract multiply divide simplify differentiated practice worksheets with space for answers solutions.",
null,
"Solving algebraic fractions worksheet. Section 3 solving equations. Higher ks4 ks3 a. Worksheets are multi step equations date period two step equations date period solving multi.\n\nSolving equations containing algebraic fractions worksheets showing all 8 printables. Ideal for gcse revision this is one of a collection of worksheets which contain exam type questions that gradually increase in difficulty. Worksheets are work 2 3 algebraic fractions equations containing fractions.\n\nFind algebraic fraction revision materials worksheets and practice papers to help you master this tricky algebra based topic. These algebra 1 equations worksheets will produce one step problems containing fractions. Solving algebraic equations worksheets showing all 8 printables.\n\nWorksheet 23 algebraic fractions section 1 factoring and algebraic fractions as pointed out in worksheet 21. Solving equations with algebraic fractions. Dividing algebraic fractions worksheet tes 736862 myscres solving equations worksheet pdf math worksheets gcseaths algebra multi step equations with variables on both.",
null,
"Algebraic Fractions Worksheet By Hel466 Teaching Resources Tes",
null,
"Adding And Subtracting Algebraic Fractions By Embob2000 Teaching",
null,
"Solving Algebraic Fractions Algebra Fractions Pinterest",
null,
"Algebraic Fractions Complete By Chriswallis2 Teaching Resources Tes",
null,
"Adding And Subtracting Algebraic Fraction By Swaller25 Teaching",
null,
"Algebraic Fractions Practice Questions Solutions By Transfinite",
null,
"Solving Equations Containing Algebraic Fractions Worksheet For 8th",
null,
"Algebraic Manipulation Amendments To Worksheet Pg 3 Example 2",
null,
"Sample Unit Mathematics Stage 5 Stem Advanced Pathway Algebraic",
null,
"Algebraic Fractions Thoughts And Crosses By Leond06 Teaching",
null,
"Addition And Subtraction Algebra Worksheets Solving Equations Of",
null,
"Pin By Math W On Math Worksheets Algebra Pinterest Worksheets",
null,
"Sample Unit Mathematics Stage 5 Stem Advanced Pathway Algebraic",
null,
"Algebraic Fractions Gcse Higher A A With Answers By Hassan2008",
null,
"Pre Algebra",
null,
"Solving Quadratic Equations From Algebraic Fractions Worksheet Edplace",
null,
"Solving Equations With Algebraic Fractions Worksheet With Solutions",
null,
"Solving Algebra Fractions With A Quadratic Gcse Maths Level 7",
null,
"Gcse Revision Algebraic Fractions Solving Equations By",
null,
"Algebra 1 Worksheets Equations Worksheets",
null,
"Equations Involving Algebraic Fractions Advanced Corbettmaths",
null,
"2 3 Solving Multi Step Equations With Fractions And Decimals Math",
null,
"Gcse Maths Revision Solving Linear Equations 2 Involving",
null,
"Simplifying Algebraic Fractions Homework By Mrsmorgan1 Teaching"
] | [
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/6192159/image",
null,
"https://d1uvxqwmcz8fl1.cloudfront.net/tes/resources/11798412/58684e56-5e16-4639-a456-f669fa796f46/image",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/11235925/image",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/11391073/image",
null,
"https://i.pinimg.com/originals/a9/6a/57/a96a57f09b25c648b160032ffa71b0ab.png",
null,
"https://d1e4pidl3fu268.cloudfront.net/80809b27-0000-4eeb-9451-701034c8d1e7/Coverpic.crop_611x458_0%2C1.preview.PNG",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/7545875/image",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/6330174/image",
null,
"https://content.lessonplanet.com/resources/thumbnails/184951/large/cgrmlwnvbnzlcnqymdezmdmzmc0xmtizos1kaxu4dzcuanbn.jpg",
null,
"https://images.slideplayer.com/25/8073145/slides/slide_12.jpg",
null,
"https://www.missbsresources.com/images/Algebra/Worksheets/SimplifyingAlgebraicFractions.PNG",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/6435841/image",
null,
"https://lh3.googleusercontent.com/proxy/Z56HZ1weXvZ9cyjvf0fe6l6-I5bIKmuplSWVKWS-tevdcmUtw7K9kQ9nc5dFkSVt4eu3WxdULqP_eiXzlFEF41QVH3NAG84_hTdXZsqhOQPAIBhxJ7-gLl_R90DjDdrFoU4IDNZbixj7V9IIeq-g-LEHvs8tQg0PktCTNyJIapWVIZl74suTI9YawtINmAie6SqfA-I48Wheeii1O8AZZn2efpdlQ2TkvDTkec3ZpQk1ZnNC2T1J7QuwLpgHweC3jYiq_foFrwpz9lxEpzr4=s0-d",
null,
"https://i.pinimg.com/originals/a6/b1/30/a6b130b1485cc3a3c54a536e78bd5827.png",
null,
"https://www.missbsresources.com/images/Algebra/Worksheets/AddandSubtractAlgebraicFractions.PNG",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/6177963/image",
null,
"https://www.dadsworksheets.com/worksheets/pre-algebra/mixed-pre-algebra-addition-subtraction-easy-v1.jpg",
null,
"https://edplaceimages.s3.amazonaws.com/worksheetPreviews/worksheet_1538545835.jpg",
null,
"https://l.imgt.es/resource-preview-imgs/83fdc4f3-bcb1-4912-8f19-ac8e289cb221%2FSolvingequationswithalgebraicfractionsimage.crop_230x173_0%2C10.preview.png",
null,
"https://i.ytimg.com/vi/N-3bGmQ7Z8s/maxresdefault.jpg",
null,
"https://d1uvxqwmcz8fl1.cloudfront.net/tes/resources/11798412/58684e56-5e16-4639-a456-f669fa796f46/image",
null,
"https://lh6.googleusercontent.com/proxy/fk-I1sLn5_DnOR9ZmXzd01hUxjrp9saRRuKVHco-iV6cu_Rf2X-0n1yo_MW3dLHwVJtJuB-p9cDdvNzzbYTqyJss5suDgogMPz9VQacWrHYRiygBTiw71FjquhkuuQ=s0-d",
null,
"https://videos.files.wordpress.com/z5tGXsgg/equations-involving-harder-algebraic-fractions_hd.original.jpg",
null,
"https://showme0-9071.kxcdn.com/files/39490/pictures/thumbs/1653488/last_thumb1410876175.jpg",
null,
"https://i.ytimg.com/vi/DiUzSTm330U/maxresdefault.jpg",
null,
"https://dryuc24b85zbr.cloudfront.net/tes/resources/6448193/image",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7823615,"math_prob":0.9797612,"size":2937,"snap":"2019-51-2020-05","text_gpt3_token_len":608,"char_repetition_ratio":0.28605524,"word_repetition_ratio":0.056410257,"special_character_ratio":0.15900579,"punctuation_ratio":0.03902439,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994018,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"im_url_duplicate_count":[null,null,null,8,null,6,null,3,null,null,null,2,null,2,null,null,null,3,null,2,null,5,null,4,null,1,null,null,null,4,null,null,null,7,null,null,null,5,null,2,null,8,null,7,null,2,null,9,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T21:53:52Z\",\"WARC-Record-ID\":\"<urn:uuid:80c7efde-931a-489f-a2b6-dde715ba09f7>\",\"Content-Length\":\"101958\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b2db694-6db9-4f41-bacc-cc01aeaf4131>\",\"WARC-Concurrent-To\":\"<urn:uuid:13fb1d00-ad80-4313-925f-2584f7b8e94c>\",\"WARC-IP-Address\":\"172.217.15.115\",\"WARC-Target-URI\":\"https://www.viajeabariloche.com/2019/06/solving-algebraic-fractions-worksheet.html\",\"WARC-Payload-Digest\":\"sha1:ASL2NVBGEXCZB5354E2LUUD36GEJIGGM\",\"WARC-Block-Digest\":\"sha1:5RP2FVLR2YUNSGIOWNKHSA3TQ4FLASUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547165.98_warc_CC-MAIN-20191212205036-20191212233036-00231.warc.gz\"}"} |
https://machinelearningmastery.com/building-a-softmax-classifier-for-images-in-pytorch/ | [
"# Building a Softmax Classifier for Images in PyTorch\n\nLast Updated on January 9, 2023\n\nSoftmax classifier is a type of classifier in supervised learning. It is an important building block in deep learning networks and the most popular choice among deep learning practitioners.\n\nSoftmax classifier is suitable for multiclass classification, which outputs the probability for each of the classes.\n\nThis tutorial will teach you how to build a softmax classifier for images data. You will learn how to prepare the dataset, and then learn how to implement softmax classifier using PyTorch. Particularly, you’ll learn:\n\n• How you can use a Softmax classifier for images in PyTorch.\n• How to build and train a multi-class image classifier in PyTorch.\n• How to plot the results after model training.\n\nLet’s get started.",
null,
"Building a Softmax Classifier for Images in PyTorch.\nPicture by Joshua J. Cotten. Some rights reserved.\n\n## Overview\n\nThis tutorial is in three parts; they are\n\n• Preparing the Dataset\n• Build the Model\n• Train the Model\n\n## Preparing the Dataset\n\nThe dataset you will use here is Fashion-MNIST. It is a pre-processed and well-organized dataset consisting of 70,000 images, with 60,000 images for training data and 10,000 images for testing data.\n\nEach example in the dataset is a $28\\times 28$ pixels grayscale image with a total pixel count of 784. The dataset has 10 classes, and each image is labelled as a fashion item, which is associated with an integer label from 0 through 9.\n\nThis dataset can be loaded from torchvision. To make the training faster, we limit the dataset to 4000 samples:\n\nAt the first time you fetch the fashion-MNIST dataset, you will see PyTorch downloading it from Internet and saving to a local directory named data:\n\nThe dataset train_data above is a list of tuples, which each tuple is an image (in the form of a Python Imaging Library object) and an integer label.\n\nLet’s plot the first 10 images in the dataset with matplotlib.\n\nYou should see an image like the following:",
null,
"PyTorch needs the dataset in PyTorch tensors. Hence you will convert this data by applying the transforms, using the ToTensor() method from PyTorch transforms. This transform can be done transparently in torchvision’s dataset API:\n\nBefore proceeding to the model, let’s also split our data into train and validation sets in such a way that the first 3500 images is the training set and the rest is for validation. Normally we want to shuffle the data before the split but we can skip this step to make our code concise.\n\n## Build the Model\n\nIn order to build a custom softmax module for image classification, we’ll use nn.Module from the PyTorch library. To keep things simple, we build a model of just one layer.\n\nNow, let’s instantiate our model object. It takes a one-dimensional vector as input and predicts for 10 different classes. Let’s also check how parameters are initialized.\n\nYou should see the model’s weight are randomly initialized but it should be in the shape like the following:\n\n## Train the Model\n\nYou will use stochastic gradient descent for model training along with cross-entropy loss. Let’s fix the learning rate at 0.01. To help training, let’s also load the data into a dataloader for both training and validation sets, and set the batch size at 16.\n\nNow, let’s put everything together and train our model for 200 epochs.\n\nYou should see the progress printed once every 10 epochs:\n\nAs you can see, the accuracy of the model increases after every epoch and its loss decreases. Here, the accuracy you achieved for the softmax images classifier is around 85 percent. If you use more data and increase the number of epochs, the accuracy may get a lot better. Now let’s see how the plots for loss and accuracy look like.\n\nFirst the loss plot:\n\nwhich should look like the following:",
null,
"Here is the model accuracy plot:\n\nwhich is like the one below:",
null,
"Putting everything together, the following is the complete code:\n\n## Summary\n\nIn this tutorial, you learned how to build a softmax classifier for images data. Particularly, you learned:\n\n• How you can use a softmax classifier for images in PyTorch.\n• How to build and train a multiclass image classifier in PyTorch.\n• How to plot the results after model training.\n\n### 2 Responses to Building a Softmax Classifier for Images in PyTorch\n\n1.",
null,
"Dhavan Rathore January 9, 2023 at 8:41 pm #\n\nNicely written and explained\n\n2.",
null,
"Dhavan Rathore January 9, 2023 at 8:42 pm #\n\nYes, well said"
] | [
null,
"https://machinelearningmastery.com/wp-content/uploads/2023/01/joshua-j-cotten-Ge1t87lvyRM-unsplash-scaled.jpg",
null,
"https://machinelearningmastery.com/wp-content/uploads/2023/01/torch-fmnist-01.png",
null,
"https://machinelearningmastery.com/wp-content/uploads/2023/01/torch-fmnist-02.png",
null,
"https://machinelearningmastery.com/wp-content/uploads/2023/01/torch-fmnist-03.png",
null,
"https://secure.gravatar.com/avatar/6da1b0ea557b9fc0954ccd3c406d9016",
null,
"https://secure.gravatar.com/avatar/6da1b0ea557b9fc0954ccd3c406d9016",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60868675,"math_prob":0.6985371,"size":11663,"snap":"2022-40-2023-06","text_gpt3_token_len":3526,"char_repetition_ratio":0.13405952,"word_repetition_ratio":0.2741021,"special_character_ratio":0.35694075,"punctuation_ratio":0.23699649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98270977,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T10:33:05Z\",\"WARC-Record-ID\":\"<urn:uuid:1f0b8926-db02-4d1e-898b-914893b701b9>\",\"Content-Length\":\"316212\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:531af309-d0b1-4bf8-96dc-39c4aa13c53d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d531f43-1e59-441d-b148-923b2e5591f9>\",\"WARC-IP-Address\":\"104.26.1.148\",\"WARC-Target-URI\":\"https://machinelearningmastery.com/building-a-softmax-classifier-for-images-in-pytorch/\",\"WARC-Payload-Digest\":\"sha1:3PEAGSFVUDATYAK5HILJ5A72OGYFU6I3\",\"WARC-Block-Digest\":\"sha1:RKXRN6WIEWLFJM7RKNK5HZZLWEHPZLGM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500044.66_warc_CC-MAIN-20230203091020-20230203121020-00545.warc.gz\"}"} |
https://bitscope.com/ed/blog/DK/?p=DK07A&c=C3BB1FEEE0EACBBFD4888010138E412E | [
"# 4 Bit Up/Down Counter\n\n BitScope Ed Education Blog 2013 / 11\n\n2013-11-07\n\n# Breadboard One | The 4 Bit Up/Down Counter",
null,
"Breadboard One | 4 Bit Up/Down Counter Block Diagram\n\nLast month we introduced the Breadboard One educational electronic projects lab. It is a simple mixed signal circuit which we're using to explain the key elements of typical mixed signal systems.\n\nBreadboard One comprises four primary circuits, the first of which is a 4 bit up/down counter. This is a purely digital component and we'll explain how it works and what its output looks like here.\n\nThe counter we use is the CMOS Logic CD4029. It is a member of the CD4000 family which has been in production for almost 40 years! There are newer logic families with the same functionality such as 74HC4029 but we'll stick with the original.\n\nAs you would expect, a counter counts. The block diagram shows the layout of the inputs and outputs of this component where Q4..Q1 are the four bits that comprise the binary encoded output that drive the BreadBoard's D/A convertor. The other two important signals are the CLOCK and the UP/DOWN inputs.\n\nThere are other signals which we're not using in this project as well as power (Vdd) and ground (Vss) but we'll list all the signals here as you can easily experiment with them by modifying the circuit slightly if you want.\n\n Signal Type Description CLOCK Mixed Counter clock input. UP/DOWN Input Increment or Decrement Count. Q4..Q1 Output 4 bit binary encoded outputs. J4..J1 Inputs 4 bit binary encoded \"jam\" inputs. BINARY/DECADE Input Binary or Decade counting mode. CARRY IN Input Counter \"carry in\" input. CARRY OUT Output Counter \"carry out\" output. PRESETENABLE INPUT Enable the jam inputs.\n\nCD4029 Signal Descriptions\n\nInternally the counter comprises a set of logic gates configured to implement the arithmetic addition operator (grab the data sheet for the full details). Normally the counter increments the 4 bit word (Q4,Q3,Q2,Q1) by one every time the clock input is toggled.\n\nIf the UP/DOWN input is asserted the counter counts down (subtracts one) upon each clock cycle instead.\n\nThese two modes of operation are what the Breadboard One project uses but we can run these two modes in isolation by modifying the circuit to simply disconnect the UP/DOWN input from the output of the SCHMITT trigger (as we will show below).\n\nThe CARRY OUT and CARRY IN signals are used when more than one counter are used \"in cascade\". Simply connecting the CARRY OUT of one counter to the CARRY IN of a second one, an 8 bit counter can be built where Q4..Q1 of the first are the low four bits and Q4..Q1 of the second are the high four bits. The CARRY signal is generated each time the counter reaches its limit and \"rolls over\" (to start the count again). There are other ways to connect multiple counters (e.g. ripple counting) but refer to the data sheet for full details. The BINARY/DECADE input defines the limit; 15 (binary, 0..15) or 9 (decimal, 0..9).\n\nThe remaining signals, PRESET ENABLE and J4..J1 allow the counter to be preset with a known value.\n\n## Timing Diagrams and Logic Analysis\n\nA logic diagram is the easiest way describe the operation of a digital circuit like this.",
null,
"Binary Counter Logic Diagram\n\nFor each clock cycle (at the top of the diagram) the four bits cycle in a binary encoded sequence in this case starting at 5, counting up to 15 before being \"jammed\" to 9 and then counting down to zero and wrapping. Our use of the counter in Breadboard One is simpler in that we're using the UP/DOWN signal but not the jam or carry and to keep it even simpler, here we've asserted the counter as always UP and we observe the result on BS10 as:",
null,
"Binary Counter, Mixed Signal Analysis, UP Count\n\nThe top half of the display shows the binary encoded counter output as an analog signal produced by the D/A convertor. We'll explain the operation of this component in a future post. For now it's enough to understand that it shows an analog representation of the 4 bit counter output on Q1..Q4. These signals are shown on BitScope's logic channels 0..3 (white, brown, red and orange) and you can see their combined value aligns with the analog signal level for each value.\n\nThe clock input (driven by BitScope's waveform generator) is logic channel 5 (yellow) and the UP/DOWN signal is on channel 6 (green). Note that it remains high so the counter increments from 0 to 15 before wrapping and starting again.\n\nHere's the same circuit with one modification, we've pulled the UP/DOWN signal on channel 6 (green) low:",
null,
"Binary Counter, Mixed Signal Analysis, DOWN Count\n\nIt's very easy with Breadboard One to re-arrange the circuit to try all sorts of variations on this theme and observe all the signals, digital and analog, using the BitScope but for now we have the full monty:",
null,
"Binary Counter, Mixed Signal Analysis, Breadboard One\n\nThis is the complete Breadboard One circuit counting UP and then DOWN as we outlined in our earlier post describing the operation of the complete mixed signal circuit.\n\nNote the UP/DOWN signal is now toggling as driven by the SCHMITT trigger output.\n\nAll of these circuit experiments and screenshots were made on our Raspberry Pi based Electronic Projects Lab using BreadBoard One.\n\nIn future posts we'll explore the operation of the D/A convertor, the analog filter and SCHMITT trigger components of this circuit and explain the details of how they work together.\n\nIn the meantime, here are some further experiments that you can try:\n\n• Pull the BINARY/DECIMAL pin to HIGH and LOW and watch the counting states.\n• Monitor the CARRY OUT pin and check it changes for UP, DOWN. Compare when it does this when BINARY and DECIMAL are selected.\n• Check that the counter changes state on the rising edge of the clock and not the falling edge.\n• Change the generated CLOCK signal (on BitScope) to a TRIANGLE wave and make measurements of the exact switching voltage of the CLOCK input. Make the CLOCK signal smaller and see when circuit stops functioning.\n\nThere are many other changes and tests that can be made to find out how circuits like Breadboard One work in practice compared to their theory of operation and we'll cover these issues in future posts too."
] | [
null,
"https://bitscope.com/ed/blog/DK/01.png",
null,
"https://bitscope.com/ed/blog/DK/02.png",
null,
"https://bitscope.com/ed/blog/DK/03.png",
null,
"https://bitscope.com/ed/blog/DK/04.png",
null,
"https://bitscope.com/ed/blog/DK/05.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9280628,"math_prob":0.8140571,"size":5049,"snap":"2022-27-2022-33","text_gpt3_token_len":1134,"char_repetition_ratio":0.13557978,"word_repetition_ratio":0.0,"special_character_ratio":0.21806298,"punctuation_ratio":0.07637475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95506227,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T09:15:50Z\",\"WARC-Record-ID\":\"<urn:uuid:b473f317-204e-4621-b21b-7940133d51ad>\",\"Content-Length\":\"35653\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d766623-8781-475d-8acf-5cb53be23a80>\",\"WARC-Concurrent-To\":\"<urn:uuid:292b4ae4-af62-4285-bf5d-ca9c62b479fa>\",\"WARC-IP-Address\":\"184.169.168.51\",\"WARC-Target-URI\":\"https://bitscope.com/ed/blog/DK/?p=DK07A&c=C3BB1FEEE0EACBBFD4888010138E412E\",\"WARC-Payload-Digest\":\"sha1:PKEGEPVIF65POJKWK4IYVAYZKQ75GM74\",\"WARC-Block-Digest\":\"sha1:TSORLA4DG2PR2TCUP5US2MRZDNNBFD2L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103329963.19_warc_CC-MAIN-20220627073417-20220627103417-00008.warc.gz\"}"} |
https://calculator.academy/henderson-hasselbalch-calculator/ | [
"Enter the conjugate base, acid [HA], and dissociation constant into the calculator to calculate the pH of a solution.\n\n## Henderson Hasselbalch Formula\n\nThe following formula is used to calculate the pH of a solution using the Henderson Hasselbalch equation.\n\npH = Kₐ + log([A⁻]/[HA])\n• Ka is the acid dissociation constant\n• A- is the molar concentration of the base in molars\n• HA is the molar concentration of the acid in molar\n\npH is a measure of the solutions acidity of a solution.\n\n## Henderson Hasselbalch Equation Definition\n\nWhat is the Henderson Hasselbalch equation? The Henderson Hasselbalch equation is a formula used to calculate the pH (acidity) of a buffer. The equation is denoted: pH = Kₐ + log([A⁻]/[HA]).\n\nWhat is a buffer? A buffer is a mixture or solution that contains a weak acid and a conjugate base. The combination of these two components, while at equilibrium,\n\n## How to calculate buffer pH?\n\nThe following is an example problem that uses the Henderson-Hasselbalch equation to determine the pH of a buffer.\n\n1. First, determine the acid dissociation constant. For this example, the acid dissociation constant is found to be 3.\n2. Next, determine the molar concentration of the base. The molar concentration is found to be or 3 moles per liter.\n3. Next, determine the molar concentration of the acid. For this problem, the molar concentration of the acid is found to be 4 moles per liter.\n4. Finally, calculate the buffer pH. Using the formula, the buffer pH is found to be 3 + log ( 3/4) =\n\nDoes the henderson-hasselbalch equation work for bases? The henderson-hasselbalch equation is used to find the pH values for buffer solutions, which are acid-base solutions, so the equation itself is used to bases.\n\nWhen does the henderson-hasselbalch equation fail? The equation does not work in situations where the buffer solutions are highly diluted through the water."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8666308,"math_prob":0.9952507,"size":1991,"snap":"2022-40-2023-06","text_gpt3_token_len":471,"char_repetition_ratio":0.17362858,"word_repetition_ratio":0.064615384,"special_character_ratio":0.20894024,"punctuation_ratio":0.09803922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99987924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T19:19:37Z\",\"WARC-Record-ID\":\"<urn:uuid:2df9efa1-ca9d-4fca-a954-39aa2da381f0>\",\"Content-Length\":\"175663\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b296e2f-8197-41fb-a38f-60dcdbf02a67>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ad1fdfe-b6bd-4e9a-8691-5b568bf6505d>\",\"WARC-IP-Address\":\"172.66.41.37\",\"WARC-Target-URI\":\"https://calculator.academy/henderson-hasselbalch-calculator/\",\"WARC-Payload-Digest\":\"sha1:IC4BEYR6HWRDPEO5JQXLFJIL67GL6IFB\",\"WARC-Block-Digest\":\"sha1:QZJ7HHN2DSR6KE7MLIXS5UHLIUJO4TMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499890.39_warc_CC-MAIN-20230131190543-20230131220543-00664.warc.gz\"}"} |
https://forum.arduino.cc/t/speed-sensor-velocity-km-h-using-ultrasonic-sensor-ping/136513 | [
"",
null,
"# Speed Sensor (Velocity Km/H) Using Ultrasonic Sensor (Ping)\n\nhello everybody…\ncan someone help me on this problem?\n\ni have arduino mega and ping ultrasonic sensor, but the coding is problem for me. hopefully someone can help me… =(\n\nWhat’s the problem, and why are you SHOUTING?\n\n1kmh-1 is 0.277ms-1\n\nThe Ping can take maybe 20 readings a second, out to a sensible range of what? Four metres?\n\nDo some simple sums.\n\ncan u help me for the coding part?i really blur to do the speed sensor coding...\n\nCan you stop and think for a moment, and tell us exactly what you're trying to do? Also think about the limitations I described in my other post.\n\nYour ping sensor is a simple time of flight sensor where the variable time represents the variable distance to a fixed object. To turn that into a speed sensor would require that you sense the change of distance of a return pulse of a moving object over a specific time period, so it's a second order value that would be based on the integration of the primary time of flight measurement that the sensor provides you. So the solution is math based.\n\nLefty\n\nAWOL: Can you stop and think for a moment, and tell us exactly what you're trying to do? Also think about the limitations I described in my other post.\n\nsorry.. i try to do the speed sensor between two car to find the crash impact..\n\nfor the example..i put ping ultrasonic sensor in front of my car..than, find the different speed between my car and the front car..\n\nfor the example..i put ping ultrasonic sensor in front of my car..than, find the different speed between my car and the front car..\n\n\"If you can see my Ping, you're driving too close\" :)\n\nOK, so show us your code so far.\n\nWell assuming you have the coding done to measure the distance in the first place, which is well documented if I recall from looking at Ping)) some time ago…\n\nIn it’s simplest form you just need to take two distance readings noting the time using millis() or micros() at which you took the readings. Then you calculate the speed as distance / time as:\n\n``````Speed = (FirstDistance - SecondDistance) / (SecondTime - FirstTime)\n``````\n\nIf FirstDistance < SecondDistance the distance is increasing and the car in front is moving away; that would give a -ve speed.\n\nSomething like that, anyway 8)\n\nAWOL:\n\nfor the example…i put ping ultrasonic sensor in front of my car…than, find the different speed between my car and the front car…\n\n“If you can see my Ping, you’re driving too close”",
null,
"OK, so show us your code so far.\n\nyes…u are too close if u see my ping sensor…hehehe…\ni dont know how to start the coding for speed sensor…before this i just ‘play’ with range…sens and show the range between your car, and front car…\n\n``````#include <LiquidCrystal.h> // lcd\nLiquidCrystal lcd(12, 11, 5, 4, 3, 2); //lcd\n\nconst int pingPin = 7; //sensor\n\nconst int redPin = 6; //led\nconst int greenPin = 5; //led\nconst int bluePin = 4; //led\n\nvoid setup() {\n// initialize serial communication:\n\npinMode (redPin, OUTPUT);\npinMode (greenPin, OUTPUT);\npinMode (bluePin, OUTPUT);\n\nSerial.begin(9600);\n\nlcd.begin(16, 2);\nlcd.print(\"Distance (cm):\");\n}\n\nvoid loop()\n{\n// establish variables for duration of the ping,\n// and the distance result in inches and centimeters:\nlcd.setCursor(0, 1);\n\nlong duration, inches,cm;\n\n// The PING))) is triggered by a HIGH pulse of 2 or more microseconds.\n// Give a short LOW pulse beforehand to ensure a clean HIGH pulse:\npinMode(pingPin, OUTPUT);\ndigitalWrite(pingPin, LOW);\ndelayMicroseconds(2);\ndigitalWrite(pingPin, HIGH);\ndelayMicroseconds(15);\ndigitalWrite(pingPin, LOW);\ndelayMicroseconds(20);\n// The same pin is used to read the signal from the PING))): a HIGH\n// pulse whose duration is the time (in microseconds) from the sending\n// of the ping to the reception of its echo off of an object.\npinMode(pingPin, INPUT);\nduration = pulseIn(pingPin, HIGH);\n\ninches = microsecondsToInches(duration);\ncm = microsecondsToCentimeters(duration);\n\nSerial.print(inches);\nSerial.print(\"in, \");\nSerial.print(cm);\nSerial.print(\"cm\");\nSerial.println();\n\n// convert the time into a distance\n\nif (inches > 30) {\ndigitalWrite(greenPin, HIGH); // green LED\ndigitalWrite(redPin, LOW);\ndigitalWrite(bluePin, LOW);\n}\nelse if (inches <= 30 && inches > 12) {\ndigitalWrite(greenPin, HIGH);\ndigitalWrite(redPin, HIGH); // orange LED\ndigitalWrite(bluePin, LOW);\n}\nelse if (inches <= 12 && inches > 6) {\ndigitalWrite(redPin, LOW); // red LED\ndigitalWrite(greenPin, HIGH);\ndigitalWrite(bluePin, HIGH);\n}\nelse {\ndigitalWrite(redPin, HIGH); // purple LED\ndigitalWrite(greenPin, HIGH);\ndigitalWrite(bluePin, HIGH);\n}\n\nlcd.print(cm);\nlcd.print(\"cm\");\nlcd.print(\"\");\ndelay(100);\n}\n\nlong microsecondsToInches(long microseconds)\n{\n// According to Parallax's datasheet for the PING))), there are\n// 73.746 microseconds per inch (i.e. sound travels at 1130 feet per\n// second). This gives the distance travelled by the ping, outbound\n// and return, so we divide by 2 to get the distance of the obstacle.\n// See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf\nreturn microseconds / 74 / 2;\n}\n\nlong microsecondsToCentimeters(long microseconds)\n{\n// The speed of sound is 340 m/s or 29 microseconds per centimeter.\n// The ping travels out and back, so to find the distance of the\n// object we take half of the distance travelled.\nreturn microseconds / 29 / 2;\n}\n``````\n\nModerator edit: CODE TAGS\n\nJimboZA:\nWell assuming you have the coding done to measure the distance in the first place, which is well documented if I recall from looking at Ping)) some time ago…\n\nIn it’s simplest form you just need to take two distance readings noting the time using millis() or micros() at which you took the readings. Then you calculate the speed as distance / time as:\n\n``````Speed = (FirstDistance - SecondDistance) / (SecondTime - FirstTime)\n``````\n\nIf FirstDistance < SecondDistance the distance is increasing and the car in front is moving away; that would give a -ve speed.\n\nSomething like that, anyway 8)\n\nthank you for your idea… if you can see from my code, where i can get the value of secondtime and first time?\n\nwhere i can get the value of secondtime and first time?\n\nHere\n\nEvery time through loop() you are measuring the ping duration and then calculating a distance...\n\nWhere you read the duration, you could add a line like `time = millis` so that would record when in the day (or in fact since the sketch started running) that duration was captured.\n\nYou'll need to do something to move each current time reading into an oldtime variable (at the top of loop() ?) so that you can subtract one from the other to give the time between readings. Similarly, you'll need to move the current distance to an olddistance variable so you can subtract them.. as it stands now, values like cm and so on are overwritten each time through loop() so you need to have a means of storing the previous one before the new one comes along.\n\nokay friends..i will try first...\n\nmatt121187: okay friends..i will try first...\n\nAwesome 8) .... and my advice is that before you dive into code, draw a flowchart of what it is you want to do. Get the logic clear on paper and in your head before you get into the nitty-gritties of coding.\n\nJimboZA:\n\nmatt121187: okay friends..i will try first...\n\nAwesome 8) .... and my advice is that before you dive into code, draw a flowchart of what it is you want to do. Get the logic clear on paper and in your head before you get into the nitty-gritties of coding.\n\nthanks for your advice..i just draw the flowchart..but i bit confusing to get the value of FirstDistance ,SecondDistance, SecondTime, FirstTime =(\n\ni'm really blur...cant solve this problem.... =(\n\nIn that case I suggest you pick a simpler problem.\n\ncant solve this problem.\n\nCan't or won't? What have you tried?\n\nhow to hold the previous distance?\n\nthis is my coding right now..but the previous state still show the current distance\n\n``````const int pingPin = 7;\nint start = 0;\n\nvoid setup() {\n// initialize serial communication:\nSerial.begin(9600);\n}\n\nvoid loop()\n{\n// establish variables for duration of the ping,\n// and the distance result in inches and centimeters:\nlong duration, inches, cm, currentState, previousState;\n\n// The PING))) is triggered by a HIGH pulse of 2 or more microseconds.\n// Give a short LOW pulse beforehand to ensure a clean HIGH pulse:\npinMode(pingPin, OUTPUT);\ndigitalWrite(pingPin, LOW);\ndelayMicroseconds(2);\ndigitalWrite(pingPin, HIGH);\ndelayMicroseconds(5);\ndigitalWrite(pingPin, LOW);\n\n// The same pin is used to read the signal from the PING))): a HIGH\n// pulse whose duration is the time (in microseconds) from the sending\n// of the ping to the reception of its echo off of an object.\npinMode(pingPin, INPUT);\nduration = pulseIn(pingPin, HIGH);\n\n// convert the time into a distance\n\ncm = microsecondsToCentimeters(duration);\n\ncurrentState = cm;\n\npreviousState = currentState + start ;\npreviousState = currentState;\n\nSerial.print(previousState);\nSerial.print(\"cm, \");\nSerial.print(currentState);\nSerial.print(\"cm\");\nSerial.println();\n\ndelay(100);\n}\n\nlong microsecondsToCentimeters(long microseconds)\n{\n// The speed of sound is 340 m/s or 29 microseconds per centimeter.\n// The ping travels out and back, so to find the distance of the\n// object we take half of the distance travelled.\nreturn microseconds / 29 / 2;\n}\n``````\n\nAny variable declared within a function like \"loop\" is going to have limited use for remembering its value. If you want a variable to maintain its value, declare it with global scope."
] | [
null,
"https://aws1.discourse-cdn.com/arduino/original/3X/1/f/1f6eb1c9b79d9518d1688c15fe9a4b7cdd5636ae.svg",
null,
"https://emoji.discourse-cdn.com/twitter/slight_smile.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8559558,"math_prob":0.9208056,"size":9482,"snap":"2021-31-2021-39","text_gpt3_token_len":2301,"char_repetition_ratio":0.13557713,"word_repetition_ratio":0.4726688,"special_character_ratio":0.25226745,"punctuation_ratio":0.16373333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9539035,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T11:55:14Z\",\"WARC-Record-ID\":\"<urn:uuid:e82fd379-b2fa-4a84-b5ce-f39b41080e89>\",\"Content-Length\":\"62833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d7e6652-c6c2-4332-a231-19af50680b39>\",\"WARC-Concurrent-To\":\"<urn:uuid:f515967c-d25b-4fdd-94a9-8df0e7f662c0>\",\"WARC-IP-Address\":\"184.104.202.141\",\"WARC-Target-URI\":\"https://forum.arduino.cc/t/speed-sensor-velocity-km-h-using-ultrasonic-sensor-ping/136513\",\"WARC-Payload-Digest\":\"sha1:KU63XHH6SD2CPKAI6CCPUBASE2YKXY4Y\",\"WARC-Block-Digest\":\"sha1:KHPOJNKCSQQK2ODSU7LFWBOKZWA7LSQY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00683.warc.gz\"}"} |
https://www.sqlservercentral.com/forums/topic/loss-of-precision-when-multiplying | [
"Loss of precision when multiplying\n\n• I am currently working with a database that has all DECIMALtypes set to a precision and scale of 38 and 18. Though I am fully aware that there might be a loss of precision when multiplying two numbers, SQL Server,when multiplying, truncates the numbers before the operation, resulting a bigger loss of precision.\n\nFor example,\n\nSELECT 1000000000000.123456 * 10000.123456\n\nreturns «10000123456001234.575241383936», which is exact while\n\nSELECT CONVERT(DECIMAL(38, 18), 1000000000000.123456) *CONVERT(DECIMAL(38, 18), 10000.123456)SELECT CONVERT(DECIMAL(38, 18), 1000000000000.123456) *CONVERT(DECIMAL(38, 18), 10000.123456)\n\nreturns the truncated value «10000123456001234.575241»\n\nWhen working in CLR, the SqlDecimal class has methods(ConvertToPrecScale and AjustScale) that let you change the precision or scale of a decimal value. The problem is that I just basically want to remove leadingand trailing zeros while the two aforementioned methods want to know an exact position.\n\nI came up with the following function:\n\ninternal static SqlDecimal PackPrecScale(SqlDecimal d)\n{\nvar s = d.ToString();\nvar l = s.Length;\nvar indexofperiod = s.IndexOf('.');\n\n//Remove trailing zeros\nif (indexofperiod != -1)\n{\nwhile (s[l - 1] == '0') l--;\nif (s[l - 1] == '.')\n{\nl--;\nindexofperiod = -1;\n}\n}\n\nvar precision = 6;\nvar scale = 0;\nif (l > 0)\n{\nprecision = l;\nif (s == '-') precision--;\nif (indexofperiod != -1)\n{\nprecision--;\nscale = l - indexofperiod - 1;\n}\nif (precision < 6) precision = 6;\n}\n\nreturn SqlDecimal.ConvertToPrecScale(d, precision, scale);\n}\n\nThat returns a decimal having the smallest possible precision and scale. Having to multiply a and b, I can then write:\n\nSqlDecimal.ConvertToPrecScale(SqlDecimal.Multiply(PackPrecScale(a),PackPrecScale(b)), 38, 18)\n\nTo keep maximum precision while multiplying.\nIs there a simpler or safer way to «pack» a SqlDecimal?\n\n• This gives the same accuracy as your first 'exact' example:\n\nSELECT CAST(1000000000000.123456 AS DECIMAL(29,12)) * CAST(10000.123456 AS DECIMAL(29,12))\n\nThis code helped me determine the CASTs to use:\n\nDECLARE @numval SQL_VARIANT = 10000123456001234.575241383936;\n\nSELECT SQL_VARIANT_PROPERTY(@numval, 'BaseType');\n\nSELECT SQL_VARIANT_PROPERTY(@numval, 'Precision');\n\nSELECT SQL_VARIANT_PROPERTY(@numval, 'Scale');\n\nIf you aren't happy single, you won't be happy in a relationship.\n\nRemember, happiness comes from guitars, not relationships.\n\n• I was expecting this to be a case of the OP was using float when I first opened the topic; was nice to see that wasn't the case.\n\nTo cover why this happens, however, this is actually explained in the documenation: Precision, scale, and Length (Transact-SQL). Specifically it notes:\n\nIn multiplication and division operations we need precision - scale places to store the integral part of the result. The scale might be reduced using the following rules:\n\n1. The resulting scale is reduced to min(scale, 38 – (precision-scale)) if the integral part is less than 32, because it cannot be greater than 38 – (precision-scale). Result might be rounded in this case.\n2. The scale will not be changed if it is less than 6 and if the integral part is greater than 32. In this case, overflow error might be raised if it cannot fit into decimal(38, scale)\n3. The scale will be set to 6 if it is greater than 6 and if the integral part is greater than 32. In this case, both integral part and scale would be reduced and resulting type is decimal(38,6). Result might be rounded to 6 decimal places or overflow error will be thrown if integral part cannot fit into 32 digits.\n\nI've bolded the relevant part.\n\nAs your scale is greater than 32 (38) and your precision is higher than 6 (12), when you use a multiplication the precision is automatically set to 6, and why you only get .~575241 (to 6 decimal places). A reason why you should always make sure you use relevant sizes for your datatypes.\n\nIn this case, declaring a decimal as a decimal(38,12) when you're only using a scale of 19, and a precision of 6 means you're not using a relevant size. In fact 38 is double to scale of 19, as is 12 of 6, so in a way, the size is quadruple what it should be (yes, i realise that technically it's only double in size, but double the size in both scale and precision kind of makes it \"quadrupally\" bad 😉 ).\n\nThom~\n\nExcuse my typos and sometimes awful grammar. My fingers work faster than my brain does.\nLarnu.uk\n\n• Thom A - Friday, June 29, 2018 9:59 AM\n\nI was expecting this to be a case of the OP was using float when I first opened the topic; was nice to see that wasn't the case.\n\nTo cover why this happens, however, this is actually explained in the documenation: Precision, scale, and Length (Transact-SQL). Specifically it notes:\n\nIn multiplication and division operations we need precision - scale places to store the integral part of the result. The scale might be reduced using the following rules:\n\n1. The resulting scale is reduced to min(scale, 38 – (precision-scale)) if the integral part is less than 32, because it cannot be greater than 38 – (precision-scale). Result might be rounded in this case.\n2. The scale will not be changed if it is less than 6 and if the integral part is greater than 32. In this case, overflow error might be raised if it cannot fit into decimal(38, scale)\n3. The scale will be set to 6 if it is greater than 6 and if the integral part is greater than 32. In this case, both integral part and scale would be reduced and resulting type is decimal(38,6). Result might be rounded to 6 decimal places or overflow error will be thrown if integral part cannot fit into 32 digits.\n\nI've bolded the relevant part.\n\nAs your scale is greater than 32 (38) and your precision is higher than 6 (12), when you use a multiplication the precision is automatically set to 6, and why you only get .~575241 (to 6 decimal places). A reason why you should always make sure you use relevant sizes for your datatypes.\n\nIn this case, declaring a decimal as a decimal(38,12) when you're only using a scale of 19, and a precision of 6 means you're not using a relevant size. In fact 38 is double to scale of 19, as is 12 of 6, so in a way, the size is quadruple what it should be (yes, i realise that technically it's only double in size, but double the size in both scale and precision kind of makes it \"quadrupally\" bad 😉 ).\n\nYes, this is absolutely correct. It is best to use the smallest Precision and Scale for each operand. So, even going with the size of the resulting expression (as Phil did above) is dangerous and could lead to unnecessary truncation.\n\nIdeally you would be able to do something like this:\n\nSELECT CONVERT(DECIMAL(19, 6), 1000000000000.123456) * CONVERT(DECIMAL(11, 6), 10000.123456)\n-- 10000123456001234.575241383936\n\nBut when storing decimal values, yeah, it kinda makes sense that people use the largest possible container if it is uncertain what the max size of the values to store is up front.\n\nI don't know of a built-in way to shrink the Precision and Scale down to the minimum required sizes, but I can see some improvements to make in the O.P.s code. It shouldn't be necessary to do either loop, or to test for a leading minus / negative sign. Try this:\n\n// convert to string and remove potential negative sign\nstring _InputString = SqlDecimal.Abs(_Input).ToString();\n\n// remove trailing 0's (only on the right side)\n_InputString = _InputString.TrimEnd(new char[] { '0' });\n\nint _Precision = _InputString.Length - 1;\n\n// if decimal is usually closer to the right side, possibly save time by starting there,\n// else use IndexOf\nint _Scale = (_Precision - _InputString.LastIndexOf('.'));\n\nreturn SqlDecimal.ConvertToPrecScale(_Input, _Precision, _Scale);\n\nTake care,\nSolomon...\n\nSQL#https://SQLsharp.com/ ( SQLCLR library ofover 340 Functions and Procedures)\nSql Quantum Lifthttps://SqlQuantumLift.com/ ( company )\nSql Quantum Leaphttps://SqlQuantumLeap.com/ ( blog )\nInfo sitesCollations • Module Signing • SQLCLR\n\nViewing 4 posts - 1 through 3 (of 3 total)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8273811,"math_prob":0.9770617,"size":8136,"snap":"2022-05-2022-21","text_gpt3_token_len":2105,"char_repetition_ratio":0.12788981,"word_repetition_ratio":0.4761194,"special_character_ratio":0.29277286,"punctuation_ratio":0.14053716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880442,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T13:45:18Z\",\"WARC-Record-ID\":\"<urn:uuid:6b7aa802-43f9-4d25-9063-ffa996480a5e>\",\"Content-Length\":\"104722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87ebeb0d-840a-4c41-95c6-fee6e3bbb6a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd525c16-6404-4f0e-aa91-4c173202706b>\",\"WARC-IP-Address\":\"34.242.253.12\",\"WARC-Target-URI\":\"https://www.sqlservercentral.com/forums/topic/loss-of-precision-when-multiplying\",\"WARC-Payload-Digest\":\"sha1:BNNCENATFHTZMQUYDJP7W3EAVF5ULM24\",\"WARC-Block-Digest\":\"sha1:KTB4SDOUHI4R6PZJVK4COOOJDDSUMTET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306181.43_warc_CC-MAIN-20220129122405-20220129152405-00146.warc.gz\"}"} |
https://yohua.ml/2019/03/07/%E4%BA%92%E8%BF%9E%E7%BD%91%E7%BB%9C%E7%9A%84%E5%9F%BA%E6%9C%AC%E6%A6%82%E5%BF%B5/ | [
"此处概念限定为“互连”,与“互联”区分。\n\n# 互连网络的功能和特征\n\n互联网络(interconnection network)是一种由开关元件按照一定拓扑结构和控制方式构成的网络,用来实现计算机系统中结点之间的相互连接。这些节点可以是处理器、存储模块后其他设备。从拓扑结构角度,互联网络是从输入结点到输出结点之间的一组互连或映射(mapping)。",
null,
"互连网络已经成为并行计算机系统的一个关键组成部分。随着各个领域对高性能计算机的要求越来越高,多处理机和多计算机系统的规模越来越大,对处理器之间或处理单元与存储模块之间通信的速度和灵活性的要求也越来越高。因此,它对计算机系统的性能价格比有着决定性的影响。\n可以从以下4个不同的方面来描述互连网络。\n(1)定时方式:有同步和异步两种。\n同步系统使用一个统一的时钟。它可以将数据同时播送到所有处理器结点中, 也可以使所有结点同时与相邻的结点进行通信(若无冲突)。而异步系统没有统一的时钟,系统中的各个处理机都是独立地工作。\n(2)交换方式:有线路交换和分组交换两种。\n(3)控制策略:有集中式和分散式两种。\n(4)拓扑结构:有静态和动态两种。\n\n# 互连函数\n\n为反应不同互连函数的连接特性,每种互连网络可用一组互连函数(interconnection function)来描述。用变量x表示输入(设x=0,1,···,N-1),用函数f(x)表示输出,通过数学表达式建立输入端与输出端的一一对应关系。即在互连函数f的作用下,输入端x连接到输出端f(x)。x和f(x)可以用二进制表示,也可以用十进制表示。\n互连函数反映了网络输入数组和输出数组之间对应的置换关系或排列关系,所以互连函数有事也称为置换(permutation)函数或排列函数。\n互连函数f(x)有时可以采用循环表示,即($x_0x_1x_2···x_{j-1}$)。它表示:\n\n$$f(x_0)=x_1, f(x_1)=x_2, ···, f(x_{j-1})=x_0$$\n\n其中,j称为该循环的长度。\n下面介绍几种常用的基本互连函数及其主要特征。\n\n## 交换函数\n\n交换函数用于实现二进制地址编码中第k位互反的输入端与输出端的连接。其表达式为:\n\n$$E_k(x_{n-1}x_{n-2}···x_{k+1}x_{k}x_{k-1}···x_1x_0)=x_{n-1}x_{n-2}···x_{k+1} \\overline{x_k} x_{k-1}···x_1x_0$$\n\n交换函数主要用来构建立方体网络和各种超立方体互连网络。它共有$n=log_2N$ 种互连函数。N为结点个数。",
null,
"",
null,
"## 均匀洗牌函数\n\n均匀洗牌函数(shuffle)定义为:将输入端分成数目相等的两半,前一半和后一半按类似均匀混洗扑克牌的方式交叉的连接到输出端(输出端相当于混洗的结果),其函数可表示为:\n\n$$S(x_{n-1}x_{n-2}···x_{2}x_{1})=x_{n-2}···x_2x_1x_{n-1}$$\n\n即把输入端的二进制编号循环左移一位。\n逆均匀洗牌函数于此类似,是将输入端的二进制编号循环右移一位而得到所连接的输出端编号。\n逆均匀洗牌函数是均匀洗牌函数的逆函数。两者的输入端与输出端正好互换位置(互为镜像)。\n用它们代表的链路和交换开关多级组合起来可构成Omega网络与逆Omega网络。",
null,
"## 蝶式函数\n\n蝶式互连函数(butterfly)定义为:\n\n$$B(x_{n-1}x_{n-2}···x_1x_0)=x_0x_{n-2}···x_1x_{n-1}$$\n\n即把输入端的二进制编号的最高位与最低位互换位置,便得到了输出端的编号。\n与均匀混洗函数类似,只用蝶式函数不能实现任意结点之间的连接,但是蝶式变换与交换变换的多级组合可作为构成方体多级网络的基础。",
null,
"## 反位序函数\n\n反位序函数是将输入端二进制编号的位序颠倒过来求得相应输出端的编号。其互连函数为:\n\n$$R(x_{n-1}x_{n-2}···x_1x_0)=x_0x_1x_2···x_{n-2}x_{n-1}$$\n\n对于N=8的情况,正好B(x)函数等于R(x)函数,其变换图形如下。",
null,
"## PM2I函数\n\nPM2I函数是一种移数函数,它是将各输入端都循环移动一定的位置后连到输出端。其函数为:\n\n$$PM2_{+i}(x)=(x+2^i) mod N$$\n\n$$PM2_{-i}(x)=(x-2^i) mod N$$\n\n其中,0<=x<=N-1, 0<=i<=n-1, n=log_2N, N为结点数。显然,PM2I互连网络共有2n个互连函数。\n当N=8时,有6个PM2I函数:\n\n$$PM2_{+0}: (0 1 2 3 4 5 6 7 )$$\n\n$$PM2_{-0}: (7 6 5 4 3 2 1 0 )$$\n\n$$PM2_{+1}: (0 2 4 6 )(1 3 5 7 )$$\n\n$$PM2_{-1}: (6 4 2 0 )(7 5 3 1 )$$\n\n$$PM2_{\\pm2}: (0 4)(1 5)(2 6)(3 7)$$\n\n下图画出了其中3个函数的变形图形。",
null,
"PM2I函数是构成数据变换网络的基础。\n阵列计算机ILLIAC IV采用$PM2_{\\pm0}$$PM2_{\\pm \\frac{n}{2}}$构成互连网络,实现各处理单元之间的上下左右互连,如下图。",
null,
"# 互连函数的性能参数\n\n网络通常是用有向边或无向边连接有限个结点的图来表示。互连网络的主要特征参数有:\n(1)网络规模(network size):网络中结点的个数。它表示该网络所能连接的部件的数量、\n(2)节点度(node degree):与结点相连接的边数(通道数),包括入度(in degree)和出度(out degree)。进入结点的边数称为入度,从结点出来的边数称为出度。\n(3)距离:对于网络中的任意两个结点,从一个结点出发到另一个节点终止所需要跨越的边数的最小值。\n(4)网络直径(network diameter):网络中任意两个结点之间距离的最大值。网络直径应尽可能地小。\n(5)结点之间的线长:两个结点之间连线的长度,用米、千米等表示。\n(6)等分宽度:当某一网路被分割成相等的两半时,沿切口的边数(通道数)的最小值称为通道等分宽度(channel bisection width),用b表示。而线等分宽度为B=b*w。其中w为通道宽度(用位表示)。该参数主要反映了网络最大流量。\n(7)对称性:从任何结点看到的拓扑结构都是相同的网络称为对称网络(symmetric network)。对称网络比较容易实现,编程也比较容易。"
] | [
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/001.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/002.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/003.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/004.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/005.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/006.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/007.png",
null,
"https://yohua.ml/2019/03/07/互连网络的基本概念/008.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9692758,"math_prob":0.99877286,"size":2073,"snap":"2022-40-2023-06","text_gpt3_token_len":1844,"char_repetition_ratio":0.06283229,"word_repetition_ratio":0.0,"special_character_ratio":0.16208394,"punctuation_ratio":0.01904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9665471,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T19:27:11Z\",\"WARC-Record-ID\":\"<urn:uuid:805c8405-c908-4df4-8b36-32143062e1d9>\",\"Content-Length\":\"57638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1ad2210-90e1-421b-ab21-c910b28422b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:66f56400-a419-48bc-b71c-b093c1be0178>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://yohua.ml/2019/03/07/%E4%BA%92%E8%BF%9E%E7%BD%91%E7%BB%9C%E7%9A%84%E5%9F%BA%E6%9C%AC%E6%A6%82%E5%BF%B5/\",\"WARC-Payload-Digest\":\"sha1:4H675MY47RJPABIXT4YPTJDV3XMJG4V7\",\"WARC-Block-Digest\":\"sha1:6F2TK3KVUSPQM7BJID7OOVPAXQBZMS47\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337339.70_warc_CC-MAIN-20221002181356-20221002211356-00431.warc.gz\"}"} |
https://docs.rs/crate/seq_geom_xform/0.3.0/source/Cargo.toml.orig | [
"# seq_geom_xform 0.3.0\n\nTransform/normalize complex single-cell fragment geometries into simple geometries.\nDocumentation\n``````[package]\nname = \"seq_geom_xform\"\nversion = \"0.3.0\"\nedition = \"2021\"\nauthors = [\n\"Rob Patro <[email protected]>\"\n]\ndescription = \"Transform/normalize complex single-cell fragment geometries into simple geometries.\"\nrepository = \"https://github.com/COMBINE-lab/seq_geom_xform\"\nhomepage = \"https://github.com/COMBINE-lab/seq_geom_xform\"\ninclude = [\n\"/src/*.rs\",\n\"/src/lib/*.rs\",\n\"/src/bin/*.rs\",\n\"/Cargo.toml\",\n\"/Cargo.lock\",\n\"/CONTRIBUTING.md\",\n\"/CODE_OF_CONDUCT.md\",\n]\nkeywords = [\n\"single-cell\",\n\"preprocessing\",\n\"RNA-seq\",\n]\ncategories = [\"command-line-utilities\", \"science\"]\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n[lib]\nname = \"seq_geom_xform\"\npath = \"src/lib.rs\"\n\n[[bin]]\nname = \"seq_xformer\"\npath = \"src/bin/bin.rs\"\n\n[dependencies]\nseq_geom_parser = { git = \"https://github.com/COMBINE-lab/seq_geom_parser\", branch = \"dev\", version = \"0.2.1\" }\nregex = \"1.7\"\nanyhow = \"1.0\"\nneedletail = \"0.5.0\"\nclap = { version = \"4.1.11\", features = [\"derive\"] }\nthousands = \"0.2.0\"\ntracing = \"0.1.37\"\ntracing-subscriber = { version = \"0.3.16\", default-features = true, features = [\"env-filter\"] }\ntempfile = \"3.4.0\"\nnix = { version = \"0.26.2\", features = [\"fs\"] }\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6460924,"math_prob":0.966152,"size":1181,"snap":"2023-40-2023-50","text_gpt3_token_len":388,"char_repetition_ratio":0.13084112,"word_repetition_ratio":0.0,"special_character_ratio":0.38441998,"punctuation_ratio":0.2744186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9841428,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T17:15:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d0f54b71-309d-4a4b-8215-f239f663ad1e>\",\"Content-Length\":\"56478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f24cb0dd-0583-4779-aae8-76a8e337a9c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:da7597de-af34-4b8b-8298-8e0770d3ba16>\",\"WARC-IP-Address\":\"108.138.85.36\",\"WARC-Target-URI\":\"https://docs.rs/crate/seq_geom_xform/0.3.0/source/Cargo.toml.orig\",\"WARC-Payload-Digest\":\"sha1:2TIH3BSIUS3BG3DB64UCQS5EXKC3BTKS\",\"WARC-Block-Digest\":\"sha1:UPL2KLH7NUQNVLY65GSUFZBVSN47JQZF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511002.91_warc_CC-MAIN-20231002164819-20231002194819-00854.warc.gz\"}"} |
https://cboard.cprogramming.com/c-programming/169097-gettoken-function.html?s=4df223358d054aee44368c2b27f16d9c | [
"1. ## gettoken() function\n\nhello,\n\nI have been trying to understand this function, there is still a few things confusing me. First, what are those weird statements like return token type = BRACKETS for? I have never seen them before.\n\nCode:\n```int gettoken(void) /* return next token */\n\n{\n\nint c, getch(void);\nvoid ungetch(int);\nchar *p = token; /* character string pointed by p */\n\nwhile ((c = getch()) == ' ' || c == '\\t') /* skip blanks or tabs */\n;\nif (c == '(') {\nif ((c = getch()) == ')') {\nstrcpy(token, \"()\");\n} else {\nungetch(c); /* put character back on input before prosessing new input */\n\n}\n} else if (c == '[') {\nfor (*p++ = c; (*p++ = getch()) != ']'; ) /* do nothing as long as character is not terminating bracket */\n;\n*p = '\\0';\nreturn tokentype = BRACKETS; /* if it is, we found it, so tokentype is brackets */\n} else if (isalpha(c)) {\nfor (*p++ = c; isalnum(c == getch()); ) /* read characters as long as is alphanumeric */\n*p++ = c;\n*p = '\\0';\nungetch(c); /* if not, push back into the buffer */\n} else\n}```",
null,
"2.",
null,
"Originally Posted by coder222\nFirst, what are those weird statements like return token type = BRACKETS for? I have never seen them before.\nThis:\nCode:\n`return tokentype = BRACKETS;`\nis equivalent to:\nCode:\n```tokentype = BRACKETS;",
null,
"Popular pages Recent additions",
null,
""
] | [
null,
"https://cboard.cprogramming.com/images/misc/progress.gif",
null,
"https://cboard.cprogramming.com/images/misc/quote_icon.png",
null,
"https://cboard.cprogramming.com/images/misc/progress.gif",
null,
"https://www.feedburner.com/fb/images/pub/feed-icon16x16.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75278705,"math_prob":0.9839357,"size":1133,"snap":"2022-27-2022-33","text_gpt3_token_len":316,"char_repetition_ratio":0.14880425,"word_repetition_ratio":0.00913242,"special_character_ratio":0.37511033,"punctuation_ratio":0.15813954,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9519759,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T05:57:51Z\",\"WARC-Record-ID\":\"<urn:uuid:d70e2621-a3db-460d-83eb-407d54e17260>\",\"Content-Length\":\"47876\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3cf3bf6b-f372-4355-b069-c706a510826e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd053b0e-7325-4957-985b-1f1f416f04bf>\",\"WARC-IP-Address\":\"198.46.93.160\",\"WARC-Target-URI\":\"https://cboard.cprogramming.com/c-programming/169097-gettoken-function.html?s=4df223358d054aee44368c2b27f16d9c\",\"WARC-Payload-Digest\":\"sha1:U7J4WGPKOK7OXQRHFQ24T2BIGGUI6QY3\",\"WARC-Block-Digest\":\"sha1:JYXCV3WHM4Y7FFGQ2W7RZJGASEP5REUI\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103920118.49_warc_CC-MAIN-20220701034437-20220701064437-00301.warc.gz\"}"} |
https://math.stackexchange.com/questions/8814/funny-identities?page=3&tab=votes | [
"Funny identities [closed]\n\nHere is a funny exercise $$\\sin(x - y) \\sin(x + y) = (\\sin x - \\sin y)(\\sin x + \\sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?\n\nclosed as primarily opinion-based by Najib Idrissi, user98602, user147263, apnorton, user2345215Jan 31 '15 at 18:16\n\nMany good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.\n\n• This is related. – J. M. is a poor mathematician Nov 3 '10 at 22:35\n• I noticed that your expression can be also written as $\\sin(x - y) \\sin(x + y) = (\\cos y + \\cos x) (\\cos y - \\cos x)$ – Quixotic Nov 4 '10 at 11:09\n• I have tripped up many calculus students with this one: $log(1+2+3)=log1+log2+log3$. I am evil... – user641 Dec 8 '12 at 1:23\n• @SteveD If only we could find an odd example... – peoplepower Jan 13 '13 at 0:31\n• Almost an identity: $$\\sqrt{123456790}\\approx 11111.11111\\,.$$ – Jakob Werner Jul 12 '14 at 18:47\n\nIf we define $P$ as the infinite lower triangular matrix where $P_{i,j} = \\binom{i}{j}$ (we can call it the Pascal Matrix), then $$P^k_{i,j} = \\binom{i}{j}k^{i-j}$$\n\nwhere $P^k_{i,j}$ is the element of $P^k$ in the position $i,j.$\n\nI have another one, but I'm quite unwilling to post this here because it's MINE, I haven't found it anywhere, so don't steal this.\n\nLet us take the four most important mathematical constants: The Euler number $e$, the Aurea Golden Ratio $\\phi$, the Euler-Mascheroni constant $\\gamma$ and finally $\\pi$. Well we can see easily that\n\n$$e\\cdot\\gamma\\cdot\\pi\\cdot\\phi \\approx e + \\gamma + \\pi + \\phi$$\n\n• I wouldn't take credit for this dude – user285523 Nov 11 '15 at 2:48\n\n$$\\int_{-\\infty}^{\\infty}{\\sin\\left(x\\right) \\over x}\\,{\\rm d}x = \\pi\\int_{-1}^{1}\\delta\\left(k\\right)\\,{\\rm d}k$$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.83198786,"math_prob":0.99647987,"size":2105,"snap":"2019-26-2019-30","text_gpt3_token_len":678,"char_repetition_ratio":0.10661589,"word_repetition_ratio":0.01179941,"special_character_ratio":0.3567696,"punctuation_ratio":0.12009238,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T08:50:56Z\",\"WARC-Record-ID\":\"<urn:uuid:0268a7a8-a4de-48d7-b156-94708faa95c5>\",\"Content-Length\":\"140516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:431a110b-773a-45a2-8260-ec112f737593>\",\"WARC-Concurrent-To\":\"<urn:uuid:22989bb4-edae-46ea-bb67-707492e54ba1>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/8814/funny-identities?page=3&tab=votes\",\"WARC-Payload-Digest\":\"sha1:AIZFE4NRAKHGO2NL7UDWOH2G4UPSGGXZ\",\"WARC-Block-Digest\":\"sha1:IXUO6KXITNKXWMJ6SONT3PWOAX43TUCG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000231.40_warc_CC-MAIN-20190626073946-20190626095946-00333.warc.gz\"}"} |
https://www.chegg.com/homework-help/drawing-sketch-surfaces-exercise-ellipsoids-chapter-12.6-problem-18e-solution-9780321730787-exc | [
"",
null,
"# Thomas' Calculus, Early Transcendentals, Books a la Carte Edition (12th Edition) Edit edition Problem 18E from Chapter 12.6: Drawing Sketch the surfaces in Exercise. ELLIPSOIDS\n\nWe have solutions for your book!\nChapter: Problem:\n\nDrawing\n\nSketch the surfaces in Exercise.\n\nELLIPSOIDS",
null,
"Step-by-step solution:\nChapter: Problem:\n• Step 1 of 5\n\nConsider the surface",
null,
"The standard form for the quadratic surface is",
null,
"The surface is symmetric with respect to each of the coordinate planes because each variable in the defining equation is squared\n\n• Chapter , Problem is solved.\nCorresponding Textbook",
null,
"Thomas' Calculus, Early Transcendentals, Books a la Carte Edition | 12th Edition\n9780321730787ISBN-13: 032173078XISBN:\nThis is an alternate ISBN. View the primary ISBN for: Thomas' Calculus Early Transcendentals 12th Edition Textbook Solutions"
] | [
null,
"https://cs.cheggcdn.com/covers2/20500000/20500993_1307601438_Width200.jpg",
null,
"https://mgh-images.s3.amazonaws.com/9780321998002/508937-12.6-18EEI1.png",
null,
"https://word-to-html-images.s3.amazonaws.com/9780321198006/626-12.6-22E-i1.png",
null,
"https://word-to-html-images.s3.amazonaws.com/9780321198006/626-12.6-22E-i2.png",
null,
"https://cs.cheggcdn.com/covers2/20500000/20500993_1307601438_Width200.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78824717,"math_prob":0.81547767,"size":331,"snap":"2019-13-2019-22","text_gpt3_token_len":71,"char_repetition_ratio":0.16207951,"word_repetition_ratio":0.12765957,"special_character_ratio":0.17824773,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98047495,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T16:51:23Z\",\"WARC-Record-ID\":\"<urn:uuid:5eb0844b-39f8-4f6b-aa37-0fd13cc1a9b4>\",\"Content-Length\":\"136959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92ea164f-b765-46c9-b673-b677fee33a82>\",\"WARC-Concurrent-To\":\"<urn:uuid:f120db10-30cd-489f-9c9d-84d2c9e52953>\",\"WARC-IP-Address\":\"52.40.70.55\",\"WARC-Target-URI\":\"https://www.chegg.com/homework-help/drawing-sketch-surfaces-exercise-ellipsoids-chapter-12.6-problem-18e-solution-9780321730787-exc\",\"WARC-Payload-Digest\":\"sha1:JT4BXXXBW77KFBFT47QSZVVNBBWTXBZ6\",\"WARC-Block-Digest\":\"sha1:E4XF6IOTNZ6I343K7OJBDXIEG6JOHCKY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202672.57_warc_CC-MAIN-20190322155929-20190322181929-00221.warc.gz\"}"} |
http://deufrabase.ifsttar.fr/methodology/case-studies/lden-calculation/ | [
"# Lden calculation\n\nFor sake of simplification, the Lden calculation in the DEUFRABASE is carried out assuming the same sound velocity gradient for the whole periods (day, evening and night) and whatever the propagation distances.\n\nIf the user wants to consider the variations in the meteorological conditions during the day, evening and night periods, for propagation distances greater than 100 m, it is possible to carry out a more accurate calculation through the following procedure:\n\n1. Perform a first calculation for a given geometry without gradient celerity (for example, the geometry '1c dense surface'), which corresponds to the day and evening periods;\n2. From the DEUFRABASE results, export the resulting calculation of the LA,eq time distribution to an Excel or CSV file (use the Menu option on the corresponding chart);\n3. Perform a second calculation for the same geometry with gradient celerity (using the same example, the geometry '1c dense surface with gradient'), which corresponds to the night period;\n4. Export the resulting calculation of the LA,eq time distribution to an Excel or CSV file;\n5. In a new spreadsheet, open both files and create a new table using, as the first value series, the data from the first file that corresponds to the day-evening period [6:00-22:00] (for Ld and Le), and, as a second value series, the data from the second file that corresponds to the night period [22:00-6:00] (for Ln);\n6. Perform the Lden calculation using the following relation:\nL_\\text{den}=10\\log_{10} \\left( \\frac{12}{24} 10^{0.1\\times L_\\text{d}} + \\frac{4}{24} 10^{0.1\\times \\left(L_\\text{e}+5\\right)} + \\frac{8}{24} 10^{0.1\\times \\left(L_\\text{n}+10\\right)} \\right)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79393727,"math_prob":0.99228424,"size":1680,"snap":"2019-13-2019-22","text_gpt3_token_len":413,"char_repetition_ratio":0.1300716,"word_repetition_ratio":0.104,"special_character_ratio":0.25416666,"punctuation_ratio":0.103125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880676,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T21:08:58Z\",\"WARC-Record-ID\":\"<urn:uuid:87fb61f4-7916-409f-ba6d-0e12a87db025>\",\"Content-Length\":\"21921\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5681196e-aa47-4375-af01-f3518146b4cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e8741f7-98f5-416e-b59d-9cbf41233849>\",\"WARC-IP-Address\":\"137.121.250.88\",\"WARC-Target-URI\":\"http://deufrabase.ifsttar.fr/methodology/case-studies/lden-calculation/\",\"WARC-Payload-Digest\":\"sha1:UCNQN6Y3CUAQSVSKZUQFFPP7S6LP4NWP\",\"WARC-Block-Digest\":\"sha1:LVUVB2F346ERK3MLUHVNPVUTZMC2QBAQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202471.4_warc_CC-MAIN-20190320210433-20190320232433-00154.warc.gz\"}"} |
https://pubs.asahq.org/anesthesiology/article/83/4/747/35230/Validation-of-the-Alfentanil-Canonical-Univariate | [
"Background\n\nSeveral parameters derived from the multivariate electroencephalographic (EEG) signal have been used to characterize the effects of opioids on the central nervous system. These parameters were formulated on an empirical basis. A new statistical method, semilinear canonical correlation, has been used to construct a new EEG parameter (a certain combination of the powers in the EEG power spectrum) that correlates maximally with the concentration of alfentanil at the effect site. To date, this new canonical univariate parameter (CUP) has been tested only in a small sample of subjects receiving alfentanil.\n\nMethods\n\nThe CUP was tested on EEG data from prior studies of the effect of five opioids: alfentanil (n = 5), fentanyl (n = 15), sufentanil (n = 11), trefentanil (n = 5), and remifentanil (n = 8). We compared the CUP to the commonly used EEG parameter spectral edge, SE95%. The comparison was based on the signal to noise ratio, obtained by fitting a nonlinear pharmacodynamic model to both parameters. The pharmacodynamic parameter estimates obtained using both measurements were also compared.\n\nResults\n\nThe values for signal-to-noise ratio were significantly greater for the CUP than for SE95% when considering all the opioids at once. The pharmacodynamic estimates were similar between the two EEG parameters and with previously published results. Semilinear canonical correlation coefficients estimated within each drug group showed patterns similar to each other and to the coefficients in the CUP, but different from coefficients for propofol and midazolam.\n\nConclusions\n\nAlthough the CUP was originally designed and tested using alfentanil, we have proven it to be a general measure of opioid effect on the EEG.\n\nMethods: The CUP was tested on EEG data from prior studies of the effect of five opioids: alfentanil (n = 5), fentanyl (n = 15), sufentanil (n = 11), trefentanil (n = 5), and remifentanil (n = 8). We compared the CUP to the commonly used EEG parameter spectral edge, SE95%. The comparison was based on the signal to noise ratio, obtained by fitting a nonlinear pharmacodynamic model to both parameters. The pharmacodynamic parameter estimates obtained using both measurements were also compared.\n\nResults: The values for signal-to-noise ratio were significantly greater for the CUP than for SE95% when considering all the opioids at once. The pharmacodynamic estimates were similar between the two EEG parameters and with previously published results. Semilinear canonical correlation coefficients estimated within each drug group showed patterns similar to each other and to the coefficients in the CUP, but different from coefficients for propofol and midazolam.\n\nConclusions: Although the CUP was originally designed and tested using alfentanil, we have proven it to be a general measure of opioid effect on the EEG.\n\nKey words: Analgesics: opioids. Electroencephalogram: canonical univariate parameter; spectral edge. Statistical modeling: semilinear canonical correlation.\n\nTHE electroencephalogram (EEG) has been widely used as a measure of anesthetic drug effect on the central nervous system (CNS). It is a continuous, noninvasive measure from which estimates can be made about the time course of anesthetic drug concentration within the CNS. Additionally, the EEG has proven to be a useful measure of drug potency and, as such, has played an important role in the development of new anesthetic drugs. \n\nElectroencephalographic measures of opioid drug effect reflect both the time course and relative potency of opioid drug effect on the cerebral cortex, as reviewed by Shafer and Varvel. This is not surprising, because the EEG response to opioids is clearly a function of the plasma concentration [4,5](after compensating for diffusion delay to the effect site), and the clinical response to opioids is also a function of the plasma concentration, as demonstrated by Ausems et al. \n\nSince the early work of Bickford describing the effects of central acting drugs using the EEG, many different methods of analyzing the EEG signal have been used to relate the EEG effect to drug concentration. Spectral edge, total power, power in the delta, alpha, theta, and beta frequencies, median frequency, total number of waves per second, autocorrelation, and a variety of ratios of different frequency bands are all examples of EEG processing techniques that have been applied in an effort to reduce the complex EEG waveform to a single (i.e., univariate) measure of the effect of anesthetic drugs on the EEG. These parameters were developed to quantitate some aspect of the EEG that could be seen by visual inspection to change after drug administration. Thus, the selection of EEG parameters has been mostly empirical, in that the EEG signal processing algorithms have been chosen in an ad hoc manner. An alternative is to use an appropriate statistical method to define the EEG parameter that \"optimally\" correlates with anesthetic drug concentration at the site of drug effect.\n\nOne such method is semilinear canonical correlation (SCC). Semilinear canonical correlation is a statistical technique that can be used to characterize the particular combination of EEG frequencies that optimally correlates with effect site opioid concentration. Semi-linear canonical correlation searches for the best linear combination of the powers in the frequency spectrum of the EEG, while rearranging and estimating the pharmacodynamic parameters in an iterative fashion, trying to obtain the combination of both EEG measure and pharmacodynamic parameter estimates that maximizes the signal-to-noise ratio (R2), a measure of how close measurements and predictions are.\n\nGregg et al. previously applied SCC to the measurement of alfentanil drug effect. They demonstrated in a test population that the EEG-based measurement developed with SCC, the alfentanil canonical univariate parameter (CUP), performed better than spectral edge, median frequency, delta power, theta ratio, and total power as a measurement of alfentanil drug effect on the CNS.\n\nIn this article we extend those results to five opioids, alfentanil (in a new data set), fentanyl, sufentanil, trefentanil, and remifentanil, comparing the performance of the CUP to the commonly used EEG parameter, spectral edge, SE95%. The R2of the two EEG parameters was statistically compared.\n\n## Methods\n\nWe used EEG data recorded in prior studies, [2,4,5,9]performed by our research group under the approval of the Stanford University Institutional Review Board. The opioids studied were fentanyl, alfentanil, sufentanil, trefentanil, and remifentanil. The original experiments characterized the pharmacokinetic and pharmacodynamic profile of the opioids using the EEG as a measure of opioid drug effect on the CNS. Alfentanil, trefentanil, remifentanil, and fentanyl [2,9]were studied in healthy volunteers. Additional fentanyl and sufentanil data were used from studies in patients undergoing general anesthesia. The patients receiving sufentanil or fentanyl were ASA physical status 1-2, scheduled for a variety of elective surgical procedures, and who received no premedication or other CNS active drug prior to their EEG study. Demographic characteristics for all these subjects were reported in the respective publications. [2,4,5,9]Details about the drug administration and the EEG endpoints pursued are given in Table 1.\n\nTable 1. Description of the Main Characteristics of the Five Original Studies on Which We Based Our EEG Analysis.",
null,
"### Electroencephalographic Data Collection\n\nThe details of the EEG data collection are as reported in the original manuscripts [2,4,5,9]and the EEG signal analysis for all five opioids was as described by Gregg et al. To compute the CUP, the frequency spectrum of the EEG for each epoch was reduced to 10 bins of 3 Hz each representing the EEG power spanning from 0.5 to 30 Hz. This binning method has higher resolution than the \"classical\" frequency bands delta, theta, alpha, and beta.\n\n### The Canonical Univariate Parameter\n\nThe EEG measure reported by Gregg et al. is based on a series of coefficients that apply to the spectral powers in the 10 3-Hz frequency bins. It was derived using the SCC technique, described later. The coefficients reported by Gregg et al. are shown in Table 2.\n\nTable 2. Values for the Weighting Coefficients Obtained with Semilinear Canonical Correlation for Each Frequency Bin*",
null,
"(Figure 1(A and B)) show how the EEG measures of drug effect were constructed for a representative baseline EEG waveform (Figure 1(A)) and a waveform showing maximum opioid drug effect (Figure 1(B)). This example of maximal EEG effect was obtained by visual inspection of the EEG tracing, looking for maximal opioid-induced slowing. The raw signal was initially digitized and transformed into the frequency domain by means of the fast Fourier transform, obtaining 60 frequency bins of 0.5 Hz each spanning from 0.5 Hz to 30 Hz. The spectral edge was defined as the frequency below which 95% of the power lies, as shown on the second graph in the two figures. The 60 frequency bins were then reduced to 10 bins of 3 Hz each (third graph in Figure 1(A and B)). The power in each bin was converted into a natural log (log) of the power (fourth graph in Figure 1(A and B)). In the last graph in Figure 1(A and B), the log in each bin is multiplied by the corresponding coefficient from Table 2. The CUP is then merely the sum of the bins shown in the last graph in Figure 1(A and B). In other words, Equation 1where the gammaiare the coefficients taken from Table 2, and the birepresent the power in each one of the ten frequency bins.\n\nFigure 1. Construction of the alfentanil canonical univariate parameter of the electroencephalogram (EEG). From the raw EEG signal (graph 1) we obtained the power spectrum by means of the FFT (graph 2). The power spectrum was calculated for each 20 sec epoch of EEG signal (although only 5 sec are shown in graph 1 for clarity). We then added every six bins to obtain ten bins of 3 Hz each (graph 3); the fourth graph shows the log (natural log) transformation of the power in each bin and how the gamma vector transformed the power in each frequency bin. (A) Baseline EEG. (B) EEG during maximal opioid effect. Note in graph 2 the value for the spectral edge and the canonical univariate parameter value in graph 5. Note also in graph (B) that the shift in the power toward the lowest frequency bins along with the important decrease in power from b4to b10.\n\nFigure 1. Construction of the alfentanil canonical univariate parameter of the electroencephalogram (EEG). From the raw EEG signal (graph 1) we obtained the power spectrum by means of the FFT (graph 2). The power spectrum was calculated for each 20 sec epoch of EEG signal (although only 5 sec are shown in graph 1 for clarity). We then added every six bins to obtain ten bins of 3 Hz each (graph 3); the fourth graph shows the log (natural log) transformation of the power in each bin and how the gamma vector transformed the power in each frequency bin. (A) Baseline EEG. (B) EEG during maximal opioid effect. Note in graph 2 the value for the spectral edge and the canonical univariate parameter value in graph 5. Note also in graph (B) that the shift in the power toward the lowest frequency bins along with the important decrease in power from b4to b10.\n\nSummarizing, the CUP is not only the gamma coefficients, but the combination of the logs of the powers and their corresponding coefficients. The coefficients tend to \"modulate\" the changes in the power spectrum in a way that emphasizes where the drug-induced changes are occurring. For opioids, it is known that the main change is a shift in the power toward the low frequencies, and the combination of the logs of the powers and the coefficients reflects this trend (see bottom panels of Figure 1(A and B)). Thus, it is not as important whether the coefficient for the last frequency bin is positive as it is that the log of the power in that bin multiplied by the coefficient is very small and contributes negligibly to the CUP. Greater CUP values indicate that the effect is increasing.\n\n### Pharmacokinetic and Pharmacodynamic Models\n\nThe observed drug effect was related to drug concentration at the site of drug effect using a pharmacodynamic model. Previous publications using the EEG as a measure of drug effect have used the classic Hill equation as the model relating concentration in the effect compartment, Ce, to EEG effect, E, as follows: Equation 2where E is the effect being modeled, either SE95% or the CUP, E0is the effect when no drug is present, Emaxis the maximum possible effect reached because of the administration of the drug, IC50is the effect compartment concentration associated with an effect half the difference between Emaxand E0, and alpha is the Hill equation coefficient that determines the steepness of the relationship. The apparent effect compartment concentration, Ccis calculated as Equation 3where k sub e0 is the elimination rate constant from the effect compartment ,* is the convolution operator, and Cp(t) is the prediction of the pharmacokinetic model at time t. In turn, Cp(t) was calculated as the convolution of the plasma disposition function with the drug input over time, I(t): Equation 4where I(t) is the infusion rate shown in Table 1and the duration of the infusion. The values of Ajand lambdajare those of the individual subjects for the study, either as reported by the authors (fentanyl, [2,4]alfentanil, trefentanil, remifentanil ) or as calculated by using extended least-squares regression for each person from the original data (sufentanil ).\n\n### Nonlinear Regression\n\nCombining the above relationships yielded the following pharmacodynamic model relating the pharmacokinetic parameters, the time course of the infusion, the observed EEG data, and the parameters of the pharmacodynamic model for SE95%: Equation 5for the canonical univariate parameter: Equation 6.\n\nEach person was fit separately. The unknown parameters of the above models are IC50, alpha, E0, Emax, and ke0. These pharmacodynamic parameters were estimated independently for SE95% and CUP using nonlinear regression with ordinary least-squares. Because we are trying to maximize correlation between the measure of drug effect and the drug concentration in the effect site, the objective function was the R sup 2 (coefficient of determination or squared coefficient of correlation, R2). This coefficient is defined as; Equation 7where SSE (sum of squared errors) represents the sum of the squares of the differences between the observed measurements yifor a given time and what the model predicts for this same time, yi, and SST (total sum of squares) stands for the sum of the squares of the differences between each actual measurement and the average of all the measurements, y with bar. Observe that because SST does not depend on the model parameters, maximizing R2is equivalent to minimizing SSE, i.e., it is equivalent to nonlinear regression with ordinary least-squares.\n\nThe R2is a measure of the proportion of the variation in the effect measurement directly attributable to changes in concentration of the drug at the site of drug effect. A value of R2close to one means that changes in effect can be entirely explained by changes in the apparent effect compartment concentration. A value of R2close to zero means that there is no relationship between effect compartment concentration and effect. [12,13]We compared the values of R2between SE95% and the CUP using the Wilcoxon signed rank sum test for paired values.\n\n### Semilinear Canonical Correlation\n\nSemilinear canonical correlation** is the statistical approach that allows one to extract from the complex multidimensional EEG recording only the information maximally correlated with \"apparent\" effect compartment concentration of the drug. The technique is exactly what is described in the regression description for CUP in Equation 6, except that the ten coefficients gammaiare estimated concurrently with the parameters of the pharmacodynamic model IC50, alpha, E0, Emax, and Ke0.\n\nIn conventional canonical correlation, [14,15]all parameters enter the model linearly. Semilinear canonical correlation differs from conventional canonical correlation only in that several parameters (IC50, alpha, and Ke0) enter the model nonlinearly, and hence a nonlinear regression is required. Figure 2explains SCC using a progression of more familiar statistical models.\n\nFigure 2. Different linear and nonlinear statistical models as compared to semilinear canonical correlation and to the alfentanil canonical univariate parameter. The x's are independent variables, the y's are dependent variables and the beta and gamma are parameters of the models. Also shown the model for the canonical univariate parameter as compared to the semilinear canonical correlation model. The ten gamma coefficients, E sub 0, Emax, IC50, and ke0should be estimated at the same time trying to maximize the value of R2(see text).\n\nFigure 2. Different linear and nonlinear statistical models as compared to semilinear canonical correlation and to the alfentanil canonical univariate parameter. The x's are independent variables, the y's are dependent variables and the beta and gamma are parameters of the models. Also shown the model for the canonical univariate parameter as compared to the semilinear canonical correlation model. The ten gamma coefficients, E sub 0, Emax, IC50, and ke0should be estimated at the same time trying to maximize the value of R2(see text).\n\nUsing SCC, we estimated the elements of the gamma vector for each person. We then calculated a population estimate of the elements of the gamma vector for each opioid by taking the arithmetic mean of each element of gamma Equation 8, Equation 9, Equation 10, Equation 11, Equation 12and Equation 13estimated in the persons receiving that opioid. The details of the method are described by Gregg et al. We then compared the pattern of the gamma vectors among the different opioids to see the extent to which EEG measures customized for each opioid agreed with each other and with the alfentanil CUP.\n\nAll the computations for both nonlinear regression and SCC, were performed on a spreadsheet using the Excel software program (Microsoft, Redmond, WA), the parameters were optimized with the Solver tool within Excel. The template spreadsheet is available by anonymous File Tranfer Protocol (FTP) in the /public/scc.dir directory of pkpd.icon.palo-alto.med.va.gov. The data for each person can be found in separate workbooks.\n\n## Results\n\n### Signal-to-Noise Ratio\n\nOne EEG measurement every 20 s (3 measurements per min) were used for fitting; EEG recording time ranged from around 60 min for sufentanil and fentanyl to 2 h for alfentanil, trefentanil, a subset of fentanyl and remifentanil. The same number of data points were used in each person for SE95%, for the CUP and to compute the optimal canonical combination.\n\n(Figure 3) shows the R2values for each person for SE95% and the CUP measures of drug effect. The top graph shows the results for all five opioids, while the lower graphs distinguish the different opioids. Every black dot in the plot represents a person and the connecting line tracks the improvement or decrement in the R2value. The arrow shows the median value within each drug group.\n\nFigure 3. Signal-to-noise ratio comparison. Values of the signal-to-noise coefficient for both measurements of opioid EEG effect. Every black dot represents a person. The lines connect the value for spectral edge and the canonical univariate parameter in each subject. The small arrows note the median value. The top panel combines the results for all opioids. The lower panels show the results with each opioid individually. Dots overlap where two lines appear to connect to the same dot.\n\nFigure 3. Signal-to-noise ratio comparison. Values of the signal-to-noise coefficient for both measurements of opioid EEG effect. Every black dot represents a person. The lines connect the value for spectral edge and the canonical univariate parameter in each subject. The small arrows note the median value. The top panel combines the results for all opioids. The lower panels show the results with each opioid individually. Dots overlap where two lines appear to connect to the same dot.\n\n(Figure 3) shows that SE95% was a good measure of opioid drug effect for the five opioids studied. In general, R2was about 0.8 for SE95% across the opioids studied. The comparison of R2between SE95% and the CUP' when considering all opioids together, yielded an improvement in median R2(0.80 vs. 0.86) that was statistically significant (P = 0.0006; Table 3). For fentanyl, a statistically significant difference in R2values was also found (P = 0.02; Table 3). The trend toward improved R2was seen for each opioid studied, even in the case of sufentanil were the CUP R2was greater for 6 of 11 subjects, as shown in the lower 5 graphs of Figure 3.\n\nTable 3. Values of the Signal-to-Noise Ratio for All the Individuals and for Each Drug for Both SE95% and the CUP",
null,
"Perhaps of greater clinical significance was the increased consistency of the CUP when compared with SE95%. For every opioid except remifentanil, there were persons in whom the SE95% performed relatively poorly (R2less than 0.7). In these persons, the relationship of the CUP to effect site concentration was considerably stronger than the relationship of SE95%. Additionally, in no subject was the R2value for the CUP less than 0.6. Thus, the CUP behaved as a better measure of drug effect, in that it did not perform abysmally as SE95% occasionally did. Figure 4illustrates this point. Here we show the worst examples (by R2criterion) of the relationship between concentration and response for SE95% and CUP for the five opioids and how this same relationship is described by the other parameter. The worst R2for every drug was from a SE95% as can be seen in Figure 3. For fentanyl and alfentanil, the CUP found a drug effect while SE95% found virtually no relationship. For sufentanil, the CUP relationship was somewhat steeper and had less variability about the baseline. For trefentanil and remifentanil, the primary improvement was decreased noise, particularly about the baseline.\n\nFigure 4. Effect (vertical axis) versus effect site concentration (horizontal axis) relationships in the persons with lowest value for the signal-to-noise ratio. On the graphs at left, SE95% versus effect site concentration for every opioid is plotted; the corresponding graph on the right shows the relationship between effect site concentration and the canonical univariate parameter for the same person. Each graph also shows the value for signal-to-noise ratio.\n\nFigure 4. Effect (vertical axis) versus effect site concentration (horizontal axis) relationships in the persons with lowest value for the signal-to-noise ratio. On the graphs at left, SE95% versus effect site concentration for every opioid is plotted; the corresponding graph on the right shows the relationship between effect site concentration and the canonical univariate parameter for the same person. Each graph also shows the value for signal-to-noise ratio.\n\n### Pharmacodynamic Analysis\n\nOf the initial 44 subjects, the concentration-response relationship for both SE95% and for CUP could be described by a sigmoidal relationship in 33. The subjects where the relationship between effect and apparent effect site concentration did not follow a sigmoidal shape, were not included in the following analysis of the pharmacodynamic parameters. Table 4shows the values for t 1/2 ke0, IC50, and alpha estimated using the CUP, SE95%, and as reported in the original studies. The values for t 1/2 ke0, IC50, and alpha are generally in good agreement between SE95% and the CUP, and with the values reported by the original authors. Figure 5shows the concentration versus response relationship for all five opioids, using both measures of drug effect. As expected, the effect site concentration versus response relationships estimated using CUP and SE95% were similar. Thus, CUP appeared to be measuring the same pharmacodynamic phenomenon as SE95%, but with increased R2.\n\nTable 4. Summary of the Pharmacodynamic Parameters Estimated with Both SE95% and the CUP as Compared with the Values Reported in the Publications Where Data Were First Analyzed",
null,
"Figure 5. The canonical univariate parameter and spectral edge versus effect site concentration. Based on the pharmacodynamic parameter estimates obtained with both electroencephalographic measures of effect, the graph shows both spectral edge and canonical univariate parameter together for each opioid in a logarithmic scale of effect site concentration. The sigmoidal relationship for spectral edge is preserved when using canonical univariate parameter as a measure of opioid effect.\n\nFigure 5. The canonical univariate parameter and spectral edge versus effect site concentration. Based on the pharmacodynamic parameter estimates obtained with both electroencephalographic measures of effect, the graph shows both spectral edge and canonical univariate parameter together for each opioid in a logarithmic scale of effect site concentration. The sigmoidal relationship for spectral edge is preserved when using canonical univariate parameter as a measure of opioid effect.\n\n### Optimal Coefficients Estimated Using Semilinear Canonical Correlation\n\n(Figure 6) shows the mean gamma vector for each opioid, and the gamma vector from Table 2. The gamma vectors for all five opioids are very similar, and follow the same pattern as the gamma vector reported by Gregg et al. These similarities suggest that the EEG response as a measure of opioid drug effect is consistent across these five opioids. In general, the weights estimated by SCC are greater at the lower frequency bins where most of the opioid effect is located.\n\nFigure 6. Opioid gamma vector. The comparison between the pattern of the gamma vector reported by Gregg et al. and the optimal gamma vector for each one of the opioids studied is shown in the graph. In the low frequency range is where the weight of this coefficients is mostly located for all the opioids.\n\nFigure 6. Opioid gamma vector. The comparison between the pattern of the gamma vector reported by Gregg et al. and the optimal gamma vector for each one of the opioids studied is shown in the graph. In the low frequency range is where the weight of this coefficients is mostly located for all the opioids.\n\n## Discussion\n\nBuhrer et al. argue that a suitable EEG parameter for characterizing the Ceversus EEG effect relationship should:\n\n1. allow a quantification of the changes in the EEG;\n\n2. be stable during baseline, when no drug is present;\n\n3. distill the most prominent drug-induced property visible in the raw EEG tracing with the minimal amount of data transformation (the most prominent change in the EEG induced by opioids is the slowing in frequency and increase in amplitude);\n\n4. show onset and offset of drug effect as a function of concentration of drug in plasma and equilibration delay;\n\n5. exhibit a duration of the ceiling effect proportional to the dose administered\n\n6. be obtainable with the use of available software.\n\nIn these studies, the CUP met all of the criteria except possibly item 5, which was not specifically investigated. Thus the CUP is a nearly optimal EEG parameter for the purpose of measuring opioid drug effect on the CNS.\n\nIn general, SE95% is also a good measure of opioid drug effect. In the original work by Gregg et al., SE95% was the best among the standard measurements of drug effect on the EEG evaluated. Additionally, investigators have used SE95% as a measure of opioid drug effect for longer than 15 yr with good results.\n\nWhen SE95% performed well as a measure of drug response, the CUP also performed well and the difference between them was minimal. That the CUP performs better than SE95% is mostly evident in those persons in whom SE95% was unable to demonstrate a drug response (Figure 4). Since the CUP was designed to maximize the correlation between EEG effect and effect compartment drug concentration, it is able to extract information about drug effect even in noisy EEGs where standard measures fail. The corollary is that the CUP tends to ignore information in the EEG not related to drug concentration. Thus, the CUP can be viewed as a method of noise rejection.\n\nThe pharmacodynamic parameters we report differ slightly from those reported by the authors in the original studies (Table 4). Modest methodologic differences likely account for why our results were not identical. First, our methods of digitizing and processing waveforms have improved since some of the original studies were performed. To provide consistency across the results, we redigitized the analog signals from all studies not originally processed with our current hardware and software. Additionally, we completely reprocessed the digitized waveforms in all studies so that a consistent processing method applied throughout. These changes in processing accounted for some of the differences in pharmacodynamic results between the original publication and what we report herein. Many of these data sets were originally analyzed using a semiparametric approach to compute Ke0, whereas we have used a parametric method. These small differences in method likely explain the small differences in pharmacodynamic parameter estimates between this study and the original studies shown in Table 4.\n\nWe have designed our study as a validation of the univariate parameter designed by Gregg et al. In their study they obtained the CUP based on a learning sample of eight persons and tested the resulting coefficients post hoc in another sample of seven persons from the same study. We have shown that despite the small size of the learning set (8 persons) and the narrow focus (just alfentanil), this measure of opioid effect on the EEG is applicable to pure micro agonists in general. This is shown not only in our prospective test here with five other opioids, but also by the similarity of the patterns between the gamma vector reported in the original article and the gamma vector we estimated for each opioid. Thus, we propose that the CUP developed for alfentanil can be generally referred to as the \"opioid canonical univariate parameter.\"\n\nTo see if the coefficients gammaidefining the opioid CUP provide a general measure of anesthetic drug effect, or are specific to opioids, we also estimated the gamma vector for midazolam (T.W. Schnider, personal communication), and propofol based on EEG recorded on previous studies.***\n\nA visual comparison between these gamma vectors and the CUP gamma vector is shown in Figure 7. Although the lowest frequencies are important to both sets of coefficients (and hence may be useful as a measure of hypnotic drug effect in general), there were clear differences in how the mid-range and upper frequencies were weighted. This suggests that the gamma vector for opioids is not generally applicable to other CNS active drugs used in the practice of anesthesia.\n\nFigure 7. gamma vector of the alfentanil canonical univariate parameter versus gamma vector for other hypnotics. The pattern of the coefficients from Table 2, as compared to the optimal gamma vector obtained using semilinear canonical correlation in a group of persons under the effects of propofol or midazolam. Note the different pattern of the weights in the high frequencies for propofol and midazolam as compared to the canonical univariate parameter.\n\nFigure 7. gamma vector of the alfentanil canonical univariate parameter versus gamma vector for other hypnotics. The pattern of the coefficients from Table 2, as compared to the optimal gamma vector obtained using semilinear canonical correlation in a group of persons under the effects of propofol or midazolam. Note the different pattern of the weights in the high frequencies for propofol and midazolam as compared to the canonical univariate parameter.\n\nIn summary, the effect of the pure micro agonists fentanyl, alfentanil, sufentanil, trefentanil, and remifentanil on the EEG is consistent, except for differences in potency and rate of plasma-CNS equilibration among the opioids. A measure of drug effect designed for alfentanil, the CUP, proves to be a robust measure of fentanyl, sufentanil, trefentanil, and remifentanil drug effect on the EEG. In particular, this measure of opioid drug effect performs well in those occasional subjects in whom the 95% spectral edge performs poorly as a measure of opioid drug effect. This would suggest that the CUP may be particularly useful in closed-loop opioid control systems based on an EEG measure of drug response. Such systems must behave well in the worst-case situation, where a poor R2might result in inappropriate drug dosing. In addition, our results support the conclusion of Gregg et al. that SCC is a useful new statistical tool for developing univariate measures of drug response from the multivariate response measures gathered in clinial research.\n\nThe authors thank T. D. Egan, M.D., H. M. Lemmens, M.D., and J. C. Scott, M.D., who supplied the electroencephalographic signal analyzed in this study.\n\n* Hermann DJ, Muir KT, Egan TD, Stanski DR, Shafer SL: Use of pharmacokinetic-pharmacodynamic modeling for developing rational dosing strategies for Phase III Clinical Trials: Remifentanil. Second International Symposium on Measurements and Kinetics of In Vivo Drug Effects, Noordwijkerhout, April 1994.\n\n** Beal SL, Dunne A, Sheiner LB: Estimating optimal linear transformations of a multivariate response to a univariate response with application to semilinear canonical correlation. Technical Report of the Division of Clinical Pharmacology. San Francisco, University of California, San Francisco, 1990.\n\n*** Dyck JB, Shafer S: Effects of age on propofol pharmacokinetics. Semin Anesth 11:2-4, 1992.\n\n## REFERENCES\n\n1.\nHung OR, Varvel JR, Shafer SL, Stanski DR: Thiopental pharmacodynamics: II. Quantitation of clinical and electroencephalographic depth of anesthesia. ANESTHESIOLOGY 77:237-244, 1992.\n2.\nLemmens HJM, Dyck JB, Shafer SL, Stanski DR: Pharmacokinetic-pharmacodynamic modeling in drug development: Application to the investigational opioid trefentanil. Clin Pharmacol Ther 56:261-271, 1994.\n3.\nShafer SL, Varvel JR: Pharmacokinetics, pharmacodynamics, and rational opioid selection. ANESTHESIOLOGY 74:53-63, 1991.\n4.\nScott JC, Stanski DR: Decreased fentanyl and alfentanil dose requirements with age. A simultaneous pharmacokinetic and pharmacodynamic evaluation. J Pharmacol Exp Ther 240:159-166, 1987.\n5.\nScott JC, Cooke JE, Stanski DR: Electroencephalographic quantitation of opioid effect: Comparative pharmacodynamics of fentanyl and sufentanil. ANESTHESIOLOGY 74:34-42, 1991.\n6.\nAusems ME, Hug CCJ, Stanski DR, Burm AG: Plasma concentrations of alfentanil required to supplement nitrous oxide anesthesia for general surgery. ANESTHESIOLOGY 65:362-373, 1986.\n7.\nBickford RG: Automated electroencephalographic control of general anesthesia. Electroencephalogr Clin Neurophysiol 2:93-96, 1950.\n8.\nGregg KM, Varvel JR, Sharer SL: Application of semilinear canonical correlation to the measurement of opioid drug effect. J Pharmacokinet Biopharm 20:611-635, 1992.\n9.\nEgan T, Lemmens H, Fiset P, Hermann D, Muir K, Stanski D, Shafer S: The pharmacokinetics of the new short-acting opioid remifentanil (GI87084B) in healthy adult male volunteers. ANESTHESIOLOGY 79:881-892, 1993.\n10.\nHill AV: The possible effects of the aggregation of the molecules of hemoglobin on its dissociation curves. J Physiol 40:iv-vii, 1910.\n11.\nSheiner LB, Stanski DR, Vozeh S, Miller RD, Ham J: Simultaneous modeling of pharmacokinetics and pharmacodynamics: Application to d-tubocurarine. Clin Pharmacol Ther 25:358-371, 1979.\n12.\nBowerman BL, O'Connell RT: Simple coefficients of determination and correlation, Linear Statistical Models (An Applied Approach). Edited by Payne M. Boston, PWS-KENT, 1990, pp 174-183.\n13.\nNeter J, Wasserman W, Kutner MH: Applied Linear Statistical Models. Homewood, Richard D. Irwin, 1985, pp 241-242.\n14.\nLevine MS: Canonical analysis and factor comparison. Beverly Hills, SAGE University Papers, 1977, pp. 11-26.\n15.\nThompson B: Canonical correlation analysis, Uses and Interpretation. Beverly Hills, SAGE University Papers, 1984, pp. 7-63.\n16.\nBuhrer M, Maitre PO, Hung O, Stanski DR: Electroencephalographic effects of benzodiazepines: I. Choosing an electroencephalographic parameter to measure the effect of midazolam on the central nervous system. Clin Pharmacol Ther 48:544-554, 1990.\n17.\nUnadkat JD, Bartha F, Sheiner LB: Simultaneous modeling of pharmacokinetics and pharmacodynamics with nonparametric kinetic and dynamic models. Clin Pharmacol Ther 40:86-93, 1986."
] | [
null,
"https://asa2.silverchair-cdn.com/asa2/content_public/journal/anesthesiology/83/4/10.1097_00000542-199510000-00014/2/m_14tt1.png",
null,
"https://asa2.silverchair-cdn.com/asa2/content_public/journal/anesthesiology/83/4/10.1097_00000542-199510000-00014/2/m_14tt2.png",
null,
"https://asa2.silverchair-cdn.com/asa2/content_public/journal/anesthesiology/83/4/10.1097_00000542-199510000-00014/2/m_14tt3.png",
null,
"https://asa2.silverchair-cdn.com/asa2/content_public/journal/anesthesiology/83/4/10.1097_00000542-199510000-00014/2/m_14tt4.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92349315,"math_prob":0.90070313,"size":26266,"snap":"2021-31-2021-39","text_gpt3_token_len":5608,"char_repetition_ratio":0.16521971,"word_repetition_ratio":0.11092274,"special_character_ratio":0.19854565,"punctuation_ratio":0.10248777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96797943,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T09:35:50Z\",\"WARC-Record-ID\":\"<urn:uuid:78117fab-67d7-4dd3-9c74-ef26746ada15>\",\"Content-Length\":\"198927\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d6131b4-7673-43f7-925e-2d26628275c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:73903ba2-856a-4587-96bf-4ffa3ba593e4>\",\"WARC-IP-Address\":\"52.224.196.54\",\"WARC-Target-URI\":\"https://pubs.asahq.org/anesthesiology/article/83/4/747/35230/Validation-of-the-Alfentanil-Canonical-Univariate\",\"WARC-Payload-Digest\":\"sha1:IH3QV6ZDNSJ73IUFNPHV5XRFM5O2THDL\",\"WARC-Block-Digest\":\"sha1:6T266O6V2AO26PHXWFIBWDQVTTCPKRU7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00249.warc.gz\"}"} |
https://www.nag.com/numeric/nl/nagdoc_27.3/clhtml/g03/g03gac.html | [
"# NAG CL Interfaceg03gac (gaussian_mixture)\n\nSettings help\n\nCL Name Style:\n\n## 1Purpose\n\ng03gac performs a mixture of Normals (Gaussians) for a given (co)variance structure.\n\n## 2Specification\n\n #include\n void g03gac (Integer n, Integer m, const double x[], Integer pdx, const Integer isx[], Integer nvar, Integer ng, Nag_Boolean popt, double prob[], Integer tdprob, Integer *niter, Integer riter, double w[], double g[], Nag_VarCovar sopt, double s[], double f[], double tol, double *loglik, NagError *fail)\nThe function may be called by the names: g03gac or nag_mv_gaussian_mixture.\n\n## 3Description\n\nA Normal (Gaussian) mixture model is a weighted sum of $k$ group Normal densities given by,\n $p (x∣w,μ,Σ) = ∑ j=1 k wj g (x∣μj,Σj) , x∈ℝp$\nwhere:\n• $x$ is a $p$-dimensional object of interest;\n• ${w}_{j}$ is the mixture weight for the $j$th group and $\\sum _{\\mathit{j}=1}^{k}{w}_{j}=1$;\n• ${\\mu }_{j}$ is a $p$-dimensional vector of means for the $j$th group;\n• ${\\Sigma }_{j}$ is the covariance structure for the $j$th group;\n• $g\\left(·\\right)$ is the $p$-variate Normal density:\n $g (x∣μj,Σj) = 1 (2π) p/2 |Σj| 1/2 exp[-12(x-μj) Σ j −1 (x-μj) T] .$\nOptionally, the (co)variance structure may be pooled (common to all groups) or calculated for each group, and may be full or diagonal.\nHartigan J A (1975) Clustering Algorithms Wiley\n\n## 5Arguments\n\n1: $\\mathbf{n}$Integer Input\nOn entry: $n$, the number of objects. There must be more objects than parameters in the model.\nConstraints:\n• if ${\\mathbf{sopt}}=\\mathrm{Nag_GroupCovar}$, ${\\mathbf{n}}>{\\mathbf{ng}}×\\left({\\mathbf{nvar}}×{\\mathbf{nvar}}+{\\mathbf{nvar}}\\right)$;\n• if ${\\mathbf{sopt}}=\\mathrm{Nag_PooledCovar}$, ${\\mathbf{n}}>{\\mathbf{nvar}}×\\left({\\mathbf{ng}}+{\\mathbf{nvar}}\\right)$;\n• if ${\\mathbf{sopt}}=\\mathrm{Nag_GroupVar}$, ${\\mathbf{n}}>2×{\\mathbf{ng}}×{\\mathbf{nvar}}$;\n• if ${\\mathbf{sopt}}=\\mathrm{Nag_PooledVar}$, ${\\mathbf{n}}>{\\mathbf{nvar}}×\\left({\\mathbf{ng}}+1\\right)$;\n• if ${\\mathbf{sopt}}=\\mathrm{Nag_OverallVar}$, ${\\mathbf{n}}>{\\mathbf{nvar}}×{\\mathbf{ng}}+1$.\n2: $\\mathbf{m}$Integer Input\nOn entry: the total number of variables in array x.\nConstraint: ${\\mathbf{m}}\\ge 1$.\n3: $\\mathbf{x}\\left[{\\mathbf{n}}×{\\mathbf{pdx}}\\right]$const double Input\nOn entry: ${\\mathbf{x}}\\left[\\left(\\mathit{i}-1\\right)×{\\mathbf{pdx}}+\\mathit{j}-1\\right]$ must contain the value of the $\\mathit{j}$th variable for the $\\mathit{i}$th object, for $\\mathit{i}=1,2,\\dots ,n$ and $\\mathit{j}=1,2,\\dots ,{\\mathbf{m}}$.\n4: $\\mathbf{pdx}$Integer Input\nOn entry: the stride separating matrix column elements in the array x.\nConstraint: ${\\mathbf{pdx}}\\ge {\\mathbf{m}}$.\n5: $\\mathbf{isx}\\left[{\\mathbf{m}}\\right]$const Integer Input\nOn entry: if ${\\mathbf{nvar}}={\\mathbf{m}}$ all available variables are included in the model and isx is not referenced; otherwise the $j$th variable will be included in the analysis if ${\\mathbf{isx}}\\left[\\mathit{j}-1\\right]=1$ and excluded if ${\\mathbf{isx}}\\left[\\mathit{j}-1\\right]=0$, for $\\mathit{j}=1,2,\\dots ,{\\mathbf{m}}$.\nConstraint: if ${\\mathbf{nvar}}\\ne {\\mathbf{m}}$, ${\\mathbf{isx}}\\left[\\mathit{j}-1\\right]=1$ for nvar values of $\\mathit{j}$ and ${\\mathbf{isx}}\\left[\\mathit{j}-1\\right]=0$ for the remaining ${\\mathbf{m}}-{\\mathbf{nvar}}$ values of $\\mathit{j}$, for $\\mathit{j}=1,2,\\dots ,{\\mathbf{m}}$.\n6: $\\mathbf{nvar}$Integer Input\nOn entry: $p$, the number of variables included in the calculations.\nConstraint: $1\\le {\\mathbf{nvar}}\\le {\\mathbf{m}}$.\n7: $\\mathbf{ng}$Integer Input\nOn entry: $k$, the number of groups in the mixture model.\nConstraint: ${\\mathbf{ng}}\\ge 1$.\n8: $\\mathbf{popt}$Nag_Boolean Input\nOn entry: if ${\\mathbf{popt}}=\\mathrm{Nag_TRUE}$, the initial membership probabilities in prob are set internally; otherwise these probabilities must be supplied.\n9: $\\mathbf{prob}\\left[{\\mathbf{n}}×{\\mathbf{tdprob}}\\right]$double Input/Output\nOn entry: if ${\\mathbf{popt}}\\ne \\mathrm{Nag_TRUE}$, ${\\mathbf{prob}}\\left[\\left(i-1\\right)×{\\mathbf{tdprob}}+j-1\\right]$ is the probability that the $i$th object belongs to the $j$th group. (These probabilities are normalised internally.)\nOn exit: ${\\mathbf{prob}}\\left[\\left(i-1\\right)×{\\mathbf{tdprob}}+j-1\\right]$ is the probability of membership of the $i$th object to the $j$th group for the fitted model.\n10: $\\mathbf{tdprob}$Integer Input\nOn entry: the stride separating matrix column elements in the array prob.\nConstraint: ${\\mathbf{tdprob}}\\ge {\\mathbf{ng}}$.\n11: $\\mathbf{niter}$Integer * Input/Output\nOn entry: the maximum number of iterations.\nSuggested value: $15$\nOn exit: the number of completed iterations.\nConstraint: ${\\mathbf{niter}}\\ge 1$.\n12: $\\mathbf{riter}$Integer Input\nOn entry: if ${\\mathbf{riter}}>0$, membership probabilities are rounded to $0.0$ or $1.0$ after the completion of the first riter iterations.\nSuggested value: $0$\n13: $\\mathbf{w}\\left[{\\mathbf{ng}}\\right]$double Output\nOn exit: ${w}_{j}$, the mixing probability for the $j$th group.\n14: $\\mathbf{g}\\left[{\\mathbf{nvar}}×{\\mathbf{ng}}\\right]$double Output\nOn exit: ${\\mathbf{g}}\\left[\\left(i-1\\right)×{\\mathbf{ng}}+j-1\\right]$ gives the estimated mean of the $i$th variable in the $j$th group.\n15: $\\mathbf{sopt}$Nag_VarCovar Input\nOn entry: determines the (co)variance structure:\n${\\mathbf{sopt}}=\\mathrm{Nag_GroupCovar}$\nGroupwise covariance matrices.\n${\\mathbf{sopt}}=\\mathrm{Nag_PooledCovar}$\nPooled covariance matrix.\n${\\mathbf{sopt}}=\\mathrm{Nag_GroupVar}$\nGroupwise variances.\n${\\mathbf{sopt}}=\\mathrm{Nag_PooledVar}$\nPooled variances.\n${\\mathbf{sopt}}=\\mathrm{Nag_OverallVar}$\nOverall variance.\nConstraint: ${\\mathbf{sopt}}=\\mathrm{Nag_GroupCovar}$, $\\mathrm{Nag_PooledCovar}$, $\\mathrm{Nag_GroupVar}$, $\\mathrm{Nag_PooledVar}$ or $\\mathrm{Nag_OverallVar}$.\n16: $\\mathbf{s}\\left[\\mathit{dim}\\right]$double Output\nNote: the dimension, dim, of the array s must be at least $\\mathit{a}×\\mathit{b}×\\mathit{c}$.\nwhere ${\\mathbf{S}}\\left(i,j,k\\right)$ appears in this document, it refers to the array element ${\\mathbf{s}}\\left[\\left(k-1\\right)×\\mathit{a}×\\mathit{b}+\\left(j-1\\right)×\\mathit{a}+i-1\\right]$.\nOn exit: if ${\\mathbf{sopt}}=\\mathrm{Nag_GroupCovar}$, ${\\mathbf{S}}\\left(i,j,k\\right)$ gives the $\\left(i,j\\right)$th element of the $k$th group, with $a=b={\\mathbf{nvar}}$ and $c={\\mathbf{ng}}$.\nIf ${\\mathbf{sopt}}=\\mathrm{Nag_PooledCovar}$, ${\\mathbf{S}}\\left(i,j,1\\right)$ gives the $\\left(i,j\\right)$th element of the pooled covariance, with $a=b={\\mathbf{nvar}}$ and $c=1$.\nIf ${\\mathbf{sopt}}=\\mathrm{Nag_GroupVar}$, ${\\mathbf{S}}\\left(j,k,1\\right)$ gives the $j$th variance in the $k$th group, with $a={\\mathbf{nvar}}$, $b={\\mathbf{ng}}$ and $c=1$.\nIf ${\\mathbf{sopt}}=\\mathrm{Nag_PooledVar}$, ${\\mathbf{S}}\\left(j,1,1\\right)$ gives the $j$th pooled variance., with $a={\\mathbf{nvar}}$ and $b=c=1$\nIf ${\\mathbf{sopt}}=\\mathrm{Nag_OverallVar}$, ${\\mathbf{S}}\\left(1,1,1\\right)$ gives the overall variance, with $a=b=c=1$.\n17: $\\mathbf{f}\\left[{\\mathbf{n}}×{\\mathbf{ng}}\\right]$double Output\nOn exit: ${\\mathbf{f}}\\left[\\left(i-1\\right)×{\\mathbf{ng}}+j-1\\right]$ gives the $p$-variate Normal (Gaussian) density of the $i$th object in the $j$th group.\n18: $\\mathbf{tol}$double Input\nOn entry: iterations cease the first time an improvement in log-likelihood is less than tol. If ${\\mathbf{tol}}\\le 0$ a value of ${10}^{-3}$ is used.\n19: $\\mathbf{loglik}$double * Output\nOn exit: the log-likelihood for the fitted mixture model.\n20: $\\mathbf{fail}$NagError * Input/Output\nThe NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).\n\n## 6Error Indicators and Warnings\n\nNE_ALLOC_FAIL\nDynamic memory allocation failed.\nSee Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.\nNE_ARRAY_SIZE\nOn entry, ${\\mathbf{pdx}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{n}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{pdx}}\\ge {\\mathbf{n}}$.\nOn entry, ${\\mathbf{tdprob}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{n}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{tdprob}}\\ge {\\mathbf{n}}$.\nOn entry, argument $⟨\\mathit{\\text{value}}⟩$ had an illegal value.\nNE_CLUSTER_EMPTY\nAn iteration cannot continue due to an empty group, try a different initial allocation.\nNE_INT\nOn entry, ${\\mathbf{m}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{m}}\\ge 1$.\nOn entry, ${\\mathbf{ng}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ng}}\\ge 1$.\nOn entry, ${\\mathbf{niter}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{niter}}\\ge 1$.\nNE_INT_2\nOn entry, ${\\mathbf{nvar}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{m}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: $1\\le {\\mathbf{nvar}}\\le {\\mathbf{m}}$.\nNE_INTERNAL_ERROR\nAn internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.\nSee Section 7.5 in the Introduction to the NAG Library CL Interface for further information.\nNE_MAT_NOT_POS_DEF\nA covariance matrix is not positive definite, try a different initial allocation.\nNE_NO_LICENCE\nYour licence key may have expired or may not have been installed correctly.\nSee Section 8 in the Introduction to the NAG Library CL Interface for further information.\nNE_OBSERVATIONS\nOn entry, ${\\mathbf{n}}=⟨\\mathit{\\text{value}}⟩$ and $p=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{n}}>p$, the number of parameters, i.e., too few objects have been supplied for the model.\nNE_PROBABILITY\nOn entry, row $⟨\\mathit{\\text{value}}⟩$ of supplied prob does not sum to $1$.\nNE_VAR_INCL_INDICATED\nOn entry, ${\\mathbf{nvar}}\\ne {\\mathbf{m}}$ and isx is invalid.\n\nNot applicable.\n\n## 8Parallelism and Performance\n\ng03gac is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.\ng03gac makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.\nPlease consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.\n\nNone.\n\n## 10Example\n\nThis example fits a Gaussian mixture model with pooled covariance structure to New Haven schools test data, see Table 5.1 (p. 118) in Hartigan (1975).\n\n### 10.1Program Text\n\nProgram Text (g03gace.c)\n\n### 10.2Program Data\n\nProgram Data (g03gace.d)\n\n### 10.3Program Results\n\nProgram Results (g03gace.r)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89725345,"math_prob":0.9998285,"size":268,"snap":"2022-40-2023-06","text_gpt3_token_len":73,"char_repetition_ratio":0.2159091,"word_repetition_ratio":0.30357143,"special_character_ratio":0.2835821,"punctuation_ratio":0.25757575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999992,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T03:06:42Z\",\"WARC-Record-ID\":\"<urn:uuid:0494f4cd-e4e2-497e-aee3-e3575b5f9033>\",\"Content-Length\":\"50096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c239af3c-fcf3-4c9e-b380-c94769e8a346>\",\"WARC-Concurrent-To\":\"<urn:uuid:017fc61c-dc14-44d8-8024-451412c9a378>\",\"WARC-IP-Address\":\"78.129.168.4\",\"WARC-Target-URI\":\"https://www.nag.com/numeric/nl/nagdoc_27.3/clhtml/g03/g03gac.html\",\"WARC-Payload-Digest\":\"sha1:HGAIAGQMK2VXQMKQ33IHJHKBMCQXEUN2\",\"WARC-Block-Digest\":\"sha1:JDXJRROHA3MASNQUJR53YWFRQPCFQYX6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337473.26_warc_CC-MAIN-20221004023206-20221004053206-00612.warc.gz\"}"} |
https://www.selfridges.com/CN/zh/cat/bottega-veneta-the-lug-elasticated-side-leather-boots_R03732278/?previewAttribute=CREAM | [
"以当地货币和语言购买\n\n• 澳大利亚 / 澳元 \\$\n• 加拿大 / 加元 \\$\n• 中国 / 人民币 ¥\n• 法国 / 欧元 €\n• 德国 / 欧元 €\n• 中国香港 / 港元 \\$\n• 爱尔兰 / 欧元 €\n• 意大利 / 欧元 €\n• 日本 / 日元 ¥\n• 科威特 / 美元 \\$\n• 中国澳门 / 港元 \\$\n• 荷兰 / 欧元 €\n• 卡塔尔 / 美元 \\$\n• 沙特阿拉伯 / 美元 \\$\n• 新加坡 / 新加坡元 \\$\n• 韩国 / 韩元 ₩\n• 西班牙 / 欧元 €\n• 台湾 / 新台币 \\$\n• 阿拉伯联合酋长国 / 美元 \\$\n• 英国 / 英镑 £\n• 美国 / 美元 \\$\n• 不符合您的要求?阅读更多\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n\n国际送货\n\nselfridges.com 上几乎所有的商品均可提供国际配送服务,您的订单可发往全世界 130 个国家/地区,包括北美、澳洲、中东及中国。\n\n• 阿尔及利亚\n• 安道尔\n• 安提瓜和巴布达\n• 阿鲁巴\n• 澳大利亚\n• 奥地利\n• 阿塞拜疆\n• 巴林\n• 孟加拉国\n• 巴巴多斯\n• 白俄罗斯\n• 比利时\n• 伯利兹\n• 百慕大\n• 玻利维亚\n• 博兹瓦纳\n• 文莱\n• 保加利亚\n• 柬埔寨\n• 加拿大\n• 开曼群岛\n• 智利\n• 中国大陆\n• 哥伦比亚\n• 哥斯达黎加\n• 克罗地亚\n• 塞浦路斯\n• 捷克共和国\n• 丹麦\n• 多米尼克\n• 多米尼加共和国\n• 厄瓜多尔\n• 埃及\n• 萨尔瓦多\n• 爱沙尼亚\n• 芬兰\n• 法国\n• 法属圭亚那\n• 德国\n• 直布罗陀\n• 希腊\n• 格林纳达\n• 瓜德罗普岛\n• 危地马拉\n• 根西岛\n• 圭亚那\n• 洪都拉斯\n• 香港\n• 匈牙利\n• 冰岛\n• 印度\n• 印度尼西亚\n• 爱尔兰\n• 以色列\n• 意大利\n• 牙买加\n• 日本\n• 泽西岛\n• 约旦\n• 哈萨克斯坦\n• 肯尼亚\n• 科威特\n• 老挝\n• 拉脱维亚\n• 黎巴嫩\n• 莱索托\n• 列支敦士登\n• 立陶宛\n• 卢森堡\n• 澳门\n• 马来西亚\n• 马尔代夫\n• 马耳他\n• 马提尼克岛\n• 马约特岛\n• 墨西哥\n• 摩纳哥\n• 蒙特塞拉特\n• 摩洛哥\n• 缅甸\n• 纳米比亚\n• 荷兰\n• 新西兰\n• 尼加拉瓜\n• 尼日利亚\n• 挪威\n• 阿曼\n• 巴基斯坦\n• 巴拿马\n• 巴拉圭\n• 秘鲁\n• 菲律宾\n• 波兰\n• 葡萄牙\n• 波多黎各\n• 卡塔尔\n• 留尼汪岛\n• 罗马尼亚\n• 卢旺达\n• 圣基茨与尼维斯\n• 圣卢西亚\n• 圣马丁岛(法属)\n• 圣马力诺\n• 沙特阿拉伯\n• 塞尔维亚\n• 新加坡\n• 斯洛伐克\n• 斯洛文尼亚\n• 南非\n• 韩国\n• 西班牙\n• 斯里兰卡\n• 苏里南\n• 斯威士兰\n• 瑞典\n• 瑞士\n• 台湾\n• 坦桑尼亚\n• 泰国\n• 特立尼达和多巴哥\n• 土耳其\n• 乌干达\n• 乌克兰\n• 阿拉伯联合酋长国\n• 英国\n• 美国\n• 乌拉圭\n• 委内瑞拉\n• 越南\n\nBOTTEGA VENETA The Lug 侧面松紧皮靴\n\n¥8800.00\n\n*进口关税将在结算时显示",
null,
"CREAM\n\n• Bottega Veneta 皮靴\n\n• 100% 小牛皮\n\n• 直套式\n\n• 中筒牛皮廓形,堆叠凹凸鞋底,笨重圆头,松紧Side拼接,皮革拉片,对比鞋底\n\n• 鞋跟高度:2.2 英寸\n\n• 清洁:请遵照专业皮革清洁方法\n\n• 意大利制造\n\n• 此品牌尺码偏大。我们建议您选小一码\n\n• 鞋面:100% 小牛皮\n\n• 衬里:100% 小牛皮\n\n• 鞋底:100% 橡胶\n\n英国和欧洲\n\n¥90.00\n• 无限英国定时、指定日和标准配送\n• 英国境内次日配送(英国时间下午 6 点前下单)\n• 无限欧盟地区标准配送\n• 免费退货\n• 不受最低消费金额限制\n\n全球\n\n¥360.00\n• 订单金额超过¥ 360.00英国时间,指定日期和标准交货时间不受限制\n• 订单满¥ 360.00在全球范围内无限次送货\n\nRef: R03732278\n\n2020秋冬系列玩转抽象造型和实用色调 Bottega Veneta 通过高端时尚态度证明持久妆感的力量。超厚皮革 The Lug运动鞋 靴子采用植鞣革小牛皮制成,搭配看似轻盈的橡胶鞋底。采用对比色天然皮革衬里和前后拉片设计,打造时尚造型。"
] | [
null,
"https://images.selfridges.com/is/image/selfridges/R03732278_CREAM_SW",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9961998,"math_prob":0.57320094,"size":469,"snap":"2022-05-2022-21","text_gpt3_token_len":450,"char_repetition_ratio":0.21290323,"word_repetition_ratio":0.0,"special_character_ratio":0.5138593,"punctuation_ratio":0.013333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9636162,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T17:38:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e6c4c970-b578-40c0-81b0-279d39001653>\",\"Content-Length\":\"294430\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c6a5cb5-81aa-400c-ac4f-e975618b100a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c892a4f9-06c2-49f5-85a8-2c6eb7009255>\",\"WARC-IP-Address\":\"104.18.20.144\",\"WARC-Target-URI\":\"https://www.selfridges.com/CN/zh/cat/bottega-veneta-the-lug-elasticated-side-leather-boots_R03732278/?previewAttribute=CREAM\",\"WARC-Payload-Digest\":\"sha1:NZIXXQSQA474XKK66LUVHLCILOCPU4MF\",\"WARC-Block-Digest\":\"sha1:DZECZF2AQJQPBAS56PZSBT6WZSBIQ3T7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306301.52_warc_CC-MAIN-20220128152530-20220128182530-00460.warc.gz\"}"} |
https://lightcolourvision.org/reference-library/angle-of-refraction/summary/ | [
"# Angle of refraction\n\nThe angle of refraction measures the angle to which light bends as it passes across the boundary between different media.\n\n• The angle of refraction is measured between a ray of light and an imaginary line called the normal.\n• In optics, the normal is a line drawn on a ray diagram perpendicular to, so at a right angle to (900), to the boundary between two media.\n• If the boundary between the media is curved then the normal is drawn perpendicular to the boundary.\n• Snell’s law is a formula used to describe the relationship between the angles of incidence and refraction when referring to light passing across the boundary between two different transparent media, such as water, glass, or air.\n• In optics, the law is used in ray diagrams to compute the angles of incidence or refraction, and in experimental optics to find the refractive index of a medium."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9304476,"math_prob":0.9710732,"size":860,"snap":"2021-21-2021-25","text_gpt3_token_len":174,"char_repetition_ratio":0.1542056,"word_repetition_ratio":0.0,"special_character_ratio":0.20465116,"punctuation_ratio":0.086419754,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.989136,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T00:01:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fa49980a-943e-491a-80ae-86a932c08e7c>\",\"Content-Length\":\"786746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8634cf4-f561-473c-98ce-de0ea4062415>\",\"WARC-Concurrent-To\":\"<urn:uuid:3151e032-db6e-44bc-946b-bff311fffdd9>\",\"WARC-IP-Address\":\"104.21.85.254\",\"WARC-Target-URI\":\"https://lightcolourvision.org/reference-library/angle-of-refraction/summary/\",\"WARC-Payload-Digest\":\"sha1:GNGF2BYMU7323MPSXLNLGUDW6LRSLRUI\",\"WARC-Block-Digest\":\"sha1:R3F2GHE5YTHI36KQDHKU5JYCLYTI7KG2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488525399.79_warc_CC-MAIN-20210622220817-20210623010817-00188.warc.gz\"}"} |
https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=23094 | [
"M.Sc Student Shalem Iddit Multilevel Two-dimensional Phase Unwrapping Department of Computer Science Professor Irad Yavneh",
null,
"Abstract\n\nIn this work we study the problem of two-dimensional phase unwrapping and propose two algorithms for its solution. Two-dimensional phase unwrapping is the problem of deducing unambiguous \"phase\" from values known only modulo 2π. Many authors agree that the objective of phase unwrapping should be to find a weighted minimum of the number of places where adjacent (discrete) phase values differ by more than π (called discontinuities). This NP-hard problem is of considerable practical interest, largely due to its importance in interpreting data acquired with synthetic aperture radar (SAR) interferometry. Consequently, many heuristic algorithms have been proposed. Our first algorithm considers the wrapped phase array as a grey level image and applies the image segmentation problem to this image. Based on the segmentation, we develop an efficient relaxation method for decreasing discontinuities in the phase image. The second algorithm presents an efficient multi-level graph algorithm for the approximate solution of an equivalent problem―minimal residue cut in the dual graph."
] | [
null,
"https://www.graduate.technion.ac.il/Theses/Images\\pdficon_large.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8789382,"math_prob":0.91863364,"size":1263,"snap":"2020-24-2020-29","text_gpt3_token_len":242,"char_repetition_ratio":0.111199364,"word_repetition_ratio":0.0,"special_character_ratio":0.17181315,"punctuation_ratio":0.06030151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96012926,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T17:01:35Z\",\"WARC-Record-ID\":\"<urn:uuid:37ff8914-f753-4e9d-b013-4f28bd490972>\",\"Content-Length\":\"6686\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05a43a70-bb5c-4b41-8b72-bd1619d6e621>\",\"WARC-Concurrent-To\":\"<urn:uuid:977d4966-c676-4ba2-9f3e-d25e01acc8c8>\",\"WARC-IP-Address\":\"132.69.246.237\",\"WARC-Target-URI\":\"https://www.graduate.technion.ac.il/Theses/Abstracts.asp?Id=23094\",\"WARC-Payload-Digest\":\"sha1:YNMQA3Y5VK6U65YYHA2E27VBODT5JDO6\",\"WARC-Block-Digest\":\"sha1:5YORDXBSQKGNVYXO645QIKMYEWEMBVKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655881763.20_warc_CC-MAIN-20200706160424-20200706190424-00371.warc.gz\"}"} |
https://mashalscienceacademy.com/numerical-problem-8-alternating-current-circuit-physics-12-msa/ | [
"Problem 8: A coil of pure inductance 318 mH is connected in series with a pure resistance of 75 ohms. The voltage across resistor is 150 V and the frequency of power supply is 50 Hz. Calculate the voltage of power source and the phase angle.\n\nSolution\n\nGiven\n\nInductance, L = 318 mH = 0.318 H\nResistance of resistor, R = 75 Ω\nVoltage drop of resistor, VR = 150 V\nFrequency of voltage, f = 50 Hz\n\nFind: Voltage of the power source, V and phase angle ϕ.\n\nRequired",
null,
"(1) Now to find V, we know the value of VR. However, VL needs to be calculated. For this purpose, we calculate the current which passes through the series circuit and the inductive reactance, XL.",
null,
"Now put the values in the formula, VL = IXL,\n\nVL = 2 × 100 = 200 V\n\nNow use formula (1) to find V,",
null,
"To find phase angle ϕ,",
null,
"",
null,
""
] | [
null,
"https://mashalscienceacademy.com/wp-content/uploads/2020/11/p12c5num45.png",
null,
"https://mashalscienceacademy.com/wp-content/uploads/2020/11/p12c5num46.png",
null,
"https://mashalscienceacademy.com/wp-content/uploads/2020/11/p12c5num47.png",
null,
"https://mashalscienceacademy.com/wp-content/uploads/2020/11/p12c5num49.png",
null,
"https://mashalscienceacademy.com/wp-content/uploads/2020/11/p12c5num48.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8134387,"math_prob":0.9990174,"size":1014,"snap":"2023-40-2023-50","text_gpt3_token_len":295,"char_repetition_ratio":0.102970295,"word_repetition_ratio":0.033519555,"special_character_ratio":0.27712032,"punctuation_ratio":0.1421801,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988616,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T02:57:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4cb80b85-7347-4dbc-aebc-d3c642745da3>\",\"Content-Length\":\"75684\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f811a95-81dd-40a9-9664-e0ee35fcaf8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb770b3f-cefb-4476-824c-71fe4836b6cb>\",\"WARC-IP-Address\":\"68.178.226.214\",\"WARC-Target-URI\":\"https://mashalscienceacademy.com/numerical-problem-8-alternating-current-circuit-physics-12-msa/\",\"WARC-Payload-Digest\":\"sha1:HOGTPCEC26BQCAGSQAXKZ3373YWVBZIY\",\"WARC-Block-Digest\":\"sha1:ZC4H6R7AZ3DHF53YOWEDNLOPOMPXWB2P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100632.0_warc_CC-MAIN-20231207022257-20231207052257-00632.warc.gz\"}"} |
https://www.wyzant.com/resources/answers/topics/cosine | [
"125 Answered Questions for the topic Cosine\n\n02/11/21\n\n12/07/20\n\n#### Pre Calculus Question\n\nGiven 𝑠𝑖𝑛𝜃 = −1 and 𝑐𝑜𝑠𝜃 > 0, find (a) 𝑠𝑖𝑛2𝜃, and (b) 𝑐𝑜𝑠2𝜃\n\n12/07/20\n\n#### What is the max. temperature (Celsius) that occurs on the second day?\n\nf(t) = 18-4cos(π/12(t+1))t stands for the hour\n\n10/18/20\n\n#### If sin A = 2/3 and sin B = 1/3 , and A and B are in quadrant I, find sin (A + B).\n\nI need solutions and the steps. I cannot understand it even after researching for hours.\n\n08/30/20\n\n#### I need help figuring out how to solve this\n\nA gasoline and air mixture is ignited by spark plugs, which forces the pistons to move up and down, rotating the crankshaft. The animation on the left shows a single piston moving up and... more\n\n05/18/20\n\n#### Law of sines and cosines\n\nFind b, given a=10 , c=8 and m<B=40o\n\n05/18/20\n\n#### Law of sines and cosines\n\nIn Triangle ABC, m<A=30o , m<B=80o , a=18in. Find the length of C.\n\n05/15/20\n\n#### Given cos(t)=-7/25 for pi/2 < t < pi, find the value of sin (t)\n\nI appreciate all of the tutors and experts on here, thanks so much for saving me and many other students.\n\n04/04/20\n\n#### are sine and cosine equal for non-acute complementary angles?\n\nfor example, when writing a proof, is it logical to write the following:cos(-ø)=sin(π/2+ø) (∵ the cosine of an angle is equal to the sine of its complementary angle)I ask because this... more\n\n03/18/20\n\n#### Trignometric Functions\n\nFind the value of each trigonometric function using the given angle and a calculator. Round answers to three decimal places.Cos 77 °\n\n12/04/19\n\n#### Given Cos (2x)= -24/25, 2nd Quadrant (pi/2 <x< pi); What is Sin(x) ?\n\nAm I missing something big here? I've gotten around to making sin(2x)=7/25 but I don't understand how to go from double angle to single angle using identities.\n\n11/26/19\n\n#### Consider the work below for verifying the given identities. One step in each identity is missing. Fill in the box showing the correct work for the missing step\n\nFirst Stepcot θ sec θ csc2 θ - cot3 θ sec θ = csc θSecond Step (missing step)Third Stepcot θ sec θ (1) = csc θFourth Stepcos θ 1------- * ------ = csc θsin θ... more\n\n11/26/19\n\n#### If sinx cosx = 2/5 and tan x = 1/2, then sin^2x =\n\nIf sinx cosx = 2/5 and tan x = 1/2, then sin2 x= A. 1/5 B. 2/5 C. sqrt(5)/5 D. 2sqrt(5)/5 E. 3/5\n\n07/09/19\n\n#### Write −6sin(8t)+5cos(8t) in the form Asin(Bt+ϕ) using sum or difference formulas.\n\n-6sin(8t)+5cos(8t)= Please provide all steps! Thank You!\n\n07/09/19\n\n#### If cos^2sin^4x= A+Bcos2x+Ccos4x+Dcos2xcos4x, then A= B= C= D=\n\nPlease tell me how you got it by showing steps!! what does A, B, C, and D equal in the second equation?\nCosine Algebra 2 Sine\n\n07/08/19\n\n#### Sine and Cosine\n\nWhy can sine and cosine never be greater than 1?\n\n07/08/19\n\n#### Sine Cosine and Tangent\n\nWhy can sine and cosine never be greater than 1? Why is it that cosecant and secant should be greater than 1? Why are tangent and cotangent sometimes undefined?\n\n07/07/19\n\n#### If sinx+cosx=sqrt(2)sin(x+Aπ), then the number 0<A=___________ <2\n\nplease provide all steps for the solution\n\n07/07/19\n\n#### Use the product to sum formula to fill in the blanks in the identity below: sin(7x)cos(4x)=1/2(sin(_____x)+sin(______x))\n\nplease provide all steps for the solution\n\n07/07/19\n\n#### Use the product-to-sum formula to find the exact value: If cos37.5∘sin7.5∘=(√A−B)/4, then A= B=\n\nplease provide all steps for the solution\n\n07/07/19\n\n#### Use the product-to-sum formula to find the exact value: If 2sin52.5∘sin97.5∘=2√+A√2, then A=\n\nplease provide all steps for the solution\n\n07/07/19\n\n#### Use the sum-to-product formula to simplify the expression: If sin41∘+sin19∘=sinA∘,0<A<90, then A= ____degrees\n\nplease provide all steps for the solution!!!!\n\n07/02/19\n\n#### Express in terms of x: sin(2arctan(x))\n\nPLEASE give clear steps! Thank You!If the answers ends up having square roots, can you simply it where it does not\n\n07/02/19\n\n#### Find tan(arcsin(4/6) + arccos(10/17))\n\nPLEASE show clear steps!!If the answers ends up having square roots, can you simply it where it does not\n\n## Still looking for help? Get the right answer, fast.\n\nGet a free answer to a quick problem.\nMost questions answered within 4 hours.\n\n#### OR\n\nChoose an expert and meet online. No packages or subscriptions, pay only for the time you need."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.77598643,"math_prob":0.97840923,"size":4615,"snap":"2021-21-2021-25","text_gpt3_token_len":1509,"char_repetition_ratio":0.114075035,"word_repetition_ratio":0.14596671,"special_character_ratio":0.30574214,"punctuation_ratio":0.09811695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955409,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T01:10:52Z\",\"WARC-Record-ID\":\"<urn:uuid:ced6a352-fd63-4121-9708-71e695fc1640>\",\"Content-Length\":\"97504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e6e6ef9-e034-4b98-b6b7-c845765ddd26>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a30ac6f-65d0-4ff0-8cf8-cde42a54804d>\",\"WARC-IP-Address\":\"34.117.195.90\",\"WARC-Target-URI\":\"https://www.wyzant.com/resources/answers/topics/cosine\",\"WARC-Payload-Digest\":\"sha1:J3JBLO67ZCNKHEFA5EHEQUKDDNT3HXSB\",\"WARC-Block-Digest\":\"sha1:J6ATVVNGTZBWF6VNTBGTKLGB3VVS673Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643354.47_warc_CC-MAIN-20210618230338-20210619020338-00218.warc.gz\"}"} |
https://physics.stackexchange.com/questions/142912/physical-interpretation-of-order-of-tensor-indices/143079 | [
"# Physical interpretation of order of tensor indices\n\nUsing positional index notation with tensors is common. For example, the following simple equation from Carroll's Spacetime and Geometry text (eq. 3.146): $$R = R^\\mu_{\\,\\,\\mu} = g^{\\mu\\nu}R_{\\mu\\nu}$$ Where the Ricci scalar summation, $R^\\mu_{\\,\\,\\mu}$ uses positional offsets on the upper and lower indices.\n\nSo, I think I understand the usage and motivation for this positional offset. I am curious and thus the puzzle as to why so many texts (almost all of the current raft of GR texts) totally ignore any type of explanation for the use of positional index notation. And, some texts ignore it altogether -- that is, not using positional notation at all.\n\nApparently, authors seem to believe that this notation should be obvious (and, in some ways it is) but I am puzzled by the lack of explanation. Is there a formal definition of the motivation and \"need\" for positional index notation?\n\n• Are you asking why $R$ is used instead of $R^{\\mu}_{\\,\\,\\mu}$? The reason is this: $R^{\\mu}_{\\,\\,\\mu} = R^{0}_{\\,\\,0} + R^{1}_{\\,\\,1} + .... + R^{n}_{\\,\\,n}$. All these are scalars so they are brought together in a single term, just how you can set $a + b + c$ equal to another number $d$. This results in your equations having the bare minimum of indices needed for obvious reasons. – Constandinos Damalas Oct 24 '14 at 16:49\n• Why some indices are up and some down for example? – Constandinos Damalas Oct 24 '14 at 17:00\n• @PhotonicBoom, I guess my question was not as clear as I thought it was. No, I understand upper and lower index -- traditionally referred to as Contravariant (upper) and Covariant (lower). I also understand dual vector spaces, convectors, and other aspects of tensors and notation. I am specifically referring to the slot positional notation as where the upper and lower index are in columnar positions with spacing to maintain these slot positions. This is the reason I chose this equation on the Ricci Scalar as an example in that it used such offsets in the $R_\\mu_{\\,\\,\\mu}$ summation term. – K7PEH Oct 24 '14 at 17:04\n• i think the answer is to make clear which of the 2 indices has been made contra-variant or co-variant (since generaly the 2 indices $\\mu$ and $\\nu$ may play different parts). This if i understand correctly, positional notation havent heard it – Nikos M. Oct 24 '14 at 17:11\n• OP writes (v1): So, I think I understand the usage and motivation for this positional offset. I am curious and thus the puzzle as to why so many texts (almost all of the current raft of GR texts) totally ignore any type of explanation for the use of positional index notation. It seems OP understands the notational issue and is only asking a primarily opinion-based question about the author's choice of presentation. – Qmechanic Oct 25 '14 at 19:37\n\nOn a two-index tensor, swapping the two indices is equivalent to transposing a matrix.\n\nYou may not see many authors spending a lot of effort on this issue simply because an awful lot of the tensors we deal with are symmetric. This includes the metric, Ricci tensor, Einstein tensor, and stress-energy tensor. Therefore there is no special interest in discussing transposition. Sometimes there is some physical interest in understanding why these tensors must be symmetric, e.g., understanding why the stress-energy tensor is symmetric leads to ideas about torsion.\n\nWe also run into some tensors that are antisymmetric. Here the physical interpretation of swapping indices is generally something to do with choosing an orientation. I think texts that discuss antisymmetric tensors do usually give this interpretation. E.g., the electromagnetic field tensor is antisymmetric, and this relates to the right-hand rule for magnetic forces.\n\nTo make an analogy, real numbers can be negative or positive. A physics textbook will not give a general physical interpretation of what is meant by a negative number, because there is none. But it will usually give an interpretation in specific physical cases, e.g., negative velocities or a negative temperature on the Celsius scale.\n\n• The horizontal position of indices is important for a tensor that is not totally symmetric, e.g., the EM field strength $F_{\\mu\\nu}$ or the Riemann curvature tensor $R_{\\mu\\nu\\lambda\\kappa}$, etc, in order to properly identify which indices get raised or lowered.\n• As usual, be prepared that different authors use different conventions and notations. E.g. different authors order the indices of the Riemann curvature tensor $R_{\\mu\\nu\\lambda\\kappa}$ differently."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8807207,"math_prob":0.9653854,"size":892,"snap":"2020-45-2020-50","text_gpt3_token_len":208,"char_repetition_ratio":0.14977477,"word_repetition_ratio":0.0,"special_character_ratio":0.24215247,"punctuation_ratio":0.12138728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T13:47:11Z\",\"WARC-Record-ID\":\"<urn:uuid:932df546-8fac-41a3-8ada-69bb4a49c60f>\",\"Content-Length\":\"159360\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6153e2fc-de2c-4a6d-b91a-65b9ad695d45>\",\"WARC-Concurrent-To\":\"<urn:uuid:99c101ea-e95a-498e-8b06-4f95956425ba>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/142912/physical-interpretation-of-order-of-tensor-indices/143079\",\"WARC-Payload-Digest\":\"sha1:LO47SOP75KYCFY557A43GEHR3XXUDSJE\",\"WARC-Block-Digest\":\"sha1:FLWZE4PMOASLQJG5WRS3KROXBSRUUXK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891228.40_warc_CC-MAIN-20201026115814-20201026145814-00564.warc.gz\"}"} |
https://www.shaalaa.com/question-bank-solutions/the-v-i-graph-for-a-series-combination-and-for-a-parallel-combination-of-two-resistors-is-shown-in-the-figure-below-which-of-the-two-a-or-b-system-of-resistors-resistors-in-series_18092 | [
"# The V-i Graph for a Series Combination and for a Parallel Combination of Two Resistors is Shown in the Figure Below. Which of the Two a Or B. - Physics\n\nThe V-I graph for a series combination and for a parallel combination of two resistors is shown in the figure below. Which of the two A or B. represents the parallel combination? Give reasons for your answer.",
null,
"#### Solution 1\n\nFor the same change in I, change in V is less for the straight line A than for the straight line B (i.e. the straight line A is less steep than B). The straight line A represents small resistance, while the straight line B represents more resistance. The equivalent resistance is less in a parallel combination than in a series combination. So, line A represents a parallel combination.\n\n#### Solution 2\n\n‘A’ shows parallel combination because in case of parallel combination, the resistance is less for their potential is less, currents is more.\n\nConcept: System of Resistors - Resistors in Series\nIs there an error in this question or solution?\n\n#### APPEARS IN\n\nICSE Class 10 Physics\nChapter 7 Electricity\nFigure Based Short Answers | Q 6"
] | [
null,
"https://www.shaalaa.com/images/_4:b8097a601ecf4c69b15defa37831e4f0.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9201342,"math_prob":0.8304708,"size":874,"snap":"2022-05-2022-21","text_gpt3_token_len":182,"char_repetition_ratio":0.17586207,"word_repetition_ratio":0.0,"special_character_ratio":0.201373,"punctuation_ratio":0.105882354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99500763,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T09:17:50Z\",\"WARC-Record-ID\":\"<urn:uuid:b756e4dd-91fc-4a74-9e2d-b9493c94ff52>\",\"Content-Length\":\"41767\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8803a243-25ef-4ce7-b777-1a27198e2dcc>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf31fe2c-de6e-4d0d-8c72-7f62ab009377>\",\"WARC-IP-Address\":\"104.26.13.31\",\"WARC-Target-URI\":\"https://www.shaalaa.com/question-bank-solutions/the-v-i-graph-for-a-series-combination-and-for-a-parallel-combination-of-two-resistors-is-shown-in-the-figure-below-which-of-the-two-a-or-b-system-of-resistors-resistors-in-series_18092\",\"WARC-Payload-Digest\":\"sha1:MUVJOKCWPMHLCZ2GXTDM5DQMZDO36J5A\",\"WARC-Block-Digest\":\"sha1:S3RKASNYC2CDS47H6ADBAILNBJWRFKRS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539049.32_warc_CC-MAIN-20220521080921-20220521110921-00409.warc.gz\"}"} |
http://gipnoz.info/mean-median-mode-worksheets-with-answers/ | [
"# Mean Median Mode Worksheets With Answers\n\nmean mode median and range worksheet answers best rounding decimals of answ.\n\nmean median mode and range worksheets kid answers key 7 grouped data worksheet with doc workshe.\n\nmean mode median and range worksheet answers worksheets image collections for coll.\n\nmean median mode range worksheets with answers worksheet of wit.\n\nmedian mode range worksheet marvelous free math worksheets for grade mean of ran.\n\nmean median mode and range lovely worksheet activity sheet m 2 mathantics answers.\n\ncollection of standard deviation worksheet with answers download 1 mathantics mean median mode on deviat.\n\nfree collection of grade 7 math worksheets mean median mode 1 range with answers pdf downl.\n\nmean mode median and range worksheet answers along with best school math find graph tally images on of mo.\n\nmean median mode range worksheets with answers or worksheet of workshee.\n\nmean median mode range worksheets and worksheet by teaching resources me.\n\nmean median mode range worksheets free word problems worksheet with answers pdf proble.\n\nfree collection of grade 5 math worksheets mean median mode 1 ungrouped data worksheet with answers pdf mo.\n\ndelighted how do you find the mean median mode and range free best grade math worksheets calculating worksheet.\n\n9 best images of median and mode worksheets grade 4 math mean free range answers grouped data workshee.\n\nmean median mode worksheets and range word problems trigonometry with answers grouped data worksheet.\n\nmean median mode range worksheet free worksheets answers grouped data with pdf.\n\nmath worksheets mean median mode elegant free with answers for all download and downloa.\n\nmean median mode range worksheets answers key 8 mathworksheets4kids ke.\n\ngrade ma word problems worksheets mean median mode range grad on free for the volume math.\n\nmean median mode range worksheets with answers worksheet s pdf workshee.\n\ninterpreting diagrams worksheets luxury mean median mode range of answ.\n\nmean median mode practice worksheet range answers pdf 7 best images of and worksheets.\n\nmean median mode range maths worksheet a answer 130a answers estimate of the.\n\nfree worksheets library download and print on mean median mode by math crush a rate of excel range worksheet ks images for kids drills media inequality word pro.\n\nmean median mode range worksheets with answers answe.\n\nmean median mode range worksheets and answers ls for medium size grade math worksheet ai.\n\nmean median mode range worksheets answers key the best image collection download and share 0 coloring workshe.\n\nmean median mode range worksheets worksheet answers mathantics 9 best images of and grade.\n\nf mean median and mode together ungrouped data worksheet with answers pdf comparing.\n\nmeasure of central tendency worksheet math answers worksheets mean median mode with d.\n\nfresh mean mode median and range worksheet answers of awesome 1 media."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.777762,"math_prob":0.8537216,"size":2846,"snap":"2019-13-2019-22","text_gpt3_token_len":523,"char_repetition_ratio":0.37086558,"word_repetition_ratio":0.11685393,"special_character_ratio":0.17498243,"punctuation_ratio":0.06652807,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961472,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-27T10:02:12Z\",\"WARC-Record-ID\":\"<urn:uuid:1a82319a-8e2b-454e-addb-e5d229f88e7c>\",\"Content-Length\":\"42644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc06b1e6-128a-4bd5-acd8-8f2b158c5c65>\",\"WARC-Concurrent-To\":\"<urn:uuid:12718a03-fa9f-4534-b510-df0890600ebe>\",\"WARC-IP-Address\":\"104.31.84.8\",\"WARC-Target-URI\":\"http://gipnoz.info/mean-median-mode-worksheets-with-answers/\",\"WARC-Payload-Digest\":\"sha1:MO6DBAWCODRXXMTRXFO7HM6REASUVACJ\",\"WARC-Block-Digest\":\"sha1:D3OZF6WWWUT335T4UPNHIS2VVTALQFQY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232262311.80_warc_CC-MAIN-20190527085702-20190527111702-00297.warc.gz\"}"} |
https://www.numbersaplenty.com/1726274 | [
"Search a number\nBaseRepresentation\nbin110100101011101000010\n310020201000002\n412211131002\n5420220044\n6101000002\n720446604\noct6453502\n93221002\n101726274\n11a79a80\n126b3002\n13485984\n1432d174\n1524174e\nhex1a5742\n\n1726274 has 8 divisors (see below), whose sum is σ = 2824848. Its totient is φ = 784660.\n\nThe previous prime is 1726273. The next prime is 1726289. The reversal of 1726274 is 4726271.\n\nIt is a sphenic number, since it is the product of 3 distinct primes.\n\nIt is an Ulam number.\n\nIt is a Curzon number.\n\nIt is not an unprimeable number, because it can be changed into a prime (1726273) by changing a digit.\n\nIt is a polite number, since it can be written in 3 ways as a sum of consecutive naturals, for example, 39212 + ... + 39255.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (353106).\n\n21726274 is an apocalyptic number.\n\n1726274 is a deficient number, since it is larger than the sum of its proper divisors (1098574).\n\n1726274 is a wasteful number, since it uses less digits than its factorization.\n\n1726274 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 78480.\n\nThe product of its digits is 4704, while the sum is 29.\n\nThe square root of 1726274 is about 1313.8774676506. The cubic root of 1726274 is about 119.9600329864.\n\nIt can be divided in two parts, 172 and 6274, that added together give a palindrome (6446).\n\nThe spelling of 1726274 in words is \"one million, seven hundred twenty-six thousand, two hundred seventy-four\".\n\nDivisors: 1 2 11 22 78467 156934 863137 1726274"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.842027,"math_prob":0.9736976,"size":1638,"snap":"2023-14-2023-23","text_gpt3_token_len":529,"char_repetition_ratio":0.1505508,"word_repetition_ratio":0.0070921984,"special_character_ratio":0.43528694,"punctuation_ratio":0.13719513,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99630165,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T08:36:59Z\",\"WARC-Record-ID\":\"<urn:uuid:2e784d8f-883f-4f7e-8cbc-e1db51feea98>\",\"Content-Length\":\"8059\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b25af344-f407-4fb0-bfbc-b4b528d9a192>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7981283-750f-48ab-b763-5a1ffe1ce882>\",\"WARC-IP-Address\":\"89.46.108.74\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/1726274\",\"WARC-Payload-Digest\":\"sha1:PQOZOUW2TI2U735M2G7CTIGPCQ63FBZD\",\"WARC-Block-Digest\":\"sha1:XXQYIAH6IKC5GT3QU77QDHXD2BFXD5GB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657144.94_warc_CC-MAIN-20230610062920-20230610092920-00177.warc.gz\"}"} |
https://publiclab.org/wiki/infragram-sandbox | [
"# Infragram Sandbox\n\n« Back to Infragram\n\nThe Infragram Sandbox is a browser based tool for experimenting with image compositing, intended for use with Public Lab Infragram cameras.\n\nRead this research note and watch this short video to get a quick idea of how to use it:\n\n## Infragrammar\n\nInfragram Sandbox uses simple math expressions, which are actually written in JavaScript. Here are some examples:\n\n• NDVI = (R-B)/(R+B) for -1..1, or for 0..1 (((R-B)/(R+B))+1)/2 in the Monochrome input\n• ENDVI = ((R+G)-(2*B))/((R+G)+(2*B)) which MaxMax uses for its vegetation stress cameras, in the Monochrome input\n• A colormapped NDVI, scaled to emphasize differentiation: ((R-B)/(R+B)-0.2)*-720, in the Hue input, and tweaking the 0.2 value to between 0.1 and 0.9 (read more here)\n\n### Math\n\nInfragrammar can use JavaScript math functions. These include:\n\n• Math.log() - logarithms\n• Math.abs() - absolute value\n• Math.sin() - sine\n\n### Sliders\n\nYou can also link an equation to a slider; for now we are using the letter \"X\" to represent this value, but in the future, any string represented as {mystring} will generate a slider called \"mystring\". Move the slider to try different values in real time.",
null,
""
] | [
null,
"http://publiclab.org/system/images/photos/000/001/174/medium/Screen_Shot_2013-08-17_at_11.52.00_AM.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8419157,"math_prob":0.87155986,"size":1162,"snap":"2021-31-2021-39","text_gpt3_token_len":311,"char_repetition_ratio":0.103626944,"word_repetition_ratio":0.011235955,"special_character_ratio":0.26419964,"punctuation_ratio":0.12931034,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99137473,"pos_list":[0,1,2],"im_url_duplicate_count":[null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T21:16:14Z\",\"WARC-Record-ID\":\"<urn:uuid:2433ed1f-6c34-4d9d-a7d6-addde58d4334>\",\"Content-Length\":\"71534\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7fb53a6-4abb-4fac-8137-94f85fec52d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:42a579b4-82b2-4946-bd27-e8093ae2a517>\",\"WARC-IP-Address\":\"162.209.105.67\",\"WARC-Target-URI\":\"https://publiclab.org/wiki/infragram-sandbox\",\"WARC-Payload-Digest\":\"sha1:NMGXWBJ57WKSHM3JKNMHHWQHA7PQW5V4\",\"WARC-Block-Digest\":\"sha1:UPAXZK5C4VUATLP4PXSAUY6RC3I5AHIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057227.73_warc_CC-MAIN-20210921191451-20210921221451-00567.warc.gz\"}"} |
https://phennydiscovers.com/index.php/hagtixtnhen9 | [
"# Math question answer app\n\nApps can be a great way to help students with their algebra. Let's try the best Math question answer app. Our website can help me with math work.\n\n## The Best Math question answer app",
null,
"Apps can be a great way to help learners with their math. Let's try the best Math question answer app. This can be simplified to x=log32/log8. By using the Powers Rule, you can quickly and easily solve for exponents. However, it is important to note that this rule only works if the base of the exponent is 10. If the base is not 10, you will need to use a different method to solve for the exponent. Nevertheless, the Powers Rule is a useful tool that can save you time and effort when solving for exponents.\n\nSolving for x with fractions can be tricky, but there are a few steps that can make the process simpler. First, it is important to understand that when solving for x, the goal is to find the value of x that will make the equation true. In other words, whatever value is plugged into the equation in place of x should result in a correct answer. With this in mind, the next step is to create an equation using only fractions that has the same answer no matter what value is plugged in for x. This can be done by cross-multiplying the fractions and setting the two sides of the equation equal to each other. Once this is done, the final step is to solve for x by isolating it on one side of the equation. By following these steps, solving for x with fractions can be much less daunting.\n\nHard math equations can be a challenge to solve, but the feeling of satisfaction that comes from finding the answer is well worth the effort. There are a variety of techniques that can be used to solve hard math equations, and often the best approach is to try a few different methods until one works. However, it is important to persevere and not give up if the answer isn't immediately apparent. With a little perseverance, even the most difficult equation can be solved. Hard math equations with answers can be a great way to challenge yourself and keep your mind sharp.So don't be discouraged if you find yourself stuck on a hard math equation - with a little patience and persistence, you'll be able to find the answer you're looking for.\n\nHow to solve for roots: There are several ways to solve for roots, or zeros, of a polynomial function. The most common method is factoring. To factor a polynomial, one expands it into the product of two linear factors. This can be done by grouping terms, by difference of squares, or by completing the square. If the polynomial cannot be factored, then one may use synthetic division to divide it by a linear term. Another method that may be used is graphing. Graphing can show where the function intersects the x-axis, known as the zeros of the function. Graphing can also give an approximate zero if graphed on a graphing calculator or computer software with accuracy parameters. Finally, numerical methods may be used to find precise zeros of a polynomial function. These include Newton's Method, the Bisection Method, and secant lines. Knowing how to solve for roots is important in solving many real-world problems.\n\n## Instant support with all types of math",
null,
"Idk. why anybody would not like this app there are a few things the app makers could fix but it's really good I wish they had more details on how to do the problem and the camera is okay I like it before you had to click the actual bottom but overall love this app and can-do fraction, multiplication, division, and a lot more different types of problems I do also wish they give. You videos on how to do the problems. so much for this app I hope my comment helps any other people how use it.\n\nLucia Morris",
null,
"Really helpful tool as it not only solves pretty much any problem you throw at it but also shows you every step in the process (unlike mathway, which requires you to pay for that function). You do have to pay for more explanation for each step and some other cool features but hey, at least they still give you the steps for free (which really helps when doing call homework as I can spot where I got it wrong)\n\nUlrike Roberts\n\nSolve math problems with camera Solve for x tangent circle calculator Free math tutor app Help with math Free math solver with steps"
] | [
null,
"https://phennydiscovers.com/PSY6320a40ff2880/author-profile.jpg",
null,
"https://phennydiscovers.com/PSY6320a40ff2880/testimonial-one.jpg",
null,
"https://phennydiscovers.com/PSY6320a40ff2880/testimonial-two.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9622255,"math_prob":0.9340302,"size":4135,"snap":"2022-40-2023-06","text_gpt3_token_len":875,"char_repetition_ratio":0.1164367,"word_repetition_ratio":0.047808766,"special_character_ratio":0.20628779,"punctuation_ratio":0.08513189,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.989618,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T09:13:43Z\",\"WARC-Record-ID\":\"<urn:uuid:c826b92b-cdbf-4d2a-bac3-7c52981ad78b>\",\"Content-Length\":\"26126\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63bf1273-be2c-4950-9cbe-bcf7b7c85586>\",\"WARC-Concurrent-To\":\"<urn:uuid:dbef1152-936e-475c-95f5-92d6a70e112d>\",\"WARC-IP-Address\":\"67.21.87.7\",\"WARC-Target-URI\":\"https://phennydiscovers.com/index.php/hagtixtnhen9\",\"WARC-Payload-Digest\":\"sha1:O3YB7HIL3JYDDTHQUI57KDKCCYFPIG6Q\",\"WARC-Block-Digest\":\"sha1:IIAQ62267IWNHKYZSWHC3A2JKW2T4SS4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337404.30_warc_CC-MAIN-20221003070342-20221003100342-00408.warc.gz\"}"} |
https://www.colorhexa.com/182e22 | [
"# #182e22 Color Information\n\nIn a RGB color space, hex #182e22 is composed of 9.4% red, 18% green and 13.3% blue. Whereas in a CMYK color space, it is composed of 47.8% cyan, 0% magenta, 26.1% yellow and 82% black. It has a hue angle of 147.3 degrees, a saturation of 31.4% and a lightness of 13.7%. #182e22 color hex could be obtained by blending #305c44 with #000000. Closest websafe color is: #003333.\n\n• R 9\n• G 18\n• B 13\nRGB color chart\n• C 48\n• M 0\n• Y 26\n• K 82\nCMYK color chart\n\n#182e22 color description : Very dark (mostly black) cyan - lime green.\n\n# #182e22 Color Conversion\n\nThe hexadecimal color #182e22 has RGB values of R:24, G:46, B:34 and CMYK values of C:0.48, M:0, Y:0.26, K:0.82. Its decimal value is 1584674.\n\nHex triplet RGB Decimal 182e22 `#182e22` 24, 46, 34 `rgb(24,46,34)` 9.4, 18, 13.3 `rgb(9.4%,18%,13.3%)` 48, 0, 26, 82 147.3°, 31.4, 13.7 `hsl(147.3,31.4%,13.7%)` 147.3°, 47.8, 18 003333 `#003333`\nCIE-LAB 16.814, -12.175, 5.033 1.642, 2.264, 1.864 0.285, 0.392, 2.264 16.814, 13.174, 157.54 16.814, -8.381, 5.747 15.045, -6.844, 3.187 00011000, 00101110, 00100010\n\n# Color Schemes with #182e22\n\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #2e1824\n``#2e1824` `rgb(46,24,36)``\nComplementary Color\n• #192e18\n``#192e18` `rgb(25,46,24)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #182e2d\n``#182e2d` `rgb(24,46,45)``\nAnalogous Color\n• #2e1819\n``#2e1819` `rgb(46,24,25)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #2d182e\n``#2d182e` `rgb(45,24,46)``\nSplit Complementary Color\n• #2e2218\n``#2e2218` `rgb(46,34,24)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #22182e\n``#22182e` `rgb(34,24,46)``\n• #242e18\n``#242e18` `rgb(36,46,24)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #22182e\n``#22182e` `rgb(34,24,46)``\n• #2e1824\n``#2e1824` `rgb(46,24,36)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #070c09\n``#070c09` `rgb(7,12,9)``\n• #0f1d16\n``#0f1d16` `rgb(15,29,22)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #213f2e\n``#213f2e` `rgb(33,63,46)``\n• #29503b\n``#29503b` `rgb(41,80,59)``\n• #326047\n``#326047` `rgb(50,96,71)``\nMonochromatic Color\n\n# Alternatives to #182e22\n\nBelow, you can see some colors close to #182e22. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #182e1d\n``#182e1d` `rgb(24,46,29)``\n• #182e1e\n``#182e1e` `rgb(24,46,30)``\n• #182e20\n``#182e20` `rgb(24,46,32)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #182e24\n``#182e24` `rgb(24,46,36)``\n• #182e26\n``#182e26` `rgb(24,46,38)``\n• #182e28\n``#182e28` `rgb(24,46,40)``\nSimilar Colors\n\n# #182e22 Preview\n\nText with hexadecimal color #182e22\n\nThis text has a font color of #182e22.\n\n``<span style=\"color:#182e22;\">Text here</span>``\n#182e22 background color\n\nThis paragraph has a background color of #182e22.\n\n``<p style=\"background-color:#182e22;\">Content here</p>``\n#182e22 border color\n\nThis element has a border color of #182e22.\n\n``<div style=\"border:1px solid #182e22;\">Content here</div>``\nCSS codes\n``.text {color:#182e22;}``\n``.background {background-color:#182e22;}``\n``.border {border:1px solid #182e22;}``\n\n# Shades and Tints of #182e22\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #040705 is the darkest color, while #f9fcfb is the lightest one.\n\n• #040705\n``#040705` `rgb(4,7,5)``\n• #0b140f\n``#0b140f` `rgb(11,20,15)``\n• #112118\n``#112118` `rgb(17,33,24)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #1f3b2c\n``#1f3b2c` `rgb(31,59,44)``\n• #254835\n``#254835` `rgb(37,72,53)``\n• #2c553f\n``#2c553f` `rgb(44,85,63)``\n• #336248\n``#336248` `rgb(51,98,72)``\n• #3a6e52\n``#3a6e52` `rgb(58,110,82)``\n• #407b5b\n``#407b5b` `rgb(64,123,91)``\n• #478865\n``#478865` `rgb(71,136,101)``\n• #4e956e\n``#4e956e` `rgb(78,149,110)``\n• #55a278\n``#55a278` `rgb(85,162,120)``\n• #5fab82\n``#5fab82` `rgb(95,171,130)``\n• #6cb28c\n``#6cb28c` `rgb(108,178,140)``\n• #79b996\n``#79b996` `rgb(121,185,150)``\n• #85c0a0\n``#85c0a0` `rgb(133,192,160)``\n• #92c6aa\n``#92c6aa` `rgb(146,198,170)``\n• #9fcdb4\n``#9fcdb4` `rgb(159,205,180)``\n• #acd4be\n``#acd4be` `rgb(172,212,190)``\n• #b9dac8\n``#b9dac8` `rgb(185,218,200)``\n• #c6e1d2\n``#c6e1d2` `rgb(198,225,210)``\n• #d3e8dc\n``#d3e8dc` `rgb(211,232,220)``\n• #e0efe6\n``#e0efe6` `rgb(224,239,230)``\n• #edf5f1\n``#edf5f1` `rgb(237,245,241)``\n• #f9fcfb\n``#f9fcfb` `rgb(249,252,251)``\nTint Color Variation\n\n# Tones of #182e22\n\nA tone is produced by adding gray to any pure hue. In this case, #232323 is the less saturated color, while #024420 is the most saturated one.\n\n• #232323\n``#232323` `rgb(35,35,35)``\n• #202623\n``#202623` `rgb(32,38,35)``\n• #1d2922\n``#1d2922` `rgb(29,41,34)``\n• #1b2b22\n``#1b2b22` `rgb(27,43,34)``\n• #182e22\n``#182e22` `rgb(24,46,34)``\n• #153122\n``#153122` `rgb(21,49,34)``\n• #133322\n``#133322` `rgb(19,51,34)``\n• #103621\n``#103621` `rgb(16,54,33)``\n• #0d3921\n``#0d3921` `rgb(13,57,33)``\n• #0b3b21\n``#0b3b21` `rgb(11,59,33)``\n• #083e21\n``#083e21` `rgb(8,62,33)``\n• #054120\n``#054120` `rgb(5,65,32)``\n• #024420\n``#024420` `rgb(2,68,32)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #182e22 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5449653,"math_prob":0.76265836,"size":3686,"snap":"2019-13-2019-22","text_gpt3_token_len":1622,"char_repetition_ratio":0.12873438,"word_repetition_ratio":0.010989011,"special_character_ratio":0.56918067,"punctuation_ratio":0.23385301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99375045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-24T15:55:21Z\",\"WARC-Record-ID\":\"<urn:uuid:27482fbc-ff6b-47f2-b7fa-21626a06de64>\",\"Content-Length\":\"36363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e832582-51df-4c8d-8d13-4aab16e09f25>\",\"WARC-Concurrent-To\":\"<urn:uuid:2fe92117-d6c7-4d4b-b832-daafbb48bf17>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/182e22\",\"WARC-Payload-Digest\":\"sha1:SQAKIPEAETRQ2L2R3KY75FRVR5VIWOZT\",\"WARC-Block-Digest\":\"sha1:RG6KNCEJAXTG4JZEDR4ANU4AXKUHBPZR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203462.50_warc_CC-MAIN-20190324145706-20190324171706-00500.warc.gz\"}"} |
http://safecurves.cr.yp.to/proof/750357943.html | [
"Primality proof for n = 750357943:\n\nTake b = 2.\n\nb^(n-1) mod n = 1.\n\n125059657 is prime.\nb^((n-1)/125059657)-1 mod n = 63, which is a unit, inverse 595522177.\n\n(125059657) divides n-1.\n\n(125059657)^2 > n.\n\nn is prime by Pocklington's theorem."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.64638084,"math_prob":0.9996574,"size":236,"snap":"2020-10-2020-16","text_gpt3_token_len":93,"char_repetition_ratio":0.1724138,"word_repetition_ratio":0.0,"special_character_ratio":0.5805085,"punctuation_ratio":0.18518518,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971274,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T21:31:54Z\",\"WARC-Record-ID\":\"<urn:uuid:0c6c615b-3a99-4383-912f-5992f5f607f7>\",\"Content-Length\":\"493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1767860d-1961-41cc-aaef-0ffa7e48255e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0574d1a-da4e-469b-bb95-93277568bf0d>\",\"WARC-IP-Address\":\"131.193.32.108\",\"WARC-Target-URI\":\"http://safecurves.cr.yp.to/proof/750357943.html\",\"WARC-Payload-Digest\":\"sha1:TIBUG7NG5YNA2JBJICTKIFMPRIQCLHZX\",\"WARC-Block-Digest\":\"sha1:I2PPAZAY55C4PJXZFBIEU4LBZET4KT2Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143815.23_warc_CC-MAIN-20200218210853-20200219000853-00303.warc.gz\"}"} |
http://mizar.uwb.edu.pl/version/current/html/proofs/group_2/53 | [
"let G1, G2 be Group; :: thesis: ( G1 is Subgroup of G2 & G2 is Subgroup of G1 implies multMagma(# the carrier of G1, the multF of G1 #) = multMagma(# the carrier of G2, the multF of G2 #) )\nassume that\nA1: G1 is Subgroup of G2 and\nA2: G2 is Subgroup of G1 ; :: thesis: multMagma(# the carrier of G1, the multF of G1 #) = multMagma(# the carrier of G2, the multF of G2 #)\nset g = the multF of G2;\nset f = the multF of G1;\nset B = the carrier of G2;\nset A = the carrier of G1;\nA3: ( the carrier of G1 c= the carrier of G2 & the carrier of G2 c= the carrier of G1 ) by A1, A2, Def5;\nthen A4: the carrier of G1 = the carrier of G2 ;\nthe multF of G1 = the multF of G2 || the carrier of G1 by\n.= ( the multF of G1 || the carrier of G2) || the carrier of G1 by\n.= the multF of G1 || the carrier of G2 by\n.= the multF of G2 by ;\nhence multMagma(# the carrier of G1, the multF of G1 #) = multMagma(# the carrier of G2, the multF of G2 #) by ; :: thesis: verum"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76843417,"math_prob":0.99061626,"size":978,"snap":"2019-43-2019-47","text_gpt3_token_len":381,"char_repetition_ratio":0.3470226,"word_repetition_ratio":0.29910713,"special_character_ratio":0.38650307,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99580324,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T02:41:42Z\",\"WARC-Record-ID\":\"<urn:uuid:f9b3d860-d9a5-4fcc-ae43-f6c77b28ef74>\",\"Content-Length\":\"8916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22cf2176-356e-4f0b-ab1a-41fbdeeb6fe4>\",\"WARC-Concurrent-To\":\"<urn:uuid:10023476-1c9c-4b89-8d05-372c7f5f4e5a>\",\"WARC-IP-Address\":\"212.33.73.131\",\"WARC-Target-URI\":\"http://mizar.uwb.edu.pl/version/current/html/proofs/group_2/53\",\"WARC-Payload-Digest\":\"sha1:VFUVSJFT3TIIAAUF5F5LU56MIFGGRHSE\",\"WARC-Block-Digest\":\"sha1:4ROWPLPDWWQ4T5EIN5FHT2EMHYG4Q7P5\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987751039.81_warc_CC-MAIN-20191021020335-20191021043835-00321.warc.gz\"}"} |
https://realestimateservice.com/blog/concrete-estimating-services-construction-cost-estimator-company-in-new-york/ | [
"0Comment",
null,
"# Concrete Estimating Services | Construction Cost Estimator Company In New York\n\n## How to calculate concrete:\n\n• Determine how thick you want the concrete.\n• Measure the length and width that you’d like to cover.\n• Multiply the length by the width to determine square footage.\n• Convert the thickness from inches to feet.\n• Multiply the thickness in feet by the square footage to determine cubic feet.\n\n### How much does a 30×30 concrete slab cost?\n\nHow much is the cost of a 30×30 concrete slab? A 30×30 foot concrete slab costs about \\$5,400 including installation. This is for 6-inch thickness, but can go up or down depending on what your concrete pro recommends.\n\n### How much does a cubic foot of concrete cost?\n\nThe average cost of concrete is \\$119 to \\$147 per cubic yard, which includes delivery up to 20 miles. Pouring plain concrete costs \\$5 to \\$10 per square foot depending on the quality, while colored, stamped, or stained concrete costs \\$8 to \\$18 per square foot to install.",
null,
"### How much is concrete per square yard?\n\nHow much is a yard of concrete? When estimating the cost of concrete, use \\$125 per yard as a ballpark figure. However, concrete prices differ by region, and you’ll have to pay a ready mix delivery fee, plus factor in labor charges if hiring a contractor. Get quotes from concrete contractors near me.\n\n# Concrete Estimating\n\nStudents will learn to accurately estimate concrete costs. Using a real set of drawings, students will perform a complete concrete takeoff and will apply material and labor costs to that takeoff to assemble a complete estimate. Students will leave this class with the knowledge needed to produce accurate concrete estimates using efficient and reliable estimating techniques.\n\n Get acquainted with concrete Concrete terminology Categories of concrete costs General costs for; Concrete, Placing, Reinforcing, Finishing, Equipment Essential forms and documentation Takeoff quantities of work Slabs on grade problems Footers and pads Utilize a cost data manual Shrinkage and temperature reinforcement Horizontal forming Forming costs-job built Reinforcing costs for; Footers/pads, Pile caps, Piers/pilasters/columns, Walls/beams, Slabs/wire or fiber fabric Tilt-up systems Pre-cast concrete Manufactured forming systems Discussion on; Hot and cold weather concrete, Admixtures, Chemical sealers, compounds, Form releases, hardener Introduction to computerized takeoff and estimating Prepare a concrete bid\n\n## Traditional Concrete Visualization & Estimation Methods:\n\n• Estimating slab in cubic feet: multiply the length by the width by the depth of the desired slab in inches, then divide by 1728 to calculate the slab’s total cubic feet (don’t forget to add ten percent to adjust for variations and waste)\n• For cubic yards: follow the steps above and divide by 27\n• Visualizing a cubic yard: Imagine a 9’ x 9’ slab at 4” deep, or approximately five sidewalk squares\n• 1 cubic yard of concrete placed at 4-inches deep will cover 81 square feet\n• one pallet of 56, 60-pound bags of Sakrete mix is roughly one cubic yard of concrete (considering waste: add about 4 additional bags are of 60-pound mix for full cubic yard)\n\n## How much concrete do I need for a 10×10 slab\n\nTo determine the how much concrete do I need for a 10×10 slab solve in following steps:-\n\nStep 1 :- calculate the volume in cubic feet:- we have given 10×10 slab (length × width), consider it is 4 inch thick, 4 inch = 0.33 feet, to determine the volume of slab, you need to multiply all dimensions, such that = 10′ × 10′ × 0.33′ = 33 cubic foot.\n\nStep 2 :- converting cubic feet into cubic yard:- now converting into cubic yard, you should divide it by 27, because 1 cubic yard = 27 cubic foot, so volume of concrete you needed for 10 × 10 slab 4 inch thick in cubic yard = 33/27 = 1.23.\n\nStep 3 :- Add 10% extra :- considering extra 10% wastage during mixing and pouring, so 10% of 1.23 cubic yard = 0.123, add 1.23 + 0.122 = 1.35 cubic yard.\n\n### How much concrete do I need for a 10×10 slab\n\nRegarding this, “how much concrete do i need for a 10×10 slab?”, generally you will need 1.23 cubic yards of premixed concrete for a 10×10 slab at 4 inches thick, at 5 inches thick you will need 1.54 cubic yards of concrete and at 6 inches thick – 1.85 cubic yards of concrete. Always you should purchase 5 to 10% more premixed than required.\n\n4″ thick slab of 10×10 is ideal in normal load or standard condition for making sidewalks, curbs, steps, ramp, walkway and patio and 5 inch or 6 inch thick slab is ideal for heavy load and heavy traffic",
null,
"### How many yards of concrete do I need for a 10×10 slab\n\nOne may ask a reasonable question, “how many yards of concrete do I need for a 10×10 slab?”, generally there are 1.23 cubic yards of premixed concrete you will need for a 10×10 slab at 4 inches thick, at 5 inches thick you will need 1.54 cubic yards of concrete and at 6 inches thick – 1.85 cubic yards of concrete.\n\n### How many cubic feet of concrete do I need for a 10×10 slab?\n\nOne may ask a reasonable question, “how many cubic feet of concrete do I need for a 10×10 slab?”, generally there are 33.21 cubic feet of premixed concrete you will need for a 10×10 slab at 4 inches thick, at 5 inches thick you will need 41.58 cubic feet of concrete and at 6 inches thick – 49.95 cubic feet of concrete."
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89177066,"math_prob":0.9547446,"size":3681,"snap":"2023-40-2023-50","text_gpt3_token_len":934,"char_repetition_ratio":0.20097905,"word_repetition_ratio":0.21815519,"special_character_ratio":0.2779136,"punctuation_ratio":0.10246433,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97648764,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T07:36:06Z\",\"WARC-Record-ID\":\"<urn:uuid:4db81677-a65d-4a34-bc59-94b9039e93bb>\",\"Content-Length\":\"158037\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4351f77-f669-48fd-bba1-09cb0a44c372>\",\"WARC-Concurrent-To\":\"<urn:uuid:96ad64d9-1464-483b-8475-1907b36a7e5f>\",\"WARC-IP-Address\":\"185.212.70.130\",\"WARC-Target-URI\":\"https://realestimateservice.com/blog/concrete-estimating-services-construction-cost-estimator-company-in-new-york/\",\"WARC-Payload-Digest\":\"sha1:T7AETJYSXOVLTHKNDKXQ4VEM6T2U7JQS\",\"WARC-Block-Digest\":\"sha1:GNKMZHFWAA2V4EBZYFFGGZUZ2HE5JHSW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511055.59_warc_CC-MAIN-20231003060619-20231003090619-00247.warc.gz\"}"} |
https://openhome.cc/eGossip/OpenSCAD/lib2x-shape_glued2circles.html | [
"# shape_glued2circles\n\nReturns shape points of two glued circles. They can be used with xxx_extrude modules of dotSCAD. The shape points can be also used with the built-in polygon module.\n\n## Parameters\n\n• `radius` : The radius of two circles.\n• `centre_dist` : The distance between centres of two circles.\n• `tangent_angle` : The angle of a tangent line. It defaults to 30 degrees. See examples below.\n• `t_step` : It defaults to 0.1. See bezier_curve for details.\n• `\\$fa`, `\\$fs`, `\\$fn` : Check the circle module for more details.\n\n## Examples\n\n``````use <shape_glued2circles.scad>;\n\n\\$fn = 36;\n\ncentre_dist = 30;\n\npolygon(shape_pts);\n``````",
null,
"``````use <shape_glued2circles.scad>;\n\n\\$fn = 36;\n\ncentre_dist = 30;\n\nwidth = centre_dist / 2 + radius;\n\nrotate_extrude() difference() {\npolygon(shape_pts);\n\n}\n``````",
null,
"``````use <shape_glued2circles.scad>;\n\n\\$fn = 36;\n\ncentre_dist = 30;",
null,
""
] | [
null,
"https://openhome.cc/eGossip/OpenSCAD/images/lib2x-shape_glued2circles-1.JPG",
null,
"https://openhome.cc/eGossip/OpenSCAD/images/lib2x-shape_glued2circles-2.JPG",
null,
"https://openhome.cc/eGossip/OpenSCAD/images/lib2x-shape_glued2circles-3.JPG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6147666,"math_prob":0.8930688,"size":1199,"snap":"2020-34-2020-40","text_gpt3_token_len":334,"char_repetition_ratio":0.18158996,"word_repetition_ratio":0.24375,"special_character_ratio":0.2969141,"punctuation_ratio":0.25120774,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9769661,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T03:08:22Z\",\"WARC-Record-ID\":\"<urn:uuid:5c56f347-42ac-4667-978e-9426a915ebf5>\",\"Content-Length\":\"8590\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e909d6ab-ff0b-4886-b34c-c39306d21547>\",\"WARC-Concurrent-To\":\"<urn:uuid:12648199-3db1-4cae-bbf8-6999d9fca573>\",\"WARC-IP-Address\":\"203.98.64.6\",\"WARC-Target-URI\":\"https://openhome.cc/eGossip/OpenSCAD/lib2x-shape_glued2circles.html\",\"WARC-Payload-Digest\":\"sha1:C3W5Q3WHOCPBKQ7QJJ5D7CSO4AWD342N\",\"WARC-Block-Digest\":\"sha1:AEEHLYYKWVLC5W6P2LGUX5IWLH6IOKQK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738380.22_warc_CC-MAIN-20200809013812-20200809043812-00382.warc.gz\"}"} |
http://writings.nunojob.com/2010/08/lost-in-recursion-generate-xml-from-key-value-pairs-html-form.html | [
"Geek. Open-source enthusiast. Shaping the future of the node.js ☁ @nodejitsu. Founder @thenodefirm& curator @lxjs\nThis is the old blog, check the new one at nunojob.com\n\n# Lost in Recursion - Generate XML from key value pairs (HTML Form)\n\n## Meta\n\n• Pre-requisites\n\nMarkLogic Server 4.2. Knowing your way around http://localhost:8001 or a working cq instalation\n\n• Difficulty: Medium\n• Keywords: XML, XForms, HTML, Forms, XQuery, Functional, Map, Fold, High Order Functions, Application Buider, Search, git, github, XQuery 1.1\n\n## Introduction\n\nOne of the cool things about XForms is that I can abstract the data model from the form and get a consistent view of my XML. For me this is the killer feature about XForms. However, regular HTML forms are way more pervasive and I found myself thinking on how I could implement this feature in standard HTML.\n\nIn XForms we have a model (which is XML) and also a form that acts on that model. So our form \"knows\" the XML structure. In HTML forms there's no notion of data model implicit, or anything like that. What is submitted from an HTML form is a set of key value pairs.",
null,
"In this little article we are going to design an application that can insert and search multiple choice questions using HTML. The HTML form will be responsible for the insert. The search will be tackled with Application Builder in part two of this article.\n\n## Part 1: Creating the Form\n\nFor the sake of this demonstration let's assume 'option_a' is always the correct option, thus avoiding another control. This is ok as we can randomize this list in the server side once we receive the options.\n\nSo while in XForms we would submit something like:\n\n``````<text>Which of the following twitter users works for MarkLogic?</text>\n<a>peteaven</a>\n<b>jchris</b>\n<c>stuhood</c>\n<d>antirez</d>\n``````\n\nIn regular html you have something like:\n\nPOST / HTTP/1.1 Content-Type: application/x-www-form-urlencoded\n\n`````` question=Which of the following twitter users works for MarkLogic?\n&option_a=peteaven&option_b=jchris&option_c=stuhood&option_d=antirez\n``````\n\nWhile this can map perfectly to a relational database it doesn't play well with XML. Let me rephrase this: There are multiple ways you could shape it as XML.\n\nOne possible solution is to name the fields with an XPath expression and then generate an XML tree out of this path expression.",
null,
"Once we solve this we have two options on how to generate the XML from XPath: Do some work with a client-side language like javascript and produce the XML that is sent to the server or simply submit the form and create the XML on the server-side with XQuery. I choose the second approach for two reasons:\n\n1. To push the XQuery High Order Functions support in MarkLogic Server to the limit and learn how far it can go.\n2. Other people might have a similar problem that needs to be solved in the server side. This way they can reuse the code.\n\nHigh order functions are functions that take functions as parameters.\n\nGeek Corner Ever heard of MapReduce? This is a generally accepted paradigm in distributed computing first published by google. However the functions map and reduce are just the names of the high order functions that are used in the paper. In mathematical terms it should actually be 'reduce . map' but compose functions and point-free notation are well behind scope here (just as out of scope as they are awesome). I still wonder why no-one thought of 'reduce . map . filter' yet and using the filter stage to introduce basic lookup indexes\n\nTwo examples of such functions are fold (a.k.a. reduce or inject) and map (a.k.a. collect or transform).\n\nFold is a list destructor. You give it a list l, a starting value z and a function f. Then the fold starts accumulating the value of applying f to each element of l in z. Map is a function that applies a function f to each element of a list.\n\nAn example of a fold might be implementing sum, a function that sums the contents of a list:\n\n``````# in no particular language, pseudo code\nsum l = fold (+) 0 l\n``````\n\nAn example of a map is multiply every element in a list by two:\n\n``````# in no particular language, pseudo code\ndouble l = map (2*) l\n``````\n\nGeek Corner A fold is really just a list destructor. But you can generalize it for any arbitrary algebraic data types. You call these \"generic folds\" a catamorphism. Actually a fold is just a catamorphism on lists.\n\nImplementing these functions in MarkLogic XQuery 1.0 with recursion is really easy:\n\n``````declare function local:head( \\$l ) { \\$l } ;\ndeclare function local:tail( \\$l ) { fn:subsequence( \\$l, 2 ) } ;\ndeclare function local:fold( \\$f, \\$z, \\$l ) {\nif( fn:empty( \\$l ) ) then \\$z\nelse local:fold( \\$f,\nxdmp:apply( \\$f, \\$z, local:head( \\$l ) ),\nlocal:tail( \\$l ) ) } ;\n\ndeclare function local:map( \\$f, \\$l ) {\nfor \\$e in \\$l return xdmp:apply( \\$f, \\$e ) } ;\n\ndeclare function local:add(\\$x, \\$y) { \\$x + \\$y } ;\ndeclare function local:multiply(\\$x, \\$y) { \\$x * \\$y } ;\ndeclare function local:multiply-by-two(\\$x) { \\$x * 2 } ;\n\n(: sums a list using fold :)\ndeclare function local:sum( \\$l ) {\nreturn local:fold( \\$add, 0, \\$l ) } ;\n\ndeclare function local:double ( \\$l ) {\nlet \\$multiply-by-two :=\nxdmp:function( xs:QName( 'local:multiply-by-two' ) )\nreturn local:map( \\$multiply-by-two, \\$l ) } ;\n\n(: factorial just for fun :)\ndeclare function local:fact(\\$n) {\nlet \\$multiply := xdmp:function(xs:QName('local:multiply'))\nreturn local:fold(\\$multiply, 1, 1 to \\$n) };\n\n(: This is the main part of the XQuery file\n: Illustrating the fold and map from the previous listing :)\n<tests>\n<!-- fun facts: http://www.mathematische-basteleien.de/triangularnumber.htm -->\n<sum> { local:sum(1 to 100) } </sum>\n<fact> { local:fact( 10 ) } </fact>\n<double> { local:double( (1 to 5) ) } </double>\n</tests>\n``````\n\nGeek Corner Good news is XQuery 1.1 will have improved support for High Order Functions and that will be awesome. No more xdmp:apply and it will be directly integrated in the syntax.\n\nSo how can we use all of this to solve our XPath to XML problem? Simple. We need to destruct the list of xpaths and generate a tree. In other words, we need to fold the list intro a tree.",
null,
"If we go one level down an XPath is really a list of steps. Once again we need to destruct that list to create each node. So we need a fold inside a fold.\n\nWe now need to iterate the list of field values, navigate to the corresponding node using the XPath expression, and finally replace the value of the node (empty at this point) with the value provided in the HTTP form.\n\nScared? Wondering if we really need all this functional stuff? Fear not, problem is solved and we will simply use a XQuery library module that already exists to solve the problem! Hooray.\n\nThe library is called generate-tree and is included in the dxc github project. To get it simply install git and:\n\n``````git clone git://github.com/dscape/dxc.git\n``````\n\nIf you don't know what git is (neither you care) simply go to the project page at http://github.com/dscape/dxc and download the source.\n\nIf you are curious to see the implementation using the folds and everything you learned so far you can check the the gen-tree.xqy implementation at github. Or as an exercise you can try and do it yourself! To run this code directly from cq I created another script that creates a tree while printing out debug messages. This might be useful to understand how the code is running without getting \"lost in recursion\".\n\nCreate a folder called 'questions-form' and place the dxc code there:\n\n``````njob@ubuntu:~/Desktop/questions-form\\$ ls -l\ntotal 8\ndrwxr-xr-x 12 njob njob 4096 2010-08-13 20:51 dxc\n-rw-r--r-- 1 njob njob 149 2010-08-13 20:59 index.xqy\n``````\n\nNow we need to create the HTML form. For now simply create a file called index.xqy inside the 'questions-form' directory and insert the following code:\n\n``````xquery version '1.0-ml';\n\n\"Hello World!\"\n``````\n\nIn this listing we simply print Hello World! To get our website online simply go the the MarkLogic Server Administration Interface at http://localhost:8001 and create a new Application Server with the following parameters:\n\n``````name: questions-form\nport: <any port that is available in your system>\nroot: <full path of the directory where you have the index.xqy file>\n``````\n\nIn my case this will be:\n\n``````port: 6173\nroot: /home/njob/Desktop/questions-form\n``````\n\nIf you have cq installed you can simplify the process by running the following script (remember to change the root. Also change the port if necessary)\n\n``````xquery version '1.0-ml';\n\nlet \\$name := \"questions-form\"\nlet \\$root := \"/home/njob/Desktop/questions-form\"\nlet \\$port := 6173\nlet \\$db := \"Documents\"\nlet \\$groupid := admin:group-get-id( \\$config, \"Default\" )\nlet \\$new := admin:http-server-create( \\$config, \\$groupid, \\$name,\n\\$root, xs:unsignedLong( \\$port ), 0, xdmp:database( \\$db ) )\nreturn ( admin:save-configuration( \\$new ) ,\n<div class=\"message\">\nAn HTTP Server called {\\$name} with root {\\$root} on\nport {\\$port} created successfully </div> )\n``````\n\nThis is running against the default Documents database. This is ok for a demonstration but in a realistic scenario you would be using your own database.\n\nNow when you visit http://localhost:6173 you will get a warm Hello World!\n\nNow let's change the code to actually perform the transformation. To do so simply insert this code in index.xqy. Feel free to inspect it and learn from it - I commented it just for that reason.\n\n``````xquery version '1.0-ml';\n\n(: First we import the library that generates the tree :)\nimport module namespace mvc = \"http://ns.dscape.org/2010/dxc/mvc\"\nat \"dxc/mvc/mvc.xqy\" ;\n\n(:\n: This function receives a string as the parameter \\$o\n: which will be either 'a', 'b', 'c' or 'd' and\n: generates an input field for the form\n:)\ndeclare function local:generate-option( \\$o ) {\n\n(: This function simply displays an html form as described in the figures :)\ndeclare function local:display-form() {\n<form name=\"question_new\" method=\"POST\" action=\"/\" id=\"question_new\">\n<label for=\"/question/text\">Question</label><br/>\n <textarea name=\"/question/text\" id=\"/question/text\"\nrows=\"2\" cols=\"50\">\nQuestion goes here </textarea>\n<br/>\n{ (: using the generate option function button to generate four fields :)\nfor \\$o in ('a','b','c','d') return local:generate-option( \\$o ) }\n<br/><br/><input type=\"submit\" name=\"submit\" id=\"submit\" value=\"Submit\"/>\n</form> } ;\n\n(: this function will process the insert and display the result\n: for now it simply shows the tree that was generated from the HTML form\n:)\ndeclare function local:display-insert() {\nxdmp:quote( mvc:tree-from-request-fields() ) } ;\n\n(: Now we set the content type to text html so the browser renders\n: the page as HTML as opposed to XML :)\nxdmp:set-response-content-type(\"text/html\"),\n<html>\n<title>New Question</title>\n<body> {\n(: if it's a post then the user submited the form :)\nif( xdmp:get-request-method() = \"POST\" )\nthen local:display-insert()\nelse\n(: the user wants to create a new question :)\nlocal:display-form() }\n</body>\n</html>\n``````\n\nWe are using the 'mvc:tree-from-request-fields()' function to create the tree from the request fields. However this function isn't described in gen-tree.xqy. This is declare in another library called mvc:\n\n``````declare function mvc:tree-from-request-fields() {\nlet \\$keys := xdmp:get-request-field-names() [fn:starts-with(., \"/\")]\nlet \\$values := for \\$k in \\$keys return xdmp:get-request-field(\\$k)\nreturn gen:process-fields( \\$keys, \\$values ) } ;\n``````\n\nNow you can visit http://localhost:6173 again and you'll see our form. Fill it accordingly to the following picture and click \"Submit\"",
null,
"This is how the document you inserted looks like:\n\n``````<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<question>\n<text>Which of the following twitter users works for MarkLogic?</text>\n<a>peteaven</a>\n<b>jchris</b>\n<c>stuhood</c>\n<d>antirez</d>\n</question>\n``````\n\nNow let's augment our form with some more interesting fields like author and difficulty. This will help make our search application interesting. Simply update the display-form function:\n\n``````(: This function simply displays an html form as described in the figures :)\ndeclare function local:display-form() {\n<form name=\"question_new\" method=\"POST\" action=\"/\" id=\"question_new\">\n<input type=\"hidden\" name=\"/question/created-at\"\nid=\"/question/created-at\" value=\"{fn:current-dateTime()}\"/>\n<input type=\"hidden\" name=\"/question/author\"\nid=\"/question/author\" value=\"{xdmp:get-current-user()}\"/>\n<br/> <label for=\"/question/difficulty\">Difficulty: </label>\n<input type=\"text\" name=\"/question/difficulty\"\nid=\"/question/difficulty\" size=\"50\"/>\n<br/> <label for=\"/question/topic\">Topic: \n</label>\n<input type=\"text\" name=\"/question/topic\"\nid=\"/question/topic\" size=\"50\"/>\n<br/><br/> <label for=\"/question/text\">Question</label><br/>\n <textarea name=\"/question/text\" id=\"/question/text\"\nrows=\"2\" cols=\"50\">\nQuestion goes here </textarea>\n<br/>\n{ (: using the generate option function button to generate four fields :)\nfor \\$o in ('a','b','c','d') return local:generate-option( \\$o ) }\n<br/><br/><input type=\"submit\" name=\"submit\" id=\"submit\" value=\"Submit\"/>\n</form> } ;\n``````\n\nThe end result should be this form:",
null,
"Now we are missing the part where we actually insert the document in the database. For that we need to update the function that local:display-insert() function:\n\n``````(: this function will process the insert and display the result\n: it then redirects to / giving you the main page\n:)\ndeclare function local:display-insert() {\ntry {\nlet \\$question := mvc:tree-from-request-fields() (: get tree :)\nlet \\$author := if (\\$question//author)\nthen fn:concat(\\$question//author, \"/\") else ()\n(: now we insert the document :)\nlet \\$_ := xdmp:document-insert(\n(: this fn:concat is generating a uri with directories\n: e.g. /questions/njob/2362427670145529782.xml\n:)\nfn:concat(\"/questions/\", \\$author, xdmp:random(), \".xml\") , \\$question )\nreturn xdmp:redirect-response(\"/?flash=Insert+OK\")\n} catch (\\$e) {\nxdmp:redirect-response(fn:concat(\"/?flash=\",\nfn:encode-for-uri(\\$e//message/text()))) } } ;\n``````\n\nSo far we talked about the problem, differences with XForms, proceeded to talk on high order functions and how to implement it in XQuery and finally we got a working solution for our little problem. Coming up next we are going to build an application to search these questions we can now insert with Application Builder. Then we are going to take advantage of the new functionalities available in MarkLogic 4.2. to extend application builder with this form."
] | [
null,
"http://img.skitch.com/20100813-bufb2yatuw94uwisk7g2m8im1x.png",
null,
"http://img.skitch.com/20100813-k83ufc24sgdkjmm6wuhxq746sc.png",
null,
"http://img.skitch.com/20100814-m3kg2qiadmhpt1abtq8kwsejfe.png",
null,
"http://img.skitch.com/20100814-j5j3yn3ny21ne7fuaqed854g8i.png",
null,
"http://img.skitch.com/20100814-mj3betss6tiq143tq9gpsdhi72.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7170772,"math_prob":0.5983259,"size":13908,"snap":"2020-10-2020-16","text_gpt3_token_len":3482,"char_repetition_ratio":0.13017836,"word_repetition_ratio":0.08066038,"special_character_ratio":0.26783147,"punctuation_ratio":0.13632986,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95643157,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T13:31:38Z\",\"WARC-Record-ID\":\"<urn:uuid:3b0fccb1-d25f-4b0e-91a2-4c8394443d7b>\",\"Content-Length\":\"21443\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35d092b4-21e3-4f11-8b1d-72ffc925f3f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:39c23a43-2ea4-4ee7-aeed-4b371984d120>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"http://writings.nunojob.com/2010/08/lost-in-recursion-generate-xml-from-key-value-pairs-html-form.html\",\"WARC-Payload-Digest\":\"sha1:QBYRLV54N5ZHP6OUCJDR5WEGKLAXWWX2\",\"WARC-Block-Digest\":\"sha1:2XQOQZ2S3ALAE7YQQRYDRXG4PCKZIRZM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145774.75_warc_CC-MAIN-20200223123852-20200223153852-00226.warc.gz\"}"} |
https://scicomp.stackexchange.com/questions/27775/hypergraph-matching-adjacency-matrix | [
"# Hypergraph matching -> adjacency matrix?\n\nI need to do a matching on a hypergraph. I read that in the case of a hypergraph there is no adjacency matrix.\n\nHow do I represent edges then?\n\nEdges are represented as sets of vertices. With classical graphs, an edge can be represented by the set containing its 2 endpoints. With hypergraphs, they are represented by a set containing more than 2 nodes e.g. $e_i = \\lbrace v_1, v_2, ... , v_n \\rbrace$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9372106,"math_prob":0.8881018,"size":1080,"snap":"2023-40-2023-50","text_gpt3_token_len":231,"char_repetition_ratio":0.10873606,"word_repetition_ratio":0.2793296,"special_character_ratio":0.21018519,"punctuation_ratio":0.13207547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96268165,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T18:27:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f8aeacd0-0442-4fe1-9157-21a26a36cb12>\",\"Content-Length\":\"157627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5361a2a5-5d23-42fb-a1b2-f0e2c5e7a370>\",\"WARC-Concurrent-To\":\"<urn:uuid:c318523e-5c1f-4c26-bf70-74622955ecf2>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/27775/hypergraph-matching-adjacency-matrix\",\"WARC-Payload-Digest\":\"sha1:GPRMBMNQ6EJGKOVMTYAJT7LGPKJBO2XH\",\"WARC-Block-Digest\":\"sha1:G5PAQKGRJPXSLC2FD2W6CR2P4JXJHLYE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100135.11_warc_CC-MAIN-20231129173017-20231129203017-00571.warc.gz\"}"} |
https://m.ebrary.net/35660/engineering/inconstant_velocity_error_platform | [
"",
null,
"Home",
null,
"Engineering",
null,
"",
null,
"# (a) Inconstant velocity error of the platform\n\nIn an ideal SAR system, the platform is assumed to be moving with a constant velocity, and the signal is transmitted with PRF. The signal is transmitted and received with a constant PRF, and the azimuth signal is uniformly sampled. However, if the PRF remains constant, and the platform velocity is changing, the azimuth signal sampling is no longer uniform, and the azimuth resolution will be deteriorated. The geometry of azimuth non-uniform sampling is illustrated in Fig. 6.3.",
null,
"In a signal processing point of view, according to Eq. 2.2.6, the Doppler modulation rate of a stationary target can be expressed as",
null,
"If is Va no longer constant, fdr will be a changing value. Thus, using a constant fdr as a matched filter will lead to the azimuth defocus.\n\nThere are two ways to solve this problem: re-sampling algorithm and phase compensation algorithm. Both algorithms use the velocity data recorded by the INS to compensate the motion error. However, these two algorithms are high dependent to the accuracy of the INS data, and the signal processing complexity is increased.\n\nThe most practical way to solve this problem in an airborne SAR system is to change the time-constant PRF into space-constant PRF . By altering the value of PRF according to the velocity of the platform, the PRF-to-velocity ratio Kv remains constant. According to Eq. 2.2.5, the azimuth phase of a target in SAR can be expressed as",
null,
"where azimuth time ta = — —f: : if. Substitute this equation into Eq. 6.2.5, it is\n\nnoted that the Doppler modulation rate is constant with a constant Kv.\n\n Related topics"
] | [
null,
"https://m.ebrary.net/templates/mobile_pda/images/mobi_logo.png",
null,
"https://m.ebrary.net/images/M_images/arrow.png",
null,
"https://m.ebrary.net/templates/mobile_pda/images/user-increase.png",
null,
"https://m.ebrary.net/templates/mobile_pda/images/user-decrease.png",
null,
"https://m.ebrary.net/htm/img/39/324/159.png",
null,
"https://m.ebrary.net/htm/img/39/324/160.png",
null,
"https://m.ebrary.net/htm/img/39/324/161.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91914445,"math_prob":0.9732487,"size":1671,"snap":"2021-31-2021-39","text_gpt3_token_len":378,"char_repetition_ratio":0.14217156,"word_repetition_ratio":0.0,"special_character_ratio":0.21843208,"punctuation_ratio":0.12990937,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98976403,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T14:42:34Z\",\"WARC-Record-ID\":\"<urn:uuid:b3dee34f-d65c-406f-b9f6-3b9058ac3ef4>\",\"Content-Length\":\"11417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8953b7b0-12ab-499b-b3b8-ae64c085dd20>\",\"WARC-Concurrent-To\":\"<urn:uuid:67e6b9e4-6581-4e5a-a928-8951182d66ff>\",\"WARC-IP-Address\":\"5.45.72.163\",\"WARC-Target-URI\":\"https://m.ebrary.net/35660/engineering/inconstant_velocity_error_platform\",\"WARC-Payload-Digest\":\"sha1:YSZL2FFFELQNOJ3SERDVQTZDD47QA2VV\",\"WARC-Block-Digest\":\"sha1:RUXXSNB3Z5JS4ZK7FVGOKZG6X56QZNNJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154878.27_warc_CC-MAIN-20210804142918-20210804172918-00095.warc.gz\"}"} |
https://answers.opencv.org/question/24398/depth-discontinuities-in-gpubm-and-gpucsbp/ | [
"# Depth Discontinuities in GPU::BM and GPU::CSBP\n\nI am interested in using the stereo correspondence algorithms in the GPU module (2.4.X). However, the results I am able to achieve are not useable as there are great errors at depth edges/discontinuities. My question is this: is there a way to achieve quality segmentation (sharp edges) at severe depth discontinuities in either the gpu::BM or gpu::CSBP? I know from reading the paper on CSBP that this is a problem inherent to the method and is supposedly fixed by using a bilateral or median filter.\n\nHow have you gotten the best results from the real-time parallel stereo methods (not BP)? How did you eliminate poor matches? What postprocessing steps did you follow?\n\nThis is the disparity image I generate with CSBP (and some first attempts at filtering). Notice the absence of a depth 'shadow' where the disparity really shouldn't be estimated:",
null,
"For comparison, see the quality of my results with SGBM:",
null,
"edit retag close merge delete\n\n1\n\nI think the solution would be to run the algorithm twice, matching once L to R and once R to L and then do a consistency check to eliminate bad matches/values. This would double the runtime, but is certainly worth a try.\n\nSort by » oldest newest most voted\n\nHere is code to do L-R, R-L, and a consistency check for CSBP.\n\nd_left.upload(imgL);\n\n// L-R disparity\nscsbp(d_left, d_right, d_dispPre_LR);\ndbf(d_dispPre_LR, d_left, d_disp_LR);\n\n// Flip horizontally (1) for R-L disparity computation\ncv::gpu::flip(d_left, d_left_f, 1);\ncv::gpu::flip(d_right, d_right_f, 1);\n\n// R-L disparity\nscsbp(d_right_f, d_left_f, d_dispPre_RL_f);\ndbf(d_dispPre_RL_f, d_right_f, d_disp_RL_f);\n\ncv::flip(disp_RL_f, disp_RL, 1); // flip back (not supported on GPU)\n\n//////////////////////////////////////////////////////////////////////\n// Consistency Check:\n//////////////////////////////////////////////////////////////////////\nshort* dLPtr = (short*) disp_LR.data;\nshort* dRPtr = (short*) disp_RL.data;\nint cstep = disp_LR.cols;\n\nfor (int i = 0; i < disp_LR.rows; i++) {\nfor (int j = 0; j < disp_LR.cols; j++) {\nshort l_d = dLPtr[(i*cstep)+j];\nshort r_d = dRPtr[(i*cstep)+j-l_d];\nif (abs(l_d-r_d) > 1) {\n// set disparity to 0\ndLPtr[(i*cstep)+j] = 0;\n}\n}\n}\n\nmore\n\nHi, I got better results with the RL version when compared with LR, but when I try to fix the disparity as you suggested, the image becomes almost black. Could you comment why that conditional, is the objective to zero out displacement when its found in LR but not in RL? Thank you.\n\n1\n\nThe conditional is the consistency check. When the two disparity images don't agree, we conclude that we can't trust either of them. Right now it is set to 1, but perhaps you could change that to 2 and take the average (which should be included above anyway). Unfortunately this method (CSBP) only computes integer disparities and so you can't get any finer (than 0.5 through averaging). CSBP returns a dense disparity, so there is a value for every pixel, it is always 'found'.\n\nOfficial site\n\nGitHub\n\nWiki\n\nDocumentation"
] | [
null,
"https://answers.opencv.org/upfiles/138515979468243.png",
null,
"https://answers.opencv.org/upfiles/13851602017245501.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94756675,"math_prob":0.91189104,"size":1219,"snap":"2020-45-2020-50","text_gpt3_token_len":266,"char_repetition_ratio":0.089711934,"word_repetition_ratio":0.0,"special_character_ratio":0.20672682,"punctuation_ratio":0.11788618,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732871,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T20:48:20Z\",\"WARC-Record-ID\":\"<urn:uuid:1e2a903d-619f-4595-83a0-a4486ca1e1f8>\",\"Content-Length\":\"63047\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26471b48-218d-44ba-8e4a-50f9369fe076>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f56b8b7-54c5-4509-a8ee-cbbfe380f829>\",\"WARC-IP-Address\":\"5.9.49.245\",\"WARC-Target-URI\":\"https://answers.opencv.org/question/24398/depth-discontinuities-in-gpubm-and-gpucsbp/\",\"WARC-Payload-Digest\":\"sha1:PS5X2F5VSLDJVUBNNSLOKZG5SW7OM6PI\",\"WARC-Block-Digest\":\"sha1:NMFBWSAGVPGL4KCAYG63D2LRU7MFGZHW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107874135.2_warc_CC-MAIN-20201020192039-20201020222039-00009.warc.gz\"}"} |
https://www.routledge.com/Digital-Control-Applications-Illustrated-with-MATLAB/Shertukde/p/book/9781482236699 | [
"",
null,
"1st Edition\n\n# Digital Control Applications Illustrated with MATLAB®\n\nBy\n\nISBN 9781482236699\nPublished February 13, 2015 by CRC Press\n384 Pages 287 B/W Illustrations\n\nUSD \\$140.00\n\nPrices & shipping based on shipping country\n\n## Book Description\n\nDigital Control Applications Illustrated with MATLAB® covers the modeling, analysis, and design of linear discrete control systems. Illustrating all topics using the micro-computer implementation of digital controllers aided by MATLAB®, Simulink®, and FEEDBACK<<®, this practical text:\n\n• Describes the process of digital control, followed by a review of Z-transforms, feedback control concepts, and s-to-z plane conversions, mappings, signal sampling, and data reconstruction\n• Presents mathematical representations of discrete systems affected by the use of advances in computing methodologies and the advent of computers\n• Demonstrates state-space representations and the construction of transfer functions and their corresponding discrete equivalents\n• Explores steady-state and transient response analysis using Root-Locus, as well as frequency response plots and digital controller design using Bode Plots\n• Explains the design approach, related design processes, and how to evaluate performance criteria through simulations and the review of classical designs\n• Studies advances in the design of compensators using the discrete equivalent and elucidates stability tests using transformations\n• Employs test cases, real-life examples, and drill problems to provide students with hands-on experience suitable for entry-level jobs in the industry\n\nDigital Control Applications Illustrated with MATLAB® is an ideal textbook for digital control courses at the advanced undergraduate and graduate level.\n\nPreface\n\nAuthor\n\nDigital Control Introduction and Overview\n\nOverview of Process Control: Historical Perspective\n\nFeedback Control Structures for Continuous Systems: Mathematical Representation of (Sub) System Dynamics\n\nBasic Feedback Control Loop: Single Input Single Output (SISO) System\n\nGoal\n\nContinuous Control Structures: Output Feedback\n\nContinuous Control Structures: State Variable Feedback\n\nDigital Control Basic Structure\n\nRelationship of Time Signals and Samples\n\nA Typical Algorithm for H\n\nDifferences in Digital versus Analog Control Methods\n\nComputing the Time Response of a Linear, Time Invariant, Discrete Model to an Arbitrary Input\n\nReview of z-Transforms for Discrete Systems\n\nSome Useful Results for One-Sided z-Transforms\n\nHow to Find y(k) Using z-Transforms\n\nUse of z-Transforms to Solve nth Order Difference Equations\n\nStability of the Time Response\n\nContinuous versus Discrete Relationships\n\ns-to-z Plane Mappings\n\nLOCI of Constant Damping Ratio (ζ) and Natural Frequency (ωn) in s-Plane to z-Plane Mapping\n\nSignal Sampling and Data Reconstruction\n\nImpulse Sampling\n\nLaplace Transform of a Sampled Signal\n\nNyquist Theorem\n\nNyquist Result\n\nRecovering f(t) from f*(t)\n\nAliasing\n\nHow to Avoid Aliasing\n\nInterpretation of Aliasing in s-Plane\n\nExample of Aliasing in a Control Setting\n\nProblems\n\nMathematical Models of Discrete Systems\n\nDiscrete Time System Representations\n\nDifference Equation Form\n\nSignal Flow Diagram and Analysis\n\nState Equations from Node Equations\n\nState Variable Forms: I\n\nState Variable Forms\n\nTransfer Function of a State-Space Model\n\nState Variable Transformation\n\nExample\n\nObtaining the Time Response X(k)\n\nComputing G(z) from A, B, C, d\n\nLeverier Algorithm Implementation\n\nAnalysis of the Basic Digital Control Loop\n\nDiscrete System Time Signals\n\nModels for Equivalent Discrete System, GÞ(Z)\n\nComputing Φ and Γ (or Ψ)\n\nAlgorithm for Obtaining Ψ(h) and Φ, Γ\n\nSome Discussion on the Selection of h\n\nExamples\n\nDiscrete System Equivalents: Transfer Function Approach\n\nRelationship between G(s) and GÞ□(z)\n\nComparison of a Continuous and Discrete Equivalent Bode Plot\n\nEffects of Time Step h on GÞ (z = e jωh)\n\nAnatomy of a Discrete Transfer Function\n\nModeling a Process with Delay in Control, τ = Mh + ϵ\n\nState Model for a Process with Fractional Delay ϵ < h\n\nState Model for a Process with Large Delay\n\nTransfer Function Approach to Modeling a Process with Delay\n\nProblems\n\nPerformance Criteria and the Design Process\n\nDesign Approaches and the Design Process\n\nElements of Feedback System Design\n\nElements of FB System Design II\n\nClosed-Loop System Zeros\n\nDesign Approaches to be Considered\n\nPerformance Measure for a Design Process\n\nStability of the Closed-Loop System\n\nSpeed of Transient Response\n\nSensitivity and Return Difference\n\nExample: Evaluation and Simulation\n\nSimulation of Closed-Loop Time Response\n\nSimulation Structure\n\nFlow Diagram for Simulation Program for a Control Algorithm\n\nModifications to Time Delay\n\nControl (Cntrl) Algorithm Simulation\n\nSimulation of Time Delay, τ\n\nRequired Modifications to Simulation Flow Diagram\n\nTools for Control Design and Analysis\n\nOverview of Classical Design Techniques (Continuous Time)\n\nLag Compensator Design, H(s)\n\nExample of Lag Compensator Design\n\nLag Compensation Design\n\nCritique of Continuous Time H(s) Design\n\nProblems\n\nCompensator Design via Discrete Equivalent\n\nStability of Discrete Systems\n\nJury Test\n\nStability with Respect to a Parameter β\n\nStability with Respect to Multiple Parameters: α, β\n\nA More Complicated, State-Space Example\n\nExample State-Space Example Plots\n\nFundamentals of Digital Compensator Design\n\nH(z) Design via Discrete Equivalent: H(s) − HÞ(z)\n\nForms of Discrete Integration\n\nGeneral Algorithm for Tustin Transformation\n\nTustin Equivalence with Frequency Prewarping\n\nDiscrete Equivalent Designs\n\nSummary of Discrete Equivalence Methods\n\nExample of a Discrete Equivalent Design\n\nDiscrete Equivalent Computations\n\nEvaluation of Digital Control Performance\n\nContinuous versus Discrete System Loop Gain\n\nMethods to Improve Discrete CL Performance\n\nProblems\n\nCompensator Design via Direct Methods\n\nDirect Design Compensation Methods\n\nRL Design of H(z)\n\nExample of Design Approach: Antenna Positioning Control\n\nRL Redesign (After Much Trial and Error)\n\nAn Example of a Poor Design Choice\n\nw-Plane Design of H(z)\n\nDesign Approach\n\nGeneral z → w Plane Mapping\n\nExample of Design Approach\n\nFrequency Domain Evaluation\n\nPID Design\n\nDigital PID Controller\n\nIntegral Windup Modifications\n\nExample\n\nOther PID Considerations\n\nPID Initial Tuning Rules\n\nReal-Time PID Control of an Inverted Pendulum Using FEEDBACK≪®: Page 26 of 33–936 S of FEEDBACK≪® Document\n\nA Technique for System Control with Time Delay\n\nSmith Predictor/Compensator\n\nExample of Smith Predictor Motor-Positioning Example with τ 1 S, h = 1 S (i.e., M = 1)\n\nImplementation of High-Order Digital Compensators\n\nSummary of Compensator Design Methods\n\nProblems\n\nState-Variable Feedback Design Methods\n\nLinear State-Variable Feedback\n\nControl in State-Space\n\nControllability\n\nOpen-Loop versus CL Control\n\nDiscrete SVFB Design Methods\n\nContinuous → Discrete Gain Transformation Methods\n\nAverage Gain Method\n\nExample: Satellite Motor Control\n\nState Variable Feedback Control: Direct Pole Placement\n\nDiscrete System Design\n\nPole Placement Methods\n\nTransformation Approach for Pole Placement\n\nAckermann Formula\n\nAlgorithm to Obtain pd(Φ)\n\nCL System Zeros\n\nInverted Pendulum on a Cart\n\nEquivalent Discrete Design u(k) = −KÞX(k)\n\nDirect Digital Design: Inverted Pendulum\n\nCL Simulation Inverted Pendulum X(0) = [0.2, 0, 1, 0]′\n\nDeadbeat Controller Inverted Pendulum X(0) = [0.2, 0, 1, 0]″\n\nSummary of Pole Placement Design by SVFB\n\nSVFB with Time Delay in Control: τ = Mh + ε\n\nState Prediction\n\nImplementation of the Delay Compensator: General Case\n\nExample—Inverted Pendulum\n\nComparison with Smith Predictor Structure (ε = 0)\n\nCommand Inputs to SVFB Systems\n\nIntegral Control in SVFB\n\nProblems\n\nLyapunov Stability Theory Preliminaries\n\nApplication to Stability Analysis\n\nMain Theorem for Linear Systems\n\nPractical Use of Lyapunov Theorem\n\nNumerical Solution of the Lyapunov Equation\n\nAlgorithm to Solve Lyapunov Equation (DLINEQ)\n\nConstructive Application of Lyapunov Theorem to SVFB\n\nDiscussion of Stabilization Result\n\nLyapunov (\"Bang-Bang\") Controllers\n\nIntroduction to Least-Squares Optimization\n\nOptimization Approach and Algorithm\n\nContinued Method for Obtaining K1 from P0\n\nThe Discrete Riccati Equation\n\nApplication of the Optimal Control\n\nProperties of the Optimal CL System-1\n\nProperties of the Optimal CL System-2\n\nExamples and Applications\n\nExamples with FEEDBACK≪® Hardware and Software Package\n\nSummary of Optimal Control Design Method\n\nRate Weighting\n\nWeighting of Control Rate\n\nProperties of a Rate-Weighted Controller\n\nCompensation for Fractional Time Delay\n\nProblems\n\nEstimation of System State\n\nState Estimation\n\n\"Observation\" of System State\n\nSystem Observability Requirement\n\nObserver Pole Placement Problem\n\nSelection of Observer CL Poles\n\nExample of State Estimation\n\nMechanics of Observer Dynamics\n\nImplementation of the Observer-Controller Pair\n\nImplementation: Some Practical Considerations\n\nComposite CL Observer and Controller\n\nExample Satellite Control with Command Input\n\nTransfer Function of Composite CL Observer and Controller\n\nPoles and Zeros of Composite T(z)\n\nReduced-Order Observers\n\nReduced-Order Observer Design for Xb\n\nImplementation of Reduced-Order Observer/Controller\n\nLoop Gain Analysis of RO Observer/Controller\n\nModifications for Time Delay τ = Mh + ϵ\n\nCase Study: State Estimation in Passive Target Tracking\n\nProblems\n\nImplementation Issues in Digital Control\n\nMechanization of the Control Algorithm on Microcontrollers Motivation\n\nMicroprocessor Implementation Structure\n\nBinary Representation of Quantized Numbers\n\nDigital Quantization of a Continuous Value\n\nSources of Numerical Errors in Digital Control\n\nAlgorithm Realization Structures\n\nAnalysis of Control Algorithm Implementation\n\nResponse of Discrete Systems to White Noise\n\nPropagation of Multiplication Errors through the Controller\n\nParameter Errors and Sensitivity Analysis\n\nNonlinear Effects\n\nCase Study\n\nConcluding remarks\n\nProblems\n\nBibliography\n\nAppendix I: MATLAB® Primer\n\nAppendix II: FEEDBACK≪® Guide for Applications in the Text\n\nAppendix III: Suggested MATLAB® Code for Algorithms and Additional\n\nExamples from FEEDBACK≪®\n\nIndex\n\n...\n\n## Author(s)\n\n### Biography\n\nHemchandra Madhusudan Shertukde, SM’92, IEEE, holds a B.Tech from the Indian Institute of Technology Kharagpur, as well as an MS and Ph.D in electrical engineering with a specialty in controls and systems engineering from the University of Connecticut, Storrs, USA. Currently, he is professor of electrical and computer engineering for the College of Engineering, Technology, and Architecture (CETA) at University of Hartford, Connecticut, USA. He is also senior lecturer at the Yale School of Engineering and Applied Sciences (SEAS), New Haven, Connecticut, USA. The principal inventor of two commercialized patents, he has published several journal articles and written three solo books.\nDr. Shertukde is the recipient of the 2017 IEEE EAB/SA Standards Education Award, 2017 IEEE-PES CT Chapter Outstanding Engineer Award, and the 2016 IEEE Award as the Chair of the Working Group C.5.159. He continues to be in leadership positions for several other Working Groups enabling IEEE-TC to publish different standards and User's Guides for Electrical Power Transformers.\n\n### Featured Author Profiles\n\nProfessor, CETA/University of Hartford\nWest Hartford, CT, USA\n\n## Reviews\n\n\"This book is an asset for practicing controls engineers as well as students of advanced control systems courses. Books on this topic have been usually quite theoretically oriented, and a common complaint of the students is that they do not get a practical flavor after going through a course, or even multiple courses. The author brings a fresh approach, backed up by decades of teaching and professional experience, which is oriented toward practical application. I look forward to having this book on my shelf.\"\n—Amit Patra, Indian Institute of Technology Kharagpur\n\n\"The author provides a mathematically detailed yet accessible presentation of topics in digital control theory. After introducing digital control systems and reviewing the modeling and performance of discrete systems, several chapters are dedicated to different design methods. These methods include discrete equivalent, direct, state-variable feedback, Lyapunov, and optimal. Throughout the book, the emphasis is on application of the theory rather than on theorems and abstract mathematics.\"\n—Patricia Mellodge, University of Hartford, Connecticut, USA\n\n\"This book has an excellent flow of material for teaching design of digital control or computer control of dynamic systems. In particular, and following the well-organized design chapters, the microprocessor/computer implementation hardware and software aspects in chapter 9 are very valuable, well presented, and certainly well appreciated in this book. Chapters 1 through 4 provide clear and concise presentation of prerequisite material/knowledge for a digital control system design course, and could be very well appreciated in a previous senior-level control design course. The design chapters 5 through 8 progress effectively with the right sequence of techniques, starting with design by digital equivalents, followed by transformed domain techniques, and then moving into the time domain state space design including state estimators both full order and reduced order. Using MATLAB® software and simulations for examples and case studies provides students with valuable practice opportunities for the material presented throughout the book. This fits exactly in how I teach my first in a two-graduate-course sequence 'Computer Control of Dynamic Systems' here at California State University, Chico (the second is 'Adaptive Control Systems').\"\n\n—Dr. Adel A Ghandakly, California State University, Chico, USA\n\n## Support Material\n\n### Ancillaries\n\n• Instructor Resources"
] | [
null,
"https://www.routledge.com/images/ajax-spinner.gif.pagespeed.ce.QA8PeRfjsw.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7898812,"math_prob":0.5120381,"size":13780,"snap":"2021-04-2021-17","text_gpt3_token_len":2906,"char_repetition_ratio":0.14931765,"word_repetition_ratio":0.012467532,"special_character_ratio":0.16828737,"punctuation_ratio":0.06378987,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9658213,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T23:46:04Z\",\"WARC-Record-ID\":\"<urn:uuid:484606ef-f8e5-4228-9388-4d555de3f79e>\",\"Content-Length\":\"101575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:395d6b2b-006a-4f9c-b243-deb6e72576bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:03e66d19-6cc2-4a94-a24d-eb32dd1df118>\",\"WARC-IP-Address\":\"104.16.182.86\",\"WARC-Target-URI\":\"https://www.routledge.com/Digital-Control-Applications-Illustrated-with-MATLAB/Shertukde/p/book/9781482236699\",\"WARC-Payload-Digest\":\"sha1:V4PVQVMUBW5ZG6MU6GUSCMKVRVW4XGSI\",\"WARC-Block-Digest\":\"sha1:M5MTNVS7JCSDQQC4ZE7NJX3HE2QSANLV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703531429.49_warc_CC-MAIN-20210122210653-20210123000653-00407.warc.gz\"}"} |
https://zh.m.wikipedia.org/wiki/%E6%8B%90%E7%82%B9 | [
"# 拐点\n\n## 分類\n\n• 如果$f'(x)$ 為零,此點為拐點的驻点,簡稱為鞍點\n• 如果$f'(x)$ 不為零,此點為拐點的非驻点\n\n## 雙正則點與拐點\n\n:某些作者偏好將拐點定義為「使一階與二階微分平行的點」,在此定義下,切線不一定在該點穿越曲線本身。\n\n## 代數曲線的拐點\n\n$C$ $F$ 上的平面代數曲線,其拐點定義為一平滑點$P\\in C(F)$ ,使得該點切線$L_{P}$ $C$ $P$ 點的相交重數$\\geq 3$"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.99170065,"math_prob":0.99979585,"size":845,"snap":"2019-51-2020-05","text_gpt3_token_len":1060,"char_repetition_ratio":0.078478,"word_repetition_ratio":0.0,"special_character_ratio":0.20946746,"punctuation_ratio":0.035714287,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98222154,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T04:52:55Z\",\"WARC-Record-ID\":\"<urn:uuid:a5ae5b95-e5b1-41fe-8489-c13f5a02533b>\",\"Content-Length\":\"68854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7085031b-6a52-4ab1-b101-a06b4a15ea66>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d4c3574-3e62-424b-a61a-fd42f955f448>\",\"WARC-IP-Address\":\"208.80.153.224\",\"WARC-Target-URI\":\"https://zh.m.wikipedia.org/wiki/%E6%8B%90%E7%82%B9\",\"WARC-Payload-Digest\":\"sha1:NHTGJ6DDB56WAVOUMFVVHZSA2T64BPXV\",\"WARC-Block-Digest\":\"sha1:JVO2AQWTSB4O63GVEYWRTTNAG6PQRVMA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251687725.76_warc_CC-MAIN-20200126043644-20200126073644-00109.warc.gz\"}"} |
https://www.converters.wezual.com/length/mm-to-in-converter/ | [
"mm\nin\n\n## Millimeter To Inch Conversion Chart\n\nMillimeter (mm) Inch (in)\n0.01 mm0.00039370078740157485 in\n0.1 mm0.003937007874015749 in\n1 mm0.03937007874015748 in\n2 mm0.07874015748031496 in\n5 mm0.1968503937007874 in\n10 mm0.3937007874015748 in\n20 mm0.7874015748031497 in\n50 mm1.968503937007874 in\n100 mm3.937007874015748 in\n500 mm19.68503937007874 in\n1000 mm39.37007874015748 in\n\n## Convert from millimeter to inch formula\n\nTotal inch =\n Total millimeter 25.4\n\nFor example, if you want to convert 160 millimeter to inch then,\n\n 160 25.4\n= 6.299212598425197 in\n\n## Convert from inch to millimeter formula\n\n Total millimeter = Total inch x 25.4\n\n6.299212598425197 in = 6.299212598425197 x 25.4 = 160 mm\n\n## Millimeter\n\nA millimeter is the smallest unit. There are 10 mm in one centimeter, if an object is smaller than one centimeter, you would use millimeters. A scientist measuring something under a magnifying glass might use millimeters to represent a tiny specimen.\n\n## Inch\n\nThe inch (symbol: in) is a unit of length in the (British) imperial and United States customary systems of measurement. Standards for the exact length of an inch have varied in the past, but since the adoption of the international yard during the 1950s and 1960s, it has been based on the metric system and defined as exactly 25.4 mm."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81104136,"math_prob":0.73974586,"size":1487,"snap":"2021-43-2021-49","text_gpt3_token_len":436,"char_repetition_ratio":0.19757248,"word_repetition_ratio":0.0,"special_character_ratio":0.40753195,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97029483,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T15:29:14Z\",\"WARC-Record-ID\":\"<urn:uuid:6d300f71-869c-4e92-9f66-9ec71d708afe>\",\"Content-Length\":\"107026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffa496d7-5d1a-42dd-81b9-4e4158c024ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce15f7f5-f305-4894-b9a0-45efb4c70601>\",\"WARC-IP-Address\":\"99.84.216.86\",\"WARC-Target-URI\":\"https://www.converters.wezual.com/length/mm-to-in-converter/\",\"WARC-Payload-Digest\":\"sha1:MDLD6IS3BQ26YUIERC57OYKWODBTBAJ2\",\"WARC-Block-Digest\":\"sha1:HMTDLOGHMA7JZGFT4V5GMDPPIVLTS44R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585696.21_warc_CC-MAIN-20211023130922-20211023160922-00477.warc.gz\"}"} |
https://projecteuclid.org/journals/revista-matematica-iberoamericana/volume-23/issue-1/Small-gaps-in-coefficients-of-L-functions-and-mathfrakB-free/rmi/1180728895.full | [
"Translator Disclaimer\nApril, 2007 Small gaps in coefficients of $L$-functions and $\\mathfrak{B}$-free numbers in short intervals\nEmmanuel Kowalski, Olivier Robert, Jie Wu\nRev. Mat. Iberoamericana 23(1): 281-326 (April, 2007).\n\n## Abstract\n\nWe discuss questions related to the non-existence of gaps in the series defining modular forms and other arithmetic functions of various types, and improve results of Serre, Balog and Ono, and Alkan using new results about exponential sums and the distribution of $\\mathfrak{B}$-free numbers.\n\n## Citation\n\nEmmanuel Kowalski. Olivier Robert. Jie Wu. \"Small gaps in coefficients of $L$-functions and $\\mathfrak{B}$-free numbers in short intervals.\" Rev. Mat. Iberoamericana 23 (1) 281 - 326, April, 2007.\n\n## Information\n\nPublished: April, 2007\nFirst available in Project Euclid: 1 June 2007\n\nzbMATH: 1246.11099\nMathSciNet: MR2351136\n\nSubjects:\nPrimary: 11F12 , 11F30 , 11F66 , 11L15 , 11N25\n\nKeywords: $\\mathfrak{B}$-free numbers , exponential sums , Fourier coefficients of modular forms , Rankin-Selberg convolution",
null,
"",
null,
""
] | [
null,
"https://projecteuclid.org/Content/themes/SPIEImages/Share_black_icon.png",
null,
"https://projecteuclid.org/images/journals/cover_rmi.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7552995,"math_prob":0.81343895,"size":519,"snap":"2023-14-2023-23","text_gpt3_token_len":134,"char_repetition_ratio":0.10097087,"word_repetition_ratio":0.0,"special_character_ratio":0.23314066,"punctuation_ratio":0.13978495,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95167476,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T15:44:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2347eef7-7900-45c9-b238-4cec86723d6e>\",\"Content-Length\":\"144361\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b29d734-90c3-441e-aa55-82e6f8fff659>\",\"WARC-Concurrent-To\":\"<urn:uuid:135b9aa4-4c9c-474d-ae73-e9884ea774f6>\",\"WARC-IP-Address\":\"107.154.79.145\",\"WARC-Target-URI\":\"https://projecteuclid.org/journals/revista-matematica-iberoamericana/volume-23/issue-1/Small-gaps-in-coefficients-of-L-functions-and-mathfrakB-free/rmi/1180728895.full\",\"WARC-Payload-Digest\":\"sha1:GBV5MDDEDGNFU5MOUOGSRMU2IFKNWIGL\",\"WARC-Block-Digest\":\"sha1:HB4US6ZWOBHWL4LGEOA2ZHLJWBZ6X7MQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648695.4_warc_CC-MAIN-20230602140602-20230602170602-00526.warc.gz\"}"} |
https://www.physicsforums.com/threads/integral-of-x-1-x-2-4x-5.854235/ | [
"# Integral of (x+1)/(x^2+4x+5)\n\nMember warned about posting homework questions in a technical forum section\n∫(x+1)/(x^2 + 4x +5) dx\nAnyone can help me doing it using Arctan and ln\n\nblue_leaf77\nHomework Helper\nLet's first hear out your own idea to solve this problem? What strategy you have in mind?\n\nLet's first hear out your own idea to solve this problem? What strategy you have in mind?\n∫(x+1)/(x^2+4x+5) dx = 1/2∫ (2x+2)/(x^2+4x+5) dx = 1/2∫(2x+4)/(x^2+4x+5) dx + 1/2∫(-2)/(1+(x+2)^2) dx\n[1/2 ln l(x^2+4x+5)l ] - arctan(x+2)\nSO??\n\nHallsofIvy\nHomework Helper\nDon't forget the \"constant of integration\".\n\n•",
null,
"TheTimeTraveler\nDon't forget the \"constant of integration\".\nThaanks\n\nphion\nGold Member\nTry re-writing the original expression into something more manageable."
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81636494,"math_prob":0.97637844,"size":210,"snap":"2020-45-2020-50","text_gpt3_token_len":67,"char_repetition_ratio":0.116504855,"word_repetition_ratio":0.30769232,"special_character_ratio":0.3,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934303,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-01T01:42:13Z\",\"WARC-Record-ID\":\"<urn:uuid:98fe7943-d539-4cd7-90d3-6c9804f714e3>\",\"Content-Length\":\"78409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e54d20f-f87b-4c54-915d-69a8141e70dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1476806-ecbb-4aea-9fbc-b940deb654e2>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/integral-of-x-1-x-2-4x-5.854235/\",\"WARC-Payload-Digest\":\"sha1:4JRYFHZPYXNAPD5WPJFJIF4KXBUPUD5G\",\"WARC-Block-Digest\":\"sha1:2XGHTGPD4LZ3AOKPMTCSVTSVD4ENHW4U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107922746.99_warc_CC-MAIN-20201101001251-20201101031251-00566.warc.gz\"}"} |
https://www.asknumbers.com/gallon-to-oz/34-gallon-to-oz.aspx | [
"# How Many Ounces in 34 Gallons?\n\n34 Gallons to ounces (oz) converter. How many oz in 34 gallons? We assume you are converting between US fluid ounces and US fluid gallons.\n\n34 Gallons equal to 4352 oz or there are 4352 oz in 34 gallons.\n\n←→\nstep\nRound:\nEnter Gallon\nEnter Ounce\n\n## How to convert 34 gallons to ounces (oz)?\n\nThe conversion factor from gallon to ounce is 128. To convert any value of gallon to oz, multiply the gallon value by the conversion factor.\n\nTo convert 34 gallons to oz, multiply 34 by 128, that makes 34 gallons equal to 4352 oz.\n\n34 gallons to ounces formula\n\nounce = gallon value * 128\n\nounce = 34 * 128\n\nounce = 4352\n\nCommon conversions from 34.x gallons to oz:\n(rounded to 3 decimals)\n\n• 34 gallons = 4352 oz\n• 34.1 gallons = 4364.8 oz\n• 34.2 gallons = 4377.6 oz\n• 34.3 gallons = 4390.4 oz\n• 34.4 gallons = 4403.2 oz\n• 34.5 gallons = 4416.0 oz\n• 34.6 gallons = 4428.8 oz\n• 34.7 gallons = 4441.6 oz\n• 34.8 gallons = 4454.4 oz\n• 34.9 gallons = 4467.2 oz\n\nWhat is a Gallon?\n\nGallon is an imperial and United States Customary systems volume unit. 1 US gallon = 128 US fluid ounces. 1 Imperial gallon = 160 Imperial fluid ounces. The symbol is \"gal\".\n\nWhat is a Fluid Ounce?\n\nFluid ounce is an Imperial and United States Customary measurement systems volume unit. 1 US fluid ounce = 128 mL. 1 UK fluid ounce = 28.4130625 mL. The symbol is \"fl oz\".\n\nCreate Conversion Table\nClick \"Create Table\". Enter a \"Start\" value (5, 100 etc). Select an \"Increment\" value (0.01, 5 etc) and select \"Accuracy\" to round the result."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.687541,"math_prob":0.9920673,"size":1078,"snap":"2022-27-2022-33","text_gpt3_token_len":349,"char_repetition_ratio":0.22905028,"word_repetition_ratio":0.009569378,"special_character_ratio":0.39146566,"punctuation_ratio":0.1570248,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970502,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T17:09:09Z\",\"WARC-Record-ID\":\"<urn:uuid:c39e6bd2-d462-4779-989c-b8cdfca71af9>\",\"Content-Length\":\"46751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66cad48b-d114-49f8-8916-4f7ab5af61cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:8126f416-4499-4f30-906a-64d03da53dba>\",\"WARC-IP-Address\":\"172.67.189.12\",\"WARC-Target-URI\":\"https://www.asknumbers.com/gallon-to-oz/34-gallon-to-oz.aspx\",\"WARC-Payload-Digest\":\"sha1:TCWHCDMDB53GBVD7M46QHM3GOGRZ4QSZ\",\"WARC-Block-Digest\":\"sha1:IQI7PDSXMPSYUSSVU5PBHIMSFAWAMFO3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103337962.22_warc_CC-MAIN-20220627164834-20220627194834-00245.warc.gz\"}"} |
https://pcdandf.com/pcdesign/index.php/magazine/10122-thermal-management-1506 | [
"",
null,
"",
null,
"### MAGAZINE\n\nWhen optimizing board area when considering thermal effects, the IPC charts aren’t enough.\n\nPrior to 2009 board designers concerned about PCB trace temperatures really had only one tool available to them. That tool started out as a poorly controlled, “tentative” set of curves created by two employees of the National Bureau of Standards in 1956 (Note 1). Those curves were redrawn and republished until they finally appeared in a military standard1 and ultimately in an IPC standard2. Board designers have been using them (and various calculators and Web pages based on them) ever since.\n\nFinally, in 2009 IPC published a new standard, IPC-2152, Standard for Determining Current Carrying Capacity in Printed Board Design.3 This is arguably the first, best-researched, well-controlled, most complete investigation into trace current and temperatures in the history of our industry. It is over 90 pages long and contains over 75 charts and tables. There are only two problems with the standard: it is bulky and awkward to use, and the problem of trace currents and\ntemperatures is too complicated to be able to distill into a collection of charts.\n\nIn this article we cover the basic background information needed to understand why traces heat up with current, convert the chart data into sets of equations, evaluate the charts and the equations using a thermal simulation model, and look at the sensitivity of the trace and formula results to other factors (such as adjacent traces and planes, different materials, etc.) (Note 2). The overriding conclusion of this article is that optimizing board area when considering thermal effects does not seem to be possible without the use of sophisticated thermal simulation software.\n\nBackground\n\nThe standard formula for the resistance of a copper trace or conductor is\n\nR = (ρ/A) * L [Eq. 1]\n\nwhere R = resistance in Ohms\nρ = resistivity of copper (typ. 1.7 µOhm*cm = 0.67 µOhm*in)\nA = cross-sectional area of the trace or conductor\nL = length of the trace or conductor\n\nThe resistivity of copper (indeed for most materials) is very sensitive to temperature, so its value is usually offered at a specific reference temperature, often 20˚C. The sensitivity of the resistivity of copper to a change in temperature is called the “thermal coefficient of resistivity” and is often given as 0.0039 per degree C. So the resistance of a copper conductor at any temperature different from the reference is found from Eq. 2,\n\nR = Rref(1+ α*ΔT) [Eq. 2]\n\nwhere Rref = resistance at the reference temperature (often 20˚)\n\nα = thermal coefficient of\nresistivity (0.0039/˚C), and\nΔT = the change in temperature\nfrom the reference\ntemperature.\n\nFrom Eq. 2 we can derive Eq. 3:\n\nΔT = ((R/Rref)-1) / α [Eq. 3]\n\nEq. 3 is the form whereby we can calculate the temperature change of a conductor if we know the resistance of the conductor at the ambient temperature and at some elevated temperature.\n\nWhen we pass a current (I) through a trace on a PCB, there is power dissipated in the trace given by the relationship I2R. This power dissipation heats the trace. At the same time, the trace cools by conduction into the board material, convection into the air surrounding the trace, and by radiation away from the trace. A stable temperature is reached when the I2R heating effect just equals the cooling effect. So we can speculate that the temperature change follows the relationship shown in Eq. 4, where w = width of the trace and Th = trace thickness (so w+Th is proportional to the surface area).",
null,
"[Eq. 4]\n\nIPC Curves and Equations\n\nA typical IPC curve from IPC-2152 is shown in FIGURE 1. It has cross-sectional area (in square mils) along the horizontal axis and current along the vertical axis. Then individual curves are provided for various changes in temperature.",
null,
"Figure 1. Typical IPC curve. Source: IPC-2152 Figure 5-1, p. 5.\n\nWe preferred working with curves which had current along the horizontal axis and the change in temperature on the vertical axis. This is more consistent with Eq. 4. So we converted a variety of curves in IPC-2152 to this orientation. We did that with the aid of a digitizing program (Note 3). We then fit equations to the resulting curves. A typical result for 2 oz. curves is shown in FIGURE 2.",
null,
"Figure 2. Converted 2 oz. IPC external data fitted with Eq. 4.\n\nThe IPC data are shown as the black curves in the figure, and our fitted equation (Eq. 5) is shown in each of the dotted red curves. Although not shown in this article, the curves and Eq. 5 matched equally well for all external trace sizes and currents. We believe an equation like Eq. 5 is much easier to deal with than are a large set of curves.\n\nΔT = 215.3 * C2 * W-1.15 * Th-1.0 [Eq. 5]\n\nwhere ΔT = the change in temperature\nfrom 20˚C\nC = current in Amperes\nW = trace width in mils\nTh = trace thickness in mils (Note 4)\n\nIPC-2152 also provides data for internal traces and traces in a vacuum. Perhaps the most surprising result in the standard is that internal traces actually run cooler than do external traces! This is a major change from the previous standard. The traces run cooler because the dielectric materials conduct heat away from the trace better than does the air.\n\nFollowing the procedure above, we converted the internal and vacuum data to the format shown in Figure 2 and developed equations for those curves. The resulting equations are summarized in Appendix 1. The resulting equations are not as straightforward as Eq. 5, having different coefficients for some different trace widths. This can pose a problem when trying to design high-current traces on internal layers or in a vacuum, but the equations can, in fact, be pretty easily handled by calculators with appropriate programming (Note 5). In the cases where the formulas are different within a trace thickness, the differences are small and probably reflect errors, uncertainties and variations as a result of various graphical drawings and manipulations.\n\nThermal Simulations\n\nWe used a computer thermal simulation software tool (Note 6) to validate the IPC curves by modeling the IPC test board used for collecting the IPC data. The configuration of the test board used for the IPC testing is freely downloadable from IPC (Note 7). FIGURE 3 shows one of the images produced by the simulation and provides a good illustration of what the test board looks like. This particular simulation was of a 2 oz., 200 mils wide trace with 12A of current applied. The entire trace is 12\" long with “sense” probes 3\" in from each side. Insetting the probes from the trace edge helps minimize the effects of thermal gradients that might be caused by the presence of the pads. The resistance is measured between these probes with zero current (20˚C) and then with the test current (Note 8). The change in temperature is derived by applying Eq. 3.",
null,
"Figure 3. TRM model of a 2 oz., 200 mils wide trace.\n\nFIGURE 4 shows the result of this simulation. The maximum temperature at the midpoint of the trace is 44.9˚C, which is a 24.9˚C change from the ambient. Note there are very definite thermal gradients away from the midpoint of the trace.",
null,
"Figure 4. Result of simulation of the model shown in Figure 3.\n\nThermal simulations were run for a variety of 2 oz. external trace sizes and currents. Results of these simulations are plotted in FIGURE 5. The simulation results are the red squares on the diagram. The simulations fit the IPC derived curves and the equations very well. This high degree of congruence gives us a high degree of confidence in the IPC data, the equations and the simulations.",
null,
"Figure 5. Thermal simulation results for selected curves.\n\nSimilar simulations were run for internal traces and traces in a vacuum. All simulations fit the IPC data and their derived equations quite well.\n\nSensitivities and Sensitivity Simulations\n\nBe careful about certain sensitivities. One is how sensitive the change in temperature is to a change in current for very narrow traces. FIGURE 6 illustrates a close-up view of 1 oz. 5 mil and 10 mil wide traces. Note how the 5 mil wide trace change in temperature ranges from about 6˚ at a 500 mA current to about 25˚ at a 1.0A current to about 55˚ at 1.5A and 100˚ at 2.0A. A relatively small change in current can cause a really large change in temperature. The situation is slightly better for thicker traces.",
null,
"Figure 6. Narrow curves are very sensitive to changes in current.\n\nWe ran several thermal simulations to examine the sensitivity of trace heating to a variety of factors. The control simulation was a 6\" long, 1.0 oz., 200 mil wide trace carrying 15A, with pads at each end. A schematic of the model (similar to that shown in Figure 3) is shown in FIGURE 7. In this control situation, the change in temperature was 94.9˚, as shown in FIGURE 8.",
null,
"Figure 7. Schematic of thermal simulation of 6˚, 1 oz., 200 mil wide trace.",
null,
"Figure 8. Thermal profile of the simulation in Figure 7.\n\nOne thing to notice from this simulation is the thermal profile. The hottest portion of the trace is at the midpoint (about 114.9˚C, which is 94.9˚ above ambient), as one would expect. But the temperature near the pads is much cooler, in this case about 60˚ above ambient. So when we are thinking about the maximum temperature of a trace, it is important to consider where that maximum temperature is going to occur.\n\nIf we do nothing but change the length of the trace, the results change:",
null,
"The difference is likely caused by the impact of the pads. In this simulation, the pads were not connected to a power plane. If they were connected to a power plane, the plane would act as a heat sink, likely cooling the trace even more. Thus, changing the trace length can impact the change in temperature.\n\nIt takes time for a trace to reach a stabilized temperature. The time can be impacted by several things. But on the other hand, the simulations we looked at were reasonably uniform in this respect. FIGURE 9 illustrates the transient response for our control simulation. Although the temperature will continue to rise for a very long time (up to 30 min.), it has reached very close to its maximum temperature within about 5 min. This may or may not have implications for board designs.",
null,
"Figure 9. Transient response for 1 oz., 6”, 200 mil wide trace with a switched current of 15A.\n\nAn underlying plane has a major effect on the change in temperature. We simulated the case of a plane on the opposite side of a 63 mil thick board, and also a plane layer 10 mils under the trace. In both cases we modeled a continuous plane across the entire board. But all that is really needed is a plane that extends several trace widths in both directions from the trace. Without a plane, the change in temperature was 94.9°. With a plane on the opposite side of the board, the change in temperature was only 47.9°. But with a plane 10 mils directly underneath the trace (typical of many modern boards today), the change in temperature was only 33.7°C.\n\nFigures 10 and 11 show the thermal gradients in the dielectric layer directly under the trace layer for the simulations with no plane and with a plane 10 mils under the trace. The dielectric is much hotter for the simulation without the plane. But the impact that the plane has in spreading the heat away from the trace is clear in Figure 11.",
null,
"Figure 10. Thermal gradient in dielectric with no plane.",
null,
"Figure 11. Thermal gradient in dielectric with a closely spaced plane.\n\nWe ran a simulation of our control trace next to a second, 200 mil wide trace placed 8 mils away (edge-to-edge.) The adjacent trace carried no current. The control trace cooled a little to a temperature change of 84.9˚. But the adjacent trace warmed to a temperature change of 51.5˚. This may or may not be an issue in board designs.\n\nFinally, we looked at a different material. All IPC trace data are taken with traces on a board with polyimide dielectric. Polyimide has a slightly higher thermal conductivity than does FR-4, for example. Thus, polyimide will result in a slightly cooler trace than will standard FR-4. All our simulations above were done with the dielectric assumed to be polyimide. When the control simulation was run with FR-4 instead, the trace heated to a temperature change of 122.9˚ (instead of 94.9˚).\n\nBoard material is a very difficult parameter to analyze. The material properties will vary between manufacturers. Thermal conductivity will vary depending on the direction; that is, the thermal properties in the X-Y direction will be different from the thermal properties in the Z direction. Materials with “tighter” weave will probably cool better than will materials with “looser” weaves. There are no dependable standards for measuring and reporting thermal conductivity. Therefore, board thermal characteristics may change somewhat if the material is changed (either knowingly or unknowingly).\n\nAfter looking at several different simulations based on our control simulation of a 1 oz. 200 mil wide, 6\" long trace carrying 15A and a polyimide board material, the results are a little less than comforting! They are summarized in TABLE 1.\n\nTable 1. Results of Control Simulation",
null,
"Conclusions\n\nAs a result of the simulations, we can offer a couple of conclusions:\n\n1. It is almost universally true that variations in results increase with temperature. That is, at low temperatures (associated with lower currents) the various parameters we have looked at here make little or no difference. At intermediate temperatures, the differences would by much milder than reported here.\n2. Except for material selection, the IPC curves generally represent a “worst case” scenario. Any other variation we introduce lowers the trace temperature, sometimes considerably so.\n3. We have not addressed in this paper some of the more complex shapes such as fillets in thermal reliefs or copper plating in drill holes. While we can speculate that the I2R heating in these cases might be similar to other simulations, the cooling situations might be substantially more complicated.\n\nOne overriding conclusion is inescapable. The relationship between trace currents and temperatures is very complex. It is too complex to model with a single set of equations or curves. Since board real estate is very expensive, board designers usually want to use the smallest traces their fabricators will permit while still meeting requirements. Optimizing board area when considering thermal effects does not seem to be possible without the use of sophisticated thermal simulation software.\n\nReferences\n\n1. MIL-STD-275E, Printed Wiring For Electronic Equipment, Dec. 31, 1984.\n2. IPC-2221, Generic Standard on Printed Board Design, February 1998.\n3. IPC-2152, Standard for Determining Current Carrying Capacity in Printed Board Design, August 2009.\n4. Douglas Brooks and Johannes Adam, “Trace Currents and Temperatures Revisited,” 2015, available at www.ultracad.com.\n\nNotes\n\n1. Ref. 3 contains a good historical summary of the development of the original charts. Ref. 4 contains copies of some of these early references.\n2. This paper summarizes the results of a much more detailed paper by the authors4. Many of the details regarding curve fitting and thermal simulations are covered in much more detail in that paper.\n3. We used a program called GetData Graph Digitizer, available at www.getdata-graph-digitizer.com.\n4. This equation has thickness in mils, while all the curves show thickness in ounces. We used the following conversions: 0.5 oz. = 0.65 mils; 1.0 oz. = 1.35 mils; 2.0 oz. = 2.7 mils; 3.0 oz. = 3.9 mils.\n5. UltraCAD’s PCB4.0 Trace Calculator handles these calculations easily.\n6. The software tool we use is called TRM (Thermal Risk Management) developed by Dr. Johannes Adam. See adam-research.com for more information regarding the tool and these types of simulations.\n7. IPC-TM-650, Test Methods Manual, method 2.5.4.1A “Conductor Temperature Rise Due to Current Changes in Conductors” available at www.ipc.org/test-methods.aspx.\n8. The temperature change can be determined by measuring the change in resistance and applying Eq. 3, or measuring the change in voltage, dividing by the current (to get the change in resistance), and then applying Eq. 3. In either case, the change in temperature that is measured is the average change over the entire trace length, not the peak temperature change at the midpoint of the trace.\n\nDouglas Brooks, Ph.D., is owner of UltraCAD Design, a PCB design service bureau and author of PCB Currents: How They Flow, How They React; This email address is being protected from spambots. You need JavaScript enabled to view it.. He will speak at PCB West in September in Santa Clara, CA. Johannes Adam, Ph.D., CID, is founder of ADAM Research, a technical consultant for electronics companies and a software developer, and the author of the Thermal Risk Management simulation program.",
null,
""
] | [
null,
"https://pcdandf.com/pcdesign/modules/mod_twitter_widget_slider/assets/twitter-icon.png",
null,
"https://pcdandf.com/pcdesign/modules/mod_facebookslider/assets/facebook-icon.png",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooksEq4.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks1.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks2.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks3.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks4.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks5.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks6.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks7.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks8.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brookesTraceTable.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks9.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks10.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brooks11.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/ArticleImages/1506/brookesTable1.jpg",
null,
"https://pcdandf.com/pcdesign/images/stories/mag_covers/2021/2101cover.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9310842,"math_prob":0.9575892,"size":15216,"snap":"2022-27-2022-33","text_gpt3_token_len":3419,"char_repetition_ratio":0.16486984,"word_repetition_ratio":0.03892565,"special_character_ratio":0.22213459,"punctuation_ratio":0.11215898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96985495,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T22:18:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c5e18553-8a64-4946-ad8a-5c45964ac5da>\",\"Content-Length\":\"85592\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:200f6ce3-4c5a-4db1-af3f-a6506225f56d>\",\"WARC-Concurrent-To\":\"<urn:uuid:1367b891-1ed3-41d4-8223-ca3df7afb759>\",\"WARC-IP-Address\":\"64.64.222.241\",\"WARC-Target-URI\":\"https://pcdandf.com/pcdesign/index.php/magazine/10122-thermal-management-1506\",\"WARC-Payload-Digest\":\"sha1:VKWX6VA3RAXKTQM62FPVWM2WHAND67GC\",\"WARC-Block-Digest\":\"sha1:F4RABGMNW5SXF6YLBSBCJNK5L4NUBSHU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104678225.97_warc_CC-MAIN-20220706212428-20220707002428-00525.warc.gz\"}"} |
https://lists.lfaidata.foundation/g/janusgraph-users/topic/count_query_optimization/81274509?p=,,,20,0,0,0::recentpostdate/sticky,,,20,2,200,81274509,previd%3D1619294718558563860,nextid%3D1615888176540130863&previd=1619294718558563860&nextid=1615888176540130863 | [
"#### Count Query Optimization\n\nVinayak Bali\n\nHi All,\n\nThe schema consists of A, B as nodes, and E as an edge with some other nodes and edges.\nA: 183468\nB: 437317\nE: 186513\n\nQuery: g.V().has('property1', 'A').as('v1').outE().has('property1','E').as('e').inV().has('property1', 'B').as('v2').select('v1','e','v2').dedup().count()\nOutput: 200166\nTime Taken: 1min\n\nQuery: g.V().has('property1', 'A').aggregate('v').outE().has('property1','E').aggregate('e').inV().has('property1', 'B').aggregate('v').select('v').dedup().as('vetexCount').select('e').dedup().as('edgeCount').select('vetexCount','edgeCount').by(unfold().count())\nOutput: ==>[vetexCount:383633,edgeCount:200166]\nTime: 3.5 mins\nProperty1 is the index.\nHow can I optimize the queries because minutes of time for count query is not optimal. Please suggest different approaches.\n\nThanks & Regards,\nVinayak\n\nAMIYA KUMAR SAHOO\n\nHi Vinayak,\n\nFor query 1.\n\nWhat is the degree centrality of vertex having property A. How much percentage satisfy out edge having property E. If it is small, VCI will help to increase speed for this traversal.\n\nYou can give it a try to below query, not sure if it will speed up.\n\ng.V().has('property1', 'A').\noutE().has('property1','E').\ninV().has('property1', 'B').\ndedup().by(path()).\ncount()\n\nOn Fri, 12 Mar 2021, 13:30 Vinayak Bali, <vinayakbali16@...> wrote:\nHi All,\n\nThe schema consists of A, B as nodes, and E as an edge with some other nodes and edges.\nA: 183468\nB: 437317\nE: 186513\n\nQuery: g.V().has('property1', 'A').as('v1').outE().has('property1','E').as('e').inV().has('property1', 'B').as('v2').select('v1','e','v2').dedup().count()\nOutput: 200166\nTime Taken: 1min\n\nQuery: g.V().has('property1', 'A').aggregate('v').outE().has('property1','E').aggregate('e').inV().has('property1', 'B').aggregate('v').select('v').dedup().as('vetexCount').select('e').dedup().as('edgeCount').select('vetexCount','edgeCount').by(unfold().count())\nOutput: ==>[vetexCount:383633,edgeCount:200166]\nTime: 3.5 mins\nProperty1 is the index.\nHow can I optimize the queries because minutes of time for count query is not optimal. Please suggest different approaches.\n\nThanks & Regards,\nVinayak\n\nHi all,\n\nI also thought about the vertex centrex index first, but I am afraid that the VCI can only help to filter the edges to follow, but it does not help in counting the edges. A better way to investigate is to leave out the final inV() step. So, e.g. you can count the number of distinct v2 id's with:\ng.V().has('property1', 'A').outE().has('property1','E').id().map{it.get().getOutVertexId()}.dedup().count()\n\nNote that E().id() returns RelationIdentifier() objects that contain both the edge id, the inVertexId and the OutVertexId. This should diminish the number of storage backend calls.\n\nBest wishes, Marc\n\nAMIYA KUMAR SAHOO\n\nHi Marc,\n\nVinayak query has a filter on inV property (property1 = B), hence I did not stop at edge itself.\n\nIf this kind of query is frequent, decision can be made if the same value makes sense to keep duplicate at both vertex and edge. That will help eliminate the traversal to the out vertex.\n\nRegards,\nAmiya\n\nBoxuan Li\n\nApart from rewriting the query, there are some config options (https://docs.janusgraph.org/basics/configuration-reference/#query) worth trying:\n\n1) Turn on query.batch\n2) Turn off\nquery.fast-property\n\nVinayak Bali\n\nHi All,\n\nThe solution from BO XUAN LI to change config files worked for the following query:\ng.V().has('property1', 'A').as('v1').outE().has('property1','E').as('e').inV().has('property1', 'B').as('v2').select('v1','e','v2').dedup().count()\n\nBut not for the following query:\ng.V().has('property1', 'A').aggregate('v').outE().has('property1','E').aggregate('e').inV().has('property1', 'B').aggregate('v').select('v').dedup().as('vetexCount').select('e').dedup().as('edgeCount').select('vetexCount','edgeCount').by(unfold().count())\n\nI need an optimized query to get both nodes, as well as edges, count. Request you to provide your valuable feedback and help me to achieve it.\n\nThanks & Regards,\nVinayak\n\nOn Sat, Mar 13, 2021 at 8:16 AM BO XUAN LI <liboxuan@...> wrote:\nApart from rewriting the query, there are some config options (https://docs.janusgraph.org/basics/configuration-reference/#query) worth trying:\n\n1) Turn on query.batch\n2) Turn off\nquery.fast-property\n\nHi Vinayak,\n\nReferring to you last post, what happens if you use aggregate(local, 'v') and aggregate(local, 'e'). The local modifier makes the aggregate() step lazy, which hopefully gives janusgraph more opportunity to batch the storage backend requests.\nhttps://tinkerpop.apache.org/docs/current/reference/#store-step\n\nBest wishes, Marc\n\nVinayak Bali\n\nHi Marc,\n\nUsing local returns the output after each count. For example:\n\n==>[vetexCount:184439,edgeCount:972]\n==>[vetexCount:184440,edgeCount:973]\n==>[vetexCount:184441,edgeCount:974]\n==>[vetexCount:184442,edgeCount:975]\n==>[vetexCount:184443,edgeCount:976]\n==>[vetexCount:184444,edgeCount:977]\n==>[vetexCount:184445,edgeCount:978]\n==>[vetexCount:184446,edgeCount:979]\n==>[vetexCount:184447,edgeCount:980]\n==>[vetexCount:184448,edgeCount:981]\n==>[vetexCount:184449,edgeCount:982]\n==>[vetexCount:184450,edgeCount:983]\n==>[vetexCount:184451,edgeCount:984]\n==>[vetexCount:184452,edgeCount:985]\n==>[vetexCount:184453,edgeCount:986]\n==>[vetexCount:184454,edgeCount:987]\n==>[vetexCount:184455,edgeCount:988]\n==>[vetexCount:184456,edgeCount:989]\n==>[vetexCount:184457,edgeCount:990]\n==>[vetexCount:184458,edgeCount:991]\n==>[vetexCount:184459,edgeCount:992]\n==>[vetexCount:184460,edgeCount:993]\n==>[vetexCount:184461,edgeCount:994]\n==>[vetexCount:184462,edgeCount:995]\n==>[vetexCount:184463,edgeCount:996]\n==>[vetexCount:184464,edgeCount:997]\n==>[vetexCount:184465,edgeCount:998]\n\nYou can suggest some other approach too. I really need it working.\n\nThanks & Regards,\nVinayak\n\nOn Wed, Mar 17, 2021 at 5:54 PM <hadoopmarc@...> wrote:\nHi Vinayak,\n\nReferring to you last post, what happens if you use aggregate(local, 'v') and aggregate(local, 'e'). The local modifier makes the aggregate() step lazy, which hopefully gives janusgraph more opportunity to batch the storage backend requests.\nhttps://tinkerpop.apache.org/docs/current/reference/#store-step\n\nBest wishes, Marc\n\nNicolas Trangosi <nicolas.trangosi@...>\n\nHi,\nYou may try to use denormalization by setting property1 from inV also on edge.\nThen once edges are updated, following query should work:\n\ng.V().has('property1', 'A').aggregate('v').outE().has('property1','E').has('inVproperty1', 'B').aggregate('e').inV().aggregate('v').select('v').dedup().as('vetexCount').select('e').dedup().as('edgeCount').select('vetexCount','edgeCount').by(unfold().count())\n\nLe mer. 17 mars 2021 à 14:05, Vinayak Bali <vinayakbali16@...> a écrit :\nHi Marc,\n\nUsing local returns the output after each count. For example:\n\n==>[vetexCount:184439,edgeCount:972]\n==>[vetexCount:184440,edgeCount:973]\n==>[vetexCount:184441,edgeCount:974]\n==>[vetexCount:184442,edgeCount:975]\n==>[vetexCount:184443,edgeCount:976]\n==>[vetexCount:184444,edgeCount:977]\n==>[vetexCount:184445,edgeCount:978]\n==>[vetexCount:184446,edgeCount:979]\n==>[vetexCount:184447,edgeCount:980]\n==>[vetexCount:184448,edgeCount:981]\n==>[vetexCount:184449,edgeCount:982]\n==>[vetexCount:184450,edgeCount:983]\n==>[vetexCount:184451,edgeCount:984]\n==>[vetexCount:184452,edgeCount:985]\n==>[vetexCount:184453,edgeCount:986]\n==>[vetexCount:184454,edgeCount:987]\n==>[vetexCount:184455,edgeCount:988]\n==>[vetexCount:184456,edgeCount:989]\n==>[vetexCount:184457,edgeCount:990]\n==>[vetexCount:184458,edgeCount:991]\n==>[vetexCount:184459,edgeCount:992]\n==>[vetexCount:184460,edgeCount:993]\n==>[vetexCount:184461,edgeCount:994]\n==>[vetexCount:184462,edgeCount:995]\n==>[vetexCount:184463,edgeCount:996]\n==>[vetexCount:184464,edgeCount:997]\n==>[vetexCount:184465,edgeCount:998]\n\nYou can suggest some other approach too. I really need it working.\n\nThanks & Regards,\nVinayak\n\nOn Wed, Mar 17, 2021 at 5:54 PM <hadoopmarc@...> wrote:\nHi Vinayak,\n\nReferring to you last post, what happens if you use aggregate(local, 'v') and aggregate(local, 'e'). The local modifier makes the aggregate() step lazy, which hopefully gives janusgraph more opportunity to batch the storage backend requests.\nhttps://tinkerpop.apache.org/docs/current/reference/#store-step\n\nBest wishes, Marc\n\n--",
null,
"Ce message et ses pièces jointes peuvent contenir des informations confidentielles ou privilégiées et ne doivent donc pas être diffusés, exploités ou copiés sans autorisation.\nSi vous avez reçu ce message par erreur, veuillez le signaler a l'expéditeur et le détruire ainsi que les pièces jointes.\nLes messages électroniques étant susceptibles d'altération, DCbrain décline toute responsabilité si ce message a été altéré, déformé ou falsifié. Merci.\n\nThis message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, DCbrain is not liable for messages that have been modified, changed or falsified. Thank you.\n\nHi Vinayak,\n\nAnother attempt, this one is very similar to the one that works.\n\ngremlin> graph = JanusGraphFactory.open('conf/janusgraph-inmemory.properties')\n==>standardjanusgraph[inmemory:[127.0.0.1]]\ngremlin> g = graph.traversal()\n==>graphtraversalsource[standardjanusgraph[inmemory:[127.0.0.1]], standard]\n==>null\n\ngremlin> g.V().as('v1').outE().as('e').inV().as('v2').union(select('v1'), select('v2')).dedup().count()\n16:12:39 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes\n==>12\n\ngremlin> g.V().as('v1').outE().as('e').inV().as('v2').select('e').dedup().count()\n16:15:30 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes\n==>17\n\ngremlin> g.V().as('v1').outE().as('e').inV().as('v2').union(\n......1> union(select('v1'), select('v2')).dedup().count(),\n......2> select('e').dedup().count().as('ecount')\n......3> )\n16:27:42 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes\n==>12\n==>17\n\nBest wishes, Marc\n\nAMIYA KUMAR SAHOO\n\nHi Vinayak,\n\nMay be try below.\n\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count()) // I do not think dedup is required for your use case, can try both with and without dedup\n\nRegards, Amiya\n\nVinayak Bali\n\nHi Amiya,\n\nWith dedup:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\nOutput: ==>[edgeCount:200166,vertexCount:34693]\n\nwithout dedup:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().count())\nOutput: ==>[edgeCount:200166,vertexCount:400332]\n\nBoth queries are taking approx 3 sec to run.\n\nQuery: g.V().has('property1', 'A').aggregate('v').outE().has('property1','E').aggregate('e').inV().has('property1', 'B').aggregate('v').select('v').dedup().as('vetexCount').select('e').dedup().as('edgeCount').select('vetexCount','edgeCount').by(unfold().count())\nOutput: ==>[vetexCount:383633,edgeCount:200166]\nTime: 3.5 mins\n\nEdge Count is the same for all the queries but getting different vertexCount. Which one is the right vertex count??\n\nThanks & Regards,\nVinayak\n\nOn Thu, Mar 18, 2021 at 11:18 AM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nMay be try below.\n\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count()) // I do not think dedup is required for your use case, can try both with and without dedup\n\nRegards, Amiya\n\nAMIYA KUMAR SAHOO\n\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\nVinayak Bali\n\nAmiya - I need to check the data, there is some mismatch with the counts.\n\nConsider we have more than one relation to get the count. How can we modify the query?\n\nFor example:\n\nA->E->B query is as follows:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nA->E->B->E1->C->E2->D\n\nWhat changes can be made in the query ??\n\nThanks\n\nOn Thu, Mar 18, 2021 at 1:59 PM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\nAMIYA KUMAR SAHOO\n\nHi Vinayak,\n\nTry below. If it works for you, you can add E2 and D similarly.\n\ng.V().has('property1', 'A').\noutE().has('property1', 'E').as('e').\ninV().has('property1', 'B').\noutE().has('property1', 'E1').as('e').\nwhere (inV().has('property1', 'C')).\nselect (all, 'e').fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nRegards,\nAmiya\n\nOn Thu, 18 Mar 2021, 15:47 Vinayak Bali, <vinayakbali16@...> wrote:\nAmiya - I need to check the data, there is some mismatch with the counts.\n\nConsider we have more than one relation to get the count. How can we modify the query?\n\nFor example:\n\nA->E->B query is as follows:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nA->E->B->E1->C->E2->D\n\nWhat changes can be made in the query ??\n\nThanks\n\nOn Thu, Mar 18, 2021 at 1:59 PM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\nVinayak Bali\n\nHi All,\n\nAdding these properties in the configuration file affects edge traversal. Retrieving a single edge takes 7 mins of time.\n1) Turn on query.batch\n2) Turn off\nquery.fast-property\nCount query is faster but edge traversal becomes more expensive.\nIs there any other way to improve count performance without affecting other queries.\n\nThanks & Regards,\nVinayak\n\nOn Fri, Mar 19, 2021 at 1:53 AM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nTry below. If it works for you, you can add E2 and D similarly.\n\ng.V().has('property1', 'A').\noutE().has('property1', 'E').as('e').\ninV().has('property1', 'B').\noutE().has('property1', 'E1').as('e').\nwhere (inV().has('property1', 'C')).\nselect (all, 'e').fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nRegards,\nAmiya\n\nOn Thu, 18 Mar 2021, 15:47 Vinayak Bali, <vinayakbali16@...> wrote:\nAmiya - I need to check the data, there is some mismatch with the counts.\n\nConsider we have more than one relation to get the count. How can we modify the query?\n\nFor example:\n\nA->E->B query is as follows:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nA->E->B->E1->C->E2->D\n\nWhat changes can be made in the query ??\n\nThanks\n\nOn Thu, Mar 18, 2021 at 1:59 PM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\nBoxuan Li\n\nHave you tried keeping query.batch = true AND query.fast-property = true?\n\nRegards,\nBoxuan\n\nOn Mar 22, 2021, at 8:28 PM, Vinayak Bali <vinayakbali16@...> wrote:\n\nHi All,\n\nAdding these properties in the configuration file affects edge traversal. Retrieving a single edge takes 7 mins of time.\n1) Turn on query.batch\n2) Turn off\nquery.fast-property\nCount query is faster but edge traversal becomes more expensive.\nIs there any other way to improve count performance without affecting other queries.\n\nThanks & Regards,\nVinayak\n\nOn Fri, Mar 19, 2021 at 1:53 AM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nTry below. If it works for you, you can add E2 and D similarly.\n\ng.V().has('property1', 'A').\noutE().has('property1', 'E').as('e').\ninV().has('property1', 'B').\noutE().has('property1', 'E1').as('e').\nwhere (inV().has('property1', 'C')).\nselect (all, 'e').fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nRegards,\nAmiya\n\nOn Thu, 18 Mar 2021, 15:47 Vinayak Bali, <vinayakbali16@...> wrote:\nAmiya - I need to check the data, there is some mismatch with the counts.\n\nConsider we have more than one relation to get the count. How can we modify the query?\n\nFor example:\n\nA->E->B query is as follows:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nA->E->B->E1->C->E2->D\n\nWhat changes can be made in the query ??\n\nThanks\n\nOn Thu, Mar 18, 2021 at 1:59 PM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\nVinayak Bali\n\nHi All,\n\nquery.batch = true AND query.fast-property = true\nthis doesn't work. facing the same problem. Is there any other way??\n\nThanks & Regards,\nVinayak\n\nOn Mon, Mar 22, 2021 at 6:06 PM Boxuan Li <liboxuan@...> wrote:\nHave you tried keeping query.batch = true AND query.fast-property = true?\n\nRegards,\nBoxuan\n\nOn Mar 22, 2021, at 8:28 PM, Vinayak Bali <vinayakbali16@...> wrote:\n\nHi All,\n\nAdding these properties in the configuration file affects edge traversal. Retrieving a single edge takes 7 mins of time.\n1) Turn on query.batch\n2) Turn off\nquery.fast-property\nCount query is faster but edge traversal becomes more expensive.\nIs there any other way to improve count performance without affecting other queries.\n\nThanks & Regards,\nVinayak\n\nOn Fri, Mar 19, 2021 at 1:53 AM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nTry below. If it works for you, you can add E2 and D similarly.\n\ng.V().has('property1', 'A').\noutE().has('property1', 'E').as('e').\ninV().has('property1', 'B').\noutE().has('property1', 'E1').as('e').\nwhere (inV().has('property1', 'C')).\nselect (all, 'e').fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nRegards,\nAmiya\n\nOn Thu, 18 Mar 2021, 15:47 Vinayak Bali, <vinayakbali16@...> wrote:\nAmiya - I need to check the data, there is some mismatch with the counts.\n\nConsider we have more than one relation to get the count. How can we modify the query?\n\nFor example:\n\nA->E->B query is as follows:\ng.V().has('property1', 'A').\noutE().has('property1','E').\nwhere(inV().has('property1', 'B')). fold().\nproject('edgeCount', 'vertexCount').\nby(count(local)).\nby(unfold().bothV().dedup().count())\n\nA->E->B->E1->C->E2->D\n\nWhat changes can be made in the query ??\n\nThanks\n\nOn Thu, Mar 18, 2021 at 1:59 PM AMIYA KUMAR SAHOO <amiyakr.sahoo91@...> wrote:\nHi Vinayak,\n\nCorrect vertex count is ( 400332 non-unique, 34693 unique).\n\ng.V().has('property1', 'A').aggregate('v'), all the vertex having property1 = A might be getting included in count in your second query because of eager evaluation (does not matter they have outE with property1 = E or not)\n\nRegards,\nAmiya\n\n 1 - 18 of 18"
] | [
null,
"https://dcbrain.com/images_mail/LOG.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57585377,"math_prob":0.6998018,"size":24468,"snap":"2023-14-2023-23","text_gpt3_token_len":7174,"char_repetition_ratio":0.22437051,"word_repetition_ratio":0.6973838,"special_character_ratio":0.32094982,"punctuation_ratio":0.27194694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95554084,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T11:21:42Z\",\"WARC-Record-ID\":\"<urn:uuid:c8c776c9-6357-4165-b145-a0565e99c443>\",\"Content-Length\":\"172707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72407a1e-5fed-4d99-b6b9-8a2641242a6f>\",\"WARC-Concurrent-To\":\"<urn:uuid:718fe69d-9cb7-4f54-81c1-89b19e1e5e45>\",\"WARC-IP-Address\":\"173.255.221.194\",\"WARC-Target-URI\":\"https://lists.lfaidata.foundation/g/janusgraph-users/topic/count_query_optimization/81274509?p=,,,20,0,0,0::recentpostdate/sticky,,,20,2,200,81274509,previd%3D1619294718558563860,nextid%3D1615888176540130863&previd=1619294718558563860&nextid=1615888176540130863\",\"WARC-Payload-Digest\":\"sha1:D6A6CZVNNN3REOVVK4BNZDNYKQFHIQ7M\",\"WARC-Block-Digest\":\"sha1:HYNSAEEH3RX3YQVXDM5RXS7RSOSPN3KF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644855.6_warc_CC-MAIN-20230529105815-20230529135815-00426.warc.gz\"}"} |
https://physics.stackexchange.com/questions/21851/prove-that-negative-absolute-temperatures-are-actually-hotter-than-positive-abso | [
"# Prove that negative absolute temperatures are actually hotter than positive absolute temperatures\n\nCould someone provide me with a mathematical proof of why, a system with an absolute negative Kelvin temperature (such that of a spin system) is hotter than any system with a positive temperature (in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system).\n\nFrom a fundamental (i.e., statistical mechanics) point of view, the physically relevant parameter is coldness = inverse temperature $\\beta=1/k_BT$. This changes continuously. If it passes from a positive value through zero to a negative value, the temperature changes from very large positive to infinite (with indefinite sign) to very large negative. Therefore systems with negative temperature have a smaller coldness and hence are hotter than systems with positive temperature.\n\nSome references:\n\nD. Montgomery and G. Joyce. Statistical mechanics of “negative temperature” states. Phys. Fluids, 17:1139–1145, 1974.\nhttp://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730013937_1973013937.pdf\n\nE.M. Purcell and R.V. Pound. A nuclear spin system at negative temperature. Phys. Rev., 81:279–280, 1951.\nhttp://prola.aps.org/abstract/PR/v81/i2/p279_1\n\nSection 73 of Landau and E.M. Lifshits. Statistical Physics: Part 1,\n\nExample 9.2.5 in my online book Classical and Quantum Mechanics via Lie algebras.\n\n• \"From a fundamental (i.e., statistical mechanics) point of view, the physically relevant parameter is coldness\". I am afraid, that is not correct. It is energy, as shown in this paper. For instance, (inverse) temperature does generally not allow determining the direction of heat flow, because it is only a derivative of $S$. – jkds Oct 15 '18 at 11:26\n• @jkds: Of course, internal energy, temperature, pressure, etc. are all physically relevant. What I had meant is that coldness (inverse) temperature is more relevant than temperature itself. – Arnold Neumaier Oct 15 '18 at 12:18\n• Sure, but what the authors showed was that temperature is not in one-to one correspondence to a system's macrostate. The same system can have the same temperature at completely different internal energies. So temperature, unlike $E/N$, can be a misleading descriptor of the system. – jkds Oct 22 '18 at 9:36\n• @jkds: In the canonical ensemble, the macrostate is determined by the temperature; in other ensembles (such as the grand canonical one), one needs of course additional parameters. Then temperature and internal energy are no longer in 1-1 correspondence but related by an equation of state involving the other parameters. But my answer is anyway independent of heat flow. – Arnold Neumaier Oct 22 '18 at 13:29\n• @jkds: Temperature is a property of the thermodynamic limit where the microcanonical ensemble is equivalent to the canonical ensemble. In the canonical ensemble the 1-1 correspondence is self-evident. Moreover one can prove convexity. Thus if you assume a non-convex entropy functional you are in the thermodynamic situation only after performing the Maxwell construction (corresponding here to taking the convex envelope). – Arnold Neumaier Oct 23 '18 at 16:40\n\nArnold Neumaier's comment about statistical mechanics is correct, but here's how you can prove it using just thermodynamics. Let's imagine two bodies at different temperatures in contact with one another. Let's say that body 1 transfers a small amount of heat $Q$ to body 2. Body 1's entropy changes by $-Q/T_1$, and body 2's entropy changes by $Q/T_2$, so the total entropy change is $$Q\\left(\\frac{1}{T_2}-\\frac{1}{T_1}\\right).$$ This total entropy change must be positive (according to the second law), so if $1/T_1>1/T_2$ then $Q$ has to be negative, meaning that body 2 can transfer heat to body 1 rather than the other way around. It's the sign of $\\frac{1}{T_2}-\\frac{1}{T_1}$ that determines the direction that heat can flow.\n\nNow let's say that $T_1<0$ and $T_2>0$. Now it's clear that $\\frac{1}{T_2}-\\frac{1}{T_1}>0$ since both $1/T_2$ and $-1/T_1$ are positive. This means that body 1 (with a negative temperature) can transfer heat to body 2 (with a positive temperature), but not the other way around. In this sense body 1 is \"hotter\" than body 2.\n\n• This is right, and central point can be stated like this: when heat energy leaves a body at negative temperature, the entropy of that body increases. – Andrew Steane Oct 30 '18 at 10:01\n• Your thermodynamic proof is wrong, because in thermodynamics $T<0$ breaks the consistency of thermodynamics, see this Nature Physics paper Consistent thermostatistics forbids negative absolute temperatures – jkds Oct 30 '18 at 14:48\n\nTake a hydrogen gas in a magnetic field. The nuclei can be aligned with the field, low energy, or against it, high energy. At low temperature most of the nuclei are aligned with the field and no matter how much I heat the gas I can never make the population of the higher energy state exceed the lower energy state. All I can do is make them almost equal, as described by the Boltzmann distribution.\n\nNow I take another sample of hydrogen where I have created a population inversion, maybe by some method akin to that used in a laser, so there are more nuclei aligned against the field than with it. This is my negative temperature material.\n\nWhat happens when I mix the samples. Well I would expect the population inverted gas to \"cool\" and the normal gas to \"heat\" so that my mixture ends up with the Boltzmann distribution of aligned and opposite nuclei.\n\nAh, but who says that negative absolute temperatures exist at all? This is not without its controversies. There's a nature paper here which challenges the very existence of negative absolute temperatures, arguing that negative temperatures come about due to a poor method of defining the entropy, which in turn is used to calculate the temperature.\n\nOther people insist that these negative temperatures are \"real\".\n\nSo, depending on which side of this debate you align yourself with, these systems can be described with positive temperatures (and behave accordingly), or negative temperatures which have very exotic properties.\n\n• This does not answer the question (the proof that is asked for does not rely on whether such systems actually exist or not). – ACuriousMind Jun 30 '15 at 10:26\n• The one thing that everyone agrees on is that their behavior is a bit surprising, and that is to be expected as we don't encounter systems with temperature ceilings in day-to-day life. In any case, that paper is cited in the comments on most of our \"negative absolute temperature\" questions. I can assure you that most of the answer authors are aware of it. But the question presupposes the definition of temperature which generates 'negative' values and this post doesn't really address it. – dmckee --- ex-moderator kitten Jul 1 '15 at 3:04\n• @ACuriousMind: What of E=-mcc? Matt Thompson's answer is to claim the negative temperatures are the similar beast of spurious mathematical solutions and have no meaning whatsoever. – Joshua May 22 '16 at 16:20\n• @matt-thompson: you are spot on. In fact, \"temperature\" as opposed to energy is only a derived quantity (a derivative of $S$) and nowhere near as fundamental. By looking at non-monotonously growing densities of states it is easy to construct paradoxa, like systems in which heat is flowing from the colder to the hotter bath, regardless of which entropy definition is used, see the authors' follow-up paper – jkds Oct 15 '18 at 6:36\n• For negative temperature, you require a thermal equilibrium in which dS/dU < 0. This can happen, but only in a metastable sense. However, much of equilibrium thermal physics can apply to long-lived metastable equilibria. The concept of negative temperature is consistent with this. (And by the way, if it were true that someone had found a way for heat to flow from a colder to a hotter bath (correctly defined) without entropy increasing elsewhere, then we would all know about it because they would be rich and our energy problems would be over.) – Andrew Steane Oct 30 '18 at 10:13\n\nFor the visually inclined, this article explains it simply. The maximum hotness definition is the middle image instead of the expected right image:",
null,
"Due to the unintuitive definition of heat, a sample that only includes hot particles is negative kelvin / beyond infinite hot, and as clear from the image would give energy to colder particles.\n\nNegative temperature - yes I encountered that once: I seem to recall that it's the state that arises when, say, you have a system of magnetic dipoles in a magnetic field, and they have arrived at an equilibrium distribution of orientations ... and then the magnetic field is suddenly reversed and the distribution is momentariy backwards - basically the distribition given by substituting a negative value of T. Other scenarios can probably be thought of or actually brought into being that would similarly occasion this notion. I think possibly the answer is that the system is utterly out of thermodynamic equilibrium, whence the 'temperature' is just the variable that formerly was truly a temperature, and is now merely an artifact that gives this non-equilibrium distribution when rudely plugged into the distribution formula. So heat is transferred because you now have a highly excited system utterly out of equilibrium impinging upon a system that approximates a heat reservoir. I think there's no question really of accounting for the heat transfer by the usual method, ie when both temperatures are positive, of introducing the temperature difference as that which drives the transfer.\n\nAnd would it even be heat transfer atall if the energy is proceeding from a source utterly out of thermodynamic equilibrium? It's more that the transferred energy is becoming heat, I would say.\n\n• Just to say, in the spin example the system is not \"utterly out of equilibrium\". Surprising as it may seem, the situation with spins more \"up\" than \"down\" is a metastable equilibrium, because the second derivative of the entropy is negative. This means that after a small fluctuation the system will move back or 'relax' to the negative temperature state, and this is the sense in which we can speak of thermal equilibrium here. – Andrew Steane Oct 30 '18 at 10:20\n• Really!? It's metastable is it? That's really quite remarkable! I feel a need to look at that more closely. Thankyou. – AmbretteOrrisey Oct 30 '18 at 10:23\n\nNone of the answers above are correct. Matt Thompson's answer is close.\n\nThe OP asks for a mathematical proof that\n\nif a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system\n\nThere is no proof for this statement because it is incorrect\n\nIn statistical mechanics temperature is defined as $$\\begin{equation} \\frac{1}{T} = \\frac{\\partial S}{\\partial E} \\end{equation}$$\n\ni.e. a derivative of $$S$$. For $$\\it normal$$ systems, like ideal gases, etc. $$S(E)$$ is a highly convex function of $$E$$ and there is a 1-to-1 relation between the system's macrostate and its temperature.\n\nHowever, in cases where $$S$$ is not a convex function of $$E$$, $$\\frac{\\partial S}{\\partial E}$$ can take the same numerical value at different energies $$E$$ and therefore the same temperature. In other words, $$T$$, unlike $$E$$ does --in general-- not uniquely describe a system's macrostate. This situation occurs in systems that have a negative Boltzmann temperature (detail: for a negative Boltzmann temperature $$S$$ needs to be non-monotonous in $$E$$).\n\nAn isolated system 1 with a negative Boltzmann temperature $$T_B<0$$ can have either higher or lower internal energy $$E_1/N$$ than another isolated system, system 2, that it gets coupled to.\n\nDepending on which system has a higher $$E_i/N, i=1,2$$ heat flows either from system 1 to system 2 or vice versa, regardless of the temperatures of the two systems before coupling. For details, see\n\nBelow I have attached Fig. 1, taken from the arxiv version of this work to illustrate this fact.",
null,
"PS\n\n1. I am not an author of any of the cited papers.\n\n2. Thermodynamics is compatible with the use of the Gibbs entropy, but not with the Boltzmann entropy. Showing this is a four line proof, see this Nature Physics paper Consistent thermostatistics forbids negative absolute temperatures. The Gibbs temperature (unlike Boltzmann temperature) is always positive, $$T>0$$.\n\n3. The attempt above by @Nathaniel at a purely thermodynamic proof of the OP's statement relies on the premise that $$T<0$$ is compatible with thermodynamics. This is not the case, see point 2. The proof given is invalid.\n\n4. For normal systems the distinction between Gibbs and Boltzmann temperature is practically irrelevant. The difference becomes drastic though, when edge cases are considered, e.g. truncated Hamiltonians or systems with non-monotonous densities of states. In fact, in most calculations in statistical mechanics textbooks the Gibbs entropy is used instead of the Boltzmann entropy. Remember calculating \"all states up to energy $$E$$\" instead of \"all states in an $$\\epsilon$$ shell at energy $$E$$\"? That's all the difference.\n\n5. There is a whole series of attempts to publish comments on the Nature Physics article by Dunkel and Hilbert, but all got rejected. These all follow the pattern of trying to create a contradiction, but none were able to punch a hole into Dunkel and Hilbert's short mathematical argument.\n\n• It is not necessary for $S$ to be nonconvex in order to have a negative temperature. The canonical ensemble for a simple 2-state system has a negative temperature regime, but $S(E)$ is convex in that case. It is surely the case that if you move to the microcanonical ensemble then nonconvexity can make things more complicated, but that's tangential to this question. – Nathaniel Oct 30 '18 at 9:51\n• I had a quick look at the paper just in case, but I didn't change my mind. The proof in my answer really is a mathematical proof - it says that (i) if temperature is defined as $1/T=\\frac{\\partial S}{\\partial E}$, and (ii) if the first and second laws hold, then (iii) heat must always flow from lower $1/T$ to higher $1/T$. If it doesn't then you're using the wrong ensemble or have made some other mistake - there is no other possibility. Neither non-convexity of the entropy nor non-uniqueness of $E(T)$ can change this. – Nathaniel Oct 30 '18 at 10:13\n• Yes, I read the paper, albeit briefly, as I said. They review multiple statistical definitions of the entropy and temperature, and claim that for some of them the temperature doesn't predict the direction of heat flow. But that implies a violation of the second law, so it just means those definitions are not the correct ones for the system in question. I do agree with them that the temperature doesn't uniquely determine the thermodynamic state if the entropy isn't convex, but they seem to say this implies it can't predict the direction of heat flow, which doesn't actually follow at all. – Nathaniel Oct 30 '18 at 14:13\n• Look, if $T$ is defined via $1/T = \\frac{\\partial S}{\\partial E}$ then for two coupled systems we have $\\frac{\\partial S}{\\partial E_1} = \\frac{\\partial (S_1+S_2)}{\\partial E_1} =\\frac{\\partial S_1}{\\partial E_1} - \\frac{\\partial S_2}{\\partial E_2} = 1/T_1-1/T_2$, and the entropy increases if and only if heat flows from the system with lower $1/T$ to the system with higher $1/T$. This is a really simple, completely incontrovertible consequence of the definition. If your statistical definition of entropy contradicts this then it contradicts the second law, even if you have a Nature paper. – Nathaniel Nov 2 '18 at 9:21\n• Regarding the Nature Physics paper by Dunkel and Hilbert, it's baffling to me that they fail to mention the Gibbs-Shannon or the von Neumann entropy, those being the statistical definitions from which the Boltzmann distribution is derived in the first place. However, it's not surprising to me that the thing they call the Gibbs entropy (which is actually Boltzmann's definition of the entropy) is a better approximation than the thing they call the Boltzmann entropy. So I don't disagree with them on that point. – Nathaniel Nov 2 '18 at 9:44"
] | [
null,
"https://i.stack.imgur.com/RMZ2O.jpg",
null,
"https://i.stack.imgur.com/Ehd6m.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91659707,"math_prob":0.954794,"size":16825,"snap":"2020-34-2020-40","text_gpt3_token_len":3938,"char_repetition_ratio":0.15760061,"word_repetition_ratio":0.041775458,"special_character_ratio":0.23441307,"punctuation_ratio":0.1003861,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916617,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T09:03:17Z\",\"WARC-Record-ID\":\"<urn:uuid:46f91912-ce6b-4d02-b072-4060c8891640>\",\"Content-Length\":\"236383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6e680d8-3fcb-4024-ae77-34b1e2ddbc1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:31fb0c2e-1dcc-42cf-9000-b245589d46d6>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/21851/prove-that-negative-absolute-temperatures-are-actually-hotter-than-positive-abso\",\"WARC-Payload-Digest\":\"sha1:6W63HFAVSVU6QLJ2UPYFZMLEOIE42MGC\",\"WARC-Block-Digest\":\"sha1:DFOQGXNOXJFEBGOMZIK6Q5IIWUZ652FB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400204410.37_warc_CC-MAIN-20200922063158-20200922093158-00292.warc.gz\"}"} |
https://visualfractions.com/calculator/factors/factors-of-357/ | [
"# Factors of 357\n\nSo you need to find the factors of 357 do you? In this quick guide we'll describe what the factors of 357 are, how you find them and list out the factor pairs of 357 for you to prove the calculation works. Let's dive in!\n\n## Factors of 357 Definition\n\nWhen we talk about the factors of 357, what we really mean is all of the positive and negative integers (whole numbers) that can be evenly divided into 357. If you were to take 357 and divide it by one of its factors, the answer would be another factor of 357.\n\nLet's look at how to find all of the factors of 357 and list them out.\n\n## How to Find the Factors of 357\n\nWe just said that a factor is a number that can be divided equally into 357. So the way you find and list all of the factors of 357 is to go through every number up to and including 357 and check which numbers result in an even quotient (which means no decimal place).\n\nDoing this by hand for large numbers can be time consuming, but it's relatively easy for a computer program to do it. Our calculator has worked this out for you. Here are all of the factors of 357:\n\n• 357 ÷ 1 = 357\n• 357 ÷ 3 = 119\n• 357 ÷ 7 = 51\n• 357 ÷ 17 = 21\n• 357 ÷ 21 = 17\n• 357 ÷ 51 = 7\n• 357 ÷ 119 = 3\n• 357 ÷ 357 = 1\n\nAll of these factors can be used to divide 357 by and get a whole number. The full list of positive factors for 357 are:\n\n1, 3, 7, 17, 21, 51, 119, and 357\n\n## Negative Factors of 357\n\nTechnically, in math you can also have negative factors of 357. If you are looking to calculate the factors of a number for homework or a test, most often the teacher or exam will be looking for specifically positive numbers.\n\nHowever, we can just flip the positive numbers into negatives and those negative numbers would also be factors of 357:\n\n-1, -3, -7, -17, -21, -51, -119, and -357\n\n## How Many Factors of 357 Are There?\n\nAs we can see from the calculations above there are a total of 8 positive factors for 357 and 8 negative factors for 357 for a total of 16 factors for the number 357.\n\nThere are 8 positive factors of 357 and 8 negative factors of 357. Wht are there negative numbers that can be a factor of 357?\n\n## Factor Pairs of 357\n\nA factor pair is a combination of two factors which can be multiplied together to equal 357. For 357, all of the possible factor pairs are listed below:\n\n• 1 x 357 = 357\n• 3 x 119 = 357\n• 7 x 51 = 357\n• 17 x 21 = 357\n\nJust like before, we can also list out all of the negative factor pairs for 357:\n\n• -1 x -357 = 357\n• -3 x -119 = 357\n• -7 x -51 = 357\n• -17 x -21 = 357\n\nNotice in the negative factor pairs that because we are multiplying a minus with a minus, the result is a positive number.\n\nSo there you have it. A complete guide to the factors of 357. You should now have the knowledge and skills to go out and calculate your own factors and factor pairs for any number you like.\n\nFeel free to try the calculator below to check another number or, if you're feeling fancy, grab a pencil and paper and try and do it by hand. Just make sure to pick small numbers!\n\n## Factors Calculator\n\nWant to find the factor for another number? Enter your number below and click calculate.\n\nFactors of 358"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9346948,"math_prob":0.9888377,"size":3101,"snap":"2020-45-2020-50","text_gpt3_token_len":834,"char_repetition_ratio":0.20536003,"word_repetition_ratio":0.010903426,"special_character_ratio":0.32473394,"punctuation_ratio":0.08597285,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T05:27:36Z\",\"WARC-Record-ID\":\"<urn:uuid:3d79815c-3fb5-484c-babe-7fdd3cf155ed>\",\"Content-Length\":\"11654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:701371af-826d-42db-9d51-19bb1e1d7439>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b7f7ee9-3f85-46cd-a004-43805899db2d>\",\"WARC-IP-Address\":\"142.93.52.42\",\"WARC-Target-URI\":\"https://visualfractions.com/calculator/factors/factors-of-357/\",\"WARC-Payload-Digest\":\"sha1:2RUCE5QNVPHV5BXPOAMDQQPARID3BDWF\",\"WARC-Block-Digest\":\"sha1:ODU2JYHZO6WQ5N72DWHKVLKKL35MN5X3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141196324.38_warc_CC-MAIN-20201129034021-20201129064021-00650.warc.gz\"}"} |
https://yadavgaurav.com/table-control-chart-constants/ | [
"# Table of Control Chart – Constants A2 d2 D3 D4",
null,
"Control charts are used to calculate control limits and estimate the process standard deviation. Control chart constants used are D4, D3, A2, and d2 (These control chart constants depend on the subgroup size n).\n\nThese control chart constants are summarised in the table below. For example, if your subgroup is 4, then D4 = 2.282, A2 = 0.729, and d2 = 2.059.\n\nThere is no value for D3. This simply means that the R chart has no lower control limit when the subgroup size is 4.\n\n## 3 thoughts on “Table of Control Chart – Constants A2 d2 D3 D4”\n\nThanks\n\n2. Bao\n\nValues greater than 25?"
] | [
null,
"https://yadavgaurav.com/wp-content/uploads/2019/10/Control-Charts-768x461.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62560934,"math_prob":0.9421705,"size":1358,"snap":"2021-04-2021-17","text_gpt3_token_len":583,"char_repetition_ratio":0.13293944,"word_repetition_ratio":0.0,"special_character_ratio":0.6671576,"punctuation_ratio":0.2511628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909946,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-15T14:23:45Z\",\"WARC-Record-ID\":\"<urn:uuid:8f67474a-ea30-48b8-80bb-b09b7ab9981d>\",\"Content-Length\":\"17789\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9391cce-636a-47c0-a9a1-ee52fc1bfb85>\",\"WARC-Concurrent-To\":\"<urn:uuid:280438ad-d58d-43f8-994e-dab2555f7353>\",\"WARC-IP-Address\":\"104.21.35.98\",\"WARC-Target-URI\":\"https://yadavgaurav.com/table-control-chart-constants/\",\"WARC-Payload-Digest\":\"sha1:HYIMSAXSTIWJVBKD6JUMYB6HAKOERF7D\",\"WARC-Block-Digest\":\"sha1:WWJNSVTKOC3OB6B56ELKGQOJME4H5JG4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703495901.0_warc_CC-MAIN-20210115134101-20210115164101-00205.warc.gz\"}"} |
https://ch.mathworks.com/matlabcentral/profile/authors/11608268-bowen-li | [
"Community Profile",
null,
"# BOWEN LI\n\n13 total contributions since 2019\n\n#### BOWEN LI's Badges\n\n•",
null,
"•",
null,
"View details...\n\nContributions in\nView by\n\nQuestion\n\nwhy ga generates different fval and penalty values\nHello everyone, I have a question when I check the result after i using GA on my integer (binary) minimization problem that the ...\n\n4 days ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nWhy I receive a NaN value\nHi everyone, i met a problem when i try to check my following code which is a function file, when i try with some inputs, Matlab...\n\n23 days ago | 0 answers | 0\n\n### 0\n\nanswers\n\nQuestion\n\nDivision by an OptimizationVariable not supported.\nHi everyone, I met a problem that in my equation, there is a part where includes a division of two optimization varaibles. For ...\n\n30 days ago | 1 answer | 1\n\n### 1\n\nanswer\n\nQuestion\n\nerror: Division by an OptimizationVariable not supported.\nHi everyone, I met a error mentioned in my title, however, i don't think my code could be organized by a solver-based approach. ...\n\n1 month ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nIs it possible for a 3D varaible to be used in a optimization problem\nHi everyone, I have a very straightforward question that is it possible to operate on a 3D optimization varibale by using optimi...\n\n1 month ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nmultidimensional matrix optimization error\nHello everyone, I have an optimization variable y=optimvar('y',[4,1],'Type','integer','LowerBound',0,'UpperBound',1); and I...\n\n1 month ago | 1 answer | 0\n\n### 1\n\nanswer\n\nAnswered\nMILP - Multidimensional optimization\nHi, I am working on a similar question as yours, have you solved this question?\n\n1 month ago | 0\n\nQuestion\n\nHow to loop a objective function using the optimization toolbox?\nHi everyone, I have a question that how can I put a \"loop\" structure in a optmization problem. In my problem I have two binary...\n\n2 months ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nHow to make a matrix compose by seperate optimization variables\nHi everyone, I am thinking if it possible to put several optimizaiton variables into a matrix form. My optimization variable is...\n\n2 months ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nhow to create a symmetric binary variable matrix\nHi, I am wondering how to get binary variable matrix that is symmetric to the diagonal. The matrix looks like this: [y1 y2 y3 ...\n\n2 months ago | 1 answer | 0\n\n### 1\n\nanswer\n\nQuestion\n\nHow to define a binary variable matrix\nHi, I'm trying to define a binary variable matrix that each row has the same outputs, but differs in different rows. For examp...\n\n2 months ago | 0 answers | 0\n\n### 0\n\nanswers\n\nQuestion\n\nhow to iterate a binary variable matrix\nHi I have a matrix which is filled with binary variables, and in each iteration i want to get a new binary matrix with the same...\n\n2 months ago | 0 answers | 0\n\n### 0\n\nanswers\n\nQuestion\n\nhow to iterate a matrix for multiple times\nHi I have a matrix, let's say, a random 5x5 matrix. In time period 1, it is a 5x5 random matrix, in time period 2, all element...\n\n2 months ago | 1 answer | 0\n\nanswer"
] | [
null,
"https://ch.mathworks.com/responsive_image/150/0/0/0/0/cache/matlabcentral/profiles/11608268_1559535491176_DEF.jpg",
null,
"https://ch.mathworks.com/matlabcentral/profile/badges/Thankful_5.png",
null,
"https://ch.mathworks.com/matlabcentral/profile/badges/First_Answer.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67852646,"math_prob":0.56526953,"size":455,"snap":"2019-35-2019-39","text_gpt3_token_len":162,"char_repetition_ratio":0.24611974,"word_repetition_ratio":0.7706422,"special_character_ratio":0.389011,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9648038,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T03:37:21Z\",\"WARC-Record-ID\":\"<urn:uuid:2f25c4af-d298-4f73-a0cb-abdd10db1d4b>\",\"Content-Length\":\"108541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ceb0684f-f661-4c58-a013-442171d5ef5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c6d5296-3848-42de-9426-3a32f4c15789>\",\"WARC-IP-Address\":\"104.117.39.124\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/profile/authors/11608268-bowen-li\",\"WARC-Payload-Digest\":\"sha1:7VQG4ZRDSAFGXXB26OE34RIYTSKOF33W\",\"WARC-Block-Digest\":\"sha1:NCZOXZUUTRGAVXD64RVUMDFVKZXSRCNO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330962.67_warc_CC-MAIN-20190826022215-20190826044215-00207.warc.gz\"}"} |
https://www.numbers.education/12245.html | [
"Is 12245 a prime number? What are the divisors of 12245?\n\n## Parity of 12 245\n\n12 245 is an odd number, because it is not evenly divisible by 2.\n\nFind out more:\n\n## Is 12 245 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 12 245 is about 110.657.\n\nThus, the square root of 12 245 is not an integer, and therefore 12 245 is not a square number.\n\n## What is the square number of 12 245?\n\nThe square of a number (here 12 245) is the result of the product of this number (12 245) by itself (i.e., 12 245 × 12 245); the square of 12 245 is sometimes called \"raising 12 245 to the power 2\", or \"12 245 squared\".\n\nThe square of 12 245 is 149 940 025 because 12 245 × 12 245 = 12 2452 = 149 940 025.\n\nAs a consequence, 12 245 is the square root of 149 940 025.\n\n## Number of digits of 12 245\n\n12 245 is a number with 5 digits.\n\n## What are the multiples of 12 245?\n\nThe multiples of 12 245 are all integers evenly divisible by 12 245, that is all numbers such that the remainder of the division by 12 245 is zero. There are infinitely many multiples of 12 245. The smallest multiples of 12 245 are:\n\n## Numbers near 12 245\n\n### Nearest numbers from 12 245\n\nFind out whether some integer is a prime number"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93639904,"math_prob":0.9992601,"size":608,"snap":"2020-24-2020-29","text_gpt3_token_len":176,"char_repetition_ratio":0.1705298,"word_repetition_ratio":0.0,"special_character_ratio":0.3305921,"punctuation_ratio":0.14383562,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988846,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T16:24:42Z\",\"WARC-Record-ID\":\"<urn:uuid:b63193b0-1b18-432d-9337-5fcf9f727247>\",\"Content-Length\":\"18400\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e479a993-bb4d-402f-8812-d9fda107dcb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:84020134-dbee-4797-993b-5b05279373fd>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/12245.html\",\"WARC-Payload-Digest\":\"sha1:JOSJCLGVVKWSZW7TKEA65EP7Z4SIHCIY\",\"WARC-Block-Digest\":\"sha1:AQUTZG3NYVCSEZK2O4SQS3ZIWFLOKM5Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655879532.0_warc_CC-MAIN-20200702142549-20200702172549-00353.warc.gz\"}"} |
http://www.nesterovsky-bros.com/weblog/2009/02/12/BareBinaryTreeAlgorithms.aspx | [
"Do you agree that binary trees and algorithms that keep trees reasonably balanced are important?\n\nIt's interesting enough, however, that you won't easily find these algorithms publicly available.\n\nThough red-black, AVL and other algorithms described in the wikipedia are defined in terms of tree manipulation, all implementations we have seen, deal with trees annotated with keys and values. These implementations really use tree balancing algorithms behind the schene, and expose a commonplace set or map containers to a client. Even C++ Standard Library suffers from this disease.\n\nWe think that binary trees are valuable independent concepts, and they worth to be implemented separately, at least because there are other algorithms, except sets and maps, using trees.\n\nAnd well, we did it in C#! See RedBlackTree.cs.\n\nConsider an example - a simple scheduler, ScheduleBookmark.cs, with operations:\n\n• schedule an action;\n• remove an action from the schedule;\n• enumerate actions;\n• find a date, an action is scheduled for;\n• find an action (or at least closest one) for a specified date;\n• postpone actions due to delays;\n\nA balanced binary tree allows efficient implementation of such a scheduler. Tree node stores an action, and a time span between parent node and this node. This way:\n\nOperation Steps\nschedule an action find place + link node + rebalance tree\nremove an action from the schedule unlink node + rebalance tree\nenumerate actions navigate tree\nfind a date, an action is scheduled for find node in tree\nfind an action for a specified date cumulate time spans up to the tree root\npostpone actions due to delays fixup time spans from a node up to the tree root\n\nCompare operation complexities between tree, array, list and map:\nOperation Tree Array List Map\nschedule an action O(ln(N)) O(N) O(N) O(ln(N))\nremove an action from the schedule O(ln(N)) O(N) O(1) O(ln(N))\nenumerate actions O(ln(N)) O(1) O(1) O(ln(N))\nfind a date, an action is scheduled for O(ln(N)) O(1) O(1) O(1)\nfind an action for a specified date O(ln(N)) O(ln(N)) O(N) O(ln(N))\npostpone actions due to delays O(ln(N)) O(N) O(N) O(N*ln(N))\n\nComplexity of each operation for the tree is O(ln(N)). No arrays, lists, or maps achieve similar worst case guaranty.\n\nFinally, the test program is Program.cs, and a whole project (VS2008) is Tree.zip\n\nAll comments require the approval of the site owner before being displayed.\n Name E-mail Home page Remember Me Comment (Some html is allowed: `a@href@title, b, blockquote@cite, em, i, strike, strong, sub, super, u`) where the @ means \"attribute.\" For example, you can use or\n.",
null,
"Enter the code shown (prevents robots): Live Comment Preview\nArchive\n < February 2009 >\nSunMonTueWedThuFriSat\n25262728293031\n1234567\n891011121314\n15161718192021\n22232425262728\n1234567\nBlogroll\nStatistics\nTotal Posts: 381\nThis Year: 2\nThis Month: 0\nThis Week: 0"
] | [
null,
"http://www.nesterovsky-bros.com/weblog/2009/02/12/CaptchaImage.aspx",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8673245,"math_prob":0.8729018,"size":1362,"snap":"2022-05-2022-21","text_gpt3_token_len":299,"char_repetition_ratio":0.10530192,"word_repetition_ratio":0.0,"special_character_ratio":0.20264317,"punctuation_ratio":0.17293233,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9500458,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T06:17:48Z\",\"WARC-Record-ID\":\"<urn:uuid:164b2ed5-5994-4736-98f4-dfd6a8b712b4>\",\"Content-Length\":\"37984\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c25bb9ad-42d5-4643-bb05-c1706135fa7b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcf6da52-99d0-410d-a770-dae9f1873d95>\",\"WARC-IP-Address\":\"96.31.33.38\",\"WARC-Target-URI\":\"http://www.nesterovsky-bros.com/weblog/2009/02/12/BareBinaryTreeAlgorithms.aspx\",\"WARC-Payload-Digest\":\"sha1:SH55W3XJY3JGWX4H2S2SCUJV6SJ4LBZ6\",\"WARC-Block-Digest\":\"sha1:7LNVERYI5V3J6WERO53RYDXWN6TPX2ZF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662580803.75_warc_CC-MAIN-20220525054507-20220525084507-00545.warc.gz\"}"} |
https://support.cleanpower.com/powerclerk/formulas/ | [
"# Formulas and Calculated Fields\n\n##### Use Formulas to perform calculations in Forms that assist applicants and increase data accuracy.",
null,
"Are there parts of the program application where providing in-form calculations would increase the accuracy of data inputs (like showing an applicant their long-term average PV energy output)?\n\nAre there Data Fields that need to be calculated or created for certain Reports or Dashboards?\n\nWhich Forms do require calculations that can be handled by Formulas?\nShould a Formula result trigger an Automation?\n\n##### Locating the Formulas feature",
null,
"Figure 1: PROGRAM DESIGN >> Formulas\n\n##### How to work with Formulas\n\nThe Formulas feature has been designed to enable math to be performed in Forms automatically by PowerClerk using references to Data Fields and constants. An intuitive drag-and-drop Formula editor is used to configure Formulas.\n\nFormulas can perform numeric operations (addition, subtraction, multiplication, division) as well as Boolean operations (greater than, less than, is equal to, is same as, is not same as, etc.) on numeric data fields, and can also perform Boolean operations (is equal to, is not) on Boolean data fields (true/false) on non-numeric fields. A few examples have been provided in Figure 2 and Figure 3 below.",
null,
"Figure 2: Example Numeric Formula",
null,
"Figure 3: Example Boolean Formula\n\nOnce a Formula has been saved, you can create a Calculated Field to use this Field in your Forms. In the example in Figure 4 below, the Calculated Field created from the Formula comparing Estimated Annual Production to Historical Annual Consumption (kWh) designed in Figure 3 above populates the result automatically in the Form and updates automatically as data field values change.",
null,
"Figure 4: Calculated Field element in a Form\n\nFormulas can also be used within Automations to trigger Action Rules as shown in Figure 5 below:",
null,
"Figure 5: Formulas in Automations\n\n### Formula Data Dictionary\n\n##### Supported Data Types (Input):\nColor Data Type Reference",
null,
"Blue accepts Numeric inputs Numeric constant or Data Field Reference types (integer, decimal, price)",
null,
"Orange accepts Boolean inputs True, False, (Multiple) Choice, Drop-down List, Attachment Approval/Rejection or Data Field references types which return a True/False result (Checkbox, Boolean Calculated Field Result)",
null,
"Green accepts Text inputs Text String or Text Data Field reference types (Single Line Text, Paragraph Text, Email, or Contact Data Fields)",
null,
"Purple accepts Date inputs Date or Date Field reference",
null,
"Black accepts Dynamic inputs Formula reference types (Numeric, Boolean/Relational, Text, Date)\nAdditionally, inputs can also accept formula operations that match their respective data types.\n\n##### Supported Formula Operations (Output):\nNumeric Operations\n(Returns a Numeric result)\nOperator Name Description",
null,
"Addition Adds two values",
null,
"Subtraction Subtracts two values",
null,
"Multiplication Multiplies two values",
null,
"Division Divides two values",
null,
"Contant Can contain any Numeric value",
null,
"Can reference any Numeric Data Field",
null,
"Compares sets of data and tests the results:\nIf the results are true, the THEN instructions are taken\nIf not, the ELSE instructions are taken e.g. If (A = TRUE), Then B, Else C",
null,
"Min Returns the smallest value from the two Numeric values provided e.g. (1,2) = 1",
null,
"Max Returns the largest value from the two Numeric values provided e.g. (1,2) = 2",
null,
"Exponent Represents how many times a Numeric value is multiplied by itself\ne.g. (2^5) = 2 x 2 x 2 x 2 x 2 = 32",
null,
"Log e.g. Log (1000) = 3 (10 x 10 x 10 = 10^3 = 1,000)\n\nRelational Formula Operations\n(Subset of Boolean Formula Operations that tests a relationship and returns a True/False result)\nOperator Name Description",
null,
"Equal To Accepts a Numeric or Text input",
null,
"Not Equal To Accepts a Numeric or Text input",
null,
"Greater Than Accepts a Numeric input",
null,
"Less Than Accepts a Numeric input",
null,
"Greater Than or Equal To Accepts a Numeric input",
null,
"Less Than or Equal To Accepts a Numeric input",
null,
"Is Before/After Accepts a Date input\n\nBoolean Formula Operations\n(Returns a True/False result)\nOperator Name Description",
null,
"And True if and only if both conditions are true",
null,
"Or True if either condition is true (or if both are true)",
null,
"Not Changes true to false, and false to true",
null,
"True Constant of true",
null,
"False Constant of false",
null,
"References a Checkbox Data Field",
null,
"Determines if an Attachment is approved by an administrator",
null,
"Determines if an Attachment is rejected by an administrator",
null,
"References a Multiple Choice Data Field & looks to see if the Data Field is equal to the designated choice in the argument",
null,
"References a Multiple Choice Data Field & looks for if the Data Field is not equal to the designated choice in the argument\n\nText Formula Operations\n(Returns a Text result except for the Length of Operation, which returns a Numeric result)\nOperator Description",
null,
"References any Text value entered",
null,
"References Single Line Text, Paragraph Text, Email, or Contact Data Fields",
null,
"Compares Boolean True/False data and tests the Text results\nIf the results are true, the THEN instructions are taken\nIf not, the ELSE instructions are taken e.g. If (A = TRUE), Then B, Else C",
null,
"Determines the length of a given Text value and returns a Numeric value\ne.g. Length of (PowerClerk) = 10",
null,
"Combines two Text values and returns a concatenated Text string\ne.g. “foo” + “bar” = “foobar”",
null,
"Enables referencing the respective project number (e.g. “CPR-00123”) within a Formula",
null,
"Reference non-string Data Field types such as Numeric, Boolean,\nChoice, and Date Data Fields as text representations enabling usage\nof non-string Data Fields in context of any of the other Text Formula\nOperators (e.g. IF/THEN/ELSE, LENGTH OF(), CONCATENATE)\n\nData Formula Operations\n(Returns a Date result)\nOperator Description",
null,
"References a designated Date",
null,
"References a Date Data Field",
null,
"References a Numeric value before/after a Date or Date Data Field reference e.g. (7) days after (Date)\n\nFormula Reference Operations\n(Returns a result based on the primary operation)\nOperator Description",
null,
"Place a Formula reference within any other Formula\noperator to avoid having to create multiple\nlayers of complex Formulas:",
null,
"Multi-Instance Formula Operators\n(valdation of multi-instance fields to ensure all field instances adhere to the desired criteria)\nOperator Description",
null,
"Returns Boolean True if all instance values across the entire multi-instance field match, or False if not.",
null,
"If all values across the multi-instance field match, returns that exact single value (whether that value is numerical, text, Boolean, or date).",
null,
"Returns the summation of all values across a multi-instance field of numerical data type.\n\n##### Dynamic Formula References\n\nUse Dynamic Formula References to simplify complex Formula expressions by referencing existing Formulas as components (or Formula segments) of the larger Formula.\n\nTo reference a Formula within another Formula, please use the Formula Reference operand that is listed under the “Dynamic” section in the Formula editor:",
null,
"Figure 6: Dynamic Formula References\n\nThe Dynamic Formula Reference operator itself is not designated to return a Numerical, Boolean, Text, or Date result. Once a Dynamic Formula Reference operator is placed into a Formula container, the operator will turn into the respective data type. Then, a list of the available Formula Reference Expressions (matching the respective data type) will be displayed under the Properties tab on the right-hand side of the editor.",
null,
"Figure 7a: Adding a Formula Reference determines data type",
null,
"Figure 7b: Dynamic Formula Reference data types\n\n##### Rules of Formula References\n1. Incomplete Formulas cannot be referenced.",
null,
"Figure 8a: Formula Reference Rule 1\n\n2. You cannot create circular Formula References.",
null,
"Figure 8b: Formula Reference Rule 2\n\n3. A Formula Reference cannot be the only component of a Formula.",
null,
"Figure 8c: Formula Reference Rule 3\n\nCompleted Formulas that incorporate Dynamic Formula References may be used to create Calculated Fields. Please note: Once the Calculated Field has been created, any Formula Reference is fully resolved into the Calculated Field and indistinguishable from other Formula components.\n\nWhen the Calculated Field is included on a Form, and configured to “Show Details” (as shown in Figure 9), each individual component of the Formula will be displayed as any prior Formula References have been flattened into the Calculated Field.",
null,
"Figure 9a: Formula showing the Formula Reference",
null,
"Figure 9b: Calculated Field with a flattened Formula Reference",
null,
"Figure 9c: Calculated Field as displayed on a Form without any Formula References any longer\n\n##### Calculated Fields and Advanced Visibility Rules\n\nOne of the great advantages of Calculated Fields are that you can use them within your Forms without having to display the actual Field result on the Form any longer. In addition you can also reference Formulas to define Advanced Visibility Rules as shown in Figure 10:",
null,
"Formulas can also reference Calculated Fields. Currently only one level of reference is supported (i.e. you cannot reference Calculated Fields that reference other Calculated Fields).\n\n### Video Guides\n\n##### Build A Formula\n\nThis video guide will demonstrate how to build a formula through our visual design surface. Use PowerClerk to calculate key data fields that streamline your administrator workflow.\n\n##### Automation with Formulas in Action Rules\n\nThe following video will present you the concept of triggering Automations based on a Formula’s result:\n\n##### Formulas and Advanced Visibility Rules\n\nTo control the Conditional Visibility of From fields you can use Formula logic to adapt the display of your Forms to a vast variety of business scenarios:\n\n##### Calculated Fields\n\nDisplay the results of Formula calculations on Forms, Communications, Document Templates, and Reports. This video demonstrates how to use Calculated Fields within your program:\n\nA full list of all Video Guides can also be found here.\n\n### FAQs\n\nQ: What should I keep in mind when working with Formulas?\nA: Please consider the following guidance when working with Formulas:\n• Formulas cannot reference another formula.\n• The Calculated Fields are dynamic. If any field included in the calculation changes, the Calculated Result field will update accordingly.\n• Calculated Fields cannot be read-only (they don’t need to be, as they are calculated based off other form fields).\nQ: I am looking for a tutorial on how to get started with Formulas, can you help?\nA: Please consider the following instructions as an exercise for your sandbox environment to familiarize yourself with PowerClerk's Formula functionality. With that said, let's get started!\n• STEP 1: In your sandbox environment create a form by clicking Program Design >> Forms >> New Form. Name this new form \"Calculator\". Add two Integer fields (labelled \"Value A\" and \"Value B\" below) and one Multiple Choice field (labelled \"Operators\") with the following choices as shown below:",
null,
"Save your new form by clicking on \"Save\".\n• STEP 2: Navigate now to the Formula menu by clicking on Program Design >> Formulas >> New Formula and name this new Formula \"MyCalculator\"\n• STEP 3: Drag each of the available operator fields onto the scratchpad as shown below:",
null,
"• STEP 4: Drag the (Data Field Reference) data type into each of the blue operator options and assign Value A and Value B as shown below and finish the other operator blocks accordingly:",
null,
"• STEP 5a: For ADDITIONS: Drag a (IF __ THEN __ ELSE __ ) data type onto the scratch pad as well as one of the Boolean operators named ( Choice Data Field Reference == Choice ):",
null,
"STEP 5b: Drag the data types into their positions as indicated below:",
null,
"Please notice how the Boolean operator could only be placed into the orange position within the (IF __ THEN __ ELSE __ ) data type.\n• STEP 6a: Repeat these steps for SUBTRACTION, MULTIPLICATION, and DIVISION operator blocks - your scratchpad should look now similar to this screenshot below:",
null,
"STEP 6b: Click now on each of the ( Choice Data Field Reference == Choice ) data type and assign as value \"Operators\" and choose the respective Choice option (i.e. \"+ addition\" in the screenshot below):",
null,
"STEP 6c: Repeat this procedure for all other operators and your scratchpad should now look similar to this:",
null,
"• STEP 7: Now drag the DIVISION formula block into the open blue position of the MULTIPLICATION formula block and repeat these steps until your scratchpad looks like this (Please note for the remaining blue position within the DIVISION formula block you assign data type (constant) and give it a value of 0):",
null,
"• STEP 8: Save your formula.\n• STEP 9: In the Formula overview screen select your \"MyCalculator\" Formula and click on the Create Calculated Field button - you are now ready to use this Calculated Field within your original Form:",
null,
"• STEP 10: Click now on your original Form in Program Design >> Forms and click the \"Edit Form\" button:",
null,
"• STEP 11: Drag a \"Calculated Field Results\" onto your form as shown below:",
null,
"• STEP 12: Select \"MyCalculator\" as the Formula and designate the visibility to be conditional on the Operators field not to be empty:",
null,
"• STEP 13: Save your form now.\n• STEP 14: We now need to configure your form to display it on the HOME screen. Click on Program Design >> Forms and select your original form and click this time on \"Configure Form\". In the following screen select option \"Use for New Project button\" and hit \"Save\":",
null,
"• CONGRATULATIONS! Click now on HOME >>",
null,
"to enjoy your new calculator:",
null,
"Q: What is the template tag syntax for a calculated field?\nA: Calculated fields are listed in the Formulas > Calculated Fields section, but their template tags can be found in PowerClerk's Data Fields functionality as with any other data field. You can search for your Calculated Field and copy the accompanying template tag as exemplified below for `{data: My Calculated Field}`:",
null,
"Copying a Calculated Field's Template Tag"
] | [
null,
"https://support.cleanpower.com/wp-content/uploads/2018/04/calculated_fields.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/04/formulas_1.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/04/formulas_2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/04/formulas_3.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/04/formulas_4.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/05/formulas_5.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/blue.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/orange.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/green.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/purple.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/all_colors.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/addition.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/subtraction.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/multiplication.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/division.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/constant.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/data_field_reference.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/if_then_else.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/min.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/max.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/exponent.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/log.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/equal_to.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/not_equal_to.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/greater_than.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/less_than.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/greater_or_equal.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/less_or_equal.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/before_after.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/and.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/or.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/not.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/true.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/false.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/boolean.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/is_approved.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/is_rejected.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/equals_choice.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/does_not_equal_choice.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/text.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/text_data_field_reference.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/text_if_then_else.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/length.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2020/08/concatenate_v2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2020/08/project_number-1.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2020/09/text_operator.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/date.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/date_data_field_reference.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/time_before_after.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/formula_reference.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/06/if_then_formula_reference.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2021/07/all_match_v2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2021/07/value_of_all_match_v2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2021/07/sum_all_v2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_15.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_17.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_16.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_20a.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_20b.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_20c.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_18.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_19b.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2019/03/formulas_19.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/05/formulas_6.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_1.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_2.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_3.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_4.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_4b.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_5.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_6.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_7.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_8.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/05/formulas_9.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_10-e1522261980773.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/05/calculator_11.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/05/formulas_12.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_13.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_14-e1522368981351.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/03/calculator_15.png",
null,
"https://support.cleanpower.com/wp-content/uploads/2018/10/formulas_13.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7906704,"math_prob":0.93975705,"size":14290,"snap":"2021-43-2021-49","text_gpt3_token_len":3103,"char_repetition_ratio":0.16645667,"word_repetition_ratio":0.07109208,"special_character_ratio":0.21546537,"punctuation_ratio":0.08153078,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98992956,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162],"im_url_duplicate_count":[null,3,null,3,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,3,null,3,null,3,null,3,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T06:29:12Z\",\"WARC-Record-ID\":\"<urn:uuid:c4c3c2d3-0654-4210-b1b1-0270eebaa190>\",\"Content-Length\":\"316202\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dd7e3bc5-359a-450e-b63b-7b1bebf0ddcc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1687d5fb-7818-4cc5-865a-f8dc4c1d4b62>\",\"WARC-IP-Address\":\"104.196.122.90\",\"WARC-Target-URI\":\"https://support.cleanpower.com/powerclerk/formulas/\",\"WARC-Payload-Digest\":\"sha1:FEZOSSZUGIXGX5JD4XRJRHIF5I443K7N\",\"WARC-Block-Digest\":\"sha1:U64PJPXNPIS47I4TTO6T7YM6DEQAIB3E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585302.91_warc_CC-MAIN-20211020055136-20211020085136-00084.warc.gz\"}"} |
https://mathschoolinternational.com/Math-Questions/Questions-IQ/IQ-0011.aspx | [
"",
null,
"• Welcome in Math School.\n• This is beta verion of our website.\n##### Fill the box using (1, 3, 5, 7 ,9, 11, 13, 15) ?",
null,
"___________________________________________\n\n##### Answer\n``` (3+3) + (3+3) + (3+3) + (3+3) + (3+3) = 30\n```\n##### Explanation\n\nThis is an interesting question this kind of question was already asked in india's compitative exam UPSC(IAS/IPS) in year 2013. And only toper of this exam can solve it.\n\nThe following are the numbers that you can use to fill in the brackets: 1, 3, 5, 7, 9, 11, 13 and 15. You can repeat the numbers if required. The resulting sum should be 30.\n\n``` (15-9)+(13-7)+(7-1)+(9-1)+(13-9)=30\n```\n\nIf you solve inside the brackets, you will get the following equation 6 + 6 + 6 + 8 + 4 and by adding all these numbers you will get 30.\n\n``` or\n(7-1)+(15-9)+(11-5)+(13-3)+(3-1) = 30\n```\n\nThis is not mention in above question that you can use sign or not. Thus we can use the sign (+, -, x etc ).\nBecause in the given question it is also mention that we can repeated the numbers. So wee can also solve it as...\n\n``` (3+3) + (3+3) + (3+3) + (3+3) + (3+3) = 30\n```\n\nThe above solution is best and easy way.\n\n___________________________________________\n\n##### Miscellaneous Short Questions\n\n######",
null,
"Famous Question IAS topper Gaurav Agrawal\n\n______________________\n\n######",
null,
"Same Question IAS topper Gaurav Agrawal\n\n______________________\n\n######",
null,
"If Y+1=3 then y = 2 ? (is not simple)\n\n______________________\n\n######",
null,
"Mr. Khan's logical question?\n\n______________________\n\n##### Miscellaneous Short Questions\n\n######",
null,
"Math puzzle\n\n______________________\n\n######",
null,
"-(-a) is read as ...?\n\n______________________\n\n######",
null,
"If √(0.04)(0.4)a = (0.4)(0.04)√b then a/b ... ?\n\n______________________\n\n######",
null,
"3 ÷ (5 ÷ 7) = (3 ÷ 5) ÷ 7\n\n______________________\n\n######",
null,
"Which step is wrong?\n\n______________________\n\n##### SHORTCUT TRICKS (Division)\n• Divisible by 2 Shortcut trick\n• Divisible by 3 Shortcut trick\n• Divisible by 4 Shortcut trick\n• Divisible by 5 Shortcut trick\n• Divisible by 6 Shortcut trick",
null,
"• Divisible by 7 Shortcut trick",
null,
"• Divisible by 8 Shortcut trick",
null,
"• Divisible by 9 Shortcut trick\n• Divisible by 10 Shortcut trick\n\n##### Worksheets (Solved)\n\n###### Integration",
null,
"",
null,
"",
null,
"",
null,
"",
null,
""
] | [
null,
"https://mathschoolinternational.com/images/logoMedium.gif",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions\\IQ-0011.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-0010.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-0011.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-0006.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-0001.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-5001.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/Questions/IQ-Negative.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-SquareRoot/Questions/SquareRoot-9916-Solve.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-FRACTION/Math-Question-Associative.png",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-BODMAS/Question/Order-of-Operation-Question-0300.png",
null,
"https://mathschoolinternational.com/Images/New.gif",
null,
"https://mathschoolinternational.com/Images/New.gif",
null,
"https://mathschoolinternational.com/Images/New.gif",
null,
"https://mathschoolinternational.com/Images/New.gif",
null,
"https://mathschoolinternational.com/images/advertise-here2.jpg",
null,
"https://mathschoolinternational.com/images/ExamTips.png",
null,
"https://mathschoolinternational.com/images\\100Marks.jpg",
null,
"https://mathschoolinternational.com/Math-Questions/Questions-IQ/..\\..\\images\\Times-of-India-Education-Display-Ads.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81748426,"math_prob":0.9895803,"size":1523,"snap":"2021-04-2021-17","text_gpt3_token_len":517,"char_repetition_ratio":0.1770902,"word_repetition_ratio":0.06741573,"special_character_ratio":0.43795142,"punctuation_ratio":0.13946587,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934168,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,2,null,7,null,7,null,7,null,7,null,7,null,7,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T20:27:19Z\",\"WARC-Record-ID\":\"<urn:uuid:a991e675-621f-467d-a69a-017b36fcc7d4>\",\"Content-Length\":\"36465\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d016e6b7-c33a-4969-b0de-725d2b8383e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:82b0edf5-c242-4257-b895-febbf4f30a6a>\",\"WARC-IP-Address\":\"172.67.176.66\",\"WARC-Target-URI\":\"https://mathschoolinternational.com/Math-Questions/Questions-IQ/IQ-0011.aspx\",\"WARC-Payload-Digest\":\"sha1:UH4KR4UFDVOT2MSC7H3KRUAIIO2ZVA6Q\",\"WARC-Block-Digest\":\"sha1:3XHEZSF7YD42ICBM5GAIR5A44BJ4BDMQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039550330.88_warc_CC-MAIN-20210421191857-20210421221857-00445.warc.gz\"}"} |
https://answers.everydaycalculation.com/multiply-fractions/50-25-times-14-12 | [
"Solutions by everydaycalculation.com\n\n## Multiply 50/25 with 14/12\n\n1st number: 2 0/25, 2nd number: 1 2/12\n\nThis multiplication involving fractions can also be rephrased as \"What is 50/25 of 1 2/12?\"\n\n50/25 × 14/12 is 7/3.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 50/25 × 14/12 = 50 × 14/25 × 12 = 700/300\n3. After reducing the fraction, the answer is 7/3\n4. In mixed form: 21/3\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8329059,"math_prob":0.98987186,"size":457,"snap":"2023-40-2023-50","text_gpt3_token_len":179,"char_repetition_ratio":0.17218544,"word_repetition_ratio":0.0,"special_character_ratio":0.46608314,"punctuation_ratio":0.08411215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763339,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T09:32:39Z\",\"WARC-Record-ID\":\"<urn:uuid:8f26dce5-7dcf-4138-9928-7a81a3f01de4>\",\"Content-Length\":\"7393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0eacff9e-2e6f-409a-8fd5-b458fb298937>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d491b9e-6424-4445-b59f-6ac3def7cdc4>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/50-25-times-14-12\",\"WARC-Payload-Digest\":\"sha1:HXS7DO76CP64ZMZJZNVAJVU5ZXXWIGFL\",\"WARC-Block-Digest\":\"sha1:YDFP3IU7ZRPOIHSTKCOO72I7R46KLIGJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233505362.29_warc_CC-MAIN-20230921073711-20230921103711-00634.warc.gz\"}"} |
https://santandergto.com/en/some-further-details-on-quantum-computing/ | [
"#### Superposition ⚙️\n\nA qubit is represented as a combination of two quantum states are expressed as|0> for the “ground state” and |1> for the “excited state”, equivalent to 0 or 1 when translated to classical bits.\n\nBoth states are combined like this to define a qubit:\n\nWhere α y β are complex numbers defining probability amplitudes. Hence, we can calculate:\n\nFor instance, a qubit defined by\n\nhas a 50-50 probability of becoming 0 or 1 when measured since\n\nSince α y β and are complex, the qubit can be represented as a point on the surface of a sphere, called the “Bloch Sphere. This graph is very useful to understand the behavior of a single qubit when modified by some gates.\n\nUsing this math, we can calculate the probability that a qubit shows up as 0 or 1 when measured. But be aware, the Bloch sphere does not work when you entangle two qubits.\n\n#### Sample quantum gates ?\n\nThe Hadamard gate creates a basic 50-50 superposition. Hence, measuring it will (theoretically) provide a result of 0 or 1 with a probability of 50% for each of the states.\n\nFigure 2 shows a basic circuit built in IBM Quantum Experience, a highly recommended resource to learn about and experiment with quantum computing. Here we take a qubit initialized in “ground state”, , and apply a Hadamard gate (blue box) to it. At this step, the qubit is in superposition. Then, we measure it (pink box) and take the result as a classic bit: either 0 or 1.\n\nThe histogram below the circuit represents the results: Roughly 50% of the measures provided a 0 and 50% provided a 1, as expected.\n\nLooking at this on the Bloch sphere we start with the qubit on the top of the Z axis\n\nThe Hadamard gate maps the Z axis into the X axis and vice versa. Hence, we have moved the qubit like this:\n\nBut measurements only happen on the Z axis. Hence, since our qubit is aligned with the X axis, it has no definite value on Z. When measuring, it will be like projecting it on Z and the result may be |0> or |1> with identical probability.\n\n#### Undoing randomness with two Hadamard gates ?\n\nOne of the many weird things you can find when playing around with qubits in superposition is that applying a second Hadamard gate results in a |0> with 100% probability.\n\nIsn’t it weird if you think about it? We take a gate that converts a|0> into something that, when measured provides a perfectly random result. And if we apply the same gate to that perfectly random result, we get a 0 with absolute certainty. Why is that? The Bloch sphere can help us understand what is happening at quantum level.\n\nWe start with the qubit on the top of the Z axis\n\nThe Hadamard gate maps the Z axis into the X axis and vice versa. Hence, we have moved the qubit like this:\n\nWhen we apply the Hadamard gate again we map X on Z.\n\nHence, we finish with the qubit again on Z and state|0> .\n\n#### CNOT Gate ?\n\nThanks to entanglement, two qubits will always provide correlated measurements. Take for example the 50-50 qubit we saw before and consider that we have a second qubit entangled with it. Since the first qubit has equal probability to provide a 0 or 1 when measured we can’t predict what it will be. However, we can be sure that the value of the second qubit will have a known relationship with the first one when we measure it.\n\nTake for example the CNOT (controlled NOT) quantum gate. It works with two qubits. The first qubit is known as the “control” and the gate applies a NOT to the second qubit when the control (the first qubit) is in state |1> . So, the CNOT gate will provide the following results:\n\n|00> →|00>\n\n|01> →|01>\n\n|10> →|11>\n\n|11> →|10>\n\nYou might think that this looks pretty much like classical logical gates, but the core concept here is that it is applied to qubits while in superposition.\n\nTake Figure 4. We consider two qubits (q and q), both initialized at ground state.\n\nAs a first step we put q in superposition using a Hadamard gate (blue box). We also put q in “excited state” by using a NOT gate (green box).\n\nThen, we apply the CNOT gate, making q the control. Hence, since q can be either |0> or |1> with 50% probability and q is started as|1> the only transitions that we can get are:\n\n|01> →|01>\n\n|11> →|10>\n\nAnd they once measured they show in the histogram 50% probability as expected due to the 50-50 probability of q.\n\nRemember, the Bloch sphere can’t be used to represent two qubits after using a CNOT gate, because they have been entangled.\n\n#### Manipulating probabilities ?\n\nI won’t go much deeper on quantum gates. If you are interested, I recommend you study the guides available in IBM Quantum Experience. I’d just like to show you that probabilities in the result can be manipulated using different gates like in Figure 5 where the probability of measuring a 0 is much larger than that of obtaining a 1.\n\nLooking at this with the Bloch sphere, we start with the qubit in |0>.\n\nWe then apply a Hadamard gate, which maps Z on X.\n\nThen we apply a T gate, which rotates the qubit π/4 around the Z axis.\n\nBy applying the Hadamard gate again we remap X on Z, hence the qubit rotates again:\n\nAnd when measuring it on Z we get that the probability of getting is not 100% because the projection comes somewhere to the top of the axis, but not at the top. The theoretical probability of obtaining 0 is 0.85, and of obtaining 1 it is 0.15.\n\n#### The QSphere ?\n\nThe Bloch sphere is a particularly good tool to represent a single qubit, but it can’t represent two entangled qubits because their state is something “shared” and not particular of each one of them. Once two qubits are entangled, they can’t be thought of like two independent entities. Their state is something that spans both qubits.\n\nThe qsphere can represent in a single visualization all the states of multiqubit system. For this reason it has been recently included as the default visualization in IBM Q Experience.\n\nTake for instance this circuit:\n\nWe now know this represents 4 qubits that, before being measured can be in any of states with equal probability.\n\nThis is the histogram:\n\nAnd this is the qsphere showing in a single diagram all 16 states.\n\nThe size of the dots representing each state also show their probability to be measured. The larger the dot, the more likely that result would be if measured. In this case all dots have the same size as they are equally probable. And the color of the dot represents the phase of that state.\n\nThe qsphere’s north pole represents the state where all qubits are |0>. The south pole represents the state where all qubits are |1>. And each of the parallels in the sphere represent from north to south states with a growing number of |1>’s: The northmost parallel represents all states with just one |1>, the next parallel southwards includes all states with two |1>’s, etc.\n\nYou can find a great thread by @abe_asfaw explaining the qsphere on Twitter. The very first video in the thread shows how the qubit states interfere with each other to increase the probability of measuring a specific result in different phases of the Grover’s algorithm, a quantum search procedure.\n\nThere is a growing trend to deprecate the Bloch sphere to teach and show qubit states in favor of the qsphere.\n\n#### Where to go from here\n\nQuantum computing algorithms and programming is a complex subject. Looking at the examples provided here you can think of the current state of quantum computing as if classic computing was wiring some tens of logic gates (AND, OR, XOR, etc). However, it is a new and promising paradigm in computing that will surely bring a new era to science and technology.",
null,
"",
null,
""
] | [
null,
"https://santandergto.com/wp-content/uploads/2022/05/Como-evitar-correos-1-scaled.jpeg",
null,
"https://santandergto.com/wp-content/uploads/2023/02/WIT_collage-11feb-2023-llama.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92145234,"math_prob":0.9692664,"size":8014,"snap":"2023-14-2023-23","text_gpt3_token_len":1896,"char_repetition_ratio":0.14294632,"word_repetition_ratio":0.051567946,"special_character_ratio":0.2302221,"punctuation_ratio":0.09238452,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9937109,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T17:31:05Z\",\"WARC-Record-ID\":\"<urn:uuid:d5684967-441c-4e3a-b6b7-96d5841fd55d>\",\"Content-Length\":\"99779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1b06a90-18fc-431d-914f-567cc6d7bb0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bcdcc70-6c5a-422a-ba8d-5d326cc09f9f>\",\"WARC-IP-Address\":\"45.60.195.69\",\"WARC-Target-URI\":\"https://santandergto.com/en/some-further-details-on-quantum-computing/\",\"WARC-Payload-Digest\":\"sha1:4H6DKUEXWKG4YLOIN7NPW2RPGNYEGCXT\",\"WARC-Block-Digest\":\"sha1:BCUSLWCDAFYPA42DB6M7RRHLIW2V53NG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945368.6_warc_CC-MAIN-20230325161021-20230325191021-00410.warc.gz\"}"} |
https://physics.stackexchange.com/questions/383203/single-electron-on-a-two-atoms-chain-factorisation-of-hilbert-space-by-externa | [
"# Single electron on a two atoms chain : factorisation of hilbert space by external and internal degrees of motion\n\nI have a question on the first pages of the book \"A Short Course on Topological Insulators\" by János K. Asbóth, László Oroszlány and András Pályi\n\nBut actually we can see it here : http://theorie.physik.uni-konstanz.de/burkard/sites/default/files/ts15/TalkSSH.pdf\n\nPresentation of the problem\n\nWe work with a 1 dimensional chain where there are two types of atoms $A$ and $B$. The unit cell is labelled by $m$. We study the motion of one electron.\n\nWe have different interaction terms : $v$ and $w$.\n\nThey work with the following SSH hamiltonian model :\n\n$$H=v \\sum_{m=1}^N (|m,B\\rangle\\langle m,A| +hc)+w\\sum_{m=1}^{N-1} (|m+1,A\\rangle\\langle m,B|+hc)$$\n\nWhere $hc$ is for the hermitic conjugate.\n\nThus, if we write the hamiltonian, we have something that looks like :\n\n$$H = \\begin{bmatrix}0&v&0&0&0&0&0&0\\\\v&0&w&0&0&0&0&0 \\\\ 0&w&0&v&0&0&0&0 \\\\ 0&0&v&0&w&0&0&0\\\\ 0&0&0&w&0&v&0&0\\\\ 0&0&0&0&v&0&w&0 \\\\ 0&0&0&0&0&w&0&v \\\\ 0&0&0&0&0&0&v&0 \\end{bmatrix}$$\n\nAnd the Hilbert space can be seen as a tensorial product of :\n\n$\\mathcal{H}_{tot}=\\mathcal{H}_{ext} \\otimes \\mathcal{H}_{int}$\n\nWhere the external degree of freedom is represented by the letter $m$, and the internal one by the fact we are on site $A$ or $B$.\n\nThus : $|m,\\alpha\\rangle = |m\\rangle \\otimes |\\alpha \\rangle$ where $\\alpha=A,B$.\n\nMy question\n\nBut there is something I misunderstand here.\n\nI agree that we can see the total Hilbert space of the problem as a tensorial product of an internal and external degrees of freedom hilbert spaces.\n\nBut in the same time, if we consider the state $|m,A\\rangle$, we would see a gaussian centered on the atom $A$ in cell $m$. And then $|A\\rangle$ would be a gaussian centered in $0$ and $|m\\rangle$ would \"shift it\" to the position $m$ right ? But if we write everything on the $x$ basis we have :\n\n$$\\psi(x)=\\psi_m(x) \\psi_A(x)$$ and it should be $0$ outside of the support of $\\psi_A$. And as $\\psi_A$ is a gaussian centered in $0$ we would have a wavefunction that is zero everywhere if we go far enough.\n\nWhere is the mistake I do in my vision of the problem?\n\nIsn't $|m,A\\rangle$ a gaussian centered on the atom $A$ that is in the cell $m$ ? If so, what represents the kets $|m\\rangle$ and $|A\\rangle$ physically (what those wavefunctions look like).\n\n• It looks like the book you mention can also be accessed via arXiv.com. Here is the link I found: arxiv.org/pdf/1509.02295.pdf – Kenny H Feb 3 '18 at 18:51\n\nIt is wrong to write $\\psi(x)=\\psi_m(x)\\psi_A(x)$. The correct wave function $\\psi_{mA}(x)$ that represents the state $|m,A\\rangle$ should be $$\\psi_{mA}(x)=e^{-m\\partial_x}\\psi_{A}(x)=\\left(1-m\\partial_x+\\frac{1}{2!}m^2\\partial_x^2-\\frac{1}{3!}m^3\\partial_x^3+\\cdots\\right)\\psi_{A}(x) =\\psi_{A}(x-m),$$ where we have used the formula of Taylor expansion. Physically, this can be understood by noticing that $p=-\\text{i}\\partial_x$ is the momentum operator that generates translation, and the meaning of the state $|m,A\\rangle$ is the simply the Gaussian packet $\\psi_A(x)$ translated by the displacement $m$.\n\nThe tensor product in $|m,A\\rangle=|m\\rangle\\otimes|A\\rangle$ does not mean to multiply two wave functions together directly. It just means that if you consider the following linear superposition, the result can be expanded in the tensor product basis as $$(c_m|m\\rangle+c_n|n\\rangle)\\otimes(c_A|A\\rangle+c_B|B\\rangle)=c_mc_A|m,A\\rangle+c_mc_B|m,B\\rangle+c_nc_A|n,A\\rangle+c_nc_B|n,B\\rangle.$$ This is what defines a tensor product structure in the Hilbert space. Any algebraic structure satisfying such property of binlienar maps can be called a tensor product. You can see that in terms of the wave function, the following algebraic structure is indeed a tensor product $$(c_m e^{-m\\partial_x}+c_n e^{-n\\partial_x})\\otimes(c_A \\psi_A(x)+c_B \\psi_B(x))=c_mc_A \\psi_A(x-m)+c_mc_B \\psi_B(x-m)+c_nc_A \\psi_A(x-n)+c_nc_B \\psi_B(x-n).$$ In this sense, the operators $e^{-m\\partial_x}$ form a set of basis of the external Hilbert space, which can be denoted as $|m\\rangle$ abstractly. There is no wave function associated with $|m\\rangle$, because $|m\\rangle=e^{-m\\partial_x}$ is actually represented as a linear operator in this case.\n\nWell, if one insists to understand the $|m\\rangle$ state as a wave function, one possible interpretation is to consider it to be a Dirac delta function located at $x=m$ (the center of the $m$th unit cell). $$\\psi_m(x)=\\delta(x-m).$$ But still, the tensor product $|m\\rangle\\otimes|A\\rangle$ does not correspond to multiplying the wave functions $\\psi_m(x)$ and $\\psi_A(x)$ together in a point-wise manner. It should actually be understood as a convolution of the two wave functions: $$\\psi_{mA}(x)=(\\psi_{m}*\\psi_A)(x)=\\int\\text{d}y\\psi_m(y)\\psi_A(x-y)=\\psi_A(x-m).$$ The convolution also satisfies the algebraic properties of the tensor product and is hence a legitimate representation of the tensor product. This point of view is secretly equivalent to the above operator point of view, because, in functional analysis, the Dirac delta function (or the shifted Dirac delta function) is actually defined to be the kernel of the identity operator (or the translation operator).\n\n• Thank you for your answer. But I thought that in a tensor product of Hilbert space, each element should be an integrable function. But apparently we also allow operators to live in one of the element of the tensor product ? I didn't know this. – StarBucK Feb 4 '18 at 10:45\n• @StarBucK Well, your understanding just corresponds to the simplest case of the tensor product. In the eyes of mathematicians, tensor product can be anything that satisfies the universal property of bilinear maps. So we should look at the tensor product from a higher and more general point of view. However, if you insist to think in terms of wave functions, it is also possible. See the appended section my updated answer. – Everett You Feb 4 '18 at 16:52\n• – Everett You Feb 4 '18 at 16:53\n• Hmm about your appended section for me it is contradictory with the postulates of the scalar product with tensor products. Indeed we define them as $\\langle x , x' | \\psi , \\phi \\rangle= \\psi(x) \\phi(x')$. Thus in this part you change this postulate ? – StarBucK Feb 4 '18 at 17:52\n• After looking through your answer, I realized my answer below is way too unrigorous.... – Kite.Y Feb 5 '18 at 16:21\n\nOK, let's see if this is what you want: consider a general position: \\begin{align} x = m \\cdot a_0 + \\tilde{x} \\end{align} and consider the following basis function: \\begin{align} &\\psi_M(x) \\propto \\sum_{m = 0}^{N} \\delta(m\\cdot a_0) \\\\ \\lim_{x-\\infty}\\quad& \\psi_A(x) = 0 \\end{align} then, your wave-function can be written as: \\begin{align} \\psi(x) = \\psi_M(m\\cdot a_0)\\otimes\\psi_A(\\tilde{x}) \\end{align}\n\nI think your problem is due to a misunderstanding of what the tensor product state space, spanned by the basis $\\{\\lvert m,X \\rangle\\}$ where $m=1,2,3,4$ and $X=A,B$, means. The SSH model you are considering is specified by the Hamiltonian whose states are spanned by this tensor product of basis vectors. In the context of this model, the state $\\lvert A \\rangle$ does not necessarily have any meaning on its own. Therefore, I believe your interpretation of beginning with the state $\\lvert A\\rangle$ being a Gaussian centered on anything is incorrect. This state on its own has no meaning. Within this model, the states must be specified by the full tensor product.\n\n• Thank you for your answer. I'm not sure to understand though. If I have a tensor product of states I can always project it on $\\langle x, x'|$. Thus $\\langle x,x' |m,X\\rangle=\\langle x | m \\rangle \\langle x' | X \\rangle = \\psi_m(x) \\psi_X(x')$. Thus I don't understand – StarBucK Feb 3 '18 at 20:22\n• You are completely correct that you can always project a tensor product of states onto whatever you like. The issue is in your first explanation of the problem, where you think about it as starting with $\\lvert A\\rangle$ and then shifting it by specifying $\\lvert m\\rangle$. The states $\\lvert A \\rangle$ and $\\lvert m \\rangle$ have no meaning on their own since the state space of the model is a tensor product of the two. Does this help? – Kenny H Feb 3 '18 at 20:32\n• So what you mean is that indeed $|A\\rangle$ has a given wavefunction but it is not the physical meaning I think of a gaussian centered. It is indeed a function on $\\mathbb{R}$ but something that it is hard to have a physical intuition on. In the opposite of $|m,A\\rangle$ that can really be seen as a Gaussian centere on the atom $A$ in the cell $m$. Is that what you mean ? – StarBucK Feb 3 '18 at 20:37\n• No. In this model, the state space is the tensor product $\\lvert m\\rangle \\otimes \\lvert X\\rangle$. You must specify both parts of the tensor product to select out states in this model's state space. The state vector $\\lvert A \\rangle$ has no meaning on its own and it does not make sense to talk about its wavefunction. – Kenny H Feb 3 '18 at 21:05"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7678577,"math_prob":0.99813336,"size":2242,"snap":"2020-10-2020-16","text_gpt3_token_len":748,"char_repetition_ratio":0.121090256,"word_repetition_ratio":0.028409092,"special_character_ratio":0.31846565,"punctuation_ratio":0.08782435,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998222,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T00:05:35Z\",\"WARC-Record-ID\":\"<urn:uuid:447f08ad-83cd-4075-8033-f0454c4eac5e>\",\"Content-Length\":\"172889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3dcf1aba-f3a5-448a-9eae-020443f3a208>\",\"WARC-Concurrent-To\":\"<urn:uuid:aad18995-9942-4f62-9c09-ea54674024e4>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/383203/single-electron-on-a-two-atoms-chain-factorisation-of-hilbert-space-by-externa\",\"WARC-Payload-Digest\":\"sha1:TKFZY6TZRRXIM3B4UVZ2LUSBTOTUWZXA\",\"WARC-Block-Digest\":\"sha1:TV4MUJS6BY6VPQ4YR5YJFY4ZAGK6QXGS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145316.8_warc_CC-MAIN-20200220224059-20200221014059-00262.warc.gz\"}"} |
https://stacks.math.columbia.edu/tag/03JN | [
"Lemma 67.3.3. Let $S$ be a scheme. Let $Y \\to X$ be a representable morphism of algebraic spaces over $S$. Let $U \\to X$ be a morphism from a scheme to $X$. If the fibres of $U \\to X$ are universally bounded, then the fibres of $U \\times _ X Y \\to Y$ are universally bounded.\n\nProof. This is clear from the definition, and properties of fibre products. (Note that $U \\times _ X Y$ is a scheme as we assumed $Y \\to X$ representable, so the definition applies.) $\\square$\n\nIn your comment you can use Markdown and LaTeX style mathematics (enclose it like $\\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8009559,"math_prob":0.99580204,"size":835,"snap":"2022-40-2023-06","text_gpt3_token_len":216,"char_repetition_ratio":0.10830325,"word_repetition_ratio":0.0,"special_character_ratio":0.25868264,"punctuation_ratio":0.10404624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999691,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T08:11:24Z\",\"WARC-Record-ID\":\"<urn:uuid:05b6e883-4e13-4369-90bd-b269938bc7c1>\",\"Content-Length\":\"14228\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60f06b41-e42b-4d59-b5ce-9dcbd8298d5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd87d792-8b75-456f-9348-8da24b7e1ce9>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/03JN\",\"WARC-Payload-Digest\":\"sha1:JW642XAFOYVGXGHNIYWIHA7IIZPAV5AM\",\"WARC-Block-Digest\":\"sha1:3RFOR3PNZ5CQNLV5URSTVM5QWDMKK25S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494974.98_warc_CC-MAIN-20230127065356-20230127095356-00366.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/cond-mat/0401620/ | [
"arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.\n\n# Frequency-dependent local interactions and low-energy effective models from electronic structure calculations\n\nF. Aryasetiawan, M. Imada, A. Georges, G. Kotliar, S. Biermann, and A. I. Lichtenstein Research Institute for Computational Sciences, AIST, 1-1-1 Umezono, Tsukuba Central 2, Ibaraki 305-8568, Japan Institute for Solid State Physics, University of Tokyo, Kashiwaoha, Kashiwa, Chiba, 277-8581, Japan PRESTO, Japan Science and Technology Agency Centre de Physique Theorique, Ecole Polytechnique, 91128 Palaiseau, France LPT-ENS CNRS-UMR 8549, 24 Rue Lhomond, 75231 Paris Cedex 05, France Department of Physics and Astronomy, Serin Physics Laboratory, Rutgers University, Piscataway NJ 08854-8019 USA University of Nijmegen, NL-6525 ED, Nijmegen, The Netherlands\nJan 2004\n###### Abstract\n\nWe propose a systematic procedure for constructing effective models of strongly correlated materials. The parameters, in particular the on-site screened Coulomb interaction U, are calculated from first principles, using the GW approximation. We derive an expression for the frequency-dependent and show that its high frequency part has significant influence on the spectral functions. We propose a scheme for taking into account the energy dependence of , so that a model with an energy- independent local interaction can still be used for low-energy properties.\n\n## I Introduction\n\nLattice fermion models such as the Hubbard model or the Anderson impurity model and their extensions have played a major role in studying electron correlations in systems with strong onsite correlations. Despite the widespread use of these models, little justification has been given in using them. The models are postulated on the basis of physical intuition. In particular, the models employ parameters, such as the famous Hubbard interaction U, which are normally adjusted to serve the given problem. Without judicious choice of parameters, the model may yield misleading results or in the worst case, the model itself is not sufficient to describe the real system. One can define rigorously these concepts in the path integral formulation of the many body problem by performing a partial trace over the degrees of freedom that one wants to eliminate, and ignoring the retardation in the interactions generated by this procedure. However, this elimination of the degrees of freedom is very hard to perform for real materials. It is therefore very desirable to figure out a systematic way of constructing low energy effective models with well defined parameters calculated from first principles such that the model can quantitatively reproduce and predict physical properties of interest of the corresponding real system, especially when the correlation effects are crucial.\n\nAnother important issue that has not received sufficient attention is the role of energy dependence of the screened local Coulomb interaction U. Model studies investigating the importance of high-energy states in the Hubbard model can be found in olle2 ; sun ; florens . A dynamic Hubbard model has also been considered. hirsch . In most cases however U is assumed to be static, but on the other hand we know that at high energy the screening becomes weaker and eventually the interaction approaches the large bare Coulomb value, which is an order of magnitude larger than the static screened value. Of course the high-energy part of the Coulomb interaction has in some way been down-folded into the Hubbard U but it is not clear how this downfolding is actually accomplished.\n\nA number of authors have addressed the problem of determining the Hubbard U from first principles. One of the earliest works is the constrained local density approximation (LDA) approach olle1 ; olle2 where the Hubbard U is calculated from the total energy variation with respect to the occupation number of the localized orbitals. An approach based on the random-phase approximation (RPA) was later introduced springer , which allows for the calculations of the matrix elements of the Hubbard U and its energy dependence. This was followed by a more refined approach for calculating U. kotani . A yet different approach, computes the matrix elements of the Coulomb interactions screened in real space and assumes a Yukawa form to extract the Hubbard U and the other interactions which determine the multiplet splittings. brooks .\n\nThe purpose of the present work is to develop a precise formulation for a systematic construction of effective models where the parameters are obtained from realistic first-principles electronic structure calculations. In particular, we concentrate on the calculation of the Hubbard U and demonstrate the importance of its energy dependence. We show that a static Hubbard hamiltonian, obtained from a naive construction in which this energy dependence is simply neglected, fails even at low energy. This static model can be appropriately modified however, by taking into account the feedback of the high-energy part of U into the one-particle propagator. We illustrate our scheme in transition metals, concentrating on Ni as an example, since it is a prototype system consisting of a narrow 3d band embedded in a wide band. Furthermore, Ni is one of the most problematic case from the viewpoint of the LDA.\n\n## Ii Theory\n\nLet us suppose that the bandstructure of a given solid can be separated into a narrow band near the Fermi level and the rest, like, for example, in transition metals or 4f metals. Our aim is to construct an effective model which only includes the narrow 3d or 4f band. The effective interaction between the 3d electrons in the Hubbard model can be formally constructed as follows. We first divide the complete Hilbert space into the Hubbard space , consisting of the 3d states or the localized states, and the rest. The bare Green’s function , spanning the d-subspace is given by:\n\n Gd(r,r′;ω)=occ∑dψd(r)ψ∗d(r′)ω−εd−i0++unocc∑dψd(r)ψ∗d(r′)ω−εd+i0+ (1)\n\nLet be the total (bare) polarization, including the transitions between all bands:\n\n P(r,r′;ω) =occ∑iunocc∑jψi(r)ψ∗i(r′)ψ∗j(r)ψj(r′) ×{1ω−εj+εi+i0+−1ω+εj−εi−i0+} (2)\n\ncan be divided into: , in which includes only 3d to 3d transitions (i.e limiting the summations in (2) to ), and be the rest of the polarization. The screened interaction on the RPA level is given by\n\n W =[1−vP]−1v =[1−vPr−vPd]−1v =[(1−vPr){1−(1−vPr)−1vPd}]−1v ={1−(1−vPr)−1vPd}−1(1−vPr)−1v =[1−WrPd]−1Wr (3)\n\nwhere we have defined a screened interaction that does not include the polarization from the 3d-3d transitions:\n\n Wr(ω)=[1−vPr(ω)]−1v (4)\n\n(we have not explicitly indicated spatial coordinates in this equation). The identity in (3) explicitly shows that the interaction between the 3d electrons is given by a frequency-dependent interaction . It fits well with the usual physical argument that the remaining screening channels in the Hubbard model associated with the 3d electrons, represented by the 3d-3d polarization further screen to give the fully screened interaction .\n\nWe now choose a basis of Wannier functions , centered about atomic positions R, corresponding to the 3d Bloch functions , and consider the matrix elements of the (partially screened) frequency-dependent Coulomb interaction :\n\n UR1nR2n′,R3mR4m′(τ−τ′)≑∫d3rd3r′ϕ∗R1n(r)ϕR2n′(r)Wr(r,r′;τ−τ′)ϕ∗R3m(r′)ϕR4m′(r′) (5)\n\nWe would like to obtain an effective model for the 3d degrees of freedom. Because of the frequency dependence of the U’s (corresponding to a retarded interaction), this effective theory will not take a hamiltonian form. We can however, write such a representation in the functional integral formalismnegele_orland by considering the effective action for the 3d degrees of freedom given by:\n\n S =∫dτdτ′[−∑d†Rn(τ)−1Rn,R′n′(τ−τ′)dRn′(τ′) +12∑:d†R1n(τ)dR2n′(τ):UR1nR2n′,R3mR4m′(τ−τ′):d†R3m(τ′)dR4m′(τ′):] (6)\n\nwhere denotes normal ordering, which accounts for the Hartree term, and the summation is over repeated indices. When using a Wannier transformation which does not mix the d-subspace with other bands, the Green’s function can be taken, to first approximation, to be the bare Green’s function constructed from the Bloch eigenvalues and eigenfunctions. If instead an LMTO formalism andersen-lmto is used, one should in principle obtain from a downfolding procedure onto the d-subspace, i.e perform a partial trace over degrees of freedom (e.g to first order: ).\n\nIn the following, we retain only the local components of the effective interaction on the same atomic site. This is expected to be a reasonable approximation because the 3d states are rather localized. The formalism may be easily extended to include intersite Coulomb interactions if necessary. Hence, we consider the frequency-dependent Hubbard interactions:\n\n Unn′,mm′(τ−τ′)≑∫d3rd3r′ϕ∗n(r)ϕn′(r)Wr(r,r′;τ−τ′)ϕ∗m(r′)ϕm′(r′) (7)\n\nwith being the Wannier orbital for R=0. In order to illustrate the procedure within the linear muffin-tin orbital (LMTO) basis set, we use instead of the Wannier orbital the normalized function head of the LMTO which is a solution to the radial Shrödinger equation matching to a Hankel function at zero energy at the atomic sphere boundary.\n\nIn this paper, we investigate the importance of the energy- dependence of U. Therefore, we shall compare the results obtained from (6) with those of a Hamiltonian approach in which one would construct a Hubbard model with a static interaction U:\n\n H=∑Rn,R′n′c†RnhRn,R′n′cR′n′+12∑R,nn′,mm′c†RncRn′Unn′,mm′c†RmcRm′ (8)\n\nIt seems natural to identify the static Hubbard U with the (partially) screened local interaction in the low-frequency limit . Note that the Hubbard model (8) has been constructed in the most naive manner, by simply taking the quadratic part to be the d-block of the non-interacting hamiltonian.\n\nIn order to compare the results obtained from the full dynamical U to those of the static Hubbard model, we need to solve (6) and (8) within some consistent approximation scheme. In the following, we adopt a “GW universe” where the exact self-energy for the solid is assumed to be given by the GW approximation (GWA) hedin ; ferdi , and where the GW approximation is also assumed to be a reliable tool in solving the effective models (6) and (8). This allows us to make a proper comparison between the “exact” self-energy and the Hubbard model self-energy. If the assumption of static U is valid, the Hubbard self-energy and the true self-energy (both within the GWA) should be close to each other, at least for small energies. Or equivalently, the spectral function for small energies should resemble that of the full one.",
null,
"Figure 1: respect to the ∖$E∖_g∖$ channel. Other channels give almost the same result.\n\n## Iii Results and Discussions\n\n### iii.1 Comparing self-energies\n\nThe screened interaction with and without the 3d-3d transitions is shown in Fig. (1) in the case of Nickel. Here and in all the following, a spin-unpolarized (paramagnetic) solution is considered. At low energies, the (partially) screened interaction without the 3d-3d transitions is larger than the full one , and at high energies they approach each other, as anticipated. Related calculations have also been performed by Kotani kotani .\n\nWe first compare the self-energy obtained from a GW treatment of the full system, given by:\n\n Σ(r,r′;ω)=i2π∫dω′eiηω′G(r,r′;ω+ω′)W(r,r′;ω′) (9)\n\nto the self-energy obtained from the effective model (6) with an energy-dependent interaction . Because of (3), the screened interaction corresponding to this is simply , and the corresponding self-energy reads:\n\n Σd(ω)=i2π∫dω′eiηω′Gd(ω+ω′)W(ω′) (10)\n\nThe difference between this expression and the GWA for the full system (Eq.9) is that in (10) only the d-block of the Green’s function has been included (since the effective action was written for the d-band only). Hence, the two self-energies differ by a term , with . We expect that the wavefunction overlap between two 3d states (one from and the other from the 3d state appearing in the matrix element of ) and other non-3d states is small so that should be close to the true . In Fig. 2, the two self-energies are displayed (more precisely, in this figure and in all the following, we display the matrix element of the self-energy in the lowest 3d state (band number 2), at the -point, corresponding to an LDA eigenenergy -1.79 eV). We observe that the two self-energies indeed almost coincide with each other, even at high energies. Hence, we conclude that the effective Hubbard model (6) for the d-subspace, with an energy-dependent interaction, provides a reliable description of the real system.",
null,
"Figure 2: The self-energy of the real system (nickel) and the Hubbard model with a frequency-dependent U.\n\nWe now turn to the self-energy associated with the static Hubbard model (8) with . The local screened interaction for this model, within the GW approximation, is given by:\n\n Wd(ω)=[1−UPd(ω)]−1U (11)\n\nand the self-energy of the Hubbard model in the GWA thus reads:\n\n ΣHd(r,r′;ω)=i2π∫dω′eiηω′Gd(r,r′;ω+ω′)Wd(r,r′;ω′) (12)",
null,
"Figure 3: The real parts of the self-energy of the real system (solid) and the Hubbard model (dash) in the GWA.\n\nNote that the difference between this static Hubbard model self-energy and that of the effective model with a frequency-dependent interaction (6) relies in the use of a different form of the screened interaction instead of the full . In Fig. (3), the real part of this self-energy with that of the full GWA self-energy for nickel are shown. Since the energy scale of the self-energy of the real system is determined by the bare Coulomb interaction whereas the Hubbard self-energy is set by , the latter has been shifted so that it is equal to the former at the LDA eigenvalue (-1.79 eV) of the band we have considered, at the -point. The difference in magnitude of the self-energies is not important since it simply shifts the spectrum (or, said differently, we have compared differences from the values of the chemical potential obtained in the various schemes). However, the difference in the variation of the self-energy with respect to energy matters, since it will give a different quasiparticle weight and affect the spectral function. As can be deduced from the figure, the Z factor of the Hubbard model taken at the energy of the quasiparticle band at the -point (eV) is much closer to unity as compared to the true (full GW) one because the former already contains the renormalization from the plasmon. Hence, neglecting frequency-dependence directly affects the physical results, even in the low-energy range. We shall see below, however, that it is possible to modify the static model in such a way that an accurate approximation is obtained at low energy.",
null,
"Figure 4: The imaginary parts of the self-energy of the real systems (solid) and the Hubbard model (dash) in the GWA.\n\nIn Fig. 4 the imaginary part of the self-energy is shown. Here we see that Im for the Hubbard model is peaked around 5 eV since there are no states above or below the 3d band. As a consequence, the real part of the self-energy exhibits the Kramers-Kronig behavior at around -5 eV. Within an energy region spanning about twice the 3d bandwidth, the imaginary part of the Hubbard model self-energy, in contrast to the real part, is not in bad agreement with the full one.\n\nThese findings can be understood qualitatively by considering more explicit expressions of the self-energy obtained within the GW approximation, for an effective model of the d-subspace defined on the periodic lattice. The imaginary part reads:\n\n ImΣn(k,ω)=∑q,mImWnm(q,ω−ϵmk−q)[nF(ϵmk−q)+nB(ω−ϵmk−q] (13)\n\nIn this expression, are band indices, corresponds to the n-th non-interacting band and (resp. ) is the Fermi (resp. Bose) function. is the spectral function associated with the effective interaction (to be taken as if the self-energy of the frequency-dependent effective model is considered, and as if that of the static Hubbard model is considered). From this expression, one sees that if one considers an energy , the bands contained in the energy interval are the only ones contributing significantly to . This is the reason why the imaginary part of the self-energy at low-energy will be correctly reproduced by the effective low-energy model (provided of course that the spectral function is correctly approximated at low energy). In contrast, the real part of the self-energy is obtained from the Kramers-Krönig relation in the form:\n\n ReΣn(k,ω)=−1πP∫+∞−∞dν∑q,mImWnm(q,ν)nF(ϵmk−q)+nB(ν)ω−ϵmk−q−ν (14)\n\nBecause the principal- part integral extends over the whole frequency range, high-frequency contributions influence the self-energy even at low frequency. As a result, an accurate description of the real part of the self-energy cannot be obtained within the naive construction of the static Hubbard model because the effective interaction is not correctly approximated over the whole frequency range. This formula also suggests a way to appropriately modify the effective static model in order to obtain an accurate description at low energy, as discussed below.\n\n### iii.2 The puzzle of the satellite",
null,
"Figure 5: The spectral functions of the real system (solid) and the Hubbard model (dash) in the GWA.\n\nFig. 5 compares the spectral function obtained for the full system within the GWA, and that of the static Hubbard model (8). A striking difference between these two results is the absence of the 6 eV satellite in the GWA for the full system, while the static Hubbard model displays a satellite feature. This is to be expected from the structure of the self-energy. The GWA self-energy of the full system continues to grow and reaches a maximum around the plasmon excitation at around 25-30 eV, while the self-energy of the static Hubbard model has a maximum around the width of the 3d band (4 eV). In comparison with the true plasmon, the ”plasmon” of the Hubbard model has a much lower energy, of the order of , which results in the ”6 eV satellite”. One could blame the appearance of a satellite in our static Hubbard model calculation on the fact that we have used a (non self-consistent) GW approximation. However, more accurate treatments of the static Hubbard model (such as DMFT) do preserve this feature, which has in this context a natural interpretation as a lower Hubbard band. On the other hand, it could be that the experimentally observed Nickel satellite has a somewhat different physical origin and that the satellite obtained in static Hubbard model calculations is spurious in the context of Nickel (e.g because it is not legitimate to use a low-energy effective model in this energy range). In our view, this issue is an open problem which deserves further work.\n\n### iii.3 Improving the effective static model\n\nThe preceding discussion shows that an effective model for the d-band can be constructed, which accurately reproduces the full results over an extended energy range, provided the energy- dependence of is retained. However, performing calculations with an energy- dependent Hubbard interaction is exceedingly difficult. A more modest goal is to obtain an effective model which would apply to some low-energy range only (say, , with a cutoff of the order of the d-bandwith). In order to achieve this goal, we propose to adopt a renormalization group point of view, in which high energies are integrated out in a systematic way. Following this procedure, an appropriate low-energy model with a static U can be appropriately constructed. As we shall see, the bare Green’s function defining this low-energy model does not coincide with the non-interacting Green’s function in the d-subspace (we have seen that this does not lead to a satisfactory description, even at low energy).\n\nLet us illustrate this idea within the GW approximation. The full Hilbert space is divided into the Hubbard space, comprising the 3d orbitals, and the downfolded space, comprising the rest of the Hilbert space. This approach is complementary to the one in sun ; silke , where the division is done in real space (onsite and off sites). Within the GWA we may write the full self-energy as follows:\n\n GW=GdWd+Gd(W−Wd)+GrW. (15)\n\nwhere . The term represents the high-energy contribution of the screened interaction. This is the main source of error in the naive static limit, as discussed in the previous sections. The term is not obtainable within the Hubbard model, even when a frequency-dependent U is employed since resides in the downfolded space. This term was shown to be small. Its effect at low-energy can also be taken into account by appropriate modifications of the one-particle propagator, but we shall neglect it for simplicity.\n\nWe consider the following Hubbard model with a static , but a modified one-particle propagator , defined by the action:\n\n SH =−∫dτdτ′∑d†Rn(τ)[˜G0Rn,R′n′]−1(τ−τ′)dRn′(τ′) +12∫dτ∑:d†R1n(τ)dR2n′(τ):UR1nR2n′,R3mR4m′:d†R3m(τ)dR4m′(τ): (16)\n\nThe self-energy of this static Hubbard model in the GWA is:\n\n ˜ΣHd=˜Gd˜Wd (17)\n\nwhere the new effective interaction is , with constructed from the new Green’s functions (schematically ). We request that the interacting Green’s function of this modified static model coincides with that of the Green’s function calculated with the frequency- dependent interaction in the low energy range :), that is:\n\n ˜G−1d−˜ΣHd≃G−1d−GdWd−Gd(W−Wd)for|ω|<Λ (18)\n\nUsing the identity , this can be rewritten as:\n\n [1−U˜Pd]−1˜G−1d≃[1−UPd]−1G−1d−Gd(W−Wd)for|ω|<Λ (19)\n\nThis is an integral equation which determines in principle the modified bare Green’s function to be used in the ”downfolded” static action (16). To first approximation, one can neglect the polarization terms in this equation, and obtain the first order modification of as:\n\n ˜G−1d=G−1d−Gd(W−Wd)+⋯ (20)\n\nThe first correction appearing in this equation is precisely the contribution coming from the high-energy part of the screened interaction. We have explained above that this correction is not small, which is the reason for the failure of the ”naive” static Hubbard model using the non-interacting . Dividing the screened interaction into a low-energy part for and a high-energy part for , we can use the explicit forms (13,14) given in the previous section to obtain this first correction in the form:\n\n ˜G−1d(k,ω)n=G−1d(k,ω)n+1π∫|ν|>Λdν∑q,mImWnm(q,ν)nF(ϵmk−q)+nB(ν)ω−ϵmk−q−ν+⋯ (21)\n\nIn particular, we see by expanding this expression to first order in , that the low frequency expansion of the modified one-particle propagator reads: . The coefficient is a partial contribution to the quasi-particle residue . In practice this integral can be easily evaluated using the well known plasmon pole approximation, which should contain most of the high energy contribution. This correction insures that the quasi- particle residue is obtained correctly from this “downfolded” static model. We do not, however, expect that this improvement solves the problem with the satellite discussed in the previous subsection.\n\n### iii.4 Beyond the GW approximation\n\nIn strongly correlated systems, it is known that the GWA is not sufficient and improvement beyond the GWA is needed. Much of the short coming of the GWA probably originates from the improper treatment of short-range correlations within the random-phase approximation (RPA). Thus, one would like in the first instance attempt to improve these short-range correlations, which are essentially captured by the Hubbard model. The formula (15) suggests a natural way to do this, as follows. The contribution from the frequency-dependent U as well as the contribution from the downfolded Hilbert space are treated within the GWA. The self-energy corresponding to the GW self-energy of the Hubbard model with a static U can then be replaced by that obtained from more accurate theories such as dynamical mean-field theory (DMFT) georges or from exact methods such as Lanczos diagonalization. Thus, if we use the LDA to construct , the correction to the LDA exchange-correlation potential reads:\n\n ΔΣ=ΣH+Gd(W−Wd)+GrW−vxc, (22)\n\nwhere is the Hubbard model self-energy obtained from more accurate methods, replacing , We note also that the scheme in (15) avoids the problem of double counting, inherent in LDA+U anisimov or LDA+DMFT methods.anisimov2 In this way, it is possible to calculate the Hubbard U using the response function constructed from the LDA bandstructure.\n\nThe application of DMFT requires the Hubbard U for the impurity model. To calculate the Hubbard U for an impurity model, it is necessary to downfold contributions to the polarization P from the neighboring sites. Here, the identity in Eq. (3) shows its usefullness. Since the formulation in (3) is quite general, we merely need to redefine to be the onsite polarizations or equivalently, we redefine to also include polarizations from the neighboring sites. The computational result for Ni shows that the difference between the lattice and the impurity Hubbard U is rather small, essentially negligible. Somewhat different but related calculations of the Hubbard U for the impurity model has also been performed in kotani for Fe and Ni, which confirm the result obtained in our formulation.\n\nAnother feasible approach for calculating physical quantities of the derived Hubbard model is by using the path integral renormalization group (PIRG) method.imada One advantage of this method is the possibility of using the lattice Hubbard model as opposed to the impurity model, thus including spatial fluctuations and possible symmetry breaking. The method is also suited for studying single-particle Green’s functions as well as thermodynamic quantities, in particular for accurate determination of the phase diagram, which often requires a determination of the possible symmetry breaking of the Hubbard Hamiltonian after taking account of spatial and temporal fluctuations on an equal footing. In calculating the self-energy, one may substitute by the self-energy obtained within the correlator projection method onoda together with the PIRG. These form a future challenge in the field of strongly correlated materials.\n\n## Iv Summary and conclusion\n\nIn conclusion, we have investigated the construction of effective models for the correlated orbitals in materials with a natural separation of bands. We have shown that, if one retains the full frequency dependence of the local components of the screened interaction, an accurate effective model can be obtained over an extended energy range. Simply neglecting this energy dependence and using the non-interacting hamiltonian into a static Hubbard model does not provide an accurate description even at low energy. However, a proper modification of the bare propagator, obtained by integrating out high energies, allows for the construction of an effective Hubbard model which describes the low-energy physics in a satisfactory manner.\n\n## V Acknowledgment\n\nWe thank Y. Asai for useful comments. We would like to acknowledge hospitality of the Kavli Institute for Theoretical Physics (KITP at UC-Santa Barbara) where this work was initiated and partially supported by grant NSF-PHY99-07949. G.K aknowledges NSF grant DMR-0096462 and F.A acknowledges the support from NAREGI Nanoscience Project, Ministry of Education, Culture, Sports, Science and Technology, Japan. We also acknowledge the support of an international collaborative grant from CNRS (PICS 1062), of an RTN network of the European Union and of IDRIS Orsay (project number 031393).\n\n## References\n\n• (1) O. Gunnarsson, Phys. Rev. B 41, 514 (1990)\n• (2) P. Sun and G. Kotliar, Phys. Rev. B 66, 085120 (2002)\n• (3) S. Florens, PhD thesis (Paris, 2003); S.Florens, L. de Medici and A. Georges (unpublished).\n• (4) J. E. Hirsch, Phys. Rev. Lett. 87, 206402 (2001); J. E. Hirsch, Phys. Rev. B 67, 035103 (2003)\n• (5) O. Gunnarsson, O. K. Andersen, O. Jepsen, and J. Zaanen, Phys. Rev. B 39, 1708 (1989); V. I. Anisimov and O. Gunnarsson, Phys. Rev. B 43, 7570 (1991)\n• (6) M. Springer and F. Aryasetiawan, Phys. Rev. B 57, 4364 (1998)\n• (7) T. Kotani, J. Phys: Condens. Matter 12, 2413 (2000)\n• (8) M. Norman and A. Freeman Phys. Rev. B 33, 8896 (1986); M. Brooks J. Phys Cond. Matt 13, 2001, L469.\n• (9) J.W.Negele and H.Orland, Quantum Many-Particle Systems (Addison-Wesly, Redwood City, CA), 1988.\n• (10) O. K. Andersen, Phys. Rev. B 12, 3060 (1975); O. K. Andersen, T. Saha-Dasgupta, S. Erzhov, Bull. Mater. Sci. 26, 19 (2003)\n• (11) L. Hedin, Phys. Rev. 139, A796 (1965); L. Hedin and S. Lundqvist, Solid State Physics vol. 23, eds. H. Ehrenreich, F. Seitz, and D. Turnbull (Academic, New York, 1969)\n• (12) F. Aryasetiawan and O. Gunnarsson, Rep. Prog. Phys. 61, 237 (1998)\n• (13) S. Biermann, F. Aryasetiawan, and A. Georges, Phys. Rev. Lett. 90, 086402 (2003)\n• (14) A. Georges, G. Kotliar, W. Krauth, and M. J. Rosenberg, Rev. Mod. Phys. 68, 13 (1996)\n• (15) V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, J. Phys.: Condens. Matter 9, 767 (1997)\n• (16) For reviews, see Strong Coulomb correlations in electronic structure calculations, edited by V. I. Anisimov, Advances in Condensed Material Science (Gordon and Breach, New York, 2001)\n• (17) M. Imada and T. Kashima, J.Phys. Soc. Jpn. 69, 2723 (2000); T. Kashima and M. Imada, J.Phys. Soc. Jpn. 70, 2287 (2001)\n• (18) S. Onoda and M. Imada, Phys. Rev. B 67, 161102 (2003); S. Onoda and M. Imada, J. Phys. Soc. Jpn. 70, 3398 (2001)"
] | [
null,
"https://media.arxiv-vanity.com/render-output/3823338/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/3823338/x2.png",
null,
"https://media.arxiv-vanity.com/render-output/3823338/x3.png",
null,
"https://media.arxiv-vanity.com/render-output/3823338/x4.png",
null,
"https://media.arxiv-vanity.com/render-output/3823338/x5.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9190003,"math_prob":0.9430036,"size":26851,"snap":"2020-45-2020-50","text_gpt3_token_len":5873,"char_repetition_ratio":0.17480537,"word_repetition_ratio":0.016593685,"special_character_ratio":0.21585788,"punctuation_ratio":0.11642743,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9695698,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T23:08:25Z\",\"WARC-Record-ID\":\"<urn:uuid:fd04b839-9f8e-4627-be54-89d0b3c4bbfe>\",\"Content-Length\":\"477969\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d41041e7-1a90-4f29-8e2b-02e708ce450a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7234e896-faaa-4394-a114-9c305b69bce1>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/cond-mat/0401620/\",\"WARC-Payload-Digest\":\"sha1:PWSP7MFNLIOPRQEUBHBVTSJBTU2IXOBU\",\"WARC-Block-Digest\":\"sha1:ITDWO2SAULWZQ3HK5WIUPGMJZWOAIB37\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141194634.29_warc_CC-MAIN-20201127221446-20201128011446-00451.warc.gz\"}"} |
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=CurveFitting%2FGeneral%2FSplineConditions | [
"",
null,
"Spline Continuity and End Conditions - Maple Help\n\nSpline Continuity and End Conditions\n\n This help page describes the interpolating, continuity, and end conditions used in CurveFitting[Spline].\n The form of the resulting piecewise function returned depends on whether the degree d is odd or even, and whether or not the knots='data' option is specified in the even case.",
null,
"Odd Degree\n\n The resulting function created by CurveFitting[Spline] is of the form $\\mathrm{piecewise}\\left(v<{x}_{1},{p}_{1},...,{p}_{n}\\right)$, where the n spline sections $\\left\\{{p}_{1},{p}_{2},...,{p}_{n}\\right\\}$ are polynomials of degree at most d. These polynomials are given by the following $\\left(d+1\\right)n$ conditions:",
null,
"2n Interpolating Conditions\n\n • Force continuity at the knots.\n\n${p}_{i}\\left({x}_{i-1}\\right)={y}_{i-1},\\mathrm{and}{p}_{i}\\left({x}_{i}\\right)={y}_{i},\\mathrm{for}i=1,2,\\mathrm{...},n$",
null,
"(d-1)(n-1) Continuity Conditions\n\n • Force continuity of the derivatives of order $1,2,...,d-1$ at the knots.\n\n${p}_{i}^{\\left(k\\right)}\\left({x}_{i}\\right)={p}_{i+1}^{\\left(k\\right)}\\left({x}_{i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},n-1,\\mathrm{and}k=1,2,\\mathrm{...},d-1$",
null,
"d-1 End Conditions\n\n • Natural splines specified by endpoints='natural'.\n Equate the derivates of order $\\frac{\\left(d+1\\right)}{2},...,d-1$ at the end nodes to zero.\n\n${p}_{i}^{\\left(k\\right)}\\left({x}_{0}\\right)=0,\\mathrm{and}{p}_{n}^{\\left(k\\right)}\\left({x}_{n}\\right)=0,\\mathrm{for}k=\\frac{d}{2}+\\frac{1}{2},\\mathrm{...},d-1$\n\n • Not-a-knot splines specified by endpoints='notaknot'.\n Force the continuity of the dth derivative at the knots ${x}_{i}$, for $i=1,2,...,\\frac{\\left(d-1\\right)}{2}$ and $i=n-\\frac{\\left(d-1\\right)}{2},...,n-1$.\n\n${p}_{i}^{\\left(k\\right)}\\left({x}_{i}\\right)={p}_{i+1}^{\\left(d\\right)}\\left({x}_{i}\\right),\\mathrm{and}{p}_{n-i}^{\\left(d\\right)}\\left({x}_{n-i}\\right)={p}_{n-i+1}^{\\left(d\\right)}\\left({x}_{n-i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},\\frac{d}{2}-\\frac{1}{2}$\n\n • Periodic splines specified by endpoints='periodic'.\n Match the derivatives of order $1,2,...,d-1$ at the end nodes.\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)={p}_{n}^{\\left(k\\right)}\\left({x}_{n}\\right),\\mathrm{for}k=1,2,\\mathrm{...},d-1$\n\n • Clamped splines specified by endpoints=V.\n Equate the derivates of order $1,2,...,\\frac{\\left(d-1\\right)}{2}$ at the end nodes to the specified values given in V, where V is either a list, Vector, or an Array, of dimension $d-1$ containing the specified clamped conditions. Specifically,\n\n$V=\\left[{p}_{i}^{\\left(1\\right)}\\left({x}_{0}\\right),\\mathrm{...},{p}_{1}^{\\left[\\frac{d}{2}-\\frac{1}{2}\\right]}\\left({x}_{0}\\right),{p}_{n}^{\\left(1\\right)}\\left({x}_{0}\\right),\\mathrm{...},{p}_{n}^{\\left[\\frac{d}{2}-\\frac{1}{2}\\right]}\\left({x}_{n}\\right)\\right],\\mathrm{with}$\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)={V}_{k},\\mathrm{and}{p}_{n}^{\\left(k\\right)}\\left({x}_{n}\\right)={V}_{k+\\frac{d}{2}+\\frac{1}{2}},\\mathrm{for}k=1,2,\\mathrm{...},\\frac{d}{2}-\\frac{1}{2}$\n\n • Generalized splines given by endpoints=G.\n Generalized end conditions can be specified involving any arbitrary linear combination of the values of the derivatives (of any order $1,2,...,d$ at the nodes ${x}_{0}$, ${x}_{1}$, ${x}_{n-1}$, and ${x}_{n}$, where $1). Such end conditions can be represented by a linear system of the form $\\mathrm{Ax}=b$, where $x$ is a vector of dimension $4d$, with\n\n$x=\\left[{p}_{1}^{\\left(1\\right)}\\left({x}_{0}\\right),{p}_{2}^{\\left(1\\right)}\\left({x}_{1}\\right),\\mathrm{...},{p}_{1}^{\\left(d-1\\right)}\\left({x}_{0}\\right),{p}_{2}^{\\left(d-1\\right)}\\left({x}_{1}\\right),{p}_{1}^{\\left(d\\right)},{p}_{2}^{\\left(d\\right)},{p}_{n-1}^{\\left(1\\right)}\\left({x}_{n-1}\\right),{p}_{n}^{\\left(1\\right)}\\left({x}_{n}\\right),\\mathrm{...},{p}_{n-1}^{\\left(d-1\\right)}\\left({x}_{n-1}\\right),{p}_{n}^{\\left(d-1\\right)}\\left({x}_{n}\\right),{p}_{n-1}^{\\left(d\\right)},{p}_{n}^{\\left(d\\right)}\\right]$\n\n $A$ is the corresponding coefficient matrix of dimension $d-1$ by $4d$ and $b$, a vector of dimension $d-1$, represents the right-hand side of the linear system.\n Generalized end conditions are specified with the optional parameter endpoints=G, where $G$ is a Matrix or an Array. Here, $G$ represents the augmented linear system $[A|b]$, having dimensions $d-1$ by $4d+1$.",
null,
"Even Degree\n\n Without the knots='data' option, the resulting function created by CurveFitting[Spline] is of the form $\\mathrm{piecewise}\\left(v<{z}_{1},{p}_{1},...,v<{z}_{n},{p}_{n},{p}_{n+1}\\right)$, where ${z}_{i}=\\frac{{x}_{i-1}}{2}+\\frac{{x}_{i}}{2}$, for $i=1,2,\\mathrm{...},n$ (that is, the spline knots are defined at the midpoints of the nodes) and the $n+1$ spline sections $\\left\\{{p}_{1},{p}_{2},...,{p}_{n+1}\\right\\}$ are polynomials of degree at most d. These polynomials are specified by the following $\\left(d+1\\right)\\left(n+1\\right)$ conditions.",
null,
"n+1 Interpolating Conditions at the Nodes\n\n • Force continuity at the nodes.\n\n${p}_{i}\\left({x}_{i-1}\\right)={y}_{i-1},\\mathrm{for}i=1,2,\\mathrm{...},n+1$",
null,
"n Interpolating Conditions at the Knots\n\n • Force continuity at the knots.\n\n${p}_{i}\\left({z}_{i}\\right)={p}_{i+1}\\left({z}_{i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},n$",
null,
"(d-1)n Continuity Conditions\n\n • Force continuity of the derivatives of order $1,2,...,d-1$ at the knots.\n\n${p}_{i}^{\\left(k\\right)}\\left({z}_{i}\\right)={p}_{i+1}^{\\left(k\\right)}\\left({z}_{i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},n\\mathrm{and}k=1,2,\\mathrm{...},d-1$",
null,
"d End Conditions\n\n • Natural splines specified by endpoints='natural'.\n Equate the derivates of order $\\frac{d}{2},...,d-1$ at the end nodes to zero.\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)=0,\\mathrm{and}{p}_{n+1}^{\\left(k\\right)}\\left({x}_{n}\\right)=0,\\mathrm{for}k=\\frac{d}{2},\\mathrm{...},d-1$\n\n • Not-a-knot splines specified by endpoints='notaknot'.\n Force the continuity of the dth derivative at the knots ${z}_{i}$, for $i=1,2,...,\\frac{d}{2}$ and $i=n+1-\\frac{d}{2},...,n-1$.\n\n${p}_{i}^{\\left(d\\right)}\\left({z}_{i}\\right)={p}_{i+1}^{\\left(d\\right)}\\left({z}_{i}\\right),\\mathrm{and}{p}_{n-i+1}^{\\left(d\\right)}\\left({z}_{n-i+1}\\right)={p}_{n+2-i}^{\\left(d\\right)}\\left({z}_{n-i+1}\\right),\\mathrm{for}i=1,2,\\mathrm{...},\\frac{d}{2}$\n\n • Periodic splines specified by endpoints='periodic'.\n Match the derivatives of order $1,2,...,d$ at the end nodes.\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)={p}_{n+1}^{\\left(k\\right)}\\left({x}_{n}\\right),\\mathrm{for}k=1,2,\\mathrm{...},d$\n\n • Clamped splines specified by endpoints=V.\n Equate the derivates of order $1,2,...,\\frac{d}{2}$ at the end nodes to the specified values given in V, where V is either a list, Vector, or an Array, of dimension $d$ containing the specified clamped conditions. Specifically,\n\n$V=\\left[{p}_{1}^{\\left(1\\right)}\\left({x}_{0}\\right),\\mathrm{...},{p}_{1}^{\\left(\\frac{d}{2}\\right)}\\left({x}_{0}\\right),{p}_{n+1}^{\\left(1\\right)}\\left({x}_{n}\\right),\\mathrm{...},{p}_{n+1}^{\\left(\\frac{d}{2}\\right)}\\left({x}_{n}\\right)\\right],\\mathrm{with}$\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)={V}_{k},\\mathrm{and}{p}_{n+1}^{\\left(k\\right)}\\left({x}_{n}\\right)={V}_{k+\\frac{d}{2}},\\mathrm{for}k=1,2,\\mathrm{...},\\frac{d}{2}$\n\n • Generalized splines specified by endpoints=G.\n Generalized end conditions can be specified involving any arbitrary linear combination of the values of the derivatives (of any order $1,2,...,d$ at the nodes ${x}_{0}$ and ${x}_{n}$). Such end conditions can be represented by a linear system of the form $\\mathrm{Ax}=b$, where $x$ is a vector of dimension $2d$, with\n\n$x=\\left[{p}_{1}^{\\left(1\\right)}\\left({x}_{0}\\right),\\mathrm{...},{p}_{1}^{\\left(d-1\\right)}\\left({x}_{0}\\right),{p}_{1}^{\\left(d\\right)},{p}_{n+1}^{\\left(1\\right)}\\left({x}_{n}\\right),\\mathrm{...},{p}_{n+1}^{\\left(d-1\\right)}\\left({x}_{n}\\right),{p}_{n+1}^{\\left(d\\right)}\\right]$\n\n $A$ is the corresponding coefficient matrix of dimension $d$ by $2d$ and $b$, a vector of dimension $d$, represents the right-hand side of the linear system.\n Generalized end conditions are specified with the optional parameter endpoints=G, where $G$ is a Matrix or an Array. Here, $G$ represents the augmented linear system $[A|b]$, having dimensions $d$ by $2d+1$.",
null,
"Even degree with knots='data'\n\n With the knots='data' option included, CurveFitting[Spline] will avoid creating knots at the midpoints of the nodes, and instead use the nodes for the knots in the even case. The resulting function is of the form $\\mathrm{piecewise}\\left(v<{x}_{1},{p}_{1},...,{p}_{n}\\right)$, where the n spline sections $\\left\\{{p}_{1},{p}_{2},...,{p}_{n}\\right\\}$ are polynomials of degree at most d. These polynomials are given by the following $\\left(d+1\\right)n$ conditions:",
null,
"2n Interpolating Conditions\n\n • Force continuity at the knots.\n\n${p}_{i}\\left({x}_{i-1}\\right)={y}_{i-1},\\mathrm{and}{p}_{i}\\left({x}_{i}\\right)={y}_{i},\\mathrm{for}i=1,2,\\mathrm{...},n$",
null,
"(d-1)(n-1) Continuity Conditions\n\n • Force continuity of the derivatives of order $1,2,...,d-1$ at the knots.\n\n${p}_{i}^{\\left(k\\right)}\\left({x}_{i}\\right)={p}_{i+1}^{\\left(k\\right)}\\left({x}_{i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},n-1,\\mathrm{and}k=1,2,\\mathrm{...},d-1$",
null,
"d-1 End Conditions\n\n • Natural splines specified by endpoints='natural'.\n Equate the derivates of order $\\frac{d}{2},...,d-1$ at the left end node and the derivatives of order $\\frac{d}{2},...,d-2$ at the right end node to zero.\n\n${p}_{1}^{\\left(k\\right)}\\left({x}_{0}\\right)=0,\\mathrm{for}k=\\frac{d}{2},\\mathrm{...},d-1,\\mathrm{and}$\n\n${p}_{n+1}^{\\left(k\\right)}\\left({x}_{n}\\right)=0,\\mathrm{for}k=\\frac{d}{2},\\mathrm{...},d-2$\n\n • Not-a-knot splines specified by endpoints='notaknot'.\n Force the continuity of the dth derivative at the knots ${z}_{i}$, for $i=1,2,...,\\frac{d}{2}$ and $i=n+1-\\frac{d}{2},...,n-1$.\n\n${p}_{i}^{\\left(d\\right)}\\left({z}_{i}\\right)={p}_{i+1}^{\\left(d\\right)}\\left({z}_{i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},\\frac{d}{2},\\mathrm{and}$\n\n${p}_{n-i}^{\\left(d\\right)}\\left({z}_{n-i}\\right)={p}_{n-i+1}^{\\left(d\\right)}\\left({z}_{n-i}\\right),\\mathrm{for}i=1,2,\\mathrm{...},\\frac{d}{2}-1$",
null,
"Examples\n\n > $\\mathrm{with}\\left(\\mathrm{CurveFitting}\\right):$\n > $\\mathrm{data}≔\\left[\\left[0,0\\right],\\left[1,5\\right],\\left[2,-1\\right],\\left[3,0\\right]\\right]$\n ${\\mathrm{data}}{≔}\\left[\\left[{0}{,}{0}\\right]{,}\\left[{1}{,}{5}\\right]{,}\\left[{2}{,}{-1}\\right]{,}\\left[{3}{,}{0}\\right]\\right]$ (1)\n\nA quintic spline using the 'natural' end condition.\n\n > $\\mathrm{Spline}\\left(\\mathrm{data},v,\\mathrm{degree}=5,\\mathrm{endpoints}='\\mathrm{natural}'\\right)$\n $\\left\\{\\begin{array}{cc}\\frac{{3}}{{11}}{}{{v}}^{{5}}{-}\\frac{{101}}{{11}}{}{{v}}^{{2}}{+}\\frac{{153}}{{11}}{}{v}& {v}{<}{1}\\\\ {-}\\frac{{6}}{{11}}{}{{v}}^{{5}}{+}\\frac{{45}}{{11}}{}{{v}}^{{4}}{-}\\frac{{90}}{{11}}{}{{v}}^{{3}}{-}{{v}}^{{2}}{+}\\frac{{108}}{{11}}{}{v}{+}\\frac{{9}}{{11}}& {v}{<}{2}\\\\ \\frac{{3}}{{11}}{}{{v}}^{{5}}{-}\\frac{{45}}{{11}}{}{{v}}^{{4}}{+}\\frac{{270}}{{11}}{}{{v}}^{{3}}{-}\\frac{{731}}{{11}}{}{{v}}^{{2}}{+}\\frac{{828}}{{11}}{}{v}{-}\\frac{{279}}{{11}}& {\\mathrm{otherwise}}\\end{array}\\right\\$ (2)\n\nA cubic spline using the 'periodic' end condition.\n\n > $\\mathrm{Spline}\\left(\\mathrm{data},v,\\mathrm{degree}=3,\\mathrm{endpoints}='\\mathrm{periodic}'\\right)$\n $\\left\\{\\begin{array}{cc}{-}{5}{}{{v}}^{{3}}{+}{4}{}{{v}}^{{2}}{+}{6}{}{v}& {v}{<}{1}\\\\ {6}{}{{v}}^{{3}}{-}{29}{}{{v}}^{{2}}{+}{39}{}{v}{-}{11}& {v}{<}{2}\\\\ {-}{{v}}^{{3}}{+}{13}{}{{v}}^{{2}}{-}{45}{}{v}{+}{45}& {\\mathrm{otherwise}}\\end{array}\\right\\$ (3)\n\nA quadratic spline using the 'notaknot' end condition.\n\n > $\\mathrm{Spline}\\left(\\mathrm{data},v,\\mathrm{degree}=2,\\mathrm{endpoints}='\\mathrm{notaknot}'\\right)$\n $\\left\\{\\begin{array}{cc}{-}{7}{}{{v}}^{{2}}{+}{12}{}{v}& {v}{<}\\frac{{3}}{{2}}\\\\ {5}{}{{v}}^{{2}}{-}{24}{}{v}{+}{27}& {\\mathrm{otherwise}}\\end{array}\\right\\$ (4)\n\nA clamped cubic spline with slope A and B at the two end nodes.\n\n > $\\mathrm{Spline}\\left(\\mathrm{data},v,\\mathrm{endpoints}=\\left[A,B\\right]\\right)$\n $\\left\\{\\begin{array}{cc}\\left(\\frac{{11}{}{A}}{{15}}{-}\\frac{{49}}{{5}}{+}\\frac{{B}}{{15}}\\right){}{{v}}^{{3}}{+}\\left(\\frac{{74}}{{5}}{-}\\frac{{26}{}{A}}{{15}}{-}\\frac{{B}}{{15}}\\right){}{{v}}^{{2}}{+}{A}{}{v}& {v}{<}{1}\\\\ \\left({-}\\frac{{A}}{{5}}{+}\\frac{{42}}{{5}}{-}\\frac{{B}}{{5}}\\right){}{{v}}^{{3}}{+}\\left({-}\\frac{{199}}{{5}}{+}\\frac{{16}{}{A}}{{15}}{+}\\frac{{11}{}{B}}{{15}}\\right){}{{v}}^{{2}}{+}\\left({-}\\frac{{9}{}{A}}{{5}}{+}\\frac{{273}}{{5}}{-}\\frac{{4}{}{B}}{{5}}\\right){}{v}{+}\\frac{{14}{}{A}}{{15}}{-}\\frac{{91}}{{5}}{+}\\frac{{4}{}{B}}{{15}}& {v}{<}{2}\\\\ \\left({-}\\frac{{29}}{{5}}{+}\\frac{{A}}{{15}}{+}\\frac{{11}{}{B}}{{15}}\\right){}{{v}}^{{3}}{+}\\left(\\frac{{227}}{{5}}{-}\\frac{{8}{}{A}}{{15}}{-}\\frac{{73}{}{B}}{{15}}\\right){}{{v}}^{{2}}{+}\\left({-}\\frac{{579}}{{5}}{+}\\frac{{7}{}{A}}{{5}}{+}\\frac{{52}{}{B}}{{5}}\\right){}{v}{+}\\frac{{477}}{{5}}{-}\\frac{{6}{}{A}}{{5}}{-}\\frac{{36}{}{B}}{{5}}& {\\mathrm{otherwise}}\\end{array}\\right\\$ (5)\n\nA cubic spline using the generalized end conditions with second derivative equal to 5 at the end nodes.\n\n > $G≔\\mathrm{Matrix}\\left(2,13,\\left\\{\\left(1,3\\right)=1,\\left(1,13\\right)=5,\\left(2,10\\right)=1,\\left(2,13\\right)=5\\right\\}\\right):$\n > $\\mathrm{Spline}\\left(\\mathrm{data},v,\\mathrm{endpoints}=G\\right)$\n $\\left\\{\\begin{array}{cc}{-}\\frac{{22}}{{5}}{}{{v}}^{{3}}{+}\\frac{{5}}{{2}}{}{{v}}^{{2}}{+}\\frac{{69}}{{10}}{}{v}& {v}{<}{1}\\\\ {6}{}{{v}}^{{3}}{-}\\frac{{287}}{{10}}{}{{v}}^{{2}}{+}\\frac{{381}}{{10}}{}{v}{-}\\frac{{52}}{{5}}& {v}{<}{2}\\\\ {-}\\frac{{8}}{{5}}{}{{v}}^{{3}}{+}\\frac{{169}}{{10}}{}{{v}}^{{2}}{-}\\frac{{531}}{{10}}{}{v}{+}\\frac{{252}}{{5}}& {\\mathrm{otherwise}}\\end{array}\\right\\$ (6)"
] | [
null,
"https://bat.bing.com/action/0",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null,
"https://fr.maplesoft.com/support/help/Maple/arrow_down.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6961296,"math_prob":1.000009,"size":6546,"snap":"2021-43-2021-49","text_gpt3_token_len":2573,"char_repetition_ratio":0.15408131,"word_repetition_ratio":0.38471502,"special_character_ratio":0.31515428,"punctuation_ratio":0.24193548,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997735,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T10:41:20Z\",\"WARC-Record-ID\":\"<urn:uuid:eac3f9fb-b391-4d6d-ac52-046f7503434d>\",\"Content-Length\":\"727788\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea59764a-dd42-4bac-8c21-3d0eccbbe86b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d12972a3-96fa-47b7-93bc-2b909b5b58b7>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/Maple/view.aspx?path=CurveFitting%2FGeneral%2FSplineConditions\",\"WARC-Payload-Digest\":\"sha1:XZUWIFLFKL57RVVOGRJBGOG2Y5JSJHSP\",\"WARC-Block-Digest\":\"sha1:C4X5PVUYQAPQN32NOQYNLD7VTMCEF6AP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363292.82_warc_CC-MAIN-20211206103243-20211206133243-00096.warc.gz\"}"} |
https://kb.mtc-usa.com/article/aa-03974/46/ | [
"The accuracy of an HPLC method is the closeness of the measured value to the true value for the sample.\n\nTo determine the accuracy of a proposed method, different levels of the analyte concentrations: lower concentration (LC, 80%), intermediate concentration (IC, 100%) and higher concentration (HC, 120%) must be prepared from independent stock solutions and analyzed (n=10).\n\nAccuracy is assessed as the percentage relative error and mean %recovery. To provide an additional support to the accuracy of the developed assay method, standard addition method should be employed, which involves the addition of different concentrations of the analyte (for example: 10, 20 and 30 μg/mL) to a known pre-analyzed sample and the total concentration should be determined using the proposed methods (n=10).\n\nThe %recovery of the added analyte should be calculated as:\n\n%recovery = [(Ct-Cs)/Ca]×100\n\nwhere Ct is the total analyte concentration measured after standard addition; Cs, analyte concentration in the sample; Ca, analyte concentration added to sample.",
null,
""
] | [
null,
"https://kb.mtc-usa.com/wp-content/uploads/2020/10/Cogent-logo_HPLC-Columns_for-KB-300x128.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92131037,"math_prob":0.91065854,"size":1046,"snap":"2023-40-2023-50","text_gpt3_token_len":224,"char_repetition_ratio":0.18618043,"word_repetition_ratio":0.0,"special_character_ratio":0.2170172,"punctuation_ratio":0.10928962,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521362,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T08:18:37Z\",\"WARC-Record-ID\":\"<urn:uuid:659e337f-4b63-4ecf-9d8e-79b0c1de506c>\",\"Content-Length\":\"408266\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67d19e74-a3ca-4282-8145-98e448e0fe81>\",\"WARC-Concurrent-To\":\"<urn:uuid:a292fbe5-e1f1-45f5-a0c9-2d38a2ab6ed3>\",\"WARC-IP-Address\":\"70.32.105.45\",\"WARC-Target-URI\":\"https://kb.mtc-usa.com/article/aa-03974/46/\",\"WARC-Payload-Digest\":\"sha1:RX6BPMAXFQHCOYPNIEEOYYTJKC4XRFW3\",\"WARC-Block-Digest\":\"sha1:4O7GPMFBNE2YONJ47PQVRAO5LGVJIH34\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103810.88_warc_CC-MAIN-20231211080606-20231211110606-00402.warc.gz\"}"} |
https://www.kemi.uu.se/research/organic-chemistry/research-groups/the-quantum-chemistry-group/two-electron-integrals/ | [
"Description of two electron integrals with tensor",
null,
"Two electron integrals is a mathematical entity that describes how two electrons interact with each other. This building block is an important part of the mathematical expressions and computer programs used to express and calculate the wave functions. Using alternative tensor representations, compared with the conventional representation of two electron integrals, one can rewrite expressions and writing computer programs so that they become much more efficient and can handle larger molecular systems. We use Cholesky decomposition to achieve this."
] | [
null,
"https://www.kemi.uu.se/digitalAssets/436/c_436308-l_3-k_lindh-cholesky.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90433514,"math_prob":0.9426833,"size":602,"snap":"2019-43-2019-47","text_gpt3_token_len":98,"char_repetition_ratio":0.12040134,"word_repetition_ratio":0.0,"special_character_ratio":0.15116279,"punctuation_ratio":0.06521739,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9577574,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T02:33:20Z\",\"WARC-Record-ID\":\"<urn:uuid:b89c1990-5cdb-401d-adc5-94c7bb169ae6>\",\"Content-Length\":\"25647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3154c0ec-bcf7-4394-a163-b270d8135540>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f856bc3-dd5e-4671-bd6e-471991f0ff8d>\",\"WARC-IP-Address\":\"130.238.7.134\",\"WARC-Target-URI\":\"https://www.kemi.uu.se/research/organic-chemistry/research-groups/the-quantum-chemistry-group/two-electron-integrals/\",\"WARC-Payload-Digest\":\"sha1:6VA6IRXXBKF6ZEV7V3APSIXA6CRQRGMA\",\"WARC-Block-Digest\":\"sha1:565Z2JR5ST5MHJEHIYQKKMNPZBRXVN43\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986688674.52_warc_CC-MAIN-20191019013909-20191019041409-00113.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/gr-qc/0207103/ | [
"# The Manifold Dimension of a Causal Set: tests in conformally flat spacetimes\n\nDavid D. Reid Email address: Department of Physics and Astronomy, Eastern Michigan University, Ypsilanti,\nMI 48197\n###### Abstract\n\nThis paper describes an approach that uses flat-spacetime estimators to estimate the manifold dimension of causal sets that can be faithfully embedded into curved spacetimes. The approach is invariant under coarse-graining and can be implemented independently of any specific curved spacetime. Results are given based on causal sets generated by random sprinklings into conformally flat spacetimes in 2, 3, and 4 dimensions, as well as one generated by a percolation dynamics.\nPACS: 02.90+p, 04.60-m\n\n## I Introduction\n\nSince the time of Einstein, the prospect that spacetime might be discrete on microscopic scales has been considered as one possible avenue to help solve the problem of quantum gravity. The causal set program proposes one approach to discrete quantum gravity [1-2]. A causal set is a set of elements and an order relation , such that the set obeys properties which make it a good discrete counterpart for continuum spacetime. These properties are that (a) the set is transitive: ; (b) it is noncircular, and ; (c) it is locally finite such that the number of elements between any two ordered elements is finite, i.e., ; and (d) it is reflexive, . The action of the order relation is to mimic the causal ordering of events in macroscopic spacetime. Since all events in spacetime are not causally related, then not all pairs of elements in the set are ordered by the order relation. Hence a causal set is a partially ordered set.\n\nIf the microscopic structure of spacetime is that of a causal set, then in appropriate macroscopic limits, causal sets must be consistent with the properties of general relativity which describes spacetime as a Lorentzian manifold. Therefore, it must be established that causal sets can possess manifold-like properties. A necessary (but not sufficient) requirement for a causal set to be like a manifold is that it can be embedded into a manifold uniformly with respect to the metric. Finding ways to embed a causal set has proven to be very difficult thus far. However, the properties of a causal set can be compared to the properties that a uniformly embedded causal set are expected to have. The kinds of tests that can check for manifold-like behavior generally require knowledge of the dimension of the manifold into which the causal set might embed. In fact, consistency between different ways to estimate the dimension of the manifold is itself a stringent test of manifold-like behavior. It is worth noting that within the mathematics of partial orders there are several types of dimensions. However, the dimensions traditionally studied by mathematicians do not correspond to what is meant here. Therefore, everywhere in this paper the phrase “dimension of a causal set” refers to the manifold dimension, i.e., the dimension of the Lorentzian manifold into which the causal set might be uniformly embedded.\n\nThe most useful methods for estimating the dimension of a causal set are the Myrheim-Meyer dimension [3-4] and the midpoint scaling dimension . By design, both of these methods work best in Minkowski space. The approach used to derive the Myrheim-Meyer dimension has been extended to curved spacetimes , but implementation of this more general Hausdorff dimension is specific to the particular spacetime against which the causal set is being checked. Since there are infinitely many curved spacetimes, this method is less useful in more generic cases. Therefore, there is a continuing need to find ways to estimate the dimension of a causal set for curved spacetimes that (a) are independent of the specific properties of the curved spacetime, (b) do not require very large causal sets to achieve useful results, and (c) are invariant under coarse-graining of the causal set. This last requirement is desired because, on the microscopic scale, the causal sets that might describe quantum gravity will not display manifold-like properties in the sense described above. Only in the macroscopic limit, after an appropriate change-of-scale, do we expect to see such properties; this change-of-scale is called coarse-graining.\n\nIn what follows, I first present the background theory and terminology needed to understand the dimension estimation methods described in this paper; then, the different approaches to dimension estimation are described. The extent to which the methods work are illustrated using causal sets generated by uniform sprinklings into flat and conformally flat spacetimes. I then illustrate the methods using a causal set generated by a percolation dynamics.\n\n## Ii Theory\n\nAs alluded to previously, in the causal set program we are interested in those causal sets that can be uniformly embedded into a manifold. An embedding of a causal set is a mapping of the set onto points in a Lorentzian manifold such that the lightcone structure of the manifold preserves the ordering of the set. With high probability, an embedding will be uniform if the mapping corresponds to selecting points in the manifold via a Poisson process (as described below). Two important results for understanding the dimension estimators to be discussed are (a) the correspondence between the volume of a region in a manifold and the number of causal set elements and, (b) the correspondence between geodesic length and the number of links in a chain of causal set elements. These topics are discussed in the next two subsections.\n\n### ii.1 Random Sprinklings\n\nOne way to generate a causal set that can be uniformly embedded into a manifold is to perform a random sprinkling of points in a manifold. If the set consists of points randomly distributed (sprinkled) in a manifold of finite volume , we can define a discrete random variable on a region of such that if and otherwise. In terms of the random variable we can define another discrete random variable that counts the number of up to a possible number equal to the size of , where . Random variables such as are described by the binomial distribution which, for our case, can be written as \n\n Fk(Nn)=(nk)(VAVM)k(1−VAVM)n−k, (1)\n\nwhere is the probability of outcome .\n\nIf we define the density of the sprinkled points as , the expectation value of in region is given by . To generalize this description to manifolds of infinite volume, we take the limit of eq. (1) as while holding the density of the sprinkling uniform, . This procedure is a standard approach for deriving the Poisson distribution \n\n Pk(Nn)=limVM→∞ρ=constFk(N)=ρVAk!e−ρVA, (2)\n\nwhere the equivalence has been used. From this distribution, we find that the average value of the number of points sprinkled into regions of volume is given by\n\n ⟨Nn⟩A=ρVA. (3)\n\nWhile it is customary to scale the sprinkling to unit density, , this scaling is not done in the present cases. Thus, we see that a random sprinkling of points in a manifold at uniform density is described by a Poisson distribution. Therefore, the interesting causal sets are from among those that will admit an embedding consistent with a Poisson sprinkling into a manifold (perhaps only after coarse-graining). Such an embedding is referred to as a faithful embedding.\n\n### ii.2 Geodesic Length\n\nRecall that the length of the geodesic between two causally related events corresponds to the longest proper time between those events. To see what the most natural analog to geodesic length is for causal sets we must first define a few terms. A link, , in a causal set is an irreducible relation; so, iff . A chain in a causal set is a set of elements for which each pair is related; for example, is a chain from to . A maximal chain is a chain consisting only of links, such as .\n\nAs explained by Myrheim , the length of the longest maximal chain between two related elements in a causal set is the most natural analog for the geodesic length between two causally connected events in spacetime. (Myrheim did not use the term “causal set” which was coined by Rafael Sorkin and used in ). The length of a maximal chain is defined to be the number of links in that chain. Brightwell and coworkers have proven that this correspondence between the geodesic length in a Lorentzian manifold and the number of links in the longest maximal chain is, in fact, valid in Minkowski space . Therefore, Myrheim’s expectation that this correspondence should be valid in the general case seems well founded. In this work, I shall assume the validity of what I will refer to as the Myrheim length conjecture:\n\nLet be a causal set that can be faithfully embedded, with density , into a Lorentzian manifold by a map . Then, in the limit , the expected length of the longest maximal chain between any ordered pair is directly proportional to the geodesic length between their images .\n\n## Iii Dimension Estimators\n\nA dimension estimator for a causal set is a method that only uses properties of the set to determine the dimension of the manifold into which the causal set might be faithfully embeddable. Ideally, we hope to have a scheme for estimating the dimension of a causal set that (a) works well for curved spacetime manifolds, (b) is invariant under coarse-grainings of the causal set, and (c) does not require very large causal sets in order to see useful results. As alluded to previously, one difficulty in finding a useful dimension estimator for curved spacetimes is that implementation of the estimators tend to depend on the properties of the particular spacetime against which the causal set is being compared. This circumstance is problematic for causal sets generated by a process that does not directly suggest a class of candidate spacetimes.\n\nHowever, one property that all physical spacetimes share is that locally, they are approximately Minkowskian. From the standpoint of causal sets, this implies that if a causal set , of size , is faithfully embeddable into a -dimensional curved manifold , then there ought to be subsets , of size that are faithfully embeddable (approximately) into -dimensional Minkowski space . Studying how these subsets behave under dimension estimators that work reliably for should allow us to identify which, if any, -dimensional Minkowski space is most closely approximated by these subsets. I will refer to dimensions found in the above manner as the local Minkowski dimension of the causal set. An approach similar to this was independently suggested by Sorkin .\n\nThe dimension estimators that will be used to determine the local Minkowski dimension in curved spacetimes are the Myrheim-Meyer dimension and the midpoint-scaling dimension mentioned in the introduction and described below. Both of these dimension estimators are defined in terms of causal set intervals. A causal set interval between two related elements is the inclusive subset . Taking as a causal order, is the intersection of the future of with the past of .\n\nThe Myrheim-Meyer dimension is based on the fact that for a causal set faithfully embeddable into an interval of , the expected number of chains that consists of elements, -chains (), is given by \n\n =(ρVI)kΓ(δ)Γ(2δ)Γ(2δ+1)k−12k−1kΓ(kδ)Γ((k+1)δ), (4)\n\nwhere . The easiest chains to count are 2-chains which count the relations between elements. Specializing to 2-chains, eq. (4) becomes\n\n f(d)≡2=Γ(d+1)Γ(d/2)4Γ(3d/2), (5)\n\nwhere I have used eq. (3) to relate number and volume. Therefore, for a given causal set, we can divide the number of relations by the square of the number of elements to approximate the value of for the interval. This function is monotonically decreasing with and can be numerically inverted to give a value for the dimension.\n\nThe midpoint-scaling dimension relies on the correspondence between number and volume, and on the relationship between the volume of an interval in and the length of the geodesic between its defining events \n\n VI=π(d−1)/22d−2d(d−1)Γ[(d−1)/2]τd. (6)\n\nAn interval of size can be divided into two sub-intervals and of sizes and , respectively. Let be the smaller of and , then the element is the midpoint of when is as large as possible. This process corresponds to a rescaling of lengths by a factor of ; therefore, in the manifold , which implies that . For the causal set interval, assuming the Myrheim length conjecture, this translates to so that\n\n d≈log2(N/Nsmall) (7)\n\nestimates the dimension.\n\n## Iv Results\n\nThe dimension estimators were applied to causal set intervals generated by random sprinklings into flat and conformally flat spacetimes given by the metric\n\n ds2=Ω2ηαβdxαdxβ, (8)\n\nwhere is the conformal factor (a smooth, strictly positive function of the spacetime coordinates) and is the Minkowski tensor. The sprinklings were performed by two different methods. The more efficient approach for sprinkling points into an interval of volume was to divide the interval into several little regions of volume . The number of points sprinkled into a region was determined by the ratio . The coordinates for the points were then determined randomly within the region of volume . The less efficient approach, which was much easier to implement, used a (double) rejection method similiar to the method described in . In this second approach, the interval was enclosed in a box and spacetime coordinates were randomly selected within this box; if the selected point was outside the interval it was rejected, otherwise, it was kept – this was the first rejection. In Minkowski space, this first rejection provides a uniform distribution of points.\n\nIn curved spacetimes, points that fell within the interval faced a second rejection designed to ensure that the points were distributed uniformly with respect to the volume form . Each point in the interval was associated with a random number selected within the range , where is the maximum value of the volume form within the interval . If was greater than the value of the volume form evaluated at the point in question, the point was rejected; otherwise, it was kept. This process continued until points were sprinkled into the interval. The sprinklings in 1+1 dimensions used the first method; all others used the rejection method. In a few cases the two methods were compared, and produced completely consistent results. That these methods produced causal sets that correspond to Poisson sprinklings were verified, in 1+1 dimensions, by chi-squared tests. In all sprinklings, random numbers were generated using the subroutine “ran2” from .\n\nSince the main result of this work comes from comparing the behavior of small sub-intervals between flat and curved spacetimes, we must determine the pertinent range of sub-interval sizes. This range can be determined from sprinklings into Minkowski space. Figure 1 shows the results for random sprinklings of points into intervals of 2-, 3-, and 4-dimensional Minkowski space. Both the Myrheim-Meyer () and midpoint-scaling () dimensions were calculated for every closed sub-interval of size . The average value of was calculated for sub-intervals of a given size. To decrease the statistical fluctuations, each curve in the figure represents an average of 15 different sprinklings.\n\nWhile there are a number of interesting features in this figure, two things are most relevant to this study. First, we can see that for the midpoint-scaling dimension the three different Minkowski spaces are effectively indistinguishable for sub-intervals smaller than . Therefore, since all three Minkowski results agree within this size region, any curved spacetime that behaves like one of these three should also be in agreement in this region. This fact sets the lower limit for the pertinent range of comparison with curved spacetimes at . Second, the general trends displayed by these curves are typical for all of the results. The curves for both and rise steeply producing a “shoulder” beyond which the curves level off. The locations of the shoulder are clearly different for the three different spacetimes; therefore, the degree to which the analogous results for the curved spacetimes match these flat-spacetime results around this shoulder will be used to determine the local Minkowski dimension. The broadest shoulder occurs for for which a value of is sufficient to incorporate. Therefore, a good size range for seeking local Minkowski behavior for the curved spacetimes studied here is . I will call this size range the local Minkowski region.\n\nThe quantitative measure of how well the results for a curved spacetime matches those of a particular flat spacetime is a relative goodness-of-fit test using a chi-squared statistic that compares values of for sub-intervals of the same size within the local Minkowski region. This relative measure requires knowledge of how well the different flat-spacetime results fit each other according to this method. The statistic is calculated as\n\n (9)\n\nwhere the subscript () means that -dimensional Minkowski space is being compared against -dimensional Minkowski space. The quantity is the number of bins into which the data was divided (either 22 or 30); this number depends on the bin size (either 4 or 3) which was chosen such that each “expected” value was greater than 5. The are the “observed” values. The results of these calculations for the Myrheim-Meyer dimension are the following: , , , , , and . For the midpoint dimension we also have the following results: , , , , , and . For both dimension estimators, the best (smallest) result comes from the comparison of three-dimensional Minkowski space against 4-dimensional Minkowski space. Therefore, these values will be used to determine the relative goodness-of-fit of the results for curved spacetimes, 0.365 for the Myrheim-Meyer calculations and 0.459 for the midpoint calculations.\n\n### iv.1 1+1 dimensions\n\nFigure 2 shows a uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 1+1 dimensions with conformal factor . Both the midpoint and Myrheim-Meyer dimension estimators fail for the full interval giving values of and , respectively. The conformal factor for this spacetime causes the points to be more spread out in space which is consistent with the overestimates of the dimension. Figure 3 shows a plot of the average midpoint dimension for sub-intervals of different size averaged over 15 sprinklings of the spacetime shown in Fig. 2. This curve is compared against the results for Minkowski space. Despite the fact that the full interval values of the dimension estimators are closer to 3, the behavior for small sub-intervals clearly follows that of two-dimensional Minkowki space suggesting a local Minkowski dimension of two. What appears to be happening here is that the small sub-intervals are, in fact, behaving like causal sets that are embeddable in two-dimensional Minkowski space; then, as you look at sub-intervals of larger size the effects of curvature become more important and the flat-spacetime dimension estimators become less reliable. (A similar plot using the Mryheim-Meyer dimension shows identical features).\n\nTo quantify this result, a goodness-of-fit test is made, using an equation very similar to Eq. (9), which compares the average dimension of sub-intervals in the curved spacetime with those in each dimension of Minkowski space within the local Minkowski region. The curved spacetime values are taken as the observed and Minkowski space values as the expected. These results are then compared to the best chi-squared results from the mutual comparisons of the different dimensions of Minkowski space. For the curved spacetime shown in Fig. 2, we obtain relative goodness-of-fit values of\n\n ˜χ22D,MM=0.00181, ˜χ22D,mid=0.00119, ˜χ23D,MM=1.88, ˜χ23D,mid=1.84, ˜χ24D,MM=4.95, ˜χ24D,mid=4.84,\n(10)\n\nwhere the notation means that the curved spacetime result was compared against two-dimensional Minkowski space using the average Mryheim-Meyer dimension values relative to the value of for the Mryheim-Meyer dimension; and correspondingly for the values labeled with the subscript “mid.” As defined, this statistic means that values of represents a poor fit signifying that the two data sets being compared could certainly be Minkowski spaces differing in dimension by at least 1. Whereas, values of indicates a good fit with the spacetime in question. Clearly, the results displayed in Eq. (10) show that the small sub-intervals offer an excellent fit to those of two-dimensional Minkowski space. Furthermore, the fits with three- and four-dimensional Minkowski space are no better, or much worse, than what can be expected between Minkowski spaces of different dimensions. The conformally flat spacetime for which the above results are given represents only one of several 1+1 dimensional spacetimes studied. In all cases, the results are similiar to those given here.\n\n### iv.2 2+1 dimensions\n\nFigure 4 shows a uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 2+1 dimensions with conformal factor . This example was chosen because it produced the worse full interval results for all of the 2+1 dimensional spacetimes studied. It is easier to see what this interval is like from the projections. The plane shows that more points are located at larger values of the coordinate and smaller values of ; the plane shows similar behavior. The projection onto the plane shows that the points are more crowded in the middle of the interval. This crowding is due to the preference for small where the spatial extent of the region is centralized.\n\nFigure 5 shows a plot of the average Myrheim-Meyer dimension for sub-intervals of different size averaged over 15 sprinklings of the spacetime shown in Fig. 4. This curve is compared against the results for Minkowski space. For this spacetime, the effects of the curvature become apparent around . Nevertheless, the result for the curved spacetime maintains a good approximation to the flat spacetime result within the designated locally flat region. To verify that the local Minkowski dimension of this spacetime should be taken to be three, the relative goodness-of-fit results are\n\n ˜χ22D,MM=3.03, ˜χ22D,mid=3.15, ˜χ23D,MM=0.00807, ˜χ23D,mid=0.00697, ˜χ24D,MM=1.15, ˜χ24D,mid=1.14.\n(11)\n\nHere we see clearly that in the locally flat region this spacetime provides results that give an excellent fit to the results for three-dimensional Minkowski space. Several other spacetimes in 2+1 dimensions were studied giving similar results.\n\n### iv.3 3+1 dimensions\n\nFigure 6 shows projections of a uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 3+1 dimensions with conformal factor . Figure 7 is the corresponding plot for the average dimension per size of sub-interval. As with the other cases, the figure clearly shows that within the locally flat region the curved spacetime result gives a much better fit to the Minkowski space having the same dimension. The relative goodness-of-fit results for this case are\n\n ˜χ22D,MM=9.62, ˜χ22D,mid=10.4, ˜χ23D,MM=0.825, ˜χ23D,mid=0.919, ˜χ24D,MM=0.0382, ˜χ24D,mid=0.0252.\n(12)\n\nSeveral other spacetimes in 3+1 dimensions were studied giving similar results.\n\n### iv.4 A causal set generated by transitive percolation\n\nSo far, all of the causal sets used were guaranteed to be faithful because they were generated by sprinklings into known manifolds. Having established the approach, it is instructive to apply this method to a causal set generated by some other means. Ultimately, there will be a quantum dynamics for generating causal sets, and it will be these causal sets (or coarse-grained versions of them) whose manifold dimensions we would like to estimate. Although a quantum dynamics for causal sets does not yet exist, there is a classical dynamics, due to Rideout and Sorkin , which is proving to be very useful in helping to determine the extent to which causal sets can encode physical information. So, this classical dynamics provides an excellent avenue to illustrate how the suggestions presented here might be used in a more general case when we cannot be sure that the causal set is faithfully embeddable.\n\nPerhaps the simplest model within the class of models proposed by Rideout and Sorkin is the one that they have called transitive percolation. The procedure followed here for generating a random causal set of elements via percolation is as follows: (a) assign labels to the elements; (b) impose the partial ordering relation onto each pair of elements with probability . That is, if , the probability that element element is ; and (c) enforce the transitivity requirement on the set. It has been shown that despite the labeling, causal sets generated in this way are label invariant in that the probability of getting a particular causal set is independent of how the elements were initially labeled .\n\nFor this example, a causal set with and was generated; these values produce a causal set that has a Myrheim-Meyer dimension of 2.0 when applied to the full causal set . This causal set, however, is not a causal set interval. Therefore, the largest sub-interval of this 512-element causal set was used; this sub-interval contained 298 elements. Figure 8 is a plot of the average Myrheim-Meyer dimensions for the sub-intervals of the 298-element causal set interval taken from the causal set generated by transitive percolation, compared against 298-element causal sets generated by random sprinklings in Minkowski space. What stands out in this figure is that the percolation curve does not closely follow any of the Minkowski curves in the local Minkowski region. Therefore, not even the small sub-intervals of this causal set behave as sub-intervals in Minkowski space do. This suggests that the percolated causal set should not be embeddable into any (flat or curved) spacetime.\n\nLooking at the Fig. 8 shows that the percolation curve is a closer fit to the three-dimensional Minkowski curve than it is to the other curves. The relative goodness-of-fit test yields\n\n ˜χ22D,MM=1.82,˜χ23D,MM=0.308,˜χ24D,MM=1.952. (13)\n\nWhile these results confirm that the percolation curve is a better fit to the three-dimensional Minkowski result, we can also see that the value is nearly two orders-of-magnitude worse than what would be expected based on our study of the random sprinklings. This fact gives some quantitative weight to the conclusion reached by studying Fig. 8.\n\n## V Conclusions\n\nIn this paper, I have suggested a method for estimating the manifold dimension of a causal set that can be faithfully embedded into curved spacetimes and tested this method for several conformally flat spacetimes. The method uses flat-spacetime dimension estimators to search for local Minkowski behavior within the causal set. This approach can be applied to any causal set, and works independent of the specific properties of a particular curved spacetime. Very large causal sets are not required. Furthermore, this approach is invariant under coarse-graining since both the Myrheim-Meyer and midpoint-scaling dimensions are invariant under coarse-graining.\n\nImplementation of this procedure can be summarized as follows: (a) form an interval of size of the causal set, larger intervals give better statistics, but they don’t need to be extremely large; (b) average the Myrheim-Meyer (or midpoint-scaling) dimension for sub-intervals of a given size; (c) perform ramdom sprinklings of size or greater in 2-, 3-, and 4-dimensional Minkowski space and determine average dimension values for their sub-intervals; (d) a comparision of the results for the causal set being checked against the results for the three Minkowski sprinklings for small sub-intervals, , should reveal whether or not the causal set in question displays the local Minkowski behavior that would be required of causal sets that are faithfully embeddable into physically relevant spacetimes.\n\nIt is worth noting that instead of calculating the dimension values for closed sub-intervals, open sub-intervals can also be used. However, the statistical fluctuations are greater for open sub-intervals and this fact becomes somewhat problematic in 2+1 and especially in 3+1 dimensions. What now remains is to apply this method to generically curved spacetimes. The basic principle behind the local Minkowski dimension certainly applies in the generic case, but the extent to which this behavior can be extracted from the causal sets is yet unknown. If this approach proves useful in that case as well, it would be an important step toward the goal of a more comprehensive manifold test for causal sets.\n\n## Vi Acknowledgments\n\nI’d like to acknowledge the help of Mr. Jason Ruiz for writing the computer code to conduct the chi-squared tests used to test the faithfulness of some of the sprinklings. I also wish to thank Dr. Rafael D. Sorkin for providing information on the percolation dynamics and further motivation to work on problems in causal set quantum gravity. This work was supported by the Graduate Research Fund and a Faculty Research Fellowship both from Eastern Michigan University. Additional support from the National Science Foundation is gratefully acknowledged.\n\n## Viii Figure Captions\n\nFIG 1. The average value of the Myrheim-Meyer and midpoint dimensions for sub-intervals of a given size in 1+1, 2+1, and 3+1 dimensional Minkowski space. The sprinklings in 1+1 and 2+1 dimensions are of 512 points while, for better statistics, the 3+1 sprinklings were of 1024 points. Each curve is an average of 15 different sprinklings. To clearly see the behavior of the small sub-intervals results for sub-interval containing only are shown here. For the results for the Myrheim-Meyer and midpoint dimensions, for the 2+1 and 3+1 sprinklings, also merge to the appropriate interger values.\n\nFIG 2. A uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 1+1 dimensions. The conformal factor is shown in the figure.\n\nFIG 3. Comparison of the average value of the midpoint-scaling dimension for sub-intervals of a given size for the set of points shown in Fig. 2 (2D curved) against the similar results for 2-, 3-, and 4-dimensional Minkowski space. The results for small sub-intervals suggests a local Minkowski dimension of two.\n\nFIG 4. A uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 2+1 dimensions with conformal factor . The figure also shows projections of the points onto the , , and planes.\n\nFIG 5. Comparison of the average value of the Mryheim-Meyer dimension for sub-intervals of the set shown in Fig. 4 (3D curved) against similar results for Minkowski space. The results for small sub-intervals suggests a local Minkowski dimension of three.\n\nFIG 6. Projections of a uniform sprinkling of 512 points into an interval of a conformally flat spacetime in 3+1 dimensions with conformal factor . Panel (a) shows the projection onto the plane, panel (b) shows the projection onto the plane, panel (c) shows the projection onto the plane, and panel (d) shows the projection onto the plane.\n\nFIG 7. Comparison of the average value of the midpoint-scaling dimension for sub-intervals of the set represented in Fig. 6 (4D curved) against similar results for Minkowski space. The results for small sub-intervals suggests a local Minkowski dimension of four.\n\nFIG 8. Comparison of the average value of the Mryheim-Meyer dimension for the largest sub-interval of a 512-element causal set generated by transitive percolation. The results for small sub-intervals do not match any of the three Minkowski space results suggesting that this percolated causal set is not faithfully embeddable into any spacetime manifold."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91422343,"math_prob":0.9737562,"size":32195,"snap":"2023-14-2023-23","text_gpt3_token_len":6915,"char_repetition_ratio":0.18579727,"word_repetition_ratio":0.081605226,"special_character_ratio":0.20344774,"punctuation_ratio":0.09920163,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9855205,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T17:29:42Z\",\"WARC-Record-ID\":\"<urn:uuid:958800a2-5ad2-4bf8-9eaa-c2340bde41b4>\",\"Content-Length\":\"379395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ced1a34-8eba-44c9-b335-0c32c221f64e>\",\"WARC-Concurrent-To\":\"<urn:uuid:775c455d-d3cb-490f-b191-39acf4abdd59>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/gr-qc/0207103/\",\"WARC-Payload-Digest\":\"sha1:WHKWG3X4XGLVFQECW7ZHUOAYLEZDN6YE\",\"WARC-Block-Digest\":\"sha1:F55HJJP3R3NTWVSAOV3LEMQDYQOXYIVQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656788.77_warc_CC-MAIN-20230609164851-20230609194851-00054.warc.gz\"}"} |
https://myssec.com/question-2/ | [
"# Question 2\n\nQuestion 2.5\nWe need to nd the update rule for weights w\nj k between the\nk-th input neuron and the j-th hidden\nneuron. Therefore we need to derive the error with respect to those weights:\n@ E @ w\nj k=\n@ E @ y\nj\n@ y\nj @ net\nj\n@ net\nj @ w\nj k (Chain Rule)\nwhere net\nj= I\nX\nk =1 w\nj k y\nk and\ny\nj = p x\n2\n+ 1 1 2\n+\nx\nj @ w\nj k:\n@ net j @ w\nj k =\n@ @ w\nj k(I\nX\nk =1 w\nj k y\nk)\n= @ @ w\nj k(\nw\nj1 y\n1 +\nw\nj2 y\n2 +\n: : : +w\nj k y\nk +\n: : : +w\nj I y\nI) (\nExpandingnet\nj)\n= @ @ w\nj k(\nw\nj1 y\n1) + @ @ w\nj k(\nw\nj2 y\n2) +\n: : :+ @ @ w\nj k(\nw\nj k y\nk) +\n: : :+ @ @ w\nj k(\nw\nj I y\nI)\n(Sum Rule)\n= @ @ w\nj k(\nw\nj k y\nk)\n= y\nk\n!\nNow we derive the second term @ y\nj @ net\nj:\n@ y j @ net\nj=\n@ @ net\nj(\n1 2\n(q net\n2\nj + 1\n1) + net\nj)\n= @ @ net\nj(\n1 2\n(q net\n2\nj + 1\n1)) + @ @ net\nj(\nnet\nj)\n(Sum Rule)\nRH S = 1\nLH S =1 2\n(\n@ @ net\nj(q net\n2\nj + 1)\n@ @ net\nj(1))\n(Sum Rule and take out constant)\n= 1 2\n@ @ net\nj(q net\n2\nj + 1)\n1\n\nLet\nu= net 2\nj + 1\nUsing the chain rule we get: 1 2\n(\n@ y\nj @ net\nj=\n@ @ u\n(p u\n) @ @ net\nj(\nnet 2\nj + 1))\n= 1 2\n(\n@ @ u\n(\nu 1 2\n) @ @ net\nj(\nnet 2\nj ) + @ @ net\nj(1)) (Sum Rule and simplifying)\n= 1 2\n(\n1 2\nu\n\nWe Will Write a Custom Essay Specifically\nFor You For Only \\$13.90/page!\n\norder now\n\n1 2\n2net\nj)\n(Power Rule)\n= 1 2\n(\nnet\nj p\nu\n)\n(2s cancel out and simplify)\n= net\nj 2\np u\nSubstituting net2\nj + 1 back in for\nuwe get:\nLH S = net\nj 2\nq net\n2\nj + 1\n) @ y\nj @ net\nj=\nnet\nj 2\nq net\n2\nj + 1 + 1\n! (Substituting RHS and LHS)\nNow since the error function (E) is only calculated after the output neurons, E can be seen as a\nfunction of all neurons that receive input from neuron j of the hidden layer (i.e. the j-th output\nneuron). Thus @ E @ y\njcan be represented as:\n@ E @ y\nj= O\nX\ni =1 (\n@ E @ y\n[email protected] y\ni @ net\ni\n@ net\ni @ y\nj)\ni @ y\nj where\nnet\ni= P\nH\nj =1 w\nijy\nj:\n@ net i @ y\nj =\n@ @ y\nj( H\nX\nj =1 w\nijy\nj)\n= @ @ y\nj(\nw\ni1 y\n1 +\nw\ni2 y\n2 +\n: : : +w\nijy\nj +\n: : : +w\niH y\nH ) (\nExpandingnet\ni)\n= @ @ y\nj(\nw\ni1 y\n1) + @ @ y\nj(\nw\ni2 y\n2) +\n: : :+ @ @ y\nj(\nw\nijy\nj) +\n: : :+ @ @ y\nj(\nw\niH y\nH )\n(Sum Rule)\n= @ @ y\nj(\nw\nijy\nj)\n= w\nij\n! (1)\n2\n\nNow solve\n@ y\ni @ net\ni:\n@ y i @ net\ni=\n@ @ net\ni(\nnet\ni)\n= 1\n! (2)\nLastly solve @ E @ y\ni:\n@ E @ y\ni=\n@ @ y\ni( O\nX\ni =1 (ln(\nt\ni + 1)\nln( y\ni + 1)) 2\n)\n= @ @ y\ni((ln(\nt\n1 + 1)\nln( y\n1 + 1)) 2\n+ : : : + (ln( t\ni + 1)\nln( y\ni + 1)) 2\n+ : : :\n+ (ln( t\nO + 1)\nln( y\nO + 1)) 2\n) ( ExpandingE)\n= @ @ y\ni((ln(\nt\n1 + 1)\nln( y\n1 + 1)) 2\n) + : : :+ @ @ y\ni((ln(\nt\ni + 1)\nln( y\ni + 1)) 2\n) + : : :\n+ @ @ y\ni((ln(\nt\nO + 1)\nln( y\nO + 1)) 2\n) ( Sum Rule)\n= @ @ y\ni((ln(\nt\ni + 1)\nln( y\ni + 1)) 2\n)\nLet u= ln( t\ni + 1)\nln(y\ni + 1)\n= @ @ u\n(\nu 2\n) @ @ y\ni(ln(\nt\ni + 1)\nln( y\ni + 1))\n@ @ u\n(\nu 2\n) = 2 u (Chain Rule)\n@ @ y\ni(ln(\nt\ni + 1)\nln( y\ni + 1)) = @ @ y\ni(ln(\nt\ni + 1))\n@ @ y\ni(ln(\ny\ni + 1))\n(Sum Rule)\nLet p= y\ni + 1\n@ @ y\ni(ln(\nt\ni + 1)) = 0\n@ @ y\ni(ln(\ny\ni + 1)) = @ @ p\n(ln(\np)) @ @ y\ni(\ny\ni + 1)\n(Chain Rule)\n= 1 p\n\n1\n@ @ y\ni(ln(\nt\ni + 1)\nln( y\ni + 1)) = 0\n1 p\nSubstituting y\ni + 1back in for\npand ln( t\ni + 1)\nln( y\ni + 1) back in for\nuwe get:\n@ E @ y\ni=\n2(ln(\nt\ni + 1)\nln( y\ni + 1) y\ni + 1\n!\n(3)\n3\n\nNow substituting the answers from equations1,2and3back into\n@ E @ y\njwe get:\n@ E @ y\nj= O\nX\ni =1\n2(ln(\nt\ni + 1)\nln( y\ni + 1)) y\ni + 1\n1 w\nij\n!\nFor simplicity sake the rest of the answer will refer to @ E @ y\njas\n\nNow plugging in and the other parts of the original equation to @ E @ w\nj k:\n@ E @ w\nj k=\ny\nk( net\nj 2\nq net\n2\nj + 1 + 1)\nWhen updating the weights between the input and hidden layers the following formula is used:\nw j k =\nw\nj k +\nw\nj k\nto complete this formula w\nj k is as follows:\nw\nj k =\n(y\nk( net\nj 2\nq net\n2\nj + 1 + 1)) Where\nis the learning rate\nThe nal formula is:\nw j k =\ny\nk( net\nj 2\nq net\n2\nj + 1 + 1) +\nw\nj k\n!\n4"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6695242,"math_prob":0.9989365,"size":3733,"snap":"2020-34-2020-40","text_gpt3_token_len":1927,"char_repetition_ratio":0.20809868,"word_repetition_ratio":0.33388293,"special_character_ratio":0.5322797,"punctuation_ratio":0.06781376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999975,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T02:09:33Z\",\"WARC-Record-ID\":\"<urn:uuid:f071bcd0-eac4-41b0-aa06-444a84e01c66>\",\"Content-Length\":\"38996\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b065a9ae-43a4-46d6-ba76-d85d524460a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:05b2a830-d50d-4245-a3a6-b3b2d68ad287>\",\"WARC-IP-Address\":\"104.27.171.30\",\"WARC-Target-URI\":\"https://myssec.com/question-2/\",\"WARC-Payload-Digest\":\"sha1:GSPGNYICBNL444W4A4W7BUWUHDSBLLLX\",\"WARC-Block-Digest\":\"sha1:GM5JA4JXCGNXUO4ONMKFL2R6NCMAXKGB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400213006.47_warc_CC-MAIN-20200924002749-20200924032749-00738.warc.gz\"}"} |
http://olympiacapitalresearch.com/base-ten-blocks-multiplication/ | [
"# 49 Marvelous Models Of Base Ten Blocks Multiplication\n\nAdvertisement",
null,
"49 Marvelous Models Of Base Ten Blocks Multiplication\nWelcome to our blog, in this particular time We’ll give you some great ideas regarding base ten blocks multiplication.\n\nmultiplication using base ten blocks concrete area model of multiplication using base ten blocks multiply by multiples of 10 with base ten blocks in this lesson you will learn how to multiply by a multiple of ten by using base ten blocks multiplication base ten blocks worksheets printable multiplication base ten blocks worksheets showing all 8 printables worksheets are name score place value blocks base ten blocks multiplication georgia how to teach digit multiplication using arrays and multiplication using arrays and base ten blocks is a great way to demonstrate why methods such as partial products and area model work and it is hands on two digit multiplication part e—using base ten blocks students de pose the factors of two digit multiplication problems into the sum of their tens and ones by manipulating a dynamic array of base ten blocks multiplication using base ten blocks math this pin was discovered by melissa boyd discover and save your own pins on pinterest using base ten blocks for multiplication by saintsarah this powerpoint shows how to use base 10 blocks to teach mulit digit multiplication additionally it bridges from the concrete representation to the base ten blocks worksheets math drills base ten blocks worksheets to help support the teaching of number and arithmetic using base 10 blocks to multiply 124 x 3 this video demonstrates how to use base 10 blocks to help students solve multiplication problems that cannot be solved with automatic retrieval base ten blocks base ten blocks worksheets contain position and de position of place value of ones units tens rods hundreds flats and thousands blocks",
null,
"Multiply a 1 digit number by a 2 or 3 digit number from base ten blocks multiplication , source:langfordmath.com",
null,
"Grade 4 Distributive Property of Multiplication Overview from base ten blocks multiplication , source:www.eduplace.com\n\nAdvertisement",
null,
"49 Marvelous Models Of Base Ten Blocks Multiplication | | 4.5"
] | [
null,
"http://olympiacapitalresearch.com/wp-content/themes/olympia/img/ads336.png",
null,
"http://olympiacapitalresearch.com/wp-content/uploads/2018/08/base-ten-blocks-multiplication-new-multiply-a-1-digit-number-by-a-2-or-3-digit-number-of-base-ten-blocks-multiplication.jpg",
null,
"http://olympiacapitalresearch.com/wp-content/uploads/2018/08/base-ten-blocks-multiplication-good-grade-4-distributive-property-of-multiplication-overview-of-base-ten-blocks-multiplication.gif",
null,
"http://olympiacapitalresearch.com/wp-content/themes/olympia/img/ads336.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81698984,"math_prob":0.83260584,"size":2099,"snap":"2019-13-2019-22","text_gpt3_token_len":381,"char_repetition_ratio":0.28114557,"word_repetition_ratio":0.06927711,"special_character_ratio":0.17532158,"punctuation_ratio":0.025787966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993326,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T18:31:23Z\",\"WARC-Record-ID\":\"<urn:uuid:07bce535-6ebc-4281-9eae-36621c8b348b>\",\"Content-Length\":\"40626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a4b4569-6713-4999-80f8-11aeab5780d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:fcedc5c7-bcf4-46a0-8306-09f7a9e086c3>\",\"WARC-IP-Address\":\"104.28.4.150\",\"WARC-Target-URI\":\"http://olympiacapitalresearch.com/base-ten-blocks-multiplication/\",\"WARC-Payload-Digest\":\"sha1:PGO6B4OTL6BIJTQAGOGIMQLKIYCHNW3G\",\"WARC-Block-Digest\":\"sha1:ESQS3LYOYDB544SKTFHN6ECK25GIHBB4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255092.55_warc_CC-MAIN-20190519181530-20190519203530-00511.warc.gz\"}"} |
https://themortgagestudent.com/tag/550a36-binary-search-tree-program-in-c | [
"# binary search tree program in c\n\nAnd C program for Insertion, Deletion, and Traversal in Binary Search Tree. Submitted by Manu Jemini, on December 24, 2017 A Binary Search Tree (BST) is a widely used data structure. C Program to create a binary search tree. Graphical Educational content for Mathematics, Science, Computer Science. Each node can contain only two child node. A repository of tutorials and visualizations to help students learn Computer Science, Mathematics, Physics and Electrical Engineering basics. What is a Binary Search Tree? There are three ways which we use to traverse a tree − In-order Traversal; Pre-order Traversal; Post-order Traversal; We shall now look at the implementation of tree traversal in C programming language here using the following binary tree − Implementation in C Online C Array programs for computer science and information technology students pursuing BE, BTech, MCA, MTech, MCS, MSc, BCA, BSc. Due to this, on average, operations in binary search tree take only O(log n) time. In this example, you will learn about what is Binary search tree (BST)? Get the number of elements from the user. 2. 1. Write a C program to create a Binary Search Tree. The examples of such binary trees are given in Figure 2. Get the input from the user and create a binary search tree. The right node is always greater than its parent. One child is called left child and the other is called right child. An example of binary tree is shown in below diagram. Binary Search Tree Properties: The left sub tree of a node only contain nodes less than the parent node's key. Children of a node of binary tree are ordered. The left node is always smaller than its parent. 3. A tree is said to be a binary tree if each node of the tree can have maximum of two children. Find code solutions to questions for lab practicals and assignments. Visualizations are in the form of Java applets and HTML5 visuals. Some binary trees can have the height of one of the subtrees much larger than the other. The height of a randomly generated binary search tree is O(log n). In that case, the operations can take linear time. Logic. 3. The right sub tree of a node only contains nodes greter than the parent node's key. That is, we cannot random access a node in a tree. To implement binary tree, we will define the conditions for new data to enter into our tree. Some of them are: The implementation of BST (Binary Search Tree) is a fast and efficient method to find an element in a huge set. Detailed Tutorial on Binary Search Tree (BST) In C++ Including Operations, C++ Implementation, Advantages, and Example Programs: A Binary Search Tree or BST as it is popularly called is a binary tree that fulfills the following conditions: The nodes that are lesser than the root node which is placed as left children of the BST. What is Binary Tree? Need for Binary Tree in C. This tree proves to be of great importance, which we will discuss in detail one by one. To learn more about Binary Tree, go through these articles: Open Digital Education.Data for CBSE, GCSE, ICSE and Indian state boards. In that data structure, the nodes are in held in a tree-like structure. There are several applications of a binary tree when it comes to C programming."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9039444,"math_prob":0.7403953,"size":3482,"snap":"2021-04-2021-17","text_gpt3_token_len":767,"char_repetition_ratio":0.1359977,"word_repetition_ratio":0.016666668,"special_character_ratio":0.21510626,"punctuation_ratio":0.12732475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98270535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T12:43:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d4586377-23f0-4a02-894e-862a2d20eb08>\",\"Content-Length\":\"53324\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92539fb3-6c88-4a81-8c1e-4e2c8c22c4d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ab2370c-1236-42ba-94a5-391e6fb78c36>\",\"WARC-IP-Address\":\"104.21.50.200\",\"WARC-Target-URI\":\"https://themortgagestudent.com/tag/550a36-binary-search-tree-program-in-c\",\"WARC-Payload-Digest\":\"sha1:MNSIAVACZERM2UL46ULZK3UTEARDIGOB\",\"WARC-Block-Digest\":\"sha1:4Q4CHWCH63QYCO4MSYUJZTPR7OWTKJMR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704843561.95_warc_CC-MAIN-20210128102756-20210128132756-00212.warc.gz\"}"} |
https://www.askiitians.com/maths/permutation-and-combination.html | [
"",
null,
"Click to Chat\n\n1800-1023-196\n\n+91-120-4616500\n\nCART 0\n\n• 0\n\nMY CART (5)\n\nUse Coupon: CART20 and get 20% off on all online Study Material\n\nITEM\nDETAILS\nMRP\nDISCOUNT\nFINAL PRICE\nTotal Price: Rs.\n\nThere are no items in this cart.\nContinue Shopping",
null,
"• Complete JEE Main/Advanced Course and Test Series\n• OFFERED PRICE: Rs. 15,900\n• View Details\n\n```Revision Notes on Permutations & Combinations\n\nFundamental Principle of Counting\n\nIf one experiment has n possible outcomes and another experiment has m possible outcomes, then there are m × n possible outcomes when both of these experiments are performed simultaneously. In other words, suppose a job has n parts and the job will be completed only when each part is completed. Further, it is known that the first part can be completed in a1 ways, the second part can be completed in a2 ways and so on ...... the nth part can be completed in an ways. Then the total number of ways of doing the job is a1a2a3 ... an. This is also known as the rule of product.\n\nNote:\n\nFundamental Principal of Counting (FPC) is used to calculate the possibilities of an event without actually counting them.\n\nBasic steps to be used while applying the FPC:\n\n1. Try to identify the independent events involved in a given problem.\n\n2. Find the number of ways of performing or possibilities of occurrence of an event.\n\n3. Multiply these numbers to get the total number of ways of occurrence of these events.\n\nPermutations\n\nThe concept of permutation is used for the arrangement of objects in a specific order i.e. whenever the order is important, permutation is used.\n\nThe total number of permutations on a set of n distinct objects is given by n! and is denoted as nPn = n!\n\nThe total number of permutations on a set of n objects taken r at a time is given by nPr = n!/ (n-r)!\n\nThe number of ways of arranging n objects of which r are the same is given by n!/ r!\n\nIf we wish to arrange a total of n objects, out of which ‘p’ are of one type, q of second type are alike, and r of a third kind are same, then such a computation is done by n!/p!q!r!\n\nAlmost all permutation questions involve putting things in order from a line where the order matters. For example ABC is a different permutation to ACB.\n\nThe number of permutations of n distinct objects when a particular object is not to be considered in the arrangement is given by n-1Pr.\n\nThe number of permutations of n distinct objects when a specific object is to be always included in the arrangement is given by r.n-1Pr-1.\n\nIf we need to compute the number of permutations of n different objects, out of which r have to be selected and each object has the probability of occurring once, twice or thrice… up to r times in any arrangement is given by (n)r.\n\nCircular permutation is used when some arrangement is to be made in the form of a ring or circle.\n\nWhen ‘n’ different or unlike objects are to be arranged in a ring in such a way that the clockwise and anticlockwise arrangements are different, then the number of such arrangements is given by (n – 1)!\n\nIf r things are taken at a time out of n distinct things and arranged along a circle, then the number of ways of doing this is given by nCr(r-1)!.\n\nIf clockwise and anti-clockwise are considered to be the same, then the total number of circular permutations is given by (n-1)!/2.\n\nIf n persons are to be seated around a round table in such a way that no person has similar neighbor then it is given by ½ (n – 1)!\n\nThe number of necklaces formed with n beads of different colors = ½ (n – 1)!\n\nnP0 =1\n\nnP1 = n\n\nnPn = n!/(n-n)! = n! /0! = n!/1 = n!\n\nCombinations\n\nIf certain objects are to be arranged in such a way that the order of objects is not important, then the concept of combinations is used.\n\nThe number of combinations of n things taken r (0 < r < n) at a time is given by nCr = n!/r!(n-r)!\n\nThe relationship between combinations and permutations is nCr = nPr/r!\n\nThe number of ways of selecting r objects from n different objects subject to certain condition like:\n\n1. k particular objects are always included = n-kCr-k\n\n2. k particular objects are never included = n-kCr\n\nThe number of arrangement of n distinct objects taken r at a time so that k particular objects are\n\n(i) Always included = n-kCr-k.r!,\n\n(ii) Never included = n-kCr.r!.\n\nIn order to compute the combination of n distinct items taken r at a time wherein, the chances of occurrence of any item are not fixed and may be one, twice, thrice, …. up to r times is given by n+r-1Cr\n\nIf there are m men and n women (m > n) and they have to be seated or accommodated in a row in such a way that no two women sit together then total no. of such arrangements = m+1Cn. m! This is also termed as the Gap Method.\n\nIf we have n different things taken r at a time in form of a garland or necklace, then the required number of arrangements is given by nCr(r-1)!/2.\n\nIf there is a problem that requires n number of persons to be accommodated in such a way that a fixed number say ‘p’ are always together, then that particular set of p persons should be treated as one person. Hence, the total number of people in such a case becomes (n-m+1). Therefore, the total number of possible arrangements is (n-m+1)! m! This is also termed as the String Method.\n\nLet there be n types of objects with each type containing at least r objects. Then the number of ways of arranging r objects in a row is nr.\n\nThe number of selections from n different objects, taking at least one = nC1 + nC2 + nC3 + ... + nCn = 2n - 1.\n\nTotal number of selections of zero or more objects from n identical objects is n+1.\n\nSelection when both identical and distinct objects are present:\n\nThe number of selections, taking at least one out of a1 + a2 + a3 + ... an + k objects, where a1 are alike (of one kind), a2 are alike (of second kind) and so on ... an are alike (of nth kind), and k are distinct = {[(a1 + 1)(a2 + 1)(a3 + 1) ... (an + 1)]2k} - 1.\n\nCombination of n different things taken some or all of n things at a time is given by 2n – 1.\n\nCombination of n things taken some or all at a time when p of the things are alike of one kind, q of the things are alike and of another kind and r of the things are alike of a third kind = [(p + 1) (q + 1)(r + 1)….] – 1.\n\nThe number of ways to select some or all out of (p+q+t) things where p are alike of first kind, q are alike of second kind and the remaining t are different is = (p+1)(q+1)2t – 1.\n\nCombination of selecting s1 things from a set of n1 objects and s2 things from a set of n2 objects where combination of s1 things and s2 things are independent is given by n1Cs1 x n2Cs2\n\nTotal number of ways in which n identical items which can be distributed among p persons so that each person may get any number of items is n+p-1Cp-1.\n\nTotal number of ways in which n identical items can be distributed among p persons such that each them receive at least one item n-1Cp-1\n\nSome results related to nCr\n\n1. nCr = nCn-r\n\n2. If nCr = nCk, then r = k or n-r = k\n\n3. nCr + nCr-1 = n+1Cr\n\n4. nCr = n/r n-1Cr-1\n\n5. nCr/nCr-1 = (n-r+1)/ r\n\n6. If n is even nCr is greatest for r = n/2\n\n7. If n is odd, nCr is greatest for r = (n-1)/2,(n+1)/2\n\nFormation of Groups\n\nThe number of ways in which (m + n) different things can be divided into two groups, one containing m items and the other containing n items is given by\n\nm+nCn = (m+n)!/ m!n!\n\nIn the above case, if m = n i.e. the groups are of same size then the total number of ways of dividing 2n distinct items into two equal groups is given by 2nCn/2!.This can be written as (2n)!/n!n!2!\n\nRemark: The result is divided by 2 in order to avoid repetition i.e. false counting.\n\nThe total number of ways of dividing (m + n + p) distinct items into three unequal groups m, n, p is (m + n + p)!/ m!n!p!.\n\nIn the above case, if m = n = p, then the total number of ways reduce to (3n)!/(n!)3 3!\n\nThe number of ways in which ‘l’ groups of n distinct objects can be formed in such a way that ‘p’ groups are of object n1, q groups of object n2 are given by\n\nn!/ (n1)!p(n2)!q(p!)(q!)\n\nIf (a + b + c) distinct items are to be divided into 3 groups and then distributed among three persons, then the number of ways of doing this is\n\n(a + b + c)!. 3!/ a!b!c!\n\nProblems Based on Number Theory\n\nNumber of divisors\n\nIf N = p1α1 p2α2 ….. pkαk, then\n\nNumber of divisors = Number of ways of selecting zero or more objects from the group of identical objects (α1+1)( α2+1)…(αk+1)\n\nThis includes 1 and N also.\n\nAll divisors excluding 1 and N are called Proper divisors.\n\nSum of divisors:\n\nIf N = p1α1 p2α2 ….. pkαk, then sum of divisors of N is\n\n(1+ p1 + p12 +…+ p1α1) × (1 + p2 + p22 +…+ p2α2) ….. (1 + pk + pk2 +…+ pkαk)\n\n=",
null,
"Number of ways of putting N as a product of two natural numbers is\n\nIf n is not a perfect square = ½ (a1 + 1)(a2 + 1) ….. (ak +1)\n\nIf n is a perfect square = ½ [(a1 + 1)(a2 + 1) ….. (ak +1) + 1].\n\nDerangement and Results on Points\n\nIf n things are arranged in a row, then the number of ways in which they can be deranged so that r things occupy wrong places while (n-r) things occupy their original places, is\n\n= nCn-r Dr, where\n\nDr =",
null,
"If n things are arranged in a row, the number of ways of deranging them so that none of them occupies its original place, is\n\n= nC0 Dn\n\n=",
null,
"If there are n points in plane put of which m (< n) are collinear, then the following results hold good:\n\nTotal number of different straight lines obtained by joining these n points is\n\nnC2 – mC2 +1\n\nTotal number of different triangles formed by joining these n points is nC3 – mC3\n\nNumber of diagonals in polygon of n sides is nC2 – n\n\nIf m parallel lines in a plane are intersected by a family of other n parallel lines. Then total number of parallelograms so formed is mC2 × nC2\n\nNumber of triangles formed by joining vertices of convex polygon of n sides is nC3 of which\n\nNumber of triangles having exactly 2 sides common to the polygon = n\n\nNumber of triangles having exactly 1 side common to the polygon = n(n-4)\n\nNumber of triangles having no side common to the polygon =",
null,
"```",
null,
"",
null,
"### Course Features\n\n• 728 Video Lectures\n• Revision Notes\n• Previous Year Papers\n• Mind Map\n• Study Planner\n• NCERT Solutions\n• Discussion Forum\n• Test paper with Video Solution",
null,
""
] | [
null,
"https://www.askiitians.com/Resources/images/tmp-30.png",
null,
"https://files.askiitians.com/static/ecom/cms/right/engineering-full-course.jpg",
null,
"https://files.askiitians.com/cdn1/images/2014912-151857875-5628-dede.png",
null,
"https://files.askiitians.com/cdn1/images/2014912-15291420-2109-ded.png",
null,
"https://files.askiitians.com/cdn1/images/2014912-152959868-7014-ded.png",
null,
"https://files.askiitians.com/cdn1/images/2014912-153937739-5122-dede.png",
null,
"https://www.askiitians.com/Resources/images/youtube_placeholder.jpg",
null,
"https://files.askiitians.com/static/ecom/cms/bottom/medical-full-course.png",
null,
"https://www.askiitians.com/Resources/images/wishCross-red.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93407285,"math_prob":0.9961495,"size":10158,"snap":"2019-51-2020-05","text_gpt3_token_len":2738,"char_repetition_ratio":0.15392949,"word_repetition_ratio":0.18737476,"special_character_ratio":0.27387282,"punctuation_ratio":0.1059246,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99839795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null,2,null,2,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T06:42:48Z\",\"WARC-Record-ID\":\"<urn:uuid:8e87b821-aba4-4b03-b359-5ee7ef0b1906>\",\"Content-Length\":\"251722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:981b25fc-a562-45c4-a619-f21bda142f29>\",\"WARC-Concurrent-To\":\"<urn:uuid:57b45c83-7228-4ecb-b8a3-a70dadf4ecf3>\",\"WARC-IP-Address\":\"164.52.193.203\",\"WARC-Target-URI\":\"https://www.askiitians.com/maths/permutation-and-combination.html\",\"WARC-Payload-Digest\":\"sha1:GRCMLWA2CN2GBE4Q7F6ZIKCFSECWFLRO\",\"WARC-Block-Digest\":\"sha1:5V6XJLJGSOPQ3I7TYGXQCJE3AL4L5OKK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601615.66_warc_CC-MAIN-20200121044233-20200121073233-00216.warc.gz\"}"} |
https://microdata.no/en/docs/konvertere-fra-tekst-til-numerisk/ | [
"# Converting from alfanumerical (text format) to numerical value format\n\nA number of variables use alfanumerical value format (text format). In these cases the values need to be designated with single quotation marks, f.x. ‘1’, ‘2’. The commando `destring` can be used to convert the alfanumerical values into numerical values.\n\nThe example below demonstrates how to convert the values of the variable `kjønn` (gender) from alfanumerical to numerical. Note: The variable keeps the original name as it is only the value format that is changed.\n\nNote also that all values need to contain numerical characters only, in order for the convertion to be successful, unless the options `force` or `ignore()` are used. The first will force the convertion to go through anyways, but will assign a missing value to values containing non-numerical characters. The second option can be used to specify which characters to be ignored, f.x. commas, dots, hyphens.\n\n``````create-dataset demografidata\nimport BEFOLKNING_KJOENN as kjønn\n\ndestring kjønn\n\n//Produce simple statistics for the numerical variable\n\nsummarize kjønn\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6414106,"math_prob":0.9466268,"size":1084,"snap":"2020-24-2020-29","text_gpt3_token_len":249,"char_repetition_ratio":0.16111112,"word_repetition_ratio":0.0,"special_character_ratio":0.19280443,"punctuation_ratio":0.109947644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9672978,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T17:45:36Z\",\"WARC-Record-ID\":\"<urn:uuid:2b0d31d1-a830-4137-9674-43f97d5cf301>\",\"Content-Length\":\"26843\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75356fc5-1203-4e19-804d-b42e889b63f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6d546ac-814b-4aff-88c1-0dd45d9e5e12>\",\"WARC-IP-Address\":\"193.160.172.134\",\"WARC-Target-URI\":\"https://microdata.no/en/docs/konvertere-fra-tekst-til-numerisk/\",\"WARC-Payload-Digest\":\"sha1:7PKXMOI43G6NELXQU3AQMI5OLMU6C5XN\",\"WARC-Block-Digest\":\"sha1:EMRE2VPYCDOT7OPBQTJTD3OZROP7ZGOT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655879738.16_warc_CC-MAIN-20200702174127-20200702204127-00493.warc.gz\"}"} |
https://jeremykun.com/2016/07/11/the-blum-blum-shub-pseudorandom-generator/?shared=email&msg=fail | [
"# The Blum-Blum-Shub Pseudorandom Generator\n\nProblem: Design a random number generator that is computationally indistinguishable from a truly random number generator.\n\nSolution (in Python): note this solution uses the Miller-Rabin primality tester, though any primality test will do. See the github repository for the referenced implementation.\n\n```from randomized.primality import probablyPrime\nimport random\n\ndef goodPrime(p):\nreturn p % 4 == 3 and probablyPrime(p, accuracy=100)\n\ndef findGoodPrime(numBits=512):\ncandidate = 1\nwhile not goodPrime(candidate):\ncandidate = random.getrandbits(numBits)\nreturn candidate\n\ndef makeModulus():\nreturn findGoodPrime() * findGoodPrime()\n\ndef parity(n):\nreturn sum(int(x) for x in bin(n)[2:]) % 2\n\nclass BlumBlumShub(object):\ndef __init__(self, seed=None):\nself.modulus = makeModulus()\nself.state = seed if seed is not None else random.randint(2, self.modulus - 1)\nself.state = self.state % self.modulus\n\ndef seed(self, seed):\nself.state = seed\n\ndef bitstream(self):\nwhile True:\nyield parity(self.state)\nself.state = pow(self.state, 2, self.modulus)\n\ndef bits(self, n=20):\noutputBits = ''\nfor bit in self.bitstream():\noutputBits += str(bit)\nif len(outputBits) == n:\nbreak\n\nreturn outputBits\n```\n\nDiscussion:\n\nAn integer",
null,
"$x$ is called a quadratic residue of another integer",
null,
"$N$ if it can be written as",
null,
"$x = a^2 \\mod N$ for some",
null,
"$a$. That is, if it’s the remainder when dividing a perfect square by",
null,
"$N$. Some numbers, like",
null,
"$N=8$, have very special patterns in their quadratic residues, only 0, 1, and 4 can occur as quadratic residues.\n\nThe core idea behind this random number generator is that, for a specially chosen modulus",
null,
"$N$, telling whether a number",
null,
"$x$ is a quadratic residue mod",
null,
"$N$ is hard. In fact, one can directly convert an algorithm that can predict the next bit of this random number generator (by even a slight edge) into an arbitrarily accurate quadratic-residue-decider. So if computing quadratic residues is even mildly hard, then predicting the next bit in this random number generator is very hard.\n\nMore specifically, the conjectured guarantee about this random number generator is the following: if you present a polynomial time adversary with two sequences:\n\n1. A truly random sequence of bits of length",
null,
"$k$,\n2.",
null,
"$k$ bits from the output of the pseudorandom generator when seeded with a starting state shorter than",
null,
"$k$ bits.\n\nThen the adversary can’t distinguish between the two sequences with probability “significantly” more than 1/2, where by “significantly” I mean",
null,
"$1/k^c$ for any",
null,
"$c>0$ (i.e., the edge over randomness vanishes faster than any inverse polynomial). It turns out, due to a theorem of Yao, that this is equivalent to not being able to guess the next bit in a pseudorandom sequence with a significant edge over a random guess, even when given the previous",
null,
"$\\log(N)^{10}$ bits in the sequence (or any",
null,
"$\\textup{poly}(\\log N)$ bits in the sequence).\n\nThis emphasizes a deep philosophical viewpoint in theoretical computer science, that whether some object has a property (randomness) really only depends on the power of a computationally limited observer to identify that property. If nobody can tell the difference between fake randomness and real randomness, then the fake randomness is random. Offhand I wonder whether you can meaningfully apply this view to less mathematical concepts like happiness and status.\n\nAnyway, the modulus",
null,
"$N$ is chosen in such a way that every quadratic residue of",
null,
"$N$ has a unique square root which is also a quadratic residue. This makes the squaring function a bijection on quadratic residues. In other words, with a suitably chosen",
null,
"$N$, there’s no chance that we’ll end up with",
null,
"$N=8$ where there are very few quadratic residues and the numbers output by the Blum-Blum-Shub generator have a short cycle. Moreover, the assumption that detecting quadratic residues mod",
null,
"$N$ is hard makes the squaring function a one-way permutation.\n\nHere’s an example of how this generator might be used:\n\n```generator = BlumBlumShub()\n\nhist = * 2**6\nfor i in range(10000):\nvalue = int(generator.bits(6), 2)\nhist[value] += 1\n\nprint(hist)\n```\n\nThis produces random integers between 0 and 64, with the following histogram:",
null,
"See these notes of Junod for a detailed exposition of the number theory behind this random number generator, with full definitions and proofs.\n\n## 12 thoughts on “The Blum-Blum-Shub Pseudorandom Generator”\n\n1.",
null,
"Jay McCarthy\n\nYour comment about the idea that objects have properties only if we can tell that they do/n’t and whether this applies to “happiness and status”: I feel that this is the same principle that the Turing test applies to intelligence and is a very old idea in CS.\n\nLike\n\n•",
null,
"j2kun\n\nDefinitely. IIRC the Turing test doesn’t have any computational complexity restrictions on the questioner (EDIT: *or* on the algorithm displaying its intelligence), which would make pseudorandomness a sort of modern version of the same idea. I seem to recall Scott Aaronson arguing that a Turing test without a poly-time/space requirement is dumb.\n\nLike\n\n2.",
null,
"Michael Casebolt\n\nMaybe “computationally indistinguishable” isn’t the best way to describe this in theory, but I understand that in practice, as long as k is large enough it will be infeasible. Do you have any insight regarding whether the underlying computational hardness assumption will be invalidated by quantum computing? I found an article that suggests there’s a quantum algorithm for solving quadratic residuosity – would that enable the reversal of this PRNG?\n\nLike\n\n•",
null,
"j2kun\n\nYes definitely. Offhand I think also if you can factor the modulus you win.\n\nLike\n\n3.",
null,
"lisa\n\nHi,where can I get the package of ” randomized.primality ”\n\nin the 1st line of : from randomized.primality import probablyPrime .\n\nLike\n\n4.",
null,
"Gucio\n\nI can’t find randomized.primality in that path\n\nLike\n\n5.",
null,
"Alexx\n\nHow does Blum Blum Shub compare with http://www.pcg-random.org/ when run through the Diehard tests and TestU01?\n\nLike\n\n6.",
null,
"jan sebastian\n\nsorry if I ask a silly question. what the reason of using random.getrandbits(numBits) with the numBits value is 512? Thank you\n\nLike\n\n•",
null,
"j2kun\n\nrandom.getrandbits(512) returns an integer consisting of 512 randomly chosen bits. A quick way to find large prime numbers is to pick a random number and apply a probabilistic test for primality. You get confidence of 2^{-100} that it’s prime, which is often good enough.\n\nLike\n\n7.",
null,
"Pandu Abdi Putra\n\nSir, from direct link that have given there is no probablyPrime\nwould you like give explanation about this?\nthank you\n\nLike"
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://jeremykun.files.wordpress.com/2016/07/screen-shot-2016-07-09-at-8-57-28-pm.png",
null,
"https://0.gravatar.com/avatar/646b6450c8478a483eb5c24ab5150555",
null,
"https://0.gravatar.com/avatar/90b179348780a6e7fe8e502968dc534a",
null,
"https://1.gravatar.com/avatar/4b31332459d20d6709a1b973a78619f1",
null,
"https://0.gravatar.com/avatar/90b179348780a6e7fe8e502968dc534a",
null,
"https://2.gravatar.com/avatar/5cd164474f51446de5bf93f88037ec27",
null,
"https://1.gravatar.com/avatar/4eb14791e60991db00ea397b618e97f6",
null,
"https://2.gravatar.com/avatar/88c243dac971f103621a20d463f37423",
null,
"https://0.gravatar.com/avatar/f52384044b3aee23cf3ac9bc9e009370",
null,
"https://0.gravatar.com/avatar/90b179348780a6e7fe8e502968dc534a",
null,
"https://1.gravatar.com/avatar/7ff921a9877185fbbac537eb7344551a",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8494702,"math_prob":0.97537625,"size":6424,"snap":"2021-21-2021-25","text_gpt3_token_len":1407,"char_repetition_ratio":0.124299064,"word_repetition_ratio":0.0051282053,"special_character_ratio":0.20968243,"punctuation_ratio":0.11365564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99597174,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-17T23:00:20Z\",\"WARC-Record-ID\":\"<urn:uuid:bb820800-7e3b-474b-82e8-0e58a28bf33b>\",\"Content-Length\":\"116429\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c6eb1f3-1d3a-49f7-b15d-5b7c143dda14>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e4610a8-9831-40ca-9ba5-6caf69142fe1>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://jeremykun.com/2016/07/11/the-blum-blum-shub-pseudorandom-generator/?shared=email&msg=fail\",\"WARC-Payload-Digest\":\"sha1:FYG523JC35K6UJS4EELJJF3EOYSMV6UZ\",\"WARC-Block-Digest\":\"sha1:ERZOD43KAWOXP3QBRG5KQHV5OOSZ565N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487634576.73_warc_CC-MAIN-20210617222646-20210618012646-00519.warc.gz\"}"} |
https://codeforces.com/blog/entry/66783 | [
"### mnbvmar's blog\n\nBy mnbvmar, 3 years ago,",
null,
"This is the preliminary version of editorial. Expect bugs. Some changes might happen!\n\n1150A. Stock Arbitraging\n\nTutorial\n\n1150B. Tiling Challenge\n\nTutorial\n\nTutorial\n\nTutorial\n\nTutorial\n\nTutorial\nChallenges\n\n1149E. Election Promises\n\nTutorial",
null,
"Tutorial of Codeforces Round #556 (Div. 1)",
null,
"Tutorial of Codeforces Round #556 (Div. 2)",
null,
"Comments (43)\n » Pretest for B were weak :(\n• » » 3 years ago, # ^ | ← Rev. 2 → Yes, I didn't cater one case where we might have some dot lefts in grid.\n• » » » n=int(input()) l=[list(input()) for i in range(n)] for i in range(n-2): for j in range(1,n-1): if l[i][j]==l[i+1][j-1]==l[i+1][j]==l[i+1][j+1]==l[i+2][j]==\".\": l[i][j]=l[i+1][j-1]=l[i+1][j]=l[i+1][j+1]=l[i+2][j]=\"#\"if all([all([l[i][j]==\"#\" for j in range(n)])for i in range(n)]) : print(\"Yes\") else: print(\"No\")\n• » » 3 years ago, # ^ | ← Rev. 2 → a,b,c Div 2 pretests were weak(\n• » » » for div1 C it was strong\n• » » » for div 1 D it was strong\n » Thank you for a lightning fast editorial.What was that sudden increase in difficulty curve from problem C to D (Div. 2)?\n• » » I'll do it tomorrow, I'm too tired to even look at my screen now. :)\n• » » » Have a good rest man\n• » » » thanks for the problems you write,good job,man\n• » » Done!\n » I think that, at least, main tests 37 and 40 for Div2B should have been in the pretests.\n » 3 years ago, # | ← Rev. 2 → Can someone help me find a counterexample to the following logic in div1 D: Find connected components if we use a-edges only Discard all b-edges with ends in the same component In the remaining graph, find shortest path from 0 to each p that uses minimum number of b-edges\n• » » I'm trying to figure out this myself :/\n• » » If I understand you correctly, this should be a small counterexample for $a=2$, $b=3$. The dotted lines are heavier, solid lines are lighter:",
null,
"You can go from $1$ to $5$ using two heavy edges ($1 \\to 2 \\to 5$, cost $6$), which is more optimal than using one heavy edge ($1 \\to 3 \\to 4 \\to 5$, cost $7$).\n• » » 3 years ago, # ^ | ← Rev. 3 → UPD: Deleted, sorry\n » The contest had quite an interesting difficulty distribution. Problem B took quite a bit of time to implement (the solution was extremely simple, though) where C was absolute lolly and D had only 35+ solve in contest time (a little better in the Div 1 version, probably 50+).One other thing, there's a spelling mistake in the problem B name. You missed an 'l'. :pThanks for the super fast editorial.\n• » » One other thing, there's a spelling mistake in the problem B name. You missed an 'l'. :p Fixed! shame on me\n » In Div1.C, you wrote that $δ \\text{ (the final depth, might be non-zero)}$. Maybe it is more clear to say that $δ$ is the total depth change in that block?\n• » » 3 years ago, # ^ | ← Rev. 2 → RobeZH Can you please example a bit more clearly what δ means and how it is used?\n » Challenge to D about 2^o(n) solutionKeep in state last $\\sqrt{n}$ b-edges (or a-components of any size) used and mask of used components bigger than $\\sqrt{n}$. That way we get $2^{O(\\sqrt{n} \\log{n})}$, which can even be slightly improved to $2^{O(\\sqrt{n \\log{n}})}$. Key insight here is that if we have small a-component (say 10) then we need to care only about possible forbidden connections using at most 8 b-edges between its endpoints.\n• » » Yup, absolutely correct!\n » 3 years ago, # | ← Rev. 2 → Can't div2 D be solved searching by searching subsequence (using precalculated positions for each symbol) and marking them as used? We have 6 order varinats to search subsequences: (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1). It goes into $O(q \\cdot 250 \\cdot 6)$, but algorithm looks like not work :(my try\n• » » 3 years ago, # ^ | ← Rev. 2 → Try this testcase: The Word of Universe is \"abadadab\", and there are two religions in that moment. Their descriptions are \"abab\" and \"adad\".\n• » » the long word \"abcbdefe\", and two subsequences \"abef\", \"bcde\"\n• » » I think it can be implemented properly with some backtracking, but not sure how that affects the complexity.\n » Div2 D is a nice problem,though i didn't solve it during contest.. (also got fst on C..nooo..)\n » 3 years ago, # | ← Rev. 2 → For D, during contest I didn't realize we can ignore components of size $3$, so I had a higher complexity and with no optimizations my solution would get TLE. With the following heuristic I was able to pass main tests: in Dijkstra, if the shortest path from $1$ to $i$ is $2$(I used $4$ in contest but $2$ also passes main tests, interesingly $1.99$ doesn't pass) times shorter than the shortest path from $1$ to $i$ with some $mask$, ignore this $(i, mask)$ state. Can someone prove that this works or find a countercase?\n » I don't understand why in B div1 you obtain the shortest prefix in that way. Why can't you choose a character from the same prefix? Why is it always making the prefix bigger, what if there is a character that we can use in the current prefix?\n » Can somebody please explain the solution for Div2D? What exactly are we doing in the DP part? Thank You\n » Just a little request to all the coders who conduct the rounds try to please explain the editorial with the help of examples. I know this will take little bit of your extra time but it will help us how you approach the given problem. Example there are lot of people who are not able to understand the editorial of question D but if you try to explain us with the help of an example they will understand better. It's up to you guys but please think this once. It will help us to learn more and save our time. Thankyou Happy coding.\n• » » If I find some free time this week, I'll try to rewrite this editorial, e.g. to include some examples or write the explicit DP transitions.I even thought about writing some interactive app where you could fiddle with the inputs and see how DP states change, but it's imho quite time-consuming. No promises for this one, then.\n » The solution to div1 D is really beautiful\n » Can anyone prove that Div1D is NP-hard ? I might learn some way to deal with that type of problem to avoid some unnecessary ideas.\n• » » As nobody published their proof, let's go with mine: ReductionWe're reducing from Hamiltonian-Path. We're given a directed graph $G$ consisting of $n$ vertices: $1, 2, \\dots, n$. We want to test if there's a Hamiltonian path in $G$.Let $a=2$, $b=3$, and construct an instance of our problem consisting of $n(2n-1) + 2$ vertices: $s$ (source), $t$ (sink), and $n$ paths $P_1, \\dots, P_n$, each having $2n-1$ vertices: $P_{i,1}, P_{i,2}, \\dots, P_{i,2n-1}$. Each of the path is constructed using light edges only.For each edge $u \\to v$ in the original graph, add $n-1$ heavy edges to our instance: $P_{u,1} \\leftrightarrow P_{v,3}$, $P_{u,3} \\leftrightarrow P_{v,5}$, $\\dots$, $P_{u,2n-3} \\leftrightarrow P_{v,2n-1}$. Add also some heavy edges connecting source and sink with the path: connect the source $s$ with first vertices $P_{i, 1}$ of each path, and the sink $t$ with final vertices $P_{i,2n-1}$ of each path.Now, one can prove that the minimum distance from $s$ to $t$ in our instance is equal to $b(n+1)$ if and only if $G$ contains a Hamiltonian path. (It's an easy exercise for the reader.)On that note, I think probably nobody solving the problem actually proved that the problem is NP-complete — I guess they had some sort of intuition that they shouldn't be able to solve it in polynomial time.\n » Could someone elaborate more on the solution to Div2 D/Div 1 B. I don't quite understand how we can use the helper array to check for a valid construction without keeping an index to the Words Of Universe string in our dp.\n » Could anyone recommend a good textbook about some advanced game theory (with all that ordinal nimbers, hackerbushes etc)?\n• » » Surreal Number? That's so difficult. :( Bad memory to me, I know little about it until now.\n » For div1-D I implemented this code using a priority queue. This is tutorials logic. And took 639ms.But when I implemented same logic using a single normal queue it took 280ms. ( My code ).Can any one tell me why without priority queue Dijkstra working much faster. Please help me to understand the complexity for 2nd code ( without priority queue ) . I understood the complexity of the 1st code ( with priority queue ).\n » I think the transition has problem:when string is : \"abdabc\" , and then the s is \"ad\", the s is\"b\",s is empty,the dp should be 4 but the program show the dp is 2,and the problem is caused by the wrong transition:dp can not update the dp by just using the helper arry\n » I can hack you with the following data:the origin string is \"abd\",3 operations + 1 a + 1 d + 2 b the answer should be YES YES NO but yours is YES YES YES\n » emmmm.... I misunderstand the problem,sorry"
] | [
null,
"https://codeforces.org/s/28653/images/flags/24/gb.png",
null,
"https://codeforces.org/s/28653/images/icons/paperclip-16x16.png",
null,
"https://codeforces.org/s/28653/images/icons/paperclip-16x16.png",
null,
"https://codeforces.org/s/28653/images/icons/comments-48x48.png",
null,
"https://codeforces.com/predownloaded/23/f7/23f7d670f81ca380b7605e8878c3f229f09b08ec.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9182874,"math_prob":0.94796884,"size":8332,"snap":"2022-05-2022-21","text_gpt3_token_len":2271,"char_repetition_ratio":0.08981749,"word_repetition_ratio":0.004216444,"special_character_ratio":0.30088815,"punctuation_ratio":0.12316384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926555,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T18:22:04Z\",\"WARC-Record-ID\":\"<urn:uuid:5504d4fb-8832-42da-b062-f6f5661f41c9>\",\"Content-Length\":\"223453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:517c6bf9-5a75-429b-bf7c-0b8af6e215d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:36816c58-7241-4bd5-b279-ebf40a0b9bd3>\",\"WARC-IP-Address\":\"213.248.110.126\",\"WARC-Target-URI\":\"https://codeforces.com/blog/entry/66783\",\"WARC-Payload-Digest\":\"sha1:RHCUT4JLJVIDWS6N2D6H4MTT373NXTYS\",\"WARC-Block-Digest\":\"sha1:HTZ6ZVUANR5MQXNTHQIGFIPGVDL5ZLGC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662675072.99_warc_CC-MAIN-20220527174336-20220527204336-00767.warc.gz\"}"} |
http://93zxw.com/qspevdu_t1001004 | [
"• 湖南\n• 长沙市\n• 常德市\n• 郴州市\n• 衡阳市\n• 怀化市\n• 娄底市\n• 邵阳市\n• 湘潭市\n• 湘西土家族苗族自治州\n• 益阳市\n• 永州市\n• 岳阳市\n• 张家界市\n• 株洲市\n• 山西\n• 长治市\n• 大同市\n• 晋城市\n• 晋中市\n• 临汾市\n• 吕梁市\n• 朔州市\n• 太原市\n• 忻州市\n• 阳泉市\n• 运城市\n• 安徽\n• 安庆市\n• 蚌埠市\n• 亳州市\n• 巢湖市\n• 池州市\n• 滁州市\n• 阜阳市\n• 合肥市\n• 淮北市\n• 淮南市\n• 黄山市\n• 六安市\n• 马鞍山市\n• 宿州市\n• 铜陵市\n• 芜湖市\n• 宣城市\n• 广西\n• 百色市\n• 北海市\n• 崇左市\n• 防城港市\n• 贵港市\n• 桂林市\n• 河池市\n• 贺州市\n• 来宾市\n• 柳州市\n• 南宁市\n• 钦州市\n• 梧州市\n• 玉林市\n• 河南\n• 安阳市\n• 鹤壁市\n• 焦作市\n• 开封市\n• 洛阳市\n• 漯河市\n• 南阳市\n• 平顶山市\n• 濮阳市\n• 三门峡市\n• 商丘市\n• 新乡市\n• 信阳市\n• 许昌市\n• 郑州市\n• 周口市\n• 驻马店市\n• 吉林\n• 白城市\n• 白山市\n• 长春市\n• 吉林市\n• 辽源市\n• 四平市\n• 松原市\n• 通化市\n• 延边朝鲜族自治州\n• 广东\n• 潮州市\n• 东莞市\n• 佛山市\n• 广州市\n• 河源市\n• 惠州市\n• 江门市\n• 揭阳市\n• 茂名市\n• 梅州市\n• 清远市\n• 汕头市\n• 汕尾市\n• 韶关市\n• 深圳市\n• 阳江市\n• 云浮市\n• 湛江市\n• 肇庆市\n• 中山市\n• 珠海市\n• 辽宁\n• 鞍山市\n• 本溪市\n• 朝阳市\n• 大连市\n• 丹东市\n• 抚顺市\n• 阜新市\n• 葫芦岛市\n• 锦州市\n• 辽阳市\n• 盘锦市\n• 沈阳市\n• 铁岭市\n• 营口市\n• 湖北\n• 鄂州市\n• 恩施土家族苗族自治州\n• 黄冈市\n• 黄石市\n• 荆门市\n• 荆州市\n• 直辖行政单位\n• 十堰市\n• 随州市\n• 武汉市\n• 咸宁市\n• 襄阳市\n• 孝感市\n• 宜昌市\n• 江西\n• 抚州市\n• 赣州市\n• 吉安市\n• 景德镇市\n• 九江市\n• 南昌市\n• 萍乡市\n• 上饶市\n• 新余市\n• 宜春市\n• 鹰潭市\n• 浙江\n• 杭州市\n• 湖州市\n• 嘉兴市\n• 金华市\n• 丽水市\n• 宁波市\n• 衢州市\n• 绍兴市\n• 台州市\n• 温州市\n• 舟山市\n• 青海\n• 果洛藏族自治州\n• 海北藏族自治州\n• 海东地区\n• 海南藏族自治州\n• 海西蒙古族藏族自治州\n• 黄南藏族自治州\n• 西宁市\n• 玉树藏族自治州\n• 甘肃\n• 白银市\n• 定西市\n• 甘南藏族自治州\n• 嘉峪关市\n• 金昌市\n• 酒泉市\n• 兰州市\n• 临夏回族自治州\n• 陇南市\n• 平凉市\n• 庆阳市\n• 天水市\n• 武威市\n• 张掖市\n• 贵州\n• 安顺市\n• 毕节市\n• 贵阳市\n• 六盘水市\n• 黔东南苗族侗族自治州\n• 黔南布依族苗族自治州\n• 黔西南布依族苗族自治州\n• 铜仁地区\n• 遵义市\n• 陕西\n• 安康市\n• 宝鸡市\n• 汉中市\n• 商洛市\n• 铜川市\n• 渭南市\n• 西安市\n• 咸阳市\n• 延安市\n• 榆林市\n• 西藏\n• 阿里地区\n• 昌都地区\n• 拉萨市\n• 林芝地区\n• 那曲地区\n• 日喀则地区\n• 山南地区\n• 宁夏\n• 固原市\n• 石嘴山市\n• 吴忠市\n• 银川市\n• 中卫市\n• 福建\n• 福州市\n• 龙岩市\n• 南平市\n• 宁德市\n• 莆田市\n• 泉州市\n• 三明市\n• 厦门市\n• 漳州市\n• 内蒙古\n• 阿拉善盟\n• 巴彦淖尔市\n• 包头市\n• 赤峰市\n• 鄂尔多斯市\n• 呼和浩特市\n• 呼伦贝尔市\n• 通辽市\n• 乌海市\n• 乌兰察布市\n• 锡林郭勒盟\n• 兴安盟\n• 云南\n• 保山市\n• 楚雄彝族自治州\n• 大理白族自治州\n• 德宏傣族景颇族自治州\n• 迪庆藏族自治州\n• 红河哈尼族彝族自治州\n• 昆明市\n• 丽江市\n• 临沧市\n• 怒江傈僳族自治州\n• 曲靖市\n• 思茅市\n• 文山壮族苗族自治州\n• 西双版纳傣族自治州\n• 玉溪市\n• 昭通市\n• 新疆\n• 阿克苏地区\n• 阿勒泰地区\n• 巴音郭楞蒙古自治州\n• 博尔塔拉蒙古自治州\n• 昌吉回族自治州\n• 哈密地区\n• 和田地区\n• 喀什地区\n• 克拉玛依市\n• 克孜勒苏柯尔克孜自治州\n• 直辖行政单位\n• 塔城地区\n• 吐鲁番地区\n• 乌鲁木齐市\n• 伊犁哈萨克自治州\n• 黑龙江\n• 大庆市\n• 大兴安岭地区\n• 哈尔滨市\n• 鹤岗市\n• 黑河市\n• 鸡西市\n• 佳木斯市\n• 牡丹江市\n• 七台河市\n• 齐齐哈尔市\n• 双鸭山市\n• 绥化市\n• 伊春市\n• 香港\n• 香港\n• 九龙\n• 新界\n• 澳门\n• 澳门\n• 其它地区\n• 台湾\n• 台中市\n• 台南市\n• 高雄市\n• 台北市\n• 基隆市\n• 嘉义市\n•",
null,
"无锡波纹填料选雪浪化工_价格优惠,波纹填料价位\n\n品牌:雪浪牌,,\n\n出厂地:环江毛南族自治县(思恩镇)\n\n报价:面议\n\n无锡市雪浪化工填料淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:生产型\n\n主营:填料,化工填料,分布器,空分填料,塔内件\n\n•",
null,
"托普搅拌提供品牌好的搅拌机,昆山轻便可夹变速搅拌器\n\n品牌:托普,,\n\n出厂地:环江毛南族自治县(思恩镇)\n\n报价:面议\n\n无锡市托普搅拌设备淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:生产型\n\n主营:侧入式搅拌机,顶入式搅拌机,无漏油搅拌机,水处理搅拌机,自吸式搅拌机\n\n•",
null,
"厂家直销多功能锯末颗粒机 型号齐全 价格合理 免费安装\n\n品牌:茂祥\n\n出厂地:昭平县(昭平镇)\n\n报价:面议\n\n郑州茂祥机械淘宝彩票官网下载手机版\n\n经营模式:贸易型\n\n主营:粉碎机、木材粉碎机、烘干机、秸秆压块机、秸秆煤炭成型机、秸秆颗粒机、易拉罐粉碎....\n\n•",
null,
"聚乙烯蜡厂家江西宏远给您满意\n\n品牌:恒佳信\n\n出厂地:象州县(象州镇)\n\n报价:面议\n\n江西宏远化工淘宝彩票官网下载手机版\n\n经营模式:生产型\n\n主营:硬脂酸锌、硬脂酸钙、聚乙烯蜡、钙锌稳定剂\n\n•",
null,
"雪浪化工_质量好的分布器提供商-分布器价格\n\n品牌:雪浪牌,,\n\n出厂地:环江毛南族自治县(思恩镇)\n\n报价:面议\n\n无锡市雪浪化工填料淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:生产型\n\n主营:填料,化工填料,分布器,空分填料,塔内件\n\n•",
null,
"硬脂酸盐系列产品有何区别 宏远化工带你了解\n\n品牌:恒佳信\n\n出厂地:象州县(象州镇)\n\n报价:面议\n\n江西宏远化工淘宝彩票官网下载手机版\n\n经营模式:生产型\n\n主营:硬脂酸锌、硬脂酸钙、聚乙烯蜡、钙锌稳定剂\n\n•",
null,
"关于环保钙锌稳定剂固体的介绍\n\n品牌:恒佳信\n\n出厂地:象州县(象州镇)\n\n报价:面议\n\n江西宏远化工淘宝彩票官网下载手机版\n\n经营模式:生产型\n\n主营:硬脂酸锌、硬脂酸钙、聚乙烯蜡、钙锌稳定剂\n\n•",
null,
"新疆PVC-C电力管价格行情-阿克苏PVC-C电力管厂家推荐\n\n品牌:金飞龙,,\n\n出厂地:三江侗族自治县(古宜镇)\n\n报价:面议\n\n新疆金飞龙塑胶制品淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:生产型\n\n主营:玻璃钢化粪池,玻璃钢电力管,PVC-C电力管,玻璃钢储罐,玻璃钢夹砂管\n\n•",
null,
"青岛纤维填料价格,价位合理的填料供销\n\n品牌:宝林蝶,,\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n青岛宝林蝶环保设备淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:生产型\n\n主营:微孔曝气器,微孔曝气盘,可提升式曝气器,悬挂链曝气器,曝气设备\n\n•",
null,
"买除湿机认准海源泰工贸,广州除湿机代理商\n\n品牌:海尔,统帅,HOBOT\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n青岛海源泰工贸淘宝彩票官网下载手机版\n\n黄金会员:",
null,
"经营模式:贸易型\n\n主营:海尔智能扫地机器人,海尔指纹锁,海尔智能马桶盖,海尔空气净化器,海尔小家电\n\n• 没有找到合适的供应商?您可以发布采购信息\n\n没有找到满足要求的供应商?您可以搜索 化工批发 化工公司 化工厂\n\n### 最新入驻厂家\n\n相关产品:\n波纹填料 搅拌机 郑州茂祥机械淘宝彩票官网下载手机版 聚乙烯蜡厂家 分布器 硬脂酸锌 环保钙锌稳定剂 PVC-C电力管 填料 除湿机"
] | [
null,
"http://image-ali.bianjiyi.com/1/2016/1223/08/585c7566a75ac.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2019/0816/16/15659435721774.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null,
"http://imagebooksir.258fuwu.com/images/business/201838/15/1520493713.jpeg",
null,
"http://imagebooksir.258fuwu.com/images/business/2018131/16/1517386936.jpeg",
null,
"http://image-ali.bianjiyi.com/1/2017/0322/11/58d1ef84834c0.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null,
"http://imagebooksir.258fuwu.com/images/business/2019420/14/3366193741555741565.jpeg",
null,
"http://imagebooksir.258fuwu.com/images/business/201867/17/3366193741528364794.jpeg",
null,
"http://image-ali.bianjiyi.com/1/2018/1016/16/15396775607053.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2019/0528/17/15590341897319.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2019/0805/10/15649723693964.jpg",
null,
"http://www.booksir.cn/Public/Images/ForeApps/grade2.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.6712182,"math_prob":0.48554677,"size":916,"snap":"2019-35-2019-39","text_gpt3_token_len":1113,"char_repetition_ratio":0.18640351,"word_repetition_ratio":0.060606062,"special_character_ratio":0.23799127,"punctuation_ratio":0.26872247,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9791872,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,null,null,1,null,null,null,1,null,2,null,3,null,null,null,1,null,1,null,1,null,null,null,1,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T14:46:11Z\",\"WARC-Record-ID\":\"<urn:uuid:79457906-3a82-4a2f-a551-b84892393dad>\",\"Content-Length\":\"99703\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1ec7b2c-1cdc-4f3a-a690-c37f787b282c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfb4bb1d-d483-4702-95ac-e3d8e542c54e>\",\"WARC-IP-Address\":\"103.206.108.230\",\"WARC-Target-URI\":\"http://93zxw.com/qspevdu_t1001004\",\"WARC-Payload-Digest\":\"sha1:NL5DUUNF7H2SCHAUVWRJDUXGO3OGG5E7\",\"WARC-Block-Digest\":\"sha1:EFMCPMAYJRXEQJBTCBPCCK5PQBU2JJLX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314752.21_warc_CC-MAIN-20190819134354-20190819160354-00157.warc.gz\"}"} |
http://butterbeebetty.com/shapes-worksheets-2nd-grade/ | [
"",
null,
"",
null,
"shape shapes worksheets 2nd grade identifying polygons worksheet.",
null,
"",
null,
"first grade geometry worksheets mathematics page 4 shapes 2nd coloring worksheet.",
null,
"",
null,
"tric shapes worksheet elegant worksheets for second grade try save kindergarten of 2nd partitioning.",
null,
"",
null,
"studying the names of shapes mathematics skills online geometry worksheets 2nd grade math.",
null,
"grade shapes worksheets geometric geometry worksheet awesome edges and vertices 2nd plane shape.",
null,
"rectangle shape maze printable worksheet math worksheets grade shapes 2nd recognizing.",
null,
"shapes worksheets grade math fresh kindergarten colour similar worksheet for of cool 2nd plane shape.",
null,
"shapes worksheets for grade 2 polygons 2nd 3d.",
null,
"and shapes objects worksheets bundle grade math identifying translation rotation reflection translate rotate 2 geometry 3 d 2nd 3d.",
null,
"color the geometric figures shapes worksheets 2nd grade polygons.",
null,
"shapes worksheets for grade identifying polygons worksheet 2nd math.",
null,
"identifying shapes worksheets grade transformation geometry math on for 1 polygons 2nd solid.",
null,
"identifying shapes worksheets grade polygon gallery for geometry math polygons 2nd worksheet.",
null,
"plane shapes worksheet numbers shape worksheets grade figures for 3 2nd partitioning.",
null,
"plane figure worksheet geometry figures and solid shapes worksheets best images of hidden shape grade finding 2nd 3 dimensional.",
null,
"geometry worksheet grade 3 math worksheets shapes 2nd solid.",
null,
"shapes worksheets grade and for first activities fair plane in 2nd 3 dimensional.",
null,
"transform plane shapes worksheets for first grade in solid and images gr free library download print 2nd polygon.",
null,
"geometric patterns worksheet grade kindergarten shapes worksheets geometry practice on label the diagram 2nd 3d.",
null,
"3 dimensional figures worksheets shapes grade similar 2nd math 3d.",
null,
"first grade shapes worksheets geometric free kindergarten 2nd shape recognition.",
null,
"grade geometry worksheet generator activities maker 2 shapes worksheets games 2nd 3 dimensional gra.",
null,
"geometry worksheet grade worksheets shapes 3 second math 2nd dimensional.",
null,
"identifying shapes worksheets grade kindergarten identify 3 dimensional 2nd naming polygons worksheet.",
null,
"worksheet on basic shapes geometrical common solid solids pyramids chart nets faces edges vertices figures worksheets 2nd grade plane shape."
] | [
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/collection-of-free-geometric-shapes-worksheet-grade-ready-to-download-or-print-please-do-not-use-any-for-worksheets-2nd-math-3d-worksh.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/shape-shapes-worksheets-2nd-grade-identifying-polygons-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/geometry-worksheets-quadrilaterals-shapes-2nd-grade-recognizing.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/first-grade-geometry-worksheets-mathematics-page-4-shapes-2nd-coloring-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/identifying-shapes-worksheets-grade-collection-of-math-and-download-them-shape-for-first-2nd-coloring-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/tric-shapes-worksheet-elegant-worksheets-for-second-grade-try-save-kindergarten-of-2nd-partitioning.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/download-by-shapes-worksheets-2nd-grade-naming-polygons-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/studying-the-names-of-shapes-mathematics-skills-online-geometry-worksheets-2nd-grade-math.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/grade-shapes-worksheets-geometric-geometry-worksheet-awesome-edges-and-vertices-2nd-plane-shape.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/rectangle-shape-maze-printable-worksheet-math-worksheets-grade-shapes-2nd-recognizing.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/shapes-worksheets-grade-math-fresh-kindergarten-colour-similar-worksheet-for-of-cool-2nd-plane-shape.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/shapes-worksheets-for-grade-2-polygons-2nd-3d.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/and-shapes-objects-worksheets-bundle-grade-math-identifying-translation-rotation-reflection-translate-rotate-2-geometry-3-d-2nd-3d.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/color-the-geometric-figures-shapes-worksheets-2nd-grade-polygons.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/shapes-worksheets-for-grade-identifying-polygons-worksheet-2nd-math.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/identifying-shapes-worksheets-grade-transformation-geometry-math-on-for-1-polygons-2nd-solid.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/identifying-shapes-worksheets-grade-polygon-gallery-for-geometry-math-polygons-2nd-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/plane-shapes-worksheet-numbers-shape-worksheets-grade-figures-for-3-2nd-partitioning.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/plane-figure-worksheet-geometry-figures-and-solid-shapes-worksheets-best-images-of-hidden-shape-grade-finding-2nd-3-dimensional.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/geometry-worksheet-grade-3-math-worksheets-shapes-2nd-solid.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/shapes-worksheets-grade-and-for-first-activities-fair-plane-in-2nd-3-dimensional.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/transform-plane-shapes-worksheets-for-first-grade-in-solid-and-images-gr-free-library-download-print-2nd-polygon.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/geometric-patterns-worksheet-grade-kindergarten-shapes-worksheets-geometry-practice-on-label-the-diagram-2nd-3d.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/3-dimensional-figures-worksheets-shapes-grade-similar-2nd-math-3d.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/first-grade-shapes-worksheets-geometric-free-kindergarten-2nd-shape-recognition.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/grade-geometry-worksheet-generator-activities-maker-2-shapes-worksheets-games-2nd-3-dimensional-gra.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/geometry-worksheet-grade-worksheets-shapes-3-second-math-2nd-dimensional.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/identifying-shapes-worksheets-grade-kindergarten-identify-3-dimensional-2nd-naming-polygons-worksheet.jpg",
null,
"http://butterbeebetty.com/wp-content/uploads/2019/05/worksheet-on-basic-shapes-geometrical-common-solid-solids-pyramids-chart-nets-faces-edges-vertices-figures-worksheets-2nd-grade-plane-shape.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81951225,"math_prob":0.95471543,"size":2660,"snap":"2019-26-2019-30","text_gpt3_token_len":496,"char_repetition_ratio":0.3192771,"word_repetition_ratio":0.0,"special_character_ratio":0.16842106,"punctuation_ratio":0.0723192,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9903959,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58],"im_url_duplicate_count":[null,2,null,2,null,1,null,2,null,1,null,1,null,1,null,2,null,1,null,2,null,2,null,2,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null,2,null,2,null,1,null,2,null,1,null,1,null,1,null,2,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T19:43:44Z\",\"WARC-Record-ID\":\"<urn:uuid:0c8bbaf7-0ea6-4e3e-ba6f-e21cd3442433>\",\"Content-Length\":\"42104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:860956eb-8229-4ed7-b293-5287ecd732b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:bec92758-246b-4af5-a1e6-c9e6cab6a375>\",\"WARC-IP-Address\":\"104.27.143.229\",\"WARC-Target-URI\":\"http://butterbeebetty.com/shapes-worksheets-2nd-grade/\",\"WARC-Payload-Digest\":\"sha1:2ZOZR2XDZLOQOVPJKJ62JOIC7EGYPMG5\",\"WARC-Block-Digest\":\"sha1:ACUOP4LP37GSJYTGVR4E3EHZTFTWGW2R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526670.1_warc_CC-MAIN-20190720194009-20190720220009-00305.warc.gz\"}"} |
https://tex.stackexchange.com/questions/164797/vertical-space-between-matrices | [
"# Vertical space between matrices\n\nI am writing some lecture notes in Econometrics and I want your advice on how to insert some vertical space between two matrices so the result will be easier to follow.\n\nHere is the minimal example:\n\n\\documentclass[10pt,a4paper]{article}\n\\usepackage[latin1]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\begin{document}\n\n\\begin{align*}\nVar(u) &= E(uu') = \\\\\n&= E\n\\begin{bmatrix}\nu_1 u_1' & u_1 u_2' & \\cdots & u_1 u_M' \\\\\nu_2 u_1' & u_2 u_2' & \\cdots & u_2 u_M' \\\\\n\\vdots & & \\ddots & \\vdots \\\\\nu_M u_1' & u_M u_2' & \\cdots & u_M u_M '\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma_{11} I & \\sigma_{12} I & \\cdots & \\sigma_{1M} I \\\\\n\\sigma_{12} I & \\sigma_{22} I & \\cdots & \\sigma_{2M} I \\\\\n\\vdots & & \\ddots & \\vdots\\\\\n\\sigma_{1n} I & \\sigma_{2M} I & \\cdots & \\sigma_{MM} I\n\\end{bmatrix} \\\\\n&=\n\\begin{pmatrix}\n\\sigma_{11} & \\sigma_{12} & \\cdots & \\sigma_{1M} \\\\\n\\sigma_{12} & \\sigma_{22} & \\cdots & \\sigma_{2M} \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n\\sigma_{1M} & \\sigma_{2M} & \\cdots & \\sigma_{MM}\n\\end{pmatrix} \\otimes I_T\n\\end{align*}\n\n\\end{document}\n\n• You can use \\[10pt] or so. (that should be a double backslash) – John Kormylo Mar 10 '14 at 21:18\n• @JohnKormylo using that command just before \\begin{pmatrix}' does not seem to do the job for me. What exactly do you mean? Thanks in advance. – Pantelis Kazakis Mar 10 '14 at 21:33\n• Should the term \\otimes I_T really be typeset as a subscript to the preceding matrix? – Mico Mar 15 '14 at 9:00\n• @MIco No, it is a Kronecker product. – Pantelis Kazakis Mar 15 '14 at 23:49\n\nTo illustrate @JohnKormylo's comment, you would need to put it in place of the \\\\ you have already. Changing your MWE (for a gap of 10pt):\n\n\\documentclass[10pt,a4paper]{article}\n\\usepackage[latin1]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\begin{document}\n\n\\begin{align*}\nVar(u) &= E(uu') = \\\\\n&= E\n\\begin{bmatrix}\nu_1 u_1' & u_1 u_2' & \\cdots & u_1 u_M' \\\\\nu_2 u_1' & u_2 u_2' & \\cdots & u_2 u_M' \\\\\n\\vdots & & \\ddots & \\vdots \\\\\nu_M u_1' & u_M u_2' & \\cdots & u_M u_M '\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma_{11} I & \\sigma_{12} I & \\cdots & \\sigma_{1M} I \\\\\n\\sigma_{12} I & \\sigma_{22} I & \\cdots & \\sigma_{2M} I \\\\\n\\vdots & & \\ddots & \\vdots\\\\\n\\sigma_{1n} I & \\sigma_{2M} I & \\cdots & \\sigma_{MM} I\n\\end{bmatrix} \\\\[10pt] %%%%%%%%%%%%%%%%% Change this for different spacing\n&=\n\\begin{pmatrix}\n\\sigma_{11} & \\sigma_{12} & \\cdots & \\sigma_{1M} \\\\\n\\sigma_{12} & \\sigma_{22} & \\cdots & \\sigma_{2M} \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n\\sigma_{1M} & \\sigma_{2M} & \\cdots & \\sigma_{MM}\n\\end{pmatrix}_{\\otimes I_T}\n\\end{align*}\n\n\\end{document}\\documentclass[]{article}\n\n\\begin{document}\n\n\\title{Title}\n\\author{Author}\n\\date{Today}\n\\maketitle\n\nContent\n\n\\end{document}",
null,
"You can add space by changing the \\\\ after \\end{bmatrix} into \\\\[1em], where 1em can be any length. In your example:\n\n\\documentclass[10pt,a4paper]{article}\n\\usepackage[latin1]{inputenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\n\\begin{document}\n\n\\begin{align*}\nVar(u) &= E(uu') = \\\\[1em]\n&= E\n\\begin{bmatrix}\nu_1 u_1' & u_1 u_2' & \\cdots & u_1 u_M' \\\\\nu_2 u_1' & u_2 u_2' & \\cdots & u_2 u_M' \\\\\n\\vdots & & \\ddots & \\vdots \\\\\nu_M u_1' & u_M u_2' & \\cdots & u_M u_M '\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\\sigma_{11} I & \\sigma_{12} I & \\cdots & \\sigma_{1M} I \\\\\n\\sigma_{12} I & \\sigma_{22} I & \\cdots & \\sigma_{2M} I \\\\\n\\vdots & & \\ddots & \\vdots\\\\\n\\sigma_{1n} I & \\sigma_{2M} I & \\cdots & \\sigma_{MM} I\n\\end{bmatrix} \\\\[1em]\n&=\n\\begin{pmatrix}\n\\sigma_{11} & \\sigma_{12} & \\cdots & \\sigma_{1M} \\\\\n\\sigma_{12} & \\sigma_{22} & \\cdots & \\sigma_{2M} \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n\\sigma_{1M} & \\sigma_{2M} & \\cdots & \\sigma_{MM}\n\\end{pmatrix}_{\\otimes I_T}\n\\end{align*}\n\n\\end{document}\n\n\nHere is how I will do it:\n\n\\documentclass{article}\n\n\\usepackage{mathtools}\n\\DeclareMathOperator*{\\Var}{Var}\n\n\\begin{document}\n\n\\begin{align*}\n\\Var(u)\n&= E(uu') \\\\[1ex]\n&= E {\\mkern -4mu}\n\\begin{bmatrix}\nu_{1}u_{1}' & u_{1}u_{2}' & \\cdots & u_{1}u_{M}' \\\\\nu_{2}u_{1}' & u_{2}u_{2}' & \\cdots & u_{2}u_{M}' \\\\\n\\vdots & & \\ddots & \\vdots \\\\\nu_{M}u_{1}' & u_{M}u_{2}' & \\cdots & u_{M}u_{M}'\n\\end{bmatrix} \\\\[1ex]\n&= {\\mkern -6mu}\n\\begin{bmatrix}\n\\sigma_{11}I & \\sigma_{12}I & \\cdots & \\sigma_{1M}I \\\\\n\\sigma_{12}I & \\sigma_{22}I & \\cdots & \\sigma_{2M}I \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n\\sigma_{1M}I & \\sigma_{2M}I & \\cdots & \\sigma_{MM}I\n\\end{bmatrix} \\\\[1ex]\n&= {\\mkern -6mu}\n\\begin{pmatrix}\n\\sigma_{11} & \\sigma_{12} & \\cdots & \\sigma_{1M} \\\\\n\\sigma_{12} & \\sigma_{22} & \\cdots & \\sigma_{2M} \\\\\n\\vdots & & \\ddots & \\vdots \\\\\n\\sigma_{1M} & \\sigma_{2M} & \\cdots & \\sigma_{MM}\n\\end{pmatrix}_{\\mkern -4mu \\otimes I_{T}}\n\\end{align*}\n\n\\end{document}",
null,
"Note the horizontal spacing improvements using \\mkern`.\n\nP.S. I think it doesn't look good with two matrices on the same line in this case."
] | [
null,
"https://i.stack.imgur.com/AdoXk.png",
null,
"https://i.stack.imgur.com/THgwF.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51387703,"math_prob":0.99702775,"size":1162,"snap":"2020-45-2020-50","text_gpt3_token_len":462,"char_repetition_ratio":0.23316061,"word_repetition_ratio":0.05376344,"special_character_ratio":0.4182444,"punctuation_ratio":0.027472528,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99944586,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T15:03:00Z\",\"WARC-Record-ID\":\"<urn:uuid:e1b15a41-6388-4272-8e71-1eeb79872729>\",\"Content-Length\":\"170754\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4741722-2e09-4265-ac8b-60a8a0c4072a>\",\"WARC-Concurrent-To\":\"<urn:uuid:50c66e94-087d-4458-aa85-8b9bf183d9c6>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/164797/vertical-space-between-matrices\",\"WARC-Payload-Digest\":\"sha1:HNQNRYK7YPFI7AMIMBVV4WWTOJISYESC\",\"WARC-Block-Digest\":\"sha1:YDNOM2BL5WQ7GQPFY35BQH4FZI5BSTQ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141674594.59_warc_CC-MAIN-20201201135627-20201201165627-00254.warc.gz\"}"} |
https://scialert.net/fulltext/?doi=jas.2013.308.314&org=11 | [
"Research Article\n\n# Assessment of Different Matrix-fracture Shape Factor in Double Porosity Medium",
null,
"K.S. Lai and W.K.S. Pao\n\nABSTRACT\n\nShape factor is required in modeling naturally fractured reservoirs represented by double porosity theory. The function of shape factor is to define the leakage term between the matrix block and fracture conduits. Different geometric assumption of the double porosity medium resulted in different shape factors and consequently the leakage term. The influence of various shape factors based on different geometric assumptions on their matrix-fracture transfer rate has been studied in this study. Comparison of different shape factor performance is accomplished via the dimensionless pressure and time. This study proposed a new relation that measures the impact of different shape factor in the consequence flow rate in fractured reservoir. It was discovered that higher value of shape factor contributes to higher rate of change of flow rate. This correlation can aid in deciding the appropriate shape factor for modeling double porosity medium.\n\n Services Related Articles in ASCI Similar Articles in this Journal Search in Google Scholar View Citation Report Citation Science Alert\n\n How to cite this article: K.S. Lai and W.K.S. Pao, 2013. Assessment of Different Matrix-fracture Shape Factor in Double Porosity Medium. Journal of Applied Sciences, 13: 308-314. DOI: 10.3923/jas.2013.308.314 URL: https://scialert.net/abstract/?doi=jas.2013.308.314\n\nReceived: August 30, 2012; Accepted: January 30, 2013; Published: February 21, 2013\n\nINTRODUCTION\n\nAccording to Sarma and Aziz (2006), Naturally Fractured Reservoir (NFR) is a complex system with irregular fractures network, vugs and matrix blocks. They added that NFR can be defined as a reservoir having a connected fractures network which has significant higher permeability than matrix. This implies that production of hydrocarbons is highly dependent on the matrix-fractures interaction. This paper assessed the influence of different shape factor in modeling the matrix-fracture transfer rate.\n\nNFR can be modeled by using double porosity concept. The double porosity concept was introduced by Barenblatt et al. (1960), while Warren and Root (1963) were the first to use double porosity concept in reservoir simulation. Double porosity concept is having two separate partial differential equations to define matrix and fractures flow. Matrix usually have low permeability and high storativity, while fractures have high permeability but low storativity. This suggest that matrix function as a main source of hydrocarbons while, fractures become the flow path of hydrocarbon production. For this reason, interaction between matrix and fractures should be considered. This interaction can be described by using a transfer function. The matrix-fracture transfer function was given by Warren and Root (1963) as:",
null,
"(1)\n\nEquation 1 showed that the matrix-fracture transfer function which requires shape factor to governs the flow.\n\nThe initial double porosity model of Warren and Root (1963) used assumption of pseudosteady flow and the NFR system is simplified into blocks of matrix and fractures set which looks like sugar cubes (Fig. 1).",
null,
"Fig. 1(a-b): Actual reservoir block and sugar cubes model\n\nEach cube is known as matrix that contained in within a systematic array of identical and rectangular parallelepipeds. Matrix is assumed to be homogenous and isotropic. All the fractures are continuous and may have different spacing and width to simulate certain degree of anisotropy.\n\nSHAPE FACTOR\n\nMost NFR modeling require shape factor in the matrix-fracture transfer function, although, some approaches do not require shape factor (Firoozabadi and Thomas, 1990). Shape factor is commonly used and it is a crucial parameter in matrix-fracture transfer function. Warren and Root (1963) have defined rectangular shape factor as:",
null,
"(2)\n\nwhere, n refers to the number of fractures sets and:",
null,
"(3)",
null,
"(4)",
null,
"(5)\n\nIn subsequent years, Kazemi et al. (1976) developed new shape factor for their simulator using finite difference method. Their shape factor for rectangular geometry is:",
null,
"(6)\n\nUeda et al. (1989) research concluded that Kazemi’s shape factor needed to be multiplied by a factor of 2 or 3 in order to get more realistic pressure distributions. Their works were later supported by Lim and Aziz (1995), which showed that Kazemi’s shape factor needed to be adjusted with factor of ~2.5.\n\nCoats (1989) derived shape factor that are doubled of Kazemi’s shape factor. Coat’s shape factor for rectangular geometry is:",
null,
"(7)\n\nThe method used by Coats (1989) is Fourier finite sine transform and integration. Fourier transformation was also used by Chang (1993) and Lim and Aziz (1995) to arrive at another shape factor different from Coats (1989). Coat’s work became the main references for both of them. They continued Coat’s work but with different boundary conditions. By using pressure boundary conditions, they arrived at similar shape factor for rectangular geometry Eq. 8:",
null,
"(8)\n\nLim and Aziz (1995) added that the total amount of mass entered a system at time t, Mt with corresponding mass after infinite time, M8 can be expressed as in Eq. 9. In additions, the matrix-fracture transfer rate can be expressed as in Eq. 10:",
null,
"(9)",
null,
"(10)\n\nEquation 11-13 are the analytical solutions given by Lim and Aziz (1995) for single phase flow in fracture. The solutions can be differentiated with respect to time and related with Eq. 10 to obtain respective shape factor:",
null,
"(11)",
null,
"(12)",
null,
"(13)\n\nMeanwhile, Chang (1993) has derived another shape factor using constant flow rate boundary conditions which are:",
null,
"(14)\n\nOn the other hand, Quintard and Whitaker (1995) used assumption of infinite permeability in the fracture to set the boundary value problem for double porosity flow. By solving using Fourier series for rectangular geometry, they reached conclusion of shape factors which are:",
null,
"(15)",
null,
"(16)",
null,
"(17)\n\nTable 1 summarized all the shape factor for rectangular geometries.\n\n Table 1: Shape factors for rectangular geometry",
null,
"Table 2: General data used in comparison study",
null,
"COMPARISON OF SHAPE FACTORS\n\nThe reservoir problem selected for this comparison study was a multiphase depletion run with 5x3x2 blocks and only one production well at (1,1,1). The 5x3x2 blocks were selected for this purpose to allow 3-ways flow between the blocks during simulation. There is no injection well and only one production well. This is to avoid complication of the problems which later would complicate the results analysis. The production runs with no flow constraints. Table 2 describes the details of the reservoir problem. The additional reservoir properties used for this comparison are from the Sixth SPE Comparative Solution Project (Firoozabadi and Thomas, 1990).\n\nCOMPARISON APPROACH\n\nLim and Aziz (1995) has provided analytical derivation of shape factors for single phase flow.\n\nIn the effort of deriving shape factors, they have shown that the total amount of mass entered a system at time t, Mt and the corresponding mass after an infinite time, M8 can be expressed as in Eq. 9. The Equation 9 is known as a dimensionless pressure, PD. This dimensionless pressure is a function of dimensionless time as shown in Eq. 18.\n\nThe equivalent fractures length, Le is given in the Eq. 3-4:",
null,
"and the analytical expressions for single phase flow is shown in the Eq. 11-13:",
null,
"(18)\n\nIt is desired to know the effect of different shape factors in multiphase flow NFR simulation. The comparison is done by representing the simulation results in PD and tD. A basic double porosity simulator is used to solve the reservoir problem. The simulator solves the pressure and saturation by using Implicit Pressures Explicit Saturation (IMPES) method. When PD is plotted against tD, the gradient (PD/tD) gives indication of the matrix-fracture transfer rate. From Eq. 19, it is shown that the matrix-fracture transfer rate is proportional to the gradient (PD/tD). Eq. 19 can be found by extending the analytical solution given by Lim and Aziz (1995). The derivation new correlation is shown in the appendix:",
null,
"This relation is for analyzing the results. In additions, the results are compared against the Lim and Aziz (1995) analytical solutions.\n\nSingle set of fractures: Shape factor for single set of fractures is used when fractures are assumed to be present only in one direction (Fig. 2). The flow between matrix and fracture is assumed to be in a single direction. The direction of fractures is not necessary has to be in x-axis, but it can be at any axis such as straight y-axis or slanted x-y axis. This also applies to double and triple set of fractures.\n\nThe comparison result for single set of fractures is presented in Fig. 5. By applying Eq. 19, it is deduced that steeper gradient shows higher matrix-fracture transfer rate. At tD<0.5, Kazemi’s shape factor has showed much lower transfer rate as compared with other researcher’s shape factor. It also require twice the tD value to reach the same PD value. Note that all the curve gradient is very high at early tD and becoming lower as tD increases. This indicate that flow is high at early time and decreasing with time. This curve behaviour is associated with rate of change of flow Eq. 19:",
null,
"(19)",
null,
"Fig. 2(a-b): (a) Matrix block with single set fractures and (b) Flow boundary between matrix and fractures",
null,
"Fig. 3(a-b): (a) Matrix block with double set fractures and (b) Flow boundary between matrix and fractures\n\nConversely, Warren and Root (1963) shape factor yield the highest Δgradient. The Warrenand Root (1963) shape factor flow is the highest at early tD and then, it dropped quickly. In general, all the results converges into PD and the curve gradient becoming linear. This indicate that the flow is becoming steady. When the gradient is 0, it literally means that no flow between matrix-fracture and it occurs at PD = 1. At PD = 1, the pm must be equal to pf since the initial pressure, pi always constant. This supported by Eq. 1 whereby there must be a pressure difference to initiate the flow.\n\nAs stated earlier, analytical solution from Lim and Aziz (1995) is based on single phase flow and direct comparison cannot be made. However, it can be observed that the analytical solutions has much higher Δg as compared with others. This denotes that the analytical solutions flow is very high at initial td and then decreases rapidly. The reservoir problem is a multiphase flow problem whereby the transfer of fluids are much more complex. The components that present in a multiphase flow is water, oil and gas. Total multiphase matrix-fracture transfer rate is a summation of all the components, while single phase matrix-fracture transfer rate is having only the oil components. Analytical solution cannot be compared directly with results of other shape factors but it can serves as a reference line. This can be used to detect shape factor results that yield faster transfer rate by comparing the gradient and PD.\n\nTwo and three set of fractures: Shape factor for two sets of fractures is used when fractures assumed a matchstick model (Fig. 3) and three sets of fractures is used when fractures assumed sugarcube model (Fig. 4).",
null,
"Fig. 4(a-b): (a) Matrix block with triple set fractures (b) Flow boundary between matrix and fractures",
null,
"Fig. 5: Single set of fractures",
null,
"Fig. 6: Two sets of fractures\n\nThe comparison for two and three set of fractures are presented in Fig. 6 and 7, respectively. Both the results can be analyzed using the same approach discussed for the parallel plate model.",
null,
"Fig. 7: Three sets of fractures\n\nIt is interesting to note that at early tD, Warren and Root (1963) shape factor yields the highest flow as compared with the analytical solution.\n\nIn general, Warren and Root (1963) has the highest rate of change of flow while Kazemi et al. (1976) shape factor produces the lowest rate of change of flow. These two models can be taken as the two extreme bounds where (Quintard and Whitaker, 1995; Chang, 1993; Lim and Aziz, 1995 and Coats, 1989) shape factors produce intermediate rate of change of flow.\n\nIt is apparent that higher shape factor will results in high rate of change of flow. The flow behavior is such that there is an initial high flow between matrix-fracture and then followed by quick drop of flow rate. Figure 8 is an example of 15x1x1 line injection model investigated using different shape factors.",
null,
"Fig. 8: Comparison of different shape factors for a line injection problem taken Almengor et al. (2002)\n\nIt showed that the oil production using shape factor from Warren and Root (1963) has the highest productivity before the fifth year, followed by a quick drop in the productivity index. Kazemi et al. (1976) shape factor resulted in the lowest productivity among the models in the initial time but the production picked up after the fifth-year. Not surprisingly, oil production using shape factors from Coats (1989) and Lim and Aziz (1995) are bounded in between the results by Warren and Root (1963) and Kazemi et al. (1976).\n\nCONCLUSION\n\nComparison of shape factors is presented in dimensionless parameters, PD and tD. Gradient PD/tD are proportional to the matrix-fracture transfer rate. Higher gradient PD/tD indicate higher transfer rate. The dimensionless comparison displays the different flow behavior of different shape factors.\n\nNOMENCLATURE\n\n t = time, T L = length, L V = volume, L3 q = matrix-fracture transfer rate, L3/T Q = matrix-fracture transfer rate, M/(L3T) p = pressure, M/(LT2) k = absolute permeability, L2 σ = shape factor, 1/L2 μ = viscosity, M/(LT) n = set of fractures φ = porosity, fraction c = compressibility, fraction ρ = density, M/L3\n\nSUBSCRIPTS\n\n i = Initial t = Total m = Matrix ∞ = Infinity f = Fracture d = Dimensionless\n\nAPPENDIX\n\nDerivation of dimensionless correlation: Equation 13 can be differentiated with time to obtain Eq. A1:",
null,
"(A1)\n\nBy using finite approximations for first degree derivative, Eq. A1 can be rewritten as Eq. A2:",
null,
"(A2)\n\nFor Sufficient small points and if pf = Constant, then:",
null,
"(A3)\n\nSubstitute Eq. A3 into the given Eq. 10, we get:",
null,
"(A4)\n\nIf:",
null,
"is constant/equivalent, while t varies, then Eq. A2 can be rewrite into Eq. A5:",
null,
"(A5)\n\nSubstitute Eq. A6 into Eq. A5, we get Eq. A6:",
null,
"(A6)\nREFERENCES\n1: Almengor, J.G., G. Penuela, M.L. Wiggins, R.L. Brown, F. Civan and R.G. Hughes, 2002. User's guide and documentation manual for boast-nfr for excel. The University of Oklahoma, Office of Research Administration, Norman, Oklahoma, USA., DE-AC26-99BC15212.\n\n2: Barenblatt, G.I., I.P. Zeltov and I.N. Kochina, 1960. Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks. J. Applied Math. Mech., 24: 1286-1303.\n\n3: Coats, K.H., 1989. Implicit compositional simulation of single-porosity and dual-porosity reservoirs. Proceedings of the SPE Symposium on Reservoir Simulation, February 6-8, 1989, Houston, Texas, USA -.\n\n4: Chang, M.M., 1993. Deriving the shape factor of a fractured rock matrix. Technical Report NIPER-696, (DE93000170), National Institute for Petroleum and Energy Research, Bartlesville, Oklahoma, USA., pp: 40.\n\n5: Firoozabadi, A. and L.K. Thomas, 1990. Sixth SPE comparative solution project: Dual-porosity simulators. J. Petroleum Technol., 42: 762-763.\nCrossRef |\n\n6: Kazemi, H., L.S. Merrill, K.L. Porterfield and P.R. Zeman, 1976. Numerical simulation of water-oil flow in naturally fractured reservoirs. SPE J., 16: 317-326.\nCrossRef |\n\n7: Lim, K.T. and K. Aziz, 1995. Matrix-fracture transfer shape factors for dual-porosity simulators. J. Petroleum Sci. Eng., 13: 169-178.\nCrossRef |\n\n8: Quintard, M. and S. Whitaker, 1995. Transport in chemically and mechanically heterogeneous porous media. II: Comparison with numerical experiments for slightly compressible single-phase flow. Adv. Water Resour., 19: 49-60.\nCrossRef |\n\n9: Sarma, P. and K. Aziz, 2006. New Transfer functions for simulation of naturally fractured reservoirs with dual-porosity models. SPE J., 11: 328-340.\nCrossRef | Direct Link |\n\n10: Ueda, Y., S. Murata, Y. Watanabe and K. Funatsu, 1989. Investigation of the shape factor used in the dual-porosity reservoir simulator. Proceedings of the Society of Petroleum Engineers Asia-Pacific Conference, September 13-15, 1989, Sydney, Australia -.\n\n11: Warren, J.E. and P.J. Root, 1963. The behaviour of naturally fractured reservoirs. SPE J., 3: 245-255.\nCrossRef |",
null,
"© 2021 Science Alert. All Rights Reserved"
] | [
null,
"https://crossmark.crossref.org/images/logos/cm_sbs_072_click.png",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq1-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig1-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq2-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq3-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq4-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq5-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq6-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq7-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq8-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq9-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq10-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq11-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq12-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq13-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq14-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq15-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq16-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq17-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/tab1-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/tab2-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/img1-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq18-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq19-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/eq20-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig2-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig3-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig4-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig5-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig6-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig7-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/fig8-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a1-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a2-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a3-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a4-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/img2-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a5-2k13-308-314.gif",
null,
"https://docsdrive.com/images/ansinet/jas/2013/a6-2k13-308-314.gif",
null,
"https://scialert.net/fulltext/images/spacer.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8903971,"math_prob":0.9333981,"size":12730,"snap":"2021-21-2021-25","text_gpt3_token_len":3091,"char_repetition_ratio":0.14702184,"word_repetition_ratio":0.053206652,"special_character_ratio":0.23566379,"punctuation_ratio":0.10352234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9822655,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T02:17:42Z\",\"WARC-Record-ID\":\"<urn:uuid:cbb775c4-d7c6-4db2-9d78-bd9f7bc2e96c>\",\"Content-Length\":\"54382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36e9eb61-d0d5-4be1-afc1-cff29b7f5f37>\",\"WARC-Concurrent-To\":\"<urn:uuid:955a3226-9978-4d3a-821c-e9d2704d9f5d>\",\"WARC-IP-Address\":\"104.26.14.227\",\"WARC-Target-URI\":\"https://scialert.net/fulltext/?doi=jas.2013.308.314&org=11\",\"WARC-Payload-Digest\":\"sha1:AZKDUMZDXQRZO5GJKYCDXN6JWXU4KSKG\",\"WARC-Block-Digest\":\"sha1:M5NNXYBFWPTLSXFX37YYHROBYVR3WLJ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488560777.97_warc_CC-MAIN-20210624233218-20210625023218-00032.warc.gz\"}"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.