URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://grzegorzmaczkacoach.pl/06-2006-3623.html | [
"",
null,
"# equation curve inventor\n\n• ##### Home\n\nBlog. Read the blog to learn about my latest activities and about programming Inventor and Fusion 360. There are posts that are useful for both beginners and advanced users of the APIs. I'll be posting here instead of the \"Mod the Machine\" blog.\n\n• ##### Autodesk Inventor 2013 3D Equation Curves Examples\n\n· I was working on a turbocharger housing and needed some more information on 3D equation curves. I found a really sweet write-up on the discussion boards posted by Glenn Chun, Autodesk employee. If you are working with helixes, check this one out. Variable pitch, variable radius helixes, differing variations and more.\n\n• ##### Sketch equation curve\n\nExplicit equation curves use a single equation to define y as a function of x. Units, Parameters, and Functions in equation curves. Balance the units in equations. Balancing units in equations often requires multiplying or dividing by 1, or multiple units of length. When the units are not a single unit of length, the equation text is red and an ...\n\n· How to draw a catenary or other parametric curve in Inventor?: The catenary curve (also called a chainette, alysoid, funicular, see wiki) - the curve of a hanging chain, cable or other fibre in a homogeneous gravitational field - is used in many design tasks. To create sketch shapes as a catenary in Inventor you can use the function \"Equation curve\" and specify the curve parametrically.\n\n• ##### Chapter 8. Curves and Surfaces\n\nIn Inventor, a NURBS curve can have an order of up to 8. However, higher orders introduce oscillation into the curve and can behave unpredictably when a control point moves. 
Cubic curves (order of 4) are the most commonly used curves, since they provide enough control for most geometric modeling applications without the drawbacks of higher-order ...\n\n• ##### AutoCAD Inventor :: Create Hyperbola Curve Using Equation ...\n\n· AutoCAD Inventor :: Create Hyperbola Curve Using Equation Curve (Autodesk 2013) Feb 19, 2013. I want to draw a hyperbola curve whose equation is x^2/a^2 - y^2/b^2 = 1. May I use the equation curve tool to make the above hyperbola? I have also made a hyperbola curve of the aforesaid equation through the conventional method on mathematical grounds in the past.\n\n• ##### 2D Spiral Equation Curve In Inventor 2013 – Cadline Community\n\n· Will equation curves in Inventor 2013 come to the rescue? Well yes they do – it's a whole lot simpler this way. So I've created a spiral using my very own polar equation curve, and discovered along the way that you can use parameters in there! Happy days, so download the part file at the bottom of the page if you are in need of a spiral ...\n\n• ##### To Create and Edit Equation Curves | Inventor 2016 ...\n\n· Create 2D Equation Curves In an active sketch, click Sketch tab Create panel Equation Curve (2D sketch) or 3D Sketch tab Draw panel Equation Curve (3D sketch). In the mini-toolbar, choose a curve type: Parametric. Uses two equations to evaluate X and Y or r and θ. Explicit. Uses one equation to evaluate Y or r and a range for X or a. Choose a coordinate system: Cartesian.\n\n• ##### Curve by equation\n\n· Open curve-equation_vss-method-trl3.txt 8. If prompted for Single Step Trail, select Continue Watch Playback. You may see it focus back to PRT000001, which was used to load tree settings. You can minimize and maximize the proe icon from the system tray to see the part being created.\n\n• ##### Global Variables and Equation Driven Design\n\n· Global Variables can be used to drive equations and dimensions. 
Say for example, you were designing a pipe and wanted the pipe length to remain relative to some other dimension—maybe your pipe has some length constraints relative to its installation location. You could name a Global Variable as \"PipeLimit\".\n\n• ##### Scaling 2D Equation Curve : AutodeskInventor\n\nScaling 2D Equation Curve. Hey guys, so my problem is that I have an equation curve that I want to scale because it's a parameter that constantly needs changing. I cannot modify the equation itself because changing the expression modifies certain things that are not supposed to change; I purely want to scale its dimension.\n\n• ##### MOSFET\n\nThe metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of insulated-gate field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ...\n\n• ##### silent 3 bladed prop\n\nUpdate 29.11.15: Added 9x4.5 blade set (Prop4). I hadn't done this for a while so I hope I got it right. I finally got the equation curves in Inventor all set; if someone wants a specific airfoil profile let me know. This has less tip turbulence and makes a lot less noise. I use it as a silent replacement for 5-7\" props for my quad. 3 mm axle, print as solid. I print it at high resolution and ...\n\n• ##### Create equation curve in sketch\n\nCreate equation curve. Cartesian equation curves use X, Y coordinates. Parametric equation curves use equations to define x and y as a function of a variable t. On the ribbon, click Sketch tab Create Panel Equation Curve. Then select the type: Parametric Cartesian. In the x(t) box, enter the equation for x …\n\n• ##### Inventor Tips and Tricks – Using Equation Curves for ...\n\n· Location of the Equation Curves Tool in the 3D Sketch Environment. The Equation Curves tool allows the user to create equations using three different coordinate systems: Cartesian (X, Y coordinates), Cylindrical (Radius, Length and Angle) and Spherical (Radius, Azimuthal Angle, and Polar Angle); all based on time. The desired curve …\n\n• ##### Equation curve is throwing errors trying to draw cycloid ...\n\nLooking to run Fusion 360/Autodesk Inventor. Now I already have all of the other parts I need as I had them laying around/bought them. I've got a 240Gb SSD, 16Gb 2400Mhz RAM, an AMD A8-9600 APU @ 3.1 Ghz, Radeon R7 Graphics @ 900Mhz (the APU is definitely a limiting factor, I'm waiting to buy a friend's 2600X when he upgrades his PC) all on a ...\n\n• ##### vectors\n\n· The equation of the curve is y = a(1 − u)^3 + 3au(1 − u)^2 − 3au^2(1 − u) − au^3, where a = 20. In a CAD system, you can construct this curve by using the control points P0 = (−60, 20), P1 = (−20, 20), P2 = (20, −20), and P3 = (60, −20). 
So, this is the case k = 20 of #1 above.\n\n• ##### The Modeling of a Propeller Turbine Runner in 3D Solid ...\n\nAutodesk Inventor has introduced a new tool, called Equation Curve, that helps sketch spline lines either in the 2D plane or in 3D space, simplifying the task of 3D modelling of a propeller turbine blade. The Equation Curve tool requires code for creating the spline lines.\n\n• ##### Inventor: Core Skills | Pluralsight\n\n· Inventor: Core Skills. Authors: Matt Perez, Javier Chavez, Joris Steurs, Billy Wittenberg, Neil Cross, Jarle Oudalstoel. Inventor® mechanical design and 3D CAD software is a professional-grade 3D mechanical design, documentation, and product simulation tool that gives you the power to drive innovation.\n\n• ##### Solved: Equation Curve Involute\n\n· Right click on the curve -> Edit Equation Curve. Select 'Show Units'. Note that cos and sin take degrees, not radians -> \"d2 * (sin(1 deg * t) - t * cos(1 deg * t))\". You can manually replace 'deg' with 'rad' in the edit box. Also, reduce the upper evaluation value of 't' from 900 down to 20 or so, or you will be waiting a while.\n\n• ##### Autodesk Inventor. Curbe prin ecuatii. – Man and Machine ...\n\n· Autodesk Inventor. More or less well-known functions, Episode 2. Curves by equations (Equation Curve). Curves generated from equations are used to model complex geometries, for example gear tooth profiles or the spiral body of a centrifugal pump.\n\n• ##### Equation Driven Curve | Fusion 360 | Autodesk App Store\n\n· This add-in creates 3D curves from user-supplied parametric equations. Curves can be specified in Cartesian, cylindrical, or spherical coordinates. A live preview is shown in Autodesk® Fusion 360™ as the parameters are updated, and when the command is executed, the curve …\n\n• ##### Pluralsight – Inventor: Working with Curves and Splines ...\n\n· In this course, Inventor: Working with Curves and Splines, you'll learn the basics of 2D and 3D curves in Inventor. First, you'll begin by learning the difference between the surface and continuity options available in Autodesk Inventor, and an overview of Class A surfacing. ... you'll explore equation driven curves, what types of ..."
] | [
null,
"https://grzegorzmaczkacoach.pl/images/loader.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87542874,"math_prob":0.90736,"size":9006,"snap":"2022-05-2022-21","text_gpt3_token_len":2212,"char_repetition_ratio":0.13541435,"word_repetition_ratio":0.17848101,"special_character_ratio":0.24150567,"punctuation_ratio":0.12808989,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9841982,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T11:13:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a712dce3-fb26-4395-9ef0-8f5a0f942fdd>\",\"Content-Length\":\"20865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0179aca-51f8-4eb1-8168-d765a6e53192>\",\"WARC-Concurrent-To\":\"<urn:uuid:18fcab85-e35e-4e53-a098-de2f97c27c74>\",\"WARC-IP-Address\":\"104.21.54.54\",\"WARC-Target-URI\":\"https://grzegorzmaczkacoach.pl/06-2006-3623.html\",\"WARC-Payload-Digest\":\"sha1:L6RLEAYZICT6XGIOWP64IBRRWR6MXKVF\",\"WARC-Block-Digest\":\"sha1:ERWPRKS7O5DQ2O6QKARTISFQYLIAM62I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662531779.10_warc_CC-MAIN-20220520093441-20220520123441-00459.warc.gz\"}"} |
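The equation-curve snippets above repeatedly reference the involute parametric form, including the quoted forum expression `d2 * (sin(1 deg * t) - t * cos(1 deg * t))`. As a sketch of what such an equation curve evaluates to, here is a minimal Python sampler for a circle involute; the function name and parameters are illustrative, not Inventor's API:

```python
import math

def involute_points(base_radius, t_end_deg, steps=100):
    """Sample an involute of a circle, the curve used for gear tooth flanks.

    Mirrors the parametric form quoted in the forum post above:
        x = r * (cos t + t * sin t)
        y = r * (sin t - t * cos t)
    with t in radians (Inventor's sin/cos default to degrees, hence the
    'deg'/'rad' discussion in the thread)."""
    pts = []
    for i in range(steps + 1):
        t = math.radians(t_end_deg) * i / steps
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

The curve starts on the base circle at (r, 0) and unwinds outward; its distance from the origin grows as r·sqrt(1 + t²), which is why the thread advises keeping the upper bound on t small.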
https://en.citizendium.org/wiki/Blackbody_radiation | [
"",
null,
"",
null,
"A black body absorbs and then re-emits all incident EM radiation. By definition it has an absorptivity and emissivity of 1, and a transmissivity and reflectivity of 0. The Planck black body equation describes the spectral exitance of an ideal black body. The study of black-body radiation was an integral step in the formulation of quantum mechanics.\n\n### Planck's Law: Wavelength\n\nFormulated in terms of wavelength:"
null,
"where:\n\nSymbol Units Description",
null,
"",
null,
"Input wavelength",
null,
"",
null,
"Input temperature",
null,
"",
null,
"Planck's constant",
null,
"",
null,
"Speed of light in vacuum",
null,
"",
null,
"Boltzmann constant\n\nNote that the input",
null,
"is in meters and that the output is a spectral irradiance in",
null,
". Omitting the",
null,
"term from the numerator gives the blackbody emission in terms of radiance, with units",
null,
"where \"sr\" is steradians.\n\n### Planck's Law: Frequency\n\nFormulated in terms of frequency:",
null,
"where:\n\nSymbol Units Description",
null,
"",
null,
"Input frequency\n\nAll other units are the same as for the Wavelength formulation. Again, dropping the",
null,
"from the numerator gives the result in radiance rather than irradiance.\n\n### Properties of the Planck Equation\n\nSetting the first derivative with respect to wavelength to zero leads to the wavelength of maximum exitance. This is known as the Wien displacement law.\n\nA closed-form solution exists for the integral of the Planck blackbody equation over the entire spectrum. This is the Stefan-Boltzmann equation. In general, however, there is no closed-form solution for the definite integral of the Planck blackbody equation over an arbitrary band; numerical integration techniques must be used.\n\nThe relationship between the ideal blackbody exitance and the actual exitance of a surface is given by the emissivity.\n\nAn ideal blackbody at 300 K (about 27 °C) has a peak emission at 9.66 microns. It has virtually no self-emission before 2.5 microns, hence self-emission is typically associated with the \"thermal\" regions of the EM spectrum. However, the Sun can be characterized as a 5900 K blackbody and has a peak emission around 0.49 microns, which is in the visible region of the spectrum.\n\nThe Planck equation has a single maximum. The wavelength of peak exitance becomes shorter as temperature increases. The total exitance increases with temperature."
] | [
null,
"https://s9.addthis.com/button1-share.gif",
null,
"https://en.citizendium.org/images/4/4f/Statusbar2.png",
null,
"https://en.citizendium.org/images/math/8/b/d/8bdcc7d42f2fa1f4f5b31527c33449ed.png ",
null,
"https://en.citizendium.org/images/math/e/0/5/e05a30d96800384dd38b22851322a6b5.png ",
null,
"https://en.citizendium.org/images/math/e/f/c/efc831aabf9ade051f781ccb54c2dcdb.png ",
null,
"https://en.citizendium.org/images/math/b/9/e/b9ece18c950afbfa6b0fdbfa4ff731d3.png ",
null,
"https://en.citizendium.org/images/math/5/f/f/5ff406fb8f2c6bd8eee29c3988cf1967.png ",
null,
"https://en.citizendium.org/images/math/7/b/9/7b9f615e79646d95c752beb7f5636462.png ",
null,
"https://en.citizendium.org/images/math/0/b/2/0b280a29502767fbbc360d2730837ca4.png ",
null,
"https://en.citizendium.org/images/math/4/d/2/4d22301ac3e2df8a8e77912eac28486f.png ",
null,
"https://en.citizendium.org/images/math/6/b/8/6b8693dfb105752676fe7fbae620ad9e.png ",
null,
"https://en.citizendium.org/images/math/5/3/8/538ecca5908ab713fc5eaba616e4d5b9.png ",
null,
"https://en.citizendium.org/images/math/1/2/5/1251fabd4768247ddffd3e2502f2e90a.png ",
null,
"https://en.citizendium.org/images/math/e/0/5/e05a30d96800384dd38b22851322a6b5.png ",
null,
"https://en.citizendium.org/images/math/6/d/5/6d55bfacd8018d376bae3beca4ca707c.png ",
null,
"https://en.citizendium.org/images/math/5/2/2/522359592d78569a9eac16498aa7a087.png ",
null,
"https://en.citizendium.org/images/math/d/f/f/dffe04fb8989b8074286ec4a938216ca.png ",
null,
"https://en.citizendium.org/images/math/2/e/3/2e3fa629add159727545747894c2035b.png ",
null,
"https://en.citizendium.org/images/math/9/e/3/9e3669d19b675bd57058fd4664205d2a.png ",
null,
"https://en.citizendium.org/images/math/d/c/d/dcd356af1bae96d7d425e505aa0d26e9.png ",
null,
"https://en.citizendium.org/images/math/5/2/2/522359592d78569a9eac16498aa7a087.png ",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88087463,"math_prob":0.90875614,"size":2518,"snap":"2020-45-2020-50","text_gpt3_token_len":595,"char_repetition_ratio":0.12768497,"word_repetition_ratio":0.01608579,"special_character_ratio":0.22557585,"punctuation_ratio":0.12910284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97980434,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,null,null,null,null,2,null,null,null,2,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,2,null,null,null,2,null,2,null,8,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T08:27:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1b7675b7-2fec-4b9b-bf23-30fd3645b46e>\",\"Content-Length\":\"35423\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a946013-ed6a-41a2-afbe-e2f8d7a79035>\",\"WARC-Concurrent-To\":\"<urn:uuid:74cc705e-ce92-44cf-80dd-2c4b83000316>\",\"WARC-IP-Address\":\"67.202.87.171\",\"WARC-Target-URI\":\"https://en.citizendium.org/wiki/Blackbody_radiation\",\"WARC-Payload-Digest\":\"sha1:Z7H2WWSAMKY5NQAN6ZESAPOW4MIKGIUR\",\"WARC-Block-Digest\":\"sha1:7AWOJSZZVATWPG6JJXZNDD7GABEYFEXW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141211510.56_warc_CC-MAIN-20201130065516-20201130095516-00349.warc.gz\"}"} |
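The blackbody article in the row above states Planck's law, the Wien displacement law, and the quoted peaks of 9.66 microns at 300 K and 0.49 microns at 5900 K. Those relations can be checked numerically; the following Python sketch (CODATA constant values; the function names are illustrative, not the article's code) reproduces the quoted peak wavelengths:

```python
import math

H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light in vacuum, m/s
KB = 1.380649e-23        # Boltzmann constant, J/K
B_WIEN = 2.897771955e-3  # Wien displacement constant, m K

def spectral_exitance(lam, temp):
    """Planck spectral exitance M(lambda, T) in W m^-2 per meter of
    wavelength.  Dropping the pi factor (part of 2*pi*h*c^2) would give
    radiance in W m^-2 sr^-1 per meter, as the article notes."""
    return (2.0 * math.pi * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp))

def wien_peak(temp):
    """Wavelength of maximum exitance (Wien displacement law), meters."""
    return B_WIEN / temp
```

Here `wien_peak(300.0)` is about 9.66e-6 m and `wien_peak(5900.0)` is about 4.9e-7 m, matching the article's figures, and a numerical integral of `spectral_exitance` over wavelength approximates the Stefan-Boltzmann total sigma*T^4.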
https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3259-6 | [
"Volume 20 Supplement 25\n\n# Research on predicting 2D-HP protein folding using reinforcement learning with full state space\n\n## Abstract\n\n### Background\n\nProtein structure prediction has always been an important issue in bioinformatics. Prediction of the two-dimensional structure of proteins based on the hydrophobic polarity model is a typical non-deterministic polynomial hard (NP-hard) problem. Currently reported hydrophobic polarity model optimization methods, such as the greedy method, the brute-force method, and the genetic algorithm, usually cannot converge robustly to the lowest-energy conformations. Reinforcement learning, with the advantages of continuous Markov optimal decision-making and maximization of the global cumulative return, is especially suitable for solving global optimization problems over biological sequences.\n\n### Results\n\nIn this study, we proposed a novel hydrophobic polarity model optimization method derived from reinforcement learning which structured the full state space, and designed an energy-based reward function and a rigid overlap detection rule. To validate the performance, sixteen sequences were selected from the classical data set. The results indicated that reinforcement learning with full states successfully converged to the lowest-energy conformations for all sequences, while reinforcement learning with partial states folded only 50% of the sequences to the lowest-energy conformations. Reinforcement learning with full states hit the lowest energy on average 5 times in the last 100 episodes, which is 40% and 100% more than the three and zero hits by the greedy algorithm and by reinforcement learning with partial states, respectively.\n\n### Conclusions\n\nOur results indicate that reinforcement learning with full states is a powerful method for predicting two-dimensional hydrophobic-polarity protein structure. 
It has obvious competitive advantages compared with the greedy algorithm and reinforcement learning with partial states.\n\n## Background\n\nThe biological function of proteins is determined by their spatial folding structure. Understanding the folding process of proteins is one of the most challenging issues in the field of bioinformatics. The thermodynamic hypothesis in bioinformatics holds that the protein form found in nature is the most stable form (the lowest free energy). Protein sequences determine protein structure, and protein structure determines protein function [2, 3]. Current research has put forward many computational theoretical models, such as the hydrophobic polarity (HP) model, the AB off-lattice model (Toy model) and the continuous model of Euclidean space. The HP model is a widely studied simplified protein folding model with high confidence in the prediction of the protein helical structure. In this model, each amino acid is treated as either hydrophobic (H) or hydrophilic (P) and represented as a point on a two-dimensional lattice structure. The rationale behind the HP model is that the hydrophobicity of amino acids is the main driving force for small globulins to form a natural conformation. The primary structure analysis of protein sequences involves the analysis of amino acid physicochemical properties (such as hydrophilicity and hydrophobicity) and sequence patterns, so 2D-HP protein structure prediction refers to predicting the folding structure based on the primary structural analysis of proteins. Although the HP grid model is a simplified model, solving the protein folding problem of this model is still difficult. This problem has been proven to be NP-hard, which means that there is no known solution algorithm that is both complete and not too slow.\n\nCurrently, the methods used to solve the HP model optimization include the evolutionary algorithm (EA), genetic algorithm (GA), ant colony optimization (ACO), and supervised classification methods. 
The genetic algorithm is a method of searching for optimal solutions by simulating natural evolutionary processes. The asymptotic analysis of the computational complexity of the GA and EA is difficult and is usually limited to specific problems [5, 6]. The ACO is a probabilistic algorithm used to find optimized paths. In most cases, the ACO has high computational complexity [7,8,9]. Supervised classification is a method of pattern recognition, and its training process requires external supervision [10,11,12]. The versatility of these methods is limited; in particular, when searching for the energy minimum, they easily fall into a locally optimal solution, which makes it difficult to achieve global optimization [13,14,15]. Protein structure prediction has two major problems. The first is how to abstract a mathematical model that can reflect the interactions between amino acids and how to design its energy function. The second is how to find an efficient search method for the exploration of the structure and then find the structure with the lowest energy [16,17,18] within limited central processing unit (CPU) power and time.\n\nRecently, reinforcement learning has been successfully applied to many aspects of the biological field, such as biological sequence comparison, genome sequencing and so on, and it has become more widespread in other fields, such as vehicle positioning and recognition, game automation detection and robot simulation. The advantage of reinforcement learning is that the training process does not require external supervision. The agent conducts autonomous learning based on its interaction experience with the environment, and can find the overall optimal solution based on the reward; it does not easily fall into a local optimum. 
For example, transfer learning in reinforcement learning is considered to be an optimal learning strategy under limited data conditions, especially in areas where labeled data are scarce and heterogeneously distributed, such as clinical medical diagnosis and animal behavior control.\n\nTherefore, this paper proposes an HP model optimization method based on reinforcement learning. In the reinforcement learning framework, the state set and state transition space are given according to the length of the HP sequence to be tested. The agent uses the Q-learning algorithm to select different actions in different states to obtain different reward values, and continuously calculates and updates the Q-value table. Finally, the agent selects the optimal solution according to the converged Q-value table to obtain the optimal structure. This method has strong universality and simple calculation. It can predict the optimal structure well for short sequences.\n\n## Methods\n\n### The framework based on reinforcement learning\n\nIn recent years, some scholars have proposed simplified models for protein folding problems. The most typical one is the two-dimensional hydrophobic-polarity (2D-HP) grid model proposed by Dill et al. According to the differences in the hydrophilicity and hydrophobicity of each type of amino acid, amino acids are divided into two categories: hydrophobic amino acids (indicated by H, a black circle) and hydrophilic amino acids (indicated by P, a white circle), so any protein chain can be expressed as a finite-length string of H and P [24, 25]. A legitimate protein space configuration must meet the following three constraints:\n\n• The center of each circle in the sequence must be placed on an integer coordinate in two dimensions.\n\n• Any two adjacent circles in the chain must be adjacent to each other in 2D space. 
That is, the distance between adjacently numbered circles is 1.\n\n• Each integer grid point in 2D space can hold at most one circle; that is, no two circles overlap.\n\nThe reinforcement learning method is used to solve the HP 2D sequence model optimization problem, which can be converted into a Markov decision process and solved by the Q-learning algorithm. The framework is shown in Fig. 1.\n\n#### Environment\n\nAmino acids are classified into H (hydrophobic) and P (hydrophilic) according to their hydrophilicity and hydrophobicity. In this case, the amino acid sequence is converted into an HP sequence. Using the HP sequence as input data, the entire state set S of the sequence corresponds to the environment part of reinforcement learning.\n\n#### Action set A\n\nAction set A consists of 4 actions that correspond to four directions: L (Left), U (Up), R (Right), D (Down), that is, A = {a1, a2, a3, a4}, where a1 = L, a2 = U, a3 = R, a4 = D.\n\nIn the training process, the agent uses the ε-greedy policy to select the action (ε ∈ [0, 1]), which means that the agent explores other actions with probability ε and with the remaining probability 1 − ε acts greedily, that is, takes the best known action, while constantly calculating and updating the Q value.\n\n#### Result output pool\n\nThe theory shows that, as long as the number of training episodes is large enough, the Q value will converge to the optimal value. At the end of training, the agent adopts the greedy policy to choose the optimal action in each state according to the converged Q values, and thereby obtains the optimal structure of the HP model. Different folded structures with the lowest energy are the final output results.\n\n### The full state set S of 2D-HP model\n\nThe initial state of the agent in the environment is s1. For a two-dimensional sequence of length n, its state space S consists of $$\frac{4^n-1}{3}$$ states. 
When the state of the first amino acid is fixed, the possible states of each successive amino acid are generated from the four states (up, down, left, right) reachable from each state of the previous amino acid; that is, the number of possible states of the next amino acid is four times the number for the previous amino acid. The total number of states is the sum of a geometric series with an initial value of 1 and a common ratio of 4, as shown in Eq. (1):\n\n$$S=\frac{1\times \left(1-{4}^n\right)}{1-4}=\frac{4^n-1}{3}$$\n(1)\n\nSo $$S=\left\{{s}_1,{s}_2,\dots, {s}_{\frac{4^n-1}{3}}\right\}$$. For example, when there is only one amino acid in the sequence, there is only one state s1 in the whole state space. When there are two amino acids, the possible states of the second amino acid consist of the four states s2, s3, s4, s5, so there are 5 states s1, …, s5 in the whole space. Similarly, when there are three amino acids, the possible states of the third amino acid consist of the four states (up, down, left, right) reachable from each of the four states of the second amino acid, so the third amino acid may have 16 states, namely s6, s7, …, s21, and the whole state set has 21 states; continuing in this way, the states of all subsequent amino acids are obtained.\n\nAt the same time, we need to define the state transfer function T : S × A → S of the HP model, that is, T(s, a) = s′. The process by which the agent takes action a in state s to reach the successor state s′ can be written concretely as shown in Eq. (2):\n\n$$T\left({s}_{\frac{4^{i-1}-1}{3}+k},{a}_l\right)={s}_{\frac{4^i-1}{3}+4\times \left(k-1\right)+l}$$\n(2)\n\nwhere i ∈ [1, n − 1] is the index of the amino acid in the sequence, and k ∈ [1, 4^(i − 1)] represents the kth state among the 4^(i − 1) states of the ith amino acid. 
l ∈ [1, 4] represents the number corresponding to the action.\n\nThis means that the agent can move to one of four possible successor states from the state s ∈ S by performing one of four possible actions. It should be noted that each state s′ ∈ S can be accessed from the state s.\n\n### The new definition of full state space of 2D-HP model\n\nFurther research has found that when the number of actions is reduced to three, a simpler representation of the state space can be obtained. The action set can be described as A = {a1, a2, a3}, where a1 = Left, a2 = Up, a3 = Right. Then for a two-dimensional sequence of length n, the state space S has $$\frac{3^n-1}{2}$$ states. The number of states is calculated in the same way as before, as shown in Eq. (3):\n\n$$S=\frac{1\times \left(1-{3}^n\right)}{1-3}=\frac{3^n-1}{2}$$\n(3)\n\nSo $$S=\left\{{s}_1,{s}_2,\dots, {s}_{\frac{3^n-1}{2}}\right\}$$. Accordingly, the state transfer function is updated to Eq. (4):\n\n$$T\left({s}_{\frac{3^{i-1}-1}{2}+k},{a}_l\right)={s}_{\frac{3^i-1}{2}+3\times \left(k-1\right)+l}$$\n(4)\n\nwhere i ∈ [1, n − 1] is the index of the amino acid in the sequence, k ∈ [1, 3^(i − 1)] represents the kth state among the 3^(i − 1) states of the ith amino acid, and l ∈ [1, 3] represents the number corresponding to the action.\n\n### Energy-based reward function with criteria\n\nThe protein folding thermodynamic hypothesis holds that the energy of a protein in its natural structure is the lowest [27, 28]. Therefore, the problem of predicting protein folding structures is to find the lowest-energy structure among all available structures for a given amino acid sequence. The choice of the energy function is especially important for this paper.\n\nThe energy value is determined only by the hydrophobic force. Each pair of hydrophobic amino acids that is not adjacent in the sequence but adjacent in 2D space contributes an energy of − 1; in all other cases, the energy contribution is 0. 
The energy value of the entire structure is the sum of the energies of each pair of hydrophobic amino acids that meets the requirements mentioned above in a legal configuration. A formal description of the energy E of a legal configuration of a protein of chain length n is as follows:\n\n$$E={\sum}_{i=1}^{n-1}{\sum}_{j=i+1}^n{W}_{ij}$$\n(5)\n\nwhere n is the length of the amino acid sequence, and both i and j are indices of amino acids in the sequence. And\n\n$${W}_{ij}=\left\{\begin{array}{c}-1, applicable\ conditions\\ {}0, other\ cases\end{array}\right.$$\n(6)\n\nwhere the applicable conditions are that the ith and jth amino acids are both hydrophobic and are not adjacent in the sequence but adjacent in 2D space.\n\nThe purpose of reinforcement learning is to maximize the objective function, that is, to maximize the reward. However, in the HP model problem, the ultimate goal is to minimize the energy function, so we take the absolute value of the energy function to make the objective positive. At the same time, using the absolute value of the energy as the reward on reaching the terminal state drives the trained structure closer to the ideal structure.\n\nIn the training process, the agent tends to place an amino acid on a lattice site where an amino acid was already placed, which is not allowed in the actual situation. This overlapping problem can be solved by setting the reward function. We define the reward function by flexible and rigid criteria.\n\n#### Flexible criterion\n\n• When the agent selects an action, it is allowed to place the succeeding amino acid on a lattice site where an amino acid was placed before. A negative reward (which can be regarded as a penalty) is then given, and the agent judges and optimizes so as to maximize the reward. 
Before reaching the terminal state, if the next amino acid is placed in an invalid position, the reward is set to − 10.

• Before reaching the terminal state, if the next amino acid is placed in a valid position (a blank site), the reward is set to 0.

• When the terminal state is reached, the absolute value of the energy of the final folded structure is given as the reward.

$$\mathrm{That}\ \mathrm{is}\ R=\left\{\begin{array}{cc}-10,& \begin{array}{c}i\in \left(1\sim n-1\right)\\ {} the\ ith\ amino\ acid\ is\ in\ the\ invalid\ position\end{array}\\ {}0,& \begin{array}{c}i\in \left(1\sim n-1\right)\\ {} the\ ith\ amino\ acid\ is\ in\ the\ valid\ position\end{array}\\ {}\left|E\right|,& i=n\end{array}\right.$$
(7)

where n is the length of the amino acid sequence, i is the index of the amino acid in the sequence, E is the total energy of the final folded structure, and R is the symbol used for the reward in this article.

#### Rigid criterion

In contrast to the flexible criterion, when the agent places the next amino acid, an action that would put it on a lattice site already occupied by an existing amino acid is called invalid and must be re-selected until a valid action occurs. A check matrix 'Check' is introduced here. For a sequence of length n, the check matrix is a 2D matrix with (2n − 1) rows and (2n − 1) columns.
A lattice site where an amino acid has already been placed is marked (and called an invalid position); within the current episode, no further amino acid can be placed there. This can be expressed as

$$Check=\left\{\begin{array}{c}1,\ \left(p,q\right)\ is\ the\ invalid\ position\\ {}0,\ \left(p,q\right)\ is\ the\ valid\ position\end{array}\right.$$
(8)

where (p, q) denotes the two-dimensional coordinates of an amino-acid placement.

• Before reaching the terminal state, the reward is set to 0.

• When the terminal state is reached, the absolute value of the energy of the resulting structure is given as the reward.

$$\mathrm{That}\ \mathrm{is}\ R=\left\{\begin{array}{c}0,\ i\in \left(1\sim n-1\right)\\ {}\left|E\right|,\ i=n\end{array}\right.$$
(9)

where n is the length of the amino acid sequence, i is the index of the amino acid in the sequence, E is the total energy of the final folded structure, and R is the symbol used for the reward in this article.

### HP model training algorithm based on reinforcement learning with Q-learning

The algorithm for solving 2D-HP protein folding based on reinforcement learning with full states, using Q-learning under the rigid criterion, is shown in Table 1. The method is implemented in Python (developed in PyCharm).

### Function approximation

Function approximation theory is an important part of function theory; the basic problem it addresses is the approximate representation of functions. In reinforcement learning, basic methods such as dynamic programming (DP), Monte Carlo (MC) and temporal difference (TD) rest on the premise that the state space and the action space are discrete and not too large. Note that the value function in these methods is actually a table. For the state value function (V), the index is the state; for the state-action value function (Q), the index is a state-action pair.
The process of iteratively updating the value function is an iterative update of this table. If the dimension of the state space is large, or the state space is continuous, the value function cannot be represented by a table. In that case, it is necessary to represent the value function by means of function approximation.

In the value function approximation method, the value function corresponds to an approximation function. From a mathematical point of view, function approximation methods can be divided into parametric and non-parametric approximation; accordingly, value function estimation in reinforcement learning can be divided into parametric and non-parametric approximation. The most commonly used is parametric approximation. Once the structure of the approximate value function is determined, approximating the value function is equivalent to approximating its parameters, and updating the value function is equivalent to updating the parameters. In other words, experimental data are used to update the parameter values.

## Results

### Comparative experiment between the rigid and flexible criteria

According to the two different reward settings of the rigid and flexible criteria, six sequences from the literature and ten sequences from the classic Uniref50 database were selected as experimental objects. The known information and the tested energy information are shown in Table 2. The parameters were set as follows: step-size parameter α = 0.01, exploration probability ε = 0.5, and discount parameter γ = 0.9.

In Table 3, the first four sequences were chosen to compare the performance of reinforcement learning under the rigid and flexible criteria. In order to avoid chance results, the rigid- and flexible-criterion experiments were repeated five times. The number of training iterations per round was set to 5 million, and a test was performed once every 10,000 iterations.
During training, the number of episodes required to converge to the lowest energy was counted, as shown in Table 3.

Tables 2 and 3 together show that reinforcement learning with the rigid criterion can stably find the lowest-energy conformation faster than reinforcement learning with the flexible criterion. For the shorter sequences (1 and 2), the number of training episodes required for the agent to reach the convergent conformation under the flexible criterion was greater than under the rigid criterion. Reinforcement learning with the rigid criterion needed an average of 30,000 and 210,000 episodes to reach the robust lowest-energy conformation, which is 50 and 63% less than the 60,000 and 570,000 episodes required by reinforcement learning with the flexible criterion. For the longer sequences (3 and 4), reinforcement learning with the flexible criterion could not find the lowest-energy conformation. One possible reason is that, although the flexible criterion gives a negative reward (a penalty) for states that cause repetition, those states can still have positive Q values, whereas under the rigid criterion the Q values of these repeated states keep their initial value of 0. Therefore, the probability of repeated states being selected under the flexible criterion is greater than under the rigid criterion. Moreover, as the length of the sequence increases, the number of states that cause repetition in the full state space also grows, and it becomes harder to find the lowest-energy structure.

### Comparative experiment with the greedy algorithm

Reinforcement learning with full states using the rigid criterion was compared with a greedy algorithm. The experimental objects were the twelve sequences in the Uniref50 data set. Similarly, to avoid chance results, both methods were trained for five rounds; the number of training iterations per round was set to 5 million, and sampling was performed once every 10,000 iterations.
We counted the number of times the lowest energy was obtained in the last 100 samples (Table 4).

It can be seen from Table 2 that reinforcement learning with full states using the rigid criterion can find the lowest energy for all 16 sequences, whereas the greedy algorithm can find it for only 13 of them. From Table 4, for 10 of the above 12 sequences the training process was far superior to the greedy algorithm. The total number of times the lowest energy was found was 300, which is greater than the 205 achieved by the greedy algorithm.

### Comparative experiment with reinforcement learning with partial states

Reinforcement learning with full states using the rigid criterion was compared with reinforcement learning with partial states. The experimental objects and experimental settings were the same as for the greedy algorithm above.

In reinforcement learning with partial states, for an HP sequence of length n, the state space S consists of 1 + 4(n − 1) states. Apart from the first amino acid, which has only one state, each of the other amino acids has four different actions (up, down, left, and right) transferring to four different states, so the size of the entire state set is 1 + 4(n − 1), and S = {s1, s2, …, s1 + 4(n − 1)}. For example, the state of the first amino acid is s1. From this state, the four actions up, down, left, and right transfer respectively to states s2, s3, s4, and s5, which are all possible states of the second amino acid. On the same basis, the four actions transfer respectively to states s6, s7, s8, and s9, which are all possible states of the third amino acid, and so on for all the states of the subsequent amino acids.

In Table 2, there are 8 sequences that cannot converge to the lowest-energy conformations under reinforcement learning with partial states, while reinforcement learning with full states successfully folded all sequences to the lowest-energy conformations.
Table 4 shows that in the last 100 samples, reinforcement learning with full states hit the lowest energy an average of five times, which is 40 and 100% higher than the three and zero times achieved by the greedy algorithm and reinforcement learning with partial states, respectively. Reinforcement learning with full states achieved lower-energy structures on ten of the twelve sequences than the greedy algorithm.

## Discussion

### Analysis of time complexity and space complexity

In this algorithm, for one sequence, many iterations of training are required to obtain its lowest energy. Therefore, the time complexity of the algorithm is determined by the length of the amino acid sequence (N) and the number of training iterations (I); that is, the time complexity is O(N × I). The time complexity of the ant colony algorithm for solving 2D-HP structure prediction is O(N × (N − 1) × M × I / 2), where N is the sequence length, I is the number of iterations, and M is the number of ants. The time complexity of particle swarm optimization is O(N × I × M), where M is the number of particles. Obviously, the time complexity of the method in this paper is the smallest of the three, and the longer the sequence, the more prominent the time advantage.

The space complexity is composed of the state-transfer function matrix and the state-action value matrix. The rows of both matrices represent states and the columns represent actions. The number of rows in the new state-transfer function matrix is $$\frac{3^{N-1}-1}{2}$$ and the number of columns is 3. The number of rows in the state-action value matrix is $$\frac{3^N-1}{2}$$ and the number of columns is 3.
So the space complexity is $$O\left(\frac{3^{N-1}-1}{2}\times 3+\frac{3^N-1}{2}\times 3\right)$$.

### Case study

Sequence 12 is a zinc finger protein 528 (fragment), a transcription factor with a finger-like domain that plays an important role in gene regulation. Taking sequence 12 as an example, a series of optimized structures with the lowest energy obtained by the present method under the rigid criterion is given in Fig. 2a-c. The results of the last 100 samples of this method, the greedy algorithm, and reinforcement learning with partial states during training exploration are given in Fig. 3a-c. The greedy algorithm itself cannot converge; the convergence of reinforcement learning with full and partial states during testing is shown in Fig. 4a, b.

For reinforcement learning with full states, after several million training iterations the agent learns to select the better action and obtain a lower-energy structure, so the structure obtained after convergence is the optimal structure, and the training effect of reinforcement learning with full states can be considered stable. The training behavior of the greedy algorithm, by contrast, is not ideal: only occasionally does it produce structures with the lowest energy, and the accuracy of the lowest-energy structure cannot be guaranteed. As a whole, reinforcement learning with full states is better than the greedy algorithm. This is because, in reinforcement learning, the agent can choose better actions based on its previous interaction with the environment during exploration. Therefore, as the number of training iterations increases, the agent can select the optimal action more quickly and accurately. Also, because of the design of the reward function, the agent is more concerned with the overall situation and is not trapped in a local optimum.
Each episode of the greedy algorithm is computed independently, and previous experience does not help the current episode. As a result, the amount of computation becomes larger and the correct structure cannot be obtained stably.

From the testing process, it can be found that reinforcement learning with full states can maintain the lowest energy and achieve stable convergence after reaching the minimum energy. In contrast, reinforcement learning with partial states fluctuates, cannot be maintained stably, and cannot reach the convergent state. This is because each state in the full state space is uniquely determined and can only be reached by a unique state-action pair, so the process has the Markov property. A state in the partial state space, however, can be reached by different state-action pairs, which introduces partial uncertainty.

### Full state space compared to partial state space

The full state space and the partial state space are two different descriptions of the state space of the 2D-HP model under the reinforcement learning framework. What they have in common is that the different states corresponding to each amino acid are set in advance; they differ in the rules by which the states are defined. In the full state space, the number of states of each subsequent amino acid is always three times the number of states of the previous amino acid, and the state of a subsequent amino acid is obtained by a specific action taken from a specific state of the previous amino acid. That is to say, each state is reached by a unique state-action pair, and the whole process has the Markov property. In the partial state space, every amino acid except the first has four states. The four states of a subsequent amino acid can be reached from the four states of the previous amino acid through four different actions, and the whole process does not have the Markov property.
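The growth of the two state spaces can be made concrete with a short sketch (illustrative only; it simply evaluates the two counting formulas described above):

```python
def full_state_count(n):
    # Full state space: amino acid i contributes 3**(i - 1) states,
    # and the geometric sum over i = 1..n gives (3**n - 1) / 2.
    return (3**n - 1) // 2

def partial_state_count(n):
    # Partial state space: one state for the first amino acid plus
    # four states for each of the remaining n - 1 amino acids.
    return 1 + 4 * (n - 1)

for n in (5, 10, 20):
    print(n, full_state_count(n), partial_state_count(n))
```

The full state space grows exponentially with the sequence length, while the partial state space grows only linearly, which is the trade-off discussed next.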
The advantage of the full state space is that it can accurately find the lowest energy of a sequence and converge stably. Its disadvantage is that the dimension of the state space is very high and the memory requirement is large, so very long sequences cannot be handled. The advantage of the partial state space is that the required state space is small, making it possible to handle long sequences; its disadvantage is that it cannot converge and cannot find the lowest energy of a sequence.

Function approximation is especially suitable for problems with large state spaces. The method described above, which pre-allocates the state-action value matrix and updates it during training, occupies a large amount of memory. Function approximation can instead map the state-action value matrix to an approximation function (such as a parametric approximation function). Updating the parameter values with experimental data during training is then equivalent to updating the state-action values, and finally a suitable approximation function is obtained. This saves memory and removes the limitation on sequence length.

## Conclusion

This paper proposes a model based on reinforcement learning to solve the HP model prediction problem, a basic problem in computational molecular biology. The state set and state transition space are calculated according to the length of the HP sequence, and the reward function is set according to different situations. The agent uses the ε-greedy strategy to select exploratory actions and receive rewards, continuously computing and updating the Q values. Finally, the optimal solution is selected according to the converged Q-value table to obtain the optimal HP structure. In this paper, sixteen sequences were selected as experimental objects.
The experimental results showed that, compared with the flexible criterion, the method converges to the optimal value function under the rigid criterion and obtains the optimal structure of the HP model. Compared with the greedy algorithm, the algorithm finds the lowest energy more often during training, highlighting the advantage of this method in exploiting previous experience. Compared with reinforcement learning with partial states, the stable convergence of the algorithm stands out. This article is a new attempt at protein structure prediction with reinforcement learning, which can serve as an example for further three-dimensional protein structure prediction and other areas of bioinformatics using reinforcement learning.

Although reinforcement learning with the rigid criterion can robustly converge to the lowest-energy conformations, the method still has limitations that can be improved. Firstly, although the method can be run on long sequences, it places high memory demands on the computer. Secondly, in order to obtain accurate results, a large number of training episodes is needed, so the convergence process is slow and the convergence speed needs to be improved. In follow-up studies we will improve the prediction results from these two aspects.
We believe that the reinforcement learning method applied to solving the protein folding problem deserves further research.

## Availability of data and materials

The dataset and source code can be accessed from http://eie.usts.edu.cn/prj/RLHP/index.html.

## Abbreviations

2D:

Two-dimensional

2D-HP:

Two-dimensional hydrophobic-polarity

AB:

Hydrophobic is called A and hydrophilic is called B

ACO:

Ant colony optimization

DP:

Dynamic programming

EA:

Evolutionary algorithm

GA:

Genetic algorithm

HP:

Hydrophobic polarity

MC:

Monte Carlo

TD:

Temporal difference

## References

1. Márquez-Chamorro AE, Asencio-Cortés G, Santiesteban-Toca CE, et al. Soft computing methods for the prediction of protein tertiary structures: a survey. Appl Soft Comput. 2015;35:398–410.

2. Wu H, Wang K, Lu L, et al. Deep conditional random field approach to transmembrane topology prediction and application to GPCR three-dimensional structure modeling. IEEE/ACM Trans Comput Biol Bioinform. 2017;14(5):1106–14.

3. Zhao XM, Cheung YM, Huang DS, et al. Analysis of gene expression data using RPEM algorithm in normal mixture model with dynamic adjustment of learning rate. Int J Pattern Recognit Artif Intell. 2010;24(04):651–66.

4. Günther F, Möbius A, Schreiber M. Structure optimisation by thermal cycling for the hydrophobic-polar lattice model of protein folding. Eur Phys J Spec Top. 2017;226(4):639–49.

5. Tang X, Wang J, Zhong J, et al. Predicting essential proteins based on weighted degree centrality. IEEE/ACM Trans Comput Biol Bioinform. 2014;11(2):407–18.

6. Deng SP, Zhu L, Huang DS. Predicting hub genes associated with cervical cancer through gene co-expression networks. IEEE/ACM Trans Comput Biol Bioinform. 2016;13(1):27–35.

7. Corrêa LDL, Borguesan B, Krause MJ, et al. Three-dimensional protein structure prediction based on memetic algorithms. Comput Oper Res. 2018;91:160–77.

8. Li Z, Wang J, Zhang S, et al.
A new hybrid coding for protein secondary structure prediction based on primary structure similarity. Gene. 2017;618:8–13.\n\n9. Zheng CH, Zhang L, Ng TY, et al. Molecular pattern discovery based on penalized matrix decomposition. IEEE/ACM Trans Comput Biol Bioinform. 2011;8(6):1592–603.\n\n10. Wu H, Cao C, Xia X, et al. Unified deep learning architecture for modeling biology sequence. IEEE/ACM Trans Comput Biol Bioinform. 2018;15(5):1445–52.\n\n11. Deng SP, Cao S, Huang DS, et al. Identifying stages of kidney renal cell carcinoma by combining gene expression and DNA methylation data. IEEE/ACM Trans Comput Biol Bioinform. 2017;14(5):1147–53.\n\n12. Deng SP, Huang DS. SFAPS: an R package for structure/function analysis of protein sequences based on informational spectrum method. Methods. 2014;69(3):207–12.\n\n13. Deng SP, Zhu L, Huang DS. Mining the bladder cancer-associated genes by an integrated strategy for the construction and analysis of differential co-expression networks. BMC Genomics. 2015;16(Suppl 3):S4.\n\n14. Zhu L, Deng SP, Huang DS. A two-stage geometric method for pruning unreliable links in protein-protein networks. IEEE Trans Nanobioscience. 2015;14(5):528–34.\n\n15. Wang SL, Zhu YH, Jia W, et al. Robust classification method of tumor subtype by using correlation filters. IEEE/ACM Trans Comput Biol Bioinform. 2012;9(2):580–91.\n\n16. Huang DS, Yu HJ. Normalized feature vectors: a novel alignment-free sequence comparison method based on the numbers of adjacent amino acids. IEEE/ACM Trans Comput Biol Bioinform. 2013;10(2):457–67.\n\n17. Liu KH, Huang DS. Cancer classification using rotation forest. Comput Biol Med. 2008;38(5):601–10.\n\n18. Zheng CH, Zhang L, Ng TY, et al. Metasample-based sparse representation for tumor classification. IEEE/ACM Trans Comput Biol Bioinform. 2011;8(5):1273–82.\n\n19. Qiao J, Wang G, Li W, et al. An adaptive deep Q-learning strategy for handwritten digit recognition. Neural Netw. 2018;107:61–71.\n\n20. 
Mendonca MRF, Bernardino HS, Neto RF. Reinforcement learning with optimized reward function for stealth applications. Entertain Comput. 2018;25:37–47.\n\n21. Ghazi MM, Yanikoglu B, Aptoula E. Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing. 2017;235:228–35.\n\n22. Pan J, Wang X, Cheng Y, et al. Multi-source transfer ELM-based Q learning. Neurocomputing. 2014;137:57–64.\n\n23. Wu H, Li H, Jiang M, et al. Identify high-quality protein structural models by enhanced K-Means. Biomed Res Int. 2017;2017:7294519.\n\n24. Boskovic B, Brest J. Genetic algorithm with advanced mechanisms applied to the protein structure prediction in a hydrophobic-polar model and cubic lattice. Appl Soft Comput. 2016;45:61–70.\n\n25. Zheng CH, Huang DS, Kong XZ, et al. Gene expression data classification using consensus independent component analysis. Genomics Proteomics Bioinformatics. 2008;6(2):74–82.\n\n26. Shah SM, Borkar VS. Q-learning for Markov decision processes with a satisfiability criterion. Syst Control Lett. 2018;113:45–51.\n\n27. Zhu L, You Z, Huang D, et al. LSE: a novel robust geometric approach for modeling protein-protein interaction networks. PLoS One. 2013;8(4):e58368.\n\n28. Wang SL, Li X, Zhang S, et al. Tumor classification by combining PNN classifier ensemble with neighborhood rough set based gene reduction. Comput Biol Med. 2010;40(2):179–89.\n\n29. Joseph AG, Bhatnagar S. An online prediction algorithm for reinforcement learning with linear function approximation using cross entropy method. Mach Learn. 2018;107:1385–429.\n\n30. Ebadzadeh MM, Salimi-Badr A. IC-FNN: a novel fuzzy neural network with interpretable, intuitive, and correlated-contours fuzzy rules for function approximation. IEEE Trans Fuzzy Syst. 2018;26(3):1288–302.\n\n31. Korda M, Henrion D, Jones CN. Controller design and value function approximation for nonlinear dynamical systems. Automatica. 2016;67:54–66.\n\n32. 
Lu QG, Chen DF, Mao LM, et al. Research on predication of proteins structure based on GA. In: China artificial intelligence annual conference; 2005.\n\n33. Chen M. Quasi-physical quasi-human algorithm for protein folding: Huazhong University of Science and Technology. Wuhan; 2007.\n\n34. Garza-Fabre M, Rodriguez-Tello E, Toscano-Pulido G. Constraint-handling through multi-objective optimization: the hydrophobic-polar model for protein structure prediction. Comput Oper Res. 2015;53:128–53.\n\n## Acknowledgments\n\nThe authors acknowledge and thank the anonymous reviewers for their suggestions that allowed the improvement of our manuscript.\n\nThis article has been published as part of BMC Bioinformatics Volume 20 Supplement 25, 2019: Proceedings of the 2018 International Conference on Intelligent Computing (ICIC 2018) and Intelligent Computing and Biomedical Informatics (ICBI) 2018 conference: bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-25.\n\n## Funding\n\nThis work was supported in part by the National Natural Science Foundation of China (No. 61772357, 61902272, 61672371, 61876217, 61902271, 61750110519), and Suzhou Science and Technology Project (SYG201704, SNG201610, SZS201609). The publication costs of this article were funded by the grants of the above foundations and projects.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nHW proposed the original idea. RY and QF designed the framework and the experiments. HL collected the experimental datasets. HW, RY and QF performed the experiments and performed the primary data analysis. HW and YR wrote the manuscript. JC and WL modified the codes and the manuscript. All authors contributed to the manuscript. 
All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Qiming Fu.

## Ethics declarations

### Ethics approval and consent to participate

Not applicable.

### Consent for publication

Not applicable.

### Competing interests

The authors declare that they have no competing interests.
https://www.learnmodo.com/c-program-to-convert-temperature-fahrenheit-celsius/
# C Program to Convert Temperature from Fahrenheit to Celsius and Vice Versa

In this article, you will learn how to convert a temperature from Fahrenheit to Celsius and vice versa. The concept is easy: if the user selects to convert from Fahrenheit to Celsius, the program subtracts 32 from the original temperature, then divides the result by 1.8.

On the other hand, if the user selects to convert from Celsius to Fahrenheit, we multiply the original value by 1.8 and add 32 to it.

## Program to Convert Temperature from Fahrenheit to Celsius and Vice Versa using C

```c
#include <stdio.h>

int main() {

    float fahrenheit, celsius;
    int choice = 0; /* initialized so the first loop test is well-defined */

    while (choice != 3) {
        printf("\n1: Convert temperature from Fahrenheit to Celsius.");
        printf("\n2: Convert temperature from Celsius to Fahrenheit.");
        printf("\n3: Exit.");

        scanf("%d", &choice);
        switch (choice) {
            case 1: {
                printf("\nPlease enter temperature in Fahrenheit: ");
                scanf("%f", &fahrenheit);
                celsius = (fahrenheit - 32) / 1.8;
                printf("Temperature in Celsius is: %.2f", celsius);
                break;
            }
            case 2: {
                printf("\nPlease enter temperature in Celsius: ");
                scanf("%f", &celsius);
                fahrenheit = (celsius * 1.8) + 32;
                printf("Temperature in Fahrenheit is: %.2f", fahrenheit);
                break;
            }
        }
    }

    return 0;
}
```

Output (for menu choice 2 with an input of 25):

```
1: Convert temperature from Fahrenheit to Celsius.
2: Convert temperature from Celsius to Fahrenheit.
3: Exit.
Please enter temperature in Celsius: 25
Temperature in Fahrenheit is: 77.00
```
https://cs50.stackexchange.com/questions/39374/pset1-mario-right-align-the-bricks-for-function
# Pset1 Mario - right-align the bricks - for function?

I've reached the point where I can create a pyramid but have an extra dot in the last row. Can't seem to remove it. Is there something I'm doing wrong in the nested `for` loop?
(screenshot of the pyramid output: https://i.stack.imgur.com/rSFGJ.png)
```c
#include <stdio.h>
#include <cs50.h>

int get_positive_int(void);

// Print Pyramid #s
int main(void)
{
    int n = get_positive_int();
    for (int i = 0; i < n; i++)
    {
        for (int j = i; j < n; j++)
        {
            printf(".");
        }
        for (int j = n - i; j <= n; j++)
        {
            printf("#");
        }
        printf("\n");
    }
}

// Prompt for Height as +ve integer between 1 and 8
int get_positive_int(void)
{
    int n;
    do
    {
        n = get_int("Height: ");
    }
    while (n < 1 || n > 8);
    return n;
}
```

• Perhaps think about it another way: there is an extra dot in every row. – DinoCoderSaurus Aug 23 '20 at 13:18

## 1 Answer

@DinoCoderSaurus is right. You're sooo close. Maybe you could modify the for loop that's printing dots (spaces) to print just one less on each row????

Dino, when are you going to stop giving answers as comments???? ;-)
http://dictionary.obspm.fr/index.php/?showAll=1&formSearchTextfield=perpendicular | [
"# An Etymological Dictionary of Astronomy and AstrophysicsEnglish-French-Persian\n\n## فرهنگ ریشه شناختی اخترشناسی-اخترفیزیک\n\n### M. Heydari-Malayeri - Paris Observatory\n\nHomepage\n\nNumber of Results: 2 Search : perpendicular\n perpendicular پالار pâlârFr.: perpendiculaire A line or plane at right angles to another line or plane. Two curves are said to be perpendicular if their tangent lines are mutually perpendicular. → normal; → verticalFrom M.E. perpendiculer(e), from O.Fr. perpendiculiere, from L. perpendicularis \"vertical, as a plumb line,\" from perpendiculum \"plumb line,\" from perpendere \"balance carefully,\" from per- \"thoroughly\" + pendere \"to weigh, to hang.\"Pâlâr \"pillar, column, main beam.\" perpendicular axis theorem فربین ِ آسههای ِ پالار farbin-e âsehâ-ye pâlârFr.: théorème des axes perpendiculaires The → moment of inertia of a plane object (→ lamina) about an axis perpendicular to the plane is equal to the sum of the moments of inertia about any two perpendicular axes in the plane. Thus if x and y axes are in the plane, Iz = Ix + Iy.→ perpendicular; → axis; → theorem."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78967834,"math_prob":0.8928509,"size":845,"snap":"2021-43-2021-49","text_gpt3_token_len":235,"char_repetition_ratio":0.21521997,"word_repetition_ratio":0.0,"special_character_ratio":0.23076923,"punctuation_ratio":0.17901234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95973295,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T08:42:57Z\",\"WARC-Record-ID\":\"<urn:uuid:0d02a5ca-a0b6-4aee-8289-7d462dc04374>\",\"Content-Length\":\"12826\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb4fdd13-8479-49fa-8268-d36ee32c2ec6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e6e4aaf-07b7-4846-9813-7c5de788f30e>\",\"WARC-IP-Address\":\"145.238.200.3\",\"WARC-Target-URI\":\"http://dictionary.obspm.fr/index.php/?showAll=1&formSearchTextfield=perpendicular\",\"WARC-Payload-Digest\":\"sha1:76QCNFPPF3T3LY5KETT5JKROS2QRK2SJ\",\"WARC-Block-Digest\":\"sha1:2B3MNRJEEM4RK2OV4VR34JEHN7I7GLDE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585653.49_warc_CC-MAIN-20211023064718-20211023094718-00079.warc.gz\"}"} |
https://www.colorhexa.com/011515 | [
"# #011515 Color Information\n\nIn a RGB color space, hex #011515 is composed of 0.4% red, 8.2% green and 8.2% blue. Whereas in a CMYK color space, it is composed of 95.2% cyan, 0% magenta, 0% yellow and 91.8% black. It has a hue angle of 180 degrees, a saturation of 90.9% and a lightness of 4.3%. #011515 color hex could be obtained by blending #022a2a with #000000. Closest websafe color is: #000000.\n\n• R 0\n• G 8\n• B 8\nRGB color chart\n• C 95\n• M 0\n• Y 0\n• K 92\nCMYK color chart\n\n#011515 color description : Very dark (mostly black) cyan.\n\n# #011515 Color Conversion\n\nThe hexadecimal color #011515 has RGB values of R:1, G:21, B:21 and CMYK values of C:0.95, M:0, Y:0, K:0.92. Its decimal value is 70933.\n\nHex triplet RGB Decimal 011515 `#011515` 1, 21, 21 `rgb(1,21,21)` 0.4, 8.2, 8.2 `rgb(0.4%,8.2%,8.2%)` 95, 0, 0, 92 180°, 90.9, 4.3 `hsl(180,90.9%,4.3%)` 180°, 95.2, 8.2 000000 `#000000`\nCIE-LAB 5.392, -6.199, -2.186 0.416, 0.597, 0.803 0.229, 0.329, 0.597 5.392, 6.573, 199.422 5.392, -3.964, -0.856 7.726, -3.909, -0.752 00000001, 00010101, 00010101\n\n# Color Schemes with #011515\n\n• #011515\n``#011515` `rgb(1,21,21)``\n• #150101\n``#150101` `rgb(21,1,1)``\nComplementary Color\n• #01150b\n``#01150b` `rgb(1,21,11)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #010b15\n``#010b15` `rgb(1,11,21)``\nAnalogous Color\n• #150b01\n``#150b01` `rgb(21,11,1)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #15010b\n``#15010b` `rgb(21,1,11)``\nSplit Complementary Color\n• #151501\n``#151501` `rgb(21,21,1)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #150115\n``#150115` `rgb(21,1,21)``\n• #011501\n``#011501` `rgb(1,21,1)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #150115\n``#150115` `rgb(21,1,21)``\n• #150101\n``#150101` `rgb(21,1,1)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #022d2d\n``#022d2d` `rgb(2,45,45)``\n• #034646\n``#034646` `rgb(3,70,70)``\n• 
#045e5e\n``#045e5e` `rgb(4,94,94)``\nMonochromatic Color\n\n# Alternatives to #011515\n\nBelow, you can see some colors close to #011515. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #011510\n``#011510` `rgb(1,21,16)``\n• #011512\n``#011512` `rgb(1,21,18)``\n• #011513\n``#011513` `rgb(1,21,19)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #011315\n``#011315` `rgb(1,19,21)``\n• #011215\n``#011215` `rgb(1,18,21)``\n• #011015\n``#011015` `rgb(1,16,21)``\nSimilar Colors\n\n# #011515 Preview\n\nThis text has a font color of #011515.\n\n``<span style=\"color:#011515;\">Text here</span>``\n#011515 background color\n\nThis paragraph has a background color of #011515.\n\n``<p style=\"background-color:#011515;\">Content here</p>``\n#011515 border color\n\nThis element has a border color of #011515.\n\n``<div style=\"border:1px solid #011515;\">Content here</div>``\nCSS codes\n``.text {color:#011515;}``\n``.background {background-color:#011515;}``\n``.border {border:1px solid #011515;}``\n\n# Shades and Tints of #011515\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000202 is the darkest color, while #effefe is the lightest one.\n\n• #000202\n``#000202` `rgb(0,2,2)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #022828\n``#022828` `rgb(2,40,40)``\n• #033a3a\n``#033a3a` `rgb(3,58,58)``\n• #044d4d\n``#044d4d` `rgb(4,77,77)``\n• #056060\n``#056060` `rgb(5,96,96)``\n• #057373\n``#057373` `rgb(5,115,115)``\n• #068585\n``#068585` `rgb(6,133,133)``\n• #079898\n``#079898` `rgb(7,152,152)``\n• #08abab\n``#08abab` `rgb(8,171,171)``\n• #09bebe\n``#09bebe` `rgb(9,190,190)``\n``#0ad0d0` `rgb(10,208,208)``\n• #0be3e3\n``#0be3e3` `rgb(11,227,227)``\n• #0ef4f4\n``#0ef4f4` `rgb(14,244,244)``\n• #21f4f4\n``#21f4f4` `rgb(33,244,244)``\n• #33f5f5\n``#33f5f5` `rgb(51,245,245)``\n• #46f6f6\n``#46f6f6` `rgb(70,246,246)``\n• #59f7f7\n``#59f7f7` `rgb(89,247,247)``\n• #6bf8f8\n``#6bf8f8` `rgb(107,248,248)``\n• #7ef9f9\n``#7ef9f9` `rgb(126,249,249)``\n• #91fafa\n``#91fafa` `rgb(145,250,250)``\n• #a4fbfb\n``#a4fbfb` `rgb(164,251,251)``\n• #b6fcfc\n``#b6fcfc` `rgb(182,252,252)``\n• #c9fcfc\n``#c9fcfc` `rgb(201,252,252)``\n• #dcfdfd\n``#dcfdfd` `rgb(220,253,253)``\n• #effefe\n``#effefe` `rgb(239,254,254)``\nTint Color Variation\n\n# Tones of #011515\n\nA tone is produced by adding gray to any pure hue. In this case, #0a0c0c is the less saturated color, while #001616 is the most saturated one.\n\n• #0a0c0c\n``#0a0c0c` `rgb(10,12,12)``\n• #090d0d\n``#090d0d` `rgb(9,13,13)``\n• #090d0d\n``#090d0d` `rgb(9,13,13)``\n• #080e0e\n``#080e0e` `rgb(8,14,14)``\n• #070f0f\n``#070f0f` `rgb(7,15,15)``\n• #061010\n``#061010` `rgb(6,16,16)``\n• #051111\n``#051111` `rgb(5,17,17)``\n• #041212\n``#041212` `rgb(4,18,18)``\n• #041212\n``#041212` `rgb(4,18,18)``\n• #031313\n``#031313` `rgb(3,19,19)``\n• #021414\n``#021414` `rgb(2,20,20)``\n• #011515\n``#011515` `rgb(1,21,21)``\n• #001616\n``#001616` `rgb(0,22,22)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #011515 is perceived by people affected by a color vision deficiency. 
This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
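The RGB and CMYK figures quoted above follow from the standard conversions; a short sketch (function names are illustrative) reproduces the page's values of roughly 95.2% cyan and 91.8% black after rounding:

```python
def hex_to_rgb(h):
    # "#011515" -> (1, 21, 21)
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    # Naive RGB -> CMYK conversion; fractions in [0, 1], rounded to 3 places.
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)
    c, m, y = (1 - v / 255 for v in (r, g, b))
    k = min(c, m, y)
    return tuple(round((v - k) / (1 - k), 3) for v in (c, m, y)) + (round(k, 3),)

print(hex_to_rgb("#011515"))      # (1, 21, 21)
print(rgb_to_cmyk(1, 21, 21))     # approximately (0.952, 0.0, 0.0, 0.918)
```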
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.530216,"math_prob":0.7953396,"size":3631,"snap":"2021-21-2021-25","text_gpt3_token_len":1585,"char_repetition_ratio":0.12572373,"word_repetition_ratio":0.018416205,"special_character_ratio":0.565409,"punctuation_ratio":0.23378076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99446267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T05:04:58Z\",\"WARC-Record-ID\":\"<urn:uuid:941a149b-e2be-4462-bc4d-384628270ca0>\",\"Content-Length\":\"36137\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd6bcafb-9614-4c1a-9019-ed7ed54f09cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:a53c5c6e-bac6-4e24-aaf5-29fedabf7414>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/011515\",\"WARC-Payload-Digest\":\"sha1:7QQ2E74AZCVXTKZGQIDYW4B2XZDBRB7N\",\"WARC-Block-Digest\":\"sha1:SVIL6AYXKG6SXECTTETN4Q7VK67DLXGB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989820.78_warc_CC-MAIN-20210518033148-20210518063148-00355.warc.gz\"}"} |
https://dfc-org-production.force.com/forums/ForumsMain?id=906F0000000AtoDIAS | [
"",
null,
"ShowAll Questionssorted byDate Posted",
null,
"Mick Man\n\n# Apex + visualforce\n\n```public class OdtDetailController2\n{\n\npublic list<wrapperclass> Jonintwrapper{get;set;}\npublic list<wrapperclass> Odtwrapper{get;set;}\n\npublic OdtDetailController2(ApexPages.StandardController controller)\n{\nlist<Ordre_Travail__c> OrdreTravail = [SELECT id, name, Date__c, Produit__c, Nombre__c FROM Ordre_Travail__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\nlist<JointMatProd__c> Jointure = [SELECT id, MatierePremiere__c, Quantite__c FROM JointMatProd__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\n\nOdtwrapper = new list<wrapperclass>();\nfor(Ordre_Travail__c Odt: OrdreTravail)\n{\n}\n\nJonintwrapper = new list<wrapperclass>();\nfor(JointMatProd__c Joint: Jointure)\n{\n}\n}\n\npublic class wrapperclass\n{\npublic Ordre_Travail__c ODT{get;set;}\npublic JointMatProd__c JOINT{get;set;}\n\npublic wrapperclass(Ordre_Travail__c OrdreTravail)\n{\nthis.ODT = (OrdreTravail);\n}\n\npublic wrapperclass(JointMatProd__c Joint)\n{\nthis.JOINT = (Joint);\n}\n}\n}```",
null,
"Best Answer chosen by Mick Man",
null,
"Balaji Chowdary Garapati\n@Micky Andriamiarantsoa:\n\nTry this code:\n\n```public class OdtDetailController2\n{\n\npublic list<wrapperclass> Jonintwrapper{get;set;}\npublic list<wrapperclass> Odtwrapper{get;set;}\nPubic Double TotalNombre=0.0;\nPublic Double TotalQuantite=0.0;\nPublic Double Total;\n\npublic OdtDetailController2(ApexPages.StandardController controller)\n{\nlist<Ordre_Travail__c> OrdreTravail = [SELECT id, name, Date__c, Produit__c, Nombre__c FROM Ordre_Travail__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\nlist<JointMatProd__c> Jointure = [SELECT id, MatierePremiere__c, Quantite__c FROM JointMatProd__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\n\nOdtwrapper = new list<wrapperclass>();\nfor(Ordre_Travail__c Odt: OrdreTravail)\n{\nif(odt.Nombre__c!=Null){\nTotalNombre+=odt.Nombre__c;\n\n}\n}\n\nJonintwrapper = new list<wrapperclass>();\nfor(JointMatProd__c Joint: Jointure)\n{\nif(Joint.Quantite__c==Null) // Checking! In case of null make it to 0 zero value so code wont break, and assuming Quantite__c is a numeric value\nJoint.Quantite__c=0;\n\n}\n\n}\n\npublic class wrapperclass\n{\npublic Ordre_Travail__c ODT{get;set;}\npublic JointMatProd__c JOINT{get;set;}\nPublic Decimal Total{get;set;}\n\npublic wrapperclass(Ordre_Travail__c OrdreTravail)\n{\nthis.ODT = (OrdreTravail);\n}\n\npublic wrapperclass(JointMatProd__c Joint,Decimal Total)\n{\nthis.JOINT = (Joint);\nthis.Total=Total;\n}\n}\n}```\n\nCreated an other parameter called Total and an other constructore which takes Joint instance and Total value to assign it back to its variables.\nHope it helps.\n\nThanks,\nBalaji",
null,
"Suneel#8\nOn what criteria you want to do multiplication Nombre__c * Quantite__c? Where do you store the result from the multiplication?How Ordre_Travail__c and JointMatProd__c are related to?Please explain in detail,so that we can help",
null,
"Mick Man\nthank you for your answer Suneel#8! Each product needs Type single or multiple raw material and every matter has its first amount ie quantity per product, the number is the number of product so I want the amount of one or raw materials for the total product",
null,
"Balaji Chowdary Garapati\n@Micky Andriamiarantsoa:\n\nWere you looking for this?\n\n(sum of all Nombre__c * sum of all Quantite__c) ??\n\nIf so, below code will help:\n\n```public class OdtDetailController2\n{\n\npublic list<wrapperclass> Jonintwrapper{get;set;}\npublic list<wrapperclass> Odtwrapper{get;set;}\nPubic Double TotalNombre=0.0;\nPublic Double TotalQuantite=0.0;\nPublic Double Total;\n\npublic OdtDetailController2(ApexPages.StandardController controller)\n{\nlist<Ordre_Travail__c> OrdreTravail = [SELECT id, name, Date__c, Produit__c, Nombre__c FROM Ordre_Travail__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\nlist<JointMatProd__c> Jointure = [SELECT id, MatierePremiere__c, Quantite__c FROM JointMatProd__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\n\nOdtwrapper = new list<wrapperclass>();\nfor(Ordre_Travail__c Odt: OrdreTravail)\n{\nif(odt.Nombre__c!=Null){\nTotalNombre+=odt.Nombre__c;\n\n}\n}\n\nJonintwrapper = new list<wrapperclass>();\nfor(JointMatProd__c Joint: Jointure)\n{\nif(Joint.Nombre__c!=Null){\nTotalQuantite+=Joint.Quantite__c;\n\n}\n\n}\nTotal=TotalQuantite*TotalNombre;\n\n}\n\npublic class wrapperclass\n{\npublic Ordre_Travail__c ODT{get;set;}\npublic JointMatProd__c JOINT{get;set;}\n\npublic wrapperclass(Ordre_Travail__c OrdreTravail)\n{\nthis.ODT = (OrdreTravail);\n}\n\npublic wrapperclass(JointMatProd__c Joint)\n{\nthis.JOINT = (Joint);\n}\n}\n}```\n\nNote: The requirement is unclear, so just did the best i can, if it doesnt fit your requirement, please give the entire reuqirement and the object specficiations as asked by @ Suneel#8 so that the issue can be resolved quicker.\n\nThanks,\nbalaji",
null,
"Mick Man\nthank you Balaji Chowdary Garapati, for more precisely, I would like to create another wrapper class for the field Nombre__c and Quantite__c field, and display the product of two fields in a pageblock table, thank you for your understanding because I'm beginner with apex\nI sent you a picture for more detail",
null,
"",
null,
"Balaji Chowdary Garapati\n@Micky Andriamiarantsoa:\n\nTry this code:\n\n```public class OdtDetailController2\n{\n\npublic list<wrapperclass> Jonintwrapper{get;set;}\npublic list<wrapperclass> Odtwrapper{get;set;}\nPubic Double TotalNombre=0.0;\nPublic Double TotalQuantite=0.0;\nPublic Double Total;\n\npublic OdtDetailController2(ApexPages.StandardController controller)\n{\nlist<Ordre_Travail__c> OrdreTravail = [SELECT id, name, Date__c, Produit__c, Nombre__c FROM Ordre_Travail__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\nlist<JointMatProd__c> Jointure = [SELECT id, MatierePremiere__c, Quantite__c FROM JointMatProd__c WHERE produit__c = 'a0Ic000000DCvwEEAT'];\n\nOdtwrapper = new list<wrapperclass>();\nfor(Ordre_Travail__c Odt: OrdreTravail)\n{\nif(odt.Nombre__c!=Null){\nTotalNombre+=odt.Nombre__c;\n\n}\n}\n\nJonintwrapper = new list<wrapperclass>();\nfor(JointMatProd__c Joint: Jointure)\n{\nif(Joint.Quantite__c==Null) // Checking! In case of null make it to 0 zero value so code wont break, and assuming Quantite__c is a numeric value\nJoint.Quantite__c=0;\n\n}\n\n}\n\npublic class wrapperclass\n{\npublic Ordre_Travail__c ODT{get;set;}\npublic JointMatProd__c JOINT{get;set;}\nPublic Decimal Total{get;set;}\n\npublic wrapperclass(Ordre_Travail__c OrdreTravail)\n{\nthis.ODT = (OrdreTravail);\n}\n\npublic wrapperclass(JointMatProd__c Joint,Decimal Total)\n{\nthis.JOINT = (Joint);\nthis.Total=Total;\n}\n}\n}```\n\nCreated an other parameter called Total and an other constructore which takes Joint instance and Total value to assign it back to its variables.\nHope it helps.\n\nThanks,\nBalaji\n\nThis was selected as the best answer",
null,
"Mick Man\nA big thank you to you all, and especially for you Balaji Chowdary Garapati finally my problem is solved, thanks to you, thank you again",
null,
"Balaji Chowdary Garapati\nCan you please mark it solved and select the best answer :)\n\nThanks,\nBalaji",
null,
"Mick Man\nand if I would like to create a test class for this class, what should I do"
] | [
null,
"https://res.cloudinary.com/hy4kyit2a/image/upload/sd_social300x300_1.png",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null,
"https://dfc-org-production.force.com/forums/img/s.gif",
null,
"https://dfc-org-production.force.com/forums/ncsphoto/x_Et3Sd46I6oUqTQ3aNhqEojBeQgHK8yhFzcYwG1Z7TAgGUYW1QuB2EIxXMuQbGQ",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null,
"https://dfc-org-production.force.com/forums/ncsphoto/x_Et3Sd46I6oUqTQ3aNhqEojBeQgHK8yhFzcYwG1Z7TAgGUYW1QuB2EIxXMuQbGQ",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null,
"https://dfc-org-production.force.com/forums/servlet/rtaImage",
null,
"https://dfc-org-production.force.com/forums/ncsphoto/x_Et3Sd46I6oUqTQ3aNhqEojBeQgHK8yhFzcYwG1Z7TAgGUYW1QuB2EIxXMuQbGQ",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null,
"https://dfc-org-production.force.com/forums/ncsphoto/x_Et3Sd46I6oUqTQ3aNhqEojBeQgHK8yhFzcYwG1Z7TAgGUYW1QuB2EIxXMuQbGQ",
null,
"https://dfc-org-production.force.com/img/userprofile/default_profile_45_v2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.52825856,"math_prob":0.61712354,"size":2000,"snap":"2020-24-2020-29","text_gpt3_token_len":548,"char_repetition_ratio":0.13176353,"word_repetition_ratio":0.21052632,"special_character_ratio":0.2345,"punctuation_ratio":0.16049382,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97888076,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,4,null,null,null,null,null,4,null,null,null,null,null,4,null,null,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T03:25:40Z\",\"WARC-Record-ID\":\"<urn:uuid:ccf3cc3f-01f6-4f75-8b29-0ef804700421>\",\"Content-Length\":\"225698\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4de6d3a0-f034-4de7-9d85-57ddb732671b>\",\"WARC-Concurrent-To\":\"<urn:uuid:305bd401-9a2c-4cea-8259-df7ca92d90de>\",\"WARC-IP-Address\":\"13.110.2.219\",\"WARC-Target-URI\":\"https://dfc-org-production.force.com/forums/ForumsMain?id=906F0000000AtoDIAS\",\"WARC-Payload-Digest\":\"sha1:OFPOWKHZ4O6YV7S4U4M6QW3DL6SAOP3S\",\"WARC-Block-Digest\":\"sha1:KZKHYSAKQVLX2ZNUP7EM2EL3YEPZLS3J\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897844.44_warc_CC-MAIN-20200709002952-20200709032952-00564.warc.gz\"}"} |
https://economicskey.com/a-numerical-example-2-6146 | [
"A NUMERICAL EXAMPLE\n\nTable 2 shows some data for an economy that produces only two goods: hot dogs and hamburgers. The table shows the quantities of the two goods produced and their prices in the years 2005, 2006, and 2007. To compute total spending in this economy, we would multiply the quantities of hot dogs and hamburgers by their prices, In the year 2005, 100 hot dogs are sold at a price of \\$1 per hot dog, so expenditure on hot dogs equals \\$100. In the same year, 50 hamburgers are sold for \\$2 per hamburger, so expenditure on .\n\nTABLE 2. Real and Nominal GDP\nThis table shows how to. calculate real GDP,nominal GDP,and the GDP deflator for a hypothetical economy that produces only hot dogs and hamburgers.\n\nhamburgers also equals \\$}OO.Total expenditure in the economy-the sum of expenditure on hot dogs and expenditure on hamburgers-is \\$200. This amount, the production of goods and services valued at current prices, is called nominal GDP. The table shows the calculation of nominal GDP for these three years. Total spending rises from \\$200 in 2005 to \\$600 in 2006 and then to \\$1,200 in 2007. Part of this rise is attributable to the increase in the quantities of hot dogs and hamburgers, and part is attributable to the increase in the prices of hot dogs and hamburgers To obtain a measure of the amount produced that is not affected by changes in prices, we use real GDP which is the production of goods and services valued at constant prices. We calculate real GDP by first\nchoosing one year as a base year. We then use the prices of hot dogs and hamburgers in the base year to compute the value of goods and services in all of the years. In other words, the prices in the base year provide the basis for comparing quantities in different years. Suppose that we choose 2005 to be the base year in our example. We can then use the prices of hot dogs and hamburger -2005 to compute the value of goods and services produced in 2005, 2006, and 2007. 
Table 2 shows these calculations. To compute real GDP for 2005, we use the prices of hot dogs and hamburgers in 2005 (the base year) and the quantities of hot dogs and hamburgers produced in 2005. (Thus, for the base year, real GDP always equals nominal GDP.) To compute real GDP for 2006, we use the prices of hot dogs and hamburgers in 2005 (the base year) and the quantities of hot dogs and hamburgers produced in 2006. Similarly, to compute real GDP for 2007, we use the prices in 2005 and the quantities in 2007. When we find that real GDP has risen from \$200 in 2005 to \$350 in 2006 and then to \$500 in 2007, we know that the increase is attributable to an increase in the quantities produced because the prices are being held fixed at base-year levels. To sum up: Nominal GDP uses current prices to place a value on the economy's production of goods and services. Real GDP uses constant base-year prices to place a value on the economy's production of goods and services. Because real GDP is not affected by changes in prices, changes in real GDP reflect only changes in the amounts being produced. Thus, real GDP is a measure of the economy's production of goods and services. Our goal in computing GDP is to gauge how well the overall economy is performing. Because real GDP measures the economy's production of goods and services, it reflects the economy's ability to satisfy people's needs and desires. Thus, real GDP is a better gauge of economic well-being than is nominal GDP. When\neconomists talk about the economy's GDP, they usually mean real GDP rather than nominal GDP.
And when they talk about growth in the economy, they measure that growth as the percentage change in real GDP from one period to another."
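The arithmetic in Table 2 can be reproduced in a few lines. The 2005 quantities and prices are given in the passage; the 2006 and 2007 quantities below are assumed values, chosen to be consistent with the nominal ($600, $1,200) and real ($350, $500) totals the text reports:

```python
# (quantity, price) pairs for hot dogs and hamburgers in each year.
data = {
    2005: [(100, 1), (50, 2)],
    2006: [(150, 2), (100, 3)],   # assumed quantities/prices for 2006
    2007: [(200, 3), (150, 4)],   # assumed quantities/prices for 2007
}
BASE = 2005

def nominal_gdp(year):
    # Value production at the current year's prices.
    return sum(q * p for q, p in data[year])

def real_gdp(year):
    # Value production at base-year (2005) prices.
    return sum(q * p0 for (q, _), (_, p0) in zip(data[year], data[BASE]))

for y in sorted(data):
    print(y, nominal_gdp(y), real_gdp(y))
```

This prints nominal GDP of 200, 600, and 1,200 and real GDP of 200, 350, and 500, matching the passage; in the base year the two measures coincide.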
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9219978,"math_prob":0.8563472,"size":3935,"snap":"2020-34-2020-40","text_gpt3_token_len":909,"char_repetition_ratio":0.17705418,"word_repetition_ratio":0.15851852,"special_character_ratio":0.24599746,"punctuation_ratio":0.0882353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686059,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T07:53:37Z\",\"WARC-Record-ID\":\"<urn:uuid:a2e6ac1c-c4c6-4fff-b226-cf8c503ad23f>\",\"Content-Length\":\"55622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2d14769-db81-4bb6-99a8-02d66dc4c3fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f7e80b4-cdb6-4105-b9b3-65fef7ac8467>\",\"WARC-IP-Address\":\"104.27.150.43\",\"WARC-Target-URI\":\"https://economicskey.com/a-numerical-example-2-6146\",\"WARC-Payload-Digest\":\"sha1:WOKMG6ROK7SBUCLHOHIYG2TDYEWWVEDA\",\"WARC-Block-Digest\":\"sha1:7T4Y2DUPT4Z5IWNQO57U24P52FPSRNTD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738653.47_warc_CC-MAIN-20200810072511-20200810102511-00575.warc.gz\"}"} |
https://calculomates.com/en/divisors/of/14674 | [
"# Divisors of 14674\n\n## Divisors of 14674\n\nThe list of all positive divisors (that is, the list of all integers that divide 22) is as follows :\n\nAccordingly:\n\n14674 is multiplo of 1\n\n14674 is multiplo of 2\n\n14674 is multiplo of 11\n\n14674 is multiplo of 22\n\n14674 is multiplo of 23\n\n14674 is multiplo of 29\n\n14674 is multiplo of 46\n\n14674 is multiplo of 58\n\n14674 is multiplo of 253\n\n14674 is multiplo of 319\n\n14674 is multiplo of 506\n\n14674 is multiplo of 638\n\n14674 is multiplo of 667\n\n14674 is multiplo of 1334\n\n14674 is multiplo of 7337\n\n14674 has 15 positive divisors\n\n## Parity of 14674\n\nIn addition we can say of the number 14674 that it is even\n\n14674 is an even number, as it is divisible by 2 : 14674/2 = 7337\n\n## The factors for 14674\n\nThe factors for 14674 are all the numbers between -14674 and 14674 , which divide 14674 without leaving any remainder. Since 14674 divided by -14674 is an integer, -14674 is a factor of 14674 .\n\nSince 14674 divided by -14674 is a whole number, -14674 is a factor of 14674\n\nSince 14674 divided by -7337 is a whole number, -7337 is a factor of 14674\n\nSince 14674 divided by -1334 is a whole number, -1334 is a factor of 14674\n\nSince 14674 divided by -667 is a whole number, -667 is a factor of 14674\n\nSince 14674 divided by -638 is a whole number, -638 is a factor of 14674\n\nSince 14674 divided by -506 is a whole number, -506 is a factor of 14674\n\nSince 14674 divided by -319 is a whole number, -319 is a factor of 14674\n\nSince 14674 divided by -253 is a whole number, -253 is a factor of 14674\n\nSince 14674 divided by -58 is a whole number, -58 is a factor of 14674\n\nSince 14674 divided by -46 is a whole number, -46 is a factor of 14674\n\nSince 14674 divided by -29 is a whole number, -29 is a factor of 14674\n\nSince 14674 divided by -23 is a whole number, -23 is a factor of 14674\n\nSince 14674 divided by -22 is a whole number, -22 is a factor of 14674\n\nSince 14674 divided 
by -11 is a whole number, -11 is a factor of 14674\n\nSince 14674 divided by -2 is a whole number, -2 is a factor of 14674\n\nSince 14674 divided by -1 is a whole number, -1 is a factor of 14674\n\nSince 14674 divided by 1 is a whole number, 1 is a factor of 14674\n\nSince 14674 divided by 2 is a whole number, 2 is a factor of 14674\n\nSince 14674 divided by 11 is a whole number, 11 is a factor of 14674\n\nSince 14674 divided by 22 is a whole number, 22 is a factor of 14674\n\nSince 14674 divided by 23 is a whole number, 23 is a factor of 14674\n\nSince 14674 divided by 29 is a whole number, 29 is a factor of 14674\n\nSince 14674 divided by 46 is a whole number, 46 is a factor of 14674\n\nSince 14674 divided by 58 is a whole number, 58 is a factor of 14674\n\nSince 14674 divided by 253 is a whole number, 253 is a factor of 14674\n\nSince 14674 divided by 319 is a whole number, 319 is a factor of 14674\n\nSince 14674 divided by 506 is a whole number, 506 is a factor of 14674\n\nSince 14674 divided by 638 is a whole number, 638 is a factor of 14674\n\nSince 14674 divided by 667 is a whole number, 667 is a factor of 14674\n\nSince 14674 divided by 1334 is a whole number, 1334 is a factor of 14674\n\nSince 14674 divided by 7337 is a whole number, 7337 is a factor of 14674\n\nSince 14674 divided by 14674 is a whole number, 14674 is a factor of 14674\n\n## What are the multiples of 14674?\n\nMultiples of 14674 are all integers divisible by 14674, i.e. the remainder of the full division by 14674 is zero. There are infinitely many multiples of 14674. 
The smallest multiples of 14674 are:\n\n0: in fact, 0 is divisible by any integer, so it is also a multiple of 14674 since 0 × 14674 = 0\n\n14674: in fact, 14674 is a multiple of itself, since 14674 is divisible by 14674 (14674 / 14674 = 1, so the remainder of this division is zero)\n\n29348: in fact, 29348 = 14674 × 2\n\n44022: in fact, 44022 = 14674 × 3\n\n58696: in fact, 58696 = 14674 × 4\n\n73370: in fact, 73370 = 14674 × 5\n\netc.\n\n## Is 14674 a prime number?\n\nIt is possible to determine using mathematical techniques whether an integer is prime or not.\n\nFor 14674, the answer is: No, 14674 is not a prime number.\n\n## How do you determine if a number is prime?\n\nTo know the primality of an integer, we can use several algorithms. The most naive is trial division: try every divisor below the number being tested (in our case 14674). We can already eliminate even numbers bigger than 2 (then 4, 6, 8 ...). Moreover, we can stop at the square root of the number in question (here 121.136). Historically, the sieve of Eratosthenes (which dates back to Antiquity) uses this technique relatively effectively.\n\nMore modern techniques include the sieve of Atkin, probabilistic tests, and the cyclotomic test.\n\n## Numbers near 14674\n\nPrevious Numbers: ... 14672, 14673\n\nNext Numbers: 14675, 14676 ...\n\n## Prime numbers closest to 14674\n\nPrevious prime number: 14669\n\nNext prime number: 14683"
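Both the divisor list and the naive primality check described above come down to trial division up to the square root; a compact sketch:

```python
def divisors(n):
    # Trial division up to sqrt(n); each small divisor d pairs with n // d.
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

def is_prime(n):
    # A prime has exactly two positive divisors: 1 and itself.
    return n > 1 and len(divisors(n)) == 2

print(divisors(14674))                   # 16 divisors, from 1 up to 14674
print(is_prime(14674))                   # False
print(is_prime(14669), is_prime(14683))  # the neighbouring primes: True True
```

Counting 14674 itself, trial division finds 16 positive divisors, in agreement with the factorisation 2 × 11 × 23 × 29 (each prime appears once, so there are 2⁴ = 16 divisors).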
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92069757,"math_prob":0.9905752,"size":4426,"snap":"2022-40-2023-06","text_gpt3_token_len":1397,"char_repetition_ratio":0.3568521,"word_repetition_ratio":0.17853108,"special_character_ratio":0.4256665,"punctuation_ratio":0.0997921,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998117,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T02:59:37Z\",\"WARC-Record-ID\":\"<urn:uuid:34fe25d0-23f6-4101-a0a6-32dfff78cf17>\",\"Content-Length\":\"22510\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cfc8f14-cff7-47c9-8730-448ccc42f849>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9b0562a-bd9e-49ce-9844-5cc6372f42a7>\",\"WARC-IP-Address\":\"172.67.214.193\",\"WARC-Target-URI\":\"https://calculomates.com/en/divisors/of/14674\",\"WARC-Payload-Digest\":\"sha1:ENM2PQ2ZGWR2FLMWERLYUHXFRHU3BVET\",\"WARC-Block-Digest\":\"sha1:YO25XOAN2BLLHYJXYA4BIORHEB7HRBXA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337531.3_warc_CC-MAIN-20221005011205-20221005041205-00461.warc.gz\"}"} |
https://studylibid.com/doc/4350478/datums-and-map-projections-for-remote-sensing--gis-and-su. | [
"# Datums and Map Projections For Remote Sensing, GIS and Surveying by Jonathan C. Iliffe (z-lib.org)",
null,
"```r\nDATUMS A...'-;U MAP PROJECTIONS\nDatums and\nMap Projections\nfor remote sensing, GIS, and surveying\nJonathan Iliffe\nDepartment of Geomatic Engineering\nUniversity College London\nWhittles Publishing\nCRC Press\nBoca Raton London New York Washington, D.C.\nTypeset by\nWhittles Publishing Services\nWhittles Publishing,\nRoseleigh House,\nLatheronwheel,\nCaithness, KW5 6DW,\nScotland, UK\nDistributed in North America by\nCRC Press LLC,\n2000 Corporate Boulevard N.W.,\nBoca Raton,\nFL 33431, USA\nISBN 1-870325-28-1\nUSA IS,aN 0-8493-0884-4\n© 2000, reprinted 2002,2003 J. Iliffe\nNo part of this publication may be reproduced,\nstored in a retrieval system, or transmitted,\nin any form or by any means, electronic,\nmechanical, recording or otherwise\nwithout prior permission of the publishers.\nPrinted by Bell & Bain Ltd., Glasgow\n....\nContents\nPreface and acknowledgements\n1\n2\n3\n4\nIntroduction\n1.1 Background\n1.2 Coordinates and datums\nTwo- and three-dimensional coordinate systems\n2.1 Introduction\n2.2 Spherical coordinates\n, 2.3 Spheroidal coordinates\n2.4 Cartesian coordinates\nvii\n1\n1\n2\n8\n8\n8\n9\n12\nHeight and the geoid\n14\n3.1 The geoid\n3.2 Reference surfaces for height\n14\n19\nGlobal, regional, and local datums\n19\n4.1 Global datums\n4.2 Local and regional datums\n23\n26\nThe global positioning system\nIntroduction\nSystem overview\nPositioning with codes\nDifferential GPS using codes\nGPS phase measurements\n33\n5.1\n5.2\n5.3\n5.4\n5.5\n33\n33\n35\n38\n41\n6\nAspects of datum transformations\n6.1 Introduction\n6.2 Knowledge of separation, N\n6.3 Knowledge of height, H\n6.4 Knowledge of datum transformation parameters\n6.5 Datum transformations for precise applications\n45\n45\n45\n45\n47\n50\n7\nFundamentals of map projections\n7.1 Introduction\n7.2 Spheres and spheroids\n58\n58\n59\n5\nr\nI\n!\n1\nI\n7.3\n7.4\n7.5\n7.6\n7.7\n7.8\nGrids and graticules\nScale factor\nDevelopable surfaces\nPreserved features\nComputational 
aspects\nDesigning a projection\n59\n60\n61\n63\n65\n66\n8\nCylindrical projections\n8.1 Cylindrical equidistant projection\n8.2 Cylindrical equal area projection\n8.3 Mercator projection\n8.4 Transverse Mercator projection\n8.5 Oblique Mercator projection\n68\n68\n70\n72\n74\n77\n9\nAzimuthal projections\n9.1 General azimuthal projection\n9.2 Azimuthal equidistant projection\n9.3 Azimuthal equal area projection\n9.4 Azimuthal stereographic (conformal) projection\n9.5 Gnomonic projection\n79\n79\n79\n81\n82\n83\n10\nConic projections\n10.1 General conic projection\n10.2 Conic equidistant projection\n10.3 Albers (conic) equal area projection\n10.4 Lambert conformal conic projection\n84\n84\n87\n88\n88\n11\nSummary of information required\n11.1 Formulae\n11.2 Parameters\n92\nDirect transformations\n12.1 Compatibility of coordinate systems\n12.2 Ground control\n12.3 Plane transformations\n12.4 Unknown projections: measuring from maps\n95\n12\n92\n92\n95\n97\n98\n101\nCase studies\n13.1 Transformation of GPS data into a local datum\n13.2 A projection for Australia\n13.3 Establishment of maritime boundaries on a projection\n13.4 Two-dimensional transformation of satellite imagery\n13.5 Two-dimensional transformation of GPS data\n13.6 Determining the parameters of an unknown projection\n105\nAppendix\nAl Spherical coordinates\nA2 Basic geometry of the ellipsoid\nA3 Determination of transformation parameters by least squares\n123\nReferences\nIndex\n142\n144\n13\n105\n109\n111\n113\n115\n116\n123\n123\n127\n!\nPreface\nThis book has been written as a practical guide to the problems that are commonly\nencountered when using datums and map projections. It is aimed at students and\npractitioners in the areas of remote sensing, geographic information systems, and\nsurveying.\nSeveral trends in recent years have been responsible for bringing the importance\nof datums and projections to the attention of a wider community of users. 
One has\nbeen the development of new methods of acquiring spatial data, such as the global\npositioning system and satellite remote sensing. Another has been the introduction\nof geographic information systems for handling and manipulating data in digital form.\nTaken together, these have brought about a situation where users are no longer reliant\non paper maps provided by a single source, but can acquire and customise spatial\ndata as their needs dictate. To assure the quality of the products derived from these\nprocesses, it is essential that the diverse coordinate frameworks upon which the data\nare based should be fully understood.\nThis book is for users with different levels of background knowledge of the subject, and therefore provides some fundamental definitions as well as more detailed\nexplanations. The emphasis throughout is placed upon the solution of practical problems, giving alternatives where standard procedures may not be adequate, and giving\nworked examples and appendices with useful formulae.\nThose dipping into the book to find answers to specific queries on transforming\ndata will find a "route diagram" of different transformations and procedures in Chapter 1, which gives a reference to where more complete descriptions and formulae can\nbe found. Three chapters then follow on different types of coordinate systems and\ndatums: Chapter 2 considers two- and three-dimensional representations of Earth\ncoordinates; Chapter 3 looks at vertical datums; and Chapter 4 explores the ways in\nwhich datums are established on a global, regional, or local basis.\nChapter 5 gives an introduction to the global positioning system, and explores the\nproblems that can arise with datums when using this. Chapters 7 to 11 introduce the\nfundamentals of map projections, and look at the different types in some detail.
Models\nand procedures for transforming directly between data sets, based on the identification of common points, are treated in Chapter 12.\nChapter 13 introduces several practical projects in the form of case studies, which\ntogether illustrate the range of problems that may be encountered when working with\ndata sets on different datums or when attempting to carry out computations in projected coordinate systems.\nFinally, I should like to place on record my gratitude to many colleagues in the\nDepartment of Geomatic Engineering at University College London for the assistance that they have given me in writing this book. To Paul Cross, Ian Dowman,\nArthur Allan, John Arthur, David Chapman, and Joel Barnes, my thanks are due for\nmany interesting discussions on different aspects of the text and the application of\ntechniques in different fields.\nI should also like to thank Ryan Keenan, Joel Barnes, and Joanne Gilman for their\nassistance with many of the diagrams.\nThat said, all remaining errors in the book are mine.\nJONATHAN ILIFFE\n1\nIntroduction\n1.1 Background\nThis book is designed as a practical guide for those working with spatially referenced\ndata to the problems that may be encountered with datums and map projections. It is\naimed at those working in surveying, remote sensing, geographic information systems,\nand related areas, and therefore covers a wide range of scales and accuracy targets.\nPeople encountering these topics may have very different starting points in terms of\ntheir level of knowledge: those who are aware that they need to know something about\nthe subject, but are not yet sure what, would do well to commence by reading section\n1.2 before proceeding further.\nUntil recently, an in-depth knowledge of datums was generally confined to a fairly\nsmall group of scientists and to geodesists working in government survey departments.
Most surveying was carried out on local scales using terrestrial equipment\n(such as theodolites, levels and distance measurers) to establish positions with respect\nto nearby control points: although the map projection was usually a consideration,\nthe question of the datum was of minimal importance. For the general user requiring\naccess to spatial data (on land use, for example), almost the only practical source was\na published map.\nTwo developments in recent years have been principally responsible for changing\nthis state of affairs. One has been the explosion in the use of spatial data that has\nbeen brought about by the development of geographic information systems (GIS) for\nhandling and manipulating data in digital form. The other has been the development\nof techniques such as the global positioning system (GPS) and satellite (or airborne)\nremote sensing, which have made available entirely new methods of acquiring accurate data. Moreover, these are techniques that, due to their global, space-based nature,\nhave broken free completely of the localised survey. In short, this is a classic case\nof supply and demand: more and more data is available from a variety of different\nsources, and more and more people have the means and the need to make use of it.\nSupply, demand and - potentially - confusion. A situation now exists where it is\nquite common for the means of acquiring data to be using a reference system that is\ncompletely different from the one in which the data will ultimately be required. This\nis a particular case of the problem of combining data sets, in which new data is to\nbe put alongside archive material (which could have been referenced to one of many\nscores of possible datums and an uncountable number of possible map projections),\nbut the problem is a more general one.
A few examples will serve to illustrate this.\n• Geo-referencing a satellite image with ground control points that have been\nestablished using GPS, and with others that have been obtained from a published\nmap.\n• Combining digital map data from two different survey organisations, for example as part of a cross-border collaboration between neighbouring states.\n• Carrying out a survey with high precision GPS, and bringing it into sympathy\nwith existing mapping in a local coordinate system.\n• Navigating a ship or an aircraft using satellite systems, on charts that have been\nprepared in a local datum. Even a leisure user of handheld GPS receivers will\nneed to make an appropriate correction when using them in conjunction with a\nmap.\nThese examples are significant, in that they can each be said to represent an increasing trend. Indeed, it could be argued that the same forces that drive globalisation\nin the political and economic spheres are having their effect on trends in the area of\nspatial information. After all, a hydrographic surveyor in the English Channel and a\nland surveyor in Ghana could both simultaneously be using the same positioning satellite. They are probably not, at present, expressing the final result in the same datum,\nbut all major international organisations with an interest in spatial data are involved in\ndeveloping regional and global datums.\nThe examples given above focus on the problems of combining data from different\nsources. Another reason for understanding the nature of a reference system is that it is\noften necessary to carry out computations using the data: it must, after all, have been\nacquired for some purpose. Computations that use geographic or geodetic coordinates\n(latitude, longitude and height) require a particular branch of mathematics. Projections\nintroduce certain distortions of distance.\nIn some situations, this is hardly a problem. 
For example, if Ordnance Survey digital data is being used to determine the optimum route on the road network between\ntwo points, the distortions of the projection are insignificant at the accuracy level\nrequired. To take an example at the other extreme of accuracy requirements, however, the establishment of maritime boundaries between states involves determining\ndistances between median lines and coastal points. This computation can be done using geodetic coordinates, but not without difficulty. Using a projection is easier, but\nthe distances will be distorted and the computation will be inexact: although this may\nbe insignificant over small areas if an appropriate projection is used, it is not likely to\nbe sufficiently accurate for large areas such as the North Sea. Where the limit lies, and\nwhat an appropriate projection is, are questions it is necessary to be able to answer\nwhen tackling problems of this nature.\nThe first intention of this book is to clarify the definitions of geodetic datums and\nmap projections, and to explore some of their important characteristics. The procedures for transforming between data sets under a variety of different circumstances\nwill then be discussed.\nThis is information which, in theory at least, is available in most textbooks on\ngeodesy. In practice, however, there are two problems that can cause difficulties.\nFirstly, datums and transformations usually form a limited part of the content of geodetic textbooks, and the relevant information can be difficult to extract and is rarely\naimed at the non-specialist who still has a real need to interpret geodetic data. Secondly, and more importantly, the treatment in most standard texts tends to assume\nthat all necessary information is available and stops short of explaining what to do\nwhen it is not.
Anyone with any practical experience of manipulating spatial data will\nknow that not infrequently some apparently essential information is missing: nothing\nis known about the geoid, for example, or the exact datum and projection of a digitised\nmap are unknown.\nThe second principal aim of this book is therefore to act as a practical guide to\nthese problems and to show where particular procedures are necessary and where they\nare not, or to show what can be done about situations in which not all the information\nis available. This is reinforced by a series of case studies which examine particular\nproblems or combinations of problems, and by an appendix that groups together some\nof the essential mathematical formulae and procedures for computations within a coordinate system or for transforming between them. In general, however, the assumption will be that most readers are well supplied with software packages to carry out\nthe computations: interpreting what they are doing is the key issue.\n1.2 Coordinates and datums\nConsider each of the following statements:\n• The height of the point is 3.122 m.\n• The height above mean sea level is 10.983 m.\n• The latitude is 32° 10' 12.23" .\n• The northings of the point are 152345.834.\nAll of these express coordinates with a great deal of precision: the other thing that\nthey have in common is that they are all - to a greater or lesser extent - ambiguous\nstatements, as they contain no information on the coordinate framework in which they\nare measured. What would render them unambiguous would be a clear definition of\nthe datum that is associated with each one.\nWhat is a datum? Many readers will have encountered the use of the expression\nwith reference to heights: for example, in the statement 'this point is 3.2 m above the\ndatum'. 
This therefore seems a reasonable place to start.\nIf, for example, an engineer is interested only in the relative heights of points\nwithin a construction project, and not in their relationship to the outside world, then\nit will be acceptable to designate one point as having an arbitrary height - 100 m\nfor example - and finding the heights of all others with respect to this. Such an act\nhas effectively defined a datum, because it has established the position of the origin\nof the coordinate system. It would also have been possible to define the datum by\ndesignating the height of any other point, in which case all heights would be different\n(see Fig. 1.1). Thus, the definition of the datum is seen to be essential information\nthat accompanies the given coordinates.\nFigure 1.1 A point with different heights in datum A and datum B.\nThis is also the situation with two- or three-dimensional coordinate systems such\nas eastings and northings, or XYZ, or latitude and longitude. However, an additional\ncomplication is the fact that - without changing the datum - it is possible to express\nthe coordinates in a different way. A useful example of this is the two-dimensional\ncoordinate system shown in Fig. 1.2. In this example, the coordinates of the point P\nmay be quoted in either the rectangular form (X, Y) or the polar form (r, θ): the point\nis that changing from one to the other is a relatively straightforward procedure that\ndoes not involve changing the point of origin, and does not imply a change of datum.\nFigure 1.2 Rectangular and polar coordinates in one datum.\nA similar situation exists in geodesy, but not all of the means of expressing position\n(or of converting between different forms) are as simple as in the above example.
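The change of representation shown in Fig. 1.2, from rectangular (X, Y) to polar (r, θ) with no change of datum, can be sketched in a few lines of Python (an illustrative aside, not code from the book):

```python
import math

def rect_to_polar(x, y):
    """Rectangular (X, Y) to polar (r, theta); the origin and datum are unchanged."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_rect(r, theta):
    """Polar (r, theta) back to rectangular (X, Y)."""
    return r * math.cos(theta), r * math.sin(theta)

# Round trip: converting P to polar form and back recovers the same point.
r, theta = rect_to_polar(3.0, 4.0)
x, y = polar_to_rect(r, theta)
```

The round trip changes only the form of the coordinates, which is exactly the point made in the text: no new datum is implied.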
With\nthis in mind, it is convenient to have a diagram of the different types of coordinate systems, with cross-references to where in this book fuller definitions or explanations of\nformulae may be found. This diagram can then be used as a 'route map' in navigating\nbetween different sets of coordinates.\nIn Fig. 1.3, each horizontal row represents a different datum - in this case labelled\ndatums A and B. In this book, datums are covered in Chapter 4: international and\nsatellite datums such as WGS-84 and ITRF are treated in section 4.1, and locally\ndefined datums that are used as the basis of coordinates in individual countries or\nregions are discussed in section 4.2.\nFigure 1.3 Complete procedure for transformations between different datums and projections.\nWithin each row, each vertically arranged box represents a different method of\nexpressing coordinates. Brief, and not necessarily rigorous, definitions of these are as\nfollows:\n• Projection coordinates: An arrangement whereby the curved surface of the\nEarth is represented as a plane, thereby introducing distortions. Within this\nplane, a simple set of XY or east and north axes is defined. This topic is introduced in Chapter 7, and individual types of projection are discussed in Chapters\n8-11. An example of the issues raised by carrying out computations within a\nprojected coordinate system is given in section 13.3.\n• Orthometric heights (or heights above the geoid): Heights defined above the\nirregular surface, the geoid, that closely approximates mean sea level.
The geoid\nand its characteristics are explained in section 3.1, and the practicalities of establishing a vertical datum, and the consequences of the differences between the\ngeoid and mean sea level, are discussed in section 3.2.\n• Geodetic coordinates: Latitude and longitude defined with respect to a spheroid.\nThe fundamental definitions of these are given in section 2.3.\n• Spheroidal (or ellipsoidal) heights: Heights defined above a spheroid which has\nbeen established as the datum for a particular country or region, or on a global\nbasis.\n• Cartesian coordinates: Three-dimensional coordinates defined with respect to\na set of axes with their origin at the centre of the spheroid used in the datum.\nThese are discussed in section 2.4.\nAny set of coordinates will be in one of the forms shown in Fig. 1.3. Most users\nof spatial data will want to do one of two things with the data:\n• Carry out computations within the coordinate system that the data are in - for\nexample to compute distances or angles between sets of points. In this case\nreference should be made to the sections of this book that describe the nature\nand characteristics of the different types of coordinate systems.\n• Convert the data to another coordinate form or another datum. This will involve one or more steps in the transformation process, which are labelled on the\ndiagram and described below.\n1. The vertical datums used for orthometric heights are not actually related\nto the two-dimensional datums, and so H is on a separate datum to the\neastings and northings (E, N). Section 3.2 discusses vertical datums, and\npoints out that a change in the vertical datum changes the orthometric\nheight by at most 1 m: for many purposes it will not be necessary to introduce a change, and so the orthometric height in the two systems will be\nthe same.\n2.
Direct conversion from one map to another is possible for low levels of\naccuracy, provided that common points can be identified in both systems.\nChapter 12 explores the limitations of this approach, with examples being\ngiven in the case studies in sections 13.4 and 13.5. The mathematical\nbasis for determining the parameters of the transformation is discussed in\nthe appendix.\n3. The formulae for the conversion from projection coordinates to geodetic\ncoordinates (and vice versa) depend on the type of projection. Chapter 11\nsummarises the information that will be required; section 12.4 discusses\nthe options available if some of the parameters are not available, with the\ncase study in section 13.6 giving an example of this.\n4. The conversion from geoidal or orthometric heights to spheroidal heights\n(and vice versa) requires a knowledge of the separation between the geoid\nand the spheroid, as discussed in section 3.1. The basic equations to be\nused are (3.1) and (3.2), which are given in section 3.2. Sections 6.2 and\n6.5 respectively discuss the approaches that may be adopted for low and\nhigh accuracy applications if no information is available on the geoid-spheroid separation. It should be remembered that the value of the separation between geoid and spheroid is dependent on the datum that is being\nused.\n5. The conversion from geodetic to cartesian coordinates is straightforward,\nand requires only a knowledge of the parameters of the spheroid used in\nthe datum. The formulae are given in section 2.4.\n6. Molodensky's formulae (section 4.2) can convert directly from geodetic\ncoordinates on one datum to geodetic coordinates on another, given the\nchange in spheroidal parameters and the offset between the two datums.\nThis approach is easier for computations on a hand calculator, but otherwise the intermediate conversion to cartesian coordinates is conceptually\neasier (and is in any case a step that is simply hidden in the direct formulae).
This book will generally adopt this approach.\n7. This step represents the actual datum transformation - a process that involves at least a three-dimensional shift, and possibly rotations and a scale\nchange. Section 4.2 introduces the topic; some of the problems that may\nbe encountered are covered in Chapter 6. A full treatment of the determination of transformation parameters by least squares is given in the appendix, and the case study in section 13.1 gives a numerical example of\nthe transformation of GPS data into a local datum.\n2\nTwo- and three-dimensional\ncoordinate systems\n2.1 Introduction\nAs explained in section 1.2, a datum is a reference system in which the coordinates of\nspatial data may be expressed. It may be a three-dimensional system in cartesian or\ncurvilinear form; it may be two-dimensional in a map projection or a locally defined\nsystem; or at its simplest it may be a one-dimensional system for expressing heights.\nThis chapter will consider the geometrical models that are used to define reference\nsystems for expressing two- and three-dimensional coordinates of points.\nIn order to follow the definitions of the systems used, it is first necessary to consider the shape and size of the Earth.\n2.2 Spherical coordinates\nThe first approximation that can be made to the shape and size of the Earth is that it\nis a sphere of radius 6371 km. Three-dimensional spherical coordinates can then be\ndefined with respect to this shape (see Fig.
2.1):\n• latitude: the angle north or south from the equatorial plane, φ\n• longitude: the angle east or west from the Greenwich meridian, λ\n• height: a distance in metres above (or below) the sphere, h.\nFigure 2.1 Spherical coordinates. φ, latitude; λ, longitude; h, height.\nThe definition of latitude is a natural one, in that the poles are clearly defined\npoints on the surface of the Earth, and the equator is the circle that bisects the two.\nOn the other hand, the definition of longitude is to some extent arbitrary, as there are\nno physical reasons for choosing a particular meridian as the reference value. Historically, different prime meridians were used by different states, but the convention of\nusing Greenwich is now universal.\nThe relationship between the definition of longitude in a particular datum and the\nestablishment of a time datum should be remarked upon. In effect, the determination\nof the longitude with respect to the Greenwich meridian could only be carried out with\na precision equivalent to the establishment of Greenwich Mean Time at the site of the\ndatum. A 1 s error in the determination of time would translate to a 15 arc\nsecond rotation of the datum with respect to the Greenwich meridian.\nIn conjunction with this coordinate system, it is useful to define the following\nterms:\n• parallels of latitude: lines of equal latitude on the surface of the sphere\n• meridians: lines of equal longitude.\nFigure 2.2 shows the pattern that results from parallels and meridians depicted at\nintervals of 5°.\nFor many applications that do not require the highest accuracy, the sphere is an\nadequate representation of the Earth.
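Computations on this spherical model, such as the distance and azimuth between two points of known latitude and longitude, can be sketched as follows (illustrative Python using the 6371 km radius quoted in the text; not code from the book):

```python
import math

R = 6371000.0  # mean Earth radius in metres (sphere of radius 6371 km)

def distance_azimuth(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and forward azimuth (degrees, clockwise
    from north) between two points given in degrees on a sphere of radius R."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dl = l2 - l1
    # Spherical law of cosines for the angular separation.
    c = math.acos(math.sin(p1) * math.sin(p2) +
                  math.cos(p1) * math.cos(p2) * math.cos(dl))
    # Forward azimuth from point 1 towards point 2.
    az = math.atan2(math.sin(dl) * math.cos(p2),
                    math.cos(p1) * math.sin(p2) -
                    math.sin(p1) * math.cos(p2) * math.cos(dl))
    return R * c, math.degrees(az) % 360.0
```

For example, a quarter of the equator (0°N 0°E to 0°N 90°E) gives an angular separation of 90° and an azimuth of due east.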
Appendix section A1 summarises some of the\nformulae that can be used in conjunction with the spherical model, such as finding the\ndistance and azimuth between points of known coordinates.\n2.3 Spheroidal coordinates\nA better approximation to the shape of the Earth is that it is an ellipsoid of revolution,\noften more conveniently called a spheroid. This surface is formed by an ellipse which\nhas been rotated about its shortest (minor) axis, or by 'squashing' a sphere at the poles.\nThe term ellipsoid can also be used for this shape and, although some would differentiate between an ellipsoid and a spheroid, in this context the two can be regarded as\nsynonymous.\nA spheroid formed in this way is referred to as an oblate spheroid, as opposed to a\nprolate spheroid which would be formed by extending the distance between the poles.\nFigure 2.2 Meridians and parallels at 5° intervals.\nAs all the spheroids referred to in this text are oblate, the term spheroid can be used\nwithout ambiguity.\nA spheroid is defined by the size of two parameters (see Fig. 2.3):\n• the semi-major axis, a\n• the semi-minor axis, b.\nFigure 2.3 Defining parameters of a spheroid.\nFrom these two parameters, it is possible to derive further definitions. Thus:\n• flattening, f, is defined as\nf = (a - b) / a    (2.1)\n• eccentricity, e, is defined as\ne² = (a² - b²) / a²    (2.2)\nFurthermore, e, f and b can be related to each other as follows:\ne² = 2f - f²    (2.3)\nb/a = (1 - f) = √(1 - e²)    (2.4)\nThus, a spheroid can be completely defined using the two parameters a and b, or\na and f, or a and e, and the remaining parameters can be found as necessary.\nA typical value for a would be 6378 137 m and for f would be 1/298.
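The relations between a, b, f and e given in equations (2.1) to (2.4) can be checked numerically. The sketch below uses the typical values quoted in the text (a = 6378 137 m, flattening 1/298) and is illustrative only:

```python
import math

a = 6378137.0    # semi-major axis in metres (typical value from the text)
f = 1.0 / 298.0  # flattening (typical value from the text)

b = a * (1.0 - f)              # semi-minor axis, from f = (a - b)/a
e2 = (a**2 - b**2) / a**2      # eccentricity squared

# Identity e^2 = 2f - f^2, and b/a = (1 - f) = sqrt(1 - e^2):
assert math.isclose(e2, 2.0 * f - f**2)
assert math.isclose(b / a, math.sqrt(1.0 - e2))
```

Any one pair (a, b), (a, f) or (a, e) fixes the remaining parameters, as the text observes.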
A spheroid\nwith this degree of flattening would, to the eye, appear indistinguishable from a sphere,\nand it must therefore be emphasised that most diagrams of the spheroid exaggerate the\nflattening for purposes of clarity.\nIt is now possible to define a set of coordinates with respect to this spheroid. Once\nagain these are defined as latitude, longitude and height. These are shown in Fig. 2.4,\nfrom which it is apparent that the latitude and longitude are defined with respect to the\ndirection of the spheroidal normal, a line from the point in question that is perpendicular to the spheroid.\nFigure 2.4 Geodetic coordinates. φ, latitude; λ, longitude; h, height.\nCoordinates defined in this way are known as geodetic coordinates, and are the\nbasis for all mapping systems. The distinction between geographic coordinates implying a spherical Earth model and geodetic coordinates implying a spheroidal one is\nuseful, but by no means universally adopted: many texts use geographic coordinates\nas a generic term that also encompasses latitude and longitude defined with respect to\na spheroidal model.\nIt is possible to carry out computations in a spheroidal coordinate system, particularly in the region close to its surface. In traditional land surveying, these would have\nbeen used when computing coordinates in large scale geodetic surveys, a process that\nhas largely been superseded by the use of cartesian coordinates (section 2.4) and the\nGPS (Chapter 5). Over shorter distances it has usually been possible to use a projected\ncoordinate system. One of the few remaining areas where computations are needed\nin geodetic coordinates is in the definition of boundaries between states, or between\nmineral concessions. For this reason, a summary of some of the most useful formulae\nand procedures is given in Appendix section A2.
Otherwise, any textbook on geodesy,\nsuch as Bomford (1980) or Torge (1991), gives further and more detailed information\non spheroidal geometry.\n2.4 Cartesian coordinates\nThe formulae involved in computations based on geodetic coordinates are complicated, and entirely inappropriate when considering observations made to satellites.\nMore appropriately, a set of cartesian coordinates (X, Y, Z) is defined with its\norigin at the centre of the spheroid. The Z axis is aligned with the minor axis of the\nspheroid (the 'polar' axis); the X axis is in the equatorial plane and aligned with the\nGreenwich meridian; the Y axis forms a right-handed system (see Fig. 2.5).\nFigure 2.5 Cartesian coordinates.\nGeodetic coordinates may be transformed to cartesian coordinates by a set of formulae which require a knowledge of the parameters of the spheroid:\nX = (ν + h) cos φ cos λ\nY = (ν + h) cos φ sin λ    (2.5)\nZ = {(1 - e²)ν + h} sin φ\nwhere\nν = a / √(1 - e² sin²φ)    (2.6)\nφ is the latitude, positive north; λ is the longitude, positive east; and h is the spheroidal\nheight (the height above the spheroid). The reverse computation, in which geodetic\ncoordinates are found from cartesian ones, is also possible:\ntan λ = Y / X    (2.7)\ntan φ = (Z + εb sin³u) / (p - e²a cos³u)    (2.8)\nh = (p / cos φ) - ν    (2.9)\nwhere\np = √(X² + Y²)    (2.10)\ntan u = Za / pb    (2.11)\nε = e² / (1 - e²)    (2.12)\nand all other terms are as defined above.\n3\nHeight and the geoid\n3.1 The geoid\nThe spheroid is a good approximation to the shape of the Earth, but not an exact\nrepresentation of it. The shape of the Earth is given by the form of a surface which\nis everywhere perpendicular to the direction of gravity (such a figure is termed an\nequipotential surface). The force and direction of gravity are affected by irregularities\nin the density of the Earth's crust and mantle.
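The conversions of section 2.4, equations (2.5) to (2.12), can be sketched in Python as below. This is a minimal illustration with assumed spheroid parameters; the reverse step uses a Bowring-style closed formula of the form given in equations (2.7) to (2.12), without iteration:

```python
import math

# Assumed spheroid parameters (illustrative, not prescribed by the text):
a = 6378137.0
f = 1.0 / 298.0
b = a * (1.0 - f)
e2 = (a**2 - b**2) / a**2   # e squared
eps = e2 / (1.0 - e2)       # epsilon of equation (2.12)

def geodetic_to_cartesian(lat, lon, h):
    """Equations (2.5)-(2.6): geodetic (degrees, metres) to cartesian X, Y, Z."""
    phi, lam = math.radians(lat), math.radians(lon)
    nu = a / math.sqrt(1.0 - e2 * math.sin(phi)**2)
    return ((nu + h) * math.cos(phi) * math.cos(lam),
            (nu + h) * math.cos(phi) * math.sin(lam),
            ((1.0 - e2) * nu + h) * math.sin(phi))

def cartesian_to_geodetic(X, Y, Z):
    """Equations (2.7)-(2.12): cartesian back to geodetic (degrees, metres)."""
    lam = math.atan2(Y, X)                          # (2.7)
    p = math.sqrt(X**2 + Y**2)                      # (2.10)
    u = math.atan2(Z * a, p * b)                    # (2.11)
    phi = math.atan2(Z + eps * b * math.sin(u)**3,  # (2.8)
                     p - e2 * a * math.cos(u)**3)
    nu = a / math.sqrt(1.0 - e2 * math.sin(phi)**2)
    h = p / math.cos(phi) - nu                      # (2.9)
    return math.degrees(phi), math.degrees(lam), h
```

A round trip (geodetic to cartesian and back) recovers the starting coordinates with negligible error for points near the Earth's surface.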
It therefore follows that the form of an\nequipotential surface is somewhat irregular.\nOf course, the fact that the spheroid is only an approximation of the shape of the\nEarth does not invalidate it as a model to be adopted in expressing coordinates. It is\nimportant to note that using geodetic coordinates based on a spheroid does not lead\nto errors when expressing the position of a point: it is simply a question of using a\nsurface with a convenient mathematical expression onto which positions are projected.\nTo a very good approximation, the form of the mean sea level surface is equipotential, since the sea would be expected to be perpendicular to gravity. (In fact the seas\nand oceans contain permanent currents which cause a permanent slope with respect\nto the direction of gravity.) The true shape of the Earth is known as the geoid, and\nthis can now be defined as that equipotential surface that most closely corresponds to\nmean sea level.\nWorldwide, the difference between the geoid and mean sea level is at the most\naround 1 m (and changes slowly over wavelengths of tens of kilometres), and so for\nmany purposes these two can be considered synonymous. Sometimes, however, it is\nimportant to note the difference.\nThe spheroid is a very good approximation to the geoid, but there are significant\ndifferences. If a spheroid is formed which best fits the geoid, the differences between\nthe two amount to ± 100 m, with a global root mean square of around 30 m. Figure 3.1\nshows the form of the geoid worldwide. The height of the geoid above the spheroid is\nknown as the geoid-spheroid separation, or often just the separation, and is usually\ngiven the symbol N. This may be a positive or a negative quantity, as shown in Fig.
3.2.\nFigure 3.1 The form of the geoid worldwide.\nFigure 3.2 Sectional view of a spheroid and the geoid.\nSome texts refer not to the height of the geoid above the spheroid, but to the\nundulation of the geoid. This is a rather unsatisfactory term, as the derivation of the\nword implies that the phenomenon under consideration is a wave, rather than a height.\nIn this book N is called the separation, and the expression undulations of the geoid is\nreserved to refer to the presence of 'waves' in the part of the geoid in question. (A\ngeoid without undulations, for example, would imply that at the accuracy required it\nappears to be at a constant height above the spheroid, or can be modelled simply as an\ninclined plane.)\nNote also in Fig. 3.2 that the direction of the vertical (perpendicular to the geoid) is\nnot usually coincident with a spheroidal normal. The angle between the two is termed\nthe deviation of the vertical, and is usually resolved into a north-south component,\nξ, and an east-west component, η. These angles typically amount to a few seconds\nof arc.\nAstronomical coordinates may be defined with respect to the direction of the vertical in the same way that geodetic coordinates are defined with respect to the direction\nof the normal. These coordinates would be used in situations where a knowledge of\nthe direction of the vertical is required (for example in determining the slope of the\ngeoid), but they are somewhat irregular, and the geodetic coordinates remain the basis\nof the mapping system.\nIt must be emphasised that any diagram of the geoid will have to greatly exaggerate\nits deviation from the spheroid in order to be comprehensible. If it is recalled that the\ndeviation between the two is less than 100 m over a body with dimensions of around\n6400 km, then it will be appreciated that on a correct diagram the two surfaces would\nbe indistinguishable.
In fact, the geoid never actually 'doubles back on itself' - it is always convex - and therefore astronomical coordinates are always unique (it is impossible for two points to have the same latitude, a point not apparent from Fig. 3.2).
In many situations it is necessary to have a knowledge of the form of the geoid, either globally or for a specific region. This can be derived (with a great deal of effort) from gravity observations, observations of satellite orbits, satellite altimetry over the oceans, and other data sources. In developed countries with dense gravity observations the separation may be known to an accuracy of 5-10 cm. In other parts of the world it may be much worse than this. The point of reference would be the national mapping organisation for the country concerned or a university research department.
Globally, the geoid may be found from a set of coefficients of a spherical harmonic expansion, often referred to as a global Earth model. This expresses the geoid in terms of a series of functions over the sphere that are of increasingly small wavelength. The smallest wavelength that the model can express is a function of its highest degree: a model that is described as being complete to degree NMAX can model wavelengths down to 180°/NMAX. A global Earth model used to compute satellite orbits, for example, might be complete to degree 36, which would be sufficient to describe the long wavelength part of the geoid, but could not express any effects with a wavelength of less than 5°, or around 500 km.
Until recently, some of the most finely detailed Earth models were produced by the Ohio State University, and the model referred to as OSU91 (Rapp and Pavlis, 1990) was widely circulated and is still very much used. Being complete to degree 360, this models the form of the geoid down to wavelengths of around 0.5°, or 50 km, and is accurate to about 1-2 m.
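The degree-to-wavelength rule can be checked with a few lines of Python; the function below is our own sketch, not anything from the text:

```python
def min_wavelength_deg(nmax):
    """Smallest wavelength (in degrees) resolvable by a spherical harmonic
    model complete to degree nmax: 180 / nmax."""
    return 180.0 / nmax

# Degree 36 (orbit models): nothing finer than 5 degrees, around 500 km;
# degree 360 (OSU91, EGM96): down to 0.5 degrees, around 50 km.
print(min_wavelength_deg(36))   # 5.0
print(min_wavelength_deg(360))  # 0.5
```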
In some parts of the world the accuracy is rather worse than this, however, particularly in those areas where a dense network of gravity observations is not available.
OSU91 has now been superseded by a model that has been jointly determined by the National Imagery and Mapping Agency (NIMA) and the National Aeronautics and Space Administration (NASA), and is referred to as the Earth Geopotential Model 1996, or EGM96 (Rapp and Nerem, 1994) (see Fig. 3.1). This too is complete to degree 360, but gains an increased accuracy through the use of additional data, in particular gravity observations that were previously unavailable and new data from satellite missions.
The coefficients of the spherical harmonic expansion of EGM96, together with a computer program for determining the geoid from them, are freely available on the World Wide Web (NASA, 1996). Also available is a file of geoid point values determined on a 0.25° grid: a considerable amount of processing is saved if the geoid is interpolated from these values.
Clearly, then, a model such as EGM96 will solve the geoid problem in most parts of the world for users who require an accuracy of no better than 1 m. For accuracies greater than this, it is instructive to consider the size and characteristics of the part of the geoid that is not modelled by EGM96.
As stated at the start of this section, the irregular nature of the geoid is caused by the underlying irregularities in the Earth's gravity field. For longer wavelengths (which are usually of greater amplitude) this is usually due to deep-seated effects such as the interaction between the Earth's crust and mantle at the boundaries of tectonic plates (Lambeck, 1988). These features are usually adequately described by the global models. The shortest wavelength effects are mostly due to the irregularities of the terrain (Forsberg and Tscherning, 1981).
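Interpolating a separation from a file of gridded point values, as suggested above, might be sketched as follows. This is a generic bilinear interpolation with invented grid values; it is not NIMA's official interpolation program, and a real EGM96 grid file has its own ordering conventions to respect:

```python
def interpolate_geoid(lat, lon, grid, lat0, lon0, step=0.25):
    """Bilinear interpolation of the separation N (metres) from a regular
    grid of geoid point values. grid[i][j] holds N at latitude
    lat0 + i*step, longitude lon0 + j*step. Illustrative only."""
    i = int((lat - lat0) // step)          # cell row
    j = int((lon - lon0) // step)          # cell column
    u = (lat - (lat0 + i * step)) / step   # fractional position in cell
    v = (lon - (lon0 + j * step)) / step
    return ((1 - u) * (1 - v) * grid[i][j]
            + (1 - u) * v * grid[i][j + 1]
            + u * (1 - v) * grid[i + 1][j]
            + u * v * grid[i + 1][j + 1])

# A single 2 x 2 cell of invented separations (metres) near 16 S, 30 E:
grid = [[-8.0, -7.6],
        [-7.8, -7.2]]
print(round(interpolate_geoid(-15.875, 30.125, grid, lat0=-16.0, lon0=30.0), 2))  # -7.65
```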
Importantly, then, the size of the undulations of the geoid that exist over and above those effects modelled by EGM96 or OSU91 are very much correlated with the roughness of the terrain.
To get a feel for the characteristics of the geoid, examples will be shown of a recent project to determine the geoid in Zimbabwe, with data drawn from Nyamangunda (1997). Figure 3.3 shows a sample cross-section of the geoid at latitude 16° S, with the longitude ranging from 29° to 31° E: this is a distance of around 200 km. The smoother of the two lines in Fig. 3.3 is the underlying trend of the geoid that has been derived from the global model. The more complex line is a better approximation of the geoid, as this has been derived by using additional terrain and gravity data, and is therefore capable of picking up shorter wavelength effects.

Figure 3.3 Cross-section of the geoid at latitude 16° S.

On looking at a 2° cross-section of the geoid, the smoothing effect of the global model becomes apparent. It is clear, nevertheless, that the very large changes in the separation are generally quite well modelled by the global expansion. Taking the less smooth line to represent the actual geoid (although in fact the scarcity of data in this part of the world means that it does still have errors of a decimetre or so) some quite substantial undulations are still seen over distances of a few kilometres.
With the aid of Fig. 3.3, and referring also to the global picture of the geoid seen in Fig. 3.1, we now consider two practical questions about the geoid.

3.1.1 To what extent can the geoid-spheroid separation be considered a constant value?

Only to a very limited extent.
If a project requires centimetric accuracy, it would not be safe to assume a constant value for the separation over distances of more than about 200 m.
For less precise applications, requiring an accuracy of, say, 2 m, Fig. 3.3 suggests that the separation might be considered constant over distances up to around 30-50 km. Referring to Fig. 3.1, however, it is notable that in some parts of the world the separation may change by 10 m over distances as short as 50 km.
To a certain extent, the situation is eased when using a local reference system, as described in section 4.2, as opposed to a global datum such as WGS84. A local reference system will usually have been optimised to fit the shape of the geoid in a particular region, and the slope of the geoid is usually less severe. In the UK, for example, the total range of the geoid separation is 0-5 m. But note that this is true only when expressing the geoid height with respect to the British datum: it is not true when expressing it with respect to WGS84.
It should also be pointed out that this applies principally to slopes over longer distances: short-wavelength undulations are present whatever reference surface is used.

3.1.2 To what extent can the geoid be modelled as a uniform slope?

This question is of particular importance in precise GPS applications, as section 6.5 shows that a section of the geoid that approximates a sloping plane can effectively be ignored when using similarity transformations to tie into local control points.

Figure 3.4 50 km cross-section of the geoid at latitude 16° S.

To answer the question, let us look a little closer at a smaller part of the geoidal cross-section shown in Fig. 3.3. This time, Fig. 3.4 shows a section around 50 km long. Looking at section AA in this example it seems that the geoid could be modelled by a uniform slope with a precision of around 50 cm for a distance of 50 km. For the shorter section BB, which is around 20 km, the precision is around 10-15 cm. We could tentatively infer that these results are typical, although it should be noted that the terrain in this test area is fairly rugged (with heights up to 1000 m), and a better result might be expected in less hilly terrain.

3.2 Reference surfaces for height

An important consequence of the irregularity of the geoid is that it leads to an alternative definition of height to the spheroidal height defined in section 2.3.
This is the height of a point above the geoid or the orthometric height, usually given the symbol H.
As the height above the geoid is approximately the same as the height above mean sea level, this is the value that is usually used in surveying and mapping, and which is displayed on almost all maps and charts. It is, in fact, the height which can be obtained by levelling, provided that for very precise applications suitable corrections are made to counter the effect of convergence of the Earth's equipotential surfaces; these corrections are measured in centimetres over many tens of kilometres.
The reference point for mean sea level in a national mapping system is usually a tide gauge. Since there is a difference between the geoid and mean sea level, as mentioned in section 3.1, the datum for height is slightly different in each country. The significance of this point is illustrated by considering a project such as the construction of the Channel Tunnel. If the tunnellers from each end were due to meet at a midpoint defined as a certain height above (or in this case below) the geoid, then in principle there would not be a problem since the geoid is a unique surface. In practice, however, the vertical reference surface in each country would be defined with respect to different tide gauges. To complicate matters further, these tide gauges are not placed on either side of the Channel (in which case a fairly small slope of the sea surface might be expected over this short distance) but in Newlyn (in Cornwall) and Marseilles (on the Mediterranean coast).
The difference in the vertical datums between Great Britain and France is therefore a function of the slope of the sea surface over the rather longer (sea) distance between these two ports, and in fact amounts to some 30 ± 8 cm (Willis et al., 1989).
To emphasise what is happening in this situation, let us restate the definition of the geoid given in the previous section: it is that equipotential surface that most closely corresponds to mean sea level. In effect, we can now see that this is an ideal, and the practical realisation of the situation is in fact that each country has selected a slightly different equipotential surface. Rapp (1994) quantified these differences by comparing determinations of the geoid separation that were found from the differences between orthometric and ellipsoidal heights on the one hand, and from global geopotential models on the other. Although the precision of these determinations is not sufficient for high accuracy engineering work, they are illustrative of the general problem, and are summarised in Fig. 3.5.

Figure 3.5 Reference surfaces with respect to the ideal geoid (after Rapp, 1994). All units are centimetres.

In some countries, the situation depicted above is itself an oversimplification. One example is Australia, where the vertical datum AHD71 (Australian Height Datum 1971) was established by adjusting a network of levelling observations across the country while holding the values fixed at 30 tide gauges spread around the coast (Higgins et al., 1998). Since each of these tide gauges is establishing the datum at a different equipotential surface, what is effectively happening is that the reference surface is no longer a single equipotential surface, but changes gradually from place to place. This is illustrated symbolically in Fig. 3.6. Over the kind of distances involved, this has not previously been a problem, as the differences were exceeded by the accuracy that could be achieved from levelling. It does present a problem, however, in quoting a value for the geoid-spheroid separation that should be used to correct heights determined from GPS. This is another example of the way in which datums are having to be redefined in the light of the increased accuracies that are possible with satellite systems.

Figure 3.6 Vertical datum tied to several tide gauges.

Figure 3.7 Orthometric height and spheroidal height.

The relationship between the orthometric height, H, and the spheroidal height, h, is as shown in Fig. 3.7. Since the angle between the vertical and the normal is so small, it can be said without any significant loss of accuracy that:

h = H + N    (3.1)

It is emphasised that the height h is the value that normally results from satellite observations; the significance of this equation is therefore that in order to obtain a height above the geoid, H, it is necessary to know the separation, N.
Alternatively, since most precise satellite positioning is done by relative techniques (as described in Chapter 5), (3.1) can be recast in the form:

Δh = ΔH + ΔN    (3.2)

This means that a satellite system such as GPS can be used to find the change in spheroidal height (Δh) between, say, a bench mark and a new point. In order to find the orthometric height of the new point above the bench mark (ΔH) it is necessary to know only the change in the geoid separation (ΔN).
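Equations (3.1) and (3.2) translate directly into code. This is a minimal sketch; the function names and all numerical values are our own, invented for illustration:

```python
def orthometric_height(h, N):
    """Equation (3.1) rearranged: H = h - N, the height above the geoid
    from the spheroidal (ellipsoidal) height h and the separation N."""
    return h - N

def delta_orthometric(dh, dN):
    """Relative form, equation (3.2): change in orthometric height from a
    GPS-derived change in spheroidal height and the change in separation."""
    return dh - dN

# Invented figures: ellipsoidal height 412.30 m at a point where the
# geoid lies 46.80 m above the spheroid:
print(round(orthometric_height(412.30, 46.80), 2))  # 365.5
# A 2.00 m rise in h where N rises by 0.05 m is a 1.95 m orthometric rise:
print(round(delta_orthometric(2.00, 0.05), 2))  # 1.95
```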
Within the accuracy limitations\ndiscussed in section 3.1, this can sometimes be ignored.\nAs a final comment here on reference surfaces for height, it is worth pointing out\nthat the advantages of using ellipsoidal heights should not be overlooked. Although\nthese are not always appropriate, there are many situations in which the physical link\nwith the direction of gravity, and with a surface perpendicular to this, is unnecessary.\nAn example of this might be the monitoring of the movement of a point over time,\nperhaps one near the summit of a volcano to give an indication of future eruption.\nIn this situation, a change in the height is all that is required, and a change in an\nellipsoidal height is the same as the change in the orthometric height to any accuracy\nthat could conceivably be required. The need for a determination of the separation is\nthus avoided, as is the danger of an apparent change being recorded as a result of an\nimproved determination of the separation at a later date.\nTo take another example, an aircraft landing at an airfield and using a satellite positioning system (giving ellipsoidal height) can determine its height above obstructions\nthat have themselves been surveyed using satellite techniques and have their heights\nexpressed in ellipsoidal terms. 
In this context, the position of the mean sea level surface becomes an abstract and irrelevant concept.
What might be thought of as a halfway position between using orthometric heights and ellipsoidal heights is to adopt a standard model of the geoid to correct satellite observations: if the model is not perfect, true orthometric heights will not be obtained. On the other hand, if the use of the model is universal only the most precise engineering applications will need to use any other form of height coordinate, and the

4 Global, regional, and local datums

4.1 Global datums

4.1.1 Satellite datums

On a global basis, the most appropriate version of a spheroid is one that has its origin at the centre of mass of the Earth and is of such a shape and size that it is the best possible approximation to the form of the geoid. Such a spheroid is necessary for any worldwide application, and is therefore the basis of a satellite reference system or datum. A datum with its origin at the centre of the Earth is said to be geocentric. This is a necessary condition for computations of satellite orbits. Since the advent of satellite geodesy in the early 1960s, there have been several attempts to define a global geocentric datum. The most recent of these is the World Geodetic System 1984 (WGS84).
The parameters of WGS84 are defined (Seeber, 1993) as:

a = 6378 137
f = 1/298.257223563

as well as further parameters relating to the gravitational field and the rate of rotation. Note that the shape and size of the spheroid thus defined are almost consistent with another commonly used global datum, the Geodetic Reference System 1980 (GRS80), which is mainly used in gravitational applications.
In fact, due to a slight difference in the original terms used for the definition of these two datums, there is a difference in their flattening (Hofmann-Wellenhof et al., 1997) that amounts to:

Δf = f_GRS80 − f_WGS84 = 16 × 10⁻¹²    (4.1)

For most practical purposes, this difference can be ignored and the spheroids considered equivalent.
The concept of realising a reference system, as opposed to defining it, is one that has already been touched on with respect to the vertical height datums of the last section, and will be explored more fully in the section on local and regional datums that follows. It is also something that should be considered when dealing with a satellite datum. In effect, this means that, although WGS84 can be defined as a geocentric datum and related to a certain spheroid, most users will encounter this reference system through the use of the GPS (Chapter 5). The starting point for their positioning is the satellite ephemerides (or orbital positions) that are broadcast by the system, which in turn have been determined from a set of monitoring stations. It is these monitoring stations that effectively realise the coordinate system. Using a different set of monitoring stations to determine the ephemerides (as, for example, the far more extensive network of the International GPS Service for Geodynamics) would then lead to a separation of the datum into two separate ones. This problem has now been largely overcome by linking all important global networks into the International Terrestrial Reference Framework discussed in the next section.
An earlier attempt to define a geocentric reference system was WGS72. The parameters of this are:

a = 6378 135
f = 1/298.26

WGS72 was a less accurate system than WGS84, and its origin has now been shown to miss the centre of the Earth by almost 5 m.
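The quantities derived from the defining constants a and f follow from the standard relations b = a(1 − f) and e² = 2f − f²; the sketch below (our own function name) also illustrates how small the GRS80/WGS84 flattening difference of equation (4.1) really is:

```python
def spheroid_constants(a, f):
    """Semi-minor axis b and first eccentricity squared e^2 from the
    defining constants a and f: b = a(1 - f), e^2 = 2f - f^2."""
    b = a * (1.0 - f)
    e2 = 2.0 * f - f * f
    return b, e2

# WGS84 defining values quoted in the text:
b, e2 = spheroid_constants(6378137.0, 1.0 / 298.257223563)
print(round(b, 4))   # 6356752.3142 (metres)
print(round(e2, 12))

# The GRS80/WGS84 flattening difference of equation (4.1) changes b by
# only about a tenth of a millimetre:
print(16e-12 * 6378137.0)
```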
Figure 4.1 shows the main differences between the two systems.

Figure 4.1 Principal differences between WGS84 and WGS72.

WGS72 is no longer used for the acquisition of new data, so its interest is really only in the interpretation of archive material. Again, it should be noted that there were several different realisations of WGS72 resulting from the use of broadcast and precise ephemerides.
Another group of satellite datums are those associated with GLONASS (Global Navigation Satellite System), the Russian equivalent of GPS. The datums used for this are the Soviet Geodetic System 1985 (SGS85) and its successor SGS90 (introduced in 1995, and alternatively referred to as PZ90). The defining geometrical parameters of both SGS85 and SGS90 (Hooijberg, 1997) are:

a = 6378 136.0
f = 1/298.257

The shape and size of SGS90 are therefore very similar to those of WGS84. Several attempts have been made to determine the transformation parameters between the two, such as those by Misra and Abbot (1994) and Rossbach et al. (1996). These have generally given similar results within the accuracy limitations that have been imposed by the relatively small networks of points used.
The best current indications are that PZ90 has a zero meridian that is rotated 0.6" to the east with respect to that of WGS84, and an equatorial plane that lies around 4 m to the north of WGS84's. These discrepancies would show up directly between absolute determinations of position by the two systems. In terms of relative positioning between two receivers, the rotation of 0.6" is the only relevant parameter, and is equivalent to 1 cm over a distance of about 3.5 km.
If a receiver makes use of satellites from both systems, the differences in the datums are not a problem for short distance or low accuracy applications; for high-precision baselines over long distances, better transformations will have to be developed (Walsh and Daly, 1998).

4.1.2 International reference frames

An international terrestrial reference system (ITRS) is one that is made use of throughout the world: WGS84 and PZ90 are examples of these. The most fundamental coordinate datum in existence is the International Terrestrial Reference Framework (ITRF). It is a highly accurate geocentric reference framework, with the coordinates of reference stations defined at the millimetre level.
The preparation and evaluation of the ITRF is the responsibility of the International Earth Rotation Service (IERS), which was established in 1988 to replace the International Polar Motion Service and the Earth rotation functions of the Bureau International de l'Heure. The wider remit of the IERS is to establish both the terrestrial reference frame and the international celestial reference system (a coordinate system that does not rotate with the Earth), and to determine the connections between the two systems. This process involves monitoring and predicting the rotation of the Earth and the movement of its poles, and is essential information if, for example, a determination of a satellite's position is to be given in a terrestrial system such as WGS84.
For the level of accuracy involved, it is no longer possible to have a reference system that is independent of time, since the deformations of the Earth's crust mean that the coordinates of points within the system are continually moving, typically by several centimetres per year.
In some respects, extremely high precision positioning is similar to very slow moving navigation.
The ITRF includes the velocities of its stations in the definition of the datum, but applications making use of its coordinates will usually do so by reference to a particular epoch, such as ITRF92, ITRF93, and ITRF94. The latest solution, ITRF96, has determined the positions of 290 stations worldwide, of which 40% have position uncertainties below 1 cm (IERS, 1998).
WGS84 was previously only defined at an accuracy level of around 50 cm. It was redefined in 1994, however, in a form that was compatible with ITRF92, and strictly speaking was then referred to as WGS84 (G730), with 730 being the GPS week number in which the change was effected (the first full week of January 1994). Hence WGS84 in this form is compatible with ITRF92 at the level of 10 cm (Hooijberg, 1997). There have been other redefinitions since then, such as WGS84 (G873), and although these are all slightly different datums this is noticeable only for applications of the very highest accuracy.
The 290 stations of the ITRF are rather thinly spread around the world, and for practical purposes it is necessary to densify this network: it can also be inconvenient in some situations to have a coordinate set that is continuously changing. The European Terrestrial Reference Framework 1989 (ETRF89) is a subset of ITRF defined in the European region, and was equivalent to the ITRF at the time of its implementation. There is in consequence a mismatch between ETRF89 and the current epoch of the ITRF. This becomes apparent, for example, if points are being fixed with respect to permanent GPS positions whose coordinates are published in ITRF, but where the final object is to position points in the ETRF system. The discrepancy amounts to several centimetres per year since ETRF was established, and a transformation is required for high precision applications.
The transformation parameters can be obtained from, for\nexample, Boucher and Altamimi (1998). Comments on the link between this system\nand the British one are given in section 4.2.2.\n4.2 Local and regional datums\n4.2.1 Definition\nA datum is defined by selecting an origin for a national or regional survey. At this point\nthe geoid-spheroid separation, N, and the deviation of the vertical are chosen, usually\nas zero. This has the effect of fixing the chosen spheroid to the geoid (coincident and\nparallel) at the point of origin. The orientation of the minor axis of the spheroid is\nmade parallel to the spin axis of the Earth by making Laplace azimuth observations:\nobservations of astronomical azimuth (with respect to the stars and the pole of the\nEarth's rotation) are converted to geodetic azimuth (defined on the ellipsoid) by a\nformula that forces the poles of the two systems to be the same. This combination\nof shape and size as given by the spheroid, and position as given by the fixing at the\norigin, is essentially what defines a datum.\nA spheroid is not a datum. Many countries may use the same spheroid, but they\nare on different datums as they have different points of origin. Before the advent of\nsatellite geodesy, all national datums were of necessity defined independently of each\nother.\nThe effect of defining the datum in this way is that the spheroid is not geocentric.\nFigure 4.2 shows the relationship between a local datum and WGS84. It can be seen\nthat in general a point has different coordinates in the two reference systems, whether\nthese are expressed in geodetic or cartesian form.\nAlmost by definition, a local datum approximates the geoid in the region much\nmore closely than does the global datum, or a datum optimised for a wider region.\nThis point is illustrated by Figs 4.3 and 4.4, which show the geoid with respect to the\nBritish and European datums respectively. 
Note that in these two diagrams the geoid is the same; it is the reference system which has changed. Each local or regional datum therefore has a point of origin which is offset from the centre of the Earth. The size of this offset may be as much as 1 km. It is usually expressed in components (ΔX, ΔY, ΔZ), where these values are effectively the coordinates of the origin of the local datum in WGS84.

Figure 4.2 A local datum and WGS84.

Hence a simple transformation of coordinates from a local system to WGS84 is given by:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{WGS84} = \begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix} + \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{local}    (4.2)

The transformation from geodetic coordinates on one datum to geodetic coordinates on another can thus be achieved by the intermediate step of first converting to cartesian coordinates via equations (2.5), applying equation (4.2), and then converting back to geodetic coordinates via equations (2.7)-(2.12).
Alternatively, it is possible to combine all these steps in one set of equations, such as the Molodensky formulae (Stansell, 1978). These give the changes in geodetic coordinates directly as:

\Delta\phi = \frac{-\Delta X \sin\phi\cos\lambda - \Delta Y \sin\phi\sin\lambda + \Delta Z \cos\phi + \Delta a\,\dfrac{\nu e^2 \sin\phi\cos\phi}{a} + \Delta f\left[\rho\dfrac{a}{b} + \nu\dfrac{b}{a}\right]\sin\phi\cos\phi}{(\rho + h)\sin 1''}    (4.3)

\Delta\lambda = \frac{-\Delta X \sin\lambda + \Delta Y \cos\lambda}{(\nu + h)\cos\phi\,\sin 1''}    (4.4)

\Delta h = \Delta X \cos\phi\cos\lambda + \Delta Y \cos\phi\sin\lambda + \Delta Z \sin\phi - \Delta a\,\frac{a}{\nu} + \Delta f\,\frac{b}{a}\,\nu\sin^2\phi    (4.5)

where all terms have previously been defined except

\rho = \frac{a(1 - e^2)}{(1 - e^2\sin^2\phi)^{3/2}}    (4.6)

Figure 4.3 The geoid with respect to the British datum, OSGB36. (Courtesy of J. Olliver, Oxford University.)

Figure 4.4 The geoid with respect to the European datum, ED50. (Courtesy of J.
Olliver, Oxford University.)

4.2.2 Realisation of a datum

The previous section provided a definition of a reference system. In fact, before it can be used it is necessary to realise the datum; that is, to provide physical monuments of known coordinates. Any mapping or surveying which is carried out is then related to the coordinates of those control points that are available locally, rather than to the rather abstract concept of a datum defined at an origin point.
This then presents the problem that the datum as realised includes all the measurement errors and computational approximations of the original survey, which would usually have been carried out by a process of triangulation using ground survey techniques. Another significant effect is that the minor axis of the spheroid would have been made parallel to the spin axis of the Earth as it existed at the time of the original survey; in fact, the 'wander' of the pole means that modern datums use a conventionally agreed point, rather than the pole's instantaneous position at any one epoch. It may also be the case that the transformation is between two datums that are both distorted in this way: Appendix section A3 covers the mathematical treatment of this.
It is therefore necessary to modify the rather simple transformation procedure outlined in section 4.2.1 in two ways. Firstly, to accept that in addition to the simple translation between two datums there may also exist rotations and scale changes. This leads to a modified set of transformation equations:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{WGS84} = \begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix} + \mu R \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{local}    (4.7)

where μ is the scale factor between the two systems and R is a rotation matrix, which for small angles α₁, α₂, α₃ about the X, Y, Z axes (as shown in Fig.
4.5) can be expressed as:

R = \begin{pmatrix} 1 & \alpha_3 & -\alpha_2 \\ -\alpha_3 & 1 & \alpha_1 \\ \alpha_2 & -\alpha_1 & 1 \end{pmatrix}    (4.8)

Figure 4.5 Rotation convention.

The transformation described by equation (4.7) is not the only way of carrying out a three-dimensional transformation, and in many ways it is not the best. An alternative is given in section 6.5, and a more comprehensive mathematical treatment is given in Appendix section A3.
The second modification relates to the distortions in the local datum. The amount of distortion involved will be dependent on the size of the country (larger surveys accumulate more errors away from the origin) and the way in which the original survey and computations were carried out. A typical figure for the distortion of a country such as Great Britain would be around 20 m (that is, the difference between the actual distance between two points at either end of the country and the distance implied by their coordinates).
It is possible to regard the resulting situation in two different ways. Firstly, the transformation given by equation (4.7) will convert coordinates into the datum, but these will then be different from the ones that are actually used, owing to the 'errors' in the local survey. It is important, however, to then add these errors to the converted coordinates to bring them into line, as the errors are part of the datum. That is, once the error has been made it is important for consistency that all users adopt it. This is not quite the chaotic situation it implies, because in general the errors at points in any one region of the survey will be very similar to each other (although in Great Britain some quite sharp discontinuities do exist because of the way in which the original survey was computed). This is effectively the approach used by the Ordnance Survey, which has published a method for determining the additional distortions to an accuracy of around 2 m (Ordnance Survey, 1995a).
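Equation (4.7) can be sketched in a few lines of Python. The sign convention adopted for the small-angle rotation matrix of equation (4.8), and all numerical values below, are our own assumptions for illustration:

```python
def helmert(xyz_local, t, mu, a1, a2, a3):
    """Seven-parameter similarity transformation, equation (4.7):
    X_WGS84 = T + mu * R * X_local, with R a small-angle rotation matrix
    as in equation (4.8). Angles a1, a2, a3 are in radians; rotation sign
    conventions vary between texts, so check before reusing."""
    x, y, z = xyz_local
    tx, ty, tz = t
    # R * X, with R = [[1, a3, -a2], [-a3, 1, a1], [a2, -a1, 1]]
    rx = x + a3 * y - a2 * z
    ry = -a3 * x + y + a1 * z
    rz = a2 * x - a1 * y + z
    return (tx + mu * rx, ty + mu * ry, tz + mu * rz)

# With unit scale and zero rotations this reduces to the simple
# three-translation shift of equation (4.2).  All values invented:
print(helmert((3980581.0, -12.0, 4966824.0),   # cartesian coords, local datum
              (375.0, -111.0, 431.0),          # translations (m)
              1.0, 0.0, 0.0, 0.0))             # mu, a1, a2, a3
# (3980956.0, -123.0, 4967255.0)
```

A rotation of 0.2 arcseconds would be passed as 0.2 × π/648000 radians; at this scale the small-angle approximation in R is comfortably adequate.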
The Ordnance Survey also provides a service that will determine the distortions to an accuracy of 20 cm. This is discussed further in section 6.5.

An alternative way of looking at the problem, and one that is generally of more use to those determining their own transformation parameters, is to regard a small region of the local system as largely undistorted (since the errors in that region are highly correlated with each other), and then to derive a set of transformation parameters as in the model given in equation (4.7). These will be applicable only to the region for which they were derived, and will not in general be applicable to other areas. Thus, conceptually, a situation arises where the transformation parameters can be thought of as variables that change across the country. It is even possible to produce a set of maps that show, for example, how the ΔX parameter varies, although this is liable to lull the user into a false sense of the exactitude of such a procedure.

Whichever of these methods is used to conceptualise or to solve the problem, it must be emphasised that no one set of transformation parameters on its own is likely to be sufficient except at accuracies of a few tens of metres. Figures quoted for the datum of any particular country will be average values and not applicable throughout the country without some loss of accuracy.

Some examples of transformation parameters for various datums are given in Table 4.1; the values given here are averages, and should be used with caution.
Notes on sourcing this information are given in section 6.4.1.

The datum transformation parameters for adjacent countries are correlated only if there was a political link at the time of the establishment of the original survey, as Table 4.1 shows. Thus, the parameters for South Africa and Zimbabwe are similar, whereas those for Namibia (formerly German South West Africa) are very different.

Table 4.1 Examples of transformation parameters from local datums to WGS84

Datum name                              Spheroid            ΔX (m)  ΔY (m)  ΔZ (m)
OSGB 36                                 Airy                   375    -111     431
Ireland 1965                            Airy 1830 Mod          506    -122     611
French                                  Clarke 1880 (IGN)     -168     -72     314
German (West)                           Bessel 1841            583      68     395
Netherlands                             Bessel 1841            593      19     468
Swiss                                   Bessel 1841            679       0     406
European 1950                           International 1924     -87     -98    -121
Cape (South Africa)                     Clarke 1880 Mod       -136    -108    -292
Arc 1950 (Zimbabwe)                     Clarke 1880 Mod       -142     -96    -293
Namibian                                Bessel 1841 Nam        616      97    -251
North American 1927 (Conterminous USA)  Clarke 1866             -8     160     176
North American 1927                     Clarke 1866            -10     158     187
Indian (Nepal)                          Everest 1830C          289     734     257
Timbalai 1968 (Sabah)                   Everest 1830B         -689     691     -46
Tokyo (Japan)                           Bessel 1841           -123     483     662

There is a trend in some countries (for example the USA and Australia) for the ITRF to be adopted as the basis for the national mapping system. The advantages ensuing from such a situation must be balanced against the costs of converting all existing data to the new datum.

In Great Britain (but not in Northern Ireland) the official datum is OSGB36; the coordinates of all triangulation points and all maps are based on this system. Alongside this there is a datum known as OS(GPS)93, which is specifically for use with GPS (Chapter 5). This datum is entirely compatible with ETRF89, and there is hence a rather complex transformation between this and OSGB36.
The possibility of adopting ETRF89 as the basis for mapping in Great Britain at some future date is being kept under review (Calvert, 1994), but at present the emphasis is on promoting the acceptance of a consistent set of conversion algorithms (Ordnance Survey, 1997, 1998).

5

The global positioning system

5.1 Introduction

For many users of spatial data, the idea that maps can be on different datums, and indeed on a different one from the method of data acquisition, will first have been encountered when using the global positioning system (GPS). For this reason, although it would be possible (and quite reasonable) to write an entire book devoted to this system, it is appropriate to summarise its most important features here, and to explore some of the datum aspects in more detail.

A more extensive treatment can be found in standard texts such as Hofmann-Wellenhof et al. (1997) or Leick (1990).

5.2 System overview

The GPS was originally conceived as a navigation system - that is, one capable of instantaneous determinations of position and velocity. As will be seen, the minimum requirement for its operation is the ability of a user to receive signals from at least four satellites simultaneously: for locations that are uncluttered by obstructions in the form of buildings or dense foliage this requirement is assured by the constellation of 21 satellites at a mean height of 20 200 km above the surface of the earth, as shown in Fig. 5.1.

The basic premise of GPS is the same as that of any surveying system: the coordinates of new points are found by making observations with respect to points of known coordinates. The only differences here are that the known points are in orbit, and are not stationary.
The determination of the coordinates of these satellites is therefore a continuous process, and is achieved by the control segment of GPS, which consists of a worldwide network of monitoring and control stations dedicated to the task of determining the orbital paths of the satellites and monitoring the health of their signals. It is then possible to predict the orbit of a satellite a short way into the future, and to upload this information to the satellite itself. In this way, the satellite is able to broadcast its position (referred to as the ephemeris) for the users to determine their own position in real time.

Figure 5.1 The GPS constellation.

The information transmitted by the GPS satellites is modulated onto two underlying carrier waves, referred to as L1 and L2. The former has a frequency of 1575.42 MHz (equivalent to a wavelength of 19.05 cm) and the latter has a frequency of 1227.60 MHz (equivalent to 24.45 cm).

In addition to the ephemeris and other general information about the system, two binary codes are also modulated onto the carrier waves. These can be thought of as being the equivalent of the step function shown in Fig. 5.2, with each change in sign being represented by a change in phase of the carrier wave. The two codes are referred to as the coarse acquisition (C/A) code and the precise (P) code, and their effective wavelengths are 300 m and 30 m respectively. Although the P code is publicly available, a further encryption (the W code) is added which makes it inaccessible to most users. The resulting combination is referred to as the Y code, and the procedure of implementing this encryption is known as anti-spoofing.
In fact, the C/A code is modulated onto the L1 carrier only, whereas the P code is modulated onto both frequencies, a configuration that deliberately denies the non-military user the advantages of a dual frequency system (although this is overcome for some applications).

Figure 5.2 Binary code carried on the GPS signal.

The main point to note about the codes at this stage is that they are transmitted in a predetermined sequence at precise times for each satellite. Thus, if the codes can be read, they can simply be thought of as time information.

5.3 Positioning with codes

The basic method for positioning with GPS is through the use of the codes, usually only the C/A code on the L1 carrier wave. The procedure is for the receiver to 'read' the code, and thus obtain the time information that is transmitted by the satellite. This procedure is aided by the fact that the receiver has a copy of the code stored within it, and it knows roughly what to expect: it is therefore possible to make a reading on what is in fact a very weak signal (compare a handheld GPS receiver with a satellite television aerial).

Figure 5.3 Time shift between codes.

The receiver, generating the code itself, determines the offset in time between the code that it generates and the one received from the satellite, and from this deduces the time of travel of the signal. By multiplying this travel time by the speed of transmission of the signal (the speed of light), the distance from the satellite to the receiver is obtained. In theory, it would be possible to determine the three-dimensional coordinates of the receiver from observations to three satellites.
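The conversion from the measured time offset to a distance is a single multiplication by the speed of light. A minimal sketch (the function name is invented; no clock, ephemeris or atmospheric errors are modelled):

```python
# Speed of light in a vacuum, metres per second
C = 299_792_458.0

def pseudorange(time_offset_s):
    """Convert a measured code time offset (seconds) into a distance (m)."""
    return C * time_offset_s

# A 1 millisecond offset corresponds to roughly 300 km of range,
# and a 1 microsecond clock error to roughly 300 m of range error.
d_ms = pseudorange(1e-3)   # ~299 792 m
d_us = pseudorange(1e-6)   # ~300 m
```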
This follows either from considering three simultaneous equations with the three unknowns (X, Y, Z), or from a geometrical analogy in which each determination of a distance describes a sphere of a certain radius centred on the satellite, and the intersection of three spheres is a single point.

The model described above has taken no account of the lack of synchronisation between the satellite time system and the receiver clock, however. Although the synchronisation between the different satellites is essentially assured by each one carrying two rubidium and two caesium atomic clocks, and these being continuously monitored by the ground stations, the receiver has to have an independent, and much cheaper, timing system. This then leads to the problem that, because the speed of light is approximately 300 000 km/s, a 1 s offset between the satellite and receiver clocks will lead to a 300 000 km error in the determination of the distance. For this reason, the distances so found are sometimes referred to as pseudo-ranges. The problem is solved simply by introducing a fourth satellite, and using four distances to solve the four unknown parameters (X, Y, Z, Δτ), where Δτ is the offset between the GPS system time and the receiver clock.

The reasons for the design of the satellite constellation are now apparent: four satellites is the minimum requirement, and in fact this is virtually always possible in the absence of any obstructions (it should be noted that the signals can travel through cloud, fog, rain, and so on, but cannot penetrate buildings, dense foliage, or similar bodies). Usually between four and eight satellites will be above the horizon.

The determination of a satellite-receiver distance from code observations is subject to several sources of error.
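The four-unknown solution (X, Y, Z, Δτ) described above is usually obtained by linearising the pseudo-range equations and iterating. The following Python sketch (invented names; no refraction, ephemeris or relativistic corrections) illustrates the idea with a Gauss-Newton least-squares loop, which also accommodates more than four satellites:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_xyz, pseudoranges, iterations=10):
    """Estimate receiver position and clock offset from four or more
    pseudo-ranges by Gauss-Newton iteration on the linearised
    observation equations."""
    sats = np.asarray(sat_xyz, dtype=float)
    pr = np.asarray(pseudoranges, dtype=float)
    state = np.zeros(4)                     # [X, Y, Z, c*dt], start at origin
    for _ in range(iterations):
        rho = np.linalg.norm(sats - state[:3], axis=1)   # geometric ranges
        predicted = rho + state[3]
        # Each row: line-of-sight partial derivatives (receiver minus
        # satellite, divided by range) plus a 1 for the clock term.
        A = np.hstack([(state[:3] - sats) / rho[:, None],
                       np.ones((len(pr), 1))])
        correction, *_ = np.linalg.lstsq(A, pr - predicted, rcond=None)
        state += correction
    return state[:3], state[3] / C          # position (m), clock offset (s)
```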
These can be summarised as follows, with approximate root mean square (rms) magnitudes as given by Langley (1997):

• Ephemeris error: caused by the difference between the satellite's true position and the broadcast ephemeris, and in which can also be included the errors in the satellite clock. This will usually amount to around 3-4 m.

• Refraction: caused by the difference between the actual speed of transmission and its assumed value. This error source can be subdivided into two components: ionospheric and tropospheric refraction. The former has its origin in the upper atmosphere, and is very difficult to predict. It can cause errors of several metres in the observed range, but can be corrected by using dual frequency observations: this option is not available to civilian users, and the errors can be around 7 m. Tropospheric refraction, on the other hand, is more predictable. It refers to the refraction encountered in the lowest 40 km of the atmosphere, and is dependent on temperature, pressure, and humidity. The refractive index of the dry atmosphere is very easily modelled, and its effects are negligible: only the wet part of the atmosphere causes any real problems, and is typically less than 1 m.

• Multipath: caused by the satellite signal reflecting off other surfaces on its way to the antenna, and thus taking an indirect route: it can be minimised by good site selection. Its magnitude is dependent on the environment at the receiver, and is typically 1-2 m.

• Receiver noise: a random error source that is a reflection of the measuring precision of the receiver. A figure of 1.5 m for the rms error is typical.

To all of these error sources, an additional category can be added: a deliberate error, known as selective availability, which limits the accuracy that can be achieved by users of the C/A code.
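The text lists the rms magnitudes of the individual error sources but not how they combine; assuming the sources are independent, their root-sum-square gives an overall range error budget (a standard assumption, not stated in the text). A small sketch, taking 3.5 m as the midpoint of the quoted 3-4 m ephemeris figure:

```python
import math

def combined_rms(*components_m):
    """Combine independent rms range errors (metres) by root-sum-square."""
    return math.sqrt(sum(c * c for c in components_m))

# Approximate figures from the text: ephemeris/clock 3.5 m, uncorrected
# ionosphere 7 m, troposphere 1 m, multipath 1.5 m, receiver noise 1.5 m
total = combined_rms(3.5, 7.0, 1.0, 1.5, 1.5)   # a little over 8 m
```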
This is in theory composed of delta errors (fluctuations introduced in the satellite clock) and epsilon errors (a deliberate inaccuracy of the broadcast ephemeris), although indications are that at present the latter is not implemented. In combination with the 'natural' errors, selective availability limits the accuracy to 100 m in horizontal position, and to 150 m in height. These figures refer to the limits that will not be exceeded by 95% of the observations.

The nature of the selective availability is that the errors change only very slowly with time, and it therefore takes at least an hour before any averaging effect will reduce the errors. To achieve an absolute accuracy of 5-10 m takes around 8 h of observations. Figure 5.4 shows a typical plot of the position determined at a stationary point over a period of 1 h.

Figure 5.4 Point position over 1 h using code pseudo-ranges (SA on). (Courtesy of R. Keenan, University College London.)

One further point should be noted relating to the accuracy of GPS, and that is the way in which the geometry of the satellites affects the final accuracy of the determination of the coordinates of the receiver. Essentially, if the distances measured from the satellites to the receiver intersect at a shallow angle (because the satellites are grouped in one portion of the sky), the accuracy of the three-dimensional coordinates of the point will be worse than if the satellites were well distributed around the sky. This is analogous to the situation with the determination of two-dimensional coordinates from the intersection of measured distances that is depicted in Fig. 5.5: although the distances are measured to the same precision in each case, the accuracy of the coordinates will be worse if the distances intersect at an acute angle.
For GPS, this concept is conveniently expressed by a numerical coefficient, known as the positional dilution of precision (PDOP). This is the ratio of the mean accuracy of the coordinated position to the accuracy of the original range observations: the larger this number, the worse the geometry of the satellites. Typically, the PDOP values when using the full GPS constellation range between 2 and 5: if some satellites are obscured and the PDOP rises much above this range, the accuracy figures quoted above will no longer be achievable.

This, then, is the basis of operation of all GPS receivers operating as stand-alone units. It can be seen that the accuracy is almost entirely dependent on factors external to the receiver: any difference in price between different models is therefore explained by the functionality of the equipment, such as the storing of data, the use of digital map displays, and so on.

Future developments in the design of GPS satellites, and US government policy regarding the application of selective availability, mean that within a few years the accuracies achievable by stand-alone GPS receivers are likely to increase dramatically from the figures quoted here, possibly to as accurate as 5 m (GIAC, 1999).

Figure 5.5 Poor geometry (left) and strong geometry (right).

If coordinates obtained by this method are to be integrated with data from a national survey, a datum transformation must be carried out. The coordinates will change by up to 1 km after transformation, which is significant even for the current accuracy of GPS fixes. Distortions in the local datum are unlikely to be significant, however, and an average set of transformation parameters for the datum concerned can be applied. Most handheld receivers now on the market have stored within them a set of transformation parameters for the datums of many countries around the world.
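The PDOP described above can be computed directly from the satellite geometry: with a design matrix A whose rows are the receiver-to-satellite unit vectors plus a clock column, PDOP is the square root of the trace of the position block of (AᵀA)⁻¹. A sketch with invented, purely illustrative satellite positions:

```python
import numpy as np

def pdop(receiver_xyz, sat_xyz_list):
    """Positional dilution of precision for a given satellite geometry."""
    rec = np.asarray(receiver_xyz, dtype=float)
    rows = []
    for sat in sat_xyz_list:
        diff = np.asarray(sat, dtype=float) - rec
        u = diff / np.linalg.norm(diff)      # line-of-sight unit vector
        rows.append([u[0], u[1], u[2], 1.0]) # plus the clock column
    A = np.array(rows)
    Q = np.linalg.inv(A.T @ A)               # covariance cofactor matrix
    return float(np.sqrt(np.trace(Q[:3, :3])))

# Well-spread satellites give a low PDOP; satellites clustered in one
# part of the sky give a high one.
spread = [[2e7, 0, 0], [0, 2e7, 0], [0, 0, 2e7], [-1.2e7, -1.2e7, -1.2e7]]
clustered = [[2e7, 1e6, 0], [2.2e7, 0, 1e6], [1.8e7, -1e6, 0], [2e7, 0, -1.5e6]]
```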
If these are not available, a simple shift can be derived by setting a receiver over a point with coordinates known in the local system, and then re-arranging equation (4.2) as:

\[
\begin{pmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{pmatrix}
= \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{\mathrm{WGS84}}
- \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{\mathrm{local}}
\qquad (5.1)
\]

The shifts derived in equation (5.1) can then be applied to all the other points that have been observed with GPS.

The height information that results from this transformation is the ellipsoidal height above the local datum. We saw in the example for the British datum (OSGB36) in Fig. 4.3 that this approximates the height above the geoid within 4 m. This accuracy is more than sufficient for this mode of operation.

5.4 Differential GPS using codes

The key to the improvement of the accuracy obtainable with GPS is the use of two or more receivers to measure relative as opposed to absolute positions. That this achieves an improvement in accuracy of almost two orders of magnitude is due to the spatial correlation of most of the errors: that is, an error present in the observation of range at one receiver will be very similar to that at the other. This is certainly true of the errors caused by selective availability; the correlation of errors caused by the ephemeris and refraction will depend upon the distance between the two stations. Only multipath and noise are entirely uncorrelated between the two receivers.

The basic principle of operation is therefore to have one receiver set up at a point of known coordinates.
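Equation (5.1) and its subsequent application amount to a component-wise subtraction and addition. A minimal sketch (function names are invented; the coordinates in the test are arbitrary):

```python
def derive_shift(xyz_wgs84, xyz_local):
    """Equation (5.1): the difference of one point's coordinates in the
    two systems gives the shift (dX, dY, dZ)."""
    return tuple(w - l for w, l in zip(xyz_wgs84, xyz_local))

def shift_to_wgs84(xyz_local, shift):
    """Apply the derived shift to any other point in the local system."""
    return tuple(p + s for p, s in zip(xyz_local, shift))
```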
In this context, 'known' implies known in the WGS84 system, as local coordinate systems are entirely inappropriate for this type of computation. The information could be obtained by the mean of a long-term set of absolute position observations by the receiver or, more appropriately, by linking the point to a higher order reference system through the type of observation discussed in the next section. The effect of an error in the absolute coordinates of this station will be similar to the effect of an error in the satellite ephemeris when calculating a vector from one point to another; therefore, an appropriate target for the accuracy is better than 1 m.

The receiver at the known station, the reference receiver, is thus able to compare the observed pseudo-range to each satellite with the value calculated from its own coordinates and the ephemeris position of the satellite. From the difference between the two, a range correction can be determined (Fig. 5.6). This range correction can then be used to correct the range observations made at another receiver, sometimes referred to as the rover. The efficacy of this correction will depend on the degree of correlation between the errors at the two receivers. For example, an error in the satellite ephemeris will not cause an error in the range measured at the reference receiver if the direction of the error is perpendicular to the line from the satellite to the receiver. In turn, this undetected error will not cause a range error at the roving receiver if it is also perpendicular to the range, but this will be less true as the inter-station distance increases.
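The range correction computed at the reference receiver, and its application at the rover, can be sketched as follows (invented function names; a single satellite and a perfectly correlated error are assumed for illustration):

```python
import math

def range_correction(reference_xyz, satellite_xyz, observed_pseudorange):
    """Correction computed at the reference station: geometric range
    (from its known coordinates and the ephemeris) minus the
    pseudo-range it actually observed."""
    geometric = math.dist(reference_xyz, satellite_xyz)
    return geometric - observed_pseudorange

def apply_correction(rover_pseudorange, correction):
    """The rover adds the correction to its own observation of the same
    satellite, cancelling the spatially correlated part of the error."""
    return rover_pseudorange + correction
```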
Similarly, the degree of correlation in the effects of atmospheric refraction will decrease as the receivers become further apart.

Figure 5.6 Differential GPS.

Generally, differential GPS (DGPS) will achieve an accuracy of 1-5 m, over distances up to several tens, or even hundreds, of kilometres. Figure 5.7 shows an example plot of the position of a static receiver over a period of 1 h, as computed over a baseline from a reference receiver of less than 1 km.

For many applications, the corrections to the observations made at the roving station can be made after the event, once the data from the two receivers has been downloaded onto a computer. This mode of operation is entirely suited to the determination of the coordinates of a series of static features for incorporation into a database. The alternative is to carry out the corrections in real time, in which case it is necessary to transmit the necessary information from the reference receiver to the rover. This may be done with the user's own data link, in which case the reference receiver can be positioned wherever is appropriate for a particular project. On the other hand, the distances over which DGPS is viable means that it makes commercial sense to establish a network of permanent reference stations and transmit the corrections to whoever is prepared to pay for them. The cost of the subscription to such a service is offset by not having to establish a reference station and a transmitter.

Figure 5.7 Coordinates determined by DGPS over a 1 h period. (Courtesy of R. Keenan, University College London.)

On land, the corrections can, for example, be transmitted on the side band of a commercial radio station, in which case the outlay of the user is limited to the purchase of an FM receiver in addition to the subscription.
Alternatively, offshore users can receive corrections broadcast over satellite communication links.

If the coordinates obtained from this type of survey are required in the local datum, rather than WGS84, they could simply be transformed to the local system en bloc using published values of the transformation from WGS84 to the local datum. This has the disadvantage of using general national values of the transformation, which are likely to be less accurate than the results obtained by DGPS. In addition, any errors in the WGS84 coordinates of the reference station will contribute directly as an additional error source. The alternative is to derive transformation parameters by occupying one or more points whose coordinates in the local system are known, and using equation (5.1).

If the survey is particularly extensive, however, it should be noted that rotations between the coordinate systems may be significant. A typical rotation might be 5 arc seconds, which is equivalent to 2.5 m over 100 km. This is unlikely to be a problem very often.

The height information yielded by this method is differential spheroidal height. The difference between this quantity and a differential geoidal height is barely significant over short distances at the accuracy achievable. Over longer distances (say >50-100 km), adequate geoid information would be provided by a global earth model such as EGM96, which can correct the differential spheroidal height through the use of equation (3.2).

5.5 GPS phase measurements

The factor that limits the precision of GPS code observations is the effective wavelength of the code: at 300 m for the C/A code very precise observations are impossible. The way around this is to ignore the codes (they can be removed from the signal if they are known) and go back to the underlying carrier waves, L1 and L2, which have wavelengths of 19.05 cm and 24.45 cm respectively.
This allows the possibility of very precise observations, as measuring a fraction of a wavelength offers millimetric precision - although at the cost of an increased mathematical complexity of the solution. Whereas the code observations represent unambiguous determinations of distance, the phase observations measure only a fractional part of the distance, which repeats itself every wavelength.

The basic observation is made by measuring the difference in phase between the signal received from the satellite and one generated in the receiver. The difference in phase so measured has two causes.

• The receiver and the satellite are not likely to be oscillating in phase in the first place.

• The signal has had to travel a certain distance from the satellite to the receiver, so the receiver is comparing its own signal with one emitted from the satellite a short time previously. If the satellite were a whole number of wavelengths away from the receiver, the two signals would once again be in phase with each other. Hence (Fig. 5.8), the receiver is actually measuring the fractional part of the distance, Δφ, over and above a whole number of wavelengths. Subsequent measurements in the same sequence can measure a value of Δφ as it goes beyond a whole wavelength, so that the initial unknown number of whole wavelengths stays the same.

The result is therefore an observation of phase which is related to the satellite-receiver range, but in a way that is complicated by the presence of an unknown whole number of wavelengths (the integer ambiguity) and phase differences between the satellite and receiver clocks. The key to the solution of this problem is the fact that these bias terms are constant, provided that the receiver continuously tracks the signal from the satellite.
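One common way of writing this phase observable (a standard textbook form, not taken from this text; sign conventions vary between authors) is:

```latex
\Phi = \frac{\rho}{\lambda} + N + f\,(\delta t_r - \delta t^{s})
```

where Φ is the measured phase in cycles, ρ the satellite-receiver range, λ the carrier wavelength, N the integer ambiguity, f the carrier frequency, and δt_r, δt^s the receiver and satellite clock offsets. The first term carries the geometry, while N and the clock terms are the constant biases referred to above.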
It is therefore possible, in time, to acquire more and more observations without increasing the number of unknown parameters to be solved.

Figure 5.8 GPS phase observation.

There are several different approaches to solving the problem.

• A single receiver can occupy a point for a considerable period of time (at least several hours), collecting a very large data set. In combination with a more precise ephemeris than that broadcast by the satellites (for example the precise ephemerides disseminated via the International GPS for Geodynamics Service, as discussed in section 4.1.1) and improved corrections to the satellite clocks, it is possible to solve all the parameters, including the three-dimensional coordinates of the receiver. By definition this cannot be done in real time, and it requires very sophisticated software, but it is capable of determining the position of the point to a precision of around 2 cm. This type of observation is most appropriately employed in global geodynamic studies.

• Alternatively, less time-intensive methods of dealing with the situation are based on making the observations in differential mode, using two or more receivers. Part of the advantage of this technique comes from the fact that it is no longer necessary to determine the integer number of wavelengths from the satellite to the receiver, but only the difference in the number of wavelengths from a satellite to each of the two receivers.

The real advantage of GPS, and what has made it a tool that has all but replaced conventional control surveys, comes from exploiting the integer nature of the ambiguities, thus avoiding the need to collect data over a long period of time.
Essentially the approach is to gather just sufficient data for the processing software to be able to recognise the integer values of these numbers. Thus, for example, if the solution to the equations yielded as ambiguities the numbers:

451.999
43.002
875.998

then, barring coincidence, these would be recognised as 452, 43, and 876 respectively. The situation now is that the relative distances between satellites and receivers are in principle known to the precision of the phase observations, a matter of a few millimetres. This would lead to a baseline between the stations that is precise to a similar amount. Conversely, if a mistake had been made and the ambiguities incorrectly identified, an error of several decimetres would be introduced.

Thus, the use of GPS over short baselines is centred around the length of time required to resolve the ambiguities to the correct values with a high degree of confidence. Advances in modelling software have reduced the time required to around 5-10 min over baselines up to around 10 km long, and 1-2 min over shorter baselines. The techniques involved are greatly assisted by the ability of modern receivers to read the phase of the second carrier wave, L2, albeit with an increased amount of noise: this is achieved in different ways by different makes of receiver, but most are based on a technique of cross-correlating the P codes between the two frequencies.
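The integer-fixing step in the example above can be sketched as a rounding test (the 0.01-cycle tolerance and function name are illustrative only; real processing software validates the fixed solution statistically rather than by a simple threshold):

```python
def fix_ambiguities(float_estimates, tolerance=0.01):
    """Round float ambiguity estimates to integers, accepting the fix
    only if every estimate lies within `tolerance` of a whole number."""
    fixed = [round(a) for a in float_estimates]
    if all(abs(a - n) <= tolerance for a, n in zip(float_estimates, fixed)):
        return fixed
    return None  # not resolved confidently; keep collecting data

# The text's example resolves cleanly:
result = fix_ambiguities([451.999, 43.002, 875.998])   # [452, 43, 876]
```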
Generally speaking, however, this second frequency is used only to acquire additional data for a quick solution and not to make any correction for ionospheric refraction: over short distances, the assumption is that this will cancel out between the two stations, which is the main factor that imposes a distance limit to this technique.

Over most distances less than around 10-20 km, this assumption is usually valid, and the main error source would be multipath: by definition this is site specific and does not cancel out between stations. For phase observations this can potentially lead to errors of up to 5 cm in the observation to any one satellite, and besides the obvious solution of good site selection (which is sometimes impractical) the main way around the problem is to increase the period of observation beyond the minimum required to resolve the ambiguities.

For longer baselines, the effects of ionospheric refraction are potentially more serious. For this reason, the emphasis is no longer on obtaining rapid solutions by identifying the integer ambiguities, but on correcting for the effects of refraction by exploiting both carrier frequencies. This leads to much longer occupation times, usually measured at least in hours, sometimes days.

In summary, differential phase observations with GPS are capable of determining baseline vectors between receivers to a precision of around 1 cm: the length of time required for this will vary from a few minutes to a matter of days, depending on the length of the vector.

A slightly lower level of accuracy can be achieved more quickly over shorter distances through the use of kinematic techniques. The basic premise of these is that, once the initial determination of the integer ambiguities has been made, these values will stay the same even when the roving receiver moves to a new location, provided that the receiver continuously tracks the signal from the satellite.
As with differential GPS using code observations, it is possible to operate kinematic phase GPS in a real-time mode. This is usually referred to simply as real-time kinematic (RTK). The data link is typically a dedicated radio connection; to use the system over longer lines, communication by mobile phones may be more appropriate.

In principle, the accuracy of kinematic GPS is similar to that of conventional phase observations in the static mode. It will always be slightly lower, however, as there is no averaging over time, which means that the effect of multipath goes uncorrected. Typically, at any one point there will be an error of around 2-3 cm.

The procedure for kinematic GPS is first to initialise the roving receiver by acquiring enough data to resolve the integer ambiguities. This can be done while the receiver is stationary; alternatively, with the 'on the fly' (OTF) techniques that have now been developed, data may be collected while the receiver is in motion. For OTF, the position of the receiver during the initialisation period can be deduced after the event, but not in real time.

Even if an individual signal is interrupted, it is possible to proceed provided that at least four satellites are tracked continuously. This is because four satellites is the minimum requirement for positioning and, once the position is known, the ambiguities for newly observed or re-acquired satellites can be resolved instantaneously. A complete blockage of all signals, such as would be caused by passing under a bridge, cannot be supported, however. Under these circumstances, it is necessary to go through the initialisation cycle once again: repeated interruptions such as would occur in a cluttered urban environment would cause problems.

The existence of a system capable of measuring to a precision of 1 cm over several kilometres in a short period of time has revolutionised many tasks in surveying and geodesy.
It must be emphasised, however, that what is obtained is a three-dimensional vector in the WGS84 system. To exploit the accuracy of this to the full, great care must be taken in the way in which the results are incorporated into a local datum. This subject is rather more extensive in its philosophy and application than the simple transformations that have been discussed so far, and is therefore the subject of a more extensive treatment in section 6.5.

6 Aspects of datum transformations

6.1 Introduction

Chapters 2-4 have defined two- and three-dimensional coordinate systems, and discussed the procedure for transforming from one datum to another. If all the data and parameters mentioned there are known, there will not be a problem. It is often (perhaps usually) the case, however, that some or all of the information required is missing. The aim of this chapter is, firstly, to clarify where some of the problems are likely to occur. For these situations, the likely magnitudes of the errors caused are examined: often, missing information is not really a problem at all, as it has little effect at the level of accuracy that is required. If missing information is likely to cause a problem, various shortcuts are suggested.

In general, the approach taken will be very much dependent on the accuracy requirements. Since we are interested on the one hand in remote sensing and GIS applications that will usually be dealing with data accurate to at best 1 m, and on the other hand in surveying projects requiring an accuracy of perhaps 1 cm, the approaches differ so widely that it is most convenient to treat them separately.
Sections 6.2-6.4 should therefore be read as a general introduction to some of the problems, but with the solutions proposed being relevant mainly to low accuracy applications. Section 6.5 deals with the high accuracy applications of GPS in surveying projects.

6.2 Knowledge of separation, N

It was stated in Chapter 1, with reference to Fig. 1.3, that a knowledge of the spheroidal height, and hence the separation, N, is necessary for two-dimensional coordinate transformations. This point is illustrated in Fig. 6.1, in which an error in the separation (perhaps equating it to zero) effectively shifts the point P to the point P'. In system A the point P' has the same two-dimensional coordinates as point P, but in system B the two points have different coordinates. The order of magnitude of this error is given by

e = Nθ    (6.1)

where e is the error in the two-dimensional coordinates, latitude and longitude (in metres), N is the separation (in metres) and θ is the angle between the two normals (in radians). Because N is at most 100 m, and a value for θ of 20" (arc seconds) is quite large, the effect on (φ, λ) of ignoring the separation (effectively saying that h = H) is of the order of 1 cm. This is significant in some geodetic and surveying applications, but can safely be ignored in the context of remote sensing and GIS.

Figure 6.1 The effects of an error in separation.

6.3 Knowledge of height, H

In many situations it is necessary to transform a set of two-dimensional information for which no values of height are available (e.g. from a map without contours). A very similar problem to that in section 6.2 then results, but of a larger magnitude. Another look at Fig. 6.1 will show that if the height of the point P is assumed to be zero, a false point P' with different coordinates in system B results.
In this situation, the order of magnitude of the error is

e = Hθ    (6.2)

where H is the height above the geoid. In this case, 1000 m represents a large value, if not a limit, and the order of the error is then seen to be around 10 cm.

It seems safe to assume that two-dimensional transformations to a precision of within 1 m can always be carried out without a knowledge of height.

6.4 Knowledge of datum transformation parameters

6.4.1 Sources of information

This is the most problematical aspect of the datum transformation procedure. For most datums the parameters of the spheroid are known. This is basically because all surveys have always had to use these values, even before the advent of satellite geodesy. In most cases a spheroid was taken 'off the shelf', as in the days before computers this saved a considerable amount of computation. It should be noted, however, that the parameters of a spheroid were occasionally modified, and it is therefore worth checking the actual values of a and f that have been used if this is possible.

A spheroid is often quoted by name, for example Airy or Clarke 1880. There is a danger, however, that 'modified Clarke 1880' is misquoted as 'Clarke 1880' (or, indeed, confused with Clarke 1880 (IGN), Clarke 1880 Palestine, Clarke 1880 I, Clarke 1880 II, or Clarke 1880 III).

The transformation parameters from a datum to WGS84 (ΔX, ΔY, ΔZ, plus rotations and scale if necessary) are more of a problem. In some cases these are regarded as classified information. In most developed countries the values are known (within the accuracy limitations explained in section 3.1), and are held by the relevant mapping authority. It is far more satisfactory, however, to have a central source of information to refer to. Several reports on this have been published by the Defense Mapping Agency, which uses new information to update the datum transformation tables.
For the European region, an alternative is University of Zurich (1989).

For high precision uses of GPS, the derivation of transformation parameters is described in section 6.5.

6.4.2 Estimating the parameters

If no information is forthcoming from any of the sources in section 6.4.1, one possible alternative is to attempt to estimate the transformation parameters from a knowledge of the geoid in the region. As a demonstration of this technique, consider the situation in the UK as an example.

The British datum, OSGB36, was effectively established by maintaining the position of the spheroid at Greenwich from the nineteenth-century survey. Therefore OSGB36 is parallel to and coincident with the geoid at the point:

φ = 51° 28' 40" N
λ = 00° 00' 00" E

At this point the geoid-spheroid separation, N, and the deviation of the vertical are both zero in OSGB36. Using a global Earth model such as EGM96, the values of N and the components of the deviation of the vertical can be computed with respect to the WGS84 datum. The relevant values are:

N = 45.796 m    (separation)
ξ = -2"         (component in latitude)
η = +3"         (component in longitude)

Using these values, the coordinates of the point at Greenwich can now be found in both the local system and WGS84. Adopting the terminology of Fig. 1.3 and converting both sets of coordinates to cartesian values gives:

• for WGS84:
  φ = 51° 28' 42" N, λ = 00° 00' 03" W, h = N = 45.796
  → X = 3980611, Y = -58, Z = 4966859

• for OSGB36:
  φ = 51° 28' 40" N, λ = 00° 00' 00", h = N = 0
  → X = 3980244, Y = 0, Z = 4966420

A direct comparison of the cartesian coordinates then yields the transformation parameters. Putting them alongside the known values gives the results shown in Table 6.1. The method has given very good results in the X and Z directions (within 10 m), but a value for Y that is over 50 m from the known value.
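The cartesian comparison above can be reproduced with a short script. This is a sketch rather than the book's own computation: the spheroid constants below (WGS84, and Airy 1830 for OSGB36) are assumed standard values, and the function names are illustrative.

```python
import math

def geodetic_to_cartesian(lat_deg, lon_deg, h, a, f):
    """Convert geodetic coordinates to cartesian X, Y, Z (as in Fig. 1.3)."""
    e2 = 2 * f - f * f                        # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    nu = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)   # prime vertical radius
    return ((nu + h) * math.cos(lat) * math.cos(lon),
            (nu + h) * math.cos(lat) * math.sin(lon),
            ((1 - e2) * nu + h) * math.sin(lat))

dms = lambda d, m, s: d + m / 60.0 + s / 3600.0

# The Greenwich point in both systems (spheroid constants are assumed
# standard values: WGS84, and Airy 1830 for OSGB36)
wgs = geodetic_to_cartesian(dms(51, 28, 42), -dms(0, 0, 3), 45.796,
                            6378137.0, 1 / 298.257223563)
osgb = geodetic_to_cartesian(dms(51, 28, 40), 0.0, 0.0,
                             6377563.396, 1 / 299.3249646)

# Differencing the cartesian coordinates estimates the WGS84 -> OSGB36 shifts
dX, dY, dZ = (o - w for o, w in zip(osgb, wgs))
print(round(dX), round(dY), round(dZ))   # approximately -367, 58, -439
```

The differences land within a few metres of the estimates quoted in Table 6.1, which were derived from cartesian values rounded to the metre.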
The most probable cause of this is a slight rotation between the two datums, which on the Greenwich meridian would manifest itself as a discrepancy in the Y direction. This geometrical approach is not able to detect this effect, and rotation errors of this magnitude are likely to result from any similar attempt to estimate the transformation parameters.

Table 6.1 Attempts to derive transformation parameters

              ΔX      ΔY      ΔZ
True values   -375    111     -431
Estimates     -367    58      -439

Another potential source of error when applying this technique is that the point of origin is likely to be unknown. Therefore, attempting to estimate the transformation parameters from such a geometrical approach should be used only for datums that cover a very limited area, and when there is no possible alternative.

6.4.3 Simple two-dimensional transformations

In some cases it may be necessary to transform two-dimensional data on one datum into another datum. The transformation parameters may be unknown, or a simpler transformation procedure than the one outlined in Fig. 1.3 may be required. An example of this might be maps of adjacent countries which are on different datums.

If common points can be identified in both systems, then one possibility is to transform from one datum to the other by a two-dimensional transformation, either a simple similarity transformation or a more complex affine or polynomial one (these are treated more extensively in Chapter 12).
The extent to which this is possible will depend on the extent to which the relative positions of points are altered in transforming from one datum to another.

As an example of the effect of a datum transformation on coordinates, consider the test area of approximately 100 × 100 km that is defined by the four points in Table 6.2. The coordinates of Table 6.2 are defined on a datum with the spheroidal parameters

a = 6378388 m,  f = 1/297

These are converted to a datum with the spheroidal parameters

a = 6378136 m,  f = 1/298.257

The following datum transformation parameters are applied:

ΔX = 200 m,  ΔY = 200 m,  ΔZ = 200 m

Table 6.2 Corner points of 100 km test square

Point   Latitude   Longitude
A       50° 00'    00° 00'
B       50° 00'    01° 24'
C       50° 54'    00° 00'
D       50° 54'    01° 24'

The resulting coordinates on the new datum are shown in Table 6.3. Converting these into coordinate changes in metres gives the results shown in Table 6.4. Although the changes are substantial, it can be seen that the spread is less than 8 m: a similarity transformation could therefore model this effect to within a couple of metres.
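The 8 m spread can be checked numerically. The sketch below is an illustration under stated assumptions rather than the book's own computation: it takes h = 0 at each corner, converts to cartesian form on the first spheroid, applies the 200 m shifts, converts back on the second spheroid with a simple iterative inverse, and expresses the angular changes in metres using an assumed mean Earth radius.

```python
import math

def to_xyz(lat, lon, a, f, h=0.0):
    """Geodetic to cartesian conversion (semi-major axis a, flattening f)."""
    e2 = 2 * f - f * f
    nu = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
    return ((nu + h) * math.cos(lat) * math.cos(lon),
            (nu + h) * math.cos(lat) * math.sin(lon),
            ((1 - e2) * nu + h) * math.sin(lat))

def to_geodetic(x, y, z, a, f, iterations=10):
    """Cartesian to geodetic conversion by simple fixed-point iteration."""
    e2 = 2 * f - f * f
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - e2))          # first approximation
    for _ in range(iterations):
        nu = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - nu
        lat = math.atan2(z, p * (1 - e2 * nu / (nu + h)))
    return lat, lon

R = 6371000.0                                  # assumed mean Earth radius
corners = [(50.0, 0.0), (50.0, 1.4), (50.9, 0.0), (50.9, 1.4)]   # Table 6.2
dlat_m, dlon_m = [], []
for lat_d, lon_d in corners:
    lat, lon = math.radians(lat_d), math.radians(lon_d)
    x, y, z = to_xyz(lat, lon, 6378388.0, 1 / 297.0)
    lat2, lon2 = to_geodetic(x + 200.0, y + 200.0, z + 200.0,
                             6378136.0, 1 / 298.257)
    dlat_m.append((lat2 - lat) * R)
    dlon_m.append((lon2 - lon) * R * math.cos(lat))

spread = max(max(dlat_m) - min(dlat_m), max(dlon_m) - min(dlon_m))
print(dlat_m, dlon_m, spread)   # the spread comes out below 8 m
```

The individual changes come out close to the values of Table 6.4 (the small differences arise from the crude radians-to-metres conversion), and the spread is indeed below 8 m.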
This is not accurate enough for geodetic or surveying applications, but in many other contexts it is more than sufficient.

Table 6.3 Coordinates on the new datum

Point   Latitude          Longitude
A       49° 59' 56.29"    00° 00' 10.04"
B       49° 59' 56.17"    01° 24' 09.79"
C       50° 53' 56.16"    00° 00' 10.23"
D       50° 53' 56.04"    01° 24' 09.98"

Table 6.4 Coordinate changes after transformation

Point   Latitude     Longitude
A       -115.0485    200.09047
B       -118.7578    195.14186
C       -118.9318    200.07987
D       -122.689     195.13152

In practice, the most common source of problems in applying a two-dimensional transformation to convert between data sets is the change in the projection rather than the change in datum; for this reason, the subject is given a somewhat more extensive treatment in Chapter 12.

6.5 Datum transformations for precise applications

6.5.1 Distorting to fit the local datum - or not, as the case may be

This section considers the datum problems that arise when dealing with applications that require a high level of accuracy when transforming from one datum to another. The most obvious example is in surveying points with GPS using differential phase observations, and when the final coordinates are required to be expressed in a local datum such as OSGB36.

To begin with, it is helpful to consider what the final aim of the survey is; that is, the use to be made of the coordinate information that is derived. This has a particular bearing in situations where GPS data is being transformed into a datum that contains the types of distortion discussed in section 4.2.2.

The problem is essentially one of how closely these distortions should be followed when transforming. To take an example, an engineering project (such as the construction of a road or railway) should usually use coordinates that are in sympathy with a local coordinate system.
On the other hand, building a structure in the right place from the point of view of, say, land ownership, is not a very exacting requirement (20-50 cm might suffice, for example). By comparison, it is quite important that the internal geometry of the project should be preserved: it is not acceptable to have kinks in the railway where the local datum is distorted, and it is usually helpful if measurements carried out with ground survey equipment can build onto the GPS control without a continuous need for distortions to be introduced. To take a counterexample, if GPS is being used to locate features whose coordinates are given in the local system (underground utilities, perhaps), transforming precisely into the local system - with all its distortions - is of paramount importance.

One way of treating either of these situations is to derive the transformation parameters as part of the project, by using points with coordinates that are known in the local system (for example Ordnance Survey triangulation points). For situations where the internal geometry of the project is of more importance than sympathy with existing mapping, relatively few control points are needed. For the opposite case, a large number of control points, well distributed across the survey area, will be necessary.

In the latter case, an alternative is to use a transformation service provided by an organisation such as the Ordnance Survey. The advantage of this is a consistency between the transformations used by different organisations; the disadvantages are that it is not free and it is less appropriate for engineering projects that require consistent geometry.

One of the main problems that will be encountered in deriving coordinates in the local system is the influence of the geoid. Moreover, the procedure is not solely concerned with datums: a knowledge of the projection used for the local coordinates will also be required.
It will be assumed in this section that all information pertaining to the projection is available: readers unfamiliar with map projections should refer to Chapter 7.

6.5.2 Transformation models

In order to derive transformation parameters between WGS84 and a local system, it is necessary to include several known points in the survey. An example of such a scheme is shown in Fig. 6.2. The first point to be noted about Fig. 6.2 is that the known points are well distributed across the area to be surveyed, rather than being grouped to one side. This is important, as the parameters derived will be applicable only in the area of the known control points, and extrapolation beyond this area is likely to cause problems. An example of an unacceptable configuration is shown in Fig. 6.3.

Figure 6.2 Including known points in a survey. Triangles indicate points known in the local system; circles are new points; the lines denote GPS vectors.

Figure 6.3 Poor geometry of known and unknown points.

For very small surveys, it is possible to use only one known point for deriving the transformation, but this would necessarily involve using the very simple translation given in equation (5.1) and ignoring the rotations. As the latter would typically be around 10" (due to the combined effect of coordinate errors and the geoid), this approach would be valid only over distances up to 200 m if an accuracy of 1 cm is required. The rest of this section will assume that more extensive surveys are under consideration.

It is a requirement of using GPS in the relative mode that the absolute coordinates in WGS84 are known to an acceptable level of accuracy. For surveys that aim to achieve a relative accuracy of 1 cm over 10-20 km, the absolute coordinates should be known to within 10 m. If the survey has been designed to include a point that is part of a more accurate control survey, this should not be a problem.
Alternatively, as outlined in section 5.3, absolute coordinates of this accuracy can be obtained with around 8 h of observation at a single point. The result is then a set of coordinates that are accurately known with respect to one another, but 'floating' in WGS84 by up to 10 m. Again, the concept of a 'quasi-WGS84' datum is useful here. It is necessary that the same 'quasi-WGS84' datum is used throughout the project, and therefore that either the absolute coordinates of only one point are determined, or a 'best fit' is carried out to the absolute determination of position made at all points in the survey.

The task is then to transform from this datum to the local one. A possible model for the transformation equations was shown in section 4.2.2 and equations (4.3)-(4.4). This is not the only possibility, however. An alternative model is one that makes the rotation about a point at the centre of the points to be transformed, rather than the centre of the coordinate system. The result will be the same, but the latter approach has the advantage of making the transformation parameters so derived easier to interpret. In the former approach, a rotation around the coordinate origin (at the centre of the spheroid, several thousand kilometres away) is very similar to a translation, and a large additional shift may be needed to compensate for this. Where the rotation is about a more local point, there is very little correlation between the shifts and the rotations, and the shifts derived will be much closer to the typical value for the offset between the two datums.

The transformation about a local origin is expressed through:

    ( X )          ( X0 )   ( ΔX )        ( X - X0 )
    ( Y )        = ( Y0 ) + ( ΔY ) + μR  ( Y - Y0 )        (6.3)
    ( Z ) WGS84    ( Z0 )   ( ΔZ )        ( Z - Z0 ) local

where (X0, Y0, Z0) are the local coordinates of a point at the centre of the survey, and other terms are as defined in section 4.2.2.
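Equation (6.3) can be sketched in code. The small-angle rotation matrix below uses one common sign convention, which is an assumption on my part (the exact convention of equation (4.4) should be checked against the text); the function names and sample values are purely illustrative.

```python
import math

def rotate_small(rx, ry, rz, v):
    """Apply a small-angle rotation matrix R (one common sign convention)."""
    x, y, z = v
    return (x + rz * y - ry * z,
            -rz * x + y + rx * z,
            ry * x - rx * y + z)

def similarity_local(p_local, p0, shift, rots, mu):
    """Equation (6.3): X_WGS84 = X0 + dX + mu * R * (X_local - X0),
    a seven-parameter similarity transformation rotating about a local
    origin p0 rather than the geocentre."""
    d = [a - b for a, b in zip(p_local, p0)]
    rd = rotate_small(*rots, d)
    return tuple(o + s + mu * r for o, s, r in zip(p0, shift, rd))

sec = math.radians(1 / 3600)                 # one arc second in radians
p0 = (3980000.0, 0.0, 4967000.0)             # hypothetical local origin
shift = (-100.0, -100.0, -100.0)
rots = (0.0, 0.0, 10 * sec)                  # a 10" rotation about Z

# The origin itself is moved only by the shift - the rotation has no
# effect there, which is why shifts and rotations decorrelate:
print(similarity_local(p0, p0, shift, rots, 1.0))

# A point 1 km from the origin picks up roughly 5 cm from the 10" rotation:
print(similarity_local((3980000.0, 1000.0, 4967000.0), p0, shift, rots, 1.0))
```

This illustrates the point made above: with the rotation taken about a local point, the derived shifts stay close to the typical offset between the datums instead of absorbing a rotation about the distant geocentre.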
Note that both of these models are referred to as similarity transformations, as they preserve the shape of the original data.

The task now is to derive the seven unknown parameters (three translations, three rotations, and one scale) of the transformation model by comparing the coordinates from the two systems. Any commercial GPS package will be able to do this: the least squares formulation is given in Appendix section A3. It should also be noted that this procedure requires both sets of data to be in cartesian form, and therefore coordinates quoted in a projected coordinate system and with orthometric heights must first be transformed to the cartesian form (this is a standard part of most GPS software packages).

6.5.3 The use of redundant data

To derive the seven parameters of a similarity transformation, seven pieces of information are needed. This could in theory be provided by two control points that are known in three dimensions and one bench mark (known in height only). In fact, the more control points that are available, the better. Once the parameters have been derived, it is possible to apply these to the known points and note the agreement between the original coordinates and the transformed values. If there is no redundancy the fit will be perfect, even if there was an error in the coordinates given for the known points.

Such an error could occur through incorrect keying-in of the data; alternative sources of error are that the control point could have moved (for example through local subsidence) since the last publication of its coordinates, or even that a monument may have been demolished and rebuilt in a slightly different location.
In effect,\nthis would mean that, in an effort to fit the GPS data to the distortions of the local\nsystem, additional distortions have been introduced that are not justified.\nTherefore, a minimum of three control points, each known in three dimensions,\nis needed in order to provide redundancy and check for errors. Even in this case,\nwith nine pieces of information, the redundancy is only 2. There would be a tendency\nfor the residuals (the difference between the original coordinates and the transformed\nvalues) to be much smaller than the size of any errors present: the transformation\nwould stretch things a bit, rotate a bit more, and generally do all it could to fit to the\ngiven values. An example of this is given in the case study in section 13.1.\nTherefore, it must be emphasised once again that as many control points as possible should be used, and that they should be well distributed across the survey area.\n6.5.4 Geoid problems\nThe classic procedure for datum transformations outlined in section 1.5 assumes that\nthe geoid separation is known. If it is, it should be added to the orthometric heights\nquoted for the control points to derive heights above the spheroid.\nThis section will consider the implications of geoid information not being available. By way of illustration, a hypothetical data set will be used with different geoid\ncharacteristics. The points used are shown in Fig. 6.4. In the figure, points A, B, C,\nand D are known. The coordinates of P are also known (see Table 6.5), but will be\ntreated as unknown in order to test the quality of the transformation.\nThe four known points cover an area of 25 km square. It is assumed that the local\ndatum is parallel to WGS84, but translated by 100 m in each dimension, and that there\nare no distortions present in the local datum. Such a paragon of perfection is unlikely,\nand is used here only to isolate the effect of the geoid. 
For a more realistic example, in which several sources of error are present simultaneously, see the case study in section 13.1.

Figure 6.4 Test data set.

The first case to consider is the one in which there is a uniform separation of the geoid across the whole area. If the geoid separation is unknown, the only option is to enter it as zero and assume that the ellipsoidal heights are the same as the orthometric heights. In this case, the coordinates of the control points will all be shifted. For a sufficiently small area, however, they will all be shifted by the same amount (for a uniform value of the separation) and in the same direction. This is illustrated in Fig. 6.5.

A seven-parameter similarity transformation can cope with a uniform shift, as this will be included in the translations that are derived. What does cause a problem is a change of shape of the control points. The extent to which this happens is a function of the maximum angle between the shifts caused at the different control points, which is the same as the angle between the spheroidal normals. A general rule of thumb for the relative coordinate shift caused by this effect is

ε = (D / 6400) N    (6.4)

where ε is the resulting error, D is the extent of the survey (in kilometres), and N is the approximate size of the separation that has been ignored (and has the same units as ε). For a survey of 25 km in extent and a separation of 10 m, this amounts to around 0.04 m. This is significant at the required accuracy level, although a certain amount of this effect is absorbed by the scale factor of the transformation.

To illustrate this, let us assume that the geoid separation is a uniform 10 m in the test area.
Thus, although the heights are all 50 m above the ellipsoid, they will actually be quoted as 40 m above the geoid.

Table 6.5 Coordinates of test points

Point   E           N           h
A       400000.00   125000.00   50.000
B       425000.00   125000.00   50.000
C       400000.00   100000.00   50.000
D       425000.00   100000.00   50.000
P       412500.00   112500.00   50.000

Figure 6.5 A uniform value of the separation.

If the transformation algorithm then assumes (incorrectly) that these are spheroidal heights, the transformation parameters are as shown in Table 6.6. The values of the rotations are not shown in the table, as they are all nearly zero.

Table 6.6 Transformation parameters for uniform geoid shift

ΔX   -106.302 m
ΔY   -99.799 m
ΔZ   -107.762 m
μ    -1.57 ppm

Clearly these parameters are 'wrong' in the sense that the translation is known to be 100 m in each dimension, and the scale should be true. What has happened, however, is that the extra 10 m shift caused by the geoid has been absorbed in the transformation.

Applying these parameters to the whole data set results in coordinates for P as shown in Table 6.7. In other words, the plan coordinates are correct to a millimetre and the spheroidal height that the program thinks it has found for P is the correct geoid height.

As well as being able to cope with a uniform shift of the geoid, a similarity transformation can also deal with a uniform tilt, by adjusting the rotations determined during the transformation. This can be illustrated in the test data set by assuming that the geoid slopes as a plane from a height of 10 m on the western side (in line with A and C) to 11 m on the eastern side (in line with B and D). Therefore the levelled heights of the four control points will be as shown in Table 6.8.

Applying the same procedure as before results in the transformation parameters shown in Table 6.9.
Table 6.7 Coordinates of point P after transformation

Point   E            N            h
P       412500.000   112500.000   40.000

Table 6.8 Orthometric heights for uniform slope of the geoid

Point   H
A       40.000
B       39.000
C       40.000
D       39.000

Table 6.9 Transformation parameters for uniform geoid tilt

ΔX              -106.617 m
ΔY              -99.790 m
ΔZ              -108.150 m
Rotation of X   6.40"
Rotation of Y   -0.22"
Rotation of Z   -5.20"
μ               -1.65 ppm

Although the other parameters are much the same as before, the rotations are now significantly different, as the coordinates rotate to adapt to the sloping geoid.

Applying these parameters to the data set gives the results shown in Table 6.10. The plan coordinates are again correct within 1 mm, and the height is also the expected value, as a uniform slope of the geoid would result in a separation of 10.5 m at P.

The similarity transformation is thus very adept at dealing with uniform slopes and shifts of the geoid. This will quite often suffice for surveys that cover just a few kilometres and are not seeking the highest accuracy. What the similarity transformation cannot cope with is a non-uniform change in the geoid: perhaps a bulge or smaller undulations. In the previous example, as the point P was not included in the derivation of the transformation parameters, any value of the separation other than that implied by a uniform tilt and shift of the geoid would not have been corrected by this procedure.

If the undulations of the geoid are sufficiently large to cause a problem at the accuracy required, and no model of the geoid is available, the only solution to this problem is to incorporate more information into the transformation and to alter the model used. It is possible, for example, that, although no extra triangulation points are available in the area surveyed, there may be several bench marks.
If these points are included in the GPS survey, a comparison may be made of the heights derived by the similarity transformation with the original bench mark heights. The differences between these two figures will not represent the geoid-spheroid separation itself, but in the absence of observational errors will represent a residual value of the separation over and above the overall shift and tilt.

Table 6.10 Coordinates of point P after transformation

Point   E            N            h
P       412500.000   112500.000   39.500

If the residuals so found are indeed due to finer undulations of the geoid, the expectation would be that they are highly correlated, certainly over short distances. A random spread of residuals with no apparent pattern is an indication more of poor quality GPS data or bench mark heights than of short wavelength undulations of the geoid (this is particularly the case in low-lying terrain). Where the residuals do display a pattern it is possible to interpolate the geoid values between bench marks to obtain orthometric heights at the points newly surveyed by GPS. This could be done either by simple methods of interpolation, or by more sophisticated statistical techniques such as least squares collocation. A full description of the latter may be found in Moritz (1980), and an example of its application is given as part of the case study in section 13.1.

7 Fundamentals of map projections

7.1 Introduction

The fundamental coordinate system for surveying and mapping is a set of geodetic coordinates related to a particular datum. It is then necessary to consider how to arrange the data so that it can be placed on a flat surface. There are two reasons for doing this. The first, and most obvious, is presentational. Whether the data is to be shown on a paper map or on a computer screen, it must of necessity be presented in a two-dimensional format.
The second reason for rearranging the geodetic coordinates in two dimensions is computational. Even a simple concept such as the distance between two points becomes excessively complex when expressed in spheroidal formulae, and wherever possible it is more desirable to carry out computations in a simple two-dimensional coordinate system. A projection, then, is defined as an ordered system of meridians and parallels on a flat surface. It should be immediately apparent that it is impossible to convert a sphere or a spheroid into a flat plane without in some way distorting or cutting it. It follows that there is no single method for doing this; hence the proliferation of types of map projection.

This chapter is concerned with introducing some of the fundamental concepts of map projections, before looking in detail at the different types of projection. Included here are definitions of grids and graticules, and of scale factor, as well as a consideration of the use of spherical or spheroidal models of the Earth in the context of projections, the use of different developable surfaces, and the criteria to be considered in designing a projection for a specific purpose. Some fundamental defining parameters have more conveniently been introduced in the context of specific projections, although they have a universal application. For reference, these are:

• false coordinates and the projection origin: discussed in section 8.1 with reference to the cylindrical equidistant projection
• re-scaling of projections: discussed in section 8.3 with reference to the Mercator projection
• zoning: discussed in section 8.4 with reference to the transverse Mercator projection
• convergence, or the angle between grid north and true north: also introduced in section 8.4.

An example of the development of formulae to convert between geodetic and projection coordinates is given in section 8.1.
A summary of the parameters needed to define a projection is given in Chapter 11.

Chapters 8-10 consider many different methods for projecting coordinates: it will be seen that each of these methods requires several defining parameters. It is therefore important to distinguish between a projection method (for example the transverse Mercator, the polar stereographic, and so on) and a projected coordinate system, which is composed of the method as well as the defining parameters (for example the British National Grid, the Universal Transverse Mercator system, and so on). This distinction in terminology is a useful one, and will be encountered, for example, in the definitions of the GeoTIFF format for transferring geographically referenced data (GeoTIFF, 1995).

7.2 Spheres and spheroids

As mentioned in the introduction, the fundamental coordinate system is a geodetic one related to a spheroid. The relative positions of points in such a coordinate system are not the same as they would be on a sphere. That is to say, if accuracy is to be preserved it is necessary to develop formulae for treating a spheroid rather than a sphere. That said, it should be borne in mind that the flattening of most spheroids is of the order of 1 part in 300. It is therefore apparent that the shape of a particular projection when applied to the sphere is very similar to the shape when it is applied to the spheroid. There are significant differences in the coordinates that result, which will certainly be apparent on a large scale map, but the sphere is nevertheless useful for giving an insight into how the resulting map has been distorted. Most of the explanation that follows therefore uses the sphere as a model of the Earth. Full spheroidal formulae should, however, normally be used in practice.

7.3 Grids and graticules

The appearance of meridians and parallels on the projection depends on the type of projection that has been used.
In general they are an arrangement of straight or curved lines, as shown for example in Fig. 7.1. This set of parallels and meridians, as seen on the map, is known as the graticule. On some maps it may not be shown at all; on others it may be noted around the border of the map and shown in the middle as a set of tick marks (for example on Ordnance Survey 1:50 000 maps, where blue tick marks show the graticule at 5′ intervals).

This graticule does not constitute the basis of a coordinate system that is suitable for computational purposes or for placing features on the projection. Instead, a rectangular coordinate system known as the grid is superimposed on the map (see Fig. 7.2).

Datums and map projections

Figure 7.1 An example of the graticule.

This grid may be given coordinates x and y, or if appropriate eastings (E) and northings (N). This may seem a rather obvious point, but it is important to establish the difference at this stage between a graticule and a grid.

Figure 7.2 A grid superimposed on the graticule.

7.4 Scale factor

Features on the surface of a sphere or a spheroid undergo distortions when projected onto a plane. It is necessary to have a precise definition of the amount of distortion that has resulted. This is provided by the definition of the scale factor, which is given the symbol k in this text. Then

k = (distance on the projection) / (distance on the sphere)    (7.1)

This parameter will be different at each point on the projection, and in many cases will have different values in each direction. Equation (7.1) can therefore be understood to apply in general to only a short distance (in theory infinitesimally short).
For longer lines, the relevant parameter is the integrated mean of the point scale factors along the whole length of the line: section 7.7 discusses the point at which a short line becomes a long one.

It is important to understand that this scale factor results purely from the act of projecting to a flat surface and is therefore unrelated to the map scale (a number such as 1:50 000). The ideal value of scale factor is 1, representing no distortion. It should also be emphasised that a distortion of this type is not the same as an 'error' in the map, as the rules governing it are clearly defined and the true coordinates can always be recovered if the parameters of the projection are known.

It will often be useful to consider what happens to a small square of dimension (1 × 1) on the surface of the sphere when it is projected. In the general case the distortion in the direction of the parallels will be different from the distortion in the direction of the meridians. Let kp represent the scale factor along a parallel, and kM represent the scale factor along a meridian. With reference to Fig. 7.3, the square is then projected as a rectangle of dimensions (kp × kM). It will be seen in the examples of projections that follow that the unit square is often subjected to a rotation as well.

Figure 7.3 Projection of a unit square.

7.5 Developable surfaces

It is possible to derive a set of formulae to convert geodetic coordinates to grid coordinates in purely mathematical terms. Historically, projections were derived by first projecting from the sphere to an intermediate shape, which was of such a nature that it could be unravelled without distortion. This remains a useful concept for categorising and describing map projections. The principal forms of these intermediate surfaces are the cone, the cylinder and the plane itself.
The advantage of these shapes is that, because their curvature is in one dimension only, they can be unravelled to a plane without any further distortion. The developable surface is brought into contact with the sphere, and a set of rules is formulated for the way in which features are taken from the sphere onto that surface. Examples are shown in Fig. 7.4.

The rules for transferring features from the sphere to the projection are discussed in section 7.6. Before looking at these in detail, however, the general point can be made that in the region around the point or line of contact between the two surfaces the scale factor distortion will be minimal. In fact, where the two surfaces are touching the scale factor will be equal to 1.

Figure 7.4 (a) cylindrical surface; (b) conic surface; (c) plane surface.

The choice of developable surface will therefore be dictated by the geographical extent of the region to be mapped. In some cases it is required to project the whole Earth; in others, however, a projection will apply only to a selected area. Thus, for example, a cylindrical surface (Fig. 7.4a) is appropriate for mapping the equatorial regions as it is touching the sphere along the equator. A conic projection (Fig. 7.4b) is good for mapping areas of mid-latitude with a large extent in longitude, as the cone is in contact with the sphere along a line of latitude.

The characteristics of the developable surfaces have only been given in outline here: it is, for example, possible to orientate the surface at different angles from those shown here, or to change the shape of the cone. These aspects are dealt with more fully in Chapters 8-10.

Cones, cylinders and planes are useful for gaining an insight into the appearance of a projection. This is extremely useful in situations where the parameters of the projection are unknown and it is necessary to try to guess them, a situation discussed in section 12.4.
It should be noted here, however, that they are not a necessary step in forming a projection. In general, equations can be derived of the form:

E = f1(φ, λ)    N = f2(φ, λ)    (7.2)

which express the grid coordinates as a function of the geodetic coordinates without reference to intermediate developable surfaces. Indeed, the equations for all the above projections could be given in this form without mention of cones and so forth, and in this case expressions for aspects such as scale factor and convergence could be derived by differentiation.

Most of the more complex projections which depart from the simple forms in the following chapters are usually developed to represent the whole Earth in some way, and so are of less relevance to surveying, remote sensing and GIS. There are exceptions to this, however, such as the New Zealand map grid.

Most large scale mapping is likely to be based on transverse Mercator, Lambert conformal conic, and to a lesser extent the azimuthal stereographic and oblique Mercator projections.

7.6 Preserved features

Having selected the developable surface, it is necessary to devise a set of rules for transferring coordinates from the sphere. In theory, there is an infinite number of ways of doing this, and the choice will depend on the purpose for which the projection is devised.

It is not possible to devise a projection without introducing distortions. In general, the shape, area and size of features on the surface of the sphere will be different when transformed to the projection. The usual approach is to attempt to preserve one of these, usually at the expense of all the others. For example, it may be required that certain of the distances as measured on the sphere should be undistorted when shown on the projection. It is obviously not possible to preserve all distances, as this would then be achieving the unachievable goal of an undistorted projection.
It may be instead that the distances along all meridians should remain undistorted, which is the same as saying that

kM = 1    (7.3)

or the scale factor along a meridian is equal to 1. The effect on the projection of a unit square is shown in Fig. 7.5. Such a projection is said to be equidistant. It can be seen that there remains a scale factor along the parallel which is not equal to 1, and that the shape and area of the square have both been distorted.

Figure 7.5 Projection of a unit square preserving distances along the meridians.

An alternative to this type of projection is one that attempts to preserve area, and is therefore termed an equal area projection. In such a situation we have

kM kp = 1    (7.4)

or in other words the area of the projected unit square remains equal to 1. This is illustrated in Fig. 7.6.

Figure 7.6 Projection of a unit square preserving area.

The other principal classification of projections is that which preserves the shape of features. This is known as an orthomorphic or, more commonly, conformal projection, and the relationship between the scale factors is

kM = kp    (7.5)

This is illustrated in Fig. 7.7.

Figure 7.7 Projection of a unit square preserving shape.

In preserving shape, a conformal projection is therefore preserving angles as well. For example, the angle between the side of the unit square and the diagonal is 45°: this is the angle that would be measured by someone making the observation on the ground. In Fig. 7.7, the angle between the side and the diagonal is also 45°. This is not the case, however, in either Fig. 7.6 (the equal area projection) or Fig. 7.5 (the equidistant projection). For this reason, the conformal projection is the one of most significance in land surveying, as it means that angles measured on the ground can be transferred to the projection for use in computations.
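The three preservation rules just described (equidistant: kM = 1; equal area: kM kp = 1; conformal: kM = kp) can be checked numerically on the unit square of Figs 7.5-7.7. This is an illustrative sketch only; the value sec 60° = 2 for kp is an arbitrary example, not taken from the text.

```python
import math

def project_unit_square(k_p, k_m):
    """Project a unit square with scale factors k_p (along a parallel)
    and k_m (along a meridian); return its area and the angle (degrees)
    between a side and the diagonal."""
    width, height = k_p * 1.0, k_m * 1.0
    return width * height, math.degrees(math.atan2(height, width))

# sec 60 deg = 2 is an arbitrary illustrative value for k_p
k_p = 1.0 / math.cos(math.radians(60.0))

area_eq, angle_eq = project_unit_square(k_p, 1.0)        # equidistant: k_m = 1
area_ea, angle_ea = project_unit_square(k_p, 1.0 / k_p)  # equal area: k_m * k_p = 1
area_cf, angle_cf = project_unit_square(k_p, k_p)        # conformal:  k_m = k_p
```

The equal area case returns an area of 1, and the conformal case returns a diagonal angle of 45°, while each rule distorts the properties that the others preserve.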
A conformal projection is therefore one of the most frequently used, and is likely to be the basis of almost all large scale mapping.

Finally, it should be noted that the three types of projection mentioned above, although the most commonly used of all projections, do not constitute an exhaustive list. Other types are possible, and are sometimes used, which preserve neither shape, area nor any distances. Some of these will be referred to in later sections.

It should be emphasised here that the unit square used in these examples has to be a very small one for the conclusions drawn here to be exact: for bodies of finite size, some of these assumptions can break down. This point is treated more extensively in Chapter 12.

In summary, most projections may be classified firstly according to the shape of the developable surface, which is dictated primarily by the geographical area to be mapped, but also in part by the function of the map, and secondly by the features on the sphere which are to be preserved on the projection.

7.7 Computational aspects

From a traditional land surveying point of view, there are two principal aspects to computing coordinates on projections. The first concerns distances, and the second concerns angles.

The basic principle of transferring a distance measured on the Earth to one to be used on a projection has essentially been covered by equation (7.1) in section 7.4. This can be rearranged as

distance on the projection = k × distance on the sphere    (7.6)

That is, any measured distance must be multiplied by the appropriate scale factor in order to use it on the projection. This procedure is simplified somewhat by the fact that most projections used for survey computations are conformal.
Hence, the scale factor at a point will be the same in all directions.

In section 7.4 it was pointed out that the definition of scale factor applies in theory to lines of infinitesimally short length. In practice, the scale factor changes so slowly across a projection that a single scale factor can often be considered as applicable to all the distances in one survey, rather than having to compute a separate one for each. How slowly? And how significant is the change in distance as a result of applying the scale factor? In part, the answers to these questions will depend on the particular projection but, as the area that any one survey projection covers is restricted in order to avoid excessive corrections of this type, a transverse Mercator projection applicable to a zone 6° wide may be taken as a suitable example and inferred as being typical of most survey projections. (Note the distinction here between a 'survey' projection used for base mapping and computational purposes and 'other' projections where the object may be the presentation of data and the distortions very large.)

A mid-latitude country mapped on a transverse Mercator projection has a maximum scale factor of around 1.0004. This means that a distance of 100 m as measured on the ground should be scaled to 100.04 m when used on the projection, a correction that is certainly significant when compared with the accuracy that may be obtained with an electromagnetic distance measurer (EDM).

The region with the most extreme rate of change of scale factor is on the edge of the projection. Over a distance of 5 km, the scale factor may, for example, vary from 1.00043 at one end of a line to 1.00039 at the other. The error introduced by using the scale factor at one end of the line, rather than the average computed over its whole length, would in this situation be around 8 cm.
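The 5 km example can be checked with a few lines of arithmetic. The end-point scale factors are the illustrative values quoted above; a simple mean of the two ends is used here as the 'average' over the line, which gives the same order of magnitude as the figure quoted (the exact value depends on how the scale factor varies along the line).

```python
# Scale factor over a 5 km line: single end-point value versus the mean.
line_length = 5000.0                    # metres, measured on the ground
k_start, k_end = 1.00043, 1.00039       # scale factors at the two ends

k_mean = (k_start + k_end) / 2.0        # simple mean of the end points

using_endpoint = line_length * k_start  # projected distance, end-point k
using_mean = line_length * k_mean       # projected distance, mean k

discrepancy = using_endpoint - using_mean   # metres; about 0.1 m here
```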
A similar calculation over a distance of 3 km in the worst case scenario shows a potential error of 3 cm.

For a target accuracy of 1 cm, and distances over 1 km, it would therefore be advisable to calculate a scale factor for each line. This can be done either by using the mean of the end point values of scale factor or by using Simpson's rule (Allan, 1997a) for higher precision. For a project spread over an area of less than 1 km², it will nearly always be acceptable for a general scale factor to be used for the project as a whole.

The correction to be applied to angles arises because, in general, the straight line observed between two points does not plot as a straight line on the projection. Standard trigonometric calculations on the coordinates, distances, and angles assume that all lines are straight, and so a correction has to be applied to obtain the angle that would have been observed along the straight lines on the projection. This is illustrated in Fig. 7.8. As the magnitude of this correction is proportional to the length of the line, and is never large, its significance has greatly diminished with the use of GPS in place of triangulation over long distances. At its most extreme on the UK National Grid it reaches 3″ over a distance of 5 km, and 6″ over 10 km. For the vast majority of modern surveys it may therefore be safely dismissed. Otherwise, a text such as Allan (1997b) may be consulted.

Figure 7.8 Difference between actual line and projection line.

7.8 Designing a projection

In many applications, those working with map projections will be using an existing projected coordinate system, and the task will be to identify the projection method and the associated parameters. In other situations it is necessary to design a projection for a particular purpose, in which case the choice of projection method and parameters is up to the user.
In such a case it is first necessary to define what is meant by a suitable projection. A suggested order of criteria is given below.

1. It should preserve any properties that the use of the map dictates. That is, if the map is to be used to measure the areas of features, the projection must be an equal area projection. Alternatively, if it is to be used for surveying or navigation, the shape must be preserved and the projection has to be conformal.

2. A good projection is one that minimises the scale factor over the region; that is, the scale factor must be everywhere as close to unity as possible. In doing this there may be some goal for the maximum allowable scale factor distortion, which may lead to a situation where a single projection cannot achieve the desired result and the area must be split up into zones. This adds to the complexity of the situation, and makes it difficult to carry out computations between points in different zones. In such situations, it may be that using a projected coordinate system is no longer appropriate.

3. Any additional properties would usually be considered after the scale factor. It may be required, for example, that the appearance of the graticule should be as simple as possible, or that meridians should be parallel to each other. In some circumstances this might be considered more important than scale factor: for example, the Mercator projection is used in navigation because of the parallel meridians, even though it has a higher scale factor distortion than some other projection methods.

Hints as to which projection is most likely to achieve the above goals are given in the descriptions of the projection methods in the following chapters.
For example, the transverse Mercator projection is suitable for regions that are longer in their north-south extent than their east-west extent, and conic projections are suitable for mid-latitude regions with a large extent in longitude. An example of the thinking involved in this process is given in the case studies in sections 13.2 and 13.3.

8

Cylindrical projections

8.1 Cylindrical equidistant projection

Following the rules and procedures outlined in Chapter 7, a cylindrical equidistant projection is formed by bringing a cylinder into contact with the sphere and 'peeling' the meridians off the sphere and onto the cylinder without any distortion. This maintains the feature that kM = 1. In so doing, it is necessary to stretch each parallel of latitude to be the same size as the equator. If the circumference of the equator, Leq, is given by

Leq = 2πR    (8.1)

where R is the radius of the spherical Earth, and the circumference of a parallel of latitude φ is given by

Lφ = 2πR cos φ    (8.2)

then, by the original definition of scale factor in equation (7.1),

kp = 2πR / (2πR cos φ) = 1/cos φ = sec φ    (8.3)

An example of this projection for the European region is shown in Fig. 8.1. The features to be noted are:

• As with all normal cylindrical projections, the meridians are straight and parallel to each other.
• The distances along the meridians are undistorted.
• The scale along the equator is true, but the scale of all other parallels becomes increasingly distorted towards the poles, with the extreme case of the poles themselves being represented as straight lines. Here it should be noted that

sec 90° = ∞    (8.4)

In consequence, the shape and the area become increasingly distorted towards the poles.
• The flat, square appearance of the graticule leads to the French term plate carrée,
which is sometimes also used in English.

Figure 8.1 Cylindrical equidistant projection.

In computational terms, it is necessary to have a set of formulae to determine the coordinates of points on the map (E, N) given their geodetic coordinates. The first step is to select an origin for the projection. Here it may be conveniently chosen, for example, as

φ0 = 30°  (the origin of latitude)
λ0 = 0°  (the origin of longitude)

The x or E (eastings) coordinate is calculated as the distance along the equator between the projected point and the origin, or

E = R Δλ    (8.5)

where

Δλ = (λ − λ0)    (8.6)

Similarly, the y or N (northings) coordinate is expressed as the distance along a meridian from the origin to the projected point, or

N = R Δφ    (8.7)

where

Δφ = (φ − φ0)    (8.8)

This then has the effect that any points south of 30°N or west of 0° will have negative coordinates on the projection. This is generally undesirable, and is avoided by adding a suitably large number to all the coordinates obtained so far. Thus:

E' = E + E0    (8.9)
N' = N + N0    (8.10)

where E' and N' now represent the false eastings and false northings of the point and E0 and N0 are the false eastings and false northings of the origin, which are parameters to be defined for the projection. The original coordinates E and N are referred to as the true eastings and northings.

It is also possible to derive the geodetic coordinates of any point whose projection coordinates are known:

φ = φ0 + (N' − N0)/R    (8.11)
λ = λ0 + (E' − E0)/R    (8.12)

Several important points should be noted.

• These formulae are specific to the cylindrical equidistant projection. They have been written in full as an example of the information that is necessary for defining a projection: it is not possible in a volume such as this to quote full formulae for all projections. It is to be assumed that the reader will have access to the necessary formulae via software packages such as Arc/Info.
Further comments on this are given in Chapter 11.

• The formulae for a spheroid are a little more complex than those given above, which were derived using a spherical Earth.
• The concepts of the origin of the projection and the false coordinates have for convenience been introduced with reference to the cylindrical equidistant projection. It should be noted, however, that these are applicable to any projection, and are a part of the set of parameters that define it.

8.2 Cylindrical equal area projection

It was shown in the previous section that the cylindrical equidistant projection distorts parallels by the scale factor sec φ while leaving the meridians undistorted. It is therefore the case that areas have also been distorted. To compensate for this, an equal area projection can be formed according to the rule of equation (7.4) that kM kp = 1. Since the parallels must be distorted by sec φ whatever happens (to fit onto the cylinder), this implies that

kM = 1/kp = cos φ    (8.13)

Hence each small section of each meridian is multiplied by cos φ as it is 'unpeeled' and placed on the projection. This leads to the result shown in Fig. 8.2.

Figure 8.2 Cylindrical equal area projection.

The features to be noted are:

• The scale factor in the equatorial region is close to 1 for both the meridians and the parallels. This is a consequence of cylindrical projections being optimal for the equatorial regions.
• The scale factor along the meridians is no longer equal to 1, and hence distances cannot be measured directly off such a map. Furthermore, the correction to be applied is not a straightforward one, as the scale factor is a function of latitude.
• The distortion of shape is now extreme towards the poles, as in addition to the scale factor sec φ along the parallels there is also the distortion cos φ along the meridians.
• Formulae can again be derived for converting between (φ, λ)
and (E, N), which require the coordinates of the origin and the false coordinates as input.

A further refinement of this projection is to keep the equal area property but to change the shape, by applying a further scaling of 0.5 along the parallels and 2 along the meridians. This leads to what is usually referred to as the Peters projection, often used by international organisations for displaying the countries of the world in their correct relative sizes. This is shown in Fig. 8.3. Note that the shape of features is now correct in the mid-latitudes, as opposed to the equatorial regions with the conventional form, and that there is less shape distortion near the poles.

Figure 8.3 Peters projection.

8.3 Mercator projection

One of the most important of all cylindrical projections is the conformal version, which is given the particular name Mercator. In this projection, note again that kp = sec φ and hence, following equation (7.5),

kM = kp = sec φ    (8.14)

This then leads to a projection such as that shown in Fig. 8.4.

Figure 8.4 Mercator projection.

The general features of the Mercator projection are:

• The scale factor at any point and in any direction is equal to sec φ, the secant of the latitude.
• In consequence, the pole is now of infinite size and at infinite distance from the equator, and hence cannot be represented on the projection.
• The fact that the meridians are parallel to each other and that the angles are preserved makes this an ideal projection for navigation. A line drawn between two points on the map, A and B, as shown in Fig. 8.5, has a constant angle with respect to the meridians (the azimuth from north), which can be read directly from the map. This is then the azimuth that should be followed in navigating from A to B. Such a line is termed a rhumb line or a loxodrome.
It should be noted, however, that this line is not the shortest route between A and B, owing to the variation of scale factor within the projection. The shortest route between the two, the great circle, will in fact be projected in most cases as a curved line.

Figure 8.5 Line of constant azimuth from A to B on a Mercator projection.

• An individual map sheet (or navigation chart) usually represents a small part of the world. Hence it has a scale factor that varies within the map according to the range of latitudes it represents. A chart of the English Channel, for example, might represent the range of latitudes between 49°N and 51°N. The scale factor would then vary between sec 49° and sec 51°, or 1.52 and 1.59. It is appropriate then to apply an overall scaling to the map so that the scale factor is on average equal or closer to 1. This will not affect the shape of the map in any way, but will make any distances read from it closer to their true values. In this example an appropriate figure might be

k0 = 1/1.55 = 0.645

so that the scale factor is now seen to vary between k0 sec 49° and k0 sec 51°, or 0.98 and 1.03, which means that a distance read from the map will be correct to within 3%. Because the scale factor on the equator was originally equal to 1, it is now equal to k0. A software package such as Arc/Info, for example, achieves this by asking the user to input the latitude of the parallel at which the scale is true. If the user has a projection defined in terms of the scale factor on the equator, k0, the appropriate parallel may be computed by knowing that the general scale factor at any point, k, is now equal to k0 sec φ.

This concept of an overall re-scaling of the projection is, once again, one that has been introduced for a specific example but which has a general application to all map projections.
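The chart re-scaling arithmetic in the English Channel example can be reproduced in a few lines (the latitudes 49°N-51°N and the rounded value k0 = 1/1.55 are the figures from the example above):

```python
import math

def sec(angle_deg):
    """Secant of an angle given in degrees."""
    return 1.0 / math.cos(math.radians(angle_deg))

# Unscaled Mercator scale factors for a chart spanning 49N to 51N
k_low, k_high = sec(49.0), sec(51.0)   # roughly 1.52 and 1.59

# Overall re-scaling chosen so the scale factor averages close to 1
k0 = 1.0 / 1.55                        # about 0.645

# Re-scaled extremes: distances read from the chart are now
# correct to within about 3 per cent
k_low_scaled, k_high_scaled = k0 * k_low, k0 * k_high
```

The parallel at which the scale is true is the one where k0 sec φ = 1, i.e. cos φ = k0, here about 49.8°N.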
It, or its equivalent, then forms another of the parameters required to define a projection.

8.4 Transverse Mercator projection

All of the cylindrical projections discussed so far were formed by placing a cylinder in contact with the equator and, although they are often used to portray the Earth as a whole, they are therefore optimised for use in the equatorial regions.

Figure 8.6 Forming a transverse cylindrical projection.

For those parts of the Earth that do not lie close to the equator, an alternative is to turn the cylinder onto its side and make the line of contact a particular meridian, as in Fig. 8.6. A projection so formed is termed a transverse cylindrical projection, and can be based on any chosen central meridian. Again, a set of rules can be proposed, to produce equal area, equidistant, or conformal projections. By far the most important of these is the transverse Mercator projection, an example of which is shown in Fig. 8.7, which has been based on 0° as the central meridian. The important features of the transverse Mercator projection are:

Figure 8.7 Transverse Mercator projection centred on 0°.

• The scale factor at each point is the same in any direction, and is given by

k = sec θ    (8.15)

where θ is exactly analogous to φ, except that it is the angular distance from the central meridian, rather than the angular distance from the equator. It is in a sense a 'sideways version of latitude'. Note that it is not the same as longitude, but for a sphere can be found from the expression

sin θ = sin Δλ cos φ    (8.16)

• The meridians are no longer parallel to each other, and in fact are no longer straight lines. The exception is the central meridian, which is a straight line. At any general point, the meridian makes an angle with the central meridian (which is also the direction of grid north) and this angle is termed the convergence, γ. For the sphere,

γ ≈ Δλ sin φ    (8.17)

• Although Fig.
8.7 shows the transverse Mercator applied to the European region, it is more commonly used for a narrow band of no more than ±3° on either side of the central meridian. This is principally due to its use as a survey projection, in which it is required to minimise the scale factor distortion at the expense of the extent of coverage. In this situation, the scale factor varies between 1 on the central meridian and 1.0014 on the edge of the projection, as shown in Fig. 8.8. It is then possible to minimise the scale factor distortion by once again applying an overall scaling k0, which in the context of the transverse Mercator projection is now termed the central meridian scale factor. An appropriate and typical value would be 0.9996, which means that the scale factor across the projection now ranges from 0.9996 on the central meridian to 1.0010 (0.9996 sec 3°) on the edge. This is represented by the lower line in Fig. 8.8.

Figure 8.8 Scale factor for transverse Mercator projection.

The transverse Mercator projection is very widely used, and is particularly appropriate for regions with a large extent north-south but little extent east-west. It is, for example, the projection used for the Ordnance Survey National Grid for maps (and digital products) of Great Britain (Ordnance Survey, 1995b). In this case the relevant parameters are given in Table 8.1.

The transverse Mercator is also the basis of a worldwide projection system known as universal transverse Mercator (UTM). This system divides the world up into 60
This system divides the world up into 60\nCylindrical projections\nTable 8.1\n77\nParameters for the OS and UTM projections\nProjection\nOS National Grid\nUTM (<p > 0°)\nUTM (C\\> < 0°)\nA.o\n<Po\nFalse east (m)\nFalse north (m)\nko\n2°W\nZonal\nZonal\n49°N\n0°\n0°\n+400000\n+500000\n+500000\n-100000\n0\n+10000000\n0.9996012717\n0.9996\n0.9996\nzones of longitude, each of width 6°. The zones are numbered from I starting at a\nlongitude of 180 E, and increase eastwards, as shown in Fig. 8.9. Thus the UK, for\nexample, lies in UTM zones 30 and 31.\n0\nI\nI\nI\ne o\ni\n!\n~\nCentral\nmeridian\n177°E\nCentral\nmeridian\n177°W\n~)\ni\nCentral\nmeridian\n171°W\nLongitude\nFigure 8.9\nThe universal transverse Mercator system.\nWithin each zone, the central meridian lies along the centre of the zone. Thus,\nUTM zone I has central meridian 177°W, for UTM zone 2 it is 17l o W, and so on.\nAll other parameters of the UTM system are as given in Table 8.1. Specifying that\na projection is in UTM is therefore sufficient to define it completely, provided that\nthe zone number is specified. Regions that fall across UTM zones would be shown\non separate projections, with a discontinuity in between. Features falling across the\nboundary would have approximately the same scale factor on either side, but would be\nrotated with respect to each half since the convergence is in opposite directions (true\nnorth is always towards the central meridian).\nOnce again, the concept of zoning is one that has been introduced with reference\nto a particular projection but which is generally applicable for others. Note that the\nterm Gauss-Kruger is also sometimes used for this projection.\n8.5 Oblique Mercator projection\nThe final classification of cylindrical projections is used where a country or region to\nbe mapped is longer in one direction than another but is not aligned along a meridian\n78\nDatums and map projections\nor parallel. 
In this situation it is possible to formulate an oblique aspect of the Mercator projection to minimise the scale factor, as in Fig. 8.10.

Figure 8.10 Forming an oblique Mercator projection.

In defining this projection it is necessary to specify the azimuth of the central line, as well as all the other parameters discussed earlier. The scale factor will now be proportional to the secant of the distance from the centre line. An example of the use of this projection is in peninsular Malaysia, where the projection is termed the Hotine oblique Mercator, or rectified skew orthomorphic.

A similar projection is the space oblique Mercator, which was developed for displaying satellite imagery. In this projection, the centre line is the ground track of the satellite. The formulae are complex, but again the scale factor is approximately proportional to the secant of the distance from the centre line.

9 Azimuthal projections

9.1 General azimuthal projection

An azimuthal projection is formed by bringing a plane into contact with the sphere or spheroid and formulating a set of rules for the transfer of features from one surface to the other. Once again the properties preserved can be distance, area, shape, or others.

Because the point of contact between a sphere and a plane is a single point, the scale factor distortion will be circularly symmetric. That is, the scale factor will be a function of the distance from the centre of the projection. An azimuthal projection is therefore particularly suited to small 'circular' features on the surface of the Earth.

A special case of the azimuthal projection is where the point of contact is one of the poles. This is referred to as a polar projection. This has the rather obvious application of mapping the polar regions. A polar projection is formed by taking the meridians off the sphere and placing them on the plane.
The amount of distortion of the meridians is a function of the type of projection, with the distortion of the parallels following in consequence. The general form of the polar projection is therefore a set of meridians radiating from the pole with no distortion of the angle at the centre. This is shown in Fig. 9.1.

Figure 9.1 General form of the polar projection. Distances of parallels from the centre are a function of the projection type.

As with the cylindrical projections, the azimuthal projections can have a further overall scaling applied to them, which has the effect of reducing the scale at the centre to less than 1, and making the scale true along a circle centred on the projection point. For the polar aspect this will make the scale true along a parallel of latitude away from the pole.

9.2 Azimuthal equidistant projection

The azimuthal equidistant projection is formed by keeping the scale factor equal to 1 in the direction radial from the centre of the projection. In the case of the polar equidistant projection, an example of which is shown in Fig. 9.2, this means that the scale factor on the meridians, k_M, is equal to 1. The scale factor along a parallel, k_P, is given as a function of latitude, φ (in radians), by

k_P = (π/2 − φ) / cos φ     (9.1)

and thus increases from 1 at the pole to 1.02 at 70° and 1.09 at 50°.

Figure 9.2 Polar equidistant projection.

As a further example, Fig. 9.3 shows an azimuthal equidistant projection that is centred on London.
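The values quoted for equation (9.1) can be verified directly; a short sketch (the latitude must be converted to radians before applying the formula):

```python
import math

def polar_equidistant_kp(lat_deg: float) -> float:
    """Scale factor along a parallel for the polar equidistant
    projection: k_P = (pi/2 - lat) / cos(lat), with lat in radians."""
    lat = math.radians(lat_deg)
    return (math.pi / 2 - lat) / math.cos(lat)

for lat in (70.0, 50.0):
    print(lat, round(polar_equidistant_kp(lat), 2))  # 70.0 1.02, then 50.0 1.09
```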
All distances from London are correct when measured from the map; all other distances are too long.

Figure 9.3 Azimuthal equidistant projection centred on London.

9.3 Azimuthal equal area projection

The azimuthal equal area projection is formed in a similar way to the azimuthal equidistant projection, except that the scale factor of the lines radial from the centre is set to the inverse of the scale factor in the perpendicular direction. For the polar aspect, shown in Fig. 9.4, this leads to scale factors of

k_M = cos(45° − ½φ)     (9.2)
k_P = sec(45° − ½φ)     (9.3)

Figure 9.4 The polar equal area projection.

9.4 Azimuthal stereographic (conformal) projection

The conformal version of the azimuthal projection is termed the azimuthal stereographic projection. This is for historical reasons, because this projection can be constructed graphically by projecting all points from a 'viewing point' on the opposite side of the Earth from the centre of the projection.

As with all conformal projections, this one has a particular significance as it is sometimes used as the basis for national mapping, being particularly appropriate for small, compact countries or islands. The polar stereographic projection, shown in Fig. 9.5, is used as a complement to the universal transverse Mercator beyond latitudes ±80°, when it is known as the universal polar stereographic projection (UPS). In this usage, the scale at the pole (k₀) is reduced to 0.994, which results in a standard parallel (where scale is true) of 81°06'52.3".
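The UPS standard parallel quoted above follows from the polar stereographic scale-factor formula k = 2k₀/(1 + sin φ) given in the spherical form by Snyder (1987): setting k = 1 and solving for φ gives the parallel of true scale. A sketch (the function name is ours):

```python
import math

def ups_standard_parallel(k0: float = 0.994) -> float:
    """Latitude (degrees) at which the polar stereographic scale
    factor k = 2*k0 / (1 + sin(lat)) equals 1, i.e. sin(lat) = 2*k0 - 1."""
    return math.degrees(math.asin(2 * k0 - 1))

lat = ups_standard_parallel()
deg = int(lat)
minutes = (lat - deg) * 60
print(deg, round(minutes, 1))  # about 81 deg 6.9 min, i.e. roughly 81 deg 06' 52"
```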
The false eastings and northings in this version are

E₀ = +2000000 m
N₀ = +2000000 m

In general the scale factor for the polar aspect is given by

k = 2k₀ / (1 + sin φ)     (9.4)

which is the same in any direction.

Figure 9.5 Polar stereographic projection.

9.5 Gnomonic projection

One final form of azimuthal projection should be noted here, as although it is seldom used in modern applications it does have some interesting properties. This is the gnomonic projection, which is formed by projecting all points from the centre of the Earth as shown in Fig. 9.6.

Figure 9.6 Formation of a gnomonic projection.

An example of the polar aspect is shown in Fig. 9.7. As would be expected, the scale factor distortion becomes extreme away from the centre of the projection, reaching a value of 2 along the meridian when the latitude is 45° and a value of 4 at a latitude of 30°. It is clearly not possible to show an entire hemisphere with this projection.

Figure 9.7 The (north polar) gnomonic projection.

The only advantage of this projection is that it is the only one where all great circles (the shortest route between two points on a sphere) are shown as straight lines on the projection, and vice versa. This feature means that it can be used to plan the shortest route between two points, although this role has largely been superseded by computational techniques.

10 Conic projections

10.1 General conic projection

A conic projection is formed by bringing a cone into contact with the sphere or the spheroid. In so doing it is seen to be touching the sphere along a parallel of latitude. This line is known as the standard parallel of the projection.

Figure 10.1 Cones in contact with different standard parallels.

It can be seen from Fig. 10.1 that many different shapes of cone can be selected, all resulting in a different standard parallel.
The choice will depend upon which region of the Earth is to be mapped, an appropriate standard parallel being one that passes through the centre of the region.

The resultant form of the conic projection is that the meridians appear as straight lines converging towards one of the poles. The angle between two meridians is a function of the standard parallel, and can be expressed as

γ = Δλ sin α     (10.1)

where Δλ is the difference in longitude of the two meridians and α is the latitude of the standard parallel.

The conic is in fact a general case of projection, of which the cylindrical and azimuthal projections are particular forms. In equation (10.1), as α tends towards 90°, γ tends to Δλ, which indicates that the angles between the meridians are true. This is as in the polar projection, which is the equivalent of a completely flat cone touching the sphere at the pole. Similarly, as α tends to 0°, γ tends to 0°, which indicates that the meridians are parallel as in a normal cylindrical projection. A cylinder is the equivalent of a cone touching the equator.

These considerations are useful for gaining an insight into the nature of conic projections, but should not be implemented in practice, as the formulae for the cone are likely to break down under these extreme conditions.

The equivalent of an overall scaling is often used for conic projections, where it is achieved by using two standard parallels as shown in Fig. 10.2. The effect is to reduce the scale factor below 1 between the two standard parallels and increase it above 1 outside them.

Figure 10.2 Formation of conic projection with two standard parallels.

Finally, it should be noted that for any conic projection the scale factor is entirely a function of latitude,
and these projections are therefore suitable for depicting regions with a broad extent in longitude, particularly mid-latitude regions.

10.2 Conic equidistant projection

A conic equidistant projection preserves the scale factor along a meridian (k_M = 1). The parallels are then equally spaced arcs of concentric circles. The scale factor along a parallel of latitude is given as a function of latitude φ, in radians (Snyder, 1987), by

k_P = (G − φ) n / cos φ     (10.2)

where

G = (cos φ₁) / n + φ₁     (10.3)

n = (cos φ₁ − cos φ₂) / (φ₂ − φ₁)     (10.4)

and φ₁ and φ₂ are the two standard parallels. If only one standard parallel is used, n in equation (10.4) is simply equal to sin φ₁.

An example of this projection is shown in Fig. 10.3.

Figure 10.3 Conic equidistant projection with standard parallels at 20° and 60°N.

10.3 Albers (conic) equal area projection

The equal area version of a conic projection is usually called Albers equal area. An example of this for the European region is shown in Fig. 10.4.

It will be noted that the pole is shown in this projection as a circular arc, indicating once again that shape has been sacrificed to keep the area undistorted. It should also be noted, however, that the shape is not as badly distorted as in the cylindrical equal area projection in Fig. 8.2. This is mainly a function of the region being projected: the European area shown on these two maps is largely an area in mid-latitude with a large east-west extent, which is more suited to a conic projection than to a cylindrical one.

10.4 Lambert conformal conic projection

The conformal version of the conic projections is usually named after Lambert, who first developed it in 1772 (Snyder, 1987). The full name is the Lambert conformal conic (LCC), but most references to a Lambert projection would usually be understood to refer to this one (though not without some ambiguity).
This is an extremely widely used projection, and it is probably true to say that LCC and the transverse Mercator between them account for 90% of base map projections worldwide.

Figure 10.4 Albers equal area (conic) with standard parallels at 20° and 60°N.

Figure 10.5 Lambert conformal conic projection with standard parallels at 20° and 60°N.

An example of the LCC is shown in Fig. 10.5. Because it is a conformal projection, the meridians meet at a point that represents the pole. The example in Fig. 10.5 was formed with the equivalent of a standard parallel at 40°. An LCC with a standard parallel at the equator would effectively be the same as a Mercator projection, with the meridians parallel and never reaching the infinite pole; one with a standard parallel at 90° would be the same as a polar stereographic projection. Examples near these extremes are shown in Fig. 10.6.

An LCC projection may also be formed with two standard parallels, as with all conic projections. In this case it is the equivalent of one standard parallel halfway between, with an additional scaling applied.
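For the single-standard-parallel case, Snyder (1987) gives the spherical LCC scale factor as k = (cos φ₁ / cos φ)[tan(45° + ½φ₁)/tan(45° + ½φ)]ⁿ with n = sin φ₁. A quick sketch confirms that scale is true on the standard parallel and increases either side of it (the function name is ours):

```python
import math

def lcc_scale_factor(lat_deg: float, sp_deg: float) -> float:
    """Spherical Lambert conformal conic scale factor with one
    standard parallel sp_deg (Snyder, 1987), using n = sin(sp)."""
    lat, sp = math.radians(lat_deg), math.radians(sp_deg)
    n = math.sin(sp)
    t = math.tan(math.pi / 4 + sp / 2) / math.tan(math.pi / 4 + lat / 2)
    return (math.cos(sp) / math.cos(lat)) * t ** n

# Scale is true on the standard parallel and grows away from it.
print(round(lcc_scale_factor(40.0, 40.0), 4))  # 1.0
print(round(lcc_scale_factor(30.0, 40.0), 4))
print(round(lcc_scale_factor(50.0, 40.0), 4))
```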
The usual arrangement for minimising distortion is to have two standard parallels which are each one-sixth of the range of latitude in from the extremes of the projection.

The formulae for LCC are complicated, but the expression for scale factor for the case with one standard parallel and a spherical Earth can be quoted (Snyder, 1987) as

k = (cos φ₁ / cos φ) [tan(45° + ½φ₁) / tan(45° + ½φ)]ⁿ     (10.5)

where φ₁ is the standard parallel and n = sin φ₁.

Figure 10.6 LCC projections at 80°N (above) and 10°N (below).

11 Summary of information required

11.1 Formulae

Examples of formulae for converting between geodetic coordinates and a projected coordinate system were given in section 8.1 for the cylindrical equidistant projection. Throughout this book, however, the assumption has been made that the vast majority of users will not be called on to program specific algorithms into a computer, but will have access to a wide variety of projections via a suite of software. ArcView, for example, supports over 40 different projections. Some of these - for example UTM, UPS, and the British National Grid - are true projected coordinate systems and include the parameters in the specification; others are generic methods of projection and require parameters as input. In such cases, the user will be restricted to choices of sub-sets of these projections, being those defined by the particular parameters summarised in section 11.2.

It may occasionally prove useful to have access to a set of programs for projection computations that are independent of a particular application. One such is the PROJ.4 package from the United States Geological Survey, which is available over the World Wide Web (USGS, 1999).

If it is necessary to use a particular projection for which no computer program is available, a standard work of reference must be consulted. One of the best available is Snyder (1987), which has already been referred to in some of the formulae given in preceding sections.
This work contains formulae in spherical and spheroidal form for over 30 main types of projection, and includes a comprehensive set of worked examples for each.

11.2 Parameters

Each projection is defined in terms both of the formulae needed for converting between geodetic and grid coordinates, and of the parameters of the projection which are necessary as input for those formulae.

Except for standard projections such as UTM and UPS, a projection is not defined simply by its type. A transverse Mercator projection with central meridian 2°W, for example, is completely different from a transverse Mercator projection with central meridian 20°E.

Does it matter what the parameters are and what projection has been used? A map may be digitised in projection coordinates and used in much the same way that a paper map would be used. However, as soon as it is necessary to carry out any kind of computation based on the information presented in the map, a knowledge of the projection and its parameters is necessary. This may be a simple computation such as the area of a feature on the map or the distance between two points, in which case the projection may be ignored for low accuracy applications. But if it is required to combine the data from the map with other data, perhaps from a satellite image or from additional information obtained and referenced using GPS, it is necessary to transform all information into a common coordinate system. For this, a knowledge of the projection and its parameters is required.

Table 11.1 summarises the information that is usually necessary for each category of projection.
The meaning of these terms is summarised in Table 11.2.

Table 11.1 Parameters necessary to define each projection

Projection                 φ₀  λ₀  E₀  N₀  k₀  A₀  φ₁  φ₂
Cylindrical equidistant    ✓   ✓   ✓   ✓
Cylindrical equal area     ✓   ✓   ✓   ✓
Mercator                   ✓   ✓   ✓   ✓   ✓
Transverse Mercator        ✓   ✓   ✓   ✓   ✓
Oblique Mercator           ✓   ✓   ✓   ✓   ✓   ✓
Azimuthal equidistant      ✓   ✓   ✓   ✓   ✓
Azimuthal equal area       ✓   ✓   ✓   ✓   ✓
Azimuthal stereographic    ✓   ✓   ✓   ✓   ✓
Gnomonic                   ✓   ✓   ✓   ✓
Conic equidistant          ✓   ✓   ✓   ✓           ✓   ✓
Albers equal area          ✓   ✓   ✓   ✓           ✓   ✓
Lambert conformal conic    ✓   ✓   ✓   ✓           ✓   ✓

Table 11.2 Meanings of terms in Table 11.1

φ₀  Latitude of the origin. Not necessarily the same as φ₁ for the conic projections
λ₀  Longitude of the origin. Equivalent to the central meridian for cylindrical and other projections
E₀  False eastings to be added to all coordinates. Equivalent to the eastings at the origin. May alternatively be referred to as X₀
N₀  False northings to be added to all coordinates. Equivalent to the northings at the origin. May alternatively be referred to as Y₀
k₀  Overall scaling factor to be applied. May be referred to as the central meridian scale factor for transverse Mercator. Not usually applied to conic projections, as its role is performed by using two standard parallels
A₀  The azimuth of the centre line for oblique projections
φ₁  The first (or only) standard parallel
φ₂  The second standard parallel

The question then remains of where to obtain this information. Some maps have the name of the projection written in the margin, either specifically or in rather a vague way (e.g. 'conic with two standard parallels').
It is very unusual, however, to see the actual values of the parameters printed on a map.

Most national mapping organisations publish the values of the parameters used for their own map series, for example Ordnance Survey (1995b), but obtaining the information may be a lengthy process. Nor does it mean that all maps of that country will be printed in that projection: a publisher may have devised their own, and it is also difficult to obtain this information. Again, a reference such as Snyder (1987) may prove useful, though it is by no means comprehensive on this point. An alternative is Jones (1990).

If it is not possible to obtain any information on the projection used, recourse may be had to the techniques outlined in Chapter 12.

12 Direct transformations

12.1 Compatibility of coordinate systems

A frequently used alternative when two sets of data have to be combined into a single datum and projection is to transform one into the other using points that are common to both sets. If, for example, a satellite image is to be transformed into a ground system which is represented by a given map, this may well be the case. In these circumstances the projection and datum of the map are being adopted as the reference system, without having any detailed information on what its parameters actually are.

The procedure in the case of transforming an image to a map is often referred to by remote sensors as warping. In fact, this is an approach that is applicable in other areas as well, including the case where high precision GPS data is warped to a local grid over a small area (as discussed in section 6.5) and in GIS.

To begin with, however, it must first be considered whether these techniques are likely to produce a satisfactory result. That is, the limitations of such an approach will be explored before dealing with the detailed procedures.
It is easy to demonstrate situations where any kind of two-dimensional transformation is not possible, by reference to two absurdly different projections (see Fig. 12.1).

Figure 12.1 No simple transformation can convert from one projection to the other.

We are therefore concerned with a consideration of how close the shapes of the two projections are. For some situations, as in Fig. 12.1, the answer is obvious. It might be thought that it is always possible to transform from one conformal projection to another, since by definition the shape is preserved. This condition only holds true for small features, however, because the scale factor varies within the projection and it is therefore impossible for the shape of a large area to be preserved. An example of this is shown in Fig. 12.2, which represents the effect of a variation in scale factor within a projection on a square of large dimensions.

Figure 12.2 Scale factor variations within a projection (the scale factor increases from left to right across a large square).

The point of concern, then, is the variation in the ratio of scale factor between the two projections across the area concerned. If this figure is largely constant, a simple overall scale applied to one image or projection is likely to account for most of the differences between the two. If, on the other hand, there is a large variation across the area, it is questionable whether a simple transformation is likely to be adequate. The key parameter to be considered here is the ratio of the scale factors between the two projections and its variations within the area under consideration. This parameter can be defined as

r = k_A / k_B     (12.1)

where k_A is the scale factor in the target projection and k_B is the scale factor in the source projection, and r therefore represents the scale change in going from one projection to the other.
If r is more or less constant across the area concerned, the two projections have essentially the same geometry, and a simple two-dimensional similarity transformation is likely to yield satisfactory results.

It is of course necessary to be a little more precise than the statement 'more or less constant', however, so consider a situation in which the scale factor ratio across an area ranges from a minimum of r_min to a maximum of r_max. A similarity transformation applied to the data set as a whole would in effect apply a mean scale factor ratio of r_mean (that is, assuming that the control points used to derive the transformation were evenly spread out over the whole area). The situation might, for example, look like the one depicted in Fig. 12.3, in which the scale factor ratio increases from a minimum down the left-hand side to a maximum down the right-hand side. If an overall scaling is applied in this example, the right-hand side of the area will be scaled along with all other data by the ratio r_mean instead of the correct ratio r_max.

Figure 12.3 Variation of scale factor ratio.

The mismatch along the right-hand side of the image will then be given by

E = D (r_max / r_mean − 1)     (12.2)

where D is in this case the length of the right-hand side, or more generally the dimension of the region where the incorrect scale factor is being applied. Equation (12.2) therefore provides a reasonably good rule of thumb for the errors, E, that are likely to result from the application of a similarity transformation.

This concept can be extended to a consideration of higher order transformations, as covered in sections 12.2-12.4. These can be thought of in conceptual terms as 'tying' two data sets together at a set of control points that are identifiable in both systems.
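Equation (12.2) can be applied as a quick rule of thumb before choosing a transformation. A sketch, with illustrative numbers of our own rather than from the text:

```python
def similarity_mismatch(D: float, r_max: float, r_mean: float) -> float:
    """Rule-of-thumb mismatch E = D * (r_max / r_mean - 1) when a mean
    scale factor ratio is applied instead of the locally correct one."""
    return D * (r_max / r_mean - 1)

# A 50 km wide area in which the scale factor ratio varies from
# 0.9996 (the mean applied) up to 1.0004 at one edge:
print(round(similarity_mismatch(50000.0, 1.0004, 0.9996), 1))  # about 40 m of mismatch
```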
Exactly the same formula may be applied, but in this situation the extremes r_min and r_max are to be thought of not across the whole area but between two control points. The dimension D is then the distance between the control points, rather than the dimension of the whole area. This is illustrated in Fig. 12.4.

Figure 12.4 Control for higher order transformation.

Equation (12.2) will under these circumstances provide a rather conservative estimate of the likely error, as it takes no account of the non-linearity of the transformation. However, it does assume that a transformation appropriate to the number of control points is being used. That is, there is nothing wrong with using half a dozen control points to derive the parameters of a similarity transformation (in fact it is laudable), but it should not be assumed that the projection problems have been overcome as a result.

A numerical example of an assessment of the compatibility of two data sets is given in the case studies in sections 13.4 and 13.5.

12.2 Ground control

In order to determine the parameters of plane transformations (section 12.3) or the unknowns in equations (12.3)-(12.13), it is necessary to use ground control points (GCPs). A GCP must be recognisable in both data sets and must have known coordinates in a suitable ground reference system. The number of points required depends on the method used and is discussed in the relevant sections. GCP coordinates may be obtained directly by survey measurement or from a map. In either case the coordinates will be given in a reference system; this may be geodetic (latitude and longitude) or cartesian (X, Y, Z) and it may be global based on the centre of the Earth, or local based on a national or regional projection. It is always important that all coordinates are given in the same system.
Direct survey measurements may come from a survey based on a local or national coordinate system, which in Great Britain will be the Ordnance Survey National Grid, or they may come from the GPS.

Maps should be used with caution for determining GCPs. Map data at scales of 1:25000 and smaller is notoriously unreliable because of the many errors that may have accumulated in the map production and map digitising process. These include:

• survey errors (in some parts of the world published maps may be based on topographical sketches)
• drafting errors
• generalisation
• paper distortion
• errors in digitising a paper document for use in the validation process.

It is always necessary to take the accuracy of the data into account when using ground control; it is particularly important when using map data.

12.3 Plane transformations

12.3.1 Applications and terminology

Distortions may be present in satellite images. If the relief of the ground is high or oblique images are being used, techniques for correction must take into account the relief of the ground. For areas of low relief or for low resolution sensors, simpler methods may be used. Similarly, any two digitised maps may be simply transformed onto one another if the internal scale factor distortions are low. This section deals with the correction of two-dimensional data.

The correction of data in two dimensions may be approached by applying a transformation to the data and resampling it to produce a corrected image which gives a best fit to the ground control used. The transformation may be based on a theoretical consideration of the errors involved or selected on empirical grounds. The latter is the method most commonly used to produce an image which is corrected to fit to a given map projection.

A number of transformations are widely used and a brief description of the common ones is given here.
The terminology used here will be that (x, y) is the coordinate system before transformation, and (X, Y) is the coordinate system required after transformation. For remote sensing applications, (x, y) represents an image coordinate system; alternatively, it might represent GPS data in WGS84 that is to be transformed into a local system. The coordinates (X, Y) are referred to as a ground coordinate or local system.

12.3.2 Two-dimensional similarity transformation (four parameters)

This transformation is used to relate any two-dimensional rectangular coordinate system to any other two-dimensional rectangular coordinate system. It preserves the internal geometry of the transformed system, so it is ideal for comparing the geometry of any two systems simply by determining the residuals and the root mean square errors after transformation. For a given control point, this transformation is defined by two equations:

X = ax − by + c     (12.3)
Y = bx + ay + d     (12.4)

Effectively, this transformation is performed by applying a scale factor, m, where

m = √(a² + b²)     (12.5)

a rotation angle, α, where

tan α = b / a     (12.6)

and two translations (c and d).

The operations applied by this transformation (scale, rotation, and translations) are the equivalent of projecting an image by any ordinary photographic enlarger onto a map. It is used to give initial coordinates to the centre of a frame.

Similarly, the parameters of the transformation may be derived in a geographic information system such as Arc/Info if common points can be identified between two data sets.
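A minimal sketch of deriving the four parameters of equations (12.3) and (12.4) by least squares from common points, and then recovering the scale and rotation via equations (12.5) and (12.6); the point coordinates are invented for illustration:

```python
import math

def fit_similarity(src, dst):
    """Least-squares fit of X = a*x - b*y + c, Y = b*x + a*y + d
    from lists of (x, y) and (X, Y) pairs (two or more points)."""
    # Build the normal equations (A^T A) p = A^T l for p = (a, b, c, d).
    ata = [[0.0] * 4 for _ in range(4)]
    atl = [0.0] * 4
    for (x, y), (X, Y) in zip(src, dst):
        for row, rhs in (((x, -y, 1.0, 0.0), X), ((y, x, 0.0, 1.0), Y)):
            for i in range(4):
                atl[i] += row[i] * rhs
                for j in range(4):
                    ata[i][j] += row[i] * row[j]
    # Solve the 4x4 system by Gauss-Jordan elimination with pivoting.
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atl[col], atl[piv] = atl[piv], atl[col]
        for r in range(4):
            if r != col:
                f = ata[r][col] / ata[col][col]
                ata[r] = [v - f * w for v, w in zip(ata[r], ata[col])]
                atl[r] -= f * atl[col]
    a, b, c, d = (atl[i] / ata[i][i] for i in range(4))
    return a, b, c, d

# Synthetic example: scale 2, rotation 30 degrees, shift (10, 20).
m, ang = 2.0, math.radians(30.0)
a0, b0 = m * math.cos(ang), m * math.sin(ang)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(a0 * x - b0 * y + 10.0, b0 * x + a0 * y + 20.0) for x, y in src]
a, b, c, d = fit_similarity(src, dst)
print(round(math.hypot(a, b), 3), round(math.degrees(math.atan2(b, a)), 1))
# scale and rotation recovered: 2.0 30.0
```

As the text notes, using more than the minimum two points is good practice: the residuals after the fit then indicate how well the two geometries actually agree.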
The parameters so derived are then applied to all the other points in the data set to be transformed.

12.3.3 Two-dimensional affine transformation (six parameters)

The mathematical relationship for an affine transformation may be expressed by the following equations:

X = a₀ + a₁x + a₂y     (12.7)
Y = b₀ + b₁x + b₂y     (12.8)

An affine transformation enables an adjustment to be applied independently in each direction, and is thus able to correct effects that have actual physical causes. Thus, for remotely sensed scanner images, it corrects first-order distortions such as affinity due to non-orthogonality and scale difference between scan and along-track directions which may be caused by Earth rotation and other geometric distortions. For digitised maps it is able to correct for effects such as map shrinkage of a different size in each direction.

At the same time, this kind of transformation may be able to correct for some of the error caused not by physical effects but by differences of datum and map projection between an image and a map or two different maps. It should be noted, however, that it is not possible to isolate the magnitudes of physical and non-physical causes unless calculations such as that outlined in section 12.4 have been carried out.

This transformation applies scale factors in the x direction (or scan direction for a satellite image) of

m_x = √(a₁² + b₁²)     (12.9)

and in the y direction (or flight direction for an image) of

m_y = √(a₂² + b₂²)     (12.10)

as well as a factor of affinity:

F_a = m_x / m_y     (12.11)

These may be determined by using ground control points or points common to both data sets. Three ground control points, at least, are required.

12.3.4 Second-order polynomials (twelve parameters)

Polynomials in the form:

X = a₀ + a₁x + a₂y + a₃x² + a₄y² + a₅xy     (12.12)
Y = b₀ + b₁x + b₂y + b₃x² + b₄y² + b₅xy     (12.13)

are used for correction of scanner data.
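Applying equations (12.12) and (12.13) once the twelve coefficients are known is straightforward. A sketch with made-up coefficients (in practice they would come from a least-squares fit to at least six GCPs, as discussed below):

```python
def poly2(coeffs_x, coeffs_y, x, y):
    """Evaluate the second-order polynomial transformation of
    equations (12.12)-(12.13): terms 1, x, y, x^2, y^2, xy."""
    terms = (1.0, x, y, x * x, y * y, x * y)
    X = sum(a * t for a, t in zip(coeffs_x, terms))
    Y = sum(b * t for b, t in zip(coeffs_y, terms))
    return X, Y

# Made-up coefficients: mostly a shift and a scale of 0.5, with a
# small second-order warp in each direction.
ax = (100.0, 0.5, 0.0, 1e-6, 0.0, 0.0)
by = (200.0, 0.0, 0.5, 0.0, 1e-6, 0.0)
print(tuple(round(v, 6) for v in poly2(ax, by, 1000.0, 2000.0)))  # (601.0, 1204.0)
```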
If polynomials are used, great care must be taken to ensure that a sufficient number of control points are available and that they are distributed over the whole area to be transformed, as these transformations can behave in an extremely unstable manner.

A minimum of six GCPs is required to determine the transformation parameters, although it is desirable to have more to build in checks. In addition to first-order distortions, polynomials correct second-order distortions in satellite images caused by pitch and roll, sub-satellite track curvature and scan line convergence due to Earth rotation and map projection. They may also correct some of the distortions related to the attitude variations along the flight path. Additional terms may be added to equations (12.12) and (12.13) to correct for higher-order distortions; the need for care in the use of control points is greater for higher orders.

This type of transformation is also used in geographic information systems such as Arc/Info and ArcView to relate data sets that do not match after simple or affine transformations have been carried out. The use of the equations is of course invisible to the user; it is simply necessary to identify the minimum number of common points. Again, care must be taken to use points that are well distributed over the area concerned. Problems can arise if two data sets of very different geometry are being linked in this manner, as the equations can become unstable. In such situations, recourse may be had initially to the techniques outlined in section 12.4 as a first step.

This section has largely dealt with the use of two-dimensional transformations in the forward sense: that is, equations have been quoted that will permit the transformation to be carried out if the parameters are known.
The use of the reverse sense has been implied: that is, the parameters must first be determined by least squares from a knowledge of the coordinates in both systems of a set of common points. The mathematical treatment of this subject is dealt with in Appendix section A3.

12.4 Unknown projections: measuring from maps

If the parameters of a projection are known, and the projection type is fully supported by the software package being used, there is not going to be a problem in transforming between projections. Likewise, it has been shown in the preceding section that, even if the parameters of the projection are not known, a simple transformation may be carried out if there is no significant change of shape between the 'source' and the 'target' data sets.

There remain some situations, however, in which the two data sets are not sufficiently similar in shape, and the parameters of the projection are not known. In such cases it may be possible to determine the parameters of the projection by an inspection of the map itself. In the argument that follows it will generally be assumed that the sphere, rather than the spheroid, is an adequate model for representing the coordinate system. If the spheroid was actually used in determining the projection coordinates this may lead to errors, but in most cases they will be errors that can be dealt with through a simple affine transformation using common points.

The approach of this section is to 'unravel' the most serious distortions caused by the projection, and to bring the coordinate system at least approximately into line with the geometry of the 'target' data set. The first requirement is that the graticule should be visible in the map, either explicitly printed or identifiable by tick marks around the edge and in the middle. Without the graticule the map is essentially an abstract concept, existing purely in projection space.
On most large scale maps the grid will be printed. If the grid is not shown, this is not critical; a user could create a grid at an appropriate scale.

The first step in treating a problem of this kind is to identify the type of projection by examining the graticule. Clues may be obtained by reading the sections on individual projection types, but some of the main features are listed below.

• Parallel meridians aligned with grid north: these indicate that the map is in a cylindrical projection. Within this category, further inspection may reveal the following sub-categories:

- evenly spaced parallels of latitude: a cylindrical equidistant projection, as there is no change in scale along the meridian
- parallel spacing decreases towards the poles: an equal area projection (as the meridional scale factor is decreasing to take account of the increased scale factors along parallels of latitude as the poles are approached)
- parallel spacing increases towards the poles: the Mercator projection (by the converse argument to the one above, indicating conformality).

• Straight meridians converging, though not at a true angle; circular parallels: that is to say, the angle between any two meridians is not equal to (in fact is less than) the angular difference between their longitudes. This indicates that the projection is conic. Within this category, the following sub-categories apply:

- evenly spaced parallels: a conic equidistant projection (again, a constant scale factor along the meridians indicates an equidistant projection)
- parallel spacing decreases away from the standard parallel: a conic equal area projection. (If there are two standard parallels, the relevant fact is whether the spacing decreases away from the mean of the two.
See below for an indication of how to determine the value of the standard parallel without prior knowledge.)
- parallel spacing increases away from the (mean of the) standard parallels: the Lambert conformal conic projection.

• Straight meridians converging at a true angle between meridians; circular parallels: that is, the angle between any two meridians is the same as their angular difference in longitude. This indicates a polar projection (azimuthal with the centre of the projection at one of the poles). Similar sub-categories as before can be identified:

- evenly spaced parallels: a polar equidistant projection
- parallel spacing decreases away from the pole: an azimuthal equal area projection
- parallel spacing increases away from the poles: a polar stereographic projection.

• Curved meridians: this is more of a problem, as several types of projection fall into this category, including all oblique forms of azimuthal projection (except gnomonic) as well as the transverse Mercator and many of the more esoteric projections. If the meridians are curved but all graticule intersections are right angles, it is more likely to be a conformal projection. Over a small area it may be difficult to determine whether, for example, the meridians are straight lines or are curved. For a typical 1:50 000 map sheet created on a transverse Mercator projection, for example, the curvature of a meridian amounts to the equivalent of around 1 mm off a straight line.
However, large scale maps of a small area are less likely to present a problem, because they are more susceptible to the kind of treatment outlined in the previous sections of this chapter.

Once the projection has been identified, the next stage is to determine the parameters that have been used, which can be done by taking measurements from the map. The key point to note here is that although it is not possible to determine all the parameters of the projection by taking measurements off the map, the ones that it is not possible to find are not usually necessary. For example, if no grid was shown on the map and the user in effect creates an arbitrary one, it is not possible to determine the correct false eastings and northings of the origin. On the other hand, the 'substitutes' that can be found are perfectly adequate if they are used consistently.

The most important parameters to be found are the ones that change the shape of the projection. These are detailed below for the main categories.

• Normal cylindrical projections (contact at the equator): As a general rule, there are no parameters that change the shape, as the projection is entirely defined once it is identified as conformal, equal area or equidistant. The exception to the rule is when a differential scaling has been applied to the meridians and parallels, as in the case of the Peters projection (section 8.2).
This can be identified by determining the scale factor in the two directions separately, as explained below.

• Polar projections: Again, the shape of the projection is entirely defined once it has been identified as stereographic, equidistant or equal area.

• Conic projections: The shape of the projection is affected by the choice of standard parallel (or parallels), and it is necessary to determine the value(s) used. Choose two widely spaced meridians that are apparent on the projection, which have a difference in longitude of Δλ, and measure the angle between them on the map, θ. Then by a rearrangement of equation (10.1) the standard parallel, α, is given as

α = sin⁻¹(θ / Δλ)    (12.14)

If two standard parallels have been used, this value will be the average of the two. For conformal projections, any two that have a mean value α may be chosen without affecting the shape.

This determines the origin of latitude. The origin of longitude is whichever meridian is aligned with grid north (whether a pre-existing grid or one that has been created for the purpose).

• Transverse Mercator projection: The shape of the projection is determined by the choice of central meridian. This may be found by measuring the convergence, γ, or the angle between a meridian and grid north. By measuring this angle at a point on the map whose latitude φ and longitude λ are known, the change in longitude Δλ between this point and the central meridian can be found by rearranging equation (8.17) as

Δλ = γ / sin φ    (12.15)

This gives the origin of longitude. The origin of latitude does not affect the shape, and therefore cannot be found. An arbitrary value can therefore be assigned.

If required, an overall rescaling can be found by comparing a distance on the map (for simplicity, one along a meridian) with the distance on the spherical model.
Along a meridian, the distance on the sphere is

Dsphere = R Δφ (π / 180)    (12.16)

The latitude difference, Δφ, is expressed in degrees; R is the Earth radius (a suitable value being 6371 km). This should be scaled by the appropriate map scale. The overall rescaling is then

k0 = Dprojection / Dsphere    (12.17)

This step is not actually necessary if the final step is for the map to be transformed into a new data set using common points, and the scale factor determined as part of this process. Finally, the false eastings and northings of the origin may be measured directly from the grid. An example of this procedure is given in the case study in section 13.5.

13 Case studies

13.1 Transformation of GPS data into a local datum

For this case study, data has been obtained by differential phase GPS observations, and will be transformed into the local datum using common points. Figure 13.1 shows the configuration of observations that were made, although the vectors have not been shown to avoid cluttering the diagram. A network adjustment of the observations indicates that the quality of the vectors is on average around 1.5 cm in plan and 2 cm in height. A reasonable spread of control points has been achieved: it was not possible to surround the area completely, as the south-east section of the map is in the sea.

Figure 13.1 Configuration of observed points (Isle of Wight GPS 1998).

The control points used are Ordnance Survey triangulation pillars. Several bench marks have also been included in the survey.

The initial transformation is a seven-parameter similarity transformation using the four control points shown in the diagram: Arreton, Broomlands, Dunnose, and Queen's Bower.
The transformation parameters that result from this are shown in Table 13.1.

Table 13.1 Transformation parameters for seven-parameter similarity transformation

ΔX            -531.928 m
ΔY             167.961 m
ΔZ            -393.725 m
Rotation X    -1.166"
Rotation Y    -5.115"
Rotation Z     1.382"
Scale factor  +13.05 ppm

Note that:

• The sizes of the translations are broadly in line with the transformations for the whole of the UK shown in section 4.2.2, allowing for a local distortion of the datum and the inaccuracy of the absolute GPS coordinates in WGS84.
• The rotations are not extreme.
• The scale factor (13 ppm) represents around 15 cm over this area, which is large, but not entirely unexpected for this datum.

The next step is to check the residuals of the transformation: that is, the difference between the original coordinates and the ones derived from the transformation. These are shown in Table 13.2.

Table 13.2 Residuals of the similarity transformation (m)

                  φ         λ         h
Arreton        -0.0077   -0.0154    0.0017
Broomlands     -0.0213    0.0098    0.0014
Dunnose        -0.0079    0.0235    0.0027
Queen's Bower   0.0368   -0.0179   -0.0057

As can be seen, these are generally below 2 cm, with the exception of the latitude residual at Queen's Bower, which is 3.7 cm. Mostly, these residuals are within the accuracy of the GPS survey. Before proceeding further, however, it is worth examining the extent to which an error in the control points would be noticeable by an examination of the residuals. To do this, an error is introduced into the eastings coordinate of Broomlands, increasing it by 20 cm. The residuals that result from this transformation are shown in Table 13.3.

The residuals are now rather larger in plan, typically around 5 cm, and this would perhaps be sufficient to alert the user.
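The comparison between the two solutions can be made concrete by taking the largest absolute plan residual from each; a small sketch using the values of Tables 13.2 and 13.3 (the function name is illustrative):

```python
# Plan residuals (phi, lambda), in metres, from Tables 13.2 and 13.3,
# for Arreton, Broomlands, Dunnose and Queen's Bower respectively.
correct = [(-0.0077, -0.0154), (-0.0213, 0.0098),
           (-0.0079, 0.0235), (0.0368, -0.0179)]
erroneous = [(-0.0589, -0.0471), (-0.0219, 0.0614),
             (0.0430, 0.0386), (0.0377, -0.0530)]

def max_plan_residual(residuals):
    """Largest absolute plan residual over all stations."""
    return max(abs(v) for pair in residuals for v in pair)

r_correct = max_plan_residual(correct)      # 0.0368 m (Queen's Bower)
r_erroneous = max_plan_residual(erroneous)  # 0.0614 m
```

The 20 cm blunder roughly doubles the largest plan residual, but, as the text notes, it is spread over all four stations rather than pointing at Broomlands.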
It is clear, however, that there is still not sufficient redundancy to isolate the problem to the coordinates of Broomlands.

Table 13.3 Residuals using erroneous coordinate set (m)

                  φ         λ         h
Arreton        -0.0589   -0.0471    0.0019
Broomlands     -0.0219    0.0614    0.0014
Dunnose         0.0430    0.0386    0.0016
Queen's Bower   0.0377   -0.0530   -0.0049

The necessity for redundancy is further illustrated by examining a situation with only three control points: in this case, eliminating Queen's Bower. The residuals for the transformation using the correct coordinates are shown in Table 13.4.

Table 13.4 Residuals using correct coordinate set and only three points (m)

              φ         λ         h
Arreton     0.0064   -0.0198    0.0000
Broomlands -0.0125    0.0055    0.0002
Dunnose     0.0061    0.0143   -0.0002

The residuals here are smaller in plan, and much smaller in height. Clearly, as neither the GPS data nor the control point coordinates have changed, this is a function of the reduced redundancy. Using the incorrect data set, the residuals shown in Table 13.5 are obtained.

Table 13.5 Residuals using incorrect coordinate set and only three points (m)

              φ         λ         h
Arreton     0.0323   -0.0646   -0.0261
Broomlands  0.0110    0.0485   -0.0083
Dunnose    -0.0433    0.0161    0.0345

The residuals are larger, though not immediately registering a problem. The best indication that something is amiss with this data comes from the fact that the scale factor has now increased to around 24 ppm, although there is still no way of locating the source of the problem. The moral here is to use as many control points as possible, and to check whether the derived parameters are sensible numbers.

Returning to the original set of transformation parameters derived from the four points and the correct data, these are now applied to the entire data set to determine coordinates in the local datum.
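Applying a seven-parameter similarity transformation such as that of Table 13.1 can be sketched as follows. The small-angle rotation matrix below uses one common convention; sign conventions differ between software packages, so this is a sketch rather than a definitive implementation, and the sample point is purely illustrative:

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)  # arcseconds to radians

def helmert7(xyz, t, rx, ry, rz, ppm):
    """Apply a seven-parameter similarity (Helmert) transformation to
    cartesian coordinates.  xyz: (n, 3) array in metres; t: translations
    (m); rx, ry, rz: rotations (arcseconds); ppm: scale change (ppm).
    Small-angle rotation matrix; the sign convention is an assumption."""
    rx, ry, rz = rx * ARCSEC, ry * ARCSEC, rz * ARCSEC
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    return (1.0 + ppm * 1e-6) * (np.asarray(xyz, float) @ R.T) \
        + np.asarray(t, float)

# Parameters of Table 13.1; the point below is illustrative only,
# not one of the stations of the case study.
params = dict(t=(-531.928, 167.961, -393.725),
              rx=-1.166, ry=-5.115, rz=1.382, ppm=13.05)
pt = np.array([[4043000.0, -282000.0, 4907000.0]])
local = helmert7(pt, **params)
```

With zero translations, rotations and scale the transformation reduces to the identity, which provides a convenient sanity check.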
In this particular case, we have the advantage of having observed several bench marks in the area, and so it is now possible to compare the heights derived by the transformation with the known values. The comparison is shown in Fig. 13.2, with the residuals representing the known heights minus the transformed ones.

Figure 13.2 Residual geoid values after transformation (Isle of Wight GPS 1998).

The residual values of the geoid (which are not actual geoid values, but what remains after the shifts and rotations) do seem to exhibit a pattern, as they are almost all negative and increasing from the north-east to the south-west before coming back to fit again at Dunnose. In fact, the sizes of the residuals are not entirely inconsistent with errors in the GPS data, but the supposition that they are caused by geoid effects is supported by the fact that they would fit in with a geoid being 'drawn up' under the high ground to the south-west. A random pattern of residuals over this size of area would almost certainly indicate data errors.

A possible next step is therefore to transform the coordinates using an interpolation algorithm, as described in section 6.5. This essentially bypasses the classic approach of first converting the grid coordinates to cartesian via the procedures outlined in previous chapters, and instead stretches the GPS data to fit the grid coordinates and the heights as they are. This was carried out on the data set used in this example, and resulted in a mean scale factor of -339 ppm.
The primary reason for such a large scale factor adjustment is the fact that no correction has been made for the projection. Therefore in a transformation of this type the real differences of scale between the two data sets (as found in the similarity transformation) have been masked by the much larger effect of the projection scale factor. Although this type of transformation can be used if no information is available on the projection, it does illustrate the advisability of first carrying out a similarity transformation if at all possible.

In most commercial packages, it is possible to choose the amount of distortion that is applied to the GPS data to make it fit onto the local control. In this instance, an average amount of distortion was chosen. With only four control points spread out widely across the area, it was not hard for the program to adjust the GPS data in two dimensions to fit the local control almost perfectly, resulting in very small residuals. In the height direction, the residuals are a function of the distance between bench marks and the amount of distortion required to fit the GPS data onto the given heights. The largest height residuals are at those points where there is a large jump in the residual geoid distortion compared with other points nearby.

The parameters derived from this transformation were applied to the whole data set, and an alternative coordinate set derived for all points. As an illustration of the effect of the different transformations, the coordinates of one point in the centre of the network are shown in Table 13.6 for each transformation.

Table 13.6 Coordinates of Sandown Farm from the two transformations (m)

                            Eastings      Northings    Height
Similarity transformation   459467.603    85163.762    18.294
Interpolation               459467.585    85163.821    18.371
Difference                       0.018       -0.059    -0.077

Which is the correct answer?
This question is almost impossible to answer, and this example simply illustrates the difficulty of finding coordinates in a distorted local datum to any better than a couple of centimetres. Instead, another pair of questions is posed:

• How important is it for the GPS results to be in sympathy with local mapping?
• How important is it for the GPS results to be easily integrated with new terrestrial observations?

In most cases the answer to the first question is 'just to a few centimetres', and the answer to the second question is 'quite important'. That is, an EDM traverse run between two of the new control points provided by GPS should not end up with large misclosures, and neither should runs of levelling.

The most appropriate solution in many cases is a hybrid of the similarity transformation and the interpolation, using the former for the plan and the latter for the height. This has the advantage of making the minimum change in the plan shape, thus allowing the new GPS control points to fit in well with new traverses. For height, the interpolation usually gives a better approximation of the distortions of the geoid, and is therefore more likely to fit in with new levelling. Some commercial packages allow this hybrid combination to be performed as a single operation.

13.2 A projection for Australia

13.2.1 The problem

Base maps of Australia are produced on a different projection or series of projections for each individual state. This is due to the fact that the original survey work had to be computed on projections that kept the scale factor distortion to a very low level. A company has acquired the data and wishes to put it all onto one projection, in order to avoid having discontinuities in the data. Area is not important, as this will be an attribute on the database, but features should 'look right'; that is, there should not be any great distortion of shape.
Design a suitable projection.
13.2.2 A solution

If features are to 'look right', a conformal projection is called for in order to minimise the distortion of shape. In selecting a suitable developable surface, it should also be noted that the extent of the region to be mapped ranges in latitude from 45° S to 12° S, and in longitude from 110° E to 160° E. The region is thus a predominantly east-west band in mid-latitudes, which suggests the use of a conic projection. As it has already been decided that a conformal projection is required, the Lambert conformal conic projection is selected.

In selecting the required parameters, it should be remembered (see section 10.4) that the scale factor distortion is minimised when the standard parallels are selected as being one-sixth of the range of latitude in from the extremes of the projection. Since the range of latitude is 33° (45° S to 12° S), this suggests 17° S and 39° S as suitable standard parallels. A plot of the scale factor as a function of latitude is shown in Fig.
13.3, on which the latitudes of the main population centres have been marked.

Figure 13.3 Scale factor for LCC with standard parallels at 17° S and 39° S.

It may be considered more important to minimise scale factor distortion in the area of the main cities, an idea which is suggested by the fact that they happen to lie in a fairly narrow band of latitude. This could be done by selecting standard parallels at 25° S and 40° S, but this would of course lead to more extreme distortion at the edge of the projection.

With the parallels as they are, the minimum value of scale factor is 0.98 and the maximum is 1.03, which means a maximum distortion of 3%. Any areas measured from the map will be distorted by the square of this value, or 6% at the most.

The central meridian, or the origin of longitude, is selected as the mean of the extremes of longitude. This is 135° E. The maximum longitude difference from the central meridian is then 25°, which gives an indication of the convergence. From section 10.1,

γ = Δλ sin α    (13.1)

where in this case α is the mean value of the two standard parallels, or 28° S. The maximum value of convergence is then around 12°, as shown in Fig. 13.4.

Figure 13.4 Appearance of the meridians on the selected LCC projection.

The final parameters to be selected are the origin of latitude and the values of the false coordinates. The choice of origin is not a critical decision, as it in no way affects the shape of the projection: for convenience φ0 is then selected as 45° S, which gives all points in the region a positive value of northings. The false northings can then be selected as zero. Eastings are calculated as positive and negative quantities from the central meridian.
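The convergence estimate of equation (13.1) can be checked numerically; a minimal sketch using the values quoted above (the function name is illustrative):

```python
import math

def conic_convergence(dlon_deg, parallels_deg):
    """Convergence on a conic projection, equation (13.1):
    gamma = dlon * sin(alpha), with alpha the (mean) standard
    parallel.  All angles in degrees."""
    alpha = sum(parallels_deg) / len(parallels_deg)
    return dlon_deg * math.sin(math.radians(alpha))

# Maximum convergence for the Australian projection: 25 degrees of
# longitude from the central meridian, standard parallels 17 S and
# 39 S (mean 28 degrees).
gamma_max = conic_convergence(25.0, [17.0, 39.0])  # about 11.7 degrees
```

The result, a little under 12°, matches the figure quoted in the text.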
A false eastings of +2500000 m would be sufficient to make all values positive.

13.3 Establishment of maritime boundaries on a projection

This case study will examine the problems that are encountered when using a projection to compute distances between points. The particular application that will be used is the establishment of maritime boundaries between states. A full description of the basis on which these boundaries are established is given in Beazley (1994). For the purposes of this example, it is sufficient to state that the main principle is the determination of the coordinates of points that are equidistant from the coastlines of different states.

This computation can be carried out exactly by using ellipsoidal coordinates. The algorithms used are complex, however, and not all GIS packages support their use. The process is made considerably easier if it is possible to use projection coordinates with no significant loss of accuracy. To begin with, a comment should be made about how not to do this. Some GIS packages allow the data to be displayed in what are termed 'geographical coordinates'. In reality this is nonsensical, as the very fact that the data are displayed on a flat screen implies that they are in a projection of some sort. This projection in fact appears as one in which the meridians and parallels are straight lines with equal spacing, which was shown in section 8.1 to be the cylindrical equidistant projection. In this case the scale factor is true along the meridians and equal to sec φ along the parallels. In a region such as the North Sea, at an average latitude of around 55° N, this translates as scale factors of 1.0 north-south and 1.74 east-west: this is certain to lead to some very wrong results.

A more appropriate approach is to design a projection specifically for use on this problem.
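The distortion of the naive 'geographical coordinates' display discussed above can be quantified with a short sketch (the function name is illustrative):

```python
import math

def plate_carree_scales(lat_deg):
    """Scale factors of the 'geographical coordinates' display (the
    cylindrical equidistant projection): true along the meridians,
    sec(latitude) along the parallels."""
    return 1.0, 1.0 / math.cos(math.radians(lat_deg))

# North Sea, mean latitude about 55 N:
h, k = plate_carree_scales(55.0)  # 1.0 north-south, about 1.74 east-west
```

An east-west scale factor of 1.74 against a north-south factor of 1.0 makes clear why distances measured in such a display are meaningless for boundary work.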
Following similar arguments to those in section 13.2, a Lambert conformal conic projection is chosen with standard parallels at 51.5° N and 57.5° N. The scale factor for this projection varies within the area of interest between a minimum of 0.9986 and a maximum of 1.0017, as shown in Fig. 13.5. In presentational terms this is a very small variation, but the requirements for computation are rather more exacting.

Figure 13.5 Scale factor in chosen projection (LCC).

Figure 13.6 shows a plot of the North Sea in the chosen projection, with an example point P that is proposed as being equidistant between A on the coastline of the UK and B on the coast of Norway. The scale factor at P in the direction of A is, by the definition of a conformal projection, equal to the scale factor in the direction of B. Unfortunately, this is not the case along the whole length of each line, as the scale factor will vary with latitude.

Between P and A the latitude changes from 56.5° to 54.5°, and therefore the scale factor variation is between 0.9992 and 0.9986. As a sufficient approximation, the mean scale factor over the length of the line can be taken as the mean of these two values, or 0.9989. Between P and B the latitude change is from 56.5° to 58.5°, with the scale factor changing from 0.9992 to 1.0011. By a similar argument, the mean scale factor of the line is taken as 1.0002.

The approximate length of the lines PA and PB is 300 km.
If this is the apparent length of the lines on the projection, Table 13.7 shows how the actual lengths of the lines differ.

Figure 13.6 The North Sea in the chosen LCC projection.

Table 13.7 Actual and projected distances for sample point

Line   Projection distance (S km)   Scale factor (k)   Actual distance (S/k km)
PA     300                          0.9989             300.330
PB     300                          1.0002             299.940

Thus, two lines that appear equidistant on the projection actually differ by around 390 m. Although this level of accuracy may be sufficient for using the functionality of a GIS package to determine the most appropriate key points on the coast, it is unlikely to be sufficient for use in the final computation. In this case, it would be necessary to carry out the computations using ellipsoidal formulae. For smaller areas, it is possible that a projection could be defined with sufficiently negligible variations to make the use of ellipsoidal formulae unnecessary.

13.4 Two-dimensional transformation of satellite imagery

The purpose of this case study is to examine the compatibility of two sets of data, and to see the extent to which the distortions due to the use of projections affect the accuracy that may be achieved through simple transformations.

In this example, a satellite image covering an area of around 60 km x 60 km is available for an area in the west of Wales. Several control points are visible on the image and can also be identified on a map of the area, and it is planned to carry out a two-dimensional transformation.

To examine the geometrical compatibility of the two data sets (the image and the map), the reasoning of section 12.1 is used. In that section, the concept of the scale factor ratio, r, was introduced. In this case, we are not really concerned with the projection distortions in the 'source' data set as, although a satellite image can be conceived of as being in a projection of a special kind,
it covers such a small area that this effect is negligible. There will be distortions, but these will not be due to the projection effect. We are therefore really concerned only with the variations of scale factor in the underlying base map projection. Since the scale factor of the image is effectively 1, the scale factor of the map, k, will play the same role as the scale factor ratio, r.

The base map being used is an Ordnance Survey 1:25 000 map, and is therefore in the British National Grid, which is a transverse Mercator projection with the parameters given in section 8.4. This is illustrated in Fig. 13.7.

Figure 13.7 Position of the image in the projection.

The area concerned is at a latitude of 52° N and a longitude of 4° W, so the minimum value of the angular distance from the central meridian (θmin) is around 1.23°. The other edge of the area concerned is around 60 km further away from the central meridian, or 0.72°, so θmax is about 1.95°. These figures could be worked out without resorting to spherical trigonometry, by using the distances in kilometres and dividing by 6400 to express the solution in radians. The scale factor in this area therefore varies between a minimum of 0.99983 and a maximum of 1.00008, which have been computed using the secants of the maximum and minimum values of θ and an overall scaling of 0.9996. With a good spread of control points, applying a similarity transformation to the image would be the equivalent of using a mean overall scaling, in this case 0.99995. Using the expression given in section 12.1 for the errors that are likely to arise from the application of a similarity transformation gives

ε = 60000 (1.00008 / 0.99995 - 1)    (13.2)

which amounts to 7.4 m.

This is therefore not likely to be a problem in this situation, particularly if the pixel size is 10 m or more.
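The error estimate of equation (13.2) can be sketched numerically. Note that with the rounded scale factors quoted above the result comes out nearer 7.8 m; the quoted 7.4 m follows from the unrounded values. The function name is illustrative:

```python
def similarity_error(extent_m, k_extreme, k_mean):
    """Error estimate of section 12.1 for a similarity transformation
    that applies a single mean scale factor over the whole area:
    error = extent * (k_extreme / k_mean - 1)."""
    return extent_m * (k_extreme / k_mean - 1.0)

# Satellite image case, with the rounded scale factors quoted above
# (about 7.8 m; the unrounded values give the 7.4 m of the text):
e_image = similarity_error(60000.0, 1.00008, 0.99995)
```

The same function can be reused for any extent and pair of scale factors, which makes it easy to test how the error grows with the size of the area being transformed.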
In addition, if a similarity transformation does not lead to problems of geometrical incompatibility between the two data sets, there will certainly not be a problem when using the more complex transformations with multiple control points to overcome the other distortions that will be present in the image. In fact, this result could be said to be indicative of the general situation with satellite images: when transforming onto conformal projections over areas of a few tens of kilometres, and with pixel sizes of a few metres, there are unlikely to be problems resulting from the projection.

13.5 Two-dimensional transformation of GPS data

Some GPS software packages permit the transformation of data into a local reference system by a two-dimensional transformation, without having to enter the parameters of the projection used. This may be because they are not known, or it may just be for the convenience of the operator. It is not a procedure without risks, however, and this case study will explore some of the problems. In theoretical terms, this is exactly the same procedure as that carried out in the case study in section 13.4, except that the accuracy requirements are much higher.

Let us consider initially a survey that covers an area 10 km across, with the local control in a transverse Mercator projection, and the area under consideration a distance of 100 km from the central meridian. This is depicted in Fig. 13.8, with a set of four control points spread out around the area.

Figure 13.8 Position of survey with respect to the projection.

The extremes of the scale factor can then be computed approximately as

k_min = 0.9996 sec(100/6400)

k_max = 0.9996 sec(110/6400)

or 0.999722 and 0.999748 respectively. The mean scale factor that would be applied in a similarity transformation is 0.999735.
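The arithmetic here is easily reproduced. A minimal Python sketch, assuming the text's spherical shortcut (radius 6400 km) and illustrative function names:

```python
import math

def tm_scale_factor(dist_km, k0=0.9996, radius_km=6400.0):
    # Transverse Mercator point scale factor at a given distance from
    # the central meridian: k = k0 * sec(d / R), the approximation used
    # in the text (distance in km divided by 6400 gives radians).
    return k0 / math.cos(dist_km / radius_km)

def similarity_error(extent_m, k_extreme, k_mean):
    # Section 12.1 error estimate: the distortion left over after a
    # similarity transformation has absorbed the mean scale factor.
    return extent_m * (k_extreme / k_mean - 1.0)

k_min = tm_scale_factor(100.0)   # near edge, 100 km from central meridian
k_max = tm_scale_factor(110.0)   # far edge, 110 km from central meridian
k_mean = 0.5 * (k_min + k_max)

print(round(k_min, 6), round(k_max, 6))  # 0.999722 0.999748
# About 0.13 m; the 0.15 m quoted in the text comes from using the
# more heavily rounded value 0.99975 for the extreme scale factor.
print(round(similarity_error(10000.0, k_max, k_mean), 2))
```

The same two functions reproduce the warping case: with scale factors of 0.999735 and a mean of 0.999729 over a 5 km control spacing, the residual error falls to about 3 cm.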
Then, by the argument outlined in section 12.1, the errors that would result from a similarity transformation are

ε = 10000 (0.99975/0.999735 − 1) = 0.15 m

So, at 15 cm, this is really an unacceptable error when compared with the accuracy that can be obtained over this sort of distance with GPS.

Alternatively, if more control points were available across the region, a more complex warping of the GPS data could be carried out. Figure 13.9 shows a situation with an average spacing of 5 km between control points, a situation that is unlikely to be improved upon in practice.

Figure 13.9 Additional control points used for warping transformation.

The scale factor between adjacent control points now ranges from 0.999722 to 0.999735, with a mean of 0.999729. Applying equation (12.2) once again, this time with a distance of 5 km between points, gives

ε = 5000 (0.999735/0.999729 − 1) = 0.03 m

In other words, an error of 3 cm, which is again unlikely to be acceptable.

In conclusion, this type of transformation should be used with caution on surveys covering an area more than a couple of kilometres across. It is really suitable only for fitting GPS data onto a local site grid that is unrelated to a national or state coordinate system.

13.6 Determining the parameters of an unknown projection

This case study explores the situation where data from two different sources is to be combined, with at least one of the sources being a map with unknown projection parameters.

Let us assume, then, that data for the coastal areas around the British Isles is available, and is stored in the Mercator projection with known parameters. Data for the land areas has been digitised from a map that appears to be conic, but for which no other projection information is available. The size of the area used for this illustration is large, but this will assist the reader in visualising the processes at work.
The appearance of the two data sets, as seen in Figs 13.10 and 13.11, is very different.

Initially, a similarity transformation is applied to the second data set, by matching points that can be identified in both maps. The results of this are shown in Fig. 13.12. This is so far a poor attempt to combine the two, but a further improvement is next made by identifying many further common points and applying a rubber sheeting algorithm to pull the two together. The results of this stage are seen in Fig. 13.13.

This is an extremely unstable process for such radically different shapes, particularly away from the tie points, and for some applications it may be a problem that real changes in the coastline would be masked by the over-manipulation of the shape of the data set.

As an alternative, let us assume that the second projection is in fact a Lambert conformal conic (there is a slight increase in the distance between the parallels going north). The angle between two meridians with a difference in longitude Δλ of 7.5° was measured as 5.45°. Then rearranging equation (10.1) gives

sin α = γ/Δλ = 5.45/7.5    (13.3)

where γ is the measured angle. Hence the standard parallel can be inferred as being 46.5° N. Alternatively, this could be represented by two standard parallels an equal distance either side of this, say at 40° and 53° N.

These are the most fundamental parameters to be obtained, as they are the only ones that will have an effect on the shape of the map. The other parameters required can then be selected in a more or less arbitrary fashion. Having done this, it is then possible to convert the projection coordinates into geodetic ones, and then to re-project the data using the known parameters of the first projection.

The two data sets are again combined through a similarity transformation, and the results are shown in Fig. 13.14.
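Equation (13.3) can be checked in a couple of lines, assuming (as in the text) that the angle between two meridians on a conic projection is the difference in longitude multiplied by the sine of the standard parallel:

```python
import math

gamma = 5.45  # measured angle between the two meridians (degrees)
dlam = 7.5    # difference in longitude between them (degrees)

# Rearranging gamma = dlam * sin(phi0) for the standard parallel phi0:
phi0 = math.degrees(math.asin(gamma / dlam))
print(round(phi0, 1))  # 46.6; the text quotes approximately 46.5 N
```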
Although the match is still not perfect, this is a much more satisfactory starting point for applying more complex transformations through rubber sheeting: this will be a far more stable process, and real discrepancies in the two data sets will be far more apparent.

Figure 13.10 Data set in Mercator projection.

Figure 13.11 Data set in conic projection.

Figure 13.12 The two data sets combined with a similarity transformation.

Figure 13.13 A further transformation is carried out by identifying common points and applying a rubber sheeting algorithm.

Figure 13.14 LCC map re-projected to the Mercator projection, and then transformed onto the original Mercator data via a similarity transformation.

Appendix

A1 Spherical coordinates

This appendix is a summary of the formulae that are used to determine distances and azimuths from spherical coordinates.

The shortest route between two points on the sphere is referred to as the great circle. This can be defined by the intersection with the sphere of a plane that passes through both the points and the centre of the sphere.

Let the coordinates of two points, A and B, be given as (φA, λA) and (φB, λB) respectively.
The great circle distance L_AB between the two points is given by

cos L_AB = sin φA sin φB + cos φA cos φB cos Δλ    (A1.1)

where Δλ is the difference in longitude between the two points:

Δλ = λB − λA    (A1.2)

This gives an answer in angular units, which may be converted to distance by expressing the angle in radians and multiplying by an appropriate value for the radius of the spherical Earth:

d_AB = R L_AB    (A1.3)

The azimuth of the point B from A is defined as the clockwise angle between the meridian at A and the great circle, and is given by the expression

cot A_AB = (cos φA tan φB − sin φA cos Δλ) / sin Δλ    (A1.4)

A2 Basic geometry of the ellipsoid

A2.1 Introduction

The ellipsoid is the basic reference surface for the definition of geodetic coordinates. With the use of GPS for most surveys that cover a large area, the need to carry out computations using geodetic coordinates has greatly diminished, and most local surveys will be carried out using projection coordinates that have been suitably corrected.

One of the few remaining uses for computations on the ellipsoid is in relation not to actual observations but to the establishment of boundaries: this may be, for example, between states or between oil concessions. Here it may be necessary, for example, to compute the coordinates of a boundary line between two defining points that lie several hundred kilometres apart. In such situations, the use of a projection is inappropriate, and the computations cannot be carried out in cartesian coordinates.

What follows is by no means a comprehensive discussion of all aspects of geometrical geodesy, but is a summary of some of the most useful concepts and formulae. A
A\nfuller treatment may be found in references such as Bomford (1980).\nA2.2 Normal sections and geodesics\nThe azimuth from point A to point B (both having been projected onto the surface of\nthe ellipsoid, if necessary) is defined firstly with reference to the plane that contains\nthe normal to the ellipsoid at point A and both points A and B. The angle between this\nplane and the meridional section at A, when measured clockwise from north, defines\nthe normal section azimuth from A to B, as shown in Fig. A.I.\nNonnal section azimuth\nNonnalatA\nFigure A.I\nNonnal section at A to the point B.\nThe normal section from A to B is not, in general, the same plane as that from B\nto A. The intersection of these two planes with the surface of the ellipsoid therefore\nestablishes two separate lines. Except in special cases, neither of these two lines is\nactually the shortest route between the two points.\nThe shortest route between two points on the surface of the spheroid is called the\ngeodesic. It is a complex line that cannot be described by the intersection of a plane\nand the surface of the spheroid. As shown in Fig. 
A.2, it lies between the two normal sections, with an initial azimuth at A that is closer to the normal section from A than to the normal section from B in the ratio 2:1.

This property also applies in reverse, in that the azimuth of the geodesic at the point B will be closer to the normal section from B than to the normal section from A in the same ratio.

The maximum distance between the two normal sections is a function of the distance between the two points, and ranges from a few centimetres for lines up to 100 km to several metres for lines up to 1000 km.

Figure A.2 Normal sections and the geodesic.

A2.3 Forward computation of coordinates

Several formulae may be used to determine the coordinates of a point given the coordinates of an initial point and the distance and azimuth. Some of them are less complex than the one quoted here, but have limited accuracy over longer distances.

Given the coordinates of point A (φA, λA) and a normal section azimuth (A) and distance (L), it is required to find the coordinates of the point B, such that:

φB = φA + Δφ    (A2.1)

λB = λA + Δλ    (A2.2)

Clarke's 'best formulae' may be used for distances up to 1200 km with an accuracy of within a fraction of a part per million (Bomford, 1980). These formulae are:

ε = e²/(1 − e²)    (A2.3)

r² = ε cos²φA cos²A    (A2.4)

(A2.5)

θ = L/νA − (r²/6)(1 + h)(L/νA)³ − (r²/24)(1 + 3h)(L/νA)⁴    (A2.6)

(A2.7)

sin ψ = sin φA cos θ + cos φA sin θ cos A    (A2.8)

sin Δλ = sin A sin θ sec ψ    (A2.9)

Then

tan φB = (1 + ε){1 − e²(νA/r)(sin φA/sin ψ)} tan ψ    (A2.10)
- tan \\jI\nr\nsm",\n(A2.W)\nAll terms not specifically defined here are given in Chapter 2.\nA2.4 Reverse computation of azimuth\nA normal section azimuth,AAB, between two points A (<PA,AB) and B (<PA,AB), which\nare of known coordinates, may be computed as follows:\n(A2.1l)\nThe cartesian coordinates of A and B are found from the equations in section 2.4,\nand the convention for the differences is\n(A2.12)\nwith similar terms for Y and Z.\nA2.5 Determination of points on the geodesic\nIt is quite usual that a boundary between two areas (mineral concessions or neighbouring states) is defined as the geodesic between two points of given coordinates (on\na certain datum). It will then be necessary to determine the coordinates of points at\ngiven intervals along the geodesic.\nThe fact that the geodesic is not a plane curve means that the methods for computing it are very complicated, usually involving the numerical expansion of an integration. Sharma (1966) gives a rigorous method for this. As an alternative, it is possible\nwithin a limited accuracy to compute points on the geodesic by interpolating between\nthe normal sections. A suggested method follows.\n1. Given the coordinates of the two end points, A and B, a distance L apart, com-\npute the normal section azimuths from each end by the use of equations (A2.11)\nand (A2.12).\n2. Compute the coordinates of a point on the normal section from A to B at a\ndistance D using equations (A2.1)-(A2.W). Let this point be called PA.\n3. Compute the coordinates of a point on the normal section from B to A at a\ndistance (L - D), using the same formulae. Let this point be called PB. The\nsituation is illustrated in Fig. 
A.3.

Figure A.3 Approximation of the geodesic.

It is now possible to interpolate the coordinates of the point on the geodesic from the coordinates of the two points found so far, if we bear in mind that the geodesic is nearer to the normal section AB than to the section BA in the ratio 2:1 at the point A, halfway in between the two at the midpoint, and nearer to BA than AB in the ratio 2:1 at the point B. Making the assumption of a linear change in the ratio, this gives the coordinates of the geodesic point (φG, λG) as

φG = [(2L − D)φPA + (L + D)φPB] / 3L    (A2.13)

λG = [(2L − D)λPA + (L + D)λPB] / 3L    (A2.14)

This technique has an estimated accuracy of within 10 cm at distances up to 1000 km.

A3 Determination of transformation parameters by least squares

A3.1 Introduction and least squares terminology

The purpose of this appendix is to bring together the mathematical processes for determining the parameters of any transformation model by least squares, given the coordinates in two different systems of a sufficient number of points. A basic familiarity with the least squares process has to be assumed, but the essential equations are recapitulated here to establish the terminology. The notation in this appendix follows that of Allan (1997b), and the specific application to transformation problems is based on the approach of Cooper (1987).

The aim of the least squares process is, in general, to determine best estimates of a set of unobserved parameters, x, from a set of observed parameters, s. The bold type indicates that we are dealing here with vectors, containing several parameters.
For this particular application, the unobserved parameters will usually be the parameters of the transformation, and the observed parameters will be coordinate values.

The relationship between x and s is expressed via a functional relationship:

F(x̂; ŝ) = 0    (A3.1)

where x̂ represents the best estimate of x and ŝ the best estimate of s.

Since in general we are dealing with non-linear transformations, the functional relationship is linearised as follows:

F(x̂; ŝ) = F(x̄; s̄) + (∂F/∂x) dx + (∂F/∂s) ds    (A3.2)

where x̄ and s̄ represent provisional values of the unobserved and observed parameters respectively, and are related to the best estimates through the expressions

x̂ = x̄ + dx  and  ŝ = s̄ + ds    (A3.3)

In addition, the best estimates of the observed parameters are related to the observations themselves, s̃, by the expression

ŝ = s̃ + v    (A3.4)

where the quantities v are described as the residuals.

Equations (A3.3) and (A3.4) can then be combined to express the vector ds in terms independent of the best estimates as

ds = s̃ − s̄ + v    (A3.5)
   = l + v    (A3.6)

In equation (A3.2), the functional relationship, F, will evaluate as zero when it is applied to the best estimates of the observed and the unobserved parameters, as this is the way that the relationship was first introduced in (A3.1). It is also the case that the relationship will be zero when applied to the provisional values of the observed and unobserved parameters, provided they are given as a consistent set.
Thus, equation (A3.2) can be simplified to

(∂F/∂x) dx + (∂F/∂s) ds = 0    (A3.7)

For convenience, dx will henceforward be referred to simply as x, the vector of unknowns that must be added to the provisional values to obtain the best estimates. Similarly, the vector ds is referred to as s, and is split into its component parts l + v. Thus the equation now becomes

Ax + Cv = −Cl = b    (A3.8)

where

A = ∂F/∂x    (A3.9)

and

C = ∂F/∂s    (A3.10)

A and C are termed Jacobian matrices, and represent the partial derivatives of the functional relationships with respect to the unobserved and observed parameters respectively. More concisely, they are often called the design matrices. Their form will be discussed in detail in relation to the different transformation models in the following sections, but it should be pointed out here that they have been derived from a linearisation that has the provisional values of the observed and unobserved parameters as its starting point. In evaluating the matrices, the provisional values of these parameters should therefore be used.

In some situations it is possible to express a problem in such a way that the design matrix C reduces to the identity matrix I (to within a sign). In such cases, equation (A3.8) can be rearranged and expressed in the form

Ax = l + v    (A3.11)

If the observation equations are given in this form, it can be shown (Allan, 1997b) that the solution that minimises the weighted sum of the squares of the residuals, v, is

x = (AᵀWA)⁻¹AᵀWl    (A3.12)

where W is an appropriate weight matrix.

Alternatively, if the more general form of equation (A3.8) is used, it can be shown (Allan, 1997b) that the solution now has the form

x = [Aᵀ(CW⁻¹Cᵀ)⁻¹A]⁻¹Aᵀ(CW⁻¹Cᵀ)⁻¹b    (A3.13)

In the sections that follow, several different forms of transformation model will be considered. For each of these, the functional relationship between observed and unobserved parameters will first be defined, as will the weight matrix. The form of the design matrices will then be derived.
After this, the solution proceeds by application of either equation (A3.12) or equation (A3.13).

Before proceeding to a consideration of the different transformation models, however, further consideration should be given to the formation of the vector b in (A3.8). The implication of the way that it was derived is that it is first necessary to form the vector l from a knowledge of the provisional values of the observed parameters. Although for many problems in least squares this is not hard, in the area of coordinate transformations it is often an unwieldy procedure. An alternative derivation of (A3.8) will therefore be given (as in Cross, 1983) that obviates the need to form provisional values of the observations.

This starts with the alternative definition

F(x̂; ŝ) = F(x̄; s̃) + (∂F/∂x) dx + (∂F/∂s) v    (A3.14)

which follows from the fact that the residuals, v, are defined as the differences between the observations and the best estimates. This can then be simplified to

Ax + Cv = −F(x̄; s̃) = b    (A3.15)

Therefore b can be found by substituting the observations (observed coordinates) and the provisional values of the unobserved parameters (transformation parameters) into the functional relationship.

It should also be noted that, with the linearisation now starting from the provisional values of the unobserved parameters and the observed values of the observed parameters, it is these values that should be used in evaluating the matrices A and C. This is not always stated explicitly in the derivations that follow, as it would make the notation rather unwieldy, but it is implicit throughout.

A3.2 Two-dimensional transformations

A3.2.1 The similarity transformation

A set of coordinates in the system (x, y) is to be transformed into the system (X, Y). Consider the simple similarity transformation shown in Fig.
A.4, in which the coordinates undergo a rotation θ, a scaling factor λ, and shifts of Δx and Δy in the axes.

Figure A.4 Two-dimensional similarity transformation.

This transformation can be expressed by the following equations:

X = xλ cos θ − yλ sin θ + Δx
Y = xλ sin θ + yλ cos θ + Δy    (A3.16)

The first step is to recast these expressions into the form of a set of functional relationships in the form required by equation (A3.1). At the same time, the terms in λ and θ may be re-expressed as two new parameters, a and b. Thus

F1 = ax − by + Δx − X
F2 = bx + ay + Δy − Y    (A3.17)

where

a = λ cos θ,  b = λ sin θ    (A3.18)

In solving for a and b, the original parameters λ and θ can subsequently be recovered from

λ = √(a² + b²)    (A3.19)

θ = tan⁻¹(b/a)    (A3.20)

The situation is further simplified if it is possible to say that the coordinates (x, y) are effectively constants, and may be regarded as being without error. In this case, the only observed parameters are the coordinates (X, Y), and the unobserved parameters are a, b, Δx and Δy.

The design matrices A and C then have the form

A = ( ∂F1/∂a  ∂F1/∂b  ∂F1/∂Δx  ∂F1/∂Δy ; ∂F2/∂a  ∂F2/∂b  ∂F2/∂Δx  ∂F2/∂Δy )    (A3.21)

C = ( ∂F1/∂X  ∂F1/∂Y ; ∂F2/∂X  ∂F2/∂Y )    (A3.22)

These are then evaluated as:

A = ( x  −y  1  0 ; y  x  0  1 )    (A3.23)

C = ( −1  0 ; 0  −1 )    (A3.24)

As the matrix C found above has the form −I, it can be seen that the general form of the observation equations given in (A3.8) can be reduced to the special case of equation (A3.11).

In fact, the matrices determined in (A3.23) and (A3.24) are only a part of the full picture, as these have been derived for only one point. In general, there will be a similar matrix for each point that is common to the two coordinate systems.
Denoting the partial matrix by the symbol Ai, where

Ai = ( xi  −yi  1  0 ; yi  xi  0  1 )    (A3.25)

and (xi, yi) are the coordinates of the ith point, the full design matrix for n points is

A = ( A1 ; A2 ; … ; An )    (A3.26)

which is a (2n × 4) matrix.

The more general form of the transformation is the situation where it is not possible to assume that one set of coordinates is correct, and instead both (X, Y) and (x, y) are assumed to be observed data with associated weights. The functional relationships are the same as before, but they will now be written in a form that makes it explicit that they are two equations out of a total set of 2n. Thus:

F1i = a xi − b yi + Δx − Xi
F2i = b xi + a yi + Δy − Yi    (A3.27)

As before, the sub-matrix Ai is determined by differentiating with respect to the unobserved parameters, and is exactly the same as previously:

Ai = ( xi  −yi  1  0 ; yi  xi  0  1 )    (A3.28)

The sub-matrix Ci is now different, however, as it includes the derivatives with respect to all observed parameters, thus:

Ci = ( ∂F1i/∂xi  ∂F1i/∂yi  ∂F1i/∂Xi  ∂F1i/∂Yi ; ∂F2i/∂xi  ∂F2i/∂yi  ∂F2i/∂Xi  ∂F2i/∂Yi )    (A3.29)

which can be evaluated as

Ci = ( a  −b  −1  0 ; b  a  0  −1 )    (A3.30)

The elements of the misclosure vector are determined from

bi = −( F1i(x̄, s̃) ; F2i(x̄, s̃) ) = −( a xi − b yi + Δx − Xi ; b xi + a yi + Δy − Yi )    (A3.31)

The full set of equations now fits together as follows:

( A1 ; A2 ; … ; An )(a; b; Δx; Δy) + diag(C1, C2, …, Cn) v = ( b1 ; b2 ; … ; bn )    (A3.32)

where v contains the residuals (vx1, vy1, vX1, vY1, …) of all the observed coordinates. The dimensions of these matrices are

(2n × 4)(4 × 1) + (2n × 4n)(4n × 1) = (2n × 1)

Note that each 0 in the C matrix is a sub-matrix of dimensions (2 × 4).

It is helpful to recall here that the equations derived above should be evaluated using the provisional values of the unobserved parameters (the transformation parameters) and the observed values of the observations (the coordinate sets).
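For the simple case in which (x, y) is treated as error-free (so C = −I and equation (A3.11) applies with W = I), the whole solution fits in a few lines of numpy. The function name and the synthetic check are illustrative, not from the text:

```python
import numpy as np

def fit_similarity_2d(xy, XY):
    # Least squares estimate of (a, b, dx, dy) from equations (A3.17),
    # stacking the per-point sub-matrices Ai of (A3.25) and solving
    # A p = l in the sense of equation (A3.12) with W = I.
    A, l = [], []
    for (x, y), (X, Y) in zip(xy, XY):
        A.append([x, -y, 1.0, 0.0])   # row from F1 = ax - by + dx - X
        A.append([y,  x, 0.0, 1.0])   # row from F2 = bx + ay + dy - Y
        l.extend([X, Y])
    p, *_ = np.linalg.lstsq(np.array(A), np.array(l), rcond=None)
    a, b, dx, dy = p
    # Recover the physical parameters via (A3.19) and (A3.20)
    return np.hypot(a, b), np.arctan2(b, a), dx, dy

# Synthetic check: rotation 30 degrees, scale 1.5, shifts (10, -5)
theta_t, lam_t, dx_t, dy_t = np.radians(30.0), 1.5, 10.0, -5.0
xy = [(0.0, 0.0), (80.0, 10.0), (25.0, 60.0), (70.0, 90.0)]
XY = [(lam_t*np.cos(theta_t)*x - lam_t*np.sin(theta_t)*y + dx_t,
       lam_t*np.sin(theta_t)*x + lam_t*np.cos(theta_t)*y + dy_t) for x, y in xy]
scale, theta, dx, dy = fit_similarity_2d(xy, XY)
print(round(scale, 3), round(np.degrees(theta), 1), round(dx, 3), round(dy, 3))
# 1.5 30.0 10.0 -5.0
```

With exact synthetic data the four parameters are recovered to machine precision; with real, noisy coordinates the same call returns the least squares best fit.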
The equations could be made more explicit by writing terms such as ā, b̄, Δx̄ and Δȳ for the provisional values of the transformation parameters, and x̃i, ỹi, X̃i and Ỹi for the observed parameters; in practice, little confusion can arise as these will in general be the only values available.

Finally, to obtain a solution through the use of equation (A3.13), it is necessary to form the weight matrix W. Its order must be consistent with the order of the observed parameters implied by the formation of the C matrix and the v vector. Thus, the weight matrix W⁻¹ is the variance-covariance matrix of the observed coordinates (A3.33), in the same ordering as v: its diagonal elements are the variances σ²x1, σ²y1, σ²X1, σ²Y1, …, and its off-diagonal elements are the corresponding covariances, such as σx1y1, the covariance between the coordinates x1 and y1.

This has been formed under the assumption that the two data sets are independent of each other, and thus that all the covariances between the two sets are zero.

A3.2.2 Affine transformation

The affine transformation was introduced in Chapter 12. The basic equations have the form

X = a0 + a1 x + a2 y    (A3.34)

Y = b0 + b1 x + b2 y    (A3.35)

The functional relationships are therefore defined as

F1i = a0 + a1 xi + a2 yi − Xi    (A3.36)

F2i = b0 + b1 xi + b2 yi − Yi    (A3.37)

The sub-elements of the design matrices are

Ai = ( 1  xi  yi  0  0  0 ; 0  0  0  1  xi  yi )    (A3.38)

Ci = ( a1  a2  −1  0 ; b1  b2  0  −1 )    (A3.39)

and the misclosure vector is

bi = −( F1i(x̄, s̃) ; F2i(x̄, s̃) )    (A3.40)

These elements fit together as

( A1 ; … ; An )(a0; a1; a2; b0; b1; b2) + diag(C1, …, Cn) v = ( b1 ; … ; bn )    (A3.41)

The dimensions of the matrices are

(2n × 6)(6 × 1) + (2n × 4n)(4n × 1) = (2n × 1)

Each 0 in C is again a (2 × 4) sub-matrix.
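Under the same error-free-(x, y) assumption as before, the affine parameters of (A3.34)-(A3.35) separate into two independent three-parameter fits that share one design matrix. A short numpy sketch (names illustrative):

```python
import numpy as np

def fit_affine_2d(xy, XY):
    # X = a0 + a1*x + a2*y and Y = b0 + b1*x + b2*y share the design
    # matrix rows (1, x, y), so the a- and b-triples solve independently.
    xy, XY = np.asarray(xy, float), np.asarray(XY, float)
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    a, *_ = np.linalg.lstsq(A, XY[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, XY[:, 1], rcond=None)
    return a, b

# Synthetic check against a known, slightly non-conformal distortion
a_t, b_t = (5.0, 1.001, 0.002), (-3.0, -0.001, 0.999)
xy = [(0, 0), (100, 0), (0, 100), (100, 100), (37, 81)]
XY = [(a_t[0] + a_t[1]*x + a_t[2]*y, b_t[0] + b_t[1]*x + b_t[2]*y)
      for x, y in xy]
a, b = fit_affine_2d(xy, XY)
# a recovers (5.0, 1.001, 0.002) and b recovers (-3.0, -0.001, 0.999)
```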
The weight matrix is again as in (A3.33).

A3.2.3 Second-order polynomials

The equations for the two-dimensional transformation by second-order polynomials were introduced in Chapter 12 as

X = a0 + a1 x + a2 y + a3 x² + a4 y² + a5 xy    (A3.42)

Y = b0 + b1 x + b2 y + b3 x² + b4 y² + b5 xy    (A3.43)

The functional relationships follow from these definitions:

F1i = a0 + a1 xi + a2 yi + a3 xi² + a4 yi² + a5 xi yi − Xi    (A3.44)

F2i = b0 + b1 xi + b2 yi + b3 xi² + b4 yi² + b5 xi yi − Yi    (A3.45)

The sub-elements of the design matrices and the misclosure vector are

Ai = ( 1  xi  yi  xi²  yi²  xi yi  0  0  0  0  0  0 ; 0  0  0  0  0  0  1  xi  yi  xi²  yi²  xi yi )    (A3.46)

Ci = ( a1 + 2a3 xi + a5 yi   a2 + 2a4 yi + a5 xi   −1   0 ; b1 + 2b3 xi + b5 yi   b2 + 2b4 yi + b5 xi   0   −1 )    (A3.47)

bi = −( F1i(x̄, s̃) ; F2i(x̄, s̃) ) = −( a0 + a1 xi + a2 yi + a3 xi² + a4 yi² + a5 xi yi − Xi ; b0 + b1 xi + b2 yi + b3 xi² + b4 yi² + b5 xi yi − Yi )    (A3.48)

The full equations are then constructed as in equation (A3.41), except that the vector of unobserved parameters is

x = (a0, a1, a2, a3, a4, a5, b0, b1, b2, b3, b4, b5)ᵀ    (A3.49)

The dimensions of the matrices are

(2n × 12)(12 × 1) + (2n × 4n)(4n × 1) = (2n × 1)

A3.3 Three-dimensional transformations

A3.3.1 The seven-parameter similarity transformation

The seven-parameter three-dimensional similarity transformation was introduced in Chapter 4 in the form

X = ΔX + (1 + λ)Rx    (A3.50)

where R is the rotation matrix (A3.51), formed from the rotations α1, α2 and α3 about the x, y and z axes respectively. The functional relationships follow from the above definition, and are conveniently expressed in matrix form as

Fi = ΔX + (1 + λ)Rx − X    (A3.52)

The design matrix A is then developed from the partial derivatives with respect to the seven parameters:

Ai = ( ∂F/∂λ  ∂F/∂α1  ∂F/∂α2  ∂F/∂α3  ∂F/∂ΔX  ∂F/∂ΔY  ∂F/∂ΔZ )    (A3.53)

The first element in equation (A3.53) can be evaluated as

∂F/∂λ = Rx    (A3.54)

The most common application of this transformation is to the conversion of coordinates from one geodetic datum to another.
In these situations, the rotations will be very small (a few seconds of arc), and it will be the case that

cos α ≈ 1,  sin α ≈ α (in radians)    (A3.56)

and similar simplifications can therefore reduce the full rotation matrix (A3.55) to

R ≈ ( 1  α3  −α2 ; −α3  1  α1 ; α2  −α1  1 )    (A3.57)

The other terms in (A3.53) are evaluated as

∂F/∂α1 = (1 + λ)(∂R/∂α1)x ≈ ( 0 ; z ; −y )    (A3.58)

∂F/∂α2 = (1 + λ)(∂R/∂α2)x ≈ ( −z ; 0 ; x )    (A3.59)

∂F/∂α3 = (1 + λ)(∂R/∂α3)x ≈ ( y ; −x ; 0 )    (A3.60)

The complete set of equations forms the sub-element of the design matrix A thus:

Ai = ( x  0  −z  y  1  0  0 ; y  z  0  −x  0  1  0 ; z  −y  x  0  0  0  1 )    (A3.61)

The sub-element of the design matrix C is formed from

Ci = ( (1 + λ)R  −I3 )    (A3.62)

The misclosure vector is given by

bi = −Fi(x̄, s̃) = X − ΔX − (1 + λ)Rx    (A3.63)

The matrix sub-elements combine in the full set of equations as

( A1 ; A2 ; … ; An )(λ; α1; α2; α3; ΔX; ΔY; ΔZ) + diag(C1, …, Cn) v = ( b1 ; … ; bn )    (A3.64)

The dimensions of these matrices are

(3n × 7)(7 × 1) + (3n × 6n)(6n × 1) = (3n × 1)

The alternative form of the seven-parameter similarity transformation is the one that achieves a rotation about a local origin, x0, by reformulating the transformation equations as

X = ΔX + x0 + (1 + λ)R(x − x0)    (A3.65)

The derivation of the least squares formulation of this transformation proceeds in a very similar way to the original form. The functional relationship is altered to

Fi = ΔX + x0 + (1 + λ)R(x − x0) − X    (A3.66)

The matrix Ci remains the same as before, but Ai becomes

Ai = ( x − x0  0  −(z − z0)  y − y0  1  0  0 ; y − y0  z − z0  0  −(x − x0)  0  1  0 ; z − z0  −(y − y0)  x − x0  0  0  0  1 )    (A3.67)

The vector b is evaluated with the functional relationship in (A3.66), but all the matrices are combined as in (A3.64).

For either of these two models, the weight matrix is a three-dimensional version of that given in (A3.33) for the two-dimensional case.

A3.3.2 Other similarity transformations

In some situations, a full seven-parameter similarity transformation is either not needed or not possible with the existing data.
It may be the case, for example, that only the average shift, or translation, between the datums is required. This would then be termed a three-parameter transformation, with only (ΔX, ΔY, ΔZ) being determined.

To take another example, GPS data could be transformed onto a local datum whilst preserving the scale of the survey by applying a six-parameter transformation solving for (α1, α2, α3, ΔX, ΔY, ΔZ). Each of these cases is simply derived as a special case of the seven-parameter transformation, in which the columns of the matrix A that correspond to the unwanted parameters are eliminated. Thus, the design matrix for a solution of (ΔX, ΔY, ΔZ) alone is found by eliminating the first four columns of (A3.61):

Ai = ( 1  0  0 ; 0  1  0 ; 0  0  1 )    (A3.68)

Similarly, the design matrix for a six-parameter transformation involving three shifts and three rotations is found by eliminating the first column of (A3.61):

Ai = ( 0  −z  y  1  0  0 ; z  0  −x  0  1  0 ; −y  x  0  0  0  1 )

In each case, the design matrix C will remain the same as in (A3.62).

A3.4 Worked example

The way in which the solution is carried out is illustrated here with a worked example. The one selected is a three-dimensional similarity transformation by the two different models shown in section A3.3.1, although this will also serve to illustrate the general principles for any transformation.

There are four points common to the two systems, and their coordinates are given in Table A1. The standard error of each coordinate in system P is 0.01 m, and in system Q is 0.02 m. For convenience, it will be assumed that all coordinates and points are uncorrelated, and that the weight matrix is therefore diagonal.
It has the form

W = diag( w, w, w, w )

where each sub-matrix w represents one of the points and is given by

w = diag( 1/0.01², 1/0.01², 1/0.01², 1/0.02², 1/0.02², 1/0.02² )

with the first three entries weighting the system P coordinates and the last three the system Q coordinates.

Table A1

System P
Point        X            Y            Z
A      4027656.73      702.96   4973741.92
B      4025033.77    14050.08   4975857.89
C      4010282.95     1399.85   4987786.36
D      4009387.42    13295.68   4988482.31

System Q
Point        X            Y            Z
A      4027756.52      820.90   4973972.92
B      4025134.97    14168.85   4976087.46
C      4010381.77     1521.26   4988016.66
D      4009487.60    13417.55   4988711.44

As a first approximation, the provisional values of all the transformation parameters are set to zero, which will always be a valid assumption for any pair of three-dimensional geodetic datums.

Each element of the design matrix A is then formed according to (A3.61), using the observed values of the coordinates. The full matrix is a (12 × 7) stack of the four point sub-matrices; for point A, for example,

A_A = ( 4027656.7  0  −4973741.9  703.0  1  0  0 ; 703.0  4973741.9  0  −4027656.7  0  1  0 ; 4973741.9  −703.0  4027656.7  0  0  0  1 )

with similar sub-matrices for B, C and D.

The sub-elements of the design matrix C are as defined in (A3.62), which with all transformation parameters having provisional values of zero evaluates as

Ci = ( 1  0  0  −1  0  0 ; 0  1  0  0  −1  0 ; 0  0  1  0  0  −1 )

These sub-matrices combine into a full C matrix of dimensions 12 × 24, as shown in (A3.64).

The vector b is then evaluated from (A3.63), using the provisional values of the transformation parameters (all zero in the first iteration) and the observed values of
Thus, the first sub-vector, $b_1$, is given as

$$b_1 = \begin{pmatrix} 99.79 \\ 117.94 \\ 231.00 \end{pmatrix} = \begin{pmatrix} 4027756.52 \\ 820.90 \\ 4973972.92 \end{pmatrix} - \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} - (1+0)\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 4027656.73 \\ 702.95 \\ 4973741.92 \end{pmatrix}$$

and the full vector, including a similar computation for each point, is

$$b = \begin{pmatrix} 99.79 & 117.94 & 231.00 & 101.20 & 118.78 & 229.57 & 98.82 & 121.41 & 230.31 & 100.19 & 121.87 & 229.13 \end{pmatrix}^T$$

All necessary matrices have now been formed, and it remains only to carry out the matrix algebra solution of (A3.13) to determine the best estimates of the transformation parameters. This gives the solution shown in Table A2.

Table A2
μ    2.0691 × 10⁻⁵    = 20.69 ppm
α₁   9.90267 × 10⁻⁵   = 20.42"
α₂   4.99385 × 10⁻⁵   = 10.30"
α₃   0.00011886       = 24.52"
ΔX   264.75           = 264.75 m
ΔY   104.14           = 104.14 m
ΔZ   -73.00           = -73.00 m

The solution so obtained is added to the provisional set of values (in this case zero) to find the best estimates of the transformation parameters. If the two datums were very different from each other, in theory a further iteration would be required in which the current best estimates of the transformation parameters become the provisional values. In most situations, however, this would not be required.

The alternative to this transformation model is that defined in equation (A3.66). Although the full numerical derivation of this will not be given here, it is instructive to examine the final result. This gives the transformation parameters as shown in Table A3.

Table A3
μ    2.0691 × 10⁻⁵    = 20.69 ppm
α₁   9.90267 × 10⁻⁵   = 20.42"
α₂   4.99385 × 10⁻⁵   = 10.30"
α₃   0.00011886       = 24.52"
ΔX   100              = 100 m
ΔY   120              = 120 m
ΔZ   230              = 230 m

It can be seen that, although the scale and rotations are exactly the same as before, the translations are completely different. This is due to the fact that the origin of the rotation in the first model was the centre of the Earth, rather than being in the region of the points being transformed. Effectively this means that as the rotation origin is at such a great distance, its effect is very similar to a translation.
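With all provisional parameter values set to zero, each sub-vector of b reduces to the plain coordinate difference x_Q − x_P, so the vector can be checked directly from the Table A1 coordinates. A small sketch (the book's printed vector differs from these raw differences by up to 0.01 m, evidently because it was computed from unrounded coordinates):

```go
package main

import "fmt"

// Table A1 coordinates (X, Y, Z) for points A-D in systems P and Q.
var sysP = [4][3]float64{
	{4027656.73, 702.96, 4973741.92},
	{4025033.77, 14050.08, 4975857.89},
	{4010282.95, 1399.85, 4987786.36},
	{4009387.42, 13295.68, 4988482.31},
}

var sysQ = [4][3]float64{
	{4027756.52, 820.90, 4973972.92},
	{4025134.97, 14168.85, 4976087.46},
	{4010381.77, 1521.26, 4988016.66},
	{4009487.60, 13417.55, 4988711.44},
}

// misclosure returns b = x_Q - x_P for every coordinate, which is what the
// misclosure vector reduces to when all provisional parameters are zero.
func misclosure() []float64 {
	b := make([]float64, 0, 12)
	for i := range sysP {
		for j := 0; j < 3; j++ {
			b = append(b, sysQ[i][j]-sysP[i][j])
		}
	}
	return b
}

func main() {
	for i, v := range misclosure() {
		fmt.Printf("b[%d] = %.2f\n", i+1, v)
	}
}
```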
The true translation is therefore altered to take account of this.

Although it must be emphasised that the effect of these two models is exactly the same (they would give the same coordinates if used to carry out a transformation), the second approach usually gives a better insight into the physical differences between the two datums: the translation found does correspond to the origin shift between the two systems. In the first approach the translation is so dependent on the rotation, and the rotation so dependent on small coordinate errors, that very different answers will be found for different groups of points.

It must be stressed that the two sets of transformation parameters are not interchangeable: if a set of parameters is quoted, the model used must be given alongside them.
Datums and Map Projections

The development of geographic information systems for handling and manipulating data in digital form, and also the development of techniques such as the global positioning system and satellite (or airborne) remote sensing, has led to a vast increase in the use of spatial data. This book is a practical guide for those working with spatially referenced data to the problems that may be associated with datums and map projections. The book makes the issues clear without assuming any prior knowledge and focuses on solving the problems encountered when combining data from different sources. It explores the short cuts applicable when incomplete information is available.
There are many practical examples and extensive case studies and appendices of essential formulae. It is ideal for students and practitioners in surveying, remote sensing, geographic information systems and related areas and caters for the non-specialist.
https://lumochift.org/blog/digit-anagrams
# Digit anagrams

February 16, 2023
Moch Lutfi (@kaptenupi)

## Problem

Given an array of integers `a`, your task is to count the number of pairs `i` and `j` (where `0 ≤ i < j < a.length`), such that `a[i]` and `a[j]` are digit anagrams.

Two integers are considered to be digit anagrams if they contain the same digits. In other words, one can be obtained from the other by rearranging the digits (or trivially, if the numbers are equal). For example, `12345` and `54231` are digit anagrams, but `321` and `782` are not (since they don't contain the same digits). `220` and `22` are also not considered as digit anagrams, since they don't even have the same number of digits.

Example

For `a = [25, 35, 872, 228, 53, 278, 872]`, the output should be `solution(a) = 4`.

There are `4` pairs of digit anagrams:

- `a[1] = 35` and `a[4] = 53` (`i = 1` and `j = 4`),
- `a[2] = 872` and `a[5] = 278` (`i = 2` and `j = 5`),
- `a[2] = 872` and `a[6] = 872` (`i = 2` and `j = 6`),
- `a[5] = 278` and `a[6] = 872` (`i = 5` and `j = 6`).

## Solution

```go
import (
	"sort"
	"strconv" // needed for strconv.Itoa; missing from the original import list
	"strings"
)

func solution(a []int) int {
	anagramCount := 0
	// Map from sorted digit string to how many numbers seen so far share it
	freqMap := make(map[string]int)
	for i := 0; i < len(a); i++ {
		// Convert the integer to a string and sort its digits
		numStr := strconv.Itoa(a[i])
		numArr := strings.Split(numStr, "")
		sort.Strings(numArr)
		sortedNumStr := strings.Join(numArr, "")
		// Every earlier number with the same sorted digits forms a pair with this one
		if val, ok := freqMap[sortedNumStr]; ok {
			anagramCount += val
			freqMap[sortedNumStr] = val + 1
		} else {
			// First time we see this sorted string: record it with a count of 1
			freqMap[sortedNumStr] = 1
		}
	}
	return anagramCount
}
```
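As a sanity check — my own sketch, not part of the original post — the single-pass map solution can be validated against a brute-force count that compares every pair's sorted digit strings directly:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// sortedDigits returns the digits of n in ascending order, e.g. 872 -> "278".
func sortedDigits(n int) string {
	d := strings.Split(strconv.Itoa(n), "")
	sort.Strings(d)
	return strings.Join(d, "")
}

// pairCount is a brute-force O(n^2) reference: it checks every pair (i, j)
// with i < j directly, which is handy for validating the map-based solution.
func pairCount(a []int) int {
	count := 0
	for i := 0; i < len(a); i++ {
		for j := i + 1; j < len(a); j++ {
			if sortedDigits(a[i]) == sortedDigits(a[j]) {
				count++
			}
		}
	}
	return count
}

func main() {
	fmt.Println(pairCount([]int{25, 35, 872, 228, 53, 278, 872})) // 4, matching the example
}
```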
https://socratic.org/questions/how-do-you-use-the-quotient-rule-to-find-the-derivative-of-y-1-cos-x-1-sin-x#108837
# How do you use the quotient rule to find the derivative of y=(1+cos(x))/(1+sin(x)) ?

Sep 9, 2014

The quotient rule states that given functions $u$ and $v$ such that $y = \frac{u(x)}{v(x)}$, $\frac{dy}{dx} = \frac{u'(x)\,v(x) - u(x)\,v'(x)}{v^2(x)}$. By assigning $u$ and $v$ equal to the numerator and denominator, respectively, in the example function, we arrive at $\frac{dy}{dx} = \frac{(-\sin x)(1+\sin x) - (1+\cos x)(\cos x)}{(1+\sin x)^2}$.

When using the quotient rule, it is important to designate your functions $u(x)$ and $v(x)$ in such a way as to make things simple while still being accurate. In the above case, declaring $\cos(x)$ as $u(x)$ would not allow us to use the quotient rule efficiently, as our numerator would then be $1 + u(x)$.

From the identities of trigonometric function derivatives, we know that the derivative of $\sin(x)$ is $\cos(x)$, and the derivative of $\cos(x)$ is $-\sin(x)$. This could be proven using Euler's formula, but for our purposes we shall accept these without proof. The derivative of any constant (such as 1 in our example) is 0, and the derivative of a sum is equal to the sum of the derivatives.
Therefore:

$u(x) = 1 + \cos(x)$, $u'(x) = -\sin(x)$, $v(x) = 1 + \sin(x)$, $v'(x) = \cos(x)$

$\frac{dy}{dx} = \frac{(-\sin x)(1+\sin x) - (1+\cos x)(\cos x)}{(1+\sin x)^2}$

Attempting to use trig identities to simplify...

$= \frac{-\sin x - \sin^2 x - \cos x - \cos^2 x}{(1+\sin x)^2} = \frac{-(\sin^2 x + \cos^2 x) - (\sin x + \cos x)}{(1+\sin x)^2} = -\frac{1 + \sin x + \cos x}{(1+\sin x)^2}$

However, the initial answer should be sufficient.
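The simplified result can also be verified numerically. The sketch below (my own check, with an arbitrarily chosen step size) compares the closed-form derivative $-\frac{1+\sin x+\cos x}{(1+\sin x)^2}$ against a central finite difference of $y = \frac{1+\cos x}{1+\sin x}$:

```go
package main

import (
	"fmt"
	"math"
)

// y is the original function from the question.
func y(x float64) float64 { return (1 + math.Cos(x)) / (1 + math.Sin(x)) }

// dy is the quotient-rule result after simplification.
func dy(x float64) float64 {
	return -(1 + math.Sin(x) + math.Cos(x)) / math.Pow(1+math.Sin(x), 2)
}

// fdCheck returns the absolute difference between dy and a central
// finite-difference estimate of y' at x (step size chosen arbitrarily).
func fdCheck(x float64) float64 {
	const h = 1e-6
	fd := (y(x+h) - y(x-h)) / (2 * h)
	return math.Abs(dy(x) - fd)
}

func main() {
	// Sample points away from x = -pi/2, where the denominator vanishes.
	for _, x := range []float64{0.5, 1.0, 2.0} {
		fmt.Printf("x = %.1f: |analytic - numeric| = %.2e\n", x, fdCheck(x))
	}
}
```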
https://www.tutorialspoint.com/p-find-the-sum-of-the-following-by-column-method-p-p-2x-2-y-2-and-3x-2plus5y-2-p
# Find the sum of the following by column method: $2x^2 - y^2$ and $3x^2 + 5y^2$

Given:

The given terms are $2x^2 - y^2$ and $3x^2 + 5y^2$.

To do:

We have to find the sum of the given terms by column method.

Solution:

The column method is a mathematical method of calculation where the numbers to be added or subtracted are set out above one another in columns. The calculation is done by 'carrying' and 'borrowing' numbers from column to column.

Column I ($x^2$ terms): $2 + 3 = 5$, giving $5x^2$
Column II ($y^2$ terms): $-1 + 5 = 4$, giving $4y^2$

$2x^2 - y^2 + 3x^2 + 5y^2 = 5x^2 + 4y^2$

Therefore, the sum of $2x^2 - y^2$ and $3x^2 + 5y^2$ is $5x^2 + 4y^2$.

Updated on: 10-Oct-2022
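The column method here amounts to adding the coefficients of like terms, column by column. A minimal sketch (the map representation and names are mine, not from the original page):

```go
package main

import "fmt"

// addPolys adds two polynomials represented as maps from term name to
// coefficient, mirroring the column method: like terms share a column,
// and their coefficients are summed.
func addPolys(p, q map[string]int) map[string]int {
	sum := map[string]int{}
	for term, c := range p {
		sum[term] += c
	}
	for term, c := range q {
		sum[term] += c
	}
	return sum
}

func main() {
	p := map[string]int{"x^2": 2, "y^2": -1} // 2x^2 - y^2
	q := map[string]int{"x^2": 3, "y^2": 5}  // 3x^2 + 5y^2
	fmt.Println(addPolys(p, q))              // the coefficients of 5x^2 + 4y^2
}
```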
https://answers.everydaycalculation.com/simplify-fraction/346-900
"Solutions by everydaycalculation.com\n\n## Reduce 346/900 to lowest terms\n\nThe simplest form of 346/900 is 173/450.\n\n#### Steps to simplifying fractions\n\n1. Find the GCD (or HCF) of numerator and denominator\nGCD of 346 and 900 is 2\n2. Divide both the numerator and denominator by the GCD\n346 ÷ 2/900 ÷ 2\n3. Reduced fraction: 173/450\nTherefore, 346/900 simplified to lowest terms is 173/450.\n\nMathStep (Works offline)",
https://socratic.org/questions/5956e43811ef6b615ee8d127
"# How do I work out how many molecules there are in a 1*g mass of water?\n\nSep 10, 2017\n\n$\\text{Do it dimensionally......}$\n\n#### Explanation:\n\nWhat do I mean by $\\text{dimensionally}$?\n\nSuppose we have a $1.00 \\cdot g$ mass of water. Now we know that $\\text{Avogadro's number of water molecules}$ has a mass of $18.01 \\cdot g$, and we would typically write this as......\n\n$\\text{Molar mass of water} = 18.01 \\cdot g \\cdot m o {l}^{-} 1$\n\nAnd if I wanted to work out the MOLAR quantity, I would perform the division by leaving the UNITS in......i.e.\n\n$\\text{Moles of water} = \\frac{1.00 \\cdot g}{18.01 \\cdot g \\cdot m o {l}^{-} 1}$, and of course we can do some cancellation of units.....\n\n$\\text{Moles of water} = \\frac{1.00 \\cdot \\cancel{g}}{18.01 \\cdot \\cancel{g} \\cdot m o {l}^{-} 1}$\n\n$= 0.0555 \\cdot \\frac{1}{m o {l}^{-} 1} = 0.0555 \\cdot \\frac{1}{\\frac{1}{m o l}}$ because ${x}^{-} 1 \\equiv \\frac{1}{x}$, and\n\n$= 0.0555 \\cdot \\frac{1}{m o {l}^{-} 1} = 0.0555 \\cdot \\frac{1}{\\frac{1}{m o l}} = 0.0555 \\cdot m o l$.\n\nAnd as for volumes, we need a $\\text{density}$, $\\rho$, the which for chemists is typically quoted as $g \\cdot m {L}^{-} 1 \\equiv g \\cdot c {m}^{-} 3$.\n\nBy definition, $\\rho = \\text{Mass\"/\"Volume}$ i.e. mass per unit volume.\n\nHere ${\\rho}_{{H}_{2} O} = \\frac{\\text{Molar quantity\"xx\"Molar mass}}{1 \\cdot m L}$\n\n$= \\frac{0.0555 \\cdot \\cancel{m o l} \\times 18.01 \\cdot g \\cdot \\cancel{m o {l}^{-} 1}}{1 \\cdot m L}$\n\n$1 \\cdot g \\cdot m L$; dimensionally consistent as required. Of course, the $\\text{density}$ needs to be measured......\n\nIf this does not help you will have to refine your question."
https://ncona.com/2020/08/java-lambdas/
"Sometimes when writing software, it is useful to pass a function as an argument to another function. A common example for this is a generic filter function. In pseudo code, it would look something like this:\n\n``````1\n2\n3\n4\n5\n6\n7\n8\n9\n10\nfilter(persons, filter_function) {\nfiltered = [];\nfor (person in persons) {\nif (filter_function(person)) {\nfiltered.push(person);\n}\n}\n\nreturn filtered;\n}\n``````\n\nIn the example above, we provide a list of `persons` and a `filter_function`. The `filter_function` gets applied on each `person` and if it returns false, the person is discarded.\n\n## Interfaces\n\nWe can implement something similar to the example above using interfaces. We define an interface with a single method and pass an object implementing this interface to a function. Let’s see how it looks:\n\n``````1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\nimport java.util.ArrayList;\n\npublic class Interface {\n// Interface of the filter\nprivate interface Filter<T> {\npublic boolean filter(T t);\n}\n\n// Function that filters numbers based on a given filterFunction\npublic static ArrayList<Integer> filter(int[] numbers, Filter<Integer> filterFunction) {\nArrayList<Integer> filtered = new ArrayList<Integer>();\nfor (int i = 0; i < numbers.length; i++) {\nif (filterFunction.filter(numbers[i])) {\n}\n}\n\nreturn filtered;\n}\n\npublic static void main(String[] args) {\n// Implement the interface\nFilter<Integer> customFilter = new Filter<Integer>() {\npublic boolean filter(Integer number) {\nreturn number < 10;\n}\n};\n\nint[] numbers = {1, 4, 11};\n\n// Use our custom filter\nArrayList<Integer> result = filter(numbers, customFilter);\n\n// Print result\nfor (int i = 0; i < result.size(); i++) {\nSystem.out.println(result.get(i));\n}\n}\n}\n``````\n\n## Lambdas\n\nA lambda is just a shorter way to define our single method interface. 
The example would look like this using a lambda:

```java
import java.util.ArrayList;

public class Interface {
  private interface Filter<T> {
    public boolean filter(T t);
  }

  public static ArrayList<Integer> filter(int[] numbers, Filter<Integer> filterFunction) {
    ArrayList<Integer> filtered = new ArrayList<Integer>();
    for (int i = 0; i < numbers.length; i++) {
      if (filterFunction.filter(numbers[i])) {
        filtered.add(numbers[i]);
      }
    }

    return filtered;
  }

  public static void main(String[] args) {
    // This is the only part that changed
    Filter<Integer> customFilter = (Integer number) -> {
      return number < 10;
    };

    int[] numbers = {1, 4, 11};

    ArrayList<Integer> result = filter(numbers, customFilter);

    for (int i = 0; i < result.size(); i++) {
      System.out.println(result.get(i));
    }
  }
}
```

The compiler can infer the types of the arguments, so we can further simplify the lambda:

```java
Filter<Integer> customFilter = (number) -> {
  return number < 10;
};
```

For single statement lambdas we can omit the braces and the return keyword:

```java
Filter<Integer> customFilter = (number) -> number < 10;
```

For single argument lambdas we can also omit the parentheses:

```java
Filter<Integer> customFilter = number -> number < 10;
```

## Conclusion

This article shows the syntax of lambda expressions and how they can be used to pass a function as an argument.
https://kgptalkie.com/2d-cnn-in-tensorflow-2-0-on-cifar-10-object-recognition-in-images/
"# 2D CNN in TensorFlow 2.0 on CIFAR-10 – Object Recognition in Images\n\n## What is CNN\n\nThis Notebook demonstrates training a simple Convolutional Neural Network (CNN) to classify CIFAR images.\n\n`Convolutional Neural Networks` (ConvNets or CNNs) are a category of Neural Networks that have proven very effective in areas such as image recognition and classification. Unlike traditional multilayer perceptron architectures, it uses two operations called `convolution` and `pooling` to reduce an image into its essential features, and uses those features to understand and classify the image.\n\n## Convolution Layer\n\nConvolution is the first layer to extract features from an input image. Convolution preserves the relationship between pixels by learning image features using small squares of input data. It is a mathematical operation that takes two inputs such as `image matrix` and a `filter` or `kernel`.Then the convolution of image matrix multiplies with filter matrix which is called `Feature Map`.\n\nConvolution of an image with different filters can perform operations such as edge detection, blur and sharpen by applying filters.\n\n## Activation Function\n\nSince convolution is a linear operation, and images are far from linear, nonlinearity layers are often placed directly after the convolution layer to introduce `nonlinearity` to the activation map.\n\nThere are several types of nonlinear operations, the popular ones being:\n\n`Sigmoid`: The sigmoid nonlinearity has the mathematical form f(x) = 1 / 1 + exp(-x). It takes a real-valued number and squeezes it into a range between 0 and 1. Sigmoid suffers a `vanishing gradient` problem, which is a phenomenon when a local gradient becomes very small and backpropagation leads to killing of the gradient.\n\n`Tanh`: Tanh squashes a real-valued number to the range [-1, 1]. 
Like sigmoid, the activation saturates, but unlike the sigmoid neurons, its output is `zero-centered`.

`ReLU`: The Rectified Linear Unit (ReLU) computes the function f(x) = max(0, x). In other words, the activation is simply thresholded at zero. In comparison to sigmoid and tanh, ReLU is more reliable and accelerates convergence by up to six times.

`Leaky ReLU`: The Leaky ReLU function is an improved version of the ReLU function, defined to address the problem of ReLU units dying on negative inputs. Instead of defining the function as 0 for negative values of x, we define it as an extremely small linear component of x.

`Maxout`: The Maxout activation is a generalization of the ReLU and leaky ReLU functions. It is a learnable activation function.

`ELU`: The `Exponential Linear Unit`, or ELU for short, is also a variant of the Rectified Linear Unit (ReLU) that modifies the slope of the negative part of the function. Unlike the leaky ReLU and parametric ReLU functions, instead of a straight line, ELU uses a log curve for defining the negative values.

## Filter | Kernel Size | Number of Filters

Convolution uses a `kernel` to extract certain `features` from an input image. A kernel is a matrix which is slid across the image and multiplied with the input such that the output is enhanced in a certain desirable manner.

Before we dive in: a kernel is a matrix of weights which are multiplied with the input to extract relevant features. The dimensions of the kernel matrix are how the convolution gets its name. For example, in `2D convolutions`, the kernel matrix is a `2D matrix`.

A `filter`, however, is a concatenation of `multiple kernels`, each kernel assigned to a particular channel of the input. Filters are always one dimension more than the kernels. For example, in 2D convolutions, filters are 3D matrices.
So for a CNN layer with kernel dimensions h×w and input channels k, the filter dimensions are k×h×w.

A common convolution layer actually consists of multiple such filters.

## Stride Size

`Stride` is the number of pixels the filter shifts over the input matrix. When the stride is 1, we move the filter 1 pixel at a time; when the stride is 2, we move the filter 2 pixels at a time, and so on. The figure below shows how convolution works with a stride of 1.

## Padding

`Padding` means giving additional pixels at the boundary of the data. Sometimes the filter does not perfectly fit the input image, in which case we have two options:

• Pad the image (typically with zeros) so that the filter fits. This is called same padding, which keeps the output the same spatial size as the input.
• Drop the part of the image where the filter did not fit. This is called valid padding, which keeps only the valid part of the image.

## Pooling Layer

A `pooling layer` is a new layer added after the convolutional layer, specifically after a nonlinearity (e.g. ReLU) has been applied to the feature maps output by that convolutional layer.

Pooling layers reduce the number of parameters when the images are too large. `Spatial pooling`, also called `subsampling` or `downsampling`, reduces the dimensionality of each map but retains the important information.

Spatial pooling can be of different types:

• Max Pooling
• Average Pooling
• Sum Pooling

`Max pooling` takes the largest element from the rectified feature map. Calculating the average value for each patch on the feature map is called `average pooling`. Taking the sum of all elements for each patch in the feature map is called `sum pooling`.

## Flattening and Dense Layer

`Flattening` is converting the data into a 1-dimensional array for input to the next layer. We flatten the output of the convolutional layers to create a single long feature vector, which is connected to the final classification model, called a `fully-connected layer`.

`Fully connected layer`: a traditional multilayer perceptron structure.
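The stride and padding choices described above determine the spatial size of a layer's output; this can be checked with the standard convolution-arithmetic formula (a small helper sketch; the function name is ours rather than the tutorial's):

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Spatial output size of a convolution: floor((W - K + 2P) / S) + 1."""
    return (input_size - kernel_size + 2 * padding) // stride + 1

print(conv_output_size(32, 3, stride=1, padding=1))  # 32 ('same' padding for a 3x3 kernel)
print(conv_output_size(32, 3, stride=1, padding=0))  # 30 ('valid' padding)
print(conv_output_size(32, 2, stride=2))             # 16 (a 2x2 pooling window with stride 2)
```

The same formula also covers pooling windows, since pooling is just a strided window pass with no padding.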
Its input is a one-dimensional vector representing the output of the previous layers. Its output is a list of probabilities for the different possible labels attached to the image (e.g. dog, cat, bird). The label that receives the highest probability is the classification decision.

!pip install tensorflow

!pip install mlxtend

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPool2D, Dropout
print(tf.__version__)
```
```
2.1.1
```
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from tensorflow.keras.datasets import cifar10
```

The `CIFAR10` dataset contains 60,000 color images in `10 classes`, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.

```python
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
```
```
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 50s 0us/step
```
```python
classes_name = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
```
```python
X_train.max()
```
```
255
```
```python
X_train = X_train/255
X_test = X_test/255
X_train.shape, X_test.shape
```
```
((50000, 32, 32, 3), (10000, 32, 32, 3))
```

## Verify the data

To verify that the dataset looks correct, let's plot the first image from the test set and display it.

```python
plt.imshow(X_test[0])
```
```
<matplotlib.image.AxesImage at 0x7fc1e4167ed0>
```
```python
y_test
```
```
array([[3],
       [8],
       [8],
       ...,
       [5],
       [1],
       [7]], dtype=uint8)
```

## Build CNN Model

The 8 lines of code below define the convolutional base using a common pattern: a stack of `Conv2D`, `MaxPooling2D`, `Dropout`, `Flatten` and `Dense` layers.

As input, a `Conv2D` takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. In this
example, you will configure the Conv2D to process inputs of shape (32, 32, 3), which is the format of CIFAR images.

The `Maxpool2D()` layer `downsamples` the input representation by taking the maximum value over the window defined by `pool_size` (2, 2) for each dimension along the features axis. The window is shifted by `strides` (2) in each dimension.

`Dropout()` is used to randomly set the outgoing edges of hidden units to 0 at each update of the training phase. The value passed to dropout specifies the probability at which outputs of the layer are dropped out.

`Flatten()` is used to convert the data into a 1-dimensional array for input to the next layer.

The `Dense()` layer is the regular deeply connected neural network layer, here with 128 neurons. The output layer is also a dense layer, with 10 neurons for the 10 classes.

The activation function used in the output layer is `softmax`. Softmax converts a real vector to a vector of categorical probabilities. The elements of the output vector are in the range (0, 1) and sum to 1.
Softmax is often used as the activation for the last layer of a classification network because the result can be interpreted as a probability distribution.

```python
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))  # layer stack recovered from the summary below; the dropout rate itself is assumed
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```
```python
model.summary()
```
```
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 32, 32, 32)        896
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 32, 32, 32)        9248
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 32)        0
_________________________________________________________________
dropout (Dropout)            (None, 16, 16, 32)        0
_________________________________________________________________
flatten (Flatten)            (None, 8192)              0
_________________________________________________________________
dense (Dense)                (None, 128)               1048704
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 1,060,138
Trainable params: 1,060,138
Non-trainable params: 0
_________________________________________________________________
```

## Compile and train the model

Here we compile the model and fit it to the training data. We will use 10 `epochs` to train the model; an epoch is one iteration over the entire data provided. `validation_data` is the data on which to evaluate the `loss` and any model metrics at the end of each epoch. The model will not be trained on this data.
With metrics = `['sparse_categorical_accuracy']`, the model will be evaluated based on `accuracy`.

```python
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['sparse_categorical_accuracy'])
```
```python
history = model.fit(X_train, y_train, batch_size=10, epochs=10, verbose=1, validation_data=(X_test, y_test))
```
```
Train on 50000 samples, validate on 10000 samples
Epoch 1/10
50000/50000 [==============================] - 177s 4ms/sample - loss: 1.4127 - sparse_categorical_accuracy: 0.4918 - val_loss: 1.1079 - val_sparse_categorical_accuracy: 0.6095
Epoch 2/10
50000/50000 [==============================] - 159s 3ms/sample - loss: 1.1058 - sparse_categorical_accuracy: 0.6091 - val_loss: 1.0284 - val_sparse_categorical_accuracy: 0.6377
Epoch 3/10
50000/50000 [==============================] - 146s 3ms/sample - loss: 0.9946 - sparse_categorical_accuracy: 0.6477 - val_loss: 0.9682 - val_sparse_categorical_accuracy: 0.6564
```

We will now plot the `model accuracy` and `model loss`.
In the model accuracy plot we show the training accuracy and validation accuracy, and in the model loss plot we show the training loss and validation loss.

```python
# Plot training & validation accuracy values
epoch_range = range(1, 11)
plt.plot(epoch_range, history.history['sparse_categorical_accuracy'])
plt.plot(epoch_range, history.history['val_sparse_categorical_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(epoch_range, history.history['loss'])
plt.plot(epoch_range, history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.show()

test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
```
```
10000/10000 - 5s - loss: 0.9383 - sparse_categorical_accuracy: 0.6830
```
```python
from mlxtend.plotting import plot_confusion_matrix
from sklearn.metrics import confusion_matrix
```
```python
y_pred = model.predict_classes(X_test)
```
```python
y_pred
```
```
array([3, 8, 8, ..., 5, 1, 7])
```
```python
y_test
```
```
array([[3],
       [8],
       [8],
       ...,
       [5],
       [1],
       [7]], dtype=uint8)
```
```python
mat = confusion_matrix(y_test, y_pred)
mat
```
```
array([[737,  27,  22,  17,  14,   4,  12,  14, 106,  47],
       [ 20, 821,   3,  12,   0,   7,   5,   4,  47,  81],
       [ 95,   8, 476,  97,  83, 110,  67,  30,  21,  13],
       [ 34,  14,  42, 520,  52, 203,  58,  43,  21,  13],
       [ 22,   4,  74, 118, 570,  69,  54,  66,  21,   2],
       [ 23,   5,  34, 213,  24, 610,  17,  47,  16,  11],
       [ 10,   8,  34,  80,  42,  40, 760,   8,  13,   5],
       [ 26,   5,  23,  45,  51,  76,   5, 743,  12,  14],
       [ 56,  41,   9,  10,   3,   4,   3,   2, 843,  29],
       [ 43, 116,   5,  18,   6,   4,   5,  21,  32, 750]])
```
```python
plot_confusion_matrix(mat, figsize=(9, 9), class_names=classes_name, show_normed=True)
```
```
(<Figure size 648x648 with 1 Axes>,
 <matplotlib.axes._subplots.AxesSubplot at 0x7fc12758d910>)
```

Conclusion:

In this tutorial we have trained a simple Convolutional Neural Network (CNN) to
classify CIFAR images. From the plot of the learning curves we observed that after about 3 epochs the validation accuracy falls below the training accuracy, which indicates that our model is overfitting; in other words, we have increased the complexity of the model beyond what the data supports. We also evaluated the model using a confusion matrix, and observed that the model predicts with lower accuracy for the bird, cat, deer, dog, etc. labels.
https://metacademy.org/graphs/concepts/np_complexity_class
"# NP complexity class\n\n(1.7 hours to learn)\n\n## Summary\n\nNP, or \"nondeterministic polynomial,\" is the complexity class of problems with \"polynomial time verifiers.\" I.e., if the problem has a solution, it must be possible to verify the solution in polynomial time, even if finding the solution is much harder. Equivalently, it is the class of problems which can be solved efficiently by nondeterministic Turing machines. The \"P vs. NP\" question, whether all NP problems can be solved in worst-case polynomial time, is the central question in computational complexity theory.\n\n## Context\n\nThis concept has the prerequisites:\n\n## Goals\n\n• Define the complexity classes P and NP\n• Be able to show that problems are in NP\n• Be aware of the \"P vs. NP\" problem and why it's important\n\n## -Free-\n\nCoursera: Automata\nAn introductory course on automata and the theory of computation.\nLocation:\nAuthor: Jeffrey D. Ullman\nOther notes:\n• Click on \"Preview\" to see the lecture videos."
https://www.therevcounter.co.uk/threads/118909-Will-Apple-unlock-an-iPhone?s=129e9b1e612a4ce2cd1c5e45d050bbd7&goto=nextnewest
"TRC is primarily funded by ad revenue. If you like the content you find here, don't block the ads check them out instead. Thank you.\n\nThread: Excel question, 5Y forward date calc.\n\n1.",
null,
"Excel question, 5Y forward date calc.\n\nTRC is primarily funded by ad revenue. If you like\nthe content you find here, don't block the ads check\nthem out instead. Thank you.\nTricky one. In a spreadsheet I have a date. I want to calc 5y forward of that date (easy!) but only UK working days (not so easy!). Any clues as to whether there's a set formula for this or will I have to mumble-swerve?",
2. Re: Excel question, 5Y forward date calc.

Does Excel have 'NETWORKDAYS(DATE_1,DATE_2)' ?? (I know google sheets has it cos I used it recently)
3. Re: Excel question, 5Y forward date calc.

Originally Posted by RiceBurner
Does Excel have 'NETWORKDAYS(DATE_1,DATE_2)' ?? (I know google sheets has it cos I used it recently)

The lad's good, but not quite there....

NETWORKDAYS allows you to calc the number of business days between from date and to date. It'll give you the number of business days, but won't calc an end date from the start. I'm looking for something that will give me:

STARTDATE+5Y, with the caveat that the calculated end date has to be a business day*

*We then get into the realm of public holidays, roll conventions, etc., but let's get the basics sorted first.
4. Re: Excel question, 5Y forward date calc.

I thought you had oiks to do real work for you?
5. Re: Excel question, 5Y forward date calc.

Did you want, if the result isn't a working day, the next working day after or the last working day?

If the latter, possibly:
=WORKDAY(EOMONTH(B1,59)+DAY(B1),5)-7
where B1 is the start date.
6. Re: Excel question, 5Y forward date calc.

Originally Posted by Yorick
I thought you had oiks to do real work for you?

Every once in a while I have to remind people of my rare genius.....
7. Re: Excel question, 5Y forward date calc.

Originally Posted by Kneerly Down
Did you want, if the result isn't a working day, the next working day after or the last working day?

If the latter, possibly:
=WORKDAY(EOMONTH(B1,59)+DAY(B1),5)-7
where B1 is the start date.

Welcome to the murky world of day count conventions.....

I'm after the Modified Following day convention, whereby the date rolls forward to the next good business day, unless it would roll into the next month, in which case it rolls back to the previous good day.
8. Re: Excel question, 5Y forward date calc.

Would need to check for leap year and also won't account for non-weekend holiday days but:

=EOMONTH(B1,59)+DAY(B1)+IF(WEEKDAY(EOMONTH(B1,59)+DAY(B1),16)=1,IF(DAY(EOMONTH(B1,59))-DAY(B1)<2,-1,2),IF(WEEKDAY(EOMONTH(B1,59)+DAY(B1),16)=2,IF(DAY(EOMONTH(B1,59))=DAY(B1),-2,1),0))
9. Re: Excel question, 5Y forward date calc.

Would be easier doing a function rather than formula tbh.
10. Re: Excel question, 5Y forward date calc.

Originally Posted by Kneerly Down
Would be easier doing a function rather than formula tbh.

I could have done it for him when I was still working. But brain dead now
11. Re: Excel question, 5Y forward date calc.

Oh, and if using a 'newer' version of Excel use EDATE, so:
=EDATE(B1,60)+IF(WEEKDAY(EDATE(B1,60),16)=1,IF(DAY(EOMONTH(B1,60))-DAY(B1)<2,-1,2),IF(WEEKDAY(EDATE(B1,60),16)=2,IF(DAY(EOMONTH(B1,60))=DAY(B1),-2,1),0))

where 'newer' is since Excel 2007 IIRC

CORRECTED. EOMONTH should have used 60 all along, including the above, I think. Bit rusty on this!
Still might not be right for Leap Year Febs.
Anyway, get one of your oiks to do the checking!
null,
"",
null,
"",
null,
"Reply With Quote"
] | [] | {}
http://forums.informationbuilders.com/eve/forums/a/tpc/f/7971057331/m/7441097331 | [
"",
null,
"As of December 1, 2020, Focal Point is retired and repurposed as a reference repository.\nFocal Point",
null,
"Focal Point Forums",
null,
"WebFOCUS/FOCUS Forum on Focal Point",
null,
"[CLOSED] Median or Standard Deviation",
"Silver Member",
null,
"posted August 10, 2004 03:03 PM\nHi,\n\nI tried searching the board but didn't find anything. Is there a way to calculate the median or standard deviation of a data set in WebFOCUS?\n\nThanks,\nStephen\n\n Posts: 38 | Registered: May 20, 2004",
null,
"Silver Member",
null,
"posted August 10, 2004 03:15 PM\nStephen,\nThis is an old problem for WF.\nI just calculated standard deviation yesterday with WF 5.2.1 on AIX (self-service application).\nUnfortunately you have to write the formula yourself using the prefixes ASQ. and AVE. and then SQRT.\nIt would be better for you to use \"Sorting with multiple Display Commands\" (see the manual).\nPaolo\n\n Posts: 36 | Location: Bologna Italy | Registered: March 11, 2004",
null,
"Silver Member",
null,
"posted August 10, 2004 03:16 PM\nHmmm. Thanks, I'll give it a try.\n\nStephen\n\n Posts: 38 | Registered: May 20, 2004",
null,
"Platinum Member",
null,
"posted August 10, 2004 03:57 PM\nThe formula for Standard Deviation in an example:\nTABLE FILE CAR\nSUM ASQ.WEIGHT AVE.WEIGHT CNT.WEIGHT\nCOMPUTE STD=((ASQ.WEIGHT-AVE.WEIGHT*AVE.WEIGHT)**.5)/CNT.WEIGHT;\nEND\n\nThe reason it's not a function in WebFOCUS is that\noften the denominator has to be (N-1) and not N,\nas in this example.\n\n Posts: 226 | Registered: June 08, 2003",
null,
"Platinum Member",
null,
"posted August 10, 2004 04:06 PM\nThe formula for MEDIAN is shown in this example:\nTABLE FILE CAR\nPRINT WEIGHT NOPRINT\nCOMPUTE MEDIAN=MEDIAN + 1;\nWHERE TOTAL MEDIAN EQ 9;\nBY WEIGHT\nEND\n\nNote that the number of values has to be known\nin advance (9). But you can find this out\nautomatically, e.g.\nTABLE FILE CAR\nPRINT WEIGHT AND HOLD\nEND\n-SET &COUNT = &LINES;\nTABLE FILE CAR\nPRINT WEIGHT NOPRINT\nCOMPUTE MEDIAN=MEDIAN + 1;\nWHERE TOTAL MEDIAN EQ &LINES;\nBY WEIGHT\nEND\n\n Posts: 226 | Registered: June 08, 2003",
null,
"Platinum Member",
null,
"posted August 10, 2004 04:09 PM\nSlight error in median:\nTABLE FILE CAR\nPRINT WEIGHT AND HOLD\nEND\n-SET &COUNT = &LINES;\nTABLE FILE CAR\nPRINT WEIGHT NOPRINT\nCOMPUTE MEDIAN=MEDIAN + 1;\nWHERE TOTAL MEDIAN EQ &COUNT;\nBY WEIGHT\nEND\n\nIf &COUNT is even, then often that value and the next one are used and averaged.\n\n Posts: 226 | Registered: June 08, 2003",
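The rule the two posts above implement — sort the values, count them, and take the middle row, averaging the two middle rows when the count is even — is easy to restate outside FOCUS. A small Python sketch of the same logic (the FOCUS version selects the middle row with a WHERE TOTAL test instead of indexing a sorted list):

```python
def focus_style_median(values):
    # Sort the values, then pick the middle one; when the count is even,
    # average the two middle values.
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2
```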
null,
"Silver Member",
null,
"posted August 10, 2004 04:34 PM\nGreat, thanks.\n\n Posts: 38 | Registered: May 20, 2004",
null,
"Expert",
null,
"posted August 10, 2004 04:57 PM\n\n Posts: 3811 | Location: Manhattan | Registered: October 28, 2003",
null,
"<Pietro De Santis>",
null,
"posted August 10, 2004 05:36 PM\nYikes! I hope I never have to do that!",
null,
"",
null,
"Member",
null,
"posted August 10, 2004 06:25 PM\nWhile Gerry's answer is (of course) correct, it's fairly simple, and uses TWO passes. The MEDIAN can be calculated with a single pass, and within sort fields, but it's a bit more complex. Below is annotated code to allow this. It uses multi-verb requests, as well as COMPUTEs and WHERE TOTAL tests.\n\n-* median is the 'middle' value (if odd number of values), or the\n-* average of the two middle values (if even number)\n-* when all the values are sorted\nTABLE FILE &FILE\n-* use cnt. to get number of instances\nWRITE CNT.&FIELD NOPRINT\nCOMPUTE\n-* use this if even number of instances\nM1/I4=INT((CNT.&FIELD + 1)/2); NOPRINT COMPUTE\n-* use this too for even. if odd number of instances - same as m1\nM2/I4=INT((CNT.&FIELD + 2)/2); NOPRINT\n-* sort field(s) within which to calculate median\nBY &BYFIELD\n-* get details in sort fields and 'median'ed field order\nPRINT\nCOMPUTE\n-* concat all by fields to compute the xlist (for when to reset)\nBYFIELDS/A10=&BYFIELD; NOPRINT\n-* use this compute to order the field within sorts\nCOMPUTE XLIST/I5=IF BYFIELDS EQ LAST BYFIELDS THEN XLIST + 1 ELSE 1; NOPRINT\n-* get cumulative values for median calculation\n-* if count is odd, M1 = M2, and only one value. If even, need average\nCOMPUTE C&FIELD/&FMT = IF M1 NE M2\nTHEN (LAST &FIELD + &FIELD)/2 ELSE &FIELD;\nAS 'MEDIAN,&FIELD'\nBY &BYFIELD BY &FIELD\n-* only look at possible median values (m1 = m2 if odd, only m2 if even)\nWHERE TOTAL XLIST EQ M2\nEND\n\n Posts: 25 | Location: 2 Penn Plaza 28 fl | Registered: March 27, 2003",
null,
"Silver Member",
null,
"posted August 11, 2004 11:36 AM\nGerald,\nthe formula is wrong.\nA wonderful old IBI manual, \"Statistical Analyse\", says that for computational purposes you use:\nTABLE FILE CAR\nSUM ASQ.WEIGHT AVE.WEIGHT CNT.WEIGHT\nCOMPUTE STD=((ASQ.WEIGHT-CNT.WEIGHT*AVE.WEIGHT*AVE.WEIGHT)**.5)/(CNT.WEIGHT-1);\nEND\n\nPaolo\n\n Posts: 36 | Location: Bologna Italy | Registered: March 11, 2004",
null,
"Platinum Member",
null,
"posted August 11, 2004 03:56 PM\nPaolo,\nNope, your formula would give a negative\nnumber. Actually you can take my formula\nand multiply by cnt.x/(cnt.x - 1) to adjust for the degrees of freedom.\n\n Posts: 226 | Registered: June 08, 2003",
null,
"Member",
null,
"posted August 11, 2004 05:37 PM\nSince we're talking FOCUS, you can get both standard deviation and variance with a LET, as follows:\n\nLET\n-* Population variance\nPVAR = COMPUTE PVAR.<1>=(ASQ.<1>-(AVE.<1>*AVE.<1>));;\n-* sample variance (population variance times N/(N-1))\nSVAR = CNT.<1> NOPRINT COMPUTE # SVAR2 <1>;\nSVAR2= SVAR.<1>=(ASQ.<1>-(AVE.<1>*AVE.<1>)) * (CNT.<1>/(CNT.<1>-1));;\n-* population standard deviation\nPSDEV = PVAR <1> # NOPRINT COMPUTE PSDEV.<1>=SQRT(PVAR.<1>);;\n-* sample standard deviation\nSSDEV = SVAR <1> # NOPRINT COMPUTE SSDEV.<1>=SQRT(SVAR.<1>);;\nEND\n\nSo, to get the POPULATION variance of WEIGHT, you'd use:\n\nPVAR WEIGHT\n\nTo get the SAMPLE standard deviation, you'd use:\n\nSSDEV WEIGHT\n\n Posts: 25 | Location: 2 Penn Plaza 28 fl | Registered: March 27, 2003",
null,
"Silver Member",
null,
"posted August 12, 2004 10:17 AM\nI'm sorry, but I repeat that the formula is wrong.\n\nI tried\n\nSTDEVA1=(ASQ.var-(AVE.var*AVE.var))*(CNT.var/(CNT.var-1));\nSTDEV1=SQRT(STDEVA1);\nand\nSTDEVA=(ASQ.var-(CNT.var*(AVE.var)**2))/(CNT.var-1));\nSTDEV=SQRT(STDEVA);\n\non my sample:\nvar\n6,97\n16,99\n19,1\n45,52\n25,64\n3,02\n384,81\n26,00\n\nThen I tested both against Excel,\nand the good formula is my formula STDEV.\n\nSTDEV1 = 411,04\nSTDEV = 129,47\nST.DEV (Excel) = 129,4718873\n\nPaolo\n\n Posts: 36 | Location: Bologna Italy | Registered: March 11, 2004",
null,
"Member",
null,
"posted August 17, 2004 02:40 PM\nPaolo,\n\nI'm not sure what your ENTIRE request was, but I took your data, loaded it into a file, and ran your code, as follows. I had to remove the final right paren (one too many), and got the following results:\n\nSTDEVA1 STDEV1 STDEVA STDEV\n------- ------ ------ -----\n16,762.95 129.47 -2,261.45 .00\n\nSeems correct to me, for the calculation we gave.\n\nHere's the code:\n\nTABLE FILE PAOLO\nSUM CNT.DATA NOPRINT\nCOMPUTE\nSTDEVA1=(ASQ.DATA-(AVE.DATA*AVE.DATA))*(CNT.DATA/(CNT.DATA-1));\nSTDEV1=SQRT(STDEVA1);\nSTDEVA=(ASQ.DATA-(CNT.DATA*(AVE.DATA)**2))/(CNT.DATA-1);\nSTDEV=SQRT(STDEVA);\nEND\n\nNow, my Stats book defines VARIANCE (St. Dev. squared) as:\n\n(n * SUM(x**2) - (SUM(x))**2)/n**2\n\nwhich can be re-arranged as:\n\n(n * SUM(x**2))/n**2 - (SUM(x)**2)/n**2\n\nwhich becomes:\n\nSUM(x**2)/n - (SUM(x)/n)**2\n\nThe first term is the definition of ASQ; the second is the average squared. The last part is giving sample values (uses n-1 instead of n instances). This is the calculation given.\n\n Posts: 25 | Location: 2 Penn Plaza 28 fl | Registered: March 27, 2003",
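The rearrangement above, SUM(x**2)/n - (SUM(x)/n)**2, is the standard one-pass identity for the population variance. As a quick numerical sanity check (not part of the original thread), the Python sketch below compares it with the definitional two-pass form and reproduces the 129.47 sample standard deviation reported for Paolo's data earlier in the thread:

```python
def pop_var_one_pass(xs):
    # "ASQ minus AVE squared": mean of squares minus square of the mean.
    n = len(xs)
    asq = sum(x * x for x in xs) / n
    ave = sum(xs) / n
    return asq - ave * ave

def pop_var_two_pass(xs):
    # Definitional form: average squared deviation from the mean.
    n = len(xs)
    ave = sum(xs) / n
    return sum((x - ave) ** 2 for x in xs) / n

def sample_stdev(xs):
    # Rescale the population variance by n/(n-1) before the square root.
    n = len(xs)
    return (pop_var_two_pass(xs) * n / (n - 1)) ** 0.5
```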
null,
"Member",
null,
"posted August 17, 2004 08:22 PM\nNot to belabor the point, but I found an old ANALYSE manual, dated 10/91. The formula for VARIANCE (Std. Dev. squared) is given as:\n\n(SUM(x**2) - n * AVE.x**2)/(n-1)\n\nThat first term is ASQ.x * n, NOT ASQ.x.\n\nMaybe THAT's what's causing the confusion.\n\n Posts: 25 | Location: 2 Penn Plaza 28 fl | Registered: March 27, 2003",
null,
"Silver Member",
null,
"posted August 19, 2004 04:39 PM\nThanks to all for the input.\n\n Posts: 38 | Registered: May 20, 2004",
null,
"Silver Member",
null,
"posted August 23, 2004 12:55 PM\nI must apologize:\nI used ASQ.x * n, while I wrote ASQ.x without the n.\n\nMy excuse is that\nmy query is very complex, with multiple\nsort fields, and I copied it into the post incorrectly.\nPaolo\n\n Posts: 36 | Location: Bologna Italy | Registered: March 11, 2004",
null,
"Virtuoso",
null,
"posted May 01, 2012 09:24 AM\nquote:\nSUM(x**2)/n - (SUM(x)/n)**2\n\nThe first term is the definition of ASQ; the second is the average squared. The last part is giving sample values (uses n-1 instead of n instances). This is the calculation given.\n\nI was looking for a standard deviation function myself. The above definition is exactly what Wikipedia gives: sigma = SQRT(E[x**2] - (E[x])**2).",
null,
"In FOCUS, that appears to translate to:\n```COMPUTE STD = (ASQ.WEIGHT - (AVE.WEIGHT ** 2)) ** 0.5;\n```\n\nWebFOCUS 8.1.03, Windows 7-64/2008-64, IBM DB2/400, Oracle 11g & RDB, MS SQL-Server 2005, SAP, PostgreSQL 11, Output: HTML, PDF, Excel 2010\n: Member of User Group Benelux :\n\n Posts: 1669 | Location: Enschede, Netherlands | Registered: August 12, 2010",
null,
"Gold member",
null,
"posted May 02, 2012 01:22 PM\nThis is an example of the standard deviation code I have used for years.\n```-* File Standard_Deviation.fex\nLET\n-* Population variance\nPVAR = COMPUTE PVAR.<1>=(ASQ.<1>-(AVE.<1>*AVE.<1>));;\n-* sample variance (population variance times N/(N-1))\nSVAR = CNT.<1> NOPRINT COMPUTE # SVAR2 <1>;\nSVAR2 = SVAR.<1>=(ASQ.<1>-(AVE.<1>*AVE.<1>)) * (CNT.<1>/(CNT.<1>-1)) ;;\n-* population standard deviation\nPSDEV = PVAR <1> # NOPRINT COMPUTE PSDEV.<1>=SQRT(PVAR.<1>);;\n-* sample standard deviation\nSSDEV = SVAR <1> # NOPRINT COMPUTE SSDEV.<1>=SQRT(SVAR.<1>);;\nEND\n\nSET ASNAMES=ON\n\nTABLE FILE CAR\nSUM\nAVE.MPG AS AVE_MPG\nMIN.MPG AS MIN_MPG\nMAX.MPG AS MAX_MPG\nSSDEV MPG AS SD_MPG\nCNT.MPG AS N\nBY COUNTRY\nON TABLE HOLD AS HLD_STATS\nEND\nDEFINE FILE HLD_STATS\nSE_MPG/D16.6 = SD_MPG/SQRT(N);\nEND\nTABLE FILE HLD_STATS\nSUM\nAVE_MPG\nMIN_MPG\nMAX_MPG\nSD_MPG\nN\nSE_MPG\nBY COUNTRY\nEND\n```\n\n Posts: 60 | Location: Ellensburg Washington | Registered: May 22, 2009",
null,
"Platinum Member",
null,
"posted May 03, 2012 08:47 AM\nNot a single word about ANALYSE, which was supposed to be a SAS competitor.\nWhat has become of us\n(Ashes to ashes, if you prefer)\n\nFocus Mainframe 7.6.11\nDev Studio 7.6.11 and !!!\nPC Focus, Focus for OS/2, FFW Six, MSO\n\n Posts: 134 | Registered: November 06, 2007",
null,
"Gold member",
null,
"posted May 04, 2012 01:01 PM\nANALYSE code ran for many versions after it was no longer documented. I had to rewrite a lot of reports upon its demise.\n\n Posts: 60 | Location: Ellensburg Washington | Registered: May 22, 2009",
null,
"Platinum Member",
null,
"posted June 27, 2012 04:46 AM\nBut it still works, even if not maintained.\nThe difficult point is that the Masters created by ANALYSE are not correct, and FOCUS can't read the data created by ANALYSE.\nI manually correct the Master, and then I can have ANALYSE at work.\nOf course, it's not very professional, but some old cowboys like to spit into a copper spittoon and drink an old scotch with old Focusian friends.\nFocusely and Cordially\n\nFocus Mainframe 7.6.11\nDev Studio 7.6.11 and !!!\nPC Focus, Focus for OS/2, FFW Six, MSO\n\n Posts: 134 | Registered: November 06, 2007",
null,
"Virtuoso",
null,
"posted June 02, 2014 05:37 AM\nquote:\nOriginally posted by Wep5622:\nquote:\nSUM(x**2)/n - (SUM(x)/n)**2\n\nThe first term is the definition of ASQ; the second is the average squared. The last part is giving sample values (uses n-1 instead of n instances). This is the calculation given.\n\nI was looking for a standard deviation function myself. The above definition is exactly what Wikipedia gives: sigma = SQRT(E[x**2] - (E[x])**2).",
null,
"In FOCUS, that appears to translate to:\n```COMPUTE STD = (ASQ.WEIGHT - (AVE.WEIGHT ** 2)) ** 0.5;\n```\n\nI just found out that this formula is incorrect!\n\nThe definition of the formula (according to Wikipedia) contains SUM((x - AVE.x)**2), which is not equivalent to SUM(x**2) - SUM(AVE.x**2)!\n\nExpanding (if I remember my high-school math correctly):\n``` s = SUM(x**2 - 2*x*AVE.x + AVE.x**2)\n<=> s = SUM(x**2) - 2*AVE.x*SUM(x) + CNT.x*AVE.x**2\n```\n\nI think this is a correct implementation:\n```TABLE FILE EXAMPLE\nSUM AVE.X AS X_AVG\nBY GROUP\nON TABLE HOLD AS EXAMPLE_AVG FORMAT FOCUS INDEX GROUP\nEND\n\nJOIN GROUP IN EXAMPLE TO GROUP IN EXAMPLE_AVG AS J0\nDEFINE FILE EXAMPLE\nDEV/D20.2 = (X - X_AVG) **2;\nEND\nTABLE FILE EXAMPLE\nCOMPUTE DEV_AVG_SMP/D20.2 = DEV / (CNT.X -1); NOPRINT\n\nCOMPUTE STDDEV_POP/D20.2 = AVE.DEV ** 0.5;\nCOMPUTE STDDEV_SMP/D20.2 = DEV_AVG_SMP ** 0.5;\nBY GROUP\nEND\n```\n\nThe complexity of this issue raises the question of why there is no standard implementation in WebFOCUS! Do we have an ETA for that? It's dearly needed.\n\nWebFOCUS 8.1.03, Windows 7-64/2008-64, IBM DB2/400, Oracle 11g & RDB, MS SQL-Server 2005, SAP, PostgreSQL 11, Output: HTML, PDF, Excel 2010\n: Member of User Group Benelux :\n\n Posts: 1669 | Location: Enschede, Netherlands | Registered: August 12, 2010",
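The grouped, two-pass structure of the FOCUS code above (one pass for the per-group averages, a second pass for the squared deviations) can be mirrored in plain Python; the group labels and values in the sketch below are invented purely for illustration:

```python
from math import sqrt

def grouped_stddev(rows):
    # rows: list of (group, x) pairs.
    # First pass: per-group sums and counts, giving the group means.
    sums, counts = {}, {}
    for g, x in rows:
        sums[g] = sums.get(g, 0.0) + x
        counts[g] = counts.get(g, 0) + 1
    means = {g: sums[g] / counts[g] for g in sums}
    # Second pass: per-group sum of squared deviations from the mean.
    ssd = {}
    for g, x in rows:
        ssd[g] = ssd.get(g, 0.0) + (x - means[g]) ** 2
    # Population (divide by n) and sample (divide by n-1) standard deviations.
    pop = {g: sqrt(ssd[g] / counts[g]) for g in ssd}
    smp = {g: sqrt(ssd[g] / (counts[g] - 1)) if counts[g] > 1 else 0.0
           for g in ssd}
    return pop, smp
```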
null,
"Virtuoso",
null,
"posted June 03, 2014 06:26 AM\nEasier yet, but not always an option: use SQL passthru and calculate stddev there.\n\nThat can be done more often than you'd think, because many databases support windowing aggregate functions these days, so you can do stuff like:\n```SELECT stddev_samp(X) over (PARTITION BY GROUP) AS X_STDDEV, X\nFROM EXAMPLE\n;\n```\n\nWhich is quite a bit more flexible than the \"old\":\n```SELECT stddev_samp(X) AS X_STDDEV\nFROM EXAMPLE\nGROUP BY X\n;\n```\n\nWebFOCUS 8.1.03, Windows 7-64/2008-64, IBM DB2/400, Oracle 11g & RDB, MS SQL-Server 2005, SAP, PostgreSQL 11, Output: HTML, PDF, Excel 2010\n: Member of User Group Benelux :\n\n Posts: 1669 | Location: Enschede, Netherlands | Registered: August 12, 2010",
null,
null,
] | [] | {}
https://www.purdue.edu/freeform/statics/2020/10/19/homework-23-b/ | [
"# Homework 23.B",
null,
"We encourage you to communicate with us and with your colleagues in the class through the threaded discussions on the course blog. If you have questions on this homework, please ask here.\n\n## 56 thoughts on “Homework 23.B”\n\n1.",
null,
"Bridget Louise Eilers says:\n\nCan we assume that point L is a horizontal distance d away from M?\n\n1.",
null,
"Vanessa Restrepo Perez says:\n\nBridget, you have enough information to calculate the angle formed by AMC. Once you have that angle, you can find the perpendicular distance from L to MJ. Hope this solves your question.\n\n2.",
null,
"Kate Elizabeth Wilson says:\n\nCan we assume that there is a T joint at L, that is, that MK is perpendicular to LJ? I ask in order to determine whether LJ is a zero-force member.\n\n1.",
null,
"Nayumi Parente says:\n\nI did not assume that MK is perpendicular to LJ, but I did find that LJ is a zero-force member: there are only three forces at L, and ML and LK are collinear, which means the force in LJ must be zero.\n\n1.",
null,
"Ben says:\n\nI did that too.\n\n2.",
null,
"Bridget Louise Eilers says:\n\nHi Kate. Point L actually forms a Y joint which acts the same as a T joint, just instead of the 0 force member being perpendicular it is at an angle. Because we know the angle is not 0, the member LJ must carry no load so that the forces balance. Hope this helps!\n\n3.",
null,
"Camden James Frieling says:\n\nHi Kate, you do not need to determine whether LJ is perpendicular to segment MK because LJ is the non collinear member of a Y-joint. This means that LJ will be a zero force member. Hopefully this is helpful.\n\n3.",
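The Y-joint argument can also be checked numerically: write the two equilibrium equations at the joint and solve them. With two collinear members, one non-collinear member, and no external load at the joint, the non-collinear force always comes out zero. A minimal sketch with made-up direction and load (not the actual geometry of this truss):

```python
import math

# Joint equilibrium at a Y-joint: members 1 and 2 are collinear along a
# unit vector u, member 3 points along a different unit vector v, and no
# external load acts at the joint.  All numbers below are hypothetical.
alpha = math.radians(40.0)                 # made-up direction of member 3
ux, uy = 1.0, 0.0                          # collinear pair taken horizontal
vx, vy = math.cos(alpha), math.sin(alpha)  # member 3 direction

F1 = 250.0   # hypothetical known force in member 1 (N), tension positive

# Force balance:  F1*u - F2*u + F3*v = 0, i.e.
#   -F2*ux + F3*vx = -F1*ux
#   -F2*uy + F3*vy = -F1*uy
# Solve the 2x2 system for F2, F3 by Cramer's rule:
det = (-ux) * vy - vx * (-uy)
F2 = ((-F1 * ux) * vy - vx * (-F1 * uy)) / det
F3 = ((-ux) * (-F1 * uy) - (-F1 * ux) * (-uy)) / det

print(F2)   # ~250.0: the collinear members carry the same force
print(F3)   # ~0.0:   the non-collinear member is a zero-force member
```

Changing `alpha` to any nonzero angle leaves F3 at zero, which is exactly why the perpendicularity of LJ does not matter.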
3. Meredith Meyer says:

    When you find a force that is a compression force, do you plug the negative or the positive value into the other equilibrium equations?

    1. Nathan Wang says:

        What I did is keep assuming tension in the equilibrium equations and use the negative value; this keeps everything consistent and avoids confusion.

    2. Elijah Marcum says:

        I draw the forces pointing away from the point of interest (tension) and then plug in all the calculated values without changing the sign. This should keep all the signs in order.

    3. MarcAntonio Samuel Aragona says:

        There are multiple ways to approach this problem; however, I would draw all the forces pointing in one direction in order to keep the signs consistent throughout the remainder of the problem.
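The tension-positive convention described above can be wrapped in a tiny helper: assume every member force acts away from the joint, solve, and read the sign of the result. A sketch with made-up member names and values (not the solved answers to this homework):

```python
def classify(force, tol=1e-9):
    """Interpret a member force solved under a tension-positive convention."""
    if abs(force) < tol:
        return "zero-force member"
    return "tension" if force > 0 else "compression"

# Hypothetical solved values, tension assumed positive throughout:
solved = {"HJ": -1385.0, "EI": 912.0, "LJ": 0.0}
for member, F in solved.items():
    print(member, abs(F), classify(F))
```

Because the sign already encodes tension versus compression, the negative values can be substituted into later equilibrium equations unchanged, as Nathan and Elijah suggest.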
4. Elena Toledo Monroe says:

    When finding the angle for EI, why are you supposed to find angle IED and not IEH?

    1. Sai Sanjit Ganti says:

        You need angle IED if you want to find the horizontal and vertical components of the load in EI. This is the angle the member makes with the horizontal line, so the components depend on this angle and not on angle IEH. Hope this answers your question.

    2. Alexander Luis Yeverino says:

        My guess here is that by finding angle IED, you can then use geometry more effectively to find angles such as IEH and EIH. I believe this also allows you to find and equate angles DEC, HIE, JKI, and so on, going up the structure.

5. Natalie E Clapp says:

    Do you need to solve for the reaction forces here? I know the problem does not ask for them, but do we need them to solve for what is asked?

    1. Sai Sanjit Ganti says:

        This problem can be solved without solving for the reaction forces at A and C.

    2. Jack Willis Demmy says:

        You do not need to solve for the reactions; you only need the loadings at the top at point M to solve for the members we are looking for.

6. Basil Abu-Baker says:

    I'm not sure how to approach this. Am I supposed to solve for the reaction forces or not?

    1. Sunita Nhemafuki says:

        If you take the upper part of the truss it might be easier, as you don't have to find the reaction forces. However, if you take the lower part, then you will have to find them.

    2. Radhika Kulkarni says:

        I was able to solve the problem without solving for the reaction forces. I suggest making a cut through JH, HK, HI, and IE. This will expose 4 members, some of which are zero-force members. After you find which ones are 0, you can solve for the rest using moment equations or sum-of-forces equations. Hope that helps!

    3. Elijah Marcum says:

        I solved it without needing the reactions: if you take the FBD cut that does not include the points of reaction, you should not need those forces. I have found this is usually the easiest way to solve the problem.

7. Mike says:

    Do final answers need to be given in terms of the magnitude and then tension/compression, or should we include the sign (+/-) as well?

    1. Macy Kaylyn Wohlford says:

        I leave my answers in terms of +/- and then just explain whether this means tension or compression. I think you can do it either way.

    2. Matthew Alan Harris says:

        As stated by others, I believe it is your choice to include the (+/-) signs or to just list the magnitudes and state the nature of the forces. Personally, I list the magnitudes of the forces and state whether they are in tension or compression.

8. Helber Antonio Esquivel Puentes says:

    The final answer should include the sign (+/-); from this you state whether the element is in tension or compression.

9. Zachary Zalewski says:

    When solving these sorts of problems, does it make a difference whether the problem asks for the load in IE or EI? I know in other contexts the order of the letters signifies the direction of the force, so IE = -EI, but is that the case here as well?

    1. Keng Boon Yeoh says:

        The order doesn't matter in this question, in my opinion. Naturally, you find all the zero-force members before cutting through the leftover forces you have to find. What matters is the direction you point those forces in when you attempt to find them. If you point them away from the FBD and the solution is positive, it is a tension force, and vice versa.
10. Ethan E Harbin says:

    For our work and calculations, are we required to show calculations for the zero-force members? Or can we just state that we visually "performed" the method of joints and found the zero-force members that way?

    1. Vidur Kaul Zimmerman says:

        I didn't show the work for the zero-force members, but I did give the reason why each member was 0. For example, member LJ is only connected to two other links, so it is a zero-force member.

11. Sarah Thome says:

    By using trig you can solve for the top angle, and then by similar triangles you can solve for the length of HI. This then allows you to find the moment easily.

12. Kareem Yasser AlDohaim says:

    In this case, would we need to consider the reaction forces at A and C? I could obtain my answers without them; however, I am unsure. Can someone please explain this?

    1. no one says:

        You can ignore the reaction forces at A and C if you choose the FBD that has the loads at M when cutting the structure into two.

13. Dominic Anthony Marra says:

    Would it be possible to use the reactions in this problem, rather than just the forces at point M?

    1. Christopher John McBride says:

        Hi Dominic. I think it is a safe assumption that we would need to find at least a couple of the reaction forces in order to solve this problem. For my solution, I ended up not needing them in the end; it really just depends on where you place your cut when solving with the method of sections.

        1. Mitch Kamp says:

            From my professor, it seems that you can almost always (in this course) find a way to solve without needing to worry about the reaction forces. For me and others I have talked to, they weren't needed.

14. Hannah Katherine Congleton says:

    How do we use trig to find angle HIE? I've seen suggestions to solve for the top angle; for that, do we assume that KJ has a length of 2d? I think maybe I am just overthinking this.

    1. Emma G Balevic says:

        Hey Hannah! If you want to solve for HIE, try solving for HIK first. Since all of the lines inside the entire system (a giant right triangle) are parallel, you can assume that HIK is equal to ACE. Using the given vertical and horizontal lengths of the entire triangle, you can solve for ACE, which ultimately gives you HIK. From there, you should be able to find HIE. However, when I solved this problem I didn't need HIE, only HIK.

    2. Carl Lyons says:

        Alternatively, if you do not want to get involved with angles in your solution process, you can use similar-triangle proportions to find the lengths of the moment arms. This will allow you to find the value of HJ using moments. Afterwards, you can solve for the x and y components of EI using the sum of forces in the x and y directions, and then calculate the magnitude of EI using the Pythagorean theorem.
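Carl's angle-free recipe can be sketched end to end. Every distance and load below is made up for illustration; none of the actual dimensions or loads of Homework 23.B are reproduced here.

```python
import math

# Method-of-sections sketch, tension assumed positive throughout.
P = 500.0            # hypothetical applied load on the cut-off portion (N)

# 1) Sum moments about a point that lies on the second unknown member's
#    line of action, so only the first unknown F1 appears:
#        F1 * arm1 - P * armP = 0
arm1, armP = 1.5, 2.5          # hypothetical perpendicular moment arms (m)
F1 = P * armP / arm1

# 2) With F1 known, force balance on the cut portion gives the x and y
#    components of the second member's force (hypothetical directions:
#    member 1 horizontal, load P vertical):
F2x = -F1
F2y = P

# 3) Magnitude via the Pythagorean theorem, as suggested above:
F2 = math.hypot(F2x, F2y)
print(round(F1, 1), round(F2, 1))
```

The moment arms themselves come from the similar-triangle proportions Carl mentions, so no trigonometric angles are ever needed.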
15. Ricardo Suarez-Torres says:

    For the sum of moments about H, you will solve for the force in IE. What is the best way to find the angle of IE?

    1. Cindy Ng says:

        Hi Ricardo, you can find the angle of IE using similar triangles. You will then see that the angle of IE is the same as angle AMC.

16. Jaden Adam Kent says:

    For this problem I just want to help people out by saying that this whole truss system is a giant right triangle. So you can find the angle of that giant triangle, and the angle will carry over to the smaller triangles within the system. You can use this hint to help solve for the y-component of force IE.

17. Everest Liu Snyder says:

    How many non-zero forces are there, including the loads asked for in part b)?

    1. Everest Liu Snyder says:

        *zero-force members

        1. Mitch Kamp says:

            I got 5 total, and the way I approached it was starting at joint L and then moving down. You should be able to find all 5 one after another.

            1. Zachary Joseph Simon says:

                I followed the same methodology as Mitch and also got a total of 5 zero-force members. Hope this helps!

18. Andrew Charles Blitz says:

    Do we need to show any specific work for finding zero-force members, or can we just list out the ones that are fairly obvious?

    1. Radu Alexandru Radulescu says:

        Hi Andrew. I suggest you show the forces at each joint and also include an x-y diagram showing their directions. That's what I did when they were obvious. If not, maybe show the sum of forces in x and/or y. Hope this clarifies things.

19. Maxym says:

    Why isn't DE a zero-force member?
# International Workshop: Motives in Tokyo, 2019

http://www.lcv.ne.jp/~smaki/ja/MotiveWS/

### Organizing committee:

Thomas Geisser (Rikkyo University), Tomohide Terasoma (University of Tokyo), Shuji Saito (University of Tokyo)

### Past workshops:

2018 | 2017 | 2016 | 2014 | 2013 | 2012 | 2011

### Speakers

Masanori Asakura (Hokkaido)

Tom Bachmann (MIT)

Federico Binda (Regensburg)

Thomas Geisser (Rikkyo)

Hiroyasu Miyazaki (RIKEN)

Ryomei Iwasa (Copenhagen)

Wataru Kai (Tohoku)

Shane Kelly (TIT)

Hakon Kolderup (Oslo)

Stephen Lichtenbaum (Brown)

Niranjan Ramachandran (Maryland)

Kanetomo Sato (Chuo)

Anand Sawant (Muenchen)

Takashi Suzuki (Chuo)

### 12(Tue)/Feb:

10:00-11:00 Lichtenbaum, "Derived exterior powers and the constant in the functional equation."

11:30-12:30 Ramachandran, "Cup products and Heisenberg groups."

14:00-15:00 Binda, "Semi-purity for cycles with modulus."

### 13(Wed)/Feb:

10:00-11:00 Suzuki, "Weil-etale cohomology for local fields and curves with coefficients in Neron models."

11:30-12:30 Geisser, "A Weil-etale version of the Birch and Swinnerton-Dyer conjecture over function fields."

14:00-15:00 Asakura, "New p-adic hypergeometric functions concerning with syntomic regulators."

15:30-16:30 Kai, "Albanese map and the Neron-Severi group over p-adic fields."

### 14(Thu)/Feb:

10:00-11:00 Miyazaki, "Mayer-Vietoris squares for motives with modulus."

11:30-12:30 Bachmann, "p-complete etale motivic stable homotopy theory."

14:00-15:00 Iwasa, "K-theory and Modulus Condition."

15:30-16:30 Kolderup, "Correspondences and motives arising from cohomology theories."

18:00-20:00: Reception

### 15(Fri)/Feb:

10:00-11:00 Sawant, "Strict A^1-homology and applications."

11:30-12:30 Sato, "Etale cohomology and Selmer groups of arithmetic schemes."

14:00-15:00 Kelly, "K-theory of valuation rings."

### Asakura:

We introduce a new p-adic function, which we call the p-adic hypergeometric function of logarithmic type (this is different from Dwork's). The main result is that the special values give the syntomic regulators of K_2 of hypergeometric curves. We also expect that they agree with the special values of p-adic L-functions of elliptic curves in some cases, according to the conjecture of Perrin-Riou. The preprint is available at arXiv:1811.03770.

### Bachmann:

I will explain a proof of the following theorem: for suitable schemes S, the functor SH(S_et)_p^\wedge -> SH_et(S)_p^\wedge, from the p-complete stable homotopy category of the small etale site of S to the p-complete stable etale motivic homotopy category over S, is an equivalence. As a consequence we deduce that weight zero etale motivic homotopy groups coincide with etale cohomology of the sphere spectrum.

### Geisser:

(joint with T. Suzuki) We give a version of the Birch and Swinnerton-Dyer conjecture for abelian varieties A over function fields in terms of the Weil-etale cohomology with coefficients in the Neron model and the Lie algebra of A. We use the work of Kato-Trihan to show that the conjecture holds if the Tate-Shafarevich group is finite.

### Iwasa:

Let $X$ be a regular separated noetherian scheme and $D$ an effective Cartier divisor on $X$. In this talk, I present a construction of a homotopy-coniveau type filtration of the relative $K$-theory spectrum $K(X,D)$ by using algebraic cycles with modulus. Assume further that $D$ admits an affine open neighbourhood in $X$. Then I show that the induced filtration on $\pi_0$ deserves to be said "motivic". More precisely, the Adams operation $\psi^k$ acts on the $p$-th graded piece of $K_0(X,D)$ by multiplication by $k^p$, and the graded pieces are isomorphic to Chow groups with modulus up to bounded torsion. This is joint work in progress with Wataru Kai.

### Kai:

We consider the Albanese map of zero-cycles for smooth projective varieties over p-adic fields. For general reasons, there is a pairing between its cokernel and the Galois-fixed part of the Neron-Severi group modulo rational classes. Under certain good reduction hypotheses, we show that this pairing is perfect. The proof uses an existence theorem of Azumaya algebras (Gabber-de Jong) and a duality theorem (Saito-Sato) which generalizes the Lichtenbaum duality for curves to higher dimensions.

### Kelly:

I will discuss several results showing that the algebraic K-theory of valuation rings behaves as though such rings were regular Noetherian, and some new proofs of known results concerning cdh descent of algebraic K-theory. This is joint work with Matthew Morrow.

### Kolderup:

Since Suslin and Voevodsky's introduction of finite correspondences, several alternate correspondence categories have been constructed in order to provide different linear approximations to the motivic stable homotopy category. In joint work with Andrei Druzhinin, we provide an axiomatic approach to a class of correspondence categories that are defined by an underlying cohomology theory. For such cohomological correspondence categories, one can prove strict homotopy invariance and cancellation properties, resulting in a well-behaved associated derived category of motives.

### Lichtenbaum:

The notion of k-th exterior power of a coherent sheaf is only well-behaved for locally free sheaves. If E is an arbitrary coherent sheaf, the better notion is its derived exterior power, which lives in the derived category. We will give a conjecture which computes the constant term in the functional equation for the zeta-function of a regular scheme X projective over Z in terms of Euler characteristics of derived exterior powers of sheaves of differentials on X. This conjecture is true if the dimension of X is 1 or 2.

### Miyazaki:

To study a smooth variety U, it is often useful to consider a pair (X,D) such that X is proper and U=X-D, where D is an effective Cartier divisor on X. For such a pair (which is called a modulus pair), Kahn-Saito-Yamazaki constructed the motive with modulus M(X,D) as an object in the category of motives with modulus MDM. However, the construction is more or less indirect, and they used auxiliary pairs such that X is not necessarily proper. In this talk we provide a direct construction of MDM without using non-proper varieties. This is a joint work with Bruno Kahn.

### Ramachandran:

(joint work with E. Aldrovandi) Recall the classical correspondence between divisors and line bundles over smooth varieties; a basic question is how to generalize this to cycles of higher codimension. The talk will discuss results on this question with special attention to codimension two cycles. It will also indicate the connection between the Heisenberg group and cup-products.

### Sato:

Selmer groups and Tate-Shafarevich groups of Galois representations, defined by Bloch-Kato, are expected to be related to the analytic behavior of L-functions (Tamagawa number conjecture). In this talk, I would like to explain that the etale cohomology of a d-dimensional proper regular arithmetic scheme with `Q_p(d)'-coefficients is isomorphic to Selmer groups. I will further state a formula relating the order of Tate-Shafarevich groups to p-adic Abel-Jacobi mappings, assuming d=2 (i.e., in the case of arithmetic surfaces).

### Sawant:

We will introduce a new version of A^1-homology, which is better behaved and more computable compared to the usual A^1-homology, and describe some applications. The talk is based on joint work with Fabien Morel.

### Suzuki:

I will discuss finiteness and duality properties of Weil-etale cohomology of local fields and curves with coefficients in abelian varieties, tori, lattices and their Neron models. The key step is to give a certain group scheme structure to the corresponding etale cohomology, using the so-called rational etale site (or its pro-category version). The resulting group schemes satisfy a certain finiteness and duality. Taking the cohomology of the Weil group of the finite base field, we obtain the desired statement about Weil-etale cohomology (including its profinite group structure in the local case). This result has applications to values of L-functions. Joint work with Thomas Geisser.
# Search (43 results, page 1 of 3)

http://ixtrieve.fh-koeln.de/birds/litie/search?q=object_ss%3A%22CDS%2FISIS%22&fq%5B%5D=theme_ss%3A%22Bibliographische+Software%22

Active filter: theme_ss:"Bibliographische Software"; query: object_ss:"CDS/ISIS"

Every hit matches the query term object_ss:CDS/ISIS exactly once in an unboosted field, so Lucene's ClassicSimilarity assigns each the same score: 7.037564 = tf(freq=1.0) × idf(docFreq=100, maxDocs=42306) × fieldNorm(1.0).

1. Perera, P.: Micro CDS/ISIS: a critical appraisal of its search interface (1992)
2. Del Bigio, G.: The CDS/ISIS software: recent developments and results (1991)
3. Bhargava, J.K.; Srivastava, R.K.; Murthy, S.S.: SANJAY: an Indian library automation package based on CDS/ISIS (1993)
4. Nieuwenhuysen, P.: Computerised storage and retrieval of structured text information: CDS/ISIS version 2.3 (1991)
5. Mini-micro reference manual (1989)
6. Upadhyay, J.L.: Database generation using CDS/ISIS (1992)
7. Ravi, A.S.; Hariharan, A.; Sadananda, R.B.: Production of a union catalogue using CDS/ISIS and Ventura: some experiences in India. DTP catalog of conference proceedings in Indian scientific and technical libraries (1992)
8. Jascó, P.; Szücs, A.; Varga, S.: MICRO-CDS/ISIS: a bibliographic information management software from Unesco (1986)
9. Chowdhury, S.; Chowdhury, G.G.: Text retrieval system: an overview (1992)
10. Lang, W.: Verwendung von PC-Software für Dokumentationszwecke an Instituten: Grundlegende Fragen zur Literaturerfassung & -dokumentation mit dem PC an kleineren Institutsbibliotheken und Dokumentationsstellen und Anmerkungen zu konkreten Softwareangeboten (1993)
11. Sur, S.N.; Chowdhury, G.G.: A prototype design of a bibliographic database based on CCF using Micro-CDS/ISIS (1993)
12. Lang, W.: Einrichtung eines Dokumentationsverwaltungs- und Dokumentationssystems auf einem PC: Anwendungsgestaltung mit der Software MICRO-ISIS (1991)
13. UNIMARC and CDS/ISIS: Proceedings of the Workshops held in Budapest, 21.-22. June 1993 and Barcelona, 26. August 1993 (1994)
14. Amba, S.; Meenakshi, R.; Rao, S.S.: Creation of a database of references using CDS/ISIS (1994)
15. Malik, K.M.: Micro CDS/ISIS: what's new in version 3.0 (1993)
16. Pobukovsky, M.: UNESCO-cooperative development and promotion of CDS/ISIS system (1986)
17. Rieder, S.: Das Datenbankprogramm CDS/ISIS und dessen Anwendung in der Bundesrepublik Deutschland: Ergebnisse einer Nutzerstudie (1995)
18. Chowdhury, G.G.; Chowdhury, S.; Neelameghan, A.: Vocabulary control online in MicroISIS databases: a Pascal interface (1994)
19.
Kashyap, M.M.: Integrated database design for a library system employing library techniques developed by Ranganathan and CDS/ISIS database management system (1993) 7.04\n```7.037564 = weight(object_ss:CDS/ISIS in 3206) [ClassicSimilarity], result of:\n7.037564 = fieldWeight in 3206, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.037564 = idf(docFreq=100, maxDocs=42306)\n1.0 = fieldNorm(doc=3206)\n```\nObject\nCDS/ISIS\n20. Lauro, A. Di: IDIN manual for the creation and management of a bibliographic database using Micro-ISIS (1988) 7.04\n```7.037564 = weight(object_ss:CDS/ISIS in 4361) [ClassicSimilarity], result of:\n7.037564 = fieldWeight in 4361, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.037564 = idf(docFreq=100, maxDocs=42306)\n1.0 = fieldNorm(doc=4361)\n```\nObject\nCDS/ISIS"
https://www.hackmath.net/en/calculator/line-slope?x0=1&y0=10&x1=5&y1=8&submit=1
"# Line slope calculator 2x+4y=42\n\nEnter coordinates of two different points:",
null,
"Straight line given by points A[1; 10] and B[5; 8]\n\n## Calculation:\n\nSlope-intercept form of line: y = -0.5x+10.5\n\nCanonical form of the line equation: 2x+4y-42 = 0\n\nParametric form of the line equation:\nx = 4t+1\ny = -2t+10 ; t ∈ R\n\nSlope: m = -0.5\n\nSlope angle of line: φ = -26°33'54″ = -0.4636 rad\n\nX intercept: x0 = 21\n\nY intercept: y0 = q = 10.5\n\nDistance line from the origin: d0 = 9.3915\n\nThe length of the segment AB: |AB| = 4.4721\n\nVector: AB = (4; -2)\n\nNormal vector: n = (2; 4)\n\nMidpoint of the segment AB: M = [3; 9]\n\nPerpendicular Bisector equation: 4x-2y+6 = 0\n\nVector OA = (1; 10) ; |OA| = 10.0499\nVector OB = (5; 8) ; |OB| = 9.434\nScalar product OA .OB = 85\nAngle ∠ AOB = 26°17'41″ = 0.4589 rad"
https://www.physicsforums.com/threads/electric-field-of-periodic-charge-density.794210/
"# Electric field of periodic charge density.\n\n## Homework Statement\n\nFind electrostatic field and potential created by a two-dimensional charge density:\n$$\\rho \\sin (kx) \\cos (ky) \\delta (z)$$\nat the distance d from the the plane z=0 where the charge is placed (taking into account that it is embedded in a three dimensional space).\nIn your calculations you are required to use Fourier analysis.\n\n## The Attempt at a Solution\n\nMy initial thought was to use the differential form of Gauss's law:\n$$\\nabla \\cdot E = \\frac{\\rho}{\\epsilon_0}$$\n\nHowever I am unsure of where Fourier analysis comes into play, any pointers as to where to go from here would be great. My instinct tells me that the delta function should be what gets the Fourier treatment, however it isn't periodic.\n\n## Answers and Replies\n\nmfb\nMentor\nWhat is the potential for a point-charge? To extend this to 2-dimensional charge distributions, you'll need an integral. And I guess the evaluation of this integral will need Fourier analysis.\n\nWhat is the potential for a point-charge? To extend this to 2-dimensional charge distributions, you'll need an integral. And I guess the evaluation of this integral will need Fourier analysis.\nTo make Fourier analysis more obvious, I would start from Poisson's equation for the potential: given the charge distribution, you have to guess oscillating functions for the x and y components which leads to Fourier analysis to determine the coefficients. The z-component is less obvious though, and you'd have to use the ±z symmetry of the problem... Incidentally, the full solution for arbitrary z comes rather easily using Fourier transform."
https://www.smartzworld.com/notes/engineering-mathematics-i-important-questions-m1-imp-qusts/
"# Engineering Mathematics I Important Questions – M1 Imp Qusts\n\n## Engineering Mathematics I Important Questions Pdf file – M1 Imp Qusts\n\nPlease find the attached pdf file of Engineering Mathematics I Important Questions Bank – M1 Imp Qusts\n\nUNIT-I\nTHEORY OF MATRICES\n\n1. Define modal matrix.\n2. If 2, 3, 4 are the eigen values of A then find the eigen values of adj A\n3. If A is Hermitian matrix Prove that it is skew- Hermitian matrix\n\nUNIT-II\nDIFFERENTIAL CALCULUS METHODS\n\n1. Define Rolle’s Mean value theorem.\n2. Verify Lagrange’s Mean Value theorem for f(x) = log x in [1, e]\n3. Verify Lagrange’s Mean Value theorem for function f(x) = cos x in [0, π/2].\n\nUNIT-III\nIMPROPER INTEGRALS, MULTIPLE INTEGRALS AND ITS APPLICATIONS\n\n1. Prove that (m,n) (n,m) .\n2. Write the value of (1) .\n3. Write the spherical polar coordinates\n\nUNIT-1V\nDIFFERENTIAL EQUATIONS AND APPLICATIONS\n\n1. Solve (x+1)dy/dx –y=e3x (x+1)2\n2. Write the working rule to find orthogonal trajectory in Cartesian form.\n3. Form the D.E.by eliminate c in\ny=1+c√1-x2\n\nUNIT-V\nLAPLACE TRANSFORMS AND ITS APPLICATIONS\n\n1. State and prove first shifting theorem.\n2. Define change of scale property of F(T).\n3. Define inverse L.T. of f(s)\n\n1.",
https://unitconverter.io/degrees-celsius/degrees-fahrenheit/5215
"",
null,
"# 5,215 degrees celsius to degrees fahrenheit\n\nto\n\n5,215 Degrees celsius = 9,419 Degrees fahrenheit\n\nThis conversion of 5,215 degrees celsius to degrees fahrenheit has been calculated by applying the formula [°F] = [°C] × 9⁄5 + 32 degrees fahrenheit."
https://www.systutorials.com/docs/linux/man/l-zlatrz/
"# zlatrz (l) - Linux Man Pages\n\n## NAME\n\nZLATRZ - factors the M-by-(M+L) complex upper trapezoidal matrix [ A1 A2 ] = [ A(1:M,1:M) A(1:M,N-L+1:N) ] as ( R 0 ) * Z by means of unitary transformations, where Z is an (M+L)-by-(M+L) unitary matrix and, R and A1 are M-by-M upper triangular matrices\n\n## SYNOPSIS\n\nSUBROUTINE ZLATRZ(\nM, N, L, A, LDA, TAU, WORK )\n\nINTEGER L, LDA, M, N\n\nCOMPLEX*16 A( LDA, * ), TAU( * ), WORK( * )\n\n## PURPOSE\n\nZLATRZ factors the M-by-(M+L) complex upper trapezoidal matrix [ A1 A2 ] = [ A(1:M,1:M) A(1:M,N-L+1:N) ] as ( R 0 ) * Z by means of unitary transformations, where Z is an (M+L)-by-(M+L) unitary matrix and, R and A1 are M-by-M upper triangular matrices.\n\n## ARGUMENTS\n\nM (input) INTEGER\nThe number of rows of the matrix A. M >= 0.\nN (input) INTEGER\nThe number of columns of the matrix A. N >= 0.\nL (input) INTEGER\nThe number of columns of the matrix A containing the meaningful part of the Householder vectors. N-M >= L >= 0.\nA (input/output) COMPLEX*16 array, dimension (LDA,N)\nOn entry, the leading M-by-N upper trapezoidal part of the array A must contain the matrix to be factorized. On exit, the leading M-by-M upper triangular part of A contains the upper triangular matrix R, and elements N-L+1 to N of the first M rows of A, with the array TAU, represent the unitary matrix Z as a product of M elementary reflectors.\nLDA (input) INTEGER\nThe leading dimension of the array A. LDA >= max(1,M).\nTAU (output) COMPLEX*16 array, dimension (M)\nThe scalar factors of the elementary reflectors.\nWORK (workspace) COMPLEX*16 array, dimension (M)\n\n## FURTHER DETAILS\n\nBased on contributions by\n\nA. Petitet, Computer Science Dept., Univ. of Tenn., Knoxville, USA The factorization is obtained by Householderaqs method. 
The kth transformation matrix, Z( k ), which is used to introduce zeros into the ( m - k + 1 )th row of A, is given in the form\n\nZ( ),\n\nT( )\nwhere\n\nT( I - tau*u( )*u( )aq, u( ),\n)\nz( ) tau is a scalar and z( k ) is an l element vector. tau and z( k ) are chosen to annihilate the elements of the kth row of A2. The scalar tau is returned in the kth element of TAU and the vector u( k ) in the kth row of A2, such that the elements of z( k ) are in a( k, l + 1 ), ..., a( k, n ). The elements of R are returned in the upper triangular part of A1.\nZ is given by\n\nZ( Z( ... Z( )."
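ZLATRZ itself is a Fortran auxiliary routine. As a rough illustration of the RZ factorization it computes — a dense, real-valued numpy sketch of the same idea (with l = n − m and explicit reflectors for clarity), not LAPACK's actual packed algorithm:

```python
import numpy as np

def rz_factor(A):
    """Reduce an m x n (m <= n) upper-trapezoidal A to A = [R 0] @ Z,
    with R upper triangular and Z orthogonal, by annihilating the
    trailing entries of each row with Householder reflectors applied
    from the right (bottom row first, so earlier zeros are preserved)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Z = np.eye(n)
    for k in range(m - 1, -1, -1):
        # Row k has nonzeros in column k and the trailing columns m..n-1.
        idx = [k] + list(range(m, n))
        v = A[k, idx].copy()
        beta = np.linalg.norm(v)
        if beta == 0.0:
            continue
        if v[0] < 0.0:
            beta = -beta
        u = v
        u[0] += beta                  # Householder vector, no cancellation
        u /= np.linalg.norm(u)
        H = np.eye(n)                 # reflector embedded in the full space
        H[np.ix_(idx, idx)] -= 2.0 * np.outer(u, u)
        A = A @ H                     # zeros out the trailing part of row k
        Z = H @ Z                     # accumulate Z (each H is its own inverse)
    return A[:, :m], Z
```

Afterwards `np.hstack([R, np.zeros((m, n - m))]) @ Z` reproduces the input matrix.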
https://www.r-bloggers.com/2013/11/a-problem-fitting-the-von-bertalanffy-growth-model-with-nls/
"Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.\n\nRecently, a fishR user asked me the following (modified) question:\n\nI have successfully fit the typical von Bertalanffy growth model to all of my fish. However, when I subset the data by sex and attempt to fit the model separately to males and females I receive an ‘infinity’ and ‘singular gradient’ error. Why does this happen and is there anything I can do about it?\n\nThe user sent me their data which I use below to confirm the problem stated above and to describe some observations that I have made from fitting many von Bertalanffy growth models (VBGMs).\n\n## Preliminaries and Data Manipulation\n\nlibrary(FSA)\n\n\n… and the data (note that the working direction would have been set before read.csv()) …\n\ndf <- read.csv(\"BKData.csv\",header=TRUE)\nstr(df)\n\n## 'data.frame': 55 obs. of 12 variables:\n## $Primary_Code : Factor w/ 3 levels \"SH091613BK\",\"SH091813BK\",..: 3 3 3 3 3 1 1 1 1 1 ... ##$ Location : Factor w/ 1 level \"SH\": 1 1 1 1 1 1 1 1 1 1 ...\n## $Date : Factor w/ 3 levels \"16-Oct-12\",\"16-Sep-13\",..: 1 1 1 1 1 2 2 2 2 2 ... ##$ Transect : Factor w/ 6 levels \"SHWPBTEF03\",\"SHWPHONT02\",..: 1 1 1 1 1 3 3 6 6 6 ...\n## $Replicate : int NA NA NA NA NA 1 1 1 1 1 ... ##$ Species : Factor w/ 1 level \"CHC\": 1 1 1 1 1 1 1 1 1 1 ...\n## $Length : int 526 227 214 226 508 501 486 732 630 588 ... ##$ Weight : int 1519 86 70 85 1223 1156 993 3655 2326 1528 ...\n## $ID_Code : Factor w/ 50 levels \"A1\",\"A10\",\"A11\",..: 5 6 7 8 1 6 9 19 21 22 ... ##$ Age : int 7 1 1 1 10 7 10 16 10 9 ...\n## $Sex : Factor w/ 3 levels \"\",\"Female\",\"Male\": 1 1 1 1 2 2 2 2 2 2 ... ##$ Collection_Technique: Factor w/ 2 levels \"BTEF\",\"HONT\": 1 1 1 1 1 2 2 2 2 2 ...\nlevels(df$Sex) ## \"\" \"Female\" \"Male\" (Note: A blank is a poor way to code for “unknown” sex individuals.) 
I separated the data into three data frames corresponding to male, female, and unknown sexed individuals …

df_M <- Subset(df,Sex=="Male")
dim(df_M)   ## 29 12
df_F <- Subset(df,Sex=="Female")
dim(df_F)   ## 22 12
df_U <- Subset(df,Sex=="")
dim(df_U)   ## 4 12

## Fitting the Typical Von Bertalanffy Model to All Fish

I illustrate the fitting of the typical VBGM by using the vbFuns() convenience function in the FSA package. This function requires the "name" of the VBGM as its only argument and returns a function that can be used to compute fish length from age given parameter values for that model.

( vbT <- vbFuns("typical") )

## function(t,Linf,K=NULL,t0=NULL) {
##   if (length(Linf)==3) {
##     K <- Linf[[2]]
##     t0 <- Linf[[3]]
##     Linf <- Linf[[1]]
##   } else if (length(Linf)!=1 | is.null(K) | is.null(t0)) {
##     stop("One or more model parameters (Linf, K, t0) are missing or incorrect.",call.=FALSE)
##   }
##   Linf*(1-exp(-K*(t-t0)))
## }

The vbStarts() function is used to find reasonable starting values for a particular version of the VBGM. This function requires a formula containing the length and age data, a data frame in data=, and the "name" of the VBGM parameterization in type=.

( svTall <- vbStarts(Length~Age,data=df,type="typical") )

## $Linf
## 555.7
##
## $K
## 0.6162
##
## $t0
## -0.9446

The VBGM is fit with nls(), where the first argument is a formula that has the length variable on the left-hand side and the model function saved from vbFuns() on the right-hand side, with the age variable as that function's first argument followed by the names of the parameters as the remaining arguments. The nls() function also requires a data frame in data= and the list of starting values in start=.

fitAll <- nls(Length~vbT(Age,Linf,K,t0),data=df,start=svTall)
summary(fitAll)

## Formula: Length ~ vbT(Age, Linf, K, t0)
##
## Parameters:
##      Estimate Std. Error t value Pr(>|t|)
## Linf 625.4246    52.1546   11.99   <2e-16 ***
## K      0.1906     0.0628    3.03   0.0038 **
## t0    -1.4570     0.9892   -1.47   0.1468
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 72.8 on 52 degrees of freedom
##
## Number of iterations to convergence: 7
## Achieved convergence tolerance: 1.55e-06

A quick fit of the model can be seen with

plot(Length~Age,data=df,pch=16,xlab="Age",ylab="Total Length (mm)")
[Figure: typical von Bertalanffy model fit — length at age for all fish]
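For readers outside R, here is a small Python analogue of this fit (my own sketch, not from the post — a damped Gauss–Newton least-squares stand-in for nls(), checked on noiseless synthetic data generated from parameters close to the estimates above):

```python
import numpy as np

def vb_typical(t, Linf, K, t0):
    """Typical von Bertalanffy growth model: E[L] = Linf (1 - exp(-K (t - t0)))."""
    return Linf * (1.0 - np.exp(-K * (t - t0)))

def fit_vb(age, length, p0, iters=500):
    """Damped Gauss-Newton fit of (Linf, K, t0) -- illustrative only,
    not the algorithm R's nls() actually uses."""
    p = np.asarray(p0, dtype=float)
    def sse(q):
        return float(np.sum((length - vb_typical(age, *q)) ** 2))
    for _ in range(iters):
        Linf, K, t0 = p
        e = np.exp(-K * (age - t0))
        resid = length - Linf * (1.0 - e)
        # Jacobian of the model with respect to (Linf, K, t0)
        J = np.column_stack([1.0 - e, Linf * (age - t0) * e, -Linf * K * e])
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        lam, base = 1.0, sse(p)       # halve the step until SSE does not increase
        while lam > 1e-8 and sse(p + lam * step) > base:
            lam /= 2.0
        p = p + lam * step
        if np.max(np.abs(lam * step)) < 1e-12:
            break
    return p

age = np.arange(1.0, 17.0)
length = vb_typical(age, 625.0, 0.19, -1.5)      # noiseless test data
p_hat = fit_vb(age, length, p0=[556.0, 0.62, -0.94])
print(p_hat)
```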
"## Fitting the Typical VBGM Separately to Males and Females\n\nA similar process can be followed for fitting the typical VBGM to females …\n\nsvTF <- vbStarts(Length~Age,data=df_F,type=\"typical\")\nfitTF <- nls(Length~vbT(Age,Linf,K,t0),data=df_F,start=svTF)\n\n## Error: Missing value or an infinity produced when evaluating the model\n\n… but an error occurs. Similarly for males …\n\nsvTM <- vbStarts(Length~Age,data=df_M,type=\"typical\")\nfitTM <- nls(Length~vbT(Age,Linf,K,t0),data=df_M,start=svTM)\n\n## Error: singular gradient\n\nSo, what happened? A plot separated by sex is instructive …\n\nplot(Length~Age,data=df_F,pch=16,xlab=\"Age\",ylab=\"Total Length (mm)\",\nxlim=c(1,19),ylim=c(200,750))\npoints(Length~Age,data=df_M,pch=16,col=\"red\")\npoints(Length~Age,data=df_U,pch=15,col=\"blue\")\nlegend(\"bottomright\",c(\"Female\",\"Male\",\"Unknown\"),pch=c(16,16,15),\ncol=c(\"black\",\"red\",\"blue\"),cex=0.75)",
[Figure: length at age, plotted separately by sex]
"From this it is seen that all of the fish less than age-3 were of “unknown” sex. Thus, when the data frame was reduced to just males or just females there were no fish less than age-3. Additionally, no female fish were less than age-6, only two female fish were older than age-12, and no males were older than age-11. In other words, when the data were subsetted, the age range of each subset was substantially narrowed.\n\nFurthermore, over these constrained age ranges, the relationship between length and age for both females and males was largely linear. In other words, there is no distinctive curve towards the origin at “young” ages and no obvious asymptote at “older” ages. In addition, there is considerable variability in lengths within a given age (e.g., the lengths for age-6 range from approximately 350 to 625 mm).\n\nThese observations largely explain why the typical von Bertalanffy model does not “converge” properly for these subsets of data. In layman’s terms, it is very hard to fit a curved relationship to data with a largely linear relationship coupled with a high degree of natural (i.e., individual) variability.\n\n## An Alternative VBGM Helps\n\nTwo of the known problems with the typical parameterization of the VBGM is that the parameters are highly correlated (knowing",
null,
"$L_{\\infty}$ is very close to knowing",
null,
"$K$) and the parameters are often on very different scales",
null,
"$L_{\\infty}$ might be well into the 100s where as",
null,
"$K$ is usually between 0 and 1). These problems are exacerbated with the data issues identified in the previous problem. As described in the Von Bertalanffy Growth – Intro Vignette on the fishR webpage, Francis (1988) reparameterized the traditional VBGM to have three parameters that are predicted lengths at three ages where the youngest and oldest age are chosen by the user and the third age is exactly between the two chosen ages. The Francis parameterization has the benefit of having parameters that are less correlated and have similar scales. Thus, the Francis parameterization is more likely to fit with the data issues presented here. On the down-side, the Francis parameterization will not provide estimates of the usual",
null,
"$L_{\\infty}$,",
null,
"$K$, and",
null,
"$t_{0}$ parameters. However, one can still predict mean lengths-at-age (thus, “growth” can be described) and compare growth between groups.\n\nThe Francis parameterization is defined with …\n\n( vbF <- vbFuns(\"Francis\") )\n\n## function(t,L1,L2=NULL,L3=NULL,t1,t3=NULL) {\n## if (length(L1)==3) {\n## L2 <- L1\n## L3 <- L1\n## L1 <- L1\n## } else if (length(L1)!=1 | is.null(L2) | is.null(L3)) {\n## stop(\"One or more model parameters (L1, L2, L3) are missing or incorrect.\",call.=FALSE)\n## }\n## if (length(t1)==2) {\n## t3 <- t1\n## t1 <- t1\n## } else if (length(t1)!=1 | is.null(t3)) {\n## stop(\"One or more model definitions (t1, t3) are missing or incorrect.\",call.=FALSE)\n## }\n## r <- (L3-L2)/(L2-L1)\n## L1+(L3-L1)*((1-r^(2*((t-t1)/(t3-t1))))/(1-r^2))\n## }\n## \n\n… and the “young” and “old” ages that I chose to model with are defined with …\n\nages <- c(3,12)\n\n\nThe model is fit to the female data (without an error) with …\n\n( svFF <- vbStarts(Length~Age,data=df_F,type=\"Francis\",tFrancis=ages,methEV=\"poly\") )\n\n## $L1 ## 295.5 ## ##$L2\n## 473.7\n##\n## \\$L3\n## 583\nfitFF <- nls(Length~vbF(Age,L1,L2,L3,t1=ages),data=df_F,start=svFF)\nsummary(fitFF)\n\n## Formula: Length ~ vbF(Age, L1, L2, L3, t1 = ages)\n##\n## Parameters:\n## Estimate Std. Error t value Pr(>|t|)\n## L1 315.6 102.4 3.08 0.0062 **\n## L2 473.3 18.7 25.34 4.2e-16 ***\n## L3 567.0 21.8 26.00 2.6e-16 ***\n## ---\n## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n##\n## Residual standard error: 72 on 19 degrees of freedom\n##\n## Number of iterations to convergence: 9\n## Achieved convergence tolerance: 5.11e-06\n\nThe model is fit to the male data (without an error) with …\n\nsvFM <- vbStarts(Length~Age,data=df_M,type=\"Francis\",tFrancis=ages,methEV=\"poly\")\nfitFM <- nls(Length~vbF(Age,L1,L2,L3,t1=ages),data=df_M,start=svFM)\nsummary(fitFM)\n\n## Formula: Length ~ vbF(Age, L1, L2, L3, t1 = ages)\n##\n## Parameters:\n## Estimate Std. 
Error t value Pr(>|t|)\n## L1 365.0 30.5 11.9 4.6e-12 ***\n## L2 537.1 18.0 29.8 < 2e-16 ***\n## L3 616.3 53.7 11.5 1.1e-11 ***\n## ---\n## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1\n##\n## Residual standard error: 73.5 on 26 degrees of freedom\n##\n## Number of iterations to convergence: 3\n## Achieved convergence tolerance: 3.38e-06\n\nThe two fits can be visualized with …\n\nplot(Length~Age,data=df_F,pch=16,xlab=\"Age\",ylab=\"Total Length (mm)\",\nxlim=c(1,19),ylim=c(200,750))\npoints(Length~Age,data=df_M,pch=16,col=\"red\")\nlegend(\"bottomright\",c(\"Female\",\"Male\"),pch=16,col=c(\"black\",\"red\"),cex=0.75)",
[Figure: Francis VBGM fits to the female and male data]
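The Francis parameterization is easy to sanity-check in any language, since its parameters are just the predicted mean lengths at t1, (t1 + t3)/2, and t3. A quick Python check (my own, not from the post), using the female estimates reported above:

```python
def vb_francis(t, L1, L2, L3, t1, t3):
    """Francis (1988) reparameterization of the von Bertalanffy model."""
    r = (L3 - L2) / (L2 - L1)
    return L1 + (L3 - L1) * (1 - r ** (2 * (t - t1) / (t3 - t1))) / (1 - r ** 2)

t1, t3 = 3, 12
L1, L2, L3 = 315.6, 473.3, 567.0   # female estimates from the fit above

# The curve passes through the parameter values at t1, the midpoint, and t3.
for t, L in [(t1, L1), ((t1 + t3) / 2, L2), (t3, L3)]:
    print(t, vb_francis(t, L1, L2, L3, t1, t3), L)
```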
"Further details of fitting this VBGM and methods for constructing bootstrap confidence intervals and developing models to compare parameters between groups are described in the Von Bertalanffy Growth – Intro Vignette on the fishR webpage.\n\nFiled under: Fisheries Science, R Tagged: Growth, R, von Bertalanffy",
https://www.cut-the-knot.org/Mset99/dangers.shtml | [
"## Mathematics Education: Taking a Clue From the Recent Technological Revolution",
null,
"# A Word of Warning: Technology is not automatically useful.\n\nThat pedagogy is the essence of education is almost a tautology. Pedagogy was and remains the cornerstone of the instruction. Pedagogy has its dangers though. It always had. In a well publicized case, Socrates paid with his life when the Athenians rejected his pedagogical approach.",
null,
"We are particularly concerned with the inclusion of technology into the instructional process. Let me state for the record the obvious: technology can't substitute for understanding. Developing a habit of using technology to achieve quick answers is counterproductive.\n\nIn the first chapter of his book Number: The Language of Science Tobias Dantzig mentions proposals to replace the decimal system with other bases. Wrote he: \"In our own age, when calculating devices have largely supplanted mental arithmetic, nobody would take either proposal seriously.\" The book was written in 1930. The fourth, revised and augmented edition of the book appeared in 1954. Whether written in 1930 or 1954, I think the phrase about supplanting mental arithmetic was more a reflection of a popular sentiment than of a historical reality.\n\nThe sentiment is still with us while we are nearing the time when a fantasy may well become a reality. Below, I give a couple of examples that point to possible pitfalls.\n\nRecently, I had the following exchange (italic is mine. A.B.):\n\n• Visitor: How do I obtain the best equation for a curved line to describe a series of experimentally obtained points?\n• A.B.: There is no best. Unless, of course, you specify your needs. I would go and have a look at the site Numerical Recipes\n• Visitor: I found that Excel does this for me. To think that I was straining my brain for nothing! Thank you anyway.\n\nHere's another example from the sci.math newsgroup:\n\n• Could anyone please solve this step by step? 3cos2x = sinx. x = ? Thank you\n\n• Actually, the problemas presented is insanely difficult as given. Essentially, it does, as you showed, boil down to 3 = y3y2, where y is in [0,1] (since y = sin x). But finding a solution to this, assuming a solution even exists, is pretty tricky.\n\nWhen I in fact put this into Mathematica, I got the slightly bewildering y = (ProductLog(18log3)/(2log3))0.5 where ProductLog(k) is the principal solution to k = xex. Ick. 
And x is the Arcsin of that. Double ick.\n\nOf course, when faced with the original function, Mathematica gave the illuminating error message:\n\nSolve::tdep:\n\nThe equations appear to involve transcendental functions of the variables in an essentially non-algebraic way.\n\n• Umm, no. Here's a start:\n\n3cos2x = sinx\n\n31 - sin2x = sin x\n\nNow we make life a little easier by denoting sin x by s:\n\n31 - s2 = s\n\nThe graph of 31 - s2 looks vaguely like a bell curve and clearly has only one intersection with the graph of s. A bit of thought shows that s = 1 is the unique point of intersection, so you're left with finding the solutions to sin x = 1, an exercise left to the reader. (I assumed we were looking for real roots, BTW)\n\n• What mathematical tools are you allowed to use? (For what level course was this problem assigned?)\n\nIf you can sketch, even approximately, the left-hand side and the right-hand side as functions of x, where will the graphs meet?\n\nAfter that, can you make a conjecture and prove it?\n\n(And a suspicious question: This is the time of mathematical competitions, and after I solved the equation for myself, I suspect that it is a competition problem, and a clever one, if it is for high schools. If this is the case, are you sure you want to win by the work of others?)\n\n• Roughly sketch these functions?? You mean... actually _think_ about the question?? This is 2000 man!! We have Mathematica and Maple to think for us!!\n\n• Hello, maybe we should let these programs do the hard work, but try to think ourselves.\n\nWhile it sometimes doesn't seem to make much sense to draw graphs of given functions, it might help in the long run to 'know' the shape of functions. Here students are sometimes unable to sketch easy formulas, because they never learned it.\n\nI think its the same problem as with pocket calculators. 
It just helps to be able to perform simple calculations by hand.\n\n• I assume that x is required to be real.\n\nWhat is the least possible value of cos2x?\n\nSo what is the least possible value of the left side?\n\nWhat is the greatest possible value of the right side?\n\nAnd one more example:\n\nThe area of the piece of property that measures 102ft x 290ft x 255ft x 230 ft is 43512.621769283 square feet. I don't know how accurate this value is. This value has been calculated by Visual Basic.",
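"The graphical argument in the last replies is easy to confirm numerically; a short script (Python here, not the posters' Mathematica) checks that f(s) = 3^(1 - s²) - s stays positive on [0, 1) and vanishes only at s = 1:

```python
# With s = sin(x), the equation 3**(cos(x)**2) = sin(x) becomes f(s) = 0 where
# f(s) = 3**(1 - s*s) - s.  On [-1, 1] the left-hand side is at least 3**0 = 1
# while s is at most 1, so the graphs can only meet where both sides equal 1.
def f(s):
    return 3.0 ** (1.0 - s * s) - s

grid = [i / 1000.0 for i in range(1001)]   # s = 0.000 ... 1.000
min_gap = min(f(s) for s in grid[:-1])     # minimum of f on [0, 0.999]
root_residual = abs(f(1.0))                # f(1) = 3**0 - 1 = 0
```

The scan finds min_gap > 0 and root_residual = 0, so sin x = 1 is indeed the only real possibility.",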
null,
"Index What changes Pedagogy is important || | |\n\n69735706"
] | [
null,
"https://www.cut-the-knot.org/gifs/tbow_sh.gif",
null,
"https://www.cut-the-knot.org/Mset99/ped.gif",
null,
"https://www.cut-the-knot.org/gifs/tbow_sh.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9513026,"math_prob":0.8739945,"size":4762,"snap":"2022-27-2022-33","text_gpt3_token_len":1106,"char_repetition_ratio":0.089953765,"word_repetition_ratio":0.004830918,"special_character_ratio":0.23225535,"punctuation_ratio":0.13445379,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9801797,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T22:52:33Z\",\"WARC-Record-ID\":\"<urn:uuid:cc7f2bd3-0fc6-4fb1-a939-5aec3b97fa52>\",\"Content-Length\":\"15425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4eb09b98-7d93-46d4-9535-21527d7e31a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:612e064d-943f-4fd3-9c4e-c8110953e36e>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"https://www.cut-the-knot.org/Mset99/dangers.shtml\",\"WARC-Payload-Digest\":\"sha1:PZDVR4D7PN6ZVFMDB7OLKBSOFCHC7PBL\",\"WARC-Block-Digest\":\"sha1:AWWZZZ2NV3FV3VOP3BOU3LATJVLQYBWE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103645173.39_warc_CC-MAIN-20220629211420-20220630001420-00169.warc.gz\"}"} |
https://www.studytonight.com/cpp/tests/1 | [
"Dark Mode On/Off\n\n# Introduction to C++\n\nThis Test will cover the basic concepts of C++, including datatype, loop, if-else condition, switch statement etc.\nQ.In Object Oriented programming, abstraction refers to ?\n A. showing only the essential features of the application and hiding the details. B. data binding. C. reuse once written code again and again D. features which allows to create functions with same name\nQ. Size hierarchy for floating point numbers is : `float < double < long float` ?\n A. true B. false\nQ. ___________ is also an operator that is used to get information about the amount of memory allocated for data types and Objects\n A. typedef B. ternary C. sizeOf D. shift\nQ. typedef is a keyword used in C language to assign alternative names to existing types ?\n A. true B. false\n\nQ. What will be the output of following code ?\n``````#include <iostream>\nusing namespace std;\n\nint main()\n{\nint i=10;\nif(i=20)\ncout << i ;\nreturn 0;\n}``````\n A. 10 B. 20 C. 0 D. error\nQ. What will be the output of following code ?\n``````#include <iostream>\nusing namespace std;\n\nvoid fun()\n{\nstatic int i = 10;\ni++;\ncout << i;\n}\n\nint main()\n{\nfun();\nfun();\nfun();\nreturn 0;\n}``````\n A. 101112 B. 111111 C. 111213 D. error\nQ. You can never use or compute address of _________ variable ?\n A. local B. static C. global D. register\nQ. What will be the output of following code ?\n``````#include <iostream>\nusing namespace std;\n\nvoid calc(int x);\n\nint main()\n{\nint x = 10;\ncalc(x);\nprintf(\"%d\", x);\nreturn 0;\n}\n\nvoid calc(int x)\n{\nx = x + 10 ;\n}``````\n A. 20 B. 10 C. 0 D. error\nQ. Default storage class for local variables is ?\n A. auto B. register C. static D. extern\n\nQ. __________ is defined as user defined data type and it also contains functions in it ?\n A. Object B. Data members C. Class D. Polymorphism\n\n### Related Tests:\n\n#### Explore more Subjects:\n\nTests for various other programming languages and subjects:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6896562,"math_prob":0.8526888,"size":1229,"snap":"2022-40-2023-06","text_gpt3_token_len":300,"char_repetition_ratio":0.09959184,"word_repetition_ratio":0.16444445,"special_character_ratio":0.3108218,"punctuation_ratio":0.18110237,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98303014,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T16:46:01Z\",\"WARC-Record-ID\":\"<urn:uuid:dc301f51-3097-46c0-a6e1-11b36ff31020>\",\"Content-Length\":\"243947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cc133e5-3a33-4836-8cc8-cd62d8e38f49>\",\"WARC-Concurrent-To\":\"<urn:uuid:a03f89d6-6df1-499d-89b2-4c1bb91f7a13>\",\"WARC-IP-Address\":\"65.2.112.116\",\"WARC-Target-URI\":\"https://www.studytonight.com/cpp/tests/1\",\"WARC-Payload-Digest\":\"sha1:XAYC6ZIEREMOOAK3PHDONV7C6ZSHSZYI\",\"WARC-Block-Digest\":\"sha1:6HACRZYFWPSOYLRVR764G4P5IUYZFSHV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337338.11_warc_CC-MAIN-20221002150039-20221002180039-00406.warc.gz\"}"} |
https://www.midasstructure.com/blog/en/direct-analysis-of-steel-structure | [
"# midas Structure\n\nBLOG PROJECT TUTORIAL\n\n# Introduce for the Stability Design of steel structure\n\nMost of the design codes consider all of the following regarding the stability of steel structures and their elements.\n\n(1) Deflection of flexural, shear, and axial members & all other deformations affecting the displacement of the structure\n\n(2) Secondary effect (P-Δ, P-δ)\n\n(3) Geometric imperfections\n\n(4) Stiffness decrease due to inelasticity\n\n(5) Uncertainty of stiffness and strength\n\nThe method to design the stability of the structure by reflecting the above requirements is as follows.\n\n- Direct Analysis Method (DA)\n\n- Effective Length Method (ELM)\n\n- Primary Analysis Method",
null,
"Figure 1. Comparison with Direct Analysis Method(DA) & Effective Length Method (ELM)\n\n# Considerations in Direct Analysis\n\nThe direct analysis is a newly developed stability design method that recognizes the limitations of ELM in the design of increasingly complex steel structures and is an analysis method that can be used regardless of the structure type.\n\nIIn 2nd analysis, the virtual minimum lateral load, Ni should be included in all gravity load combinations and applied to each floor of the structure.\nThis is to reflect the initial deformation by assuming that the verticality error during construction is L/500(0.002*L).\n.There is also a method of directly modeling the initial deformation instead of applying a virtual lateral load.\n\nIIn case of Δ2nd1st ≤ 1.5, the virtual lateral load is applied only to the gravity load combinations. If this is exceeded, a virtual lateral load is to be added to all load combinations.\n\nIIt is analyzed by reducing the stiffness of all members considering the stiffness uncertainty.\nIn the case of flexural stiffness, τb factor is additionally applied to consider the stiffness reduction effect due to residual stress.\n\n- EA = 0.8*EA (for axial stiffness)\n- EI = 0.8*τb*EI (for flexural stiffness)\n\nBecause the direct analysis method reflects the structural uncertainty that affects stability in the analysis model,\nand the effective buckling length coefficient, K=1.0 can be used for all members.",
null,
"Figure 2. Concepts for application of direct analysis\n\n# What is a secondary effect?\n\nWhen displacement occurs in the structure, additional member force is applied, and the process in which displacement due to additional member force occurs again is repeated. This is called the secondary effect or P-delta effect.\nThe effect caused by the displacement, δ occurring inside the member is usually called P-δ, and the one caused by the lateral displacement of the frame, Δ is called the P-Δ effect.",
null,
"Figure 3. P-δ, P-Δ effects of cantilever columns\n\nI2nd analysis means to repeat the re-analysis after modifying the stiffness system until the displacement due to the secondary effect converges to a microscale displacement below a certain level.\n\nThe stiffness of the converged structure shows less lateral stiffness than the initial structure, which is a natural result considering that there is an increase in lateral displacement due to the secondary effect.",
null,
"Figure 4. Comparision with 1st Analysis & 2nd Anlysis\n\nLooking at the design code(AISC) in relation to the 2nd-order analysis in DA and ELM, it is specified that any 2nd-order analysis method that can consider both P-δ and P-Δ can be used .\n\nAccording to the characteristics of the structure, there are 3 ways for using 2nd order analysis methods.\n\nMethod 1 : Approximate 2nd-order analysis using coefficients B1 and B2,\n\nMethod 2 : P-δ effect is considered by applying the B1 coefficient, and the P-Δ is considered through 2nd-order analysis,\n\nMethod 3 : Directly considers P-δ and P-Δ by performing a geometric nonlinear analysis.\n\nThe approximate 2nd-order analysis is a method of amplifying the member force calculated through 1st-order analysis by applying the coefficients B1 and B2 to consider the P-δ and P-Δ effects, respectively. However, since the B2 coefficient calculation process requires a clear floor division, there is no other way to calculate it if there is no floor division. Therefore, the approximate 2nd-order analysis is a formal and suitable method only for buildings with orthogonal frames, and it is not easy to apply to irregular structures.\n\nThe general direct analysis method means the method according to method 2~3.",
null,
"Figure 5.Formula in design code (AISC) in relation to the 2nd-order analysis\n\n# The usefulness of Direct Analysis Method\n\nWhy use the direct analysis method?\n\nI a) It is the primary method in steel structure design.\nI b) It is possible to apply to all types of structural systems.\nI c) It can capture the internal structure forces more accurately.\ni d) It is the correct design of beams and connections providing rotational column restraint.\ne) No need to calculate K-factors.\nI f) It is possible to apply for all side-sway amplification values (Δ2nd order / Δ1st order).\ng) Effective length method is limited (ELM should be used when Δ2nd order / Δ1st order < 1.5.).\n\n# The usefulness of Direct Analysis Method\n\n## How to create the imperfection load in MidasGen\n\nThe procedure of creating imperfection loads in Midas Gen is as follows.\n\n1) Step 1. Create a model of the lateral force resisting frame, including the leaning column.\n2) Step 2. Reduce the stiffness of the lateral framing members in your model.\n3) Step 3. Apply notional load or directly model the imperfections.\n4) Step 4. Conduct a second-order analysis (“Rigorous” or B1-B2 amplification on first-order).\n* Mesh compression elements to capture P-δ.\n* “Rigorous” means to use geometric nonlinear analysis.\n5) Step 5. Design members using AISC specification and K=1.0.\n6) Step 6. Check lateral drift limits for wind and seismic under a unreduced stiffness.\n\n1) Step 1. Create a model of the lateral force resisting frame, including the leaning column.",
null,
"",
null,
"Figure 6. The design procedure of direct analysis in Midas Gen (Step 1)\n\n2) Step 2. Reduce the stiffness of the lateral framing members in your model.",
null,
"",
null,
"Figure 7. The design procedure of direct analysis in Midas Gen (Step 2)\n\n3) Step 3. Apply notional load or directly model the imperfections.",
null,
"",
null,
"Figure 8. The design procedure of direct analysis in Midas Gen (Step 3)\n\n4) Step 4. Conduct a second-order analysis (“Rigorous” or B1-B2 amplification on first-order).",
null,
"",
null,
"",
null,
"",
null,
"Figure 9. The design procedure of direct analysis in Midas Gen (Step 4)\n\n5) Step 5. Design members using AISC specification and K=1.0.",
null,
"",
null,
"Figure 10. The design procedure of direct analysis in Midas Gen (Step 5)\n\nAuthor Information",
null,
"Yeong-il Seo | Principal Structural Engineer\n\nYoung-il has over 13+ years of experience in building design, especially high-rise buildings with column reduction analysis, plant structures, pushover analysis, health monitoring, and vibration control projects. Since 2016, he is planning and providing technical supports for midas building products such as midas Gen, nGen, and Design+.",
null,
""
] | [
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/02.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/03.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/04.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/05.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/06.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/07.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/08.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/09.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/10.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/11.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/12.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/13.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/14.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/15.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/16.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/17.png",
null,
"https://www.midasstructure.com/hs-fs/hubfs/STRUCTURE_EN/Blog/Direct%20analysis%20of%20Steel%20Structure/18.png",
null,
"https://www.midasstructure.com/hubfs/170_%EC%84%9C%EC%98%81%EC%9D%BC.jpg",
null,
"https://www.midasstructure.com/hubfs/STRUCTURE_EN/WhitePaper/Structure_EN_WhitePaper_Vol.19_ACI318-19%20Updates%20for%20ULS%20Design%20of%20Reinforcement%20Concrete.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8902394,"math_prob":0.90361637,"size":4883,"snap":"2023-14-2023-23","text_gpt3_token_len":1086,"char_repetition_ratio":0.15269522,"word_repetition_ratio":0.076238886,"special_character_ratio":0.20909277,"punctuation_ratio":0.09080718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97161186,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T01:25:16Z\",\"WARC-Record-ID\":\"<urn:uuid:9e7b2375-850e-424b-96b5-e97ed0374d0d>\",\"Content-Length\":\"132754\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c0fe408-a870-4789-ae37-cd9b6399ab87>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc75dd43-53c6-4593-9880-99c88d3bcd65>\",\"WARC-IP-Address\":\"199.60.103.30\",\"WARC-Target-URI\":\"https://www.midasstructure.com/blog/en/direct-analysis-of-steel-structure\",\"WARC-Payload-Digest\":\"sha1:KIKZT64BIHKPGVSDHD3POGJJIE5DTRFW\",\"WARC-Block-Digest\":\"sha1:BTYEL5S4ABOPM2SJ7D4PZMFQKBOTIZIS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648911.0_warc_CC-MAIN-20230603000901-20230603030901-00221.warc.gz\"}"} |
https://scholar.nctu.edu.tw/en/publications/on-the-spatial-entropy-and-patterns-of-two-dimensional-cellular-n | [
"# On the spatial entropy and patterns of two-dimensional cellular neural networks\n\nSong-Sun Lin*, Tzi Sheng Yang\n\n*Corresponding author for this work\n\nResearch output: Contribution to journalArticle\n\n13 Scopus citations\n\n### Abstract\n\nThis work investigates binary pattern formations of two-dimensional standard cellular neural networks (CNN) as well as the complexity of the binary patterns. The complexity is measured by the exponential growth rate in which the patterns grow as the size of the lattice increases, i.e. spatial entropy. We propose an algorithm to generate the patterns in the finite lattice for general two-dimensional CNN. For the simplest two-dimensional template, the parameter space is split up into finitely many regions which give rise to different binary patterns. Qualitatively, the global patterns are classified for each region. Quantitatively, the upper bound of the spatial entropy is estimated by computing the number of patterns in the finite lattice, and the lower bound is given by observing a maximal set of patterns of a suitable size which can be adjacent to each other.\n\nOriginal language English 115-128 14 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 12 1 https://doi.org/10.1142/S0218127402004206 Published - 1 Jan 2002"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8765315,"math_prob":0.5690485,"size":2324,"snap":"2020-34-2020-40","text_gpt3_token_len":530,"char_repetition_ratio":0.12931034,"word_repetition_ratio":0.78236914,"special_character_ratio":0.23580034,"punctuation_ratio":0.08737864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96447045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-21T06:00:54Z\",\"WARC-Record-ID\":\"<urn:uuid:147dcfab-cbfe-4744-b1c3-a87f9185e57e>\",\"Content-Length\":\"47831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0656a2a7-3ff4-45ae-8b88-39b7dcee99f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba2c5fdc-2758-448d-9cb8-c19a5476d207>\",\"WARC-IP-Address\":\"18.139.148.124\",\"WARC-Target-URI\":\"https://scholar.nctu.edu.tw/en/publications/on-the-spatial-entropy-and-patterns-of-two-dimensional-cellular-n\",\"WARC-Payload-Digest\":\"sha1:BEKF3MBE3U7UKCUODZ6IH6G5KPIO2GD3\",\"WARC-Block-Digest\":\"sha1:OCDBBB2JIFMPF2JS4QRJEKPFI5COBHRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198942.13_warc_CC-MAIN-20200921050331-20200921080331-00547.warc.gz\"}"} |
https://alfasin.com/2013/11/18/a-simple-calculator-in-ruby/comment-page-1/ | [
"# A simple calculator in Ruby\n\nToday I ran (again) into the following question:\n\n• Write a function that doesn’t use eval that calculates input strings with operators +,-,*,/ eg. “5+5*6+4/2″ should output 37\n\nThe first time I ran into this question I designed a solution, in Java, which had a SimpleCalculator class, and `enam Operation` which supported the four basic arithmetic operations: +-*/ each of which had an `apply` method etc. Very object oriented.\n\nWhen I read this question today, I figured that it would be a nice exercise to do in Ruby – a few minutes later the result spread, elegantly, over less than 20 lines of code (and I bet that a professional Rubiest can do it in less)!\n\n```def is_number? expr\nreturn false if expr.nil?\nexpr = \"#{expr}\" # we need this condition in case the expr is a number\nexpr.match /^(\\d+|\\d+\\.\\d+)\\$/ # since match() is defined only for strings\nend\n\ndef calc(expr)\nreturn expr.to_i if is_number? expr\nexpr.gsub!(\" \",\"\") # clean the string from whitespaces\n# pay attention to the order: + and - should come before * and /\n# can you figure out why ?\narr = expr.split /\\+/\nreturn arr.inject(0){|x,y| calc(x) + calc(y) } if arr.size > 1\narr = expr.split /\\-/\nreturn arr.inject(0){|x,y| calc(x) - calc(y) } if arr.size > 1\narr = expr.split /\\*/\nreturn arr.inject(1){|x,y| calc(x) * calc(y) } if arr.size > 1\narr = expr.split /\\//\nreturn arr.inject {|x,y| calc(x) / calc(y) } if arr.size > 1\nend\n\nputs calc(\"5+5* 6+4/2.0\")\n#output 37\n```\n\nDo you have a better/shorter/more elegant solution ?\n\n## 2 thoughts on “A simple calculator in Ruby”\n\n1.",
null,
"Korolevych Bogdan says:\n\nDont work with (-1 + (-1))\n\nLike\n\n1.",
null,
"alfasin says:\n\nThe simple calculator is… well… simple 🙂\nAs such – it does not support brackets.\n\nLike"
] | [
null,
"https://0.gravatar.com/avatar/f03ab413716ea70180fea5582298c67d",
null,
"https://0.gravatar.com/avatar/6dd60e8de7b606e5e39919e5869eff6e",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78145385,"math_prob":0.97515523,"size":1690,"snap":"2019-51-2020-05","text_gpt3_token_len":458,"char_repetition_ratio":0.10913405,"word_repetition_ratio":0.058419243,"special_character_ratio":0.30769232,"punctuation_ratio":0.13390313,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.987738,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T10:02:21Z\",\"WARC-Record-ID\":\"<urn:uuid:90800fe5-9ceb-4bb2-a419-5475a2668dd2>\",\"Content-Length\":\"73193\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5afc904-73aa-47f5-8b30-3803187f7005>\",\"WARC-Concurrent-To\":\"<urn:uuid:27f44d2b-c7ba-49ab-94fe-fffc2d179e15>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://alfasin.com/2013/11/18/a-simple-calculator-in-ruby/comment-page-1/\",\"WARC-Payload-Digest\":\"sha1:IULEQTA4QWVUKG77E5V6AX73TIA27X7L\",\"WARC-Block-Digest\":\"sha1:PJDF47VHI66OZW64XBVL35Y55UI5HPK7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594391.21_warc_CC-MAIN-20200119093733-20200119121733-00199.warc.gz\"}"} |
https://jaimiebleck.com/k5-fractions/ | [
"# Tremendous K5 Fractions Image Ideas",
null,
"Grade fractions worksheet equivalent author k5 learning.",
null,
"K5 fractions fraction worksheets tremendous image ideas grade worksheet 2nd.",
null,
"K5 fractions fraction worksheets grade worksheet mixed numbers like denominators learning adding.",
null,
"Fractions worksheet worksheets for grades k5 learning tremendous image ideas.",
null,
"K5 fraction worksheets fractions 1st grade math learning free.",
null,
"Grade fractions worksheets equivalent5 learning tremendous.",
null,
"Free math worksheets fractions grade worksheet color equivalent 4th tremendous.",
null,
"Tremendous k5 fractions image ideas learning comparing proper grade chegg com fraction.",
null,
"K5 fractions fraction worksheets tremendous image ideas grade writing.",
null,
"K5 fractions fraction worksheets grade math worksheet subtracting unlike learning tremendous image ideas.",
null,
"K5 fractions tremendous image ideas fraction worksheets grade comparing unlike denominators math.",
null,
"K5 fractions fraction worksheets grade and decimals free printable learning math.",
null,
"K5 fractions grade simplifying worksheets learning fraction tremendous image."
] | [
null,
"https://jaimiebleck.com/d/2021/08/grade-fractions-worksheet-equivalent-author-k5-learning.jpg",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-fraction-worksheets-tremendous-image-ideas-grade-worksheet-2nd.gif",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-fraction-worksheets-grade-worksheet-mixed-numbers-like-denominators-learning-adding.gif",
null,
"https://jaimiebleck.com/d/2021/08/fractions-worksheet-worksheets-for-grades-k5-learning-tremendous-image-ideas.gif",
null,
"https://jaimiebleck.com/d/2021/08/k5-fraction-worksheets-fractions-1st-grade-math-learning-free.jpg",
null,
"https://jaimiebleck.com/d/2021/08/grade-fractions-worksheets-equivalent5-learning-tremendous.gif",
null,
"https://jaimiebleck.com/d/2021/08/free-math-worksheets-fractions-grade-worksheet-color-equivalent-4th-tremendous.jpg",
null,
"https://jaimiebleck.com/d/2021/08/tremendous-k5-fractions-image-ideas-learning-comparing-proper-grade-chegg-com-fraction.png",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-fraction-worksheets-tremendous-image-ideas-grade-writing.gif",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-fraction-worksheets-grade-math-worksheet-subtracting-unlike-learning-tremendous-image-ideas.gif",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-tremendous-image-ideas-fraction-worksheets-grade-comparing-unlike-denominators-math.gif",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-fraction-worksheets-grade-and-decimals-free-printable-learning-math.png",
null,
"https://jaimiebleck.com/d/2021/08/k5-fractions-grade-simplifying-worksheets-learning-fraction-tremendous-image.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67139053,"math_prob":0.88174933,"size":1035,"snap":"2021-31-2021-39","text_gpt3_token_len":170,"char_repetition_ratio":0.2929195,"word_repetition_ratio":0.103174604,"special_character_ratio":0.15169083,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986738,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T21:47:20Z\",\"WARC-Record-ID\":\"<urn:uuid:0c6c7ee7-ee8a-4930-89b5-e993bf2c3c92>\",\"Content-Length\":\"42611\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d7189f7-48b8-4cd1-b907-86e7104b6f0a>\",\"WARC-Concurrent-To\":\"<urn:uuid:cdadfc4a-8514-4462-a517-9e41e0380be3>\",\"WARC-IP-Address\":\"104.21.1.231\",\"WARC-Target-URI\":\"https://jaimiebleck.com/k5-fractions/\",\"WARC-Payload-Digest\":\"sha1:A5GTGQQNSPSXNPYBGSJDULJOUIZMGTCS\",\"WARC-Block-Digest\":\"sha1:OOXBSQPUGPZC4IYKQAMOIQFLPBOFN4BG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057775.50_warc_CC-MAIN-20210925202717-20210925232717-00369.warc.gz\"}"} |
https://qsstudy.com/define-emissive-power/ | [
"Physics\n\n# Define Emissive Power",
null,
"The emissive power of a body at a given temperature is the amount of energy emitted per unit time per unit area of the surface for a given wavelength. It is the energy of thermal radiation emitted in all directions per unit time from each unit area of a surface at any given temperature\n\nThe emissive power of a body is defined as the ratio of the amount of energy emitted (or radiated) per unit area per second to the amount of emitted heat energy per unit area of a perfectly black body per second at the same temperature. All bodies will emit heat when the body temperature is above absolute zero, and this is called emission and when this emitted heat travels in the form of electromagnetic wave it is called thermal radiation. This power is the energy of thermal radiation emitted in all directions per unit time per unit area of a surface at any given temperature.\n\nIf ‘Q’ is the amount of radiant energy emitted, ‘A’ is the surface area of the body and ‘t ‘ is the time for which body radiates energy, then the emissive power is –\n\nE = Q/At\n\nThe coefficient of emission of a body is the ratio of the emissive power of the body at a given temperature to the emissive power of a perfectly black body at the same temperature. The value of emissive power lies between 0 to 1. The emissive power of a perfectly black body is 1. But, a perfectly white surface or body has the zero emissive power.\n\nIt is denoted by eλ.\n\nIts unit is W m-2.\n\nCoefficient of emission, e = E/Eb\n\nThe emissive power eλ of a body for radiation of wavelength λ to λ+dλ is the amount of radiation emitted per unit area of the body per second normally in-unit solid angle in the axial direction of the solid angle."
Source: https://docs-cupy.chainer.org/en/latest/reference/generated/cupy.tril.html
"cupy.tril¶\n\ncupy.tril(m, k=0)[source]\n\nReturns a lower triangle of an array.\n\nParameters: m (array-like) – Array or array-like object. k (int) – The diagonal above which to zero elements. Zero is the main diagonal, a positive value is above it, and a negative value is below. A lower triangle of an array. cupy.ndarray"
Source: https://reverseengineering.stackexchange.com/questions/3606/how-to-efficiently-simplify-obfuscated-formula-in-qf-bv-logic-with-z3
"# How to efficiently simplify obfuscated formula in QF_BV logic with Z3?\n\nI would like to know if there are efficient ways to simplify arithmetic formula expression over bit-vectors with Microsoft Z3. But, first, I would like to explain a bit the problem. Lets start with an example:\n\n``````x + y == (x ^ y) + 2 * (x & y)\n``````\n\nBoth `x + y` and `(x ^ y) + 2 * (x & y)` are, in fact, coding the addition over bit-vectors. Of course, the right hand formula is used to confuse a reverser when found in the binary program. I try to find tools and techniques to simplify the obfuscated formula and find the simpler form of the formula (left-hand).\n\nFor this, I looked at the Python interface of Z3, trying to see what I can get out of it. So, defining the obfuscated formula is done like this:\n\n``````>>> from z3 import *\n>>> x = BitVec('x', 32)\n>>> y = BitVec('y', 32)\n>>> fun1 = (x ^ y) + 2 * (x & y)\n``````\n\nNow, lets try to simplify this function with the help of the built-in function `simplify`:\n\n``````>>> simplify((x ^ y) + 2 * (x & y))\n(x ^ y) + 2*~(~x | ~y)\n``````\n\nNot really convincing... But, lets try to prove the equivalence with `x + y`:\n\n``````>>> prove((x ^ y) + 2 * (x & y) == x + y)\nproved\n>>> prove((x ^ y) + 2 * (x & y) == x - y)\ncounterexample\n[y = 2164268032, x = 2139094080]\n``````\n\nI added a negative result to show that it is also possible to disqualify a formula.\n\nSo, if the `simplify` function is not really convincing, it is still possible to try, in a brute-force manner to compare the unknown formula with a list of simpler and usual formula one after one. But, this way seems extremely inefficient to me. I guess I am missing some smarter algorithms to simplify formula.\n\nI would like to know if there are some already existing tools or well-known techniques to perform in a more efficient manner than the brute-force approach. 
So, if someone has some hints or comments about this, it would be more than welcome.

• Try `help_simplify()` - some of the options may do what you are looking for. Feb 2 '14 at 21:45
• I already tried several of these tactics (`simplify(fun1, som=True)`), but with no success either. Maybe I missed the right set of tactics to use. If you have such a list, I would be happy to know your suggestions! Feb 2 '14 at 21:49
• The question isn't very clear. What does "efficiently" mean, and what makes `simplify` "inefficient"? Feb 2 '14 at 23:49
• You're right. I meant more efficient than a brute-force approach where you check all formulas against the one you are investigating. Feb 3 '14 at 6:31
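As an aside, the identity in the question can be sanity-checked numerically without Z3 at all (this is a quick filter for candidate simplifications, not a proof - only `prove` gives a guarantee over all 2^64 input pairs):

```python
import random

# Numeric sanity check (not a proof) of x + y == (x ^ y) + 2*(x & y)
# over random 32-bit bit-vectors; wraparound is emulated with a mask.
MASK = 0xFFFFFFFF

random.seed(0)
for _ in range(10000):
    x = random.getrandbits(32)
    y = random.getrandbits(32)
    lhs = (x + y) & MASK
    rhs = ((x ^ y) + 2 * (x & y)) & MASK
    assert lhs == rhs

print("identity held on 10000 random samples")
```

The identity holds because `x ^ y` is the carry-less sum and `x & y` marks the carry bits, which contribute with weight 2.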
Source: https://answers.everydaycalculation.com/multiply-fractions/14-5-times-5-14
"Solutions by everydaycalculation.com\n\n## Multiply 14/5 with 5/14\n\n1st number: 2 4/5, 2nd number: 5/14\n\nThis multiplication involving fractions can also be rephrased as \"What is 14/5 of 5/14?\"\n\n14/5 × 5/14 is 1/1.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 14/5 × 5/14 = 14 × 5/5 × 14 = 70/70\n3. After reducing the fraction, the answer is 1/1\n\n×"
Source: https://deepai.org/publication/sparse-low-rank-decomposition-of-annihilating-filter-based-hankel-matrix-for-impulse-noise-removal
"",
null,
"# Sparse + Low Rank Decomposition of Annihilating Filter-based Hankel Matrix for Impulse Noise Removal\n\nRecently, so called annihilating filer-based low rank Hankel matrix (ALOHA) approach was proposed as a powerful image inpainting method. Based on the observation that smoothness or textures within an image patch corresponds to sparse spectral components in the frequency domain, ALOHA exploits the existence of annihilating filters and the associated rank-deficient Hankel matrices in the image domain to estimate the missing pixels. By extending this idea, here we propose a novel impulse noise removal algorithm using sparse + low rank decomposition of an annihilating filter-based Hankel matrix. The new approach, what we call the robust ALOHA, is motivated by the observation that an image corrupted with impulse noises has intact pixels; so the impulse noises can be modeled as sparse components, whereas the underlying image can be still modeled using a low-rank Hankel structured matrix. To solve the sparse + low rank decomposition problem, we propose an alternating direction method of multiplier (ADMM) method with initial factorized matrices coming from low rank matrix fitting (LMaFit) algorithm. To adapt the local image statistics that have distinct spectral distributions, the robust ALOHA is applied patch by patch. 
Experimental results from two types of impulse noises - random valued impulse noises and salt/pepper noises - for both single channel and multi-channel color images demonstrate that the robust ALOHA outperforms the existing algorithms by up to 8dB in terms of the peak signal to noise ratio (PSNR).

## Code Repositories

### robust-ALOHA

Denoising of impulsive noise in single/multichannel images

## I Introduction

Impulse noises occur by malfunctioning of detector pixels in a camera or from missing memory elements in imaging hardware. There are two types of impulse noises. The first type includes salt/pepper noises, which take the extremal values of the dynamic range; therefore, the noisy pixels can be easily detected by an adaptive median filter (AMF). The second type is random valued impulse noise (RVIN), which takes random values within the dynamic range of an image pixel. Unlike salt/pepper noises, the noisy pixels of RVIN cannot be effectively detected by an adaptive median filter. Instead, the adaptive center-weighted median filter (ACWMF) has been widely used to find the locations of noisy pixels. Even with AMF and ACWMF, when the density of noise increases, the denoising performance of these single-step algorithms becomes severely degraded. To compensate for this weakness, two-phase denoising algorithms with "decision-based filters" or "switching filters" were proposed [4, 5, 6, 7, 1].
More specifically, these algorithms consist of two main parts: detecting noisy pixels by AMF, ACWMF, or other outlier-finding algorithms; and then replacing the detected noisy pixels with estimated values using total variation or edge-preserving regularizations [1, 8], while leaving the noiseless pixels unchanged.

On the other hand, impulse noise denoising algorithms using proximal optimization with non-smooth penalties were recently proposed [9, 10, 11]. In particular, in the TVL1 (total variation ℓ1) approach [10, 9], the data fidelity term is measured with the ℓ1 norm to deal with impulse outliers, and total variation regularization is used as a penalty. These methods can effectively remove impulse noises at sufficiently fast speed. However, they often cause edge distortions or blurring of texture patterns due to the TV term. With the advance of compressed sensing theory [12, 13], impulse noise denoising methods based on compressed sensing were also proposed [14, 15]. In one approach, the authors encouraged the spatio-spectral domain redundancy to be sparsely represented in the Fourier domain by using a blind compressed sensing framework. This approach demonstrated outstanding recovery performance; however, the algorithm cannot be performed without a highly correlated spectral dataset. In another, the sparsity level of the target signal was minimized just as in the conventional CS approach using noisy measurements contaminated by impulse noise. However, its performance was inferior to the two-phase methods.
While a low-rank matrix completion approach to impulse noise denoising for video sequences has been proposed, that algorithm only works for video sequence denoising, because spatio-temporal redundancy must be exploited in this method.

One of the unique characteristics of impulse noises compared to Gaussian or Poisson noise is that an image corrupted with impulse noises still has intact pixels; so the impulse noises can be modeled as sparse components, whereas the underlying intact image still retains the original image characteristics. In fact, this was the main idea utilized in the TVL1 approach, and this paper also aims at fully exploiting this observation. However, one of the most important contributions of this paper is to demonstrate that there exists a more natural and powerful image modeling method than the TV approach, and that the resulting impulse noise removal problem becomes a sparse + low rank decomposition problem. This is inspired by our novel image modeling and associated inpainting method using the so-called annihilating filter-based low-rank Hankel matrix (ALOHA) approach.

More specifically, in our earlier ALOHA work we demonstrated that smoothness or textures within an image patch lead to a sparse spectrum in the frequency domain, so the sampling theory of signals with finite rate of innovation [18, 19] tells us that there exist annihilating filters that annihilate the pixel values within the corresponding image patch. Moreover, the existence of an annihilating filter enables us to construct a rank-deficient Hankel structured matrix whose rank is determined by the sparsity in the spectral domain. Thanks to this observation, an image patch can be modeled using an annihilating filter-based Hankel structured matrix (ALOHA), and the image inpainting problem was solved by a low rank matrix completion algorithm. The idea was extended by our group to compressed sensing MRI [20, 21], image deconvolution, and interpolation in scanning microscopy.
Ongie et al. independently developed similar approaches for super-resolution MRI [24, 25].

One of the most important consequences of ALOHA in the context of impulse noise removal is the observation that the construction of a Hankel structured matrix is a linear lifting scheme, so sparse components in the image are also sparse in the lifted Hankel matrix. Therefore, we can use a sparse + low rank decomposition of the Hankel structured matrix to decouple the sparse impulse noise components from the underlying image. The new algorithm, which we call robust ALOHA, is applied patch by patch to adapt to the local image statistics that have distinct spectral distributions. We are aware that there has been significant progress on the decomposition of superposed matrices consisting of low-rank and sparse components [26, 27, 28, 29, 30], which is often called robust principal component analysis (RPCA). However, the matrix in RPCA is usually unstructured, whereas the robust ALOHA uses a Hankel structured matrix. As will be shown later, the introduction of the Hankel structured matrix significantly improves the denoising performance by exploiting the spectral domain sparsity.

During the writing of this paper, we came across a very interesting sparse and low rank decomposition of a Hankel structured matrix for predicting target locations under occlusion. While the optimization framework and algorithm used there have similarities with ours, that idea was originally inspired by dynamic system identification rather than image modeling using annihilating filters, and we are not aware of its application to impulse noise removal.
Therefore, we believe that the application of robust ALOHA to impulse noise removal is sufficiently novel.

To solve the associated sparse + low rank decomposition problem with a Hankel structured matrix, an alternating direction method of multipliers (ADMM) is utilized with initial factorized matrices from the low rank matrix fitting algorithm (LMaFit). Furthermore, the denoising algorithm is extended to exploit the joint sparsity constraint in colour images by stacking the Hankel structured matrices from each channel side by side and applying a sparse + low rank decomposition of the concatenated Hankel matrix. Using extensive numerical experiments, we demonstrate that the robust ALOHA significantly outperforms the existing methods.

This paper is organized as follows. Section II discusses the theory behind the robust ALOHA. In Section III, an optimization method for the associated sparse + low rank decomposition of the Hankel structured matrix is described. Extension to multi-channel denoising is discussed in Section IV. Experimental results are provided in Section V, followed by discussion and conclusion in Sections VI and VII.

## II Theory

### II-A Review of the TVL1 approach

Because impulse noises occur by malfunctioning of detector or memory elements, only a subset of image pixels are corrupted by the noises. Therefore, if M denotes an image measurement corrupted by impulse noises, it can be modeled as

    M = X + E

where X is the underlying "clean" image, and E denotes a sparse matrix composed of impulse noises. This model is quite often used in existing impulse noise removal algorithms. For example, in TVL1 [10, 9], E is considered as a sparse outlier, whereas the underlying image X is modeled using total variation.
This leads to the following minimization problem:

    min_X ∥M − X∥₁ + λ TV(X)    (1)

where the ℓ1 norm ∥·∥₁ is the sum of the absolute values of the matrix elements, used for outlier removal, and TV(X) denotes the 2-D TV penalty used to model the underlying image. In the following, we explain how the image model in (1) is modified in the proposed method to give superior denoising performance.

### II-B Image modeling using a low-rank Hankel structured matrix
"Fig. 1: Spectral components of patches from (a) smooth background, (b) texture, and (c) edge.\n\nIn our recent work , we demonstrate that diffusion [34, 35, 36, 37] and/or Gaussian Markov random field (GMRF) approaches for image modelling [38, 39, 40, 41, 42] are closely related to an annihilating filter relationship from the sampling theory of signals with finite rate of innovations (FRI) [18, 19]. More specifically, as shown in Fig. 1(a), a smoothly varying patch usually has spectrum content in the low frequency regions, and the other frequency regions have very little spectral components. A similar spectral domain sparsity can be observed in a texture patch in Fig. 1(b), where the spectral components are mainly concentrated on the fundamental frequencies of the patterns. For the case of an abrupt transition along the edge as shown in Fig. 1(c), the spectral components are mostly localized along the axis.\n\nMathematically, when a patch has sparse spectral components, we can show that there exists a corresponding annihilating filter in the image domain. More specifically, if the spectrum of an image patch is described by\n\n ^x(ω)=k−1∑j=0cjδ(ω−ωj), ω=(ωx,ωy) (2)\n\nwhere denotes the number of non-zero spectral components, then it is easy to find an annihilating function in the spectrum domain that does not overlap with the support of , i.e.\n\n ^h(ω)^x(ω)=0∀ω. (3)\n\nThis implies the existence of the annihilating filter in the image domain:\n\n h(r)∗x(r)=0. (4)\n\nFor example, if an image is sufficiently flat with little variations in pixel values, then its spectral component is most concentrated around the zero frequency component (i.e. ); therefore, becomes an annihilating function in the spectral domain. The associated annihilating filtering in the image domain is then given by\n\n h(r)∗x(r)=Δx(r). 
(5)

where Δ denotes the Laplacian operator, which corresponds to the diffusion operation that is widely used in image denoising, inpainting, and so on [34, 35, 36, 37, 9, 10, 11]. This example clearly shows why the diffusion-based approach is closely related to the annihilating filter approach.

If the Fourier measurement data is discretized at an appropriate Nyquist sampling rate, the corresponding discrete counterpart is given by

    (h ∗ x)[n] = Σ_m h[m] x[n − m] = 0,  n, m ∈ Z²,    (6)

where the discrete filter h[m] is now a discrete annihilating filter. Among the various choices of annihilating functions that satisfy (3), Vetterli et al [18, 19] showed that an annihilating function can be constructed using a finite combination of sinusoids such that the corresponding discrete annihilating filter has a finite length. In this case, the convolution (6) becomes a finite length convolution, and we can express (6) in matrix form.

Specifically, let X ∈ R^{M×N} denote the matrix composed of x[n], and let H ∈ R^{p×q} denote the discrete filter matrix composed of h[m]. Then, by removing the boundary data beyond the image patch, we can construct the following matrix equation:

    H{X} vec(H̄) = 0,    (7)

where vec(H) is the vectorization of the matrix H and the overline denotes the order reversal operation. Moreover, the 2-D Hankel matrix H{X} in (7) is defined by

$$\mathcal{H}\{X\}=\begin{bmatrix}h\{x_1\}&h\{x_2\}&\cdots&h\{x_q\}\\h\{x_2\}&h\{x_3\}&\cdots&h\{x_{q+1}\}\\\vdots&\vdots&\ddots&\vdots\\h\{x_{N-q+1}\}&h\{x_{N-q+2}\}&\cdots&h\{x_N\}\end{bmatrix}\in\mathbb{R}^{(N-q+1)(M-p+1)\times pq}\qquad(8)$$

and the 1-D Hankel matrix h{x_i} for the i-th column x_i of the matrix X is given by

$$h\{x_i\}=\begin{bmatrix}x[1,i]&x[2,i]&\cdots&x[p,i]\\x[2,i]&x[3,i]&\cdots&x[p+1,i]\\\vdots&\vdots&\ddots&\vdots\\x[M-p+1,i]&x[M-p+2,i]&\cdots&x[M,i]\end{bmatrix}.$$

If the underlying signal has k-sparse spectral components, we can further show the following key result:

    rank H{X} = k,    (9)

which implies that as long as the annihilating filter size is bigger than the sparsity level, the rank of the associated Hankel matrix is always equal to the sparsity level.
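The annihilation relation (6)-(7) can be seen in a tiny 1-D toy example (an illustration of the principle, not the paper's 2-D construction): a single sinusoid x[n] = cos(ωn) satisfies the 3-tap recurrence x[n−1] − 2cos(ω)x[n] + x[n+1] = 0, so every sliding window of its Hankel matrix is annihilated by the filter h = [1, −2cos ω, 1]; the frequency ω and window length are arbitrary choices.

```python
import math

# Toy 1-D illustration of the annihilating-filter / Hankel duality:
# samples of one sinusoid admit a short annihilating filter.
w = 0.7                                   # arbitrary frequency
x = [math.cos(w * n) for n in range(12)]  # samples of the patch signal

p = 3                                     # annihilating filter length
h = [1.0, -2.0 * math.cos(w), 1.0]        # annihilating filter

# Hankel matrix: rows are sliding windows of length p (cf. Eq. (8)).
H = [x[i:i + p] for i in range(len(x) - p + 1)]

# Every row of H is annihilated by h, i.e. H @ h == 0 (cf. Eq. (7)).
residuals = [sum(r * c for r, c in zip(row, h)) for row in H]
print(max(abs(v) for v in residuals))     # ~0 up to floating point error
```

A one-sinusoid signal has two spectral spikes (±ω), and indeed this Hankel matrix has rank 2, matching Eq. (9).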
This implies the following duality: sparsity of an image patch's spectrum in the frequency domain corresponds to low-rankness of its associated Hankel structured matrix in the image domain.

By exploiting this duality, our previous work derived the annihilating filter-based low rank Hankel matrix approach for image inpainting. In this paper, this approach is recast into a sparse + low rank decomposition for impulse noise removal.

### II-C Sparse + Low-rank Decomposition Model for a Hankel Structured Matrix from Impulse Noises
"Fig. 2: Sparse + low rank decomposition of Hankel structured matrix from an image patch corrupted with impulse noise. Because a lifting to Hankel structure is linear, the sparse impulse noises are also lifted to sparse outliers.\n\nUnlike the lifting scheme used for phase retrieval problems , our lifting scheme to the Hankel structured matrix is linear, so the sparse impulse noise is also lifted to the sparse outliers in the lifted Hankel structured matrix (see Figure 2). Accordingly, if an underlying image is corrupted with sparse impulse noises, then we have\n\n H{X}=L+S (10)\n\nwhere denotes the low-rank component and represents the sparse components originated from impulse noises, which are both in the Hankel structure. This is the key property we want to exploit in our impulse noise removal algorithm. In fact, to address this type of sparse + low rank decomposition, the robust principal component analysis (RPCA) was actively investigated [26, 27, 28, 29, 30]. More specifically, for a given measurement matrix , the RPCA solves the following minimization problem:\n\n min ∥L∥∗+τ∥S∥1 (11) subject to L+S=M, (12)\n\nwhere is the nuclear norm. To minimize this, alternating directions methods were employed.\n\nCompared to the standard RPCA approach, our sparse+ low rank decomposition problem using (10) requires additional constraint to impose Hankel structures. Therefore the RPCA algorithm should be modified. The specific optimization algorithm under this constraint will be explained in the following sections. Additionally, because the image statistics change across an image with spatially varying annihilating property, a noisy image should be partitioned into overlapped patches, which are processed patch-by-patch using robust ALOHA and the average values are used as described in the algorithm flowchart in Fig. 3.",
"Fig. 3: Patch-by-patch processing framework using robust ALOHA for impulse noise removal.\n\n## Iii Optimization Methods\n\n### Iii-a Sparse + Low Rank Decomposition for Hankel Matrix\n\nNote that the Hankel structure matrix in (8) is determined by the underlying image patch () size and the associated annihilating filter () size. For given image patch and annihilating filter, we now denote the associated spaces for the Hankel matrix as . Then, for a given noisy image patch and annihilating filter size, our impulse noise removal algorithm can be implemented by solving the following sparse + low rank decomposition under the Hankel structure matrix constraint:\n\n (P) minL,S ∥L∥∗+τ∥S∥1 subject to L+S=H{M}, L,S∈H(M,N;p,q)\n\nSince the sparse components in image patch are also sparse in a lifted Hankel structure, () can be further simplified as\n\n (P′) minL,E ∥L∥∗+τ∥E∥1 subject to L=H{X} X+E=M.\n\nwhere, with a slight abuse of notation, denotes an appropriately scaled version from in . Note that is now in image patch domain, unlike in the lifted Hankel matrix structured matrix domain in . The advantage of over is an associated simpler optimization method. More specifically, if we apply a factorized form of nuclear norm relaxation , then the final problem formulation of the optimization problem can be expressed as\n\n minE,{(U,V)|UVH=H{X}} ∥U∥2F+∥V∥2F+τ∥E∥1 (14) subject to X+E=M. (15)\n\nThe constraints in (14) and (15) can be handled using alternating direction method of multiplier (ADMM) [32, 46, 28]. The associated Lagrangian function ADMM is given by:\n\n L(U,V,E,X,Θ,Λ):= 12(∥U∥2F+∥V∥2F)+τ∥E∥1+β2∥X+E−M+Θ∥2F+ (16) μ2∥H{X}−UVH+Λ∥2F\n\nThen, each subproblem is simply obtained from (16). 
More specifically, we have

    E^(k+1) = argmin_E τ∥E∥₁ + (β/2)∥X^(k) + E − M + Θ^(k)∥²_F    (17)
    X^(k+1) = argmin_X (β/2)∥X + E^(k+1) − M + Θ^(k)∥²_F + (μ/2)∥H{X} − U^(k)V^(k)ᴴ + Λ^(k)∥²_F    (18)
    U^(k+1) = argmin_U (1/2)∥U∥²_F + (μ/2)∥H{X^(k+1)} − UV^(k)ᴴ + Λ^(k)∥²_F    (19)
    V^(k+1) = argmin_V (1/2)∥V∥²_F + (μ/2)∥H{X^(k+1)} − U^(k+1)Vᴴ + Λ^(k)∥²_F    (20)
    Θ^(k+1) = X^(k+1) + E^(k+1) − M + Θ^(k)    (21)
    Λ^(k+1) = H{X^(k+1)} − U^(k+1)V^(k+1)ᴴ + Λ^(k)    (22)

It is easy to show that the first step reduces to a single soft-thresholding in the image patch domain rather than in the lifted Hankel matrix space:

    E^(k+1) = S_{τ/β}(M − X^(k) − Θ^(k))    (23)

where S_λ(·) denotes pixel-by-pixel soft-thresholding with threshold value λ. The simple thresholding step in (23) is the main reason we prefer (P′) over (P). Now, the second step becomes

    X^(k+1) = (1/(μ + β)) ( μ H†{U^(k)V^(k)ᴴ − Λ^(k)} − β(E^(k+1) − M + Θ^(k)) ),    (24)

where H† corresponds to the Penrose-Moore pseudo-inverse mapping from our block Hankel structure back to a patch, which is calculated as

    H† = (H*H)⁻¹ H*.    (25)

Note that the adjoint operator H* sums the multiple Hankel entries corresponding to the same patch coordinate, and (H*H)⁻¹ divides by the number of such correspondences; hence, the pseudo-inverse takes the average value and puts it back in the patch coordinate. Next, the subproblems for U and V can be solved in closed form by taking the derivative with respect to each matrix. For example, the derivative of the cost function with respect to U is given by

    ∂L/∂U = ∂/∂U ( (1/2)∥U∥²_F + (μ/2)∥H{X} − UVᴴ + Λ∥²_F )
          = U − μ(H{X} − UVᴴ + Λ)V
          = U(I + μVᴴV) − μ(H{X} + Λ)V,

and the closed-form solution of the subproblem for U is obtained by setting ∂L/∂U = 0. In a similar way, the derivative with respect to V can be obtained. Accordingly, the closed-form update equations for U and V are given by

    U^(k+1) = μ(H{X^(k+1)} + Λ^(k)) V^(k) (I + μV^(k)ᴴV^(k))⁻¹    (26)
    V^(k+1) = μ(H{X^(k+1)} + Λ^(k))ᴴ U^(k+1) (I + μU^(k+1)ᴴU^(k+1))⁻¹.
(27)

Even though the original Hankel matrix has large dimensions, it is important to note that our algorithm using (26) and (27) only requires the inversion of an r × r matrix, where r denotes the estimated rank of the Hankel matrix. This significantly reduces the overall computational complexity.

Before applying ADMM, the initial estimates of U and V have to be determined with an estimated rank. For this, we employed an SVD-free algorithm called the low-rank factorization model (LMaFit). More specifically, for a low-rank matrix Z, LMaFit solves the following optimization problem:

    min_{U,V,Z} (1/2)∥UVᴴ − Z∥²_F    (28)

where Z is initialized with the lifted noisy measurement. LMaFit solves a linear equation with respect to U and V to find their updates, and relaxes the updates by averaging with the previous iterations. Moreover, the rank update can be done automatically by detecting abrupt changes in the diagonal elements of a QR factorization. Even though problem (28) is non-convex due to the multiplication of U and Vᴴ, the convergence of LMaFit to a stationary point has been analyzed in detail. However, LMaFit alone cannot recover the block Hankel structure, which is the reason we use ADMM to impose the structure.

## IV Extension to Multi-Channel Impulse Noise Removal

In many applications, images are obtained through multiple measurement channels. For example, in a colour image, multiple images are measured through R (red), G (green) and B (blue) detectors. In multispectral imaging for remote sensing applications, a scene is measured through many spectral bands. In these applications, the underlying structure is identical, so there exists strong correlation between different channel measurements. Regarding random impulse noise contamination, we may encounter two different scenarios: 1) noisy pixel locations are independent between the channels, and 2) noisy pixel locations are the same across the channels.
The first scenario is commonly observed when independent detectors are used for each channel. On the other hand, when a spectrometer is used to split an input into multiple channels, the noisy pixel locations are common across the channels. Therefore, in this section, we are interested in extending the single-channel robust ALOHA to address these two cases.

### IV-A Multi-Channel Image Modeling
"Fig. 4: (a) Spectral distribution across channels. (b) Multi-channel measurement model.\n\nLet denote an underlying image patch that are common for all channel measurements, and be its spectrum. Then, as shown in Fig. 4(a), the spectrum of the -th channel measurement can be modeled as:\n\n ^xi(ω)=^si(ω)^f(ω),i=1,⋯,C, (29)\n\nwhere denotes a spectral modulation function of the -th channel. The model (29) assumes that each channel measurements still retains the textures of the underlying images with a channel specific modulation, which property was previously extensively used in multichannel deconvolution problems [47, 48, 49] illustrated in Fig. 4(b). As a result, it is easy to derive the following inter-channel annihilating filter relation:\n\n sj(r)∗xi(r)−si(r)∗xj(r)=0,∀r, i≠j, (30)\n\nwhich was also a key property in these multichannel devolution algorithms [47, 48, 49].\n\nTo exploit the inter-channel annihilating property in our robust ALOHA, we construct the following matrix:\n\n Y=[H{X1} H{X2} ⋯ H{XC}] ∈R(N−q+1)(M−p+1)×Cpq (31)\n\nwhere denotes the Hankel structured matrix constructed from the -th channel measurement . Then, it is easy to show that\n\n YS1=0,\n\nwhere is defined recursively as follows:\n\n SC−1 ≜ [¯sC−¯sC−1] (32) St ≜ ⎡⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢⎣¯st+1¯st+2⋯¯sC0−¯st−¯stSt+1⋱−¯st⎤⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥⎦ , (33)\n\nand\n\ndenotes the reversed ordered, vectored spectral modulation filter for the\n\n-th channel. Because we have\n\n rank Y ≤ kC−C(C−1)2=C(2k−C+1)2 , (34)\n\nwhen the underlying spectrum is -sparse. Hence, by choosing the annihilating filter size sufficiently large, we can make low-ranked and the aforementioned sparse+ low rank model can be used for impulse noise removal.\n\n### Iv-B Optimization Methods\n\n#### Iv-B1 Channel independent impulse noises\n\nWhen the locations of the impulse noise are independent between channels, then the associate optimization problem is very similar to that of the single channel problem. 
More specifically, if we define\n\n M = [M_1 \ \cdots \ M_C], \quad X = [X_1 \ \cdots \ X_C], \quad E = [E_1 \ \cdots \ E_C], \qquad (35)\n\nsuch that each channel measurement is given by\n\n M_i = X_i + E_i,\n\nthen the optimization problem becomes\n\n \min_{E, \{(U,V) \,|\, UV^H = Y\}} \ \|U\|_F^2 + \|V\|_F^2 + \tau\|E\|_1 \qquad (36)\n\n \text{subject to } Y = [\mathcal{H}\{X_1\} \ \mathcal{H}\{X_2\} \ \cdots \ \mathcal{H}\{X_C\}], \qquad (37)\n\n X + E = M.\n\nIn this case, the Lagrangian cost function and the associated subproblems are the same as Eq. (16) and Eqs. (18)-(24), respectively.\n\n#### IV-B2 Common impulse noise locations\n\nWhen the noisy pixel locations are common across the channels, a major algorithmic change is needed, which comes from a common-sparsity-inducing matrix norm penalty. More specifically, a common support condition on the sparse components should be imposed across channels. In this paper, this constraint is formulated using the group-wise mixed norm\n\n \|E\|_{1,2} = \sum_{i,j} \sqrt{\sum_{k=1}^{C} |E_k(i,j)|^2}. \qquad (38)\n\nThen, Eq. (16) can be converted to\n\n L(U,V,E,X,\Theta,\Lambda) := \frac{1}{2}(\|U\|_F^2 + \|V\|_F^2) + \tau\|E\|_{1,2} + \frac{\beta}{2}\|X + E - M + \Theta\|_F^2 + \frac{\mu}{2}\|\mathcal{H}\{X\} - UV^H + \Lambda\|_F^2. \qquad (39)\n\nThen, instead of (17), the corresponding subproblem for E has the following closed-form solution:\n\n E^{(k+1)} = S^{vec}_{\tau/\beta}(M - X^{(k)} - \Theta^{(k)}), \qquad (40)\n\nwhere, for E in (38), the vectorial shrinkage operator S^{vec}_{\lambda} is defined as\n\n [S^{vec}_{\lambda}(E_k)]_{ij} = \frac{E_k(i,j)}{\sqrt{\sum_{k=1}^{C} |E_k(i,j)|^2}} \max\left\{\sqrt{\sum_{k=1}^{C} |E_k(i,j)|^2} - \lambda, \ 0\right\}. \qquad (41)\n\nThe remaining subproblems are the same as (18)-(24).\n\n## V Experimental Results\n\n### V-A Removal of Random Valued Impulse Noise (RVIN)\n\nWe first performed denoising experiments using randomly distributed random valued impulse noise (RVIN) that corrupts 25% and 40% of the image pixels. The RVIN is given as follows. Let x_{ij} and N(x_{ij}) be the original pixel value at location (i,j) and the pixel contaminated with impulse noise at location (i,j), respectively. 
When the dynamic range of the pixel values is given as [d_{min}, d_{max}], RVIN is described as\n\n N(x_{ij}) = \begin{cases} d_{ij} & \text{with probability } p \\ x_{ij} & \text{with probability } 1-p \end{cases} \qquad (42)\n\nwhere d_{ij} is a random number between d_{min} and d_{max} drawn from the uniform probability density function, and p is the proportion of noisy pixels with respect to the total number of pixels.\n\nThe test set consisted of the Baboon, Barbara, Boat, Cameraman, House, Lena and Peppers images. All test images were rescaled to have values between 0 and 1. For comparison, a median filter (MATLAB built-in function ‘medfilt2’, indicated as MF in the figures) was used as the simplest reference algorithm, and the existing algorithms ACWMF and TVL1 were also used. The original codes from the original authors were used, and the parameters of the comparison algorithms were optimized for their best performance. The parameters for the proposed method are given in Table I. The maximum iteration number of the ADMM in Eq. (16) was set to , and the stopping criterion was defined as in , with the tolerance set to . For quantitative evaluation, we used the PSNR (peak signal-to-noise ratio). Specifically, when the reference signal y is given, the PSNR of the reconstructed image x is calculated as\n\n \mathrm{PSNR}(x) = 20\log_{10}\!\left(\frac{\|y\|_\infty}{(1/\sqrt{N})\,\|y-x\|_2}\right).",
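The noise model (42) and the PSNR definition above are easy to reproduce; a small NumPy sketch (the image, sizes, and seeds are placeholders, not the paper's test data):

```python
import numpy as np

def add_rvin(img, p, dmin=0.0, dmax=1.0, seed=None):
    """Random valued impulse noise, Eq. (42): with probability p a pixel is
    replaced by d_ij ~ Uniform[dmin, dmax]; otherwise it is kept."""
    rng = np.random.default_rng(seed)
    mask = rng.random(img.shape) < p                  # noisy-pixel locations
    d = rng.uniform(dmin, dmax, size=img.shape)       # candidate impulse values
    return np.where(mask, d, img)

def psnr(y, x):
    """PSNR(x) = 20 log10( ||y||_inf / ((1/sqrt(N)) ||y - x||_2) )."""
    rmse = np.linalg.norm(y - x) / np.sqrt(y.size)
    return 20.0 * np.log10(np.abs(y).max() / rmse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))              # stand-in for a test image in [0, 1]
noisy = add_rvin(clean, p=0.4, seed=1)    # 40% corruption, as in the experiments
```

A denoiser is judged by how much `psnr(clean, estimate)` rises above `psnr(clean, noisy)`.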
"Fig. 5: Reconstructed Barbara images by various methods from 40% random valued impulse noise.",
"Fig. 6: M: the measured noisy image. |ORIG−M|: the residual image between the noisy image and the original image. X: the decomposed low-rank part, and E: the decomposed sparse component.\n\nWe summarize the PSNR results for all reconstructed images in Table II. The proposed method outperformed all other algorithms in both PSNR and visual quality, and the PSNR improvement was up to 8 dB. Fig. 5 shows a typical reconstruction result with noticeable enhancement in both visual quality and quantitative measures. In order to show that the proposed sparse + low-rank decomposition properly separates the impulse noise from the images, the decomposed sparse and low-rank components are illustrated in Fig. 6. We can see that the sparse component E looks similar to the additive impulse noise indicated by |ORIG−M|.\n\n### V-B Salt and Pepper Noise\n\nSalt and pepper noise is a special case of RVIN in which the impulses take the minimum and maximum values of the pixel dynamic range. Specifically, salt and pepper noise is given by\n\n N(x_{ij}) = \begin{cases} d_{min} & \text{with probability } p/2 \\ d_{max} & \text{with probability } p/2 \\ x_{ij} & \text{with probability } 1-p \end{cases} \qquad (43)\n\nwhere the variables are as defined in Eq. (42).",
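Eq. (43) similarly admits a direct sketch; note how the corrupted values sit exactly at the ends of the dynamic range, which is what AMF-style detectors exploit (the image and parameters below are placeholders):

```python
import numpy as np

def add_salt_pepper(img, p, dmin=0.0, dmax=1.0, seed=None):
    """Salt and pepper noise, Eq. (43): a pixel becomes dmin or dmax with
    probability p/2 each, and is kept with probability 1 - p."""
    rng = np.random.default_rng(seed)
    u = rng.random(img.shape)
    out = img.copy()
    out[u < p / 2] = dmin                  # "pepper"
    out[(u >= p / 2) & (u < p)] = dmax     # "salt"
    return out

img = np.full((32, 32), 0.5)
noisy = add_salt_pepper(img, p=0.25, seed=0)

# Every corrupted pixel equals dmin or dmax, so a simple range test already
# localises the noise; such an estimated mask is what AM-ALOHA consumes.
detected = (noisy == 0.0) | (noisy == 1.0)
```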
"Fig. 7: Salt and pepper noise removal results using various methods.\n\nDue to the extremal pixel values, salt and pepper noise can be detected exceptionally well by the adaptive median filter (AMF), so AMF-based denoising algorithms with an edge-preserving prior (AM-EPR) have been proposed. If the salt and pepper noise locations are well detected by the AMF, our robust ALOHA can be modified accordingly. More specifically, the estimation of the sparse component in (14) and (15) is no longer necessary, and the resulting algorithm becomes identical to the ALOHA image inpainting algorithm. We denote this modification using the AMF as AM-ALOHA. To demonstrate that our algorithm still outperforms the existing ones, we compared our method with median filtering (MF), adaptive median filtering (AMF), and AMF-based denoising with an edge-preserving prior (AM-EPR) using 25% salt and pepper noise. The results in Fig. 7 clearly demonstrate that the proposed AM-ALOHA outperforms all other algorithms.\n\n### V-C Multi-channel Denoising\n\nTo verify that the proposed method can be easily extended to multichannel images, we conducted experiments with colour RGB images. As discussed before, the noisy pixel locations can be either the same across channels or independent for each channel, so we conducted experiments under these two scenarios. Fig. 8 shows the reconstruction results when channel-independent impulse noise was added, whereas Fig. 9 corresponds to the scenario where the impulse noise was added at the same locations across the RGB channels. The proposed method recovered more detailed structures (e.g., the bundle of peppers and the edges of the peppers) than the TVL1 method, as shown in Figs. 8-9. Also, the cartoon-like artifacts were significantly reduced by the proposed method. 
In the inset images, the detailed structures of the peppers are magnified to demonstrate the superior performance of the proposed method over TVL1.\n\nOne interesting observation from these experiments was that the proposed reconstruction provided a better PSNR for the channel-independent impulse noise. This is because the noiseless pixel values from the other channels improve the inpainting of the noisy pixels by exploiting the correlation between the channels.",
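For the common-location case, the group-wise shrinkage (41) is the main new ingredient of the ADMM iteration; a minimal sketch, assuming the sparse components are stacked as an H × W × C array:

```python
import numpy as np

def group_shrink(E, lam):
    """Group-wise (vectorial) soft-thresholding across channels, as in Eq. (41).

    E   : H x W x C array of sparse components (one slice per channel).
    lam : threshold (tau / beta in the ADMM step)."""
    mag = np.sqrt((E ** 2).sum(axis=2, keepdims=True))       # per-pixel group magnitude
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return E * scale                                          # shrinks all channels jointly

# Pixels whose joint magnitude falls below lam are zeroed in every channel at
# once, which is exactly the common-support behaviour the mixed norm enforces.
E = np.zeros((2, 2, 3))
E[0, 0] = [3.0, 4.0, 0.0]      # group magnitude 5   -> survives a threshold of 1
E[1, 1] = [0.2, 0.1, 0.2]      # group magnitude 0.3 -> wiped out entirely
S = group_shrink(E, 1.0)
```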
"Fig. 8: Multi-channel denoising results under 30% random valued impulse noise at independent pixel locations in each RGB channel.",
"Fig. 9: Multi-channel denoising results under 30% random valued impulse noises at the common pixel locations across the RGB channels.\n\n### V-D Comparison with conventional RPCA",
"Fig. 10: Comparison of the conventional RPCA approach with the proposed method under 40% random valued impulse noise.\n\nTo verify that the lifting to a Hankel matrix is essential for the performance improvement, we applied the standard RPCA as an impulse noise removal algorithm and compared the results. Two types of RPCA were implemented: one using whole images, and the other using image patches of the same size as in our robust ALOHA. Note that the standard RPCA uses an image or patches as they are, without reformulating them into a Hankel structured matrix. For RPCA, we used the software packages provided by the original authors. We chose the parameters giving the best PSNR result for each reconstruction. As can be seen in Fig. 10, both RPCA implementations completely failed to remove the impulse noise, and the detailed image structures were distorted. On the other hand, the robust ALOHA provided nearly perfect noise removal. Such a remarkable performance improvement originates from an image model based on the low-rankness of the annihilating filter-based Hankel matrix, which again confirms that the robust ALOHA is a superior image model and denoising algorithm for images corrupted with impulse noise.\n\n## VI Discussion\n\nRecall that the proposed robust ALOHA is performed patch by patch, without considering additional similar patches. This is an important distinction from other denoising algorithms that use low-rank approaches [16, 52, 53, 54]. While the authors in [16, 52, 53, 54] used patch-based low-rankness, all of these methods required additional redundancies from, for example, multiple dynamic frames [16, 52] or groups of similar spectral patches [53, 54]. Even though such additional redundant information may introduce low-rankness, those approaches could not properly perform denoising without utilising such additional redundancies. 
On the other hand, the robust ALOHA exploits the low-rankness originating from the intrinsic spectral sparsity of an image patch, so no additional redundancy is necessary. Thus, it is more flexible and powerful.\n\nAs briefly discussed in the introduction, a recent work successfully demonstrated accurate prediction of target locations under occlusion using a sparse + low-rank decomposition of a Hankel structured matrix. However, unlike our robust ALOHA, one-dimensional trajectories extracted from video sequences are required as inputs to construct the Hankel structured matrix, because that algorithm was derived under the assumption that the trajectories follow linear time-invariant state-space models. On the other hand, the Hankel structured matrix in the robust ALOHA is derived from two-dimensional patches by exploiting the spectral-domain sparsity; thus, the construction of the Hankel matrix is different. Moreover, we exploit an SVD-free minimization algorithm instead of the augmented Lagrangian method (ALM) in [31, 55] to save computation. Therefore, we believe that there exist significant differences between the two approaches.\n\nFrom the sampling theory point of view, a recent paper by Chen and Chi provides a theoretical estimate of the fundamental performance of our robust ALOHA. The authors showed that the required number of samples, m, for the recovery of a signal corrupted with sparse outliers using (P) in (III-A) is given by\n\n m > c_1 \mu_1^2 c_s^2 r^2 \log^3(MN), \qquad (44)\n\nwhen the regularization parameter in (III-A) is chosen appropriately and the noise corruption fraction is sufficiently small. In (44), c_1 is a numerical constant depending on the corruption fraction, \mu_1 is the incoherence parameter, and r is the rank of the single-channel Hankel matrix. 
Because the rank of the Hankel matrix in our robust ALOHA equals the spectral sparsity of the patch, the theoretical result in (44) strongly supports our finding that, as long as the spectrum of a patch is sufficiently sparse, the proposed sparse + low-rank method can restore the corrupted signal even under significant impulse noise (in our simulations, up to 40%).\n\n## VII Conclusion\n\nIn this paper, we proposed a sparse + low-rank decomposition of annihilating filter-based Hankel matrices for impulse noise removal. The new algorithm, called robust ALOHA, extends the conventional RPCA approaches by exploiting the spectral-domain sparsity and the associated rank-deficient Hankel matrix. The robust ALOHA was implemented using an ADMM iteration initialized with the LMaFit algorithm. In our ADMM formulation, a factorization-based nuclear norm minimization was used instead of the SVD, which yields a computational gain. We demonstrated that the robust ALOHA outperforms the existing impulse noise removal algorithms by up to 8 dB. Furthermore, we showed that the robust ALOHA can handle salt and pepper noise by incorporating the estimated noise locations. In addition, the extension to impulse noise removal in colour channels is straightforward: the Hankel structured matrices are concatenated side by side and a common low-rankness is imposed.\n\nThe superior performance of the robust ALOHA, as well as of ALOHA inpainting, clearly shows that image modeling using the annihilating filter-based Hankel matrix is a very powerful tool with many image processing applications.\n\n## Acknowledgment\n\nThe authors would like to thank Dr. Nikolova for sharing the source code of AM-EPR. This work was supported by the Korea Science and Engineering Foundation under Grant NRF-2014R1A2A1A11052491.\n\n## References\n\n• R. H. Chan, C.-W. Ho, and M. Nikolova, “Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization,” Image Processing, IEEE Transactions on, vol. 14, no. 10, pp. 1479–1485, 2005.\n• H. 
Hwang, R. Haddad et al., “Adaptive median filters: new algorithms and results,” IEEE Trans. Image Process., vol. 4, no. 4, pp. 499–502, 1995.\n• T. Chen and H. R. Wu, “Adaptive impulse detection using center-weighted median filters,” IEEE Signal Process. Lett., vol. 8, no. 1, pp. 1–3, 2001.\n• M. Yan, “Restoration of images corrupted by impulse noise and mixed gaussian impulse noise using blind inpainting,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1227–1245, 2013.\n• T. Chen and H. R. Wu, “Space variant median filters for the restoration of impulse noise corrupted images,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 48, no. 8, pp. 784–789, 2001.\n• H.-L. Eng and K.-K. Ma, “Noise adaptive soft-switching median filter,” IEEE Trans. Image Process., vol. 10, no. 2, pp. 242–251, 2001.\n• G. Pok, J.-C. Liu, and A. S. Nair, “Selective removal of impulse noise based on homogeneity level information,” IEEE Trans. Image Process., vol. 12, no. 1, pp. 85–92, 2003.\n• M. Nikolova, “A variational approach to remove outliers and impulse noise,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 99–120, 2004.\n• A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011.\n• J. Yang, Y. Zhang, and W. Yin, “An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise,” SIAM Journal on Scientific Computing, vol. 31, no. 4, pp. 2842–2865, 2009.\n• S. H. Chan, R. Khoshabeh, K. B. Gibson, P. E. Gill, and T. Q. Nguyen, “An augmented Lagrangian method for total variation video restoration,” IEEE Trans. Image Process., vol. 20, no. 11, pp. 3097–3111, 2011.\n• D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.\n• E. J. Candès, J. Romberg, and T. 
Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.\n• R. E. Carrillo, K. E. Barner, and T. C. Aysal, “Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 392–408, 2010.\n• A. Majumdar, N. Ansari, H. Aggarwal, and P. Biyani, “Impulse denoising for hyper-spectral images: A blind compressed sensing approach ,” Signal Processing, vol. 119, pp. 136 – 141, 2016.\n• H. Ji, C. Liu, Z. Shen, and Y. Xu, “Robust video denoising using low rank matrix completion,” in\n\nProc. of IEEE Computer Soc. Conf. on Computer Vision and Pattern Recognition\n\n. IEEE, 2010, pp. 1791–1798.\n• K. H. Jin and J. C. Ye, “Annihilating filter-based low-rank Hankel matrix approach for image inpainting,” IEEE Trans. Image Process., vol. 24, no. 11, pp. 3498–3511, Nov 2015.\n• M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, 2002.\n• I. Maravic and M. Vetterli, “Sampling and reconstruction of signals with finite rate of innovation in the presence of noise,” IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788–2805, 2005.\n• K. H. Jin, D. Lee, and J. C. Ye, “A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix,” arXiv preprint arXiv:1504.00532, 2015.\n• ——, “A novel k-space annihilating filter method for unification between compressed sensing and parallel MRI,” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), April 2015, pp. 327–330.\n• J. Min, L. Carlini, M. Unser, S. Manley, and J. C. Ye, “Fast live cell imaging at nanometer scale using annihilating filter based low rank Hankel matrix approach,” in Wavelets and Sparsity XVI, SPIE Optical Engineering + Applications. 
International Society for Optics and Photonics, 2015, pp. 95 970V–95 970V.\n• K. H. Jin, J. Min, and J. C. Ye, “Patch based low rank structured matrix completion for accelerated scanning microscopy,” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), April 2015, pp. 1236–1239.\n• G. Ongie and M. Jacob, “Super-resolution MRI using finite rate of innovation curves,” arXiv preprint arXiv:1501.01697, 2015.\n• ——, “Recovery of piecewise smooth images from few Fourier samples,” arXiv preprint arXiv:1502.00705, 2015.\n• E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, p. 11, 2011.\n• \n\nJ. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,”\n\nIEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, 2009.\n• M. Tao and X. Yuan, “Recovering low-rank and sparse components of matrices from incomplete and noisy observations,” SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.\n• Z. Lin, M. Chen, and Y. Ma, “The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices,” arXiv preprint arXiv:1009.5055, 2010.\n• R. Chartrand, “Nonconvex splitting for regularized low-rank+ sparse decomposition,” IEEE Trans. Signal Process., vol. 60, no. 11, pp. 5810–5819, 2012.\n• M. Ayazoglu, M. Sznaier, O. Camps et al., “Fast algorithms for structured robust principal component analysis,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012, pp. 1704–1711.\n• S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends in Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.\n• Z. Wen, W. Yin, and Y. Zhang, “Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm,” Math. Prog. Comp., vol. 4, no. 4, pp. 
333–361, 2012.\n• T. F. Chan and J. Shen, “Nontexture inpainting by curvature-driven diffusions,” J. Vision Comm. Image Rep., vol. 12, no. 4, pp. 436–449, 2001.\n• F. Catté, P.-L. Lions, J.-M. Morel, and T. Coll, “Image selective smoothing and edge detection by nonlinear diffusion,” SIAM J. Numer. Anal., vol. 29, no. 1, pp. 182–193, 1992.\n• L. Alvarez, P.-L. Lions, and J.-M. Morel, “Image selective smoothing and edge detection by nonlinear diffusion. II,” SIAM J. Numer. Anal., vol. 29, no. 3, pp. 845–866, 1992.\n• M. J. Black, G. Sapiro, D. H. Marimont, and D. Heeger, “Robust anisotropic diffusion,” IEEE Trans. Image Process., vol. 7, no. 3, pp. 421–432, 1998.\n• G. R. Cross and A. K. Jain, “Markov random field texture models,” IEEE Trans. Pattern Anal. Mach. Intell., no. 1, pp. 25–39, 1983.\n• M. Li and T. Q. Nguyen, “Markov random field model-based edge-directed image interpolation,” IEEE Trans. Image Process., vol. 17, no. 7, pp. 1121–1128, 2008.\n• R. Chellappa and S. Chatterjee, “Classification of textures using Gaussian Markov random fields,” IEEE Trans. Acoust., Speech, Signal Process., vol. 33, no. 4, pp. 959–963, 1985.\n• F. S. Cohen, Z. Fan, and M. A. Patel, “Classification of rotated and scaled textured images using Gaussian Markov random field models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 2, pp. 192–202, 1991.\n• B. Manjunath and R. Chellappa, “Unsupervised texture segmentation using Markov random field models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 5, pp. 478–482, 1991.\n• E. J. Candes, Y. C. Eldar, T. Strohmer, and V. Voroninski, “Phase retrieval via matrix completion,” SIAM Review, vol. 57, no. 2, pp. 225–251, 2015.\n• X. Yuan and J. Yang, “Sparse and low-rank matrix decomposition via alternating direction methods,” Pacific Journal of Optimization, vol. 9, no. 1, pp. 167–180, 2013.\n• N. Srebro, “Learning with matrix factorizations,” Ph.D. dissertation, Dept, of Elect. Eng. and Comput. Sci., MIT, MA, 2004.\n• M. 
Signoretto, V. Cevher, and J. A. Suykens, “An SVD-free approach to a class of structured low rank matrix optimization problems with application to system identification,” in IEEE Conf. on Decision and Control, no. EPFL-CONF-184990, 2013.\n• G. Harikumar and Y. Bresler, “Exact image deconvolution from multiple FIR blurs,” IEEE Transactions on Image Processing, vol. 8, no. 6, pp. 846–862, 1999.\n• ——, “Perfect blind restoration of images blurred by multiple filters: Theory and efficient algorithms,” IEEE Transactions on Image Processing, vol. 8, no. 2, pp. 202–219, 1999.\n• ——, “FIR perfect signal reconstruction from multiple convolutions: minimum deconvolver orders,” IEEE Transactions on Signal Processing, vol. 46, no. 1, pp. 215–218, 1998.\n• S. Ramani and J. Fessler, “Parallel MR image reconstruction using augmented Lagrangian methods,” IEEE Trans. Med. Imag., vol. 30, no. 3, pp. 694–706, 2011.\n• P. Getreuer, “Rudin-Osher-Fatemi total variation denoising using split Bregman,” Image Processing On Line, vol. 10, 2012.\n• H. Ji, S. Huang, Z. Shen, and Y. Xu, “Robust video restoration by joint sparse and low rank matrix approximation,” SIAM Journal on Imaging Sciences, vol. 4, no. 4, pp. 1122–1142, 2011.\n• \n\nW. Dong, G. Shi, and X. Li, “Nonlocal image restoration with bilateral variance estimation: a low-rank approach,”\n\nIEEE Trans. Image Process., vol. 22, no. 2, pp. 700–711, 2013.\n• J. Orchard, M. Ebrahimi, and A. Wong, “Efficient nonlocal-means denoising using the SVD,” in 2008. ICIP 2008. 15th IEEE International Conference on Image Processing. IEEE, 2008, pp. 1732–1735.\n• T. Ding, M. Sznaier, and O. Camps, “Receding horizon rank minimization based estimation with applications to visual tracking,” in 2008. 47th IEEE Conference on Decision and Control. IEEE, 2008, pp. 3446–3451.\n• Y. Chen and Y. Chi, “Robust spectral compressed sensing via structured matrix completion,” IEEE Trans. Inf. Theory, vol. 60, no. 10, pp. 6576–6601, 2014."
]
https://tutorbin.com/questions-and-answers/mrs-poulters-classes-took-a-test-yesterday-and-the-grades-for-the-asse | [
"Question\n\nLinear Algebra\n\nMrs. Poulter's classes took a test yesterday, and the grades for the assessment are normally distributed with a mean of 85 and a standard deviation of 5.\n\nBetween which two grades does the middle 95% of the class fall?\n\nWhat is the probability of selecting a student who scored below 90 on the test?\n\nWhat is the probability of selecting a student who scored above 90 on the test?\n\nWhat is the probability of selecting a student who scored below 75 on the test?\n\nWhat is the probability of selecting a student who scored between 80 and 95 on the test?\n\nWhat is the probability of selecting a student who scored between 90 and 100 on the test?",
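These can all be answered with the standard normal CDF Φ (the first part via the empirical rule, within two standard deviations); a quick check using Python's math.erf:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 85, 5
z = lambda x: (x - mu) / sigma

lo, hi = mu - 2 * sigma, mu + 2 * sigma        # middle 95%: 75 to 95
p_below_90 = phi(z(90))                        # ~0.8413
p_above_90 = 1 - phi(z(90))                    # ~0.1587
p_below_75 = phi(z(75))                        # ~0.0228
p_80_to_95 = phi(z(95)) - phi(z(80))           # ~0.8186
p_90_to_100 = phi(z(100)) - phi(z(90))         # ~0.1573
```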
"### Question 52404",
"Linear Algebra\n\nIn an election campaign, the popularity, B, as a percent of voters, of the governing Blue party can be modelled by a function of time, t, in days throughout the campaign as B(t) = 40 − 0.5t. The popularity, R(t), of the opposing Red party can be modelled by a composite function of B(t), R(B(t)) = 20 + 0.75[40 − B(t)].\nGraph B(t) and describe the trend.\n\n### Question 45665",
"Linear Algebra\n\n\\text { 4] Find } g(0)-g(9)+g(2) \\text {, if }\ng(x)=\\left\\{\\begin{aligned} \\frac{x+1}{2}, & \\text { if } x \\text { is odd } \\\\ \\frac{x}{2}, & \\text { if } x \\text { is even } \\end{aligned}\\right.\n\n### Question 45664",
"Linear Algebra\n\n\\text { 3] Find } f(4)-f(2)+f(3) \\text {, if }\nf(x)=\\left\\{\\begin{array}{ll} \\frac{x+1}{2}, & \\text { if } x \\text { is odd } \\\\ \\frac{x}{4}, & \\text { if } x \\text { is even } \\end{array}\\right.\n\n### Question 45663",
"Linear Algebra\n\n2] Find the Domains of the following functions:\n\\text { a) } f(x)=\\sqrt{15-5 x}\nf(x)=\\frac{x^{2}-2 x+1}{x^{2}-4 x-21}\nf(x)=\\frac{x^{2}-2 x+1}{\\sqrt{16-2 x}}\n\n### Question 45662",
"Linear Algebra\n\na) Give the definition of a rational function. [5 pts]\nb) Give an example of a polynomial function of degree 3. [5 pts]\nc) Can a constant function be a polynomial and a rational function at the same time? Explain your answer. [5 pts]\nd) Give an example of a non-polynomial function and explain why not apolynomial function. [10 pts)\n\n### Question 45333",
"Linear Algebra\n\n5) Let\n\\begin{array}{c} f: \\mathbb{R} \\rightarrow \\mathbb{R} \\\\ f(x)=\\left\\{\\begin{array}{ll} x^{2}-3 \\cos (\\pi x) & x<0 \\\\ x-4 e^{-2 x} & x \\geq 0 . \\end{array}\\right. \\end{array}\nCalculate\n\\int_{-1}^{2} f(x) d x\npresenting the result in simplified form.\n\n### Question 45332",
"Linear Algebra\n\n4) Let f: ℝ → ℝ be a function differentiable on ℝ such that its derivative f' has at most one real zero.\nProve that the equation f(x) = 0 has at most 2 real roots.\n\n### Question 45331",
"Linear Algebra\n\n3) Consider the following function\nf(x)=\\left\\{\\begin{array}{ll} f: \\mathbb{R} \\rightarrow \\mathbb{R} & \\\\ \\frac{x^{2}-4 x+\\cos (\\sin (x))}{x^{4}+4 x^{2}+1}, & x \\leq 0 \\end{array}\\right.\na) Show that the function f is continuous on ℝ⁺ and on ℝ⁻ but discontinuous at the point x = 0.\nb) Justify whether f is differentiable at x = 0.\n\n### Question 45330",
"Linear Algebra\n\n2) Prove by definition that\n\\lim _{x \\rightarrow 0} x^{2} \\cos \\left(e^{x}\\right)=0\n\n### Question 45329",
"Linear Algebra\n\n1) Calculate the following limit:\n\\lim _{x \\rightarrow+\\infty} \\frac{x^{2}\\left(e^{-3 x}+1\\right)+x \\cos (5 x)}{x^{2}+7 x+1}",
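For the limit above, e^{−3x} → 0 and the x cos(5x) term is O(1/x) after dividing by the denominator, so the expression tends to x²/x² = 1; a numerical sanity check:

```python
from math import cos, exp

def f(x):
    return (x**2 * (exp(-3 * x) + 1) + x * cos(5 * x)) / (x**2 + 7 * x + 1)

# As x grows, f(x) settles near 1 (the oscillating term is bounded).
print(f(1e6))
```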
]
https://study.com/academy/answer/a-stock-has-returns-for-five-years-of-14-16-12-23-and-4-respectively-the-stock-has-an-average-return-of-what-percent-and-a-standard-deviation-of-what-percent-7-40-13-54-7-04-14-63-7-40.html
"# A stock has returns for five years of 14%, -16%, 12%, 23%, and 4%, respectively. The stock has an...\n\n## Question:\n\nA stock has returns for five years of 14%, -16%, 12%, 23%, and 4%, respectively. The stock has an average return of what percent and a standard deviation of what percent?\n\n7.40; 13.54\n\n7.04; 14.63\n\n7.40; 14.72\n\n8.60; 14.63\n\n8.60; 16.36\n\n## Standard Deviation of Returns:\n\nThe standard deviation of returns is an important measure of the risk of an investment. The higher the standard deviation of returns, the less desirable the investment is to a risk averse individual.\n\n## Answer and Explanation:\n\nThe expected return is:\n\n{eq}\\bar {r}=\\frac{1}{n}\\sum_{i=1}^{n}r_{i} {/eq}, where ri is the return in year i and n is the number of observations.\n\n{e...\n\nSee full answer below.\n\nBecome a Study.com member to unlock this answer! Create your account\n\n#### Learn more about this topic:",
https://edplabelec.com/electricity-generation/what-does-mw-mean-in-power-plants.html
"What does MW mean in power plants?\n\nContents\n\nOne megawatt (MW) = 1,000 kilowatts = 1,000,000 watts. For example, a typical coal plant is about 600 MW in size. Gigawatts measure the capacity of large power plants or of many plants. One gigawatt (GW) = 1,000 megawatts = 1 billion watts.\n\nWhat is a 1000 MW power plant?\n\nMegawatts electric or MWe is one of the two values assigned to a power plant, the other being megawatts thermal or MWt. … For example, a coal-fired power plant rated at 1000 MWe and 3000 MWt will require supply of 3000 MW of heat from burning coal for every 1000 MW of electricity it produces.\n\nWhat is the meaning of 500 MW power plant?\n\n500 MW is the rate at which the power plant is producing electricity. 500 MWh is the total amount of electric energy that it produces during that hour. … Here is how to think about this problem: the plant produced 100 MW of electric power for 1/3 of an hour, 500 MW for 1/3 of an hour and 300 MW for 1/3 of an hour.\n\nHow many homes can a 1 MW solar plant power?\n\nTo put that number in perspective, the Solar Energy Industries Association (a U.S. trade association) calculates that on average 1 megawatt of solar power generates enough electricity to meet the needs of 164 U.S. homes. 100 megawatts of solar power is thus enough, on average, to power 16,400 U.S. homes.\n\nWhy is power plant rated MW?\n\nThat’s why we rated a power plant capacity in MW instead of MVA. Its mean no matter how large your generator is, but it depends on the capacity of the engine (Prime mover/Turbine) I.e. a 50MW turbine connected to a 90MVA alternator in a power plant will generate only 50MW at full load. … (MW = MVA x P.f).\n\nIs MWe same as MW?\n\n(See definition for apparent power.) Megawatt (MW): One million watts of electricity. 
Megawatt electric (MWe): One million watts of electric capacity.

How many MW does a house use?

National Average Homes/MW Methodology

The current national average (through 2018) of homes powered by a MW of solar is 190.

What does a 100 MW power plant mean?

A megawatt (MW) is one million watts and a kilowatt (kW) is one thousand watts. … For instance, a 100 MW rated wind farm is capable of producing 100 MW during peak winds, but will produce much less than its rated amount when winds are light.

What can a megawatt power?

What can you do with a megawatt-hour of electricity?

• Power the average American home for 1.2 months.
• Drive an electric vehicle 3,600 miles.
• Power two 60-watt lightbulbs non-stop for a year.
• Smelt 137 pounds of aluminum.
• Toast 89,000 slices of bread.
• Run an average home pool pump for 5 months.

How do you denote a megawatt?

megawatt: mega- (an SI prefix meaning 1 million) + watt = 1,000,000 watts. Used without a period. A symbol in SI, the International System of Units. In power plant engineering, MWe is the symbol for the power output as megawatts of electricity, and MWt or MWth is the symbol for the several times larger thermal output.

How big is a 1 MW solar farm?

Solar Farm Acres Per Megawatt

Typically, a 1 MW solar power plant installation will require around 4-5 acres, assuming that every kilowatt of solar electricity production will require about 100 square feet of space.

What is the cost of a 1 MW solar power plant?

1 MW Solar Power Plant Cost in India:

Capacity of Power Plant: 1 MW
Sale of Electricity: Rs. 6.49
Cost of Project per MW: 450 Lakh
O&M Cost per MW: 8 Lakh/year
Depreciation: 5.28%

How much is a megawatt worth?

According to Lawrence Berkeley National Laboratory's (LBNL) 2017 Wind Technologies Report and the 2018 edition of the Utility Scale Solar Report, the national average levelized price of wind PPAs in 2017 was around $20 per MWh and the national average levelized price of PPAs in 2017 for large solar projects was $41 per …
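The MW-versus-MWh distinction is just power multiplied by time. A minimal sketch of the 500 MW worked example in plain Python (the three power/duration segments come straight from the text; the homes-per-MW figure is the SEIA average quoted above):

```python
# (power in MW, duration in hours) segments from the 500 MW example
segments = [(100, 1/3), (500, 1/3), (300, 1/3)]

energy_mwh = sum(power * hours for power, hours in segments)
print(round(energy_mwh, 6))  # -> 300.0 MWh produced over the hour

# SEIA average: 164 U.S. homes per MW of solar
homes = 100 * 164
print(homes)                 # -> 16400 homes for a 100 MW solar farm
```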
https://ch.mathworks.com/help/bioinfo/ref/selectphytree.html
"Documentation\n\n# select (phytree)\n\nSelect tree branches and leaves in phytree object\n\n## Syntax\n\n```S = select(Tree, N) [S, Selleaves, Selbranches] = select(...) select(..., 'Reference', ReferenceValue, ...) select(..., 'Criteria', CriteriaValue, ...) select(..., 'Threshold', ThresholdValue, ...) select(..., 'Exclude', ExcludeValue, ...) select(..., 'Propagate', PropagateValue, ...) ```\n\n## Arguments\n\n `Tree` Phylogenetic tree (`phytree` object) created with the function `phytree`. `N` Number of closest nodes to the root node. `ReferenceValue` Property to select a reference point for measuring distance. `CriteriaValue` Property to select a criteria for measuring distance. `ThresholdValue` Property to select a distance value. Nodes with distances below this value are selected. `ExcludeValue` Property to remove (exclude) branch or leaf nodes from the output. Enter `'none'`, `'branches'`, or `'leaves'`. The default value is `'none'`. `PropagateValue` Property to select propagating nodes toward the leaves or the root. `S` Logical vector for all selected nodes. `Selleaves` Logical vector for selected leaves. `Selbranches` Logical vector for selected branches.\n\n## Description\n\n`S = select(Tree, N)` returns a logical vector (`S`) of size ```[NumNodes x 1]``` indicating the `N` closest nodes to the root node of a `phytree` object (`Tree`) where `NumNodes = NumLeaves + NumBranches`. The first criterion used is branch levels, then patristic distance (also known as tree distance). By default, `select` uses `Inf` as the value of `N`, and `select(Tree)` returns a vector with values of `true`.\n\n```[S, Selleaves, Selbranches] = select(...)``` returns two additional logical vectors, one for the selected leaves and one for the selected branches.\n\n```select(..., 'PropertyName', PropertyValue, ...)``` uses additional options specified as one or more name-value pair arguments. Each `PropertyName` must be enclosed in single quotation marks and is case insensitive. 
These name-value pairs are as follows:

`select(..., 'Reference', ReferenceValue, ...)` changes the reference point(s) used to measure closeness. `ReferenceValue` can be `'root'` (default) or `'leaves'`, or an index that points to any node of the tree. When using `'leaves'`, a node can have multiple distances to its descendant leaves (nonultrametric tree). If so, `select` considers the minimum distance to any descendant leaf.

`select(..., 'Criteria', CriteriaValue, ...)` changes the criteria used to measure closeness. If `CriteriaValue = 'levels'` (default), the first criterion is branch levels and then patristic distance. If `CriteriaValue = 'distance'`, the first criterion is patristic distance and then branch levels.

`select(..., 'Threshold', ThresholdValue, ...)` selects all the nodes where closeness is less than or equal to the threshold value (`ThresholdValue`). You can use either `'Criteria'` or `'Reference'` in conjunction with this name-value pair. If `N` is not specified, then `N = Inf`. Otherwise you can limit the number of selected nodes by `N`.

`select(..., 'Exclude', ExcludeValue, ...)` sets a postfilter which excludes all the branch nodes from `S` when `ExcludeValue = 'branches'`, or excludes all the leaf nodes when `ExcludeValue = 'leaves'`. The default is `'none'`.

`select(..., 'Propagate', PropagateValue, ...)` activates a postfunctionality that propagates the selected nodes to the leaves when `PropagateValue` is set to `'toleaves'`, or toward the root (finding a common ancestor) when `PropagateValue` is set to `'toroot'`. The default value is `'none'`. `PropagateValue` may also be `'both'`. The `'Propagate'` property acts after the `'Exclude'` name-value pair.

## Examples

```
% Load a phylogenetic tree created from a protein family:
tr = phytreeread('pf00002.tree');

% To find close products for a given protein (e.g. vipr2_human):
ind = getbyname(tr,'vipr2_human');
[sel,sel_leaves] = select(tr,'criteria','distance',...
    'threshold',0.6,'reference',ind);
view(tr,sel_leaves)

% To find potential outliers in the tree, use
[sel,sel_leaves] = select(tr,'criteria','distance',...
    'threshold',.3,...
    'reference','leaves',...
    'exclude','leaves',...
    'propagate','toleaves');
view(tr,~sel_leaves)
```
http://excel-in-practice.com/sum-numbers-excel
"# Sum numbers in EXCEL\n\n### EXCEL Functions: SUMIFS - Sum based on multiple criterias\n\nPerform addition of values in the row fields which satisfy two criteria (And Condition). Consider the Text of the criteria and Numeric criteria in the Date format. Let us consider the function SUMIFS(), the English version of SUMIFS().\n\n### EXCEL Functions: SUM\n\nis There a difference between addition values by using functions SUM(), the English version of SUM(), and the addition operation (+)? It turns out there is a difference.\n\n### EXCEL Functions: SUMIF - Sum based on single criteria\n\nTo sum the values that meet the filter criteria (condition), use the function SUMIF(), the English version of SUMIF().\n\n### EXCEL Functions: DSUM - Sum based on multiple conditions\n\nDSUM(), the English version of DSUM(), sums up the numbers in the data table that satisfy the given conditions.\n\n### Sum values based on one TEXT criteria in EXCEL\n\nTo sum values in one range based on another range, use the function SUMIF(). Consider the case when the criterion is applied the range containing text values.\n\n### Sum values based on multiple conditions in EXCEL (OR Logic, one Column)\n\nwill Produce a combination of values that satisfy at least one of 3 criteria (Condition OR). For example, in a table with a list of Fruits and their quantities in stock, will select rows where the column Fruits are Apples OR Oranges OR Pears, then sum up the boxes in these lines.\n\n### Extract unique values from EXCEL Table and Sum values in another Column\n\nyou Have a table consisting of two columns: a column of duplicate text values and column numbers. We will create a table containing only rows with unique text values. By the numeric column will produce the summation of the corresponding values.\n\n### Sum only values with duplicates in EXCEL\n\nthe Range of summation contains duplicate values. 
Will produce only those numbers which have duplicates.\n\n### Sum 3 or nth Largest Numbers in Excel (without Duplicates)\n\nFold three (four, five, etc.) of the largest values in a range. If the range is also repetitive, to take them into account will not.\n\n### Sum every other, third or nth value in Excel range\n\nwe All remember how exhausting it is to write in the manual a formula to add each second value in the list: A1+A3+A5+... luckily there is a function SUMPRODUCT().\n\n### Running total in Excel (Cumulative Sum formula)\n\nUse mixed addressing for to create a column with the summation of the cumulative total.\n\n### Sum only if unique distinct value in another column in EXCEL\n\ngiven a table with two columns: a text column with duplicates and a numeric column. Will sum only those numbers that match non-recurring text.\n\n### Sum 3 or nth Largest Numbers in Excel\n\nFold three (four, five, etc.) of the largest values in a range.\n\n### Sum every other, third or nth value in Excel row\n\nFind the sum of values located in each third line, using the function DSUM() and SUMPRODUCT().\n\n### Sum Values Ignoring #N/A Error in EXCEL\n\nIf the range of summation is found, the error value #N/A (value not available), the function SUM() will return an error. Use the function SUMIF() to handle such situations.\n\n### Sum only unique distinct values in EXCEL\n\nthe Range of summation contains duplicate values. Fold Chisel only those that do not have duplicates.\n\n### Sum only unique values in EXCEL\n\nLet the range of summation contains duplicate values. Perform the addition of NUMBERS without dupes.\n\n### Sum all digits in a number in Excel\n\nsuppose you have the number 456258 and need to find the sum of all its digits, i.e. 4+5+6+2+5+8. This can be done a single equation.\n\n### Sum values based on multiple conditions in EXCEL (2 AND Logic)\n\nwill give the count of values that meet three criteria, which form the Conditions I. 
2 for Example, in a table with a list of Fruits and their quantities in stock, will select rows where the column Fruit is Apples, and the balance in stock of at least 4 (boxes) and not more than 90, then sum up the number of boxes of selected rows.\n\n### Sum values based on multiple conditions in EXCEL (OR Logic + AND Logic)\n\nare counting the table rows that meet three criteria, which form the Condition OR the Condition, And then sum up the numeric values of these lines. for Example, in a table with a list of Fruits and their quantities in stock, will select rows where column is Fruit Peaches OR Apples, and with the balance in stock of at least 10 (boxes), and then sum up the boxes in these rows.\n\n### Sum values based on multiple conditions in EXCEL (OR Logic, 2 Columns)\n\nwill Produce a combination of values that satisfy at least one of the 2 criteria (Condition OR). For example, in a table with a list of Fruits and their quantities in stock, will select rows in which column the Fruit is Apples OR strings in stock at least 12 (boxes), and then sum up the boxes in these rows. Ie first, selected of the Party of Apples and then added the party any fruit (except apples) balance of stock at least 12 (boxes).\n\n### Sum values based on one DATE criteria in EXCEL\n\nTo sum values in one range based on another range, use the function SUMIF(). Consider the case when the criterion applies to a range of dates.\n\n### Sum values based on multiple conditions in EXCEL (Introduction)\n\nConsider adding the values in the case of application of several criteria. An example is finding the sum of numbers that fall within a certain interval, or the addition of sales of several products. For this purpose you can use different functions: SUMPRODUCT() SUMIF(), SUMIFS(), DSUM() and array formulas\n\n### Change Multiple Numbers at once in Excel\n\nQuick fold/ divide/ multiply numeric values from a cell range into a user-specified number. 
This approach allows to reduce or increase the length of numbers in the selected range, quickly allocate VAT, etc.\n\n### Fast SUM Function Entry Techniques in Excel\n\nSUM() is probably the most used function in EXCEL. Of course I want to spend on her input as possible.\n\n### Sum the absolute values in Excel\n\nIf the range contains positive and negative values, and need to get the sum of the modules (absolute values) these values, you can do this by writing a formula in one cell."
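The SUMIF/SUMIFS pattern, adding the numbers whose companion cell matches a criterion, is easy to see in plain code. A minimal sketch (Python rather than an Excel formula; the fruit and box values are made-up illustration data):

```python
fruits = ["Apples", "Oranges", "Apples", "Pears", "Apples"]
boxes  = [5, 12, 7, 4, 3]

# SUMIF analogue: sum boxes where the Fruit column equals "Apples"
apples_total = sum(b for f, b in zip(fruits, boxes) if f == "Apples")
print(apples_total)  # -> 15

# SUMIFS analogue (AND of two criteria): Apples with at least 4 boxes
big_apples = sum(b for f, b in zip(fruits, boxes) if f == "Apples" and b >= 4)
print(big_apples)    # -> 12
```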
https://cementanswers.com/how-is-pythagorean-theorem-used-in-construction/
"",
null,
"# How is Pythagorean theorem used in construction?\n\nHow is Pythagorean theorem used in construction? The Pythagorean theorem can be used to build staircases, roofs, and can even be used to calculate the angle for safely placing a ladder when you need to work in high areas. It’s one of the most popular mathematical rules out there because it comes in handy any time you need to create a 90 degree angle.\n\nHow is the Pythagorean Theorem used by Carpenter’s when they are building structures? The Pythagorean theorem is used extensively in carpentry and construction. Almost every carpentry project involves some combination of squares and triangles. The square of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides.\n\nIs the Pythagorean Theorem used in architecture? The Pythagorean Theorem was founded by Pythagoras. It was regarding the study of right triangles and the relationship between the legs and the hypotenuse. In real life, the Pythagorean Theorem is used everyday for architects to check their buildings’ proportions and make sure that they can be built.\n\nHow do you find A2 and B2 with only C2? Introduction: Pythagorean Theorem\n\nThe formula is A2 + B2 = C2, this is as simple as one leg of a triangle squared plus another leg of a triangle squared equals the hypotenuse squared.\n\n## How is Pythagorean theorem used in construction? – Related Questions\n\n### Is Pythagorean Theorem only for right triangles?\n\nThe hypotenuse is the longest side and it’s always opposite the right angle. Pythagoras’ theorem only works for right-angled triangles, so you can use it to test whether a triangle has a right angle or not. 
In the triangle above, if.\n\n### What shape does the Pythagorean Theorem deal with?\n\nPythagorean theorem, the well-known geometric theorem that the sum of the squares on the legs of a right triangle is equal to the square on the hypotenuse (the side opposite the right angle)—or, in familiar algebraic notation, a2 + b2 = c2.\n\n### What jobs use Pythagorean Theorem?\n\nThere are many relevant applications that require the use of the Pythagorean Theorem. Engineers and astronomers use the Pythagorean Theorem to calculate the paths of spacecraft, including rockets and satellites. Architects use the Pythagorean Theorem to calculate the heights of buildings and the lengths of walls.\n\n### What is the Pythagorean Theorem used for in math?\n\nThe Pythagoras theorem is a mathematical law that states that the sum of squares of the lengths of the two short sides of the right triangle is equal to the square of the length of the hypotenuse. The Pythagoras theorem is algebraically written as: a2 + b2 = c2.\n\n### How will you remember the Pythagorean Theorem?\n\n“Hey,” the other squirrels cried in disappointment, “how come your nut is as big as both of us put together?” Ephraim cheekily replied, “Because the square nut on the hippopotamus is equal to the sum of the square nuts on the other two hides.” It is very easy to memorize the Pythagorean theorem.\n\n### Why Pythagoras theorem is important?\n\nThe Pythagorean Theorem is so important in the world of Mathematics. When we deal with the right triangle, Pythagorean relation helps to study the length measures and establishes the relationship between the three sides of a right angled triangle.\n\n### What is the 3 4 5 Triangle rule?\n\nThe 3:4:5 triangle is the best way I know to determine with absolutely certainty that an angle is 90 degrees. 
This rule says that if one side of a triangle measures 3 and the adjacent side measures 4, then the diagonal between those two points must measure 5 in order for it to be a right triangle.\n\n### What does the letter C stand for in the formula a2 b2 c2?\n\nThe Pythagorean Theorem is a formula that gives a relationship between the sides of a right triangle The Pythagorean Theorem only applies to RIGHT triangles. Side “c” is called the hypotenuse. The sides adjacent to the right angle are named “a” and “b”.\n\n### What is the formula of A² B² C²?\n\nA theorem stating that the square of the length of the hypotenuse (the longest side) of a right triangle is equal to the sum of the squares of the lengths of the other sides. It is mathematically stated as c2 = a2 + b2, where c is the length of the hypotenuse and a and b the lengths of the other two sides."
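A tiny numeric check of the 3:4:5 rule and the ladder use case (plain Python; the 6 ft / 8 ft ladder numbers are made-up illustration values, not from the article):

```python
import math

# 3:4:5 rule: legs of 3 and 4 give a diagonal of exactly 5
print(math.hypot(3, 4))  # -> 5.0

# Ladder example (illustrative numbers): base 6 ft from the wall,
# reaching 8 ft up the wall, so the ladder itself must be 10 ft long
ladder = math.hypot(6, 8)
print(ladder)            # -> 10.0
```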
http://blogs.cofc.edu/math/undergraduate-research-opportunities/visualizing-matrix-soliton-interactions/
"# Quaternionic KdV Solutions of Rational and Periodic Type\n\nBackground: The KdV equation is a famous nonlinear partial differential equation. It is most famous as a model of water waves and for its soliton solutions such as the one shown in this animation:",
[Animation: a 2-soliton solution of the KdV equation]
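For reference (the post itself does not write the equation out, but this is the standard form in one common normalization), the KdV equation and its 1-soliton solution are:

```latex
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct - a)\right),
```

where $c > 0$ is the soliton's speed (faster solitons are taller and narrower) and $a$ is a phase shift.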
"(For more information about solitons, click here.) The soliton solutions are singular limits of the periodic solutions that can often be seen manifested as ocean waves. Less well-known are its rational solutions, but those are of interest also because they are simple to describe, because they have a symmetry called “bispectrality, and because they have a surprising connection to particle dynamics.\n\nThe quaternions are a generalization of the real and complex numbers first discovered by William Rowan Hamilton in the 19th century. Like the complex numbers, these involve square roots of -1, but the quaternions are four-dimensional (rather than two-dimensional like the complex numbers) and do not satisfy the commutative law. They have a more complicated algebraic structure, which is closely related to the symmetries of physical space. (For more information about quaternions, click here.)\n\nIn Summer 2018, a team of three students and I studied the rational, solitonic, and periodic solutions to KdV that take values in the quaternions. We found some very interesting results, but there are still open questions.\n\nResearch Problem: What is the behavior of the nonlinear superposition of different quaternion-valued periodic solutions to the KdV equation? Can the quaternion-valued rational solutions always be written as a sum of square inverses of linear terms (as in the commutative case)? What is the connection between the quaternion-valued rational solutions and particle systems of Calogero-Moser type? And, precisely which rational Lax operators are bispectral?\n\nWhat Courses/Skills Do I Need To Have Taken? MATH 221 (Calculus III) and MATH 203 (Linear Algebra) along with some computer programming experience would be enough to work on this project.\n\nWhen Can I Work on the Project? I would be able to work on this project in Summer 2019 or later.\n\nIs Funding Available? 
There has been no grant funding set aside for this project, however there may be grants that we can apply for once we formalize a project."
https://indiashines.in/cbse/ncert-solutions-for-class-6th-maths-chapter-1476453/
"# NCERT Solutions For Class 6th Maths Chapter 14 : Practical Geometry\n\nCBSE NCERT Solutions For Class 6th Maths Chapter 14 : Practical Geometry. NCERT Solutins For Class 6 Mathematics. Exercise 14.1, Exercise 14.2, Exercise 14.3, Exercise 14.4, Exercise 14.5, Exercise 14.6\n\nNCERT Solutions for Class VI Maths: Chapter 14 – Practical Geometry",
[The worked solutions for this chapter are provided on the source page as five scanned images: NCERT-Solutions-for-Class-6th-Maths-14-1.jpg through NCERT-Solutions-for-Class-6th-Maths-14-5.jpg.]
https://airlinecareer.com/hours-and-minutes-study-guide/
# Hours and Minutes Study Guide

### It's Not That Complicated!

Although hours and minutes computations may seem intimidating at first, once you get the hang of it, it really is not all that complicated. There are several methods that you can use to simplify the process of adding or subtracting time, but for our purposes, we are only going to discuss two of the methods that we feel are the easiest to grasp. One requires the use of a standard calculator, while the other does not.

### Hours and Minutes Conversion

The first method of computation requires that you convert hours to minutes or minutes to hours when necessary so that you can add or subtract with a common denominator. This method does not require the use of a calculator. Let's say you are asked how many hours you were on duty if you checked in at 12:14 and were off duty at 18:08. In this case, you must subtract 12 + 14 (twelve hours and fourteen minutes) from 18 + 08 (eighteen hours and eight minutes). When subtracting, you must always remember that both the hours and minutes of the number you are subtracting from should be larger than the other number. In this example, the subtraction looks like this:

• (18 + 08) – (12 + 14)

The hour number (18) is larger than the hour number (12), but the minute number (08) is smaller than the other minute number (14). So we must make it larger. How do you do that? You borrow some minutes from the hour! Since there are 60 minutes in an hour, we move 60 from the hour column and add it to the minutes. That reduces the hour column by exactly 60 minutes, or one hour. Now the problem looks like this:

• (17 + 68) – (12 + 14)

Now the subtraction is a simple matter. Just subtract 12 from 17 for the hours and 14 from 68 for the minutes. Your answer should be:

• 5 + 54

Here's an example of adding two times together. In this example, another hours/minutes conversion will be required before performing the calculation. Let's say you were asked to add together the following two duty periods:

• (5 + 56) + (6 + 17)

For addition problems, just add the two together, like any other math problem, but do not carry over anything. In other words, add hours to hours and minutes to minutes. Your answer should look like this:

• 11 + 73

Since eleven hours and seventy-three minutes makes no sense, we need to do some converting. Just as we did with the subtraction problem, we will move some minutes, but this time from the minutes to the hours column. Again, since we know that there are 60 minutes in an hour, we will move 60 minutes from the minutes column back to the hours column, increasing it by one hour, and we will continue doing so until the answer makes sense (the minutes are less than 60). Here is what your answer should look like:

• 12 + 13

So that is basically how you would do every subtraction and addition problem you are faced with. Always remember that if things don't look right, you need to move some minutes either from or to the hours column. You can practice a few of these on your own, or you can take the Hours and Minutes Test now and apply the rules we have taught you.

### The 940 Rule

The second method uses a standard calculator. Key in each time as the hours, then a zero as a separator, then the two-digit minutes, so 4 + 08 is entered as 4 0 08. Let's add 4 + 08 and 5 + 23. Here are the keys you would depress on the calculator:

• Our Method: (4 0 08) + (5 0 23)
• Equivalent to: (4 + 08) + (5 + 23)

If you have been following along, your answer on the calculator to this point should be:

• 9 0 31 (which equates to 9 + 31)

That one was easy. Just as in the earlier problems you did without a calculator, you must look at the result and see if it makes sense. Here we have nine hours and thirty-one minutes. It does make sense, so that is your answer. But what if we had a problem that didn't make sense? Let's try the following one. Add 4 + 45 and 7 + 56. Again, here are the keys you would depress on the calculator:

• Our Method: (4 0 45) + (7 0 56)
• Equivalent to: (4 + 45) + (7 + 56)

If you have been following along, your answer on the calculator should be:

• 11101

It is very apparent that the number makes no sense. So here is where the "940 rule" comes into play. Add 940 to your answer until the number makes sense. In other words, until it looks like hours and minutes, with a (0) in between the hours and minutes. In this problem, you would add 940 only one time, because the first answer that comes up should look like this:

• 11101 + 940 = 12 0 41 (which equates to 12 + 41)

In some cases, when adding a series of times, you will be required to add 940 several times before the proper hours and minutes will display. Let's look at another example. Let's add 12 + 56 and 34 + 59 and 45 + 56. Here's how it would look:

• Our Method: (12 0 56) + (34 0 59) + (45 0 56)
• Equivalent to: (12 + 56) + (34 + 59) + (45 + 56)

Your answer on the calculator should be:

• 91171

Again, it is apparent that this is not in an hours and minutes format, so we have to add 940 until it makes sense. But in this case, it requires that we add it more than once. After the first entry, here's what it will look like:

• 91171 + 940 = 92111

After the second 940 addition, we have something that looks right, and this is your final answer:

• 92111 + 940 = 93 0 51 (which equates to 93 + 51)

The 940 rule also works well for subtracting. The only difference is that you must subtract 940 from your answer rather than add it. Here's an example. Subtract 14 + 34 from 23 + 12. Here's how it would look:

• Our Method: (23 0 12) – (14 0 34)
• Equivalent to: (23 + 12) – (14 + 34)

The calculator shows 8978, which is not in an hours and minutes format, so subtract 940 once: 8978 – 940 = 8 0 38, which equates to 8 + 38.
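Both methods above translate directly into a few lines of code. The sketch below is not part of the original guide, and the function names are invented for illustration; the 940 rule is implemented literally as described, adding (or subtracting) 940 until the minutes part of the H-0-MM number drops below 60.

```python
# Illustrative helpers mirroring the two methods in the guide.

def add_times(*times):
    """Conversion method: add (hours, minutes) pairs, moving 60 minutes
    into the hours column whenever the minutes overflow."""
    hours = sum(h for h, m in times)
    minutes = sum(m for h, m in times)
    return (hours + minutes // 60, minutes % 60)

def subtract_times(later, earlier):
    """Borrowing method: subtract one (hours, minutes) pair from another."""
    total = (later[0] * 60 + later[1]) - (earlier[0] * 60 + earlier[1])
    return (total // 60, total % 60)

def rule_940_add(*keyed_in):
    """The 940 rule on a calculator: each time is keyed in as hours,
    a 0 separator, then two-digit minutes (4 + 08 becomes 4008).
    Add 940 until the minutes part is less than 60."""
    total = sum(keyed_in)
    while total % 1000 >= 60:
        total += 940
    return total

def rule_940_subtract(a, b):
    """Same idea for subtraction: subtract 940 until it makes sense."""
    total = a - b
    while total % 1000 >= 60:
        total -= 940
    return total

print(subtract_times((18, 8), (12, 14)))   # → (5, 54)
print(add_times((5, 56), (6, 17)))         # → (12, 13)
print(rule_940_add(4045, 7056))            # → 12041, i.e. 12 0 41
print(rule_940_add(12056, 34059, 45056))   # → 93051, i.e. 93 0 51
print(rule_940_subtract(23012, 14034))     # → 8038, i.e. 8 0 38
```

Adding 940 is just the minutes carry in disguise: it removes 60 from the minutes digits and adds 1000 (one hour) to the hours digits, and 1000 − 60 = 940.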
https://deepai.org/publication/robust-semi-supervised-learning-when-labels-are-missing-at-random
# Robust Semi-Supervised Learning when Labels are Missing at Random

Semi-supervised learning methods are motivated by the relative paucity of labeled data and aim to utilize large sources of unlabeled data to improve predictive tasks. It has been noted, however, that such improvements are not guaranteed in general; in some cases the unlabeled data impairs the performance. A fundamental source of error comes from restrictive assumptions about the unlabeled features. In this paper, we develop a semi-supervised learning approach that relaxes such assumptions and is robust with respect to labels missing at random. The approach ensures that uncertainty about the classes is propagated to the unlabeled features in a robust manner. It is applicable using any generative model with an associated learning algorithm. We illustrate the approach using both standard synthetic data examples and the MNIST data with unlabeled adversarial examples.

## 1 Introduction
[Figure: (a) Decision boundaries given by a supervised (VB-GMM) and a semi-supervised (self-training) learning method, respectively.]

In statistical machine learning, the size and quality of training data sets have a notable impact on performance. Labeled data is often considered expensive to obtain and is therefore typically of limited size. By contrast, unlabeled data are often available or can be collected at reasonable cost. Therefore, there is a strong motivation to improve performance by utilizing both labeled and unlabeled data, which is known as semi-supervised learning.

Semi-supervised learning is readily achieved using generative models, cf. Zhu & Goldberg (2009); Kingma et al. (2014); Gordon & Hernandez-Lobato (2017). However, several studies have reported that improved performance of learning methods is not guaranteed by incorporating the unlabeled data (cf. Cozman et al. (2003); Krijthe & Loog (2016); Oliver et al. (2018)), especially for unlabeled data with features that are rarely observed in the labeled data.

Let the pair $(x, y)$ denote features and an associated class. For unlabeled data, $y$ is missing and we only observe $x$. The appropriate utilization of unlabeled data depends critically on assumptions made about the data-generating distributions underlying labeled and unlabeled pairs, respectively (see Chawla et al. (2005)).

• Missing completely at random (MCAR): The labeled and unlabeled data-generating processes match exactly, i.e.,

$$p(x, y \mid \text{unlabeled}) \equiv p(x, y \mid \text{labeled}) \tag{1}$$

MCAR is a common assumption but also highly restrictive and requires careful considerations when collecting unlabeled data. In a medical diagnosis scenario, this means that the features from an unscreened population must match the population of screened patients. When this assumption fails, the performance of a semi-supervised learning method can degrade drastically. For an illustration of this limitation, consider Figure 0(a).

• Missing at random (MAR): The feature distributions may not match each other, i.e.,

$$p(x \mid \text{unlabeled}) \neq p(x \mid \text{labeled}), \tag{2}$$
$$p(y \mid x, \text{unlabeled}) \equiv p(y \mid x, \text{labeled}) \tag{3}$$

The features by which it is possible to discriminate between classes in the labeled data are the same as those for the unlabeled data, which therefore can potentially be used in semi-supervised learning. In the medical diagnosis case this means that unscreened population data can be used in conjunction with screened patient data.

• Missing not at random (MNAR): Neither features nor conditional class distributions may match each other, i.e.,

$$p(x \mid \text{labeled}) \neq p(x \mid \text{unlabeled}), \tag{4}$$
$$p(y \mid x, \text{labeled}) \neq p(y \mid x, \text{unlabeled}) \tag{5}$$

MNAR is a very conservative assumption since there is no necessary relation between labeled and unlabeled data. The features by which it is possible to discriminate between classes in the labeled data may therefore be different from those for the unlabeled data. This effectively means that the unlabeled data cannot be used to improve a classifier, and it is thus not considered here.

In this paper, we consider the use of generative models for semi-supervised learning under MAR. We develop an approach that utilizes part of the unlabeled data to refine the models obtained from the labeled data, while the remaining unlabeled data is used to robustify the learned class probabilities of rarely observed features. The method is applicable using any generative model and associated learning algorithm, cf. Hastie et al. (2016); Bishop (2016); Murphy & Bach (2012).

## 2 Problem Formulation

The observed datasets are denoted

$$\mathcal{D}_{\text{labeled}} = \{(x_i, y_i)\} \quad \text{and} \quad \mathcal{D}_{\text{unlabeled}} = \{x_i\},$$

where the features belong to a $d$-dimensional space $\mathcal{X}$ and we consider a set of class labels $\mathcal{Y}$. The samples are obtained i.i.d.
from unknown distributions $p(x, y \mid \ell = 1)$ and $p(x \mid \ell = 0)$, respectively, where $\ell$ indicates whether the data is labeled ($\ell = 1$) or unlabeled ($\ell = 0$).

Using $\mathcal{D}_{\text{labeled}}$ and $\mathcal{D}_{\text{unlabeled}}$, the goal is to develop a classifier that provides both class predictions for test points $x^*$ as well as a robust estimate of the class probability. The semi-supervised learning problem is motivated by the fact that the unlabeled dataset is typically much larger than the labeled one.

### 2.1 Optimal Classifier under MAR

For a test sample $x^*$, we pursue the optimal classification rule $\hat{y}(x^*)$, which minimizes an expected loss function. The most common loss function for classification problems is the zero-one loss function (Hastie et al., 2016),

$$L(\hat{y}(x^*), y^*) = \begin{cases} 0 & \text{if } \hat{y}(x^*) = y^*, \\ 1 & \text{if } \hat{y}(x^*) \neq y^*. \end{cases} \tag{6}$$

The expected loss function is

$$E[L(\hat{y}(x), y)] = \int_{\mathcal{X}} \sum_{y \in \mathcal{Y}} L(\hat{y}(x), y)\, p(y \mid x)\, p(x)\, dx$$

and the optimal classifier is given by

$$\hat{y}(x^*) = \arg\min_{y \in \mathcal{Y}}\, \left[1 - p(y \mid x^*)\right] = \arg\max_{y \in \mathcal{Y}}\, p(x^* \mid y)\, p(y), \tag{7}$$

where $p(x^* \mid y)$ is the likelihood of observing $x^*$, and $p(y)$ gives the prior class probabilities. The error probability of the optimal classifier evaluated at the test sample $x^*$ is given by

$$p_e(x^*) = p(y \neq \hat{y} \mid x^*) = 1 - \frac{p(x^* \mid \hat{y})\, p(\hat{y})}{\sum_y p(x^* \mid y)\, p(y)}. \tag{8}$$

Note that the above distributions marginalize over the labeling process, i.e.,

$$p(y) = \sum_{\ell = 0, 1} p(\ell)\, p(y \mid \ell), \qquad p(x \mid y) = \sum_{\ell = 0, 1} p(\ell)\, p(x \mid y, \ell). \tag{9}$$

### 2.2 Learning model of p(x|y) and p(y)

Our aim is to learn models of (9) so as to approximate the optimal classifier (7), as well as providing estimates of the error probability (8), in a robust manner under MAR.

The labeled data provides information about $p(y \mid \ell = 1)$ and $p(x \mid y, \ell = 1)$. Together with $\mathcal{D}_{\text{unlabeled}}$, it also provides information about the prior probability $p(\ell = 1)$ of obtaining samples from the labeled population. However, under MAR, the unlabeled data does not necessarily provide information about $p(y \mid \ell = 0)$ or $p(x \mid y, \ell = 0)$. Learning models of (9) using both $\mathcal{D}_{\text{labeled}}$ and $\mathcal{D}_{\text{unlabeled}}$ is therefore an open question.

In supervised learning, $p(y)$ and $p(x \mid y)$ are replaced by $p(y \mid \ell = 1)$ and $p(x \mid y, \ell = 1)$, and $\mathcal{D}_{\text{unlabeled}}$ is thus discarded.
This approach, however, can lead to serious misclassifications and highly inaccurate error probabilities in regions of the feature space where we observe unlabeled data; see Figure 0(a) for an illustration. Since only samples from the top-right and bottom-right regions of the feature space are labeled, the labeling process is obviously selective in the feature space.

Most semi-supervised learning algorithms are based on the MCAR assumption, in which

$$p(y \mid \ell = 0) \equiv p(y \mid \ell = 1), \qquad p(x \mid y, \ell = 0) \equiv p(x \mid y, \ell = 1).$$

These methods are therefore not robust to MAR data (Chawla et al., 2005). As exemplified by a generative model in Figure 0(a), the MCAR assumption may lead to serious performance degradation, cf. the discussion in Zhu & Goldberg (2009).

Considering Figure 0(a), we draw two conclusions about the unlabeled data. In the top-left and bottom-right regions of the feature space, the unlabeled data is not informative about the classes. By contrast, the top-right and bottom-left regions represent features that are shared with the labeled population.

Next, we generalize these observations to a robust learning approach.

## 3 Learning Approach under MAR

We now develop a semi-supervised approach for learning (9) that is robust to MAR data. The approach is applicable to any generative model of the data using any supervised method of choice.

Under MAR, feature regions represented in $\mathcal{D}_{\text{unlabeled}}$ that are

• not shared with $\mathcal{D}_{\text{labeled}}$ provide no information about $y$,

• shared with $\mathcal{D}_{\text{labeled}}$ may provide information about $y$.

For unlabeled features in the first case, the principle of insufficient reason dictates a robust model of the prior class probability as uniform, i.e.,

$$q(y \mid \ell = 0) \equiv \frac{1}{|\mathcal{Y}|}. \tag{10}$$

For the same reason, the class will not provide any information about these features, so that a robust model of the feature distribution should be class independent, i.e.,

$$q(x \mid y, \ell = 0) \equiv q(x \mid \ell = 0). \tag{11}$$

The unlabeled features in the second case are, however, statistically indistinguishable from the labeled features and thus informative of class under (3). Such unlabeled data can be used to provide high-quality estimates, as we show below.

Next, we partition the feature space using the likelihood ratio and use this partitioning to utilize $\mathcal{D}_{\text{unlabeled}}$ in a robust manner.

### 3.1 Regions of Statistically Similar Features

Consider learning initial generative models

• $q'(x \mid y, \ell = 1)$ and $q'(y \mid \ell = 1)$ from $\mathcal{D}_{\text{labeled}}$,

• $q'(x \mid \ell = 0)$ from $\mathcal{D}_{\text{unlabeled}}$,

using any method of choice; see for instance Hastie et al. (2016); Bishop (2016); Murphy & Bach (2012). From the labeled models we construct the marginal

$$q'(x \mid \ell = 1) = \sum_{y \in \mathcal{Y}} q'(x \mid y, \ell = 1)\, q'(y \mid \ell = 1),$$

with which we partition the feature space using the likelihood ratio:

$$\mathcal{X}_\ell = \left\{ x \in \mathcal{X} : \frac{q'(x \mid \ell = 0)}{q'(x \mid \ell = 1)} < 1 \right\}. \tag{12}$$

Thus all features in $\mathcal{X}_\ell$ are statistically indistinguishable from features of the labeled population. These contain information about $y$.

Testing whether a feature belongs to $\mathcal{X}_\ell$ corresponds to a likelihood ratio test. When an unlabeled feature belongs to $\mathcal{X}_\ell$, we assign it a class with the appropriate uncertainty under MAR. That is,

$$\text{if } x \in \mathcal{D}_{\text{unlabeled}} \cap \mathcal{X}_\ell: \text{ assign class } y \sim q'(y \mid x, \ell = 0),$$

where we use (3) to obtain

$$q'(y \mid x, \ell = 0) = q'(y \mid x, \ell = 1) \propto q'(x \mid y, \ell = 1)\, q'(y \mid \ell = 1).$$

All unlabeled features in $\mathcal{D}_{\text{unlabeled}} \cap \mathcal{X}_\ell$, i.e., those that are statistically indistinguishable from labeled samples, are assigned a class in a manner that propagates their uncertainty consistent with MAR. The resulting pairs are augmented with the labeled data to form an augmented labeled dataset, while the remaining unlabeled samples form a reduced unlabeled set.

### 3.2 Robust Classifier

Using these two datasets together with (10) and (11), we can learn robust models of (9), denoted $q(x \mid y)$ and $q(y)$.
The procedure is summarized in Algorithm 1 and can be implemented using any generative model and learning algorithm of choice. This general applicability is similar to the self-training approach (Zhu & Goldberg, 2009), but unlike that approach it achieves robustness by cautiously assigning labels to parts of the unlabeled data under the MAR assumption while preserving the uncertainty with respect to the classes.

For a test sample $x^*$, the resulting classifier is given by

$$\hat{y}(x^*) = \arg\max_{y \in \mathcal{Y}}\, q(x^* \mid y)\, q(y)$$

and the error probability is

$$q_e(x^*) = 1 - \frac{q(x^* \mid \hat{y})\, q(\hat{y})}{\sum_y q(x^* \mid y)\, q(y)}.$$

Using the learned model, we may also introduce a reject option when $q_e(x^*)$ is high, for additional robustness; see Bishop (2016). An illustration of the learned model under MAR is shown in Figure 0(b), where uncertainty about the class is preserved in regions where there is only unlabeled data.

## 4 Experimental Results

### 4.1 Two-moons dataset

To illustrate our approach using more complex generative models, we consider the popular two-moons-type dataset; see Gordon & Hernandez-Lobato (2017); Oliver et al. (2018). Here we use a Gaussian mixture model that is learned using variational approximation. Each class-conditional distribution is estimated using 8 components with full covariance matrices and a Dirichlet process prior with the weight concentration prior set to 1e-2. The mean precision prior was set to 1 and the covariance prior to the identity matrix. In the self-training method, only one label was added at each iteration.

We contrast the case when the MCAR assumption holds and when it fails in Figures 1(a) and 1(b), respectively. If we compare the self-training method in both cases, we clearly see that the resulting model is very accurate under MCAR in Fig. 2(a) but grossly inaccurate for about half of the data region under MAR in Fig. 2(b). By contrast, our proposed method is conservative when MCAR data is given in Fig. 3(a), leaving the class probability near 0.50 for large portions of the unlabeled data. However, under MAR this corresponds to robustness, cf. Fig. 3(b) and 1(b).

### 4.2 MNIST dataset with adversarial features

To illustrate the robustness of our approach in a more realistic scenario, we consider training a classifier for hand-written digits using MNIST with unlabeled data. For each class, we introduce a certain proportion of adversarial features in the unlabeled dataset by rotating their images upside-down. We then consider a test set with adversarial features; see Fig. 5. Since some hand-written digits are invariant to the rotation, such features, for example those representing '0', should ideally be correctly classified, while others, say those representing '4', should have a low probability and thus be rejected.

Here we use the generative model considered in Bishop (2016). Each conditional distribution $q(x \mid y)$, $y \in \mathcal{Y}$, is approximated with a mixture of Bernoulli models with 784 dimensions and 3 components. The parameters in the mixture models are optimized with the expectation maximization (EM) algorithm.

We compare the supervised case, which discards the unlabeled data, the self-training method, and the proposed approach, all using an option to reject test points with a high estimated error probability. The results are summarized in Table 1. Neither the supervised nor the self-trained models reject any adversarial examples, and consequently they make significant errors for certain classes that are not invariant to flipping (such as class '7'). By contrast, the robust approach rejects many more adversarial examples, or erroneously classifies them to a lesser degree, than the standard approaches.
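As a concrete illustration of the Section 3 recipe (this is not the authors' code; the 1-D Gaussian models and all names below are made up for the sketch), the likelihood-ratio partitioning (12) and the cautious label assignment reduce to a few lines:

```python
import math
import random

def gauss_pdf(x, mu, sd):
    """Density of a 1-D Gaussian; stands in for a learned generative model."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Toy labeled models q'(x|y, l=1) and q'(y|l=1): two well-separated classes.
class_models = {0: (-2.0, 1.0), 1: (2.0, 1.0)}   # (mean, std) per class
class_prior = {0: 0.5, 1: 0.5}

# Toy unlabeled feature model q'(x|l=0): overlaps class 1, but also covers
# a region around x = 8 never seen in the labeled data (MAR).
unlabeled_model = (5.0, 3.0)

def labeled_marginal(x):
    """q'(x|l=1) = sum_y q'(x|y,l=1) q'(y|l=1)."""
    return sum(class_prior[y] * gauss_pdf(x, *class_models[y]) for y in class_models)

def in_shared_region(x):
    """Likelihood-ratio test (12): q'(x|l=0) / q'(x|l=1) < 1."""
    return gauss_pdf(x, *unlabeled_model) / labeled_marginal(x) < 1.0

def sample_label(x, rng=random.Random(0)):
    """Assign a class y ~ q'(y|x,l=1), preserving class uncertainty."""
    weights = [class_prior[y] * gauss_pdf(x, *class_models[y]) for y in (0, 1)]
    return rng.choices([0, 1], weights=weights)[0]

for x in (-2.1, 1.9, 8.0):
    if in_shared_region(x):
        print(f"x = {x:+.1f}: shared with labeled data, sampled label {sample_label(x)}")
    else:
        print(f"x = {x:+.1f}: novel region, keep uniform q(y|l=0) = 1/|Y|")
```

Features passing the test would be added to the labeled set with their sampled labels; the remaining ones keep the uniform prior (10) and the class-independent likelihood (11).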
null,
Figure 5: Examples of a valid and an adversarial sample of the hand-written number 2. The adversarial example is obtained by flipping the original feature vector.

## 5 Conclusion

We have developed a semi-supervised learning approach that is robust to cases in which labels are missing at random. Unlike methods based on labels missing completely at random, this approach does not make the restrictive assumption that labeled and unlabeled features have matching distributions. The proposed approach ensures that uncertainty about the classes is propagated to the unlabeled features in a robust manner. Moreover, it is widely applicable using any generative model with an associated learning algorithm. Finally, we demonstrated the robustness of the method on both synthetic datasets and real datasets with adversarial examples.

## References

• Bishop, C. M. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer New York, 2016. ISBN 9781493938438.
• Chawla, Nitesh V., et al. Learning from labeled and unlabeled data: An empirical study across techniques and domains. Journal of Artificial Intelligence Research, 23:331–366, 2005.
• Cozman, Fabio G., Ira Cohen, and Marcelo C. Cirelo. Semi-supervised learning of mixture models. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 99–106, 2003.
• Gordon, Jonathan, and Jose Miguel Hernandez-Lobato. Bayesian semi-supervised learning with deep generative models. In Second Workshop on Bayesian Deep Learning (NIPS 2017), 2017.
• Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed., corrected at 11th printing. Springer, New York, 2016.
• Kingma, Diederik P., Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.
• Krijthe, Jesse H., and Marco Loog. The pessimistic limits of margin-based losses in semi-supervised learning. arXiv preprint arXiv:1612.08875, 2016.
• Murphy, K. P., and F. Bach. Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning. MIT Press, 2012. ISBN 9780262018029.
• Oliver, Avital, Augustus Odena, Colin Raffel, Ekin D. Cubuk, and Ian J. Goodfellow. Realistic evaluation of deep semi-supervised learning algorithms. arXiv preprint arXiv:1804.09170, 2018.
https://stats.stackexchange.com/questions/13468/species-richness-dominance-and-diversity-differences
# Species Richness, Dominance and Diversity Differences

I have collected 70 organisms from 4 different sites; two sites of treatment 1 and two sites of treatment 2. I also have a continuous explanatory variable (average temperature) which is different for each site. How can I test if measures of richness and diversity differ between sites or by temperature?

Richness is the number of species in each sample.

Diversity is a weighted average of the proportions of each species present. In this case we are using the Shannon index, which is:

$^1\!D = \exp\left(-\sum_{i=1}^R p_i \ln p_i\right)$

where $p_i$ is the proportion of each species $i$ at that site, and $R$ is the richness of the site.

What sort of model can I use when diversity and richness are my response variables?

This question had been abandoned by the OP without giving enough info to answer it properly; the above is an attempt to provide an answerable question in the spirit of what was asked.

Original Question:

I have collected ca. 70 species of organism from 4 sites: 2 sites of Treatment 1 and 2 sites of Treatment 2. How do I test, using R, whether the richness, dominance, abundance and diversity is different between the two if I have 6 explanatory variables?

• Are the sites far enough from one another so that the measurements can be considered independent across sites? Jul 25, 2011 at 16:35
• Yes, the sites are independent of each other. Jul 25, 2011 at 19:07
• @platypezid some example data and calculations of these metrics would help. How did you choose the explanatory variables? Are they independent? Jul 27, 2011 at 2:35
• @David The data is count data and the explanatory variables chosen are independent and are uncorrelated after checking for collinearity. What I really want to know is: can you do a GLM to test differences in species richness, dominance and diversity? Thanks! Would you use a Poisson distribution or Gaussian? Jul 27, 2011 at 13:27
• Could you give a definition of richness, dominance and diversity? Richness I understand to be the number of species? If so then Poisson would be OK for that. Feb 2, 2013 at 12:20

I think neither of these responses fits perfectly into any of the standard GLM link functions. Taking a pragmatic approach, it is probably sufficient to pick a link function that is broadly doing the right thing.

Your raw data is from a multinomial distribution, and richness, $R$, is the number of categories with a score of 1 or more. Although this is count data, it isn't quite the same as the normal sort of count data, the most striking difference being that you can never have a count of zero. As a pragmatic first attempt, I would consider modeling $(R-1)$ using a Poisson link function. The handwaving reasoning here could be this: if you were catching animals at a random rate for a fixed length of time, the number caught would be modelled well by a Poisson. You are catching a fixed number of animals, but at a random rate the type of animal changes. If it wasn't for the fact that you return to species already seen, this would be quite a good match.

For diversity, things are even less intuitive. However, the diversity is a sort of average of the abundance ($p_i$) of each species, and as such is bounded by $[0,1]$. This suggests that you could probably treat it as a proportion and use a logit link function.

If the pragmatic matching of support and "looks approximately right" approach is not satisfying, it appears there is another way. I don't know much about this, but there is something called the "Multinomial Diversity Model" which is designed to deal with these sorts of problems. It is described in this paper, http://www.ncbi.nlm.nih.gov/pubmed/23185889, but I have no idea how popular it is.

It is, however, implemented in R and available on CRAN as MDM, apparently by the author of the above paper. It includes a spider data set used as an example in the help:

```r
library(MDM)
data(spider6)
fit0 <- mdm(y2p(spider6[,1:6])~1,data=spider6)
fit1 <- mdm(y2p(spider6[,1:6])~Water,data=spider6)
fit2 <- mdm(y2p(spider6[,1:6])~Water+Herbs,data=spider6)
fit3 <- mdm(y2p(spider6[,1:6])~Site,data=spider6,alpha=TRUE)
anova(fit0,fit1,fit2,fit3)
```

• Here is a similar related question about modelling proportions on this site: stats.stackexchange.com/q/24187/1036. I suspect it would be good to search sociological journals as well for literature; the same diversity measures are of frequent interest to demographers interested in segregation. Feb 11, 2013 at 14:19
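For completeness, the two response variables in the question are straightforward to compute from per-site counts. This is an illustrative sketch (not from the thread; the species names and counts are made up), using the question's $^1\!D = \exp(H)$ form of the Shannon index:

```python
import math
from collections import Counter

def richness(counts):
    """R: number of species observed at least once at the site."""
    return sum(1 for c in counts.values() if c > 0)

def shannon_diversity(counts):
    """^1D = exp(-sum p_i ln p_i), the exponential of Shannon entropy.
    For R equally abundant species it equals R exactly."""
    n = sum(counts.values())
    props = [c / n for c in counts.values() if c > 0]
    return math.exp(-sum(p * math.log(p) for p in props))

site = Counter({"sp_a": 30, "sp_b": 30, "sp_c": 10})  # made-up counts, n = 70
print(richness(site))                     # → 3
print(round(shannon_diversity(site), 2))  # → 2.73
```

These per-site values would then form the response column of whichever GLM is fitted.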
https://www.easycalculation.com/physics/electromagnetism/ac.php | [
# DC Voltage Drop Calculation Across Inductance

Determine the actual relationship between voltage and current for any given inductor with the basic differential expression V = L (di/dt), which describes the behavior of an inductance.

## DC Voltage Drop Calculator

Calculator inputs: inductance (L) in henry, change of current in ampere, and time in second; the output is the voltage across the inductor.

Formula:
V = L (di/dt)
Where,
V - voltage drop across the inductor
L - inductance in henry
di/dt - instantaneous rate of change of current with respect to time

The inductor voltage drop is proportional to the instantaneous rate of change of the inductor current; both the voltage (V) and the rate of current change (di/dt) are instantaneous quantities.
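As a sketch, the defining relation above can be evaluated directly in Python; the function name and the component values are ours, chosen purely for illustration:

```python
def inductor_voltage(L, di, dt):
    """V = L * (di/dt): voltage drop across an ideal inductor
    for a current change di (amperes) over a time dt (seconds)."""
    return L * (di / dt)

# A 0.5 H inductor whose current ramps from 0 A to 2 A in 0.1 s:
v = inductor_voltage(L=0.5, di=2.0, dt=0.1)  # 0.5 H * 20 A/s = 10 V
print(v)
```

Note this treats di/dt as constant over the interval; for a non-linear current waveform the instantaneous derivative would be used instead.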
https://math-lover.com/addition-and-subtraction-of-similar-fractions/
# Addition and Subtraction of Similar Fractions

Addition and subtraction of similar fractions is nothing but the usual arithmetic operation. Once we have checked that all the denominators are the same, the rest is just an ordinary addition or subtraction of the numerators. The denominators tell us whether we can add or subtract the fractions directly.

If we are dealing with denominators, this means that we are dealing with both similar fractions (also called like fractions) and dissimilar fractions (or unlike fractions). I have already discussed similar and dissimilar fractions in my previous post. For those who want to review, check my post entitled "Types of Fractions".

In this lesson, I will be presenting the basic step-by-step procedure on how to add or subtract similar fractions, followed by various examples for you to understand it better.
### How to Add Similar Fractions (the same denominators)

As a review, similar fractions (also called like fractions) are fractions having the same denominators.

The following are the steps on how to add similar fractions:

Step 1: Ensure that all denominators are the same.

Step 2: Add all the numerators. The sum will be the numerator of our answer.

Step 3: Copy the denominator. This will be the denominator of our answer.

Step 4: Add all the whole numbers if the fractions are mixed numbers.

Step 5: Convert your answer into its lowest term or a mixed number.

Do you want examples? We will be giving you various examples for you to understand it better.

Example no. 1 This is how we add proper fractions with the same denominators.
Solve 1/5 + 3/5.

Step 1 is to check the denominators. All the denominators are the same 5, thus we can proceed to step 2.

Step 2 is to add all the numerators. Let us add 1 and 3. This will give us 4.

Step 3 is to copy the denominator. Our denominator is 5.

Step 4 is to add whole numbers. Since we do not have mixed numbers, then we can skip this step.

Step 5 is to convert our answer into its lowest term. Since 4/5 is already the lowest term, then 4/5 is our final answer.

Example no. 2 Let us try to add more proper fractions.
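The five steps above can be followed mechanically. Here is a small Python sketch (the function name `add_similar` is ours) that adds similar fractions by adding the numerators, copying the denominator, and reducing the result to its lowest term with the greatest common divisor:

```python
from math import gcd

def add_similar(numerators, denominator):
    """Add fractions that all share one denominator.
    Returns the sum in lowest terms as (numerator, denominator)."""
    total = sum(numerators)              # Step 2: add all the numerators
    g = gcd(total, denominator)          # Step 5: reduce to lowest term
    return total // g, denominator // g  # Step 3: the denominator is copied

print(add_similar([1, 3], 5))     # 1/5 + 3/5 -> (4, 5), i.e. 4/5
print(add_similar([1, 2, 3], 6))  # 1/6 + 2/6 + 3/6 -> (1, 1), i.e. 1
```

Subtraction works the same way with `total = n1 - n2` in place of the sum.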
Example no. 3 Let's have another example.
Example no. 4 The same procedure will be applied when adding improper fractions with the same denominator.
Example no. 5 Let's take a look at another example of adding similar fractions. This time, we are going to add a proper fraction and an improper fraction. We will follow the same steps.
Example no. 6 This time, let us add mixed numbers. The easiest way to add mixed numbers is to add them according to our procedure. Add the proper fractions and then the whole numbers. See the following example.
Example no. 7 However, there are some problems in which, after adding the proper fractions, the result will be an improper fraction. Mixed numbers must have only a whole number and a proper fraction. If the partner of your whole number is an improper fraction, then you need to convert the improper fraction into a mixed number and combine the whole numbers. See the example below.
Note: 7/5 is not a proper fraction. A mixed number should have only a proper fraction and a whole number. Therefore, we need to convert it into a mixed number.

In this problem, it is better to convert the mixed numbers into improper fractions and do the addition according to our procedure to avoid confusion. Let us solve example no. 7 again.
For me, this is way easier than the previous one. But whatever technique you are comfortable with, it will still give you the correct answer.

Example no. 8 We can also add a mixed number and a proper fraction. The procedure is the same as in example no. 6.
If the result after adding the proper fractions is more than one, then follow the procedure in example no. 7. Let us see another example.
Example no. 10 Now, let us add similar fractions of different types.
### How to Subtract Similar Fractions

Dealing with the addition of similar fractions is quite simple. You just have to ensure that the denominators are the same so that you can proceed to add the numerators.

Subtracting similar fractions follows the same procedure as adding similar fractions. The only difference is that you will be dealing with subtraction.

Let's start.

### Subtracting Similar Fractions Examples:

Example no. 11 Subtracting proper fractions with the same denominators.
Observe that we just follow the same procedure as in the addition of fractions with the same denominators.

Example no. 12 We can also have negative fractions as our results. Negative fractions are fractions which are less than zero.
Example no. 13 Subtracting improper fractions with the same denominators.
Example no. 14 In this example, we will subtract mixed numbers of similar fractions. It will be easy if we convert the mixed numbers into improper fractions and proceed with our procedure.
Example no. 15 More examples of subtracting similar fractions.
### Summary of Adding and Subtracting Fractions

1. To add or subtract similar fractions, just add or subtract all the numerators and copy the denominator. Then convert the answer to its lowest term or to a mixed number.

2. For adding or subtracting mixed numbers with similar denominators, first convert the mixed numbers into improper fractions. Then add or subtract the numerators and copy the denominator. Convert your answer to its lowest term or to a mixed number.

If you have some questions or problems related to the above topic, please feel free to drop those in the comments section below.
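Python's standard library can be used to check the mixed-number procedure from point 2. `fractions.Fraction` reduces to lowest terms automatically, and `divmod` splits an improper fraction back into a whole part and a proper remainder. The mixed numbers below are invented for illustration:

```python
from fractions import Fraction

# Invented mixed numbers with the same denominator: 3 2/5 + 1 4/5
a = 3 + Fraction(2, 5)  # 17/5 as an improper fraction
b = 1 + Fraction(4, 5)  # 9/5 as an improper fraction
total = a + b           # 26/5, still improper

# Convert back to a mixed number: whole part and proper remainder
whole, rest = divmod(total.numerator, total.denominator)
print(whole, Fraction(rest, total.denominator))  # 5 1/5

# Subtraction works the same way and can give a negative fraction:
print(Fraction(2, 7) - Fraction(5, 7))  # -3/7
```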
https://www.coursehero.com/file/11188379/Chapter-5-100512/
# Chapter 5 - 10.05.12
A few objectives:
1. Calculate various types of elasticities and interpret
2. Describe how price elasticity helps predict how total expenditure changes if P changes
3. Use income elasticity to identify normal goods, inferior goods, necessities, and luxuries
4. Use cross-price elasticity to identify complements and substitutes

Introduce elasticities of supply and demand
• S & D changes: so far, have looked only at directions
• now want to also consider _______________

- Price elasticity of demand is defined as:

  $E_D \equiv \dfrac{\%\ \text{change in}\ Q_D}{\%\ \text{change in}\ P} = \dfrac{\Delta Q_D / \bar{Q}}{\Delta P / \bar{P}}$

  where $\Delta Q = Q_2 - Q_1$ is the change in $Q$, $\Delta P = P_2 - P_1$ is the change in $P$, $\bar{Q} = (Q_1 + Q_2)/2$ is the average $Q$, and $\bar{P} = (P_1 + P_2)/2$ is the average $P$ (note: "$\equiv$" means "is defined as")
- $E_D$ measures how responsive consumers are to a change in price ___________ a D-curve
- ex. if P increases 1% and $Q_D$ decreases 2%, then price elasticity is ______ (with no units)
- important: in general, elasticity _________ along the D-curve (even if D is a straight line)
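The midpoint (average-based) definition above can be sketched directly in Python; the price and quantity pairs below are invented for illustration:

```python
def price_elasticity(p1, q1, p2, q2):
    """Midpoint (arc) price elasticity of demand:
    percentage change in Q over percentage change in P,
    each measured against the average of the two points."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)  # % change in Q against average Q
    pct_p = (p2 - p1) / ((p1 + p2) / 2)  # % change in P against average P
    return pct_q / pct_p

# Price rises 10 -> 12, quantity demanded falls 100 -> 80:
e = price_elasticity(10, 100, 12, 80)  # -11/9, roughly -1.22
print(e)
```

Because the change is measured against the averages, the same elasticity comes out whichever of the two points is treated as the starting point — one reason the midpoint form is used.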
http://ixtrieve.fh-koeln.de/birds/litie/document/23546
# Document (#23546)

Author
Heiner-Freiling, M.
Title
Einführung und Nutzung der DDC im deutschen Sprachraum [Introduction and use of the DDC in the German-speaking area]
Source
Information und Öffentlichkeit: 1. Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI), Leipzig, 20.-23.3.2000. Zugleich 90. Deutscher Bibliothekartag, 52. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI). Hrsg.: G. Ruppelt u. H. Neißer
Imprint
Year
2000
Pages
S.473-479
Series
Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI); Bd.1)(Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V.; Bd.3
Abstract
This is the title of a study written between December 1998 and January 2000 by the working group on classificatory subject indexing (Arbeitsgruppe Klassifikatorische Erschließung), commissioned by the Konferenz für Regelwerksfragen and supported by the Deutsches Bibliotheksinstitut. The task of this working group, in which the German library networks, the public library sector, Die Deutsche Bibliothek, an expert from the universities of applied sciences, and guests from Austria and Switzerland were represented, was to develop further the proposals for the use of classifications made in the expert report "Klassifikationen für wissenschaftliche Universalbibliotheken in Deutschland" presented in September 1998. The aim was to examine the report's recommendation of the Dewey Decimal Classification (DDC) for national bibliographic subject indexing with regard to its practical feasibility in the near future, and to show ways of linking it both with other classifications, e.g. those used for open-shelf arrangement (the report recommends the Regensburger Verbundklassifikation for this purpose), and with the subject headings authority file (Schlagwortnormdatei, SWD), which was introduced for verbal subject indexing and is based on the rules for the subject headings catalogue (RSWK). The time available for such a feasibility study was tight. Not only did the impending dissolution of the Deutsches Bibliotheksinstitut, and the fundamental changes to the existing committee work connected with it, make it seem necessary to present results by the beginning of 2000; the demand in the library community for a generally accepted classificatory method, complementing the verbal indexing with RSWK, whose pinpoint access via narrow subject headings is not always satisfactory, has also grown in recent years and called for quickly realizable solutions. Not least, the rapid spread of electronic publications and the endless offerings of the World Wide Web, only dubiously indexed by conventional search engines, have increased the need for generally accepted, reliable, and globally widespread instruments for subject searching, that is, instruments not tied to a single language. If Germany does not want to fall behind here, and wants to make its scholarly relevant web texts as internationally usable as its conventional printed publications, the DDC suggests itself as the universal classification most widely used in national bibliographies and on the Internet.
Object
DDC
Location
D

## Similar documents (author)

1. Heiner-Freiling, M.: Kolloquium zur Schlagwortnormdatei in Frankfurt (1990)
2. Heiner-Freiling, M.: Sacherschließung mit RSWK als Dienstleistungsangebot für Öffentliche Bibliotheken (1992)
3. Heiner-Freiling, M.: Clearingstelle für Öffentliche Bibliotheken (1991)
4. Heiner-Freiling, M.: ¬Die Schlagwortnormdatei und die Frauen - eine Antwort an Dagmar Jank (1991)
5. Heiner-Freiling, M.: Beschlagwortung von Belletristik, Kinder- und Jugendliteratur und Schulbüchern : ein neues Dienstleistungsangebot Der Deutschen Bibliothek (1993)

## Similar documents (content)

1.
Niggemann, E.; Heiner-Freiling, M.: Deutsche Nationalbibliographie und Dewey Decimal Classification : Überlegungen, Probleme, Perspektiven (2001)
2. RSWK-Mitteilung Nr.4 : Diskussionsentwürfe zur Erschließung von Schöner Literatur, Kinder- und Jugendliteratur (1992)
3. Regeln für den Schlagwortkatalog (RSWK) : Erarb. von der Expertengruppe RSWK des Deutschen Bibliotheksinstituts auf der Grundlage der von der Kommission für Sacherschließung des Deutschen Bibliotheksinstituts bearbeiteten 2. Aufl. (1991) (1998)
4. Schwens, U.: ¬Die Deutsche Bibliothek : gesetzlicher Auftrag und elektronische Publikationen (2002)
of:\n0.107846804 = queryWeight, product of:\n1.3495657 = boost\n4.3712344 = idf(docFreq=1467, maxDocs=42740)\n0.018281387 = queryNorm\n0.4098032 = fieldWeight in 1538, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.3712344 = idf(docFreq=1467, maxDocs=42740)\n0.09375 = fieldNorm(doc=1538)\n0.20573357 = weight(abstract_txt:auftrag in 1538) [ClassicSimilarity], result of:\n0.20573357 = score(doc=1538,freq=2.0), product of:\n0.20847239 = queryWeight, product of:\n1.5320362 = boost\n7.4433827 = idf(docFreq=67, maxDocs=42740)\n0.018281387 = queryNorm\n0.9868624 = fieldWeight in 1538, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.4433827 = idf(docFreq=67, maxDocs=42740)\n0.09375 = fieldNorm(doc=1538)\n0.11178572 = weight(abstract_txt:erschließung in 1538) [ClassicSimilarity], result of:\n0.11178572 = score(doc=1538,freq=1.0), product of:\n0.20020568 = queryWeight, product of:\n1.838775 = boost\n5.95578 = idf(docFreq=300, maxDocs=42740)\n0.018281387 = queryNorm\n0.5583544 = fieldWeight in 1538, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.95578 = idf(docFreq=300, maxDocs=42740)\n0.09375 = fieldNorm(doc=1538)\n0.055251658 = weight(abstract_txt:nicht in 1538) [ClassicSimilarity], result of:\n0.055251658 = score(doc=1538,freq=1.0), product of:\n0.14838795 = queryWeight, product of:\n2.0436857 = boost\n3.9716904 = idf(docFreq=2188, maxDocs=42740)\n0.018281387 = queryNorm\n0.37234598 = fieldWeight in 1538, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.9716904 = idf(docFreq=2188, maxDocs=42740)\n0.09375 = fieldNorm(doc=1538)\n0.28 = coord(7/25)\n```\n5. 
Heiner-Freiling, M.: RSWK und DDC : Sacherschließung auf zwei Beinen (2005) 0.15\n```0.15125564 = sum of:\n0.15125564 = product of:\n0.63023186 = sum of:\n0.042288028 = weight(abstract_txt:deutschland in 211) [ClassicSimilarity], result of:\n0.042288028 = score(doc=211,freq=1.0), product of:\n0.11987471 = queryWeight, product of:\n1.161739 = boost\n5.644297 = idf(docFreq=410, maxDocs=42740)\n0.018281387 = queryNorm\n0.35276857 = fieldWeight in 211, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.644297 = idf(docFreq=410, maxDocs=42740)\n0.0625 = fieldNorm(doc=211)\n0.018852353 = weight(abstract_txt:auch in 211) [ClassicSimilarity], result of:\n0.018852353 = score(doc=211,freq=1.0), product of:\n0.080079876 = queryWeight, product of:\n1.1629261 = boost\n3.7667098 = idf(docFreq=2686, maxDocs=42740)\n0.018281387 = queryNorm\n0.23541936 = fieldWeight in 211, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.7667098 = idf(docFreq=2686, maxDocs=42740)\n0.0625 = fieldNorm(doc=211)\n0.37450027 = weight(title_txt:rswk in 211) [ClassicSimilarity], result of:\n0.37450027 = score(doc=211,freq=1.0), product of:\n0.15539959 = queryWeight, product of:\n1.3227254 = boost\n6.4264483 = idf(docFreq=187, maxDocs=42740)\n0.018281387 = queryNorm\n2.409918 = fieldWeight in 211, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.4264483 = idf(docFreq=187, maxDocs=42740)\n0.375 = fieldNorm(doc=211)\n0.029463978 = weight(abstract_txt:nach in 211) [ClassicSimilarity], result of:\n0.029463978 = score(doc=211,freq=1.0), product of:\n0.107846804 = queryWeight, product of:\n1.3495657 = boost\n4.3712344 = idf(docFreq=1467, maxDocs=42740)\n0.018281387 = queryNorm\n0.27320215 = fieldWeight in 211, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.3712344 = idf(docFreq=1467, maxDocs=42740)\n0.0625 = fieldNorm(doc=211)\n0.074523814 = weight(abstract_txt:erschließung in 211) [ClassicSimilarity], result of:\n0.074523814 = 
score(doc=211,freq=1.0), product of:\n0.20020568 = queryWeight, product of:\n1.838775 = boost\n5.95578 = idf(docFreq=300, maxDocs=42740)\n0.018281387 = queryNorm\n0.37223625 = fieldWeight in 211, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.95578 = idf(docFreq=300, maxDocs=42740)\n0.0625 = fieldNorm(doc=211)\n0.090603404 = weight(abstract_txt:deutschen in 211) [ClassicSimilarity], result of:\n0.090603404 = score(doc=211,freq=2.0), product of:\n0.19922581 = queryWeight, product of:\n2.1180322 = boost\n5.1452193 = idf(docFreq=676, maxDocs=42740)\n0.018281387 = queryNorm\n0.45477742 = fieldWeight in 211, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.1452193 = idf(docFreq=676, maxDocs=42740)\n0.0625 = fieldNorm(doc=211)\n0.24 = coord(6/25)\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61281174,"math_prob":0.99359447,"size":20486,"snap":"2021-04-2021-17","text_gpt3_token_len":7914,"char_repetition_ratio":0.25905675,"word_repetition_ratio":0.5636071,"special_character_ratio":0.52177095,"punctuation_ratio":0.28192055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99947304,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T03:03:17Z\",\"WARC-Record-ID\":\"<urn:uuid:f7979720-1985-4f8c-9d0d-f81951981fd1>\",\"Content-Length\":\"36546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b0f7b96-ef68-4ab4-898e-a2c6cd4117cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:461f0340-157f-4505-9ea4-6de2f352f51d>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/23546\",\"WARC-Payload-Digest\":\"sha1:IW2LX4NNZCASXEHFPGVBWLR7B35ZS5MS\",\"WARC-Block-Digest\":\"sha1:O5WL2DUOQM7SRM4DGX3VYPPJLDY3XAIS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704835583.91_warc_CC-MAIN-20210128005448-20210128035448-00368.warc.gz\"}"} |
https://lists.nongnu.org/archive/html/axiom-developer/2005-03/msg00243.html | [
"axiom-developer\n[Top][All Lists]\n\n## [Axiom-developer] [RationalInterpolationAlgorithms] Move back RationalIn\n\n From: wyscc Subject: [Axiom-developer] [RationalInterpolationAlgorithms] Move back RationalInterpolation; example, question Date: Sat, 19 Mar 2005 05:00:19 -0600\n\nChanges\nhttp://page.axiom-developer.org/zope/mathaction/RationalInterpolationAlgorithms/diff\n--\n\n??changed:\n-Next RationalInterpolation\n-\nThe package below implements rational interpolation.\n\\begin{axiom}\n)abbrev package RINTERP RationalInterpolation\n++ Description:\n++ This package exports interpolation algorithms\nRationalInterpolation(xx, F): Cat == Body where\nxx: Symbol\nF: IntegralDomain\nUP ==> UnivariatePolynomial\nSUP ==> SparseUnivariatePolynomial\n\nCat == with\ninterpolate: (Fraction UP(xx, F), List F, List F, _\nNonNegativeInteger, NonNegativeInteger) _\n-> Fraction UP(xx, F)\ninterpolate: (List F, List F, NonNegativeInteger, NonNegativeInteger) _\n-> Fraction SUP F\n\nRIA ==> RationalInterpolationAlgorithms\n\ninterpolate(qx, lx, ly, m, k) ==\npx := RationalInterpolation(lx, ly, m, k)$RIA(F, UP(xx, F)) elt(px, qx) interpolate(lx, ly, m, k) == RationalInterpolation(lx, ly, m, k)$RIA(F, SUP F)\n\\end{axiom}\n\nComments: Packages compiled on MathAction seems to be local to the page.\nDependent packages therefore needs to be on the same page to load the packages\nin correct sequence.\n\nExample:\n\n\\begin{axiom}\ninterpolate(xlist, ylist, 3, 2)$RINTERP('x, FRAC INT) interpolate(1/6::FRAC UP(x,FRAC INT), xlist, ylist, 3,2)$RINTERP('x,FRAC INT)\ninterpolate(xxlist, yylist, 3, 2)$RINTERP('x, FRAC dom) interpolate(4*z::FRAC UP(x,dom), xxlist, yylist, 3, 2)$RINTERP('x, FRAC dom)\n\\end{axiom}\n\nQuestion: If <code>p(xx) = interpolate(lx, ly, m, k)</code>, what is the\npurpose of\n<code>elt(px, qx) = p(qx)</code>, the composition of <code>p(xx)</code> and\n<code>qx</code>, especially when <code>qx</code> is from <code>FRAC UP(xx,\nF)</code> instead of from just 
<code>F</code>? and why is this function (the\ncomposition) also called <code>interpolate</code>?\n\n--"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58699673,"math_prob":0.96427405,"size":2340,"snap":"2021-31-2021-39","text_gpt3_token_len":704,"char_repetition_ratio":0.18707192,"word_repetition_ratio":0.030303031,"special_character_ratio":0.26239318,"punctuation_ratio":0.22014052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978789,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T11:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:35760b46-5c62-4892-98c9-cf636b8aab63>\",\"Content-Length\":\"7093\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c44ba8ee-883d-45bc-a8ed-ca4444d33e6f>\",\"WARC-Concurrent-To\":\"<urn:uuid:33ff73bf-1f08-4426-a224-5d88885bcf9b>\",\"WARC-IP-Address\":\"209.51.188.17\",\"WARC-Target-URI\":\"https://lists.nongnu.org/archive/html/axiom-developer/2005-03/msg00243.html\",\"WARC-Payload-Digest\":\"sha1:YADUUGIDXO7TBVVVHTEMORLU2B2BB2XS\",\"WARC-Block-Digest\":\"sha1:FW3E7JZVAUPTD5J2TWE7XXWOR2UQ5ZBC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154175.76_warc_CC-MAIN-20210801092716-20210801122716-00717.warc.gz\"}"} |
https://www.zbmath.org/?q=an%3A0965.37015 | [
"# zbMATH — the first resource for mathematics\n\nFinding periodic points of a map by use of a $$k$$-adic expansion. (English) Zbl 0965.37015\nThe main goal of this paper is to present an algebraic language which allows to put earlier known results in a unified framework and to simplify proofs. The authors give a new characterization of a sequence of Lefschetz numbers of iterates of a map. The basic observation is that the Lefschetz number $$L(f^m)$$ is the value at $$m$$ of a character of a virtual representation of $$\\mathbb{Z}$$ given by the nonsingular part of the map induced by $$f$$ ($$f$$ is a map, $$f:X\\to X$$, $$X$$ is paracompact) on the rational (complex) cohomology spaces of $$X$$. For a smooth transversal map they give a refined version of Matsuoka’s theorem on the parity of the number of periodic orbits of a transversal map. Moreover they show the existence of infinitely many prime periods provided the sequence of Lefschetz numbers of the iterates is unbounded.\n\n##### MSC:\n 37B30 Index theory for dynamical systems, Morse-Conley indices 37C30 Functional analytic techniques in dynamical systems; zeta functions, (Ruelle-Frobenius) transfer operators, etc. 55M20 Fixed points and coincidences in algebraic topology\nFull Text:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79980016,"math_prob":0.9976825,"size":1586,"snap":"2021-31-2021-39","text_gpt3_token_len":448,"char_repetition_ratio":0.094816685,"word_repetition_ratio":0.033613447,"special_character_ratio":0.26229507,"punctuation_ratio":0.14521453,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99816984,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T16:36:58Z\",\"WARC-Record-ID\":\"<urn:uuid:f7d81b2e-2697-4bf0-9b50-099eb4154f0b>\",\"Content-Length\":\"46861\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3e9fb3d-9a4e-452b-8ce4-187c1c3c335c>\",\"WARC-Concurrent-To\":\"<urn:uuid:38fc4960-0ade-4681-9b30-9be16c765627>\",\"WARC-IP-Address\":\"141.66.194.3\",\"WARC-Target-URI\":\"https://www.zbmath.org/?q=an%3A0965.37015\",\"WARC-Payload-Digest\":\"sha1:RWGBZUHVZX25QEA57WZJ4APXR5JXDIEP\",\"WARC-Block-Digest\":\"sha1:I6CZNW3SI4TAQZGZKUCY4Y6MUICWR6XI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154466.61_warc_CC-MAIN-20210803155731-20210803185731-00407.warc.gz\"}"} |
https://www.jpost.com/sports/tennis-peers-winning-streak-ends-in-beijing | [
"(function (a, d, o, r, i, c, u, p, w, m) { m = d.getElementsByTagName(o), a[c] = a[c] || {}, a[c].trigger = a[c].trigger || function () { (a[c].trigger.arg = a[c].trigger.arg || []).push(arguments)}, a[c].on = a[c].on || function () {(a[c].on.arg = a[c].on.arg || []).push(arguments)}, a[c].off = a[c].off || function () {(a[c].off.arg = a[c].off.arg || []).push(arguments) }, w = d.createElement(o), w.id = i, w.src = r, w.async = 1, w.setAttribute(p, u), m.parentNode.insertBefore(w, m), w = null} )(window, document, \"script\", \"https://95662602.adoric-om.com/adoric.js\", \"Adoric_Script\", \"adoric\",\"9cc40a7455aa779b8031bd738f77ccf1\", \"data-key\");\nvar domain=window.location.hostname; var params_totm = \"\"; (new URLSearchParams(window.location.search)).forEach(function(value, key) {if (key.startsWith('totm')) { params_totm = params_totm +\"&\"+key.replace('totm','')+\"=\"+value}}); var rand=Math.floor(10*Math.random()); var script=document.createElement(\"script\"); script.src=`https://stag-core.tfla.xyz/pre_onetag?pub_id=34&domain=\\${domain}&rand=\\${rand}&min_ugl=0\\${params_totm}`; document.head.append(script);"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95963734,"math_prob":0.96919554,"size":604,"snap":"2023-40-2023-50","text_gpt3_token_len":159,"char_repetition_ratio":0.095,"word_repetition_ratio":0.0,"special_character_ratio":0.25662252,"punctuation_ratio":0.08730159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789482,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T03:09:04Z\",\"WARC-Record-ID\":\"<urn:uuid:642ec6fa-6407-4d48-a2e8-031fbd300ddb>\",\"Content-Length\":\"78696\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3a2f535-7730-4fb0-b440-369f3c6061f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:80b0d1d9-a32d-4a7a-98af-806150d9e38d>\",\"WARC-IP-Address\":\"159.60.130.79\",\"WARC-Target-URI\":\"https://www.jpost.com/sports/tennis-peers-winning-streak-ends-in-beijing\",\"WARC-Payload-Digest\":\"sha1:Y6MDYVV76RXGCELVQ3ZOV2PFWHBUALBQ\",\"WARC-Block-Digest\":\"sha1:THLFINUXNRLMJZUGLDXAC3CFU3WFJ3NA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510238.65_warc_CC-MAIN-20230927003313-20230927033313-00280.warc.gz\"}"} |
https://odkialbrucke.com/maximum-productive-training-volume-per-session/gvfcp6620zyd | [
"Home\n\n# Workout volume Calculator\n\nThe training volume for this set would be 1,000 pounds using the classic formula. You performed the 10 reps 1.5 seconds each for a 15 second set. If you lifted the weight 5 seconds per rep and only performed 6 reps of 100 pounds the classic training volume formula would say your volume was 600 pounds. The volume is lower than the first set even. Exercise Calculator. Estimate how many calories you burn during your workout. Then add them to your Daily Totals to see how many calories you burned Workout Volume Calculator. Off Topic. AJL_1240. December 14, 2015, 5:25pm #1. I know it sounds lazy, but does anyone know of a website, or any software that would calculate one's total volume? It would need to recognize sets, reps, and weight, and (I know, I'm pushing it with this one) which muscle gets worked by which exercise. It would be. Bodybuilding volume calculator, Bodybuilding calculator wrist - CrazyBulk supplements for muscle growth . Bodybuilding volume calculator. Winstrol is one of the best steroids to take to keep lean muscle and improve power and performance. It is also effective if you're in the cutting phase",
null,
"### Weight Training Volume - Calculate the Amount of Work\n\n1. This free volume calculator can compute the volumes of common shapes, including that of a sphere, cone, cube, cylinder, capsule, cap, conical frustum, ellipsoid, and square pyramid. Explore many other math calculators like the area and surface area calculators, as well as hundreds of other calculators related to finance, health, fitness, and more\n2. e your ideal daily calories or.\n3. The calculator will take your Basal Metabolic Rate and base your daily caloric requirements from these factors. Goal Weight. Workouts per week. Your goal. Body fat. Waist. After you provide your physical information, you'll select one of the following Basal Metabolic Rate (BMR) formulas\n\nINOL Calculator. Fill in the fields below to calculate INOL per lift. One Rep Max (1RM) for a Lift. Input your 1RM in kg or lb. Load, Sets, Reps Next, you will enter the load as well as the number of sets and reps for that load. Load. The amount of weight (kg or lb) being lifted. Set Count. The number of sets Calculator online on how to calculate volume of capsule, cone, conical frustum, cube, cylinder, hemisphere, pyramid, rectangular prism, triangular prism and sphere. Calculate volume of geometric solids. Volume formulas. Free online calculators for area, volume and surface area Training Volume More Important for Power Training. Training volume has more impact on power than strength. Baker, D (2001), The effects of concurrent training on the maintenance of maximal strength and power in professional and college-age rugby league football players, Journal of Strength and conditioning Research, 15(2), 172-177 What Exactly Is Training Volume? Volume is a measurement of the total weight lifted, you get this by using the following equation: Sets x reps x weight. 
So if you perform three sets of 10 reps of 100 kg bench press, you have performed 30 reps of 100 kg for a total volume of 3,000 kg\n\nWorking out the volume of materials is simply a process of multiplying the width of the project by the length and the depth. This will tell you how many cubic metres of material you will need. Most materials are delivered by the tonne, and the calculator below will give you the approximate tonnage for a range of materials. Students may use this hemisphere calculator to generate work with steps for any other similar input values. Workout : step 1 Address the formula, input parameters & values. Radius = 5 in. step 2 Apply radius values in Volume formula. Volume V = 2πr³/3 = (2 x 3.1416 x (5)³)/3 in³ = (2 x 3.1416 x 125)/3 in³ = 785.4/3 in³ ≈ 261.8 in³\n\n### Exercise Calculator - WebM\n\n1. Remember, your training volume = sets x reps x weight. So, if on Monday, Wednesday, and Friday you completed 3 sets of 8 reps of squats at 100lbs and 4 sets of 8 reps of bench press at 50lbs, your squat training volume is 3 x 8 x 100 = 2,400 daily volume; x 3 workouts = 7,200 weekly volume\n2. Acute training is the amount of workout volume in the past week. Chronic training is the average amount of workout volume over the past 4 weeks. Think of acute training in the same terms you'd think about fatigue. How tired are you from your training sessions or workouts over the past week? Chronic training involves looking back on the past.\n3. Let's say I plug 170lb into the volume equation, and I read a routine telling me I should do 6 sets of 3 reps at 85%. You end up with (6 x 3 x 170), which makes a total volume load of 3,060 lbs. OK, now the goal on another workout day is to break the 3,060 lbs volume threshold I did at 85%\n4. Training volume is a better tool when trying to gauge the demands of an individual training approach or workout.
Obviously, volume can't measure the impact of training techniques such as training to failure, but it still provides an amazing insight into just how hard you are working out\n5. How To Calculate Total Workout Volume\n\n### Workout Volume Calculator - Off Topic - Forums - T Natio\n\nIn the above example, we do that in another helper cell, column J, and we use the formula =VALUE(F2). Last is simple multiplication: we need to multiply sets * reps to get the total number of reps, and to calculate the volume, we need to multiply sets * reps * weight. At the bottom of that day's workout, we can sum all that information. An important strength training variable that one should be aware of is exercise volume. A periodized strength training program monitors exercise volume to see how someone is adapting to the demands of a training program. If these factors are not considered and/or monitored, the likelihood that the training program will result in less than optimal results will increase markedly. Steps to Calculate the Volume of a Cube. The step-by-step workout for finding the volume of a cube. Students may use this cube volume calculator to generate work with steps for any other similar input values. Workout : step 1 Address the formula, input parameters and values. side = 5 in. step 2 Find the volume of the cube using the formula below\n\n### Bodybuilding volume calculator, bodybuilding calculator\n\nCalculate Volume of Square Slab Calculator Use. Calculate volumes for concrete slabs, walls, footers, columns, steps, curbs and gutters. Enter dimensions in US units (inches or feet) or metric units (centimeters or meters) of your concrete structure to get the cubic yards value of the amount of concrete you will need to make this structure. Algebra Calculator is a calculator that gives step-by-step help on algebra problems. See More Examples ». x+3=5. 1/3 + 1/4. y=x^2+1. Disclaimer: This calculator is not perfect.
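The sets x reps x weight bookkeeping described above (per-exercise volume, then a sum at the bottom of the day's log) can be sketched like this; the exercise names and numbers are illustrative, taken from the examples in the text:

```python
def set_volume(sets, reps, weight):
    # Volume for one exercise entry: sets x reps x weight
    return sets * reps * weight

# One day's log: (exercise, sets, reps, weight in lb)
workout = [("squat", 3, 8, 100), ("bench press", 4, 8, 50)]

# Per-exercise volume, then the day's total (the "sum at the bottom")
daily = {name: set_volume(s, r, w) for name, s, r, w in workout}
day_total = sum(daily.values())       # 2,400 + 1,600 = 4,000 lb

# Weekly squat volume if the same session is repeated 3x per week
weekly_squat = daily["squat"] * 3     # 7,200 lb
```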
Please use at your own risk, and please alert us if something isn't working. Thank you. Volume of a triangular prism formula. The volume formula for a triangular prism is (height x base x length) / 2, as seen in the figure below. So, you need to know just three measures: height, base, and length, in order to calculate the volume. So if you do 3 sets of 10 reps with 195 pounds on the barbell bench press, your training volume for that exercise is 5,850 pounds (3 x 10 x 195)—the equivalent of a Chevy Suburban, you beast. In weight training, volume is the term used to describe how much work you do, such as the number of repetitions (reps) you perform of an exercise. Intensity describes the difficulty of an exercise, typically based on the amount of weight you lift. Take deadlifts as an example\n\n### Volume Calculator\n\n• Our pond volume calculator will help you work out how much water your pond holds. For more advice call us on 0161 351 470\n• Octane's Tank Volume Calculator makes it really easy to work out the volume of your storage tank. 1. Click one of the 3 tabs across the top which represents your tank. 2. Select your measurement units. 3. Enter your tank's length, width etc. 4. Click Calculate\n• Related Surface Area Calculator | Volume Calculator. Area is a quantity that describes the size or extent of a two-dimensional figure or shape in a plane. It can be visualized as the amount of paint that would be necessary to cover a surface, and is the two-dimensional counterpart of the one-dimensional length of a curve, and three-dimensional volume of a solid\n• This is a convenient calculator to help us figure out the volume of a cuboid or a box; it calculates cubic centimeters (cm³) from different units, including inches, feet, yards, mm, cm or meters, with a calculation formula and a dynamic visual cube to help us understand the answer more easily\n• Students may use this cylinder calculator to generate work with steps for any other similar input values.
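The geometric formulas quoted in this stretch (triangular prism, plus the hemisphere and cylinder worked examples on this page) can be verified numerically. This is only a sketch, not any particular calculator's implementation:

```python
import math

def triangular_prism_volume(base, height, length):
    # (height x base x length) / 2, as quoted above
    return base * height * length / 2

def hemisphere_volume(r):
    # V = 2 * pi * r^3 / 3
    return 2 * math.pi * r ** 3 / 3

def cylinder_volume(r, h):
    # V = pi * r^2 * h
    return math.pi * r ** 2 * h

# The page's worked examples: r = 5 in hemisphere, r = 3 in / h = 10 in cylinder
print(round(hemisphere_volume(5), 1))    # 261.8 (in^3)
print(round(cylinder_volume(3, 10), 1))  # 282.7 (in^3)
```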
Workout : step 1 Address the formula, input parameters and values. Radius = 3 in. Height = 10 in. step 2 Apply radius and height values in the volume formula below. Volume V = πr²h\n\nAbout the human body: 1 010 kilograms [kg] of human body fit into 1 cubic meter; 63.05224 pounds [lbs] of human body fit into 1 cubic foot; the human body weighs 1.01 gram per cubic centimeter or 1 010 kilogram per cubic meter, i.e. the density of the human body is equal to 1 010 kg/m³. In the Imperial or US customary measurement system, the density is equal to 63.052 pound per cubic foot [lb/ft³], or 0.5838. Since the volume equation is composed of three different variables, there are three different ways to increase our volume from workout to workout. You can lift more weight at the same set and rep scheme you did last workout - one workout you do biceps curls at 4×12 with 50 pounds, and the next workout you do 4×12 with 55 pounds. As I've previously explained, weight training volume (the amount of exercises, sets and reps you do) is a key factor influencing the effectiveness of your workout routine. Meaning, if you want to get the best results possible, your goal is to use an optimal amount of volume for each body part and muscle group per workout and per week in total. Volume landmarks can change somewhat depending on your training frequency, so it's important to note that MVs in these articles are for individuals training at least 2x per week per muscle group. It's possible that similar MVs can be attained training 1x per week, but, for smaller muscles that recover quickly (like rear delts), some.\n\nTidal volume can also be estimated from height and gender using the following formulas or with the help of this tidal volume calculator.
IBW male: 50kg + 2.3 x (height in inches - 60). IBW female: 45.5kg + 2.3 x (height in inches - 60). Tidal Volume: ranges between 6 x IBW mL and 8 x IBW mL (i.e. 6-8 mL per kg of IBW). When athletes exercise or coaches program training plans there is no lack of attention to sets, reps, variation in exercises, and providing sufficient stimulus for adaptations to occur. But the most overlooked training variable I see is training volume. Lack of understanding of this variable will not only rob you of performance gains but increase your injury risk. Cubic volume calculator. Work out cubic volumes. Single cubic volumes can be calculated. Multiple different volumes can be summed for a total volume. Up to 8 volumes can be listed and added. Work out the volumes of multiple rooms for air conditioning requirements. Work out the total concrete requirements for multiple locations on the same site. For a simple rectangle, just use the length and width with the thickness in the calculator on the left. For irregular, basically square areas, break down the area into a number of simple rectangular areas. Work out the volume for each area separately. And finally add the separate volumes together. Enter the dimensions of the rectangle to be concreted. Squat Calculator. Test Your Max! *Base increases are generally between 5 and 10 lbs depending on feel and progress. Strong legs make better people. And if you're looking to significantly.",
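The ideal-body-weight and tidal-volume figures quoted above combine into a small helper; the 70-inch example height is an assumption added purely for illustration:

```python
def ibw_kg(height_in, male=True):
    # Ideal body weight from the formulas quoted above
    base = 50.0 if male else 45.5
    return base + 2.3 * (height_in - 60)

def tidal_volume_range_ml(height_in, male=True):
    # Tidal volume of 6-8 mL per kg of ideal body weight
    ibw = ibw_kg(height_in, male)
    return 6 * ibw, 8 * ibw

# A 70 in tall male: IBW = 50 + 2.3 * 10 = 73 kg, so roughly 438-584 mL
low, high = tidal_volume_range_ml(70, male=True)
```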
null,
"### Calorie Calculator: Find Your Daily Calorie Intake for\n\nPerson A might do 6 sets of 6 reps for a total of 36 reps per muscle group, per workout. Person B might do 6 sets of 10 reps for a total of 60 reps per muscle group, per workout. As you can see, that's still a pretty significant difference even with the same ideal rep range (5-12) being used. For this reason, measuring or prescribing volume. The total set volume per workout was around 9 in the medium volume group compared to 14 in the high volume group. Heaselgrave et al. (2019) found a trend for an optimal training volume of 18 sets per week for the biceps: groups training with 9 and 27 sets achieved worse overall strength and muscle development. The medium and high volume groups.\n\nFinal Remarks. The Guild Factor formula is generally accurate for polyurethane (PU) blank-based shortboards and funboards. Epoxy and expanded Polystyrene (EPS) foam-based surfboards have considerably more buoyancy, which means that the ideal volume in liters should be dropped by around two or three liters.. Also, as a surfboard gets longer and becomes a high-performance shape, the volume. Use this online calculator to calculate Mass, Density and Volume. The Density of an object is its mass per unit volume, the Mass is a physical quantity expressing the amount of matter in an object and the Volume is the quantity of three-dimensional space enclosed within an object. Entering two of these propertiess in the calculator below calculates the third property of the object\n\n### Video: INOL Calculator Tool (2021) Lift Vaul\n\n• VO2 (or oxygen consumption) is a measure of the volume of oxygen that is used by your body to convert the energy from the food you eat into the energy molecules, called adenosine triphosphate ( ATP ), that your body uses at the cellular level. 
VO2max (or maximal oxygen consumption) is simply the maximum possible VO2 that a given person can achieve\n• Cube Volume Calculator - a cube is a three-dimensional solid object bounded by six square faces, facets or sides, with three meeting at each vertex. Cone Volume Calculator - a cone is a three-dimensional geometric shape that tapers smoothly from a base (usually flat and circular) to a point called the apex or vertex. Cylinder Volume Calculator.\n• The McMillan Running Calculator is based on what we know from exercise science and real world running. But, every runner is unique so over time, you will learn how to best interpret and modify the Race Times and Optimal Training Paces to fit your particular strengths and weaknesses as well as your goals. Goal Pace",
null,
"Here's how German volume training works. You perform three workouts over five days and repeat that cycle six times for a 30-day program. (More advanced trainees might be. Use the hot tub to calculate the volume of a round container. Let's do the tricky part first. The diameter of the tub is 10 feet. Half of that is 5 feet. Squared (multiplied by itself) means 5 feet times 5 feet equals 25 square feet. Knowing this, you can return to the formula: 3.14 x radius squared x average depth x 7.5 = volume (in gallons). Calculation Formulas: Square or Rectangle (Length x Width x Depth). The area of a square or rectangular garden is easy to calculate. Simply multiply the length by the width by the depth or use our simple volume calculator. Circular: π r² x Depth. Circular areas are also easy to calculate. Simply multiply 3.1415 (π) by the radius, by the radius again, by the depth or use our simple volume. Volume of water in pipe - calculation example. Let's see how to use the pipe volume calculator correctly. For an example calculation, we need a few assumptions. We will calculate the volume of a 6-meter length pipe, with an inner diameter equal to 15 centimeters. The pipe is used to transport water\n\nGenerally, volume is calculated by sets X reps X weight; it is called volume because it is three dimensional, like the volume of a jar being length X width X height. For example, your workout consisted of bench press at: 1 X 5 X 275 and 2 X 3 X 315, then the total volume for that workout would be (1 X 5 X 275) + (2 X 3 X 315) = 3,265 lbs. Volume of a Cuboid Calculator. A cuboid is a 3-dimensional shape, which can also be termed a rectangular box. Volume is defined as the amount of space an object contains. The volume of a cuboid is the space contained within it. 
This is a simple online volume of a cuboid calculator that can be used to calculate the rectangular box volume. Training Volume. The number of sets and reps you do in a workout is called volume—and thinking of the volume on your stereo helps in understanding how much to use in the gym. If your training. Minute volume is the volume of air that a person inhales and exhales in 1 minute. This calculation is used by physicians, nurses and respiratory therapists to help determine a person's overall respiratory health. Health professionals often use this measurement with patients experiencing various respiratory conditions such as asthma, pneumonia and emphysema\n\nDam Calculator Volume, the area of the total surface and areas of each face of a dam The surface to volume ratio and the lengths of individual sides of a dam. This calculator defines a dam as a hexahedron, i.e. the three-dimensional geometrical object with six faces. The bottom and top faces of a dam represent two parallel rectangles, and the. Log Volume Calculator is an app with great applicability in wood production. It helps you determine the scale volume of a log (in cubic meters) based on middle diameter and length. You will be able to accurately calculate the volume of logs and create a list with multiple entries. In order to calculate a log volume insert the log length (in. The method of calculation used in this application is a recognised mathematical and medical standard for measuring limb volume. It is calculated through the measure and addition of cylindrical 4cm sections of both the Distal and Proximal for each limb. Distal, Proximal and total volume(s) for each limb. Total excess in mls and as a percentage.\n\n### ExRx.net : Weight Training Volume\n\n• minutes per day. 4. Click on the Calculate button to generate the results\n• Calculate the volume of a water cylinder with total height 1m, diameter of 40cm, and whose top section is semi-spherical. 
You first divide the shape into two sections, a cylinder and a semi-sphere (half a sphere). The volume of a sphere is 4/3 × π × radius³. In this example the radius is 20cm (half the diameter)\n• g 10x10 with 60% of my 1RM would be near impossible. To be sure that my form is correct, and that I complete all 100 reps, I.\n• Oval Tank Volume Calculator. Find the volume of oval tanks in cubic inches and gallons. Pete Mazz, 2015-03-29 19:17:18\n• German Volume Training method is designed to complete ten sets of ten reps with the same weight for each exercise. Beginning with a weight you could lift for 20 reps to failure. For most people this would represent 60% of their 1RM load. For example: If you bench press 300 pounds for 1 rep, you would use 180 pounds for this exercise",
null,
"### Sets, Reps, and Training Volume - Here's What You Do\n\n1. volume = a³ / 6√2, where a is the edge of the solid. The height, in this case, can be calculated as: height = a√3 / 6 ~ 0.2887 * a, so if you want to calculate, e.g. the volume of a regular polyhedron with the edge = 3, type 3 * 0.2887 into the pyramid volume calculator Height box\n2. You may have to calculate the volume based on two separate areas, if the pool is an L or T shape. For a hexagon or octagon shape, calculate as a circular pool or spa. Otherwise, use the calculations for either an oval or rectangular pools or spas and substitute an average diameter, width or length.\n3. Go to Tools > Mass properties or click on the Mass properties icon. In the Mass properties box, you can find many material properties. The surface area of the squeezer is 40091 mm² as you can see in the orange rectangle. As you can see in the Mass Properties box in SolidWorks, the volume of our model is 169301 mm³\n4. Reference Value Calculator. Enter Age, Height, Gender and Race. To see Percent Predicted, you must enter observed FVC, FEV1, and FEF25-75% values in the appropriate boxes. Click Calculate to calculate the predicted values\n5. e the volume of a standard PVC pipe, deter\n6. A 7 day workout split is provided at the end of this article. The following is a sample German Volume Training program that is split up into two phases. Beginner/Intermediate German Volume Training Program: Phase 1. Perform the above 5 day cycle 6 times\n7. Let's say you cranked the volume too high during a workout on Friday. For the rest of the weekend, limit the volume to 50% or 60% of the maximum level, Chasin says\n\nA handy calculator for calculating the volume of your fish tank. Use this calculator when working out the holding capacity of your fish tank. If you have other questions, a convenient form will allow you to send questions to an experienced aquarist. 
Teach students how to calculate volume by exposing them to formulas they can use on their own. Juggernaut Training Systems. Journal of strength and conditioning research. Each workout is devoted to one of the four compound lifts. Learn all about the most important principles in strength training to make sure you get the most out of your coaching. Resistance Training Volume Enhances Muscle Hypertrophy but Not Strength in Trained Men\n\nBest bulking agent, best bulking calculator. Health Care Guide: Body Building, Diets, Nutrition, Workouts 10. Here you can find a lot of useful information referring to muscle building, anabolic steroids, nutrition, training, programs for increasing muscles etc, best bulking agent. Formulas for volume: Cone = (1/3)πr²h, where r is the radius and h is the height. Cube = s³, where s is the length of the side. Cylinder = πr²h, where r is the radius and h is the height. Rectangular prism = lwh, where l is the length, w is the width and h is the height. Sphere = (4/3)πr³, where r is the radius. Simply enter the dimensions into the calculator to find the. Strength Standards. Our strength standards are based on over 46,855,000 lifts entered by Strength Level users. We have male and female standards for these gym exercises and more: bench press, squat, deadlift, shoulder press, pull ups, dumbbell bench press, dumbbell curl, push ups, barbell curl, dumbbell shoulder press",
null,
"### Working Out Material Volumes Materials Volume Calculator\n\nStrength Training Calculator. Enter your weight and gender to calculate a list of exercises based on your rider type of Climber, Sprinter or All-Rounder. Exercise Senior's Chair Stand. Senior's Arm Curl. Vertical Jump. Sprint Speed. Body Mass Index. Waist Hip Ratio. Vitals (BP & HR) Target Heart Rate. Risk Class Calculator. The stroke volume calculator determines SV through two different methods. The first method (tab 1) uses hemodynamic monitoring, which is the ratio of cardiac output to heart rate. The second method (tab 2) is based on LVOT (Left ventricle outflow tract) and LVOT VTI (LVOT subvalvular velocity time integral) determinations from Doppler investigation. The formula used by this calculator to calculate the unknown length, width or height of a rectangular shaped box is: L3 = V / (L1 · L2) Symbols. V = Volume. L1 = 1st Length. L2 = 2nd Length. L3 = 3rd Length. Volume. Enter the volume of the rectangular shaped box, and select the relevant volumetric units",
null,
"",
null,
"### Hemisphere Calculator & Work with Steps\n\nTraining volume is best defined as the number of challenging sets you do per muscle per week. For example, if you do 5 sets of the bench press on Monday, 5 sets of push-ups on Wednesday, and 5 sets of dips on Friday, then the weekly training volume for your chest is 15 sets. The Ideal KIND of Training Volume The MAF 180 Formula for determining your MAF HR. Subtract your age from 180, then modify from one of the categories below: If you have or are recovering from a major illness (heart disease, any operation or hospital stay, etc.), are in rehabilitation, are on any regular medication, or are in Stage 3 (chronic) overtraining (burnout), subtract an additional 10. The volume of the cone is calculated as: V = 1/3 * π * R² * H, where V = the volume in cubic feet, π = 3.14159265, R = length B divided by 2, H = height. The volume of the center prism: 0.5 * A * B * H, where V = the volume in cubic feet, A = length A, B = length B, H = height. The two results are added together for a cubic foot value and converted to cubic. Sparge Water Calculator. Use this sparge and strike water calculator to determine how much sparge water will be necessary to rinse your mash and get you to the proper pre-boil volume. Out of all our calculators this is hands down the most used. If you constantly brew different beers like us, this calculator is your best friend. The primary driver of muscle growth is still workout volume. Always remember that the primary driver of hypertrophy is still workout volume. And when you think about it logically, the 6 to 12 reps range is simply the most effective rep range when it comes to accumulating workout volume\n\n### Training Volume Basics If You're New to Lifting Weights\n\nThe Stock Calculator is very simple to use. Just follow the 5 easy steps below: Enter the number of shares purchased. Enter the purchase price per share, the selling price per share. Enter the commission fees for buying and selling stocks. 
Specify the Capital Gain Tax rate (if applicable) and select the currency from the drop-down list (optional). This is a calculator that specifically calculates the volume of a cuboid, supporting metric and imperial units (inches, feet, yards, mm, cm or meter), and the volume result can be converted to different units, with a calculation formula and dynamic visual cube; it helps us to get answers and understand the results more easily. Most evidence-based fitness professionals recommend a training volume of 10-15 sets per muscle group per week. I've recommended 10-30 sets in my interviews the past years for most individuals with some outliers using higher volumes, like IFBB Pro Nina Ross. The truth is, even I may have been overly conservative. I recently posted the results of a training study co-authored by Mike Israetel in. Tank volume calculator formula. Calculate the volume of liquid your container can hold by entering your dimensions in metric units (centimeters or meters) or imperial units (yards, feet or inches). Our tool estimates the total tank volume and liquid capacity using the below formulas: Horizontal Cylindrical Tank\n\nPlease select the appropriate volume calculator below. Dormer Window, Flat Roof. Lean-to, Single Pitch Roof. Calculate the volume of a rectangular box or tank using our free volume of a box calculator. Box volume calculator online that works in many different metrics: mm, cm, meters, km, inches, feet, yards, miles. Can be used to calculate shipping dimensions in cubic meters or cubic feet. Cubic Meter Calculator for Shipping. Percent button is used to find the percentage of a number. Enter the percentage amount, click the % button, then enter the number you want the percentage of, and then click equals. i.e. 20% 125 = 25 where 25 is 20% of 125. Note: The percent function will also work if you enter the number first and then the percentage you want i.e. 125 %20 = 25. Calculate the metric volume of your swimming pool. 
In order to calculate how many Pool Wizard Units and Active Mineral Rechargers your pool requires, you need to know the volume of water in your swimming pool. Metric pool water volume calculation for most common pool shapes and conversion to gallons can be easily done here\n\n### The New Approach to Training Volume • Stronger by Science\n\nThe estimated blood volume calculator approximates intravascular blood with consideration for patient weight and demographic information. In particular, blood volume per kilogram is variable based on sex and age, with higher average blood per kg in newborn children as compared to adults. Calculate Anything - Anytime. Wanna calculate your net worth at 3 in the morning, or calculate your one rep weight lifting max after your afternoon workout, or even calculate how much tile you need before starting your weekend bathroom remodeling project? Now you can. We are so committed to providing online calculators to calculate answers to anything imaginable that if you don't see a.",
null,
"Move My Stuff has a unique online removal cost calculator that lets you know a near-accurate ballpark figure about the cost of the move. All you need to do is enter the number of things you have in this furniture volume calculator, which will give us an idea of the approximate weight and volume of stuff you have. Train with high volume - Use a high amount of sets, reps, and exercises; Pick two of the options you enjoy the most and down-regulate the third. If you love being in the gym daily and like training balls-out, then keep your volume on the low side. If going to the gym daily doesn't appeal to you, find some middle ground and train 3-4 times a. Volume Calculator. Volume of a CYLINDER. Enter the Height of the Cylinder in mm. Enter the Diameter of the cylinder (Tank) in mm. Calculated Approx Volume (Litres) Volume of a RECTANGLE (CUBOID) Enter the Length in mm. Enter the Width in mm. Enter the Height in mm. Input your age in the prompt below and the calculator will produce a range in which to keep your heart rate during aerobic exercise. Now that you know your target heart rate range, you can check your pulse at regular intervals (every 5 to 10 minutes) during the workout session and compare your exercise heart rate to your target heart rate\n\n### Beyond Sets And Reps: A Look At Training Volume Muscle\n\nWater Quality Volume Exercise iSWM Post‐Construction Water Quality Training September 2011 Problem 1: A site with a 2 acre drainage area is being developed and plans show 1.4 acres of impervious area are proposed. Calculate the water quality volume that will require treatment. Solution 1. Engine displacement is determined by calculating the engine cylinder bore area multiplied by the stroke of the crankshaft, and then multiplied by the number of cylinders. This will result in the overall volume of air displaced by the engine. Displacement = (4 in./2) x (4 in./2) x 3.1416 x 3.52 in. x 8 = 353.86 cubic inches. 3 workouts a week / Full-body training. 
You'll perform three simple strength workouts a week, plus some simple active recovery like walking on two other days. No matter how much you have going on, you can do each workout in this program in less than 30 minutes a day. Use this plan for quick and effective total-body training and transform your. The training max was popularized by strength coach Jim Wendler who used it in his 5/3/1 program. The purpose of the training max is to allow you to complete the prescribed sets and reps with each exercise. A further benefit of using the training max is that it will allow you to complete high volume work while still using a moderate to high.\n\nV̇O₂max (also maximal oxygen consumption, maximal oxygen uptake or maximal aerobic capacity) is the maximum rate of oxygen consumption measured during incremental exercise; that is, exercise of increasing intensity. The name is derived from three abbreviations: V̇ for volume (the dot appears over the V to indicate per unit of time), O₂ for oxygen, and max for maximum. For example, training volume can be estimated in total reps per exercise, in total amount of sets per training session, in total amount of weight lifted in exercise per training session, in total amount of sets or reps per day or per week, or per year etc. Proper training volume is regulated by recovery ability of the person and his/her goal. Forex Training Group. Forex Calculators - Margin, Lot Size, Pip Value, and More. Forex Trading Articles. Forex calculators are a necessary and extremely helpful set of tools to help traders manage their risk. The Forex markets are a challenging and volatile asset class and must be. 
Half of the subjects were placed on a training program that required them to do 80 percent of their training at low intensity and 20 percent at moderate and high intensity for five months. Volume measurement units conversion; Circle area to diameter; User Guide. This tool will calculate the volume of a cylindrical shaped object from the dimensions of length and diameter. No conversion needed, since length, diameter and volume units can be selected independently, so this calculator allows you to use any mixture of measurement units\n\nFedEx Dimensional Weight in 2021. FedEx changed its DIM Factor from 166 to 139 beginning in 2017 and it remains the same today. With the current DIM Factor, the billable weight of a 10 x 10 x 10 package is 8 pounds. (10 x 10 x 10)/139 ≈ 7.2, rounded up to 8 pounds. Your FedEx shipping charges would be based on the volumetric weight of 8 pounds because. Use this online Rectangular Tank Storage Capacity calculator to calculate the volume of liquids or gases that a rectangular tank can store. Determine the Storage capacity for rectangular tank using this calculator",
null,
"Use the Readymix Concrete Volume Calculator to calculate the volume of concrete you require for your job. If you are not sure of how to calculate the volume of different shapes, you can use the Common Shape Calculator which will provide you with the appropriate equations. *1m³ minimum order value. 1. Calculate your volume. 2. Choose your project. However, this Training Strain provides a better metric of the overall stress that an athlete is undergoing than simply looking at training volume. A simple TRIMP cr10 based calculator. This calculator will show the TRIMP cr10 values for each day, the Monotony, the total TRIMP cr10 for the week and the Training Strain. Since you know that the area of the base is 3.14 in.² and that the height is 4 in., you can just multiply the two together to get the volume of the cylinder. 3.14 in.² x 4 in. = 12.56 in.³ This is your final answer. Always state your final answer in cubic units because volume is the measure of a three-dimensional space"
] | [
null,
"https://odkialbrucke.com/hgsf/FTLBBCcIaO7l6B2gkcjDKgHaD4.jpg",
null,
"https://odkialbrucke.com/hgsf/X6oPg1XXde7V17pSPBS0nwHaEJ.jpg",
null,
"https://odkialbrucke.com/hgsf/SJ2nRa9mtxlmIq5J-w5cAAAAAA.jpg",
null,
"https://odkialbrucke.com/hgsf/1MhNbKJY7CGlJSyW0MxiLgHaIz.jpg",
null,
"https://odkialbrucke.com/hgsf/U_owtgOtR19zvktfDPzUYgHaHv.jpg",
null,
"https://odkialbrucke.com/hgsf/ZsViOtEDG5HZSdxjeh1cewHaEK.jpg",
null,
"https://odkialbrucke.com/hgsf/hL5cPgeGc3Jt87xLLhyUygHaD4.jpg",
null,
"https://odkialbrucke.com/hgsf/N-uAp4qPJ5MURJKlTXHzVAHaJ3.jpg",
null,
"https://odkialbrucke.com/hgsf/OVZCfHUVVntr1W8zbJUPOQHaIe.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9071112,"math_prob":0.9744474,"size":27157,"snap":"2022-05-2022-21","text_gpt3_token_len":5981,"char_repetition_ratio":0.16311273,"word_repetition_ratio":0.02217957,"special_character_ratio":0.22137938,"punctuation_ratio":0.10743333,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9752634,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T23:17:33Z\",\"WARC-Record-ID\":\"<urn:uuid:7c2da3a8-8f7f-4434-a0ef-79361e35b9cf>\",\"Content-Length\":\"46731\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db6834c1-af34-4809-8004-efc6c3ec7e53>\",\"WARC-Concurrent-To\":\"<urn:uuid:3fea5140-9a7a-40b8-be36-63e4de64ef74>\",\"WARC-IP-Address\":\"37.1.216.34\",\"WARC-Target-URI\":\"https://odkialbrucke.com/maximum-productive-training-volume-per-session/gvfcp6620zyd\",\"WARC-Payload-Digest\":\"sha1:6X4R36WWWUET2U3WHETUZGH7MLZRLISD\",\"WARC-Block-Digest\":\"sha1:HWDLZRFGNRTVQ5I4ELUXLANGXTHXBBJP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662534693.28_warc_CC-MAIN-20220520223029-20220521013029-00781.warc.gz\"}"} |
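The "sets X reps X weight" volume-load rule quoted in the text above (bench press: (1 X 5 X 275) + (2 X 3 X 315) = 3,265 lbs) can be sketched in a few lines of Python. The function name is illustrative, not taken from any of the quoted calculators:

```python
def training_volume(entries):
    """Total volume load: sum of sets * reps * weight over all entries."""
    return sum(sets * reps * weight for sets, reps, weight in entries)

# Bench press example from the text: 1 x 5 x 275 lbs and 2 x 3 x 315 lbs
workout = [(1, 5, 275), (2, 3, 315)]
print(training_volume(workout))  # 3265
```

The same pattern extends to weekly volume by summing over all workouts for a muscle group.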
https://planetcalc.com/7744/ | [
"# Check Digit Mod 11 (ISBN-10 check digit)\n\nThis online calculator generates and validates a check digit with the mod 11 algorithm, used by ISBN-10.\n\nA check digit is a form of redundancy check used for error detection on identification numbers, such as bank account numbers, which are used in an application where they will at least sometimes be input manually. It consists of one or more digits computed by an algorithm from the other digits (or letters) in the sequence input.\n\nWith a check digit, one can detect simple errors in the input of a series of characters (usually digits) such as a single mistyped digit or some permutations of two successive digits [1].\n\nThe International Standard Book Number (ISBN) is a unique numeric commercial book identifier. Publishers purchase ISBNs from an affiliate of the International ISBN Agency. The 10-digit ISBN format was developed by the International Organization for Standardization (ISO) and was published in 1970 as international standard ISO 2108.\n\nThe 2001 edition of the official manual of the International ISBN Agency says that the ISBN-10 check digit – which is the last digit of the ten-digit ISBN – must range from 0 to 10 (the symbol X is used for 10), and must be such that the sum of all the ten digits, each multiplied by its (integer) weight, descending from 10 to 1, is a multiple of 11 [2].\n\nBelow you can find the calculator which generates the check digit from a 9-digit sequence, forming the final 10-digit sequence, and validates a 10-digit sequence using the mod 11 algorithm described by ISBN.",
null,
"#### Check Digit Mod 11 (ISBN-10 check digit)",
null,
"PLANETCALC, Check Digit Mod 11 (ISBN-10 check digit)"
] | [
null,
"https://planetcalc.com/img/32x32i.png",
null,
"https://planetcalc.com/img/lic/by-sa_s.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8025374,"math_prob":0.8922448,"size":1542,"snap":"2019-43-2019-47","text_gpt3_token_len":327,"char_repetition_ratio":0.14044213,"word_repetition_ratio":0.0,"special_character_ratio":0.22049287,"punctuation_ratio":0.06405694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9672387,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T12:12:46Z\",\"WARC-Record-ID\":\"<urn:uuid:4e586fcd-3c67-42af-8844-359e244e2fd4>\",\"Content-Length\":\"33870\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e71e2044-3bf5-4085-a85e-201c4b95eb6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4eaa970b-c9a8-4712-91b6-c1ec3fae5e6b>\",\"WARC-IP-Address\":\"209.182.217.83\",\"WARC-Target-URI\":\"https://planetcalc.com/7744/\",\"WARC-Payload-Digest\":\"sha1:TALIWM7TLFK3HGBQ7OG4KH75KGQAKIQ6\",\"WARC-Block-Digest\":\"sha1:OWWJOCVVID7TW5OIMR7YZF4HCABBMZWJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669755.17_warc_CC-MAIN-20191118104047-20191118132047-00368.warc.gz\"}"} |
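The mod 11 rule described above — the sum of all ten digits, each multiplied by a weight descending from 10 to 1, must be a multiple of 11, with X standing for a check value of 10 — is only a few lines of code. A hedged sketch in Python (not planetcalc's own implementation):

```python
def isbn10_check_digit(first9: str) -> str:
    """Compute the check digit so the weighted sum (weights 10..1) is divisible by 11."""
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def isbn10_is_valid(isbn10: str) -> bool:
    """Validate a full 10-character ISBN ('X' allowed for the last position)."""
    digits = [10 if c == "X" else int(c) for c in isbn10]
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

print(isbn10_check_digit("030640615"))  # '2' -> ISBN 0-306-40615-2
print(isbn10_is_valid("0306406152"))    # True
```

Because the weights 1..10 are all coprime to 11, this scheme catches any single mistyped digit and any transposition of two adjacent digits.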
https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=bd03b99b9bb860e062f08ec6d919c0841d951833 | [
"author Ben Laurie Wed, 16 Feb 2000 22:15:39 +0000 (22:15 +0000) committer Ben Laurie Wed, 16 Feb 2000 22:15:39 +0000 (22:15 +0000)\n CHANGES patch | blob | history config patch | blob | history crypto/bn/bn_exp.c patch | blob | history\n\ndiff --git a/CHANGES b/CHANGES\nindex b436189..6618761 100644 (file)\n--- a/CHANGES\n+++ b/CHANGES\n@@ -4,6 +4,11 @@\n\nChanges between 0.9.4 and 0.9.5 [xx XXX 2000]\n\n+ *) Add support for the Compaq Atalla crypto accelerator. If it is installed,\n+ the support is automatically enabled. The resulting binaries will\n+ autodetect the card and use it if present.\n+ [Ben Laurie and Compaq Inc.]\n+\n*) Work around for Netscape hang bug. This sends certificate request\nand server done in one record. Since this is perfectly legal in the\nSSL/TLS protocol it isn't a \"bug\" option and is on by default. See\ndiff --git a/config b/config\nindex 50d9740..578d08b 100755 (executable)\n--- a/config\n+++ b/config\n@@ -458,6 +458,12 @@ case \"\\$GUESSOS\" in\n*) OUT=`echo \\$GUESSOS | awk -F- '{print \\$3}'`;;\nesac\n\n+# See whether we can compile Atalla support\n+if [ -f /usr/include/atasi.h ]\n+then\n+ options=\"\\$options -DATALLA\"\n+fi\n+\n# gcc < 2.8 does not support -mcpu=ultrasparc\nif [ \"\\$OUT\" = solaris-sparcv9-gcc -a \\$GCCVER -lt 28 ]\nthen\nindex 8593ed0..0c11601 100644 (file)\n#include <stdio.h>\n#include \"cryptlib.h\"\n#include \"bn_lcl.h\"\n+#ifdef ATALLA\n+# include <alloca.h>\n+# include <atasi.h>\n+# include <assert.h>\n+# include <dlfcn.h>\n+#endif\n\n#define TABLE_SIZE 16\n\n@@ -157,6 +163,173 @@ err:\nreturn(ret);\n}\n\n+#ifdef ATALLA\n+\n+/*\n+ * This routine will dynamically check for the existance of an Atalla AXL-200\n+ * SSL accelerator module. 
If one is found, the variable\n+ * asi_accelerator_present is set to 1 and the function pointers\n+ * ptr_ASI_xxxxxx above will be initialized to corresponding ASI API calls.\n+ */\n+typedef int tfnASI_GetPerformanceStatistics(int reset_flag,\n+ unsigned int *ret_buf);\n+typedef int tfnASI_GetHardwareConfig(long card_num, unsigned int *ret_buf);\n+typedef int tfnASI_RSAPrivateKeyOpFn(RSAPrivateKey * rsaKey,\n+ unsigned char *output,\n+ unsigned char *input,\n+ unsigned int modulus_len);\n+\n+static tfnASI_GetHardwareConfig *ptr_ASI_GetHardwareConfig;\n+static tfnASI_RSAPrivateKeyOpFn *ptr_ASI_RSAPrivateKeyOpFn;\n+static tfnASI_GetPerformanceStatistics *ptr_ASI_GetPerformanceStatistics;\n+static int asi_accelerator_present;\n+static int tried_atalla;\n+\n+void atalla_initialize_accelerator_handle(void)\n+ {\n+ void *dl_handle;\n+ int status;\n+ unsigned int config_buf;\n+ static int tested;\n+\n+ if(tested)\n+ return;\n+\n+ tested=1;\n+\n+ bzero((void *)config_buf, 1024);\n+\n+ /*\n+ * Check to see if the library is present on the system\n+ */\n+ dl_handle = dlopen(\"atasi.so\", RTLD_NOW);\n+ if (dl_handle == (void *) NULL)\n+ {\n+/* printf(\"atasi.so library is not present on the system\\n\");\n+ printf(\"No HW acceleration available\\n\");*/\n+ return;\n+ }\n+\n+ /*\n+ * The library is present. Now we'll check to insure that the\n+ * LDM is up and running. First we'll get the address of the\n+ * function in the atasi library that we need to see if the\n+ * LDM is operating.\n+ */\n+\n+ ptr_ASI_GetHardwareConfig =\n+ (tfnASI_GetHardwareConfig *)dlsym(dl_handle,\"ASI_GetHardwareConfig\");\n+\n+ if (ptr_ASI_GetHardwareConfig)\n+ {\n+ /*\n+ * We found the call, now we'll get our config\n+ * status. 
If we get a non 0 result, the LDM is not\n+ * running and we cannot use the Atalla ASI *\n+ * library.\n+ */\n+ status = (*ptr_ASI_GetHardwareConfig)(0L, config_buf);\n+ if (status != 0)\n+ {\n+ printf(\"atasi.so library is present but not initialized\\n\");\n+ printf(\"No HW acceleration available\\n\");\n+ return;\n+ }\n+ }\n+ else\n+ {\n+/* printf(\"We found the library, but not the function. Very Strange!\\n\");*/\n+ return ;\n+ }\n+\n+ /*\n+ * It looks like we have acceleration capabilities. Load up the\n+ * pointers to our ASI API calls.\n+ */\n+ ptr_ASI_RSAPrivateKeyOpFn=\n+ (tfnASI_RSAPrivateKeyOpFn *)dlsym(dl_handle, \"ASI_RSAPrivateKeyOpFn\");\n+ if (ptr_ASI_RSAPrivateKeyOpFn == NULL)\n+ {\n+/* printf(\"We found the library, but no RSA function. Very Strange!\\n\");*/\n+ return;\n+ }\n+\n+ ptr_ASI_GetPerformanceStatistics =\n+ (tfnASI_GetPerformanceStatistics *)dlsym(dl_handle, \"ASI_GetPerformanceStatistics\");\n+ if (ptr_ASI_GetPerformanceStatistics == NULL)\n+ {\n+/* printf(\"We found the library, but no stat function. 
Very Strange!\\n\");*/\n+ return;\n+ }\n+\n+ /*\n+ * Indicate that acceleration is available\n+ */\n+ asi_accelerator_present = 1;\n+\n+/* printf(\"This system has acceleration!\\n\");*/\n+\n+ return;\n+ }\n+\n+/* make sure this only gets called once when bn_mod_exp calls bn_mod_exp_mont */\n+int BN_mod_exp_atalla(BIGNUM *r, BIGNUM *a, const BIGNUM *p, const BIGNUM *m)\n+ {\n+ unsigned char *abin;\n+ unsigned char *pbin;\n+ unsigned char *mbin;\n+ unsigned char *rbin;\n+ int an,pn,mn,ret;\n+ RSAPrivateKey keydata;\n+\n+ atalla_initialize_accelerator_handle();\n+ if(!asi_accelerator_present)\n+ return 0;\n+\n+\n+/* We should be able to run without size testing */\n+# define ASIZE 128\n+ an=BN_num_bytes(a);\n+ pn=BN_num_bytes(p);\n+ mn=BN_num_bytes(m);\n+\n+ if(an <= ASIZE && pn <= ASIZE && mn <= ASIZE)\n+ {\n+ int size=mn;\n+\n+ assert(an <= mn);\n+ abin=alloca(size);\n+ memset(abin,'\\0',mn);\n+ BN_bn2bin(a,abin+size-an);\n+\n+ pbin=alloca(pn);\n+ BN_bn2bin(p,pbin);\n+\n+ mbin=alloca(size);\n+ memset(mbin,'\\0',mn);\n+ BN_bn2bin(m,mbin+size-mn);\n+\n+ rbin=alloca(size);\n+\n+ memset(&keydata,'\\0',sizeof keydata);\n+ keydata.privateExponent.data=pbin;\n+ keydata.privateExponent.len=pn;\n+ keydata.modulus.data=mbin;\n+ keydata.modulus.len=size;\n+\n+ ret=(*ptr_ASI_RSAPrivateKeyOpFn)(&keydata,rbin,abin,keydata.modulus.len);\n+/*fprintf(stderr,\"!%s\\n\",BN_bn2hex(a));*/\n+ if(!ret)\n+ {\n+ BN_bin2bn(rbin,keydata.modulus.len,r);\n+/*fprintf(stderr,\"?%s\\n\",BN_bn2hex(r));*/\n+ return 1;\n+ }\n+ }\n+ return 0;\n+ }\n+#endif /* def ATALLA */\n+\nint BN_mod_exp(BIGNUM *r, BIGNUM *a, const BIGNUM *p, const BIGNUM *m,\nBN_CTX *ctx)\n{\n@@ -166,6 +339,13 @@ int BN_mod_exp(BIGNUM *r, BIGNUM *a, const BIGNUM *p, const BIGNUM *m,\nbn_check_top(p);\nbn_check_top(m);\n\n+#ifdef ATALLA\n+ if(BN_mod_exp_atalla(r,a,p,m))\n+ return 1;\n+/* If it fails, try the other methods (but don't try atalla again) */\n+ tried_atalla=1;\n+#endif\n+\n#ifdef MONT_MUL_MOD\n/* I have finally been 
able to take out this pre-condition of\n* the top bit being set. It was caused by an error in BN_div\n@@ -183,6 +363,10 @@ int BN_mod_exp(BIGNUM *r, BIGNUM *a, const BIGNUM *p, const BIGNUM *m,\n{ ret=BN_mod_exp_simple(r,a,p,m,ctx); }\n#endif\n\n+#ifdef ATALLA\n+ tried_atalla=0;\n+#endif\n+\nreturn(ret);\n}\n\n@@ -318,6 +502,12 @@ int BN_mod_exp_mont(BIGNUM *rr, BIGNUM *a, const BIGNUM *p,\nbn_check_top(p);\nbn_check_top(m);\n\n+#ifdef ATALLA\n+ if(!tried_atalla && BN_mod_exp_atalla(rr,a,p,m))\n+ return 1;\n+/* If it fails, try the other methods */\n+#endif\n+\nif (!(m->d & 1))\n{\nBNerr(BN_F_BN_MOD_EXP_MONT,BN_R_CALLED_WITH_EVEN_MODULUS);"
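The control flow this patch adds (try the hardware path once, fall back to software on failure, and use a guard flag so nested calls do not retry the accelerator) can be sketched in a standalone way. All names below (`accel_mod_exp`, `soft_mod_exp`, `tried_accel`) are illustrative stand-ins, not the OpenSSL or Atalla ASI API:

```cpp
#include <cassert>

// Hypothetical stand-ins for the accelerated and software paths; in the
// real patch these are BN_mod_exp_atalla() and the Montgomery fallback.
static bool accelerator_available = false;   // assumption: probed elsewhere
static int tried_accel = 0;                  // mirrors the tried_atalla guard

static bool accel_mod_exp(long &r, long a, long p, long m) {
    (void)r; (void)a; (void)p; (void)m;
    if (!accelerator_available) return false;  // hardware path unavailable
    return false;  // (real code would call into the accelerator here)
}

static long soft_mod_exp(long a, long p, long m) {
    long r = 1 % m;
    for (a %= m; p > 0; p >>= 1) {             // square-and-multiply
        if (p & 1) r = (r * a) % m;
        a = (a * a) % m;
    }
    return r;
}

long mod_exp(long a, long p, long m) {
    long r = 0;
    // Try the accelerator once; on failure fall through to software,
    // setting the guard so nested calls do not retry the hardware path.
    if (!tried_accel && accel_mod_exp(r, a, p, m)) return r;
    tried_accel = 1;
    r = soft_mod_exp(a, p, m);
    tried_accel = 0;                           // reset, as the patch does
    return r;
}
```

Note the reset of the guard after the software path returns; the patch does the same with `tried_atalla=0` so the next top-level call probes the accelerator again.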
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51058537,"math_prob":0.85878634,"size":5249,"snap":"2020-34-2020-40","text_gpt3_token_len":1621,"char_repetition_ratio":0.11477598,"word_repetition_ratio":0.09893048,"special_character_ratio":0.32901505,"punctuation_ratio":0.2200957,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9660162,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T16:57:32Z\",\"WARC-Record-ID\":\"<urn:uuid:6f188cd9-0137-4eca-89a6-fa02157e4b35>\",\"Content-Length\":\"39840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a298118-938b-4a3f-937f-3d5eeb0a6a3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ced00b06-1deb-405e-89ef-b65a072ae466>\",\"WARC-IP-Address\":\"194.97.150.234\",\"WARC-Target-URI\":\"https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=bd03b99b9bb860e062f08ec6d919c0841d951833\",\"WARC-Payload-Digest\":\"sha1:VZGH7HAZZ4AUQNSTAI36WCVVOSQZ2ACO\",\"WARC-Block-Digest\":\"sha1:J6WZYWWZMFOUQIPNEZ5VMQJG4H2GB4UU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198287.23_warc_CC-MAIN-20200920161009-20200920191009-00012.warc.gz\"}"} |
https://plugincafe.maxon.net/topic/7037/7947_points-not-on-neighbor-polygons-edges | [
"# Points not on neighbor polygons edges\n\n• On 16/03/2013 at 15:33, xxxxxxxx wrote:\n\nUser Information:\nCinema 4D Version: R12\nPlatform: Windows ; Mac OSX ;\nLanguage(s) : C++ ;\n\n---------\nLet's say that I have two triangles (ta and tb) with a neighboring edge. Using GetPolyInfo(ta), the point of ta that is not on the edge is trivial (for edge, it would be ta->c since edge represents an edge from points a->b, etc.). Is there a simple way to get the point not on the edge for tb or do I really need to find out which of the three point indices of tb (tb->a, tb->b, tb->c) don't match ta->a and ta->b?\n\n• On 16/03/2013 at 17:07, xxxxxxxx wrote:\n\nWell, okay. I went the 'et tu, brute'-force method. Here is an example for the curious:\n\n``````pinfo = neighbor.GetPolyInfo(i);\nif (!pinfo->mark)\n{\ns->Init(f->p1, f->p2, ks, kd, 0.0, 1000.0);\n++s;\n++scnt;\n// Diagonal bracing across points not on edge for triangles\nef = pinfo->face;\nif (ef != NOTOK)\n{\nnp = &polys[ef];\nif ((np->a != p->a) && (np->a != p->b))\ns->Init(f->p3, &masses[np->a], ks, kd, 0.0, 1000.0);\nelse if ((np->b != p->a) && (np->b != p->b))\ns->Init(f->p3, &masses[np->b], ks, kd, 0.0, 1000.0);\nelse\ns->Init(f->p3, &masses[np->c], ks, kd, 0.0, 1000.0);\n++s;\n++scnt;\n}\n}\n``````\n\nNote: the number of 'edges' (in this case, the 's' array) total for a triangulated object becomes neighbor->GetEdgeCount() + (polygonCount * 2)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6641674,"math_prob":0.9868456,"size":1415,"snap":"2019-13-2019-22","text_gpt3_token_len":501,"char_repetition_ratio":0.10205528,"word_repetition_ratio":0.05106383,"special_character_ratio":0.39222616,"punctuation_ratio":0.21529745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9803048,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T14:32:46Z\",\"WARC-Record-ID\":\"<urn:uuid:5072386b-05b1-4438-a3ce-900f4393fc93>\",\"Content-Length\":\"35578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7958f5fd-2934-45a1-9ce8-ee567aea7283>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6b84fbc-fd77-453e-8983-29eb6e4b6133>\",\"WARC-IP-Address\":\"88.99.93.218\",\"WARC-Target-URI\":\"https://plugincafe.maxon.net/topic/7037/7947_points-not-on-neighbor-polygons-edges\",\"WARC-Payload-Digest\":\"sha1:UG2SURSPN65MVY5B2XG5UEQYXU4D4GCM\",\"WARC-Block-Digest\":\"sha1:UK7NOTY2PY76MUZSQJHUEJBADOUYH7RH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202872.8_warc_CC-MAIN-20190323141433-20190323163433-00032.warc.gz\"}"} |
http://www.self.gutenberg.org/articles/eng/Tensors | [
"### Tensors\n\nFor other uses, see Tensor (disambiguation).\n\nTensors are geometric objects that describe linear relations between vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Vectors and scalars themselves are also tensors. A tensor can be represented as a multi-dimensional array of numerical values. The order (also degree or rank) of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map can be represented by a matrix, a 2-dimensional array, and therefore is a 2nd-order tensor. A vector can be represented as a 1-dimensional array and is a 1st-order tensor. Scalars are single numbers and are thus 0th-order tensors.\n\nTensors are used to represent correspondences between sets of geometric vectors. For example, the Cauchy stress tensor T takes a direction v as input and produces the stress T(v) on the surface normal to this vector for output thus expressing a relationship between these two vectors, shown in the figure (right).\n\nBecause they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. Taking a coordinate basis or frame of reference and applying the tensor to it results in an organized multidimensional array representing the tensor in that basis, or frame of reference. The coordinate independence of a tensor then takes the form of a \"covariant\" transformation law that relates the array computed in one coordinate system to that computed in another one. 
This transformation law is considered to be built into the notion of a tensor in a geometric or physical setting, and the precise form of the transformation law determines the type (or valence) of the tensor.\n\nTensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as elasticity, fluid mechanics, and general relativity. Tensors were first conceived by Tullio Levi-Civita and Gregorio Ricci-Curbastro, who continued the earlier work of Bernhard Riemann and Elwin Bruno Christoffel and others, as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.\n\n## History\n\nThe concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word \"tensor\" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor.[Note 1] The contemporary usage was brought in by Woldemar Voigt in 1898.\n\nTensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented by Ricci in 1892. It was made accessible to many mathematicians by the publication of Ricci and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications).\n\nIn the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. 
Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:\n\nI admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.\n—Albert Einstein, The Italian Mathematicians of Relativity\n\nTensors were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics.\n\nFrom about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field, but the theory is then certainly less geometric, and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.\n\n## Definition\n\nThere are several approaches to defining tensors. 
Although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction.

### As multidimensional arrays

Just as a scalar is described by a single number, and a vector with respect to a given basis is described by an array of one dimension, any tensor with respect to a basis is described by a multidimensional array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, in subscript and superscript, after the symbolic name of the tensor. The total number of indices required to uniquely select each component is equal to the dimension of the array, and is called the order or the rank of the tensor.[Note 2] For example, the entries of an order 2 tensor T would be denoted Tij, where i and j are indices running from 1 to the dimension of the related vector space.[Note 3]

Just as the components of a vector change when we change the basis of the vector space, the entries of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see covariance and contravariance of vectors), where the new basis vectors $\mathbf{\hat{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as

$\mathbf{\hat{e}}_i = \sum_j R^j_i \mathbf{e}_j = R^j_i \mathbf{e}_j,$

where $R^j_i$ is a matrix and in the second expression the summation sign was suppressed (a notational convenience introduced by Einstein that will be used throughout this article). The components $v^i$ of a regular (or column) vector v transform with the inverse of the matrix R,

$\hat{v}^i = (R^{-1})^i_j v^j,$

where the hat denotes the components in the new basis, while the components $w_i$ of a covector (or row vector) w transform with the matrix R itself,

$\hat{w}_i = R_i^j w_j.$

The components of a tensor transform in a similar manner with a transformation matrix for each index. If an index transforms like a vector with the inverse of the basis transformation, it is called contravariant and is traditionally denoted with an upper index, while an index that transforms with the basis transformation itself is called covariant and is denoted with a lower index. The transformation law for an order-m tensor with n contravariant indices and m − n covariant indices is thus given as

$\hat{T}^{i_1,\ldots,i_n}_{i_{n+1},\ldots,i_m} = (R^{-1})^{i_1}_{j_1}\cdots(R^{-1})^{i_n}_{j_n} R^{j_{n+1}}_{i_{n+1}}\cdots R^{j_m}_{i_m} T^{j_1,\ldots,j_n}_{j_{n+1},\ldots,j_m}.$

Such a tensor is said to be of order or type (n, m − n).[Note 4] This discussion motivates the following formal definition: a tensor of type (n, m − n) is an assignment of a multidimensional array $T^{i_1,\ldots,i_n}_{i_{n+1},\ldots,i_m}[\mathbf{f}]$ to each basis $\mathbf{f}$ of the vector space such that, under a change of basis $\mathbf{f} \mapsto \mathbf{f}\cdot R$, the array transforms according to the law above.

The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
Nowadays, this definition is still used in some physics and engineering text books.

#### Tensor fields

Main article: Tensor field

In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components which are functions. This was, in fact, the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, but they are often simply referred to as tensors themselves.

In this context the defining transformation law takes a different form. The "basis" for the tensor field is determined by the coordinates of the underlying space, and the defining transformation law is expressed in terms of partial derivatives of the coordinate functions $\bar{x}_i(x_1,\ldots,x_k)$ defining a coordinate transformation,

$\hat{T}^{i_1\dots i_n}_{i_{n+1}\dots i_m}(\bar{x}_1,\ldots,\bar{x}_k) = \frac{\partial \bar{x}^{i_1}}{\partial x^{j_1}} \cdots \frac{\partial \bar{x}^{i_n}}{\partial x^{j_n}} \frac{\partial x^{j_{n+1}}}{\partial \bar{x}^{i_{n+1}}} \cdots \frac{\partial x^{j_m}}{\partial \bar{x}^{i_m}} T^{j_1\dots j_n}_{j_{n+1}\dots j_m}(x_1,\ldots,x_k).$

### As multilinear maps

A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach is to define a tensor as a multilinear map.
In that approach a type (n,m) tensor T is defined as a map

$T: \underbrace{V^* \times\dots\times V^*}_{n\text{ copies}} \times \underbrace{V \times\dots\times V}_{m\text{ copies}} \rightarrow \mathbf{R},$

where V is a vector space and V* is the corresponding dual space of covectors, which is linear in each of its arguments.

By applying a multilinear map T of type (n,m) to a basis {ej} for V and a canonical cobasis {εi} for V*,

$T^{i_1\dots i_n}_{j_1\dots j_m} \equiv T(\mathbf{\varepsilon}^{i_1},\ldots,\mathbf{\varepsilon}^{i_n},\mathbf{e}_{j_1},\ldots,\mathbf{e}_{j_m}),$

an (n + m)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realised as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.

### Using tensor products

For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property.
A type (n,m) tensor is defined in this context as an element of the tensor product of vector spaces,

$T \in \underbrace{V \otimes\dots\otimes V}_{n\text{ copies}} \otimes \underbrace{V^* \otimes\dots\otimes V^*}_{m\text{ copies}}.$

If vi is a basis of V and wj is a basis of W, then the tensor product $V\otimes W$ has a natural basis $\mathbf{v}_i\otimes \mathbf{w}_j$. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual {εj}, i.e.

$T = T^{i_1\dots i_n}_{j_1\dots j_m}\; \mathbf{e}_{i_1}\otimes\cdots\otimes \mathbf{e}_{i_n}\otimes \mathbf{\varepsilon}^{j_1}\otimes\cdots\otimes \mathbf{\varepsilon}^{j_m}.$

Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (n,m) tensor. Moreover, the universal property of the tensor product gives a 1-to-1 correspondence between tensors defined in this way and tensors defined as multilinear maps.

## Examples

Vectors (u, v, w) exterior-multiplied to obtain n-vectors (parallelotope elements).
1-forms (ε, η, ω) exterior-multiplied to obtain n-forms ("meshes" of coordinate surfaces, here planes).
Geometric interpretations for exterior products of n vectors and n 1-forms, where n is the grade, for n = 1, 2, 3. The "circulations" show orientation.

This table shows important examples of tensors, including both tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m).
For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimension of the underlying vector space or manifold.

| n, m | n = 0 | n = 1 | n = 2 | ... | n = N |
|---|---|---|---|---|---|
| m = 0 | scalar, e.g. scalar curvature | vector (e.g. direction vector) | bivector, e.g. inverse metric tensor | ... | N-vector, a sum of N-blades |
| m = 1 | covector, linear functional, 1-form | linear transformation, Kronecker delta | | | |
| m = 2 | bilinear form, e.g. inner product, metric tensor, Ricci curvature, 2-form, symplectic form | e.g. cross product in three dimensions | e.g. elasticity tensor | | |
| m = 3 | e.g. 3-form | e.g. Riemann curvature tensor | | | |
| ... | | | | | |
| m = M | e.g. M-form, i.e. volume form | | | | |

Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this can be visualized as moving diagonally up and to the right on the table. Symmetrically, lowering an index can be visualized as moving diagonally down and to the left on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this can be visualized as moving diagonally up and to the left on the table.

## Notation

### Ricci calculus

Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.

### Einstein summation convention

The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i.
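As a concrete illustration (not from the article itself), the implicit sum over a repeated index in an expression such as $y^i = A^i_j v^j$ is just an explicit loop over that index; here is a minimal 2×2 sketch:

```cpp
#include <array>
#include <cassert>

// y^i = A^i_j v^j : the repeated index j is summed over (Einstein
// convention), written out as an explicit loop. Purely illustrative.
std::array<double, 2> contract(const std::array<std::array<double, 2>, 2> &A,
                               const std::array<double, 2> &v) {
    std::array<double, 2> y{0.0, 0.0};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)   // the implicit sum made explicit
            y[i] += A[i][j] * v[j];
    return y;
}
```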
Several distinct pairs of indices may be summed this way.

### Penrose graphical notation

Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.

### Abstract index notation

The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.

### Component-free notation

A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.

## Operations

There are a number of basic operations that may be conducted on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component by component. These operations do not change the type of the tensor; however, there also exist operations that change the type of the tensors.

### Tensor product

Main article: Tensor product

The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.

$(S\otimes T)(v_1,\ldots, v_n, v_{n+1},\ldots, v_{n+m}) = S(v_1,\ldots, v_n)\,T(v_{n+1},\ldots, v_{n+m}),$

which again produces a map that is linear in all its arguments. On components, the effect is similarly to multiply the components of the two input tensors, i.e.

$(S\otimes T)^{i_1\ldots i_l i_{l+1}\ldots i_{l+n}}_{j_1\ldots j_k j_{k+1}\ldots j_{k+m}} = S^{i_1\ldots i_l}_{j_1\ldots j_k}\, T^{i_{l+1}\ldots i_{l+n}}_{j_{k+1}\ldots j_{k+m}}.$

If S is of type (l,k) and T is of type (n,m), then the tensor product S ⊗ T has type (l+n, k+m).

### Contraction

Main article: Tensor contraction

Tensor contraction is an operation that reduces the total order of a tensor by two. More precisely, it reduces a type (n,m) tensor to a type (n−1,m−1) tensor. In terms of components, the operation is achieved by summing over one contravariant and one covariant index of the tensor. For example, a (1,1)-tensor $T_i^j$ can be contracted to a scalar through

$T_i^i,$

where the summation is again implied. When the (1,1)-tensor is interpreted as a linear map, this operation is known as the trace.

The contraction is often used in conjunction with the tensor product to contract an index from each tensor.

The contraction can also be understood in terms of the definition of a tensor as an element of a tensor product of copies of the space V with the space V* by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V* to a factor from V.
For example, a tensor

$T \in V\otimes V\otimes V^*$

can be written as a linear combination

$T = v_1\otimes w_1\otimes \alpha_1 + v_2\otimes w_2\otimes \alpha_2 +\cdots + v_N\otimes w_N\otimes \alpha_N.$

The contraction of T on the first and last slots is then the vector

$\alpha_1(v_1)w_1 + \alpha_2(v_2)w_2+\cdots+\alpha_N(v_N)w_N.$

### Raising or lowering an index

When a vector space is equipped with an inner product (or metric as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric itself is a (symmetric) (0,2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric. This produces a new tensor with the same index structure as before, but with a lower index in the position of the contracted upper index. This operation is known graphically as lowering an index.

Conversely, the matrix inverse of the metric can be defined, which behaves as a (2,0)-tensor. This inverse metric can be contracted with a lower index to produce an upper index. This operation is called raising an index.

## Applications

### Continuum mechanics

Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor. The stress tensor and strain tensor are both second-order tensors, and are related in a general linear elastic material by a fourth-order elasticity tensor. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3×3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3×3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment.
Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.

If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2,0), in linear elasticity, or more precisely by a tensor field of type (2,0), since the stresses may vary from point to point.

### Other examples from physics

Common applications include the electromagnetic field tensor, the moment of inertia tensor, and the stress–energy tensor of general relativity.

### Applications of tensors of order > 2

The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do, however, capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.

The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear.
To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:

$\frac{P_i}{\varepsilon_0} = \sum_j \chi^{(1)}_{ij} E_j + \sum_{jk} \chi_{ijk}^{(2)} E_j E_k + \sum_{jk\ell} \chi_{ijk\ell}^{(3)} E_j E_k E_\ell + \cdots.$

Here $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ gives the Pockels effect and second harmonic generation, and $\chi^{(3)}$ gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.

## Generalizations

### Tensors in infinite dimensions

The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where, instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous duals. Tensors thus live naturally on Banach manifolds.

### Tensor densities

Main article: Tensor density

It is also possible for a tensor field to have a "density". A tensor with density r transforms as an ordinary tensor under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian to the rth power.
Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range.

In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles r times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values.

Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. (For more on the intrinsic meaning, see density on a manifold.)

### Spinors

Main article: Spinor

Starting with an orthonormal coordinate system, a tensor transforms in a certain way when a rotation is applied. However, there is additional structure to the group of rotations that is not exhibited by the transformation law for tensors: see orientation entanglement and plate trick. Mathematically, the rotation group is not simply connected. Spinors are mathematical objects that generalize the transformation law for tensors in a way that is sensitive to this fact."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9068308,"math_prob":0.9875107,"size":24514,"snap":"2019-35-2019-39","text_gpt3_token_len":5283,"char_repetition_ratio":0.15458997,"word_repetition_ratio":0.026229508,"special_character_ratio":0.20947213,"punctuation_ratio":0.10722149,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992999,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-26T03:10:20Z\",\"WARC-Record-ID\":\"<urn:uuid:384b8347-d5c3-4668-b64f-0b9aa4e336f5>\",\"Content-Length\":\"112475\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e900ea0-38cd-4c84-9db3-fd1d48d76c07>\",\"WARC-Concurrent-To\":\"<urn:uuid:651dcd67-e5bd-428d-a52b-5ccac1cd795f>\",\"WARC-IP-Address\":\"66.27.42.21\",\"WARC-Target-URI\":\"http://www.self.gutenberg.org/articles/eng/Tensors\",\"WARC-Payload-Digest\":\"sha1:X7RJHMPDIZFBD67CMEOLJEYEH3RTSRSR\",\"WARC-Block-Digest\":\"sha1:Y232BJ6QLPBITJKNJO43G52VAF5MV5SU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027330962.67_warc_CC-MAIN-20190826022215-20190826044215-00179.warc.gz\"}"} |
https://percent-of.com/calculate/what-is-94-of-492/
# We use percentages in almost everything.

Percentages are a very important part of our daily lives. They are used in Economics, Cooking, Health, Sports, Mathematics, Science, Jewellery, Geography, Medicine and many other areas.

## Percent of Calculator

Calculate percentage of X, quick & simple.

94% of 492 is:
462.48

## Percent of - Table For 492

Percent of Difference
1% of 492 is 4.92 487.08
2% of 492 is 9.84 482.16
3% of 492 is 14.76 477.24
4% of 492 is 19.68 472.32
5% of 492 is 24.6 467.4
6% of 492 is 29.52 462.48
7% of 492 is 34.44 457.56
8% of 492 is 39.36 452.64
9% of 492 is 44.28 447.72
10% of 492 is 49.2 442.8
11% of 492 is 54.12 437.88
12% of 492 is 59.04 432.96
13% of 492 is 63.96 428.04
14% of 492 is 68.88 423.12
15% of 492 is 73.8 418.2
16% of 492 is 78.72 413.28
17% of 492 is 83.64 408.36
18% of 492 is 88.56 403.44
19% of 492 is 93.48 398.52
20% of 492 is 98.4 393.6
21% of 492 is 103.32 388.68
22% of 492 is 108.24 383.76
23% of 492 is 113.16 378.84
24% of 492 is 118.08 373.92
25% of 492 is 123 369
26% of 492 is 127.92 364.08
27% of 492 is 132.84 359.16
28% of 492 is 137.76 354.24
29% of 492 is 142.68 349.32
30% of 492 is 147.6 344.4
31% of 492 is 152.52 339.48
32% of 492 is 157.44 334.56
33% of 492 is 162.36 329.64
34% of 492 is 167.28 324.72
35% of 492 is 172.2 319.8
36% of 492 is 177.12 314.88
37% of 492 is 182.04 309.96
38% of 492 is 186.96 305.04
39% of 492 is 191.88 300.12
40% of 492 is 196.8 295.2
41% of 492 is 201.72 290.28
42% of 492 is 206.64 285.36
43% of 492 is 211.56 280.44
44% of 492 is 216.48 275.52
45% of 492 is 221.4 270.6
46% of 492 is 226.32 265.68
47% of 492 is 231.24 260.76
48% of 492 is 236.16 255.84
49% of 492 is 241.08 250.92
50% of 492 is 246 246
51% of 492 is 250.92 241.08
52% of 492 is 255.84 236.16
53% of 492 is 260.76 231.24
54% of 492 is 265.68 226.32
55% of 492 is 270.6 221.4
56% of 492 is 275.52 216.48
57% of 492 is 280.44 211.56
58% of
492 is 285.36 206.64
59% of 492 is 290.28 201.72
60% of 492 is 295.2 196.8
61% of 492 is 300.12 191.88
62% of 492 is 305.04 186.96
63% of 492 is 309.96 182.04
64% of 492 is 314.88 177.12
65% of 492 is 319.8 172.2
66% of 492 is 324.72 167.28
67% of 492 is 329.64 162.36
68% of 492 is 334.56 157.44
69% of 492 is 339.48 152.52
70% of 492 is 344.4 147.6
71% of 492 is 349.32 142.68
72% of 492 is 354.24 137.76
73% of 492 is 359.16 132.84
74% of 492 is 364.08 127.92
75% of 492 is 369 123
76% of 492 is 373.92 118.08
77% of 492 is 378.84 113.16
78% of 492 is 383.76 108.24
79% of 492 is 388.68 103.32
80% of 492 is 393.6 98.4
81% of 492 is 398.52 93.48
82% of 492 is 403.44 88.56
83% of 492 is 408.36 83.64
84% of 492 is 413.28 78.72
85% of 492 is 418.2 73.8
86% of 492 is 423.12 68.88
87% of 492 is 428.04 63.96
88% of 492 is 432.96 59.04
89% of 492 is 437.88 54.12
90% of 492 is 442.8 49.2
91% of 492 is 447.72 44.28
92% of 492 is 452.64 39.36
93% of 492 is 457.56 34.44
94% of 492 is 462.48 29.52
95% of 492 is 467.4 24.6
96% of 492 is 472.32 19.68
97% of 492 is 477.24 14.76
98% of 492 is 482.16 9.84
99% of 492 is 487.08 4.92
100% of 492 is 492 0

### Here's How to Calculate 94% of 492

Let's take a quick example here:

You have a Target coupon of \$492 and you need to know how much you will save on your purchase if the discount is 94 percent.

Solution:

Amount Saved = Original Price x Discount in Percent / 100

Amount Saved = (492 x 94) / 100

Amount Saved = 46248 / 100

Amount Saved = \$462.48

In other words, a 94% discount for a purchase with an original price of \$492 equals \$462.48 (Amount Saved), so you'll end up paying \$29.52.

### Calculating Percentages

Simply click on the calculate button to get the results of percentage calculations. You will see the result on the next page. If there are errors in the input fields, the result page will be blank.
The program allows you to calculate the difference between two numbers in percentages. You can also input a percentage of any number and get the numeric value. Although it is a simple calculator, it can be very useful in many scenarios. Our goal is to give you an easy-to-use percentage calculator that gives you the results you want fast.

Percentage in mathematics refers to fractions out of 100. It is usually represented by “%,” “pct,” or “percentage.” This web app allows a comma or dot as a decimal separator. So you can use both freely.

We have provided several examples for you to use. You can use the examples to feed in your own data correctly. We hope you will find this site useful for calculating percentages. You can even use it for crosschecking the accuracy of your assignment results.

NB. Americans use “percent,” while the British prefer “per cent.”

#### Examples

Example one

Calculate 20% of 200?
20% of 200 =____
(200/100) x 20 = _____
2 x 20 = 40

It is quite easy. Just divide 200 by 100 to get one percent. The result is 2. Then multiply it by 20 (20% = 20 per hundred): 20 x 2 = 40.

Example two

What percentage of 125 is 50?

50 = ---% of 125
50 x (100/125) = 40%

Get the value of one percent by dividing 100 by 125. After that, multiply the value by 50 to get the percentage value of 50 units, which is 40%. That is how to calculate the percentage.

Example three

What is the percentage (%) change (increase or decrease) from 120 to 150?

(150-120) x (100/120) = 25

Since 120 represents 100%, one percent equals 120/100 = 1.2. The difference 150-120 is 30, so an increase of 30 units represents 30 x (100/120) = 25%. This is how to calculate the percentage increase.

We do not use a percentage at all times. There are scenarios where we simply want to show the ratio of numbers. For instance, what is 20% of 50? This can also be interpreted as 20 hundredths of 50. This equates to 20/100 x 50 = 10.

You can use a calculation trick here.
Anytime you want to divide a number by 100, just move the decimal point two places to the left. The 20/100 x 50 calculated above can also be written as (20 x 50)/100. Since 20 x 50 = 1000, you can simply divide 1000 by 100 by moving the decimal point two places to the left, which gives you 10.

In another scenario, you want to calculate the percentage increase or decrease. Supposing you have \$10 and spend \$2 to buy candy, then you have spent 20% of your money. So how much will be remaining? All the money you have is 100%; if you spend 20%, you will have 80% remaining. You can simply use the percentage reduction tool above to calculate this value.

#### Origin

The word percent is derived from the Latin per centum, which means per hundred, and it is designated by %.
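The rules worked through in the examples reduce to two one-line formulas. A minimal Python sketch (the function names are mine, not part of the calculator):

```python
def percent_of(percent, whole):
    """Return percent% of whole, e.g. 94% of 492 -> 462.48."""
    return whole * percent / 100


def percent_change(old, new):
    """Percentage change from old to new, relative to old.

    A positive result is an increase, a negative result a decrease.
    """
    return (new - old) * 100 / old


# The worked examples from the text:
print(percent_of(94, 492))       # 462.48
print(percent_of(20, 200))       # 40.0
print(percent_change(120, 150))  # 25.0 (a 25% increase)
```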
https://byteshastra.com/sql-data-types/
# SQL Data Types

SQL (Structured Query Language) supports several data types that can be used to define the type of data that can be stored in a table column. Here are some of the common SQL data types:

1. Numeric data types:
• INTEGER: represents whole numbers.
• FLOAT: represents decimal numbers.
• DOUBLE: represents decimal numbers with more precision than FLOAT.
• DECIMAL: represents decimal numbers with fixed precision and scale.
2. Character and string data types:
• CHAR: represents fixed-length character strings.
• VARCHAR: represents variable-length character strings.
• TEXT: represents large variable-length character strings.
3. Date and time data types:
• DATE: represents a date value.
• TIME: represents a time value.
• TIMESTAMP: represents a date and time value.
4. Boolean data type:
• BOOLEAN: represents a true or false value.
5. Binary data types:
• BLOB: represents binary large objects.
• BYTEA: represents binary data in PostgreSQL.

These data types can be used in combination with other SQL keywords to define the structure and characteristics of database tables. It is important to choose the appropriate data type for each column based on the type of data being stored to ensure efficient data storage and retrieval.

## Data Types in MySQL, SQL Server and Oracle Databases

MySQL, SQL Server, and Oracle are three popular relational database management systems that support SQL. Here are the data types supported by each of these systems:

1.
MySQL data types:

Numeric data types:
• TINYINT: represents small integers.
• SMALLINT: represents medium-sized integers.
• INT: represents large integers.
• FLOAT: represents single-precision floating-point numbers.
• DOUBLE: represents double-precision floating-point numbers.
• DECIMAL: represents fixed-point decimal numbers.

Character and string data types:
• CHAR: represents fixed-length character strings.
• VARCHAR: represents variable-length character strings.
• TEXT: represents large variable-length character strings.

Date and time data types:
• DATE: represents a date value.
• TIME: represents a time value.
• TIMESTAMP: represents a date and time value.

Boolean data type:
• BOOLEAN: represents a true or false value (stored as TINYINT(1)).
2. SQL Server data types:

Numeric data types:
• TINYINT: represents small integers.
• SMALLINT: represents medium-sized integers.
• INT: represents large integers.
• FLOAT: represents double-precision floating-point numbers by default (FLOAT(53)).
• REAL: represents single-precision floating-point numbers.
• DECIMAL: represents fixed-point decimal numbers.
• NUMERIC: represents fixed-point decimal numbers with precision and scale.

Character and string data types:
• CHAR: represents fixed-length character strings.
• VARCHAR: represents variable-length character strings.
• TEXT: represents large variable-length character strings.

Date and time data types:
• DATE: represents a date value.
• TIME: represents a time value.
• DATETIME: represents a date and time value.

Boolean data type:
• BIT: represents a true or false value.
3.
Oracle data types:

Numeric data types:
• NUMBER: represents numbers with precision and scale.
• FLOAT: represents floating-point numbers.
• BINARY_FLOAT: represents single-precision floating-point numbers.
• BINARY_DOUBLE: represents double-precision floating-point numbers.

Character and string data types:
• CHAR: represents fixed-length character strings.
• VARCHAR2: represents variable-length character strings.
• CLOB: represents large variable-length character strings.

Date and time data types:
• DATE: represents a date value.
• TIMESTAMP: represents a date and time value.

Boolean data type:
• BOOLEAN: supported in PL/SQL; available as a SQL column type only from Oracle Database 23c onwards.
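To see how these types appear in practice, here is a hypothetical column definition (MySQL-flavored syntax; the table and column names are invented for illustration):

```sql
-- Hypothetical MySQL-style table using several of the types listed above.
CREATE TABLE product (
    id          INT           NOT NULL AUTO_INCREMENT,
    name        VARCHAR(100)  NOT NULL,      -- variable-length string
    sku         CHAR(8)       NOT NULL,      -- fixed-length string
    price       DECIMAL(10,2) NOT NULL,      -- exact fixed-point value
    weight_kg   FLOAT,                       -- approximate value; avoid for money
    description TEXT,                        -- large variable-length string
    in_stock    BOOLEAN       DEFAULT TRUE,  -- TINYINT(1) under the hood in MySQL
    added_on    TIMESTAMP     DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id)
);
```

Note the choice of DECIMAL rather than FLOAT for the price column: monetary values need exact precision, which approximate floating-point types cannot guarantee.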
https://codereview.stackexchange.com/questions/176164/sudoku-puzzle-solving-algorithm-that-uses-a-rule-based-approach-to-narrow-the-de
# Sudoku puzzle solving algorithm that uses a rule-based approach to narrow the depth search

I had written a Sudoku puzzle solver that accepts 9x9 puzzle boards and completes them in the least possible time. Rather than purely depending on brute force, my algorithm first attempts to fill in the squares that have an obvious solution. For each square filled this way, the amount of information increases (i.e., more squares get filled, which aids in filling in the remaining squares), which makes each further iteration easier. Once this method fails (this happens if there isn't an obvious answer to fill into any of the squares), the algorithm immediately switches over to brute-force search.

(Note, the source contains a few spelling mistakes, like the word recursive misspelled as Recrussive. Please ignore spelling errors. Anyway, it has been a while since I touched this code, and this was my first object-oriented code written in C++.)

## The basic working (high-level view)

1. The program first determines the set of all possible values that can be entered into every blank square in the puzzle board. The possibility set is determined by iterating through the row, column and the 3x3 block, and eliminating the set of numbers already present. The possibility set is stored as a bit-field value for each square.
2. Every blank square with a single possibility gets the number entered into it.
3. The algorithm is repeated from the first step again, until there isn't a square with just one possibility. If there aren't any blank squares left, the current board's state is returned as the result.
4. It chooses a blank square and enters a value into it. The process of choosing the blank square, out of all the available blank squares, is guided by an analysis algorithm. The value entered into this blank square is chosen from all the possible values that can be entered into the square.
5.
A recursive call to the solver is initiated with a clone of the current board's state. Note that the current board state now contains the modification made to it at the 4th step (entering one of the possible values).
6. Steps 4 and 5 are carried out until all the possible values in each of the available blank squares have been tested, or until the puzzle gets solved.

Though this algorithm may seem to have an alarmingly bad worst-case time complexity, it almost never hits the worst case regardless of the (legal) puzzle entered.

## List of types defined and used throughout the program:

I. BoardGrid

typedef int BoardGrid;


The BoardGrid type represents the puzzle board, encoded with the following convention:

1. The signed integer values from 1 to 9 represent the numbers 1 to 9 in the Sudoku puzzle board.
2. The value zero signifies a blank box in the Sudoku puzzle board.

Therefore, this representation is the one that gets acquired from the user as user input.

II. PossibilitySet

typedef int possibilitySet;


The possibilitySet type represents the set of all values that can/can't be filled in a square. This type is specifically defined for storing bit-fields representing a set of values. It follows this convention:

1. The value 0x0 represents a null set.
2. The values from 0x1 << 0 to 0x1 << 8 represent the values 1 to 9 in the Sudoku board. For example, the bit-field value (0x1 << 4) | (0x1 << 6) represents the set of numbers: {5,7}. Therefore, unlike BoardGrid, this stores multiple numbers in an integer field.
3.
An object of this type is used for holding the possible list of values that can be inserted into a particular square.

The following defines the bit-field values associated with each symbol accepted by the Sudoku board puzzle:

enum POSSIB
{
// represents the values from 1 to 9
// Here POS represents possibilities.
POS1 = 0x1,
POS2 = 0x1<<1,
POS3 = 0x1<<2,
POS4 = 0x1<<3,
POS5 = 0x1<<4,
POS6 = 0x1<<5,
POS7 = 0x1<<6,
POS8 = 0x1<<7,
POS9 = 0x1<<8,

/// represents the set of all values.
POS_ALL = POS1 | POS2 | POS3 | POS4 |
POS5 | POS6 | POS7 | POS8 |
POS9
};


III. InversePossibility

typedef float InversePossibility;


This Sudoku puzzle solving algorithm follows a brute-force approach mixed with a rule-based approach. To further improve performance, an extra analysis step is added to determine which squares should be prioritized when choosing one to fill. The priority value is inversely dependent on the number of possible values that can be filled into a particular square.

This InversePossibility value is used in computing the priority weight value of each blank square that is to be filled. This priority weight value helps in ordering which squares must be filled first.

IV. PriorityUnit

struct PriorityUnit{ /// this is a structure that shows the location of the cell
/// along with it's priority.
float PriorityValue;
int x; int y; // represents the location of the element
};


This holds the priority value of each square in the puzzle board. This is used during the evaluation process, where each instance is sorted based on the PriorityValue. A detailed explanation will be provided in the later sections.

V. WeightQueue

class Compare{
public:
bool operator () (PriorityUnit a, PriorityUnit b);
};

typedef std::priority_queue<PriorityUnit, std::deque<PriorityUnit>, Compare> WeightQueue;


This serves as the container holding the PriorityUnit instances.
Rule-based search: This method involves identifying the set of blank squares that can be filled immediately with the available information. For each iteration, the amount of information increases, and eventually as each squares get filled, the puzzle board gets completed.\n\n2. guided brute-force search: The evaluation algorithm, on a high level, is a brute-force algorithm which relies on an analysis method which prioritizes the blanks that are to be filled first. During each search iteration the rule-based search is called to complete the evaluation if the amount of information on the board is sufficient (as each depth first search iteration increases the amount of information on the board).\n\n3. The analysis method that guides the depth first search: This assigns a weight value for each of the blank squares based on a measure which determines the influence of filling-in that square with a possible value.\n\nThe following is the header file, containing the primary functions that are used in the evaluation process:\n\n#include \"InverseCountSum.h\"\n#include \"basicDeff.h\"\nclass Sudoku\n{\n// shows if there is a possibility for zero to be present in the possibilities variable\nbool ZeroPossibilities;\n\nBoardGrid Grid; // shows one sudoku board; (x, y) represents the position and the value\n// field represents the value\nBoardGrid possibilities;\nBoardGrid possibilityCount;\nInversePossibility possibilityCountI; // inverse count (this serves as the exact inverse of the\n// possibilityCount.\nInvCount PossibCount;\n\n/// only for debug purpose\n#ifdef DEBUG_TESTS\nWeights weight_values; // holds the weights\n#endif // DEBUG_TESTS\n//BoardGrid possibilityCount;\npublic:\n\nPriorityUnit TempPUnit;\n\nSudoku();\nSudoku (BoardGrid grid);\nvoid SetGrid(BoardGrid grid);\n// this is the basic operation involved in converting the sudoku puzzle into a different sequence.\nvoid SwitchNumbers(int Value1, int Value2); // interchanges the values without making\n// any 
mistakes by violating the fundamental rules\nbool ScreenPossibility(int pos_x, int pos_y); // screens the possibility of numbers that can fit.\nbool ScreenPossibilityL2(int pos_x, int pos_y);\nstatic bool CheckLegal(BoardGrid); // same as IsLegal(), but takes the board as the input\nbool IsLegal(); // checks if the current board is legal (or correctly follows the rules)\nstatic bool DeleteValue(int Value, int &field); // deletes the bit field (making it zero) which is at the position \"value\".\n// \"field\" is a bit-vice set, holding the possibilities the cell can hold.\nvoid DeleteCommonElements(int Value_set, int& field );\nstatic int BitValue(int Value);\nBoardGrid* RetGrid();\nBoardGrid* RetPoss();\nstatic bool SinglePossibilityElement(int possib);\nstatic int NoOfElements(int value);\nbool Solve();\n\nvoid GeneratePossibilityCount();\nvoid GenerateInversePossibilityCount();\nvoid SetPossibilityCount();\nvoid GenerateWeightValues(InvCount& inv, WeightQueue& Q, int pos_x, int pos_y);\nWeightQueue GenerateWeightValues();\nvoid reinitializepos();\n\nbool IsSolved();\nbool FullPossibility();\n\n};\n\n\nThese are the definitions of the generator functions. 
This shows the vital functions involved in the analysis process (the process to determine the priority of selecting blank squares):\n\n#include <iostream>\n#include \"Sudoku.h\"\n#include \"InverseCountSum.h\"\nint Sudoku::NoOfElements(int value)\n{\n\nint tcount = 0;\nif( (value & POS1) != 0) ++tcount;\nif( (value & POS2) != 0) ++tcount;\nif( (value & POS3) != 0) ++tcount;\nif( (value & POS4) != 0) ++tcount;\nif( (value & POS5) != 0) ++tcount;\nif( (value & POS6) != 0) ++tcount;\nif( (value & POS7) != 0) ++tcount;\nif( (value & POS8) != 0) ++tcount;\nif( (value & POS9) != 0) ++tcount;\nreturn tcount;\n}\n\nvoid Sudoku::GeneratePossibilityCount() // this is to be called only after calling the\n{ // screen possibility function for all (x,y) coordinates\nint i,j;\nfor(i=0; i<9; ++i)\nfor(j=0; j<9; ++j)\npossibilityCount[i][j] = NoOfElements(possibilities[i][j]);\n}\n\nvoid Sudoku::GenerateInversePossibilityCount() // this is to be called after the GeneratePossibilityCount() is called\n{\nint i,j;\nfor(i=0; i<9; ++i)\nfor(j=0; j<9; ++j)\n{\npossibilityCountI[i][j] = (float)(1/(float)possibilityCount[i][j]);\n//std::cout<<\":: \"<<possibilityCountI[i][j]<<\" \";\n}\n}\n\nvoid Sudoku::GenerateWeightValues(InvCount& inv, WeightQueue& Q, int pos_x, int pos_y)\n{\nGridLimits Lim;\nLim.SetLimits(pos_x, pos_y);\nTempPUnit.PriorityValue = inv.Reterive(Row, pos_x - 1) +\ninv.Reterive(Col, pos_y - 1) +\ninv.Reterive(Cell, Lim.GridNo - 1)+\n10*possibilityCountI[pos_y-1][pos_x-1];\nTempPUnit.x = pos_x -1;\nTempPUnit.y = pos_y -1;\nQ.push(TempPUnit);\n}\n\nWeightQueue Sudoku::GenerateWeightValues()\n{\nWeightQueue Q;\nint i,j;\nfor(i=1; i<=9; ++i)\nfor(j=1; j<=9; ++j)\n{\nif (Grid[i-1][j-1] == 0)\nGenerateWeightValues(PossibCount, Q, j, i);\n}\nreturn Q;\n}\n\n\nThe Solver class declaration:\n\nclass Solver\n{\nWeightQueue Q;\n\npublic:\nSudoku CurPuzzle;\nstatic Sudoku SudokuSolution;\nstatic bool IsSolutionSet;\nstatic int Count;\nstatic int GlobalPossibilities;\nstatic void 
initializeGP();\nvoid SetCurPuzzle(Sudoku P);\nbool RecrussiveSolve (); // this starts the main solution iteration process\n};\n\n\nThe following shows the main evaluation operation's source:\n\nbool Solver::RecrussiveSolve()\n{\nPriorityUnit Unit;\nSolver solve;\nint temp_pos, temp;\nint i;\nint size;\nCurPuzzle.reinitializepos();\nwhile (CurPuzzle.Solve());\n\nif (CurPuzzle.FullPossibility()) return false;\n\nCurPuzzle.GeneratePossibilityCount();\nCurPuzzle.GenerateInversePossibilityCount();\nCurPuzzle.SetPossibilityCount();\nQ = CurPuzzle.GenerateWeightValues();\nif (CurPuzzle.IsSolved())\n{\nif (CurPuzzle.IsLegal() )\n{\nSolver::SudokuSolution = CurPuzzle;\nSolver::IsSolutionSet = true;\nreturn true;\n}\nelse\nreturn false;\n}\n\nsolve.SetCurPuzzle(CurPuzzle);\n\nUnit = Q.top();\ntemp_pos = (*CurPuzzle.RetPoss())[Unit.y][Unit.x];\nsize = Sudoku::NoOfElements(temp_pos);\n\nfor (i = 0; i < size; ++i)\n{\n\ntemp = (*solve.CurPuzzle.RetGrid())[Unit.y][Unit.x] = Sudoku::BitValue(temp_pos);\nSudoku::DeleteValue(temp, temp_pos);\n\nif (solve.RecrussiveSolve())\nreturn true;\n\nsolve.SetCurPuzzle(CurPuzzle);\n}\n(*solve.CurPuzzle.RetGrid())[Unit.y][Unit.x] = 0;\nQ.pop();\nreturn false;\n}\n\n\nThe Solver class's bool RecrussiveSolve() function solves the entire puzzle. Understanding the functioning of this function will be enough to understand the working of the algorithm.\n\nThe following,\n\n CurPuzzle.GeneratePossibilityCount();\nCurPuzzle.GenerateInversePossibilityCount();\nCurPuzzle.SetPossibilityCount();\nQ = CurPuzzle.GenerateWeightValues();\n\n\ninitializes the evaluation for the current iteration. The statement while (CurPuzzle.Solve()); called before these is the program's attempt to try and solve the problem purely based on the rule-based procedure, trying to eliminate the set of numbers that can be entered into each square. 
The CurPuzzle.Solve() function iterates through each bit-field trying to discover the possible elements (values) that can be entered into the blank box. The set of values is then stored as bit fields. This function runs in a loop because an empty box getting filled by a value might provide enough information to figure out the value in a different box. Therefore, until such a possibility is ruled out, the function iterates.

The above process configures the bit-fields, which lets the GeneratePossibilityCount() function compute the number of possible elements that can be entered into each square. Next, the GenerateInversePossibilityCount() function, which relies on the result of GeneratePossibilityCount(), assigns a floating-point value to each square in the Sudoku board following the expression (1/num_of_possibilities_for_that_square). Later the SetPossibilityCount() function is called, followed by the GenerateWeightValues() function, which returns the list of all the empty cells along with their associated priority values.

The SetPossibilityCount() function sums the number of possibilities across each row, each column and each 3x3 cell (the smaller boxes, which must also contain values from 1 to 9) and stores them in an array. The GenerateWeightValues() function returns a priority queue, sorted based on the priority value derived from the results provided by the other three functions.

The final depth-first search is computed by recursively calling the RecrussiveSolve() function, belonging to a locally declared instance of the Solver class. The solve.SetCurPuzzle(CurPuzzle); sets the puzzle board for the next recursive call. Recurrence is brought about by calling the same RecrussiveSolve() belonging to a local instance of the same class, declared within the RecrussiveSolve() function.

In what areas does my code need improvement? And how can I improve the design of my code and algorithm?
I wrote this code intending it to be object-oriented. How object-oriented is it? (I.e., is there a better way to structure the same solution?) And is there a better algorithm to solve this problem more easily?

For complete code, please refer to this URL: https://github.com/sreramk/Sudoku-

• Is my explanation understandable? Or should I improve it for clarity? – Sreram Sep 21 '17 at 11:51
• Should I give the complete code? – Sreram Sep 21 '17 at 12:34

## Fix the formatting

The formatting and indenting of the code are inconsistent, which makes the code more difficult to read and understand. A consistent style helps the reader of the code.

## Fix the spelling errors

Yes, I know you said "please ignore spelling errors", but such errors make the code harder to read and understand, and give a poor impression of the quality of the code.

## Reconsider the typedefs

There are a plethora of typedefs in this code that tend to obfuscate rather than simplify understanding of the code. For instance, these typedefs are not really helping much:

typedef int BoardGrid;
typedef int PossibilitySet;


In order to actually use them, we still need to know that their underlying structure is that of a 9x9 grid of int, so all this accomplishes is making the reader of the code look up yet another indirection.

## Encapsulate class details

The Sudoku class exposes a great number of things that really should be private. The most egregious example is that RetGrid() and RetPoss() return references to private data members. This means that any function calling these can alter the internal state of the object. That's a serious design error and must be fixed. The other aspect is to consider which details are internal and which are essential to the interface. The SwitchNumbers routine is never called (and wouldn't work anyway, because all it does is swap the passed copies), so it should be eliminated from the interface.
Likewise with ScreenPossibilityL2, which isn't even defined.

I found the code hard to read and the design difficult to follow because there are many classes and functions that don't seem to be well thought out. For example, the Sudoku class contains this public member:

PriorityUnit TempPUnit;


Why is a data member that is apparently intended to be temporary part of the class's public interface? Why is the solver a separate class and not a method in the Sudoku class? Oh, wait, it's that, too. Very confusing.

## Prefer integers to floating point numbers

On many machines, integer mathematics is far faster than floating point. Since your weighted values are simply the mathematical inverse of the number of possibilities per square, why not just use that count directly (it is always a small integer) rather than a floating-point number? It would also simplify the code.

## Eliminate parallel structures

In the Sudoku class, there are six different 9x9 structures, but they are largely redundant. If you really want to hold all that data, a better way to do it would be to have a Square class and then create a single grid of those. It would simplify your code and make it much clearer.

• Thanks for your review! It helped a lot. I'll make the changes you have mentioned. And as for the SwitchNumbers function, I had written it for future use. I thought I could extend my Sudoku code to generate puzzles, so I thought I could write a function that could switch two values correctly, complying with the puzzle rules (but yeah, that function's implementation is wrong, and I have to remove it). This would let us randomly "shuffle" a reference puzzle to get a new one... – Sreram Sep 21 '17 at 16:03
https://electronics.stackexchange.com/questions/452431/necessity-of-beta-in-common-emitter-design | [
# Necessity of beta in common emitter design

I'm thinking about a common emitter circuit like this one:

Initial assumption: If the base current is very small compared to the emitter current, then the collector and emitter currents are practically equal. This is an equivalent assumption to saying $\alpha$ is so close to 1 it may as well be 1.

If you know all the resistor values, you can find the base voltage, take off 0.7 V, and have the emitter voltage. With that, you can find the current through $R_E$. Since with the above approximation the collector current is equal to this, you can find the voltage at the collector by multiplying that current by $R_C$. Am I correct in calling this the quiescent collector voltage?

The voltage gain can also be obtained, by using $r_{ej}$, which is 26 mV (thermal voltage) divided by the emitter current found above. $R_C/r_{ej}$ should give the voltage gain.

From what I can see, if you allow the initial assumption then both quiescent collector voltage and voltage gain can be found without the transistor's $\beta$. The assumption seems reasonable to me, so my main question is: Why is $\beta$ ever used in this kind of analysis? It appears to be, so either my initial assumption is dodgy or my working following it is.

A smaller question about the circuit above:
I have read various guides on designing such a circuit, and a few of them say you should design them so the voltage across $R_E$ is 1 V. Why? Why not 2 V, or 0.5 V, or anything else?

• The irrelevance of H bias is actual bottom line in preference to negative feedback CE design due to massive non-linearity of Vbe controlled collector current.
But vis-à-vis Ve, even 0.3 V is adequate relative to the Vbe/T(°C) ratio sensitivity, though it depends on the overall Vbe ac/dc ratio. – Tony Stewart Sunnyskyguy EE75 Aug 10 '19 at 21:46

In this particular bit of analysis, for normal small-signal transistors (i.e., $\beta \simeq 100$ or so), the value of $\beta$ really only enters into things in the choice of values of R1 and R2. Assuming zero base current, all you need to do is get the ratio of R1 and R2 correct. But a real base current will load the divider and pull the base voltage down. So you want to, first, take that into account, and second, not make R1 and R2 too large.
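To make the question's procedure concrete, here is a small numeric sketch. The supply voltage and resistor values below are invented for illustration (they are not from the question's schematic), and it assumes $\alpha \approx 1$ and a 0.7 V base-emitter drop, just as the question does:

```python
# Quiescent point and gain of a voltage-divider-biased common-emitter
# stage, assuming alpha ~ 1 (Ic ~ Ie) and Vbe = 0.7 V.
# All component values here are illustrative, not from the question.

VCC = 12.0            # supply voltage (V)
R1, R2 = 47e3, 10e3   # base divider resistors (ohms)
RC, RE = 4.7e3, 1e3   # collector and emitter resistors (ohms)

vb = VCC * R2 / (R1 + R2)   # divider output, ignoring base current
ve = vb - 0.7               # emitter voltage
ie = ve / RE                # emitter current, ~ collector current
vc = VCC - ie * RC          # quiescent collector voltage
r_ej = 0.026 / ie           # 26 mV thermal voltage over Ie
gain = RC / r_ej            # magnitude of the small-signal voltage gain

print(f"Vb = {vb:.2f} V, Ve = {ve:.2f} V, Ic = {ie * 1e3:.2f} mA")
print(f"quiescent Vc = {vc:.2f} V, |gain| = {gain:.0f}")
```

Note that nothing in this calculation uses $\beta$; as the answer explains, $\beta$ only matters when checking that the divider is stiff enough for the zero-base-current assumption to hold.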
https://pdfcoffee.com/maths-tricks-pdf-free.html | [
# Maths Tricks

Beauty of Maths! 1x8+1=9 12 x 8 + 2 = 98 123 x 8 + 3 = 987 1234 x 8 + 4 = 9876 12345 x 8 + 5 = 98765 123456 x 8 + 6 = 987654 1234567 x 8 + 7 = 9876543 12345678 x 8 + 8 = 98765432 123456789 x 8 + 9 = 987654321 1 x 9 + 2 = 11 12 x 9 + 3 = 111 123 x 9 + 4 = 1111 1234 x 9 + 5 = 11111 12345 x 9 + 6 = 111111 123456 x 9 + 7 = 1111111 1234567 x 9 + 8 = 11111111 12345678 x 9 + 9 = 111111111 123456789 x 9 + 10 = 1111111111 9 x 9 + 7 = 88 98 x 9 + 6 = 888 987 x 9 + 5 = 8888 9876 x 9 + 4 = 88888 98765 x 9 + 3 = 888888 987654 x 9 + 2 = 8888888 9876543 x 9 + 1 = 88888888 98765432 x 9 + 0 = 888888888 Brilliant, isn't it? And finally, take a look at this symmetry: 1x1=1 11 x 11 = 121 111 x 111 = 12321 1111 x 1111 = 1234321 11111 x 11111 = 123454321 111111 x 111111 = 12345654321 1111111 x 1111111 = 1234567654321 11111111 x 11111111 = 123456787654321 111111111 x 111111111 = 12345678987654321

1. The 11 Times Trick We all know the trick when multiplying by ten – add 0 to the end of the number, but did you know there is an equally easy trick for multiplying a two digit number by 11? This is it: Take the original number and imagine a space between the two digits (in this example we will use 52): 5_2 Now add the two numbers together and put them in the middle: 5_(5+2)_2 That is it – you have the answer: 572. If the numbers in the middle add up to a 2 digit number, just insert the second number and add 1 to the first: 9_(9+9)_9 (9+1)_8_9 10_8_9 1089 – It works every time.

2. Quick Square If you need to square a 2 digit number ending in 5, you can do so very easily with this trick. Multiply the first digit by itself + 1, and put 25 on the end. That is all! 25² = (2 x (2+1)) & 25 → 2 x 3 = 6 → 625

3. Multiply by 5 Most people memorize the 5 times tables very easily, but when you get into larger numbers it gets more complex – or does it? This trick is super easy. Take any number, then divide it by 2 (in other words, halve the number).
If the result is whole, add a 0 at the end. If it is not, ignore the remainder and add a 5 at the end. It works every time: 2682 x 5 = (2682 / 2) & 5 or 0 2682 / 2 = 1341 (whole number, so add 0) 13410 Let's try another: 5887 x 5 2943.5 (fractional, so ignore the remainder and add 5) 29435

4. Multiply by 9 This one is simple – to multiply any number between 1 and 9 by 9 hold both hands in front of your face – drop the finger that corresponds to the number you are multiplying (for example 9×3 – drop your third finger) – count the fingers before the dropped finger (in the case of 9×3 it is 2) then count the numbers after (in this case 7) – the answer is 27.

5. Multiply by 4 This is a very simple trick which may appear obvious to some, but to others it is not. The trick is to simply multiply by two, then multiply by two again: 58 x 4 = (58 x 2) + (58 x 2) = (116) + (116) = 232

6. Calculate a Tip If you need to leave a 15% tip, here is the easy way to do it. Work out 10% (divide the number by 10) – then add that number to half its value and you have your answer: 15% of $25 = (10% of 25) + ((10% of 25) / 2) $2.50 + $1.25 = $3.75

7. Tough Multiplication If you have a large number to multiply and one of the numbers is even, you can easily subdivide to get to the answer: 32 x 125 is the same as: 16 x 250 is the same as: 8 x 500 is the same as: 4 x 1000 = 4,000

8. Dividing by 5 Dividing a large number by five is actually very simple. All you do is multiply by 2 and move the decimal point: 195 / 5 Step 1: 195 * 2 = 390 Step 2: Move the decimal: 39.0 or just 39 2978 / 5 Step 1: 2978 * 2 = 5956 Step 2: 595.6

9. Subtracting from 1,000

To subtract a large number from 1,000 you can use this basic rule: subtract all but the last number from 9, then subtract the last number from 10: 1000 − 648 Step 1: subtract 6 from 9 = 3 Step 2: subtract 4 from 9 = 5 Step 3: subtract 8 from 10 = 2 Answer: 352

10.
Assorted Multiplication Rules
Multiply by 5: Multiply by 10 and divide by 2.
Multiply by 6: Sometimes multiplying by 3 and then 2 is easy.
Multiply by 9: Multiply by 10 and subtract the original number.
Multiply by 12: Multiply by 10 and add twice the original number.
Multiply by 13: Multiply by 3 and add 10 times the original number.
Multiply by 14: Multiply by 7 and then multiply by 2.
Multiply by 15: Multiply by 10 and add 5 times the original number.
Multiply by 16: You can double four times, if you want to. Or you can multiply by 8 and then by 2.
Multiply by 17: Multiply by 7 and add 10 times the original number.
Multiply by 18: Multiply by 20 and subtract twice the original number.
Multiply by 19: Multiply by 20 and subtract the original number.
Multiply by 24: Multiply by 8 and then multiply by 3.
Multiply by 27: Multiply by 30 and subtract 3 times the original number.
Multiply by 45: Multiply by 50 and subtract 5 times the original number.
Multiply by 90: Multiply by 9 (as above) and put a zero on the right.
Multiply by 98: Multiply by 100 and subtract twice the original number.
Multiply by 99: Multiply by 100 and subtract the original number.

Bonus: Percentages Yanni in comment 23 gave an excellent tip for working out percentages, so I have taken the liberty of duplicating it here:

Find 7% of 300. Sounds difficult? Percents: First of all you need to understand the word "Percent." The first part is PER, as in 10 tricks per listverse page. PER = FOR EACH. The second part of the word is CENT, as in 100. Like Century = 100 years. 100 CENTS in 1 dollar… etc. Ok… so PERCENT = For Each 100. So, it follows that 7 PERCENT of 100, is 7. (7 for each hundred, of only 1 hundred). 8% of 100 = 8. 35.73% of 100 = 35.73 But how is that useful?? Back to the 7% of 300 question. 7% of the first hundred is 7.
7% of the 2nd hundred is also 7, and yep, 7% of the 3rd hundred is also 7. So 7 + 7 + 7 = 21. If 8% of 100 is 8, it follows that 8% of 50 is half of 8, or 4. Break down every number that's asked into questions of 100; if the number is less than 100, move the decimal point accordingly. EXAMPLES: 8% of 200 = 8 + 8 = 16. 8% of 250 = 8 + 8 + 4 = 20. 8% of 25 = 2.0 (moving the decimal back). 15% of 300 = 15 + 15 + 15 = 45. 15% of 350 = 15 + 15 + 15 + 7.5 = 52.5. Also it's useful to know that you can always flip percents: 3% of 100 is the same as 100% of 3, and 35% of 8 is the same as 8% of 35.
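Most of these tricks are easy to check mechanically. Here is a short script that verifies the 11-times trick, the ending-in-5 square trick, and the multiply-by-5 trick against the worked examples above:

```python
# Quick mechanical checks of three of the mental-math tricks above.

def times_11(n):
    # two-digit n: insert the digit sum in the middle, carrying if needed
    a, b = divmod(n, 10)
    mid = a + b
    return (a + mid // 10) * 100 + (mid % 10) * 10 + b

def square_ending_5(n):
    # n ends in 5: leading digit(s) d -> d * (d + 1), then append 25
    d = n // 10
    return d * (d + 1) * 100 + 25

def times_5(n):
    # halve, then append 0 (even input) or 5 (odd input)
    return n * 10 // 2

print(times_11(52), times_11(99))        # 572 1089
print(square_ending_5(25), square_ending_5(85))  # 625 7225
print(times_5(2682), times_5(5887))      # 13410 29435
```

Each function reproduces the exact answers worked out in the tricks (572, 1089, 625, 13410, 29435).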
https://mathoverflow.net/questions/368796/deriving-integral-in-gaiotto-tommasiello-theory | [
# Deriving integral in Gaiotto-Tommasiello theory

I was looking at a paper by Takao Suyama on GT theory, and I couldn't figure out how he derived his formula (3.59): $$\frac{1}{\pi}\int_a^b dx\frac{1}{z-x}\frac{\sqrt{(z-a)(z-b)}}{\sqrt{|(x-a)(x-b)|}}\frac{\log(e^{-t_1}x)}{2}=\frac{1}{2}\log\left[\frac{e^{-t_1}}{2\sqrt{ab}+a+b}\left(z+\sqrt{ab}-\sqrt{(z-a)(z-b)}\right)^2\right],$$ where $$0 < a < b$$, $$[a,b]\subset\mathbb{R}$$, $$t_1\in\mathbb{R}$$, and $$z\in\mathbb{C}\setminus[a,b]$$. It's a physics paper, but my question is just how one would do the integral.

I tried expanding everything as a power series and then using the fact that the integral of $$x^n\log x$$ is known, but I couldn't figure out how to resum the resulting series, so I'm a bit confused about how one would solve this integral.

Okay, so I think I may have found the answer myself. The absolute value symbol is really just a trick: you can get rid of it by pulling out an $$i$$, and then you have $$\mathcal{I}:=\frac{1}{2\pi i}\int_a^b dx \frac{\log(x e^{-t_1})}{z-x}\frac{\sqrt{(z-a)(z-b)}}{\sqrt{(x-a)(x-b)}}$$ What you have to do is take a clockwise "dumbbell" contour $$\mathcal{C}$$ around the region $$[a,b]$$, such that we integrate over a segment $$[a+i0,b+i0]$$ from left to right and a segment $$[a-i0,b-i0]$$ from right to left. Since there is a branch cut along $$[a,b]$$, we have a sign flip as we cross the branch cut, and hence we have $$\begin{equation} 2\mathcal{I}= \frac{1}{2\pi i}\int_{\mathcal{C}}dx \frac{\log(x e^{-t_1})}{z-x}\frac{\sqrt{(z-a)(z-b)}}{\sqrt{(x-a)(x-b)}} \end{equation}$$ We now deform the contour to infinity, making note of the branch cut from the log and the pole at $$x=z$$.
Then we pick up a residue at $$z$$ $$\begin{equation} 2\pi i\,\text{Res}_z\left(\frac{1}{2\pi i}\frac{\log (xe^{-t_1})}{z-x}\frac{\sqrt{(z-a)(z-b)}}{\sqrt{(x-a)(x-b)}}\right)=\log (z e^{-t_1}) \end{equation}$$ There is also a contribution from integrating along the log branch cut that can easily be done. Summing all these together and doing a painful amount of simplification yields $$\begin{equation} \mathcal{I}=\frac{1}{2}\log\left[\frac{e^{-t_1}}{2\sqrt{ab}+a+b}\left(z+\sqrt{ab}-\sqrt{(z-a)(z-b)}\right)^2\right] \end{equation}$$ as desired.
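The closed form can be spot-checked numerically. The sketch below compares a midpoint-rule evaluation of the left-hand side with the right-hand side for one arbitrary choice of $$a$$, $$b$$, $$t_1$$ and a real $$z > b$$ (these values are my own, not from the paper; the midpoint rule is crude near the inverse-square-root endpoint singularities, so only a few digits of agreement should be expected):

```python
# Numerical spot-check of the closed form, for 0 < a < b and real z > b.
import math

a, b, t1, z = 1.0, 2.0, 0.3, 3.5   # arbitrary test values

def integrand(x):
    return (1.0 / math.pi) * (1.0 / (z - x)) \
        * math.sqrt((z - a) * (z - b)) / math.sqrt(abs((x - a) * (x - b))) \
        * 0.5 * math.log(math.exp(-t1) * x)

# midpoint rule never evaluates at the singular endpoints x = a, b
N = 200_000
h = (b - a) / N
lhs = sum(integrand(a + (k + 0.5) * h) for k in range(N)) * h

rhs = 0.5 * math.log(
    math.exp(-t1) / (2 * math.sqrt(a * b) + a + b)
    * (z + math.sqrt(a * b) - math.sqrt((z - a) * (z - b))) ** 2
)
print(lhs, rhs)   # the two values should agree to a few decimal places
```

A useful sanity limit: as $$z\to\infty$$ both sides reduce to the classical arcsine-measure average of $$\log x$$ on $$[a,b]$$, which equals $$2\log\frac{\sqrt a+\sqrt b}{2}$$ plus the $$-t_1/2$$ shift.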
https://www.rdocumentation.org/packages/ape/versions/3.0-2/topics/weight.taxo | [
# weight.taxo

From ape v3.0-2

##### Define Similarity Matrix

weight.taxo computes a matrix whose entries [i, j] are set to 1 if x[i] == x[j], 0 otherwise.

weight.taxo2 computes a matrix whose entries [i, j] are set to 1 if x[i] == x[j] AND y[i] != y[j], 0 otherwise.

The diagonal [i, i] is always set to 0.

The returned matrix can be used as a weight matrix in Moran.I. x and y may be vectors of factors.

See further details in vignette("MoranI").

Keywords: manip

##### Usage

weight.taxo(x)
weight.taxo2(x, y)

##### Value

a square numeric matrix.

##### See also

Moran.I, correlogram.formula
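For readers who don't use R, the behaviour described above is easy to mirror. Here is a rough Python equivalent (my own sketch, not part of the ape package):

```python
# Rough Python equivalent of ape's weight.taxo / weight.taxo2.
# The real functions are R code in the ape package; this sketch only
# mirrors the documented behaviour.

def weight_taxo(x):
    n = len(x)
    return [[1 if i != j and x[i] == x[j] else 0 for j in range(n)]
            for i in range(n)]

def weight_taxo2(x, y):
    n = len(x)
    return [[1 if i != j and x[i] == x[j] and y[i] != y[j] else 0
             for j in range(n)]
            for i in range(n)]

print(weight_taxo(["a", "a", "b"]))
# [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

As documented, the diagonal is always 0, and weight_taxo2 additionally requires the second vector to differ.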
http://tang-hotel.com/product_s1003002001/ | [
• Campus Walk (校园行) student shoe factory: fashionable Campus Walk student shoes, wholesale from Qingdao. Brands: Campus Walk (校园行), Yitabu (意踏步). Origin: Fengshan County (Fengcheng Town). Price: negotiable. Supplier: Qingdao Fukelai Group (gold member). Main products: student shoes, safety shoes, rubber-soled cloth shoes, leather shoes, sports shoes.

• Guangdong cold-cemented walking shoes at wholesale prices: fashionable walking shoes, wholesale from Linyi. Brand: Yifan Zuyun (逸梵足韵). Origin: Fengshan County (Fengcheng Town). Price: negotiable. Supplier: Yinan County Yilushun Shoe Factory (gold member). Main products: cloth shoes, casual shoes, monk shoes, sports shoes, walking shoes.

• Good-looking trendy shoes: fashionable trendy shoes, wholesale from Putian. Brand: Yuyan Liangpin (语妍良品). Origin: Luzhai County (Luzhai Town). Price: negotiable. Supplier: Yuyan Liangpin shoe and apparel shop, Chengxiang District, Putian (gold member). Main products: trendy shoes, basketball shoes, skate shoes, running shoes, sports shoes.

• Price: negotiable. Supplier: Shenzhen Qianhai Baihui Technology Development (gold member). Main products: anti-slip soles, anti-slip pads, EVA anti-slip soles, anti-slip sole patches, moulded EVA soles.

• Affordable leather shoes, recommended: leather shoe wholesale and distribution. Brands: Dazhiwang (大指王, 大趾王). Origin: Jinchengjiang District. Price: negotiable. Supplier: Chengdu Dazhiwang Shoes (gold member). Main products: leather shoes, women's shoes, men's shoes, travel shoes, skate shoes.

• 2019 Fujian Putian Nike AJ6 "six-time champion" sports shoes, first-hand supply. Brands: Nike, Adidas, Converse. Origin: Luzhai County (Luzhai Town). Price: negotiable. Supplier: Putian Lingdong Chaopin (gold member). Main products: sports shoes, sportswear, running shoes, basketball shoes, casual shoes.

• New Campus Walk sports shoes: Campus Walk student shoe supply, Qingdao Fukelai. Brands: Campus Walk (校园行), Yitabu (意踏步). Origin: Fengshan County (Fengcheng Town). Price: negotiable. Supplier: Qingdao Fukelai Group (gold member). Main products: student shoes, safety shoes, rubber-soled cloth shoes, leather shoes, sports shoes.

• Casual shoe franchising: fashionable men's casual shoes, wholesale from Linyi. Brand: Aifanfulai (爱返福来). Origin: Fengshan County (Fengcheng Town). Price: negotiable. Supplier: Yinan County Jinluchi Shoe Factory (gold member). Main products: breathable mesh shoes, casual linen shoes, cloth shoes for middle-aged and elderly customers, business cloth shoes, camouflage cloth-board shoes.

• New basketball shoes, recommended: Putian Jordan basketball shoes, free WeChat agency. Brands: Nike, Adidas, New Balance. Origin: Luzhai County (Luzhai Town). Price: negotiable. Supplier: Fujian Longxingwang (gold member). Main products: prayer-bead bracelets, rosewood craft ornaments, hand pieces, sports shoes, sportswear.
https://varsitythesis.com/case-problem-par-inc-par-inc-is-a-major-manufacturer-of-golf-equipment-management-believes-that-pars-market-share-could-be-increased-with/ | [
# Case Problem Par, Inc.

Par, Inc., is a major manufacturer of golf equipment. Management believes that Par's market share could be increased with the introduction of a cut-resistant, longer-lasting golf ball. Therefore, the research group at Par has been investigating a new golf ball coating designed to resist cuts and provide a more durable ball. The tests with the coating have been promising.

One of the researchers voiced concern about the effect of the new coating on driving distances. Par would like the new cut-resistant ball to offer driving distances comparable to those of the current-model golf ball. To compare the driving distances for the two balls, 40 balls of both the new and current models were subjected to distance tests. The testing was performed with a mechanical hitting machine so that any difference between the mean distances for the two models could be attributed to a difference in the two models. The results of the tests, with distances measured to the nearest yard, follow. These data are available in the file Golf.

Do:

In a managerial report, use the methods of hypothesis testing to

• Provide descriptive statistical summaries of the data for each model, written in complete sentences.
• Formulate and present the rationale for a hypothesis test that Par could use to compare the driving distances of the current and new golf balls. Clearly state the null and alternative hypothesis.
• Analyze the data to provide the hypothesis testing conclusion. What is the p-value for your test?
What is your recommendation for Par, Inc.?
• Discuss whether you see a need for larger sample sizes and more testing with the golf balls.

Discuss

Post by classmate 1

Hello everyone,

Here are my findings and managerial report after using the methods of hypothesis testing:

After reviewing the data for both current and new golf ball driving distance, a t-test was performed to determine if the current golf balls travel a longer distance than the new ones in a sample size of 40. Assuming that the data is normally distributed, the t-test gave a sample mean for the current golf ball of 270.28 and a sample mean for the new golf ball of 267.5. The difference between the means is not significant. The variance for the current golf ball is 76.61, and the variance for the new golf ball is 97.95. The p-value is 0.094. The standard deviation for the current golf ball is 8.75, and the standard deviation for the new golf ball is 9.9. The "t Critical one-tail" is 1.66, and the "t Critical two-tail" is 1.99.

The hypothesis test was needed to determine if there is enough evidence to support that the current golf balls travel a greater distance than the new ones proposed for Par, Inc (null hypothesis) or if, in fact, they are the same (alternative hypothesis).

The p-value is a random variable derived from the distribution and is a measure of evidence against the null hypothesis (Hung et al., 1997). In this case, the p-value is 0.094, which is greater than the 0.05 significance level, so one should fail to reject the null hypothesis (McLeod, 2019). There appears to be no evidence that the current golf balls travel a significantly greater distance than the new ones.
Given the existing data, there is no evidence to support that the alternative hypothesis is correct; one would need more data.

Based on my hypothesis testing conclusion, my recommendation for Par, Inc would be to use a larger sample of current and new golf balls. An increased sample size would decrease the standard error of the associated sampling distributions (Anderson et al., 2021).

References

Anderson, D. R., Sweeney, D. J., Williams, T. A., Camm, J. D., Cochran, J. J., Fry, M. J., & Ohlmann, J. W. (2021). Essentials of modern business statistics with Microsoft® Excel® (8th ed.). Cengage Learning.

Hung, H. M. J., O'Neill, R. T., Bauer, P., & Kohne, K. (1997). The behavior of the p-value when the alternative hypothesis is true. Biometrics, 53(1), 11–22. https://doi.org/10.2307/2533093

McLeod, S. (2019). What a p-value tells you about statistical significance. https://www.simplypsychology.org/p-value.html

Post by classmate 2

Wk2 Discussion

• Provide descriptive statistical summaries of the data for each model, written in complete sentences.
• From the data that was given, we compared two different types of golf balls to see if there is a difference between the current ones being used and the new durable cut-resistant ones. The data compares 40 drives for each of the golf balls. Comparing the statistics, the current golf ball had an average distance of 270.28 versus an average of 267.48 for the new golf balls, a drop of 1.04%. The highest drive distance was the same for both at 289, while the lows differed: 255 for the old balls versus 250 for the newer golf balls, a decrease of 1.96%. The median was similar to the average: 270 for the current balls and 265 for the newer ones.
On average, each drive with the new, more durable golf ball lost about 2.8 yards compared with the current ball. The p-value is 0.094, the t-critical one-tail value is 1.66, and the t-critical two-tail value is 1.99.
• Formulate and present the rationale for a hypothesis test that Par could use to compare the driving distances of the current and new golf balls. Clearly state the null and alternative hypothesis.
• Null hypothesis: Current golf balls that Par uses travel a greater distance than the more durable, cut-resistant golf balls.
• Alternate hypothesis: New golf balls travel the same distance as current ones.
• Analyze the data to provide the hypothesis testing conclusion. What is the p-value for your test?
• With the p-value (0.094) greater than the 0.05 significance level, the null hypothesis is not supported, which leans towards the alternate hypothesis: at the 5% level, the data do not show that the current balls travel farther.
• What is your recommendation for Par, Inc.? Discuss whether you see a need for larger sample sizes and more testing with the golf balls.
• With a test of this size showing almost identical results, a larger-scale test would give a more accurate account. Since the change in distance is so small, many variables could have been at play to skew these results: hitting order, golfer fatigue, and wind speed among them. If the sample were 100 balls, the results would be more reliable. They would have to test on days with almost identical conditions, hitting the balls in one order the first time and the opposite order the second time. This would eliminate a lot of the "what if" variables that affect the overall numbers (MythBusters logic).
• Reference:
• Anderson, D. R., Sweeney, D.
J., Williams, T. A., Camm, J. D., Cochran, J. J., Fry, M. J., & Ohlmann. J. W. (2021). Essentials of modern business statistics with Microsoft® Excel® (8th ed.). Cengage Learning
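For reference, the test both posts describe can be run in a few lines. The sketch below uses made-up distance samples (the actual 40-ball Golf data set is not reproduced in the thread), so the numbers will not match the posts' statistics:

```python
# Sketch of the two-sample (Welch) t-test discussed in the posts, with
# made-up driving-distance samples; the real Golf data is not shown here.
from statistics import mean, stdev
import math

current = [264, 261, 267, 272, 258, 283, 258, 266, 259, 270]
new     = [277, 269, 263, 266, 262, 251, 262, 289, 286, 264]

n1, n2 = len(current), len(new)
m1, m2 = mean(current), mean(new)
v1, v2 = stdev(current) ** 2, stdev(new) ** 2   # sample variances

# Welch t statistic for H0: mu_current - mu_new = 0
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
print(f"mean(current) = {m1:.2f}, mean(new) = {m2:.2f}, t = {t:.3f}")
```

With real data one would then compare the statistic against the t distribution (or use a library routine such as SciPy's independent-samples t-test) to obtain the p-value the posts quote.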
https://www.teachoo.com/9203/2479/Ex-10.1--12/category/Ex-10.1/
Chapter 10 Class 6 Mensuration, Serial order wise, Ex 10.1

Transcript

Ex 10.1, 12: Two sides of a triangle are 12 cm and 14 cm. The perimeter of the triangle is 36 cm. What is its third side?

Here, AB = 12 cm, BC = 14 cm, CA = ?
Given, Perimeter of ∆ABC = 36
Sum of all sides = 36
AB + BC + CA = 36
12 + 14 + CA = 36
CA + 26 = 36
CA = 36 − 26
CA = 10 cm
∴ The third side is 10 cm.
https://pipepipestr.com/maths-287
# Algebra math solver with steps

## The Best Algebra math solver with steps

Math can be a challenging subject for many students. But there is help available in the form of an algebra math solver with steps. This topic will explore the solution to this problem. If we encounter high-order equations in an exam, our goal is to decompose them into low-order equations to solve them. The low-order equations mentioned here are linear and quadratic equations. When designing questions, the coefficients are usually made special. Just as the quadratic equation has a discriminant that helps us judge its roots, the solving process above yields discriminants for the cubic and quartic equations that judge their roots. However, these contents have nothing to do with the theme of this article, so we will not elaborate on them.

For example, in the study of architecture in ancient Greece, the best ratio of the length and width of a rectangle, the golden ratio, was found, and it is still applied today. Archimedes proved that for a fixed perimeter, the circle encloses the largest area; this discovery influenced the architectural style of ancient European castles, which almost all adopted circular plans. Calculus in the 17th century posed the problem of finding the extreme values of functions. Euler tried to express the following belief in mathematical language: everything has relative advantages, and the purpose of research is to find such advantages and disadvantages. In many cases, people do not need to compare absolute advantages; comparative advantages are enough, as in the college entrance examination and graded scoring systems.

Therefore, when faced with tunnel construction under complex geological conditions, the accuracy of surrounding rock classification will be limited. The Ganzhi calendar, a mathematical model used to deduce the theory of Qi in traditional Chinese medicine, is a very advanced and scientific astronomical calendar with Chinese characteristics. The deduction tool of the theory of luck is the Ganzhi calendar, whose interactions reflect the influence of astronomical factors on the earth.

Another way to solve this problem is to use a numerical solver. We need to find a solution that maximizes the log-likelihood function. When using a numerical solver, we do not need to calculate the derivative and manually solve for the parameters that maximize the log-likelihood; we just implement the function we want to maximize and pass it to the numerical solver.
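As a toy illustration of that last point, the snippet below maximizes an exponential log-likelihood by handing it to a simple numerical maximizer (a ternary search) instead of deriving the estimator analytically. The data are invented, and the closed-form answer n / sum(xs) is used only as a check.

```python
import math

def exp_loglik(lam, xs):
    """Log-likelihood of an exponential(rate=lam) sample."""
    return len(xs) * math.log(lam) - lam * sum(xs)

def maximize(f, lo, hi, iters=200):
    """Ternary search for the maximum of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

xs = [0.5, 1.5, 2.0, 4.0]  # made-up data
lam_hat = maximize(lambda lam: exp_loglik(lam, xs), 1e-3, 10.0)
print(round(lam_hat, 4))   # 0.5, matching the closed form n / sum(xs)
```

The solver only ever calls `exp_loglik`; no derivative is supplied.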
"We could get a detail operation to get the sum. This app is certainly amazing and really helps with detail and a variety of method. And you can shoot the summation with the app camera too. This app is super-efficient and literally AMAZING. SALUTE"

### Thalia Bryant

"Perfect for people with problems with math such as mine, but I think that you should be able to see why in some problems by watching a video, good job!! Dis app has helped me to solve more complex and complicate math question and has helped me improve in my grade"

### Emma Parker
https://developer.habana.ai/blog/training-llama-and-bloom-13-billion-parameter-llms-with-3d-parallelism-on-habana-gaudi2/
# Training Llama and Bloom 13 Billion Parameter LLMs with 3D Parallelism on Habana® Gaudi2®

## One of the main challenges in training Large Language Models (LLMs) is that they are often too large to fit on a single node, or, even if they fit, training may be too slow. To address this issue, their training can be parallelized across multiple Gaudi accelerators (HPUs).

This means parallelizing the data, the model, or both to distribute computation across several devices. Different forms of parallelism exist, and they can be combined to facilitate efficient training of LLMs. DeepSpeed is a popular deep learning software library which facilitates compute- and memory-efficient training of large language models. Megatron-LM is a highly optimized and efficient library for training large language models using model parallelism. DeepSpeed's training engine provides hybrid data and pipeline parallelism, and this can be further combined with model parallelism such as Megatron-LM. Thus, data parallelism, pipeline parallelism and tensor parallelism (horizontal model parallelism) can be combined to achieve 3D parallelism for training LLMs. These models can be trained on Habana Gaudi2 accelerators with Habana's DeepSpeed fork, Megatron-LM and Habana's SynapseAI® software. In this article, we will talk about what 3D parallelism is and how it is useful for training LLMs. We will provide a brief overview of how one can train Llama-13B and BLOOM-13B LLMs using 3D parallelism on Habana Gaudi2.
More details on DeepSpeed support in Habana SynapseAI software can be found in the Habana DeepSpeed User Guide. Now, let us dive into the different forms of parallelism and how we can use DeepSpeed, Megatron-LM and Habana SynapseAI software to train LLMs with these parallelism modes.

As the size of LLMs keeps growing, how can we efficiently train such large models? Of course, the answer is "parallelization". There are different parallelization techniques that can be applied to make training compute- and memory-efficient. Parallelism can be obtained by splitting either the data or the model across multiple devices.

At a very high level, the two major types of parallelism are data parallelism and model parallelism. In data-parallel training, each data-parallel process/worker has a copy of the complete model, but the dataset is split into Nd parts, where Nd is the number of data-parallel processes. If you have Nd devices, you split a mini-batch into Nd parts, one for each of them. You then feed the respective part through the model on each device and obtain gradients for each split of the mini-batch. Once all the gradients have been collected (this requires synchronization), you average them and use the average to update the model parameters, then move on to the next iteration of training. The degree of data parallelism is the number of splits, denoted by Nd.

### Tensor Parallelism and Pipeline Parallelism

With model parallelism, instead of partitioning the data, we divide the model into Nm parts, typically where Nm is the number of devices. Model parallelism requires considerable communication overhead. Hence it is effective inside a single node but may have scaling issues across nodes due to communication overhead. It is also complex to implement.
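Before turning to the model-parallel variants, the data-parallel gradient averaging described above can be checked in miniature: for a loss defined as a mean over examples, averaging equal-sized per-shard gradients reproduces the full-batch gradient. The one-parameter model and data here are invented for illustration.

```python
def grad_mse(w, batch):
    """Gradient of mean((w*x - y)^2) with respect to the scalar weight w."""
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]  # made-up (x, y) pairs
w = 0.5

full = grad_mse(w, data)

# Data parallelism: split the mini-batch into equal shards, one per worker,
# compute gradients independently, then all-reduce (average) them.
shards = [data[:2], data[2:]]
averaged = sum(grad_mse(w, s) for s in shards) / len(shards)

print(abs(full - averaged) < 1e-12)  # True: same update as single-device training
```

This is why each data-parallel worker can hold a full model copy yet all copies stay in sync after every step.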
A large language model consists of multiple layers, with each layer operating on tensors. In model parallelism, we can choose to split the model within each layer, with each model layer being split across multiple devices. This is intra-layer model parallelism, also known as tensor parallelism. In tensor parallelism, we shard the tensors across multiple devices so that each layer is computed in parallel across multiple devices.

In contrast to tensor parallelism, we can choose to split a model at one or more layer boundaries, as in the case of inter-layer model parallelism. This is also known as pipeline parallelism. Pipeline parallelism was first proposed in the GPipe paper. It is intuitively very similar to the way a factory assembly line operates. In this case, a model is split into multiple stages (a stage can consist of one or more model layers) and each stage is assigned to a separate Gaudi device. The output of one stage is fed as input to the next stage. As a trivial example, consider a Multi-Layer Perceptron (MLP) model with 4 layers, assigning one layer to each Gaudi device. However, this naive splitting results in device under-utilization, since only one device is active at a time, as shown in the figure below.

Figure 1 above represents a model with 4 layers placed on 4 different devices (vertical axis). The horizontal axis represents training this model through time, demonstrating that only 1 device is utilized at a time. To alleviate this problem, pipeline parallelism splits the input mini-batch into multiple micro-batches and pipelines the execution of these micro-batches across multiple devices. This is outlined in the figure below:

Figure 2 above represents a model with 4 layers placed on 4 different devices (vertical axis). The horizontal axis represents training this model through time, demonstrating that the devices are utilized more efficiently.
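The efficiency gain from micro-batching can be quantified. In a GPipe-style schedule with p pipeline stages and m micro-batches, the idle "bubble" fraction is (p − 1)/(m + p − 1); this formula comes from the pipeline-parallelism literature (for example, the Megatron-LM papers), not from this post.

```python
def bubble_fraction(stages, micro_batches):
    """Idle fraction of a GPipe-style pipeline schedule."""
    return (stages - 1) / (micro_batches + stages - 1)

# 4 stages, 1 micro-batch: the naive schedule of Figure 1 is 75% idle.
print(bubble_fraction(4, 1))            # 0.75
# 4 stages, 8 micro-batches: micro-batching shrinks the bubble to ~27%.
print(round(bubble_fraction(4, 8), 2))  # 0.27
```

Increasing the number of micro-batches drives the bubble toward zero, at the cost of smaller per-device batches.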
However, there still exists a bubble (as demonstrated in the figure) where certain devices are not utilized. Pipeline parallelism, while being simple to understand and implement, can have these computation inefficiencies due to device under-utilization. Also, since the output of the previous stage needs to be fed to the next stage, pipeline parallelism can incur communication overhead if this communication happens across nodes rather than between devices on the same node.

While pipeline parallelism splits the model at the boundary of layers, tensor parallelism splits the model intra-layer by sharding the tensors at each layer. Tensor parallelism was described in the Megatron paper and is quite complex to implement. However, it is more efficient than pipeline parallelism. As a trivial example, consider a 2-layer MLP model with the parameters of layer 1 represented by matrix A and those of layer 2 by matrix B, with input data batch X and MLP output Z. Ordinarily we would compute `Y = f(XA)`, where f is the activation function, and the output of the MLP would be `Z = YB`. With tensor parallelism, we can split both matrices into 2 pieces and place them on two devices as shown below in Figure 3, where matrix A is split into two equal parts column-wise, and matrix B is split into two equal parts row-wise.

We can rewrite the above computation using the sharded tensors as below:

`Yi = f(XAi) for i ∈ {1, 2}`
`Zi = YiBi for i ∈ {1, 2}`
`Z = Z1 + Z2`

While the above equations describe the forward pass with tensor parallelism, the backward-pass equations can also be rewritten to use the sharded tensors.
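The sharded forward pass above can be checked numerically. Below, A is split column-wise and B row-wise across two "devices"; because the activation f is applied elementwise, each column block can be computed independently and the partial outputs summed, reproducing the unsharded result. NumPy stands in for the actual device placement.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))   # input batch
A = rng.standard_normal((6, 8))   # layer-1 weights
B = rng.standard_normal((8, 6))   # layer-2 weights
f = lambda t: np.maximum(t, 0)    # elementwise activation (ReLU)

# Unsharded reference: Z = f(XA) B
Z_ref = f(X @ A) @ B

# Tensor parallelism: split A column-wise and B row-wise across two devices.
A1, A2 = A[:, :4], A[:, 4:]
B1, B2 = B[:4, :], B[4:, :]
Y1, Y2 = f(X @ A1), f(X @ A2)     # each device computes its own shard
Z = Y1 @ B1 + Y2 @ B2             # the all-reduce sums the partial results

print(np.allclose(Z, Z_ref))      # True
```

Only the final sum requires communication; everything before it runs independently on each device.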
A detailed description of pipeline parallelism and tensor parallelism can be found in the references.

### What is 3D Parallelism?

As discussed above, we have three kinds of parallelism, namely data parallelism, tensor parallelism (often referred to simply as intra-layer model parallelism) and pipeline parallelism (often referred to as inter-layer model parallelism). DeepSpeed and Megatron-LM are libraries that enable efficient training of LLMs through the following:

• Memory efficiency: The model is split using pipeline parallelism first, and then each pipelined stage is further split using tensor parallelism. This 2D combination simultaneously reduces the memory consumed by the model, optimizer, and activations. However, extreme partitioning can lead to communication overhead, which will impact compute efficiency.
• Compute efficiency: To scale beyond tensor- and pipeline-parallel approaches, data parallelism is leveraged.
• Topology-aware 3D mapping: The three forms of parallelism are carefully combined to optimize for intra-node and inter-node communication overheads.

A detailed description of 3D parallelism can be found in [6,7,8]. The following figure demonstrates how 3D parallelism can be leveraged using DeepSpeed and Megatron-LM techniques. This example shows how 32 workers are mapped across 8 nodes, with each node hosting 4 Gaudi devices. Layers of the neural network are divided among four pipeline stages. Layers within each pipeline stage are further partitioned among four model-parallel workers. Lastly, each pipeline is replicated across two data-parallel instances.
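The 32-worker example can be made concrete with a toy mapping from a global worker rank to its (tensor, pipeline, data) coordinates. The axis ordering below (tensor parallel innermost) is one common convention, not necessarily DeepSpeed's exact layout.

```python
def rank_to_coords(rank, tp=4, pp=4, dp=2):
    """Map a global rank to (tp_rank, pp_stage, dp_replica), TP innermost."""
    tp_rank = rank % tp
    pp_stage = (rank // tp) % pp
    dp_replica = rank // (tp * pp)
    return tp_rank, pp_stage, dp_replica

coords = [rank_to_coords(r) for r in range(32)]
# 4 x 4 x 2 = 32 distinct coordinates: every worker has a unique role.
print(len(set(coords)))  # 32
```

Keeping tensor parallelism innermost places the most communication-hungry groups on devices within the same node, which is the "topology aware" part of the mapping.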
Figure 4: Example 3D parallelism in DeepSpeed with 8 nodes, each node with 4 devices (taken from [1]).

### Training Llama-13B on Habana Gaudi2 with 3D Parallelism

Now that we have understood the 3D parallelism mechanism, let us briefly turn our attention to how we can train an LLM using this feature. We use Llama-13B as the example LLM. Llama is a family of Large Language Models based on the transformer architecture and trained on over one trillion tokens, with various enhancements such as Rotary Position Embeddings, the SwiGLU activation function, pre-normalization and use of the AdamW optimizer. Full technical specifications of the Llama model can be found in the Llama paper.

We trained a Llama-13B model using Habana SynapseAI® software version 1.10.0 with PyTorch 2.0.1 and the DeepSpeed 0.7.7 software library, with our training implementation based on https://github.com/microsoft/Megatron-DeepSpeed. Detailed steps for training Llama-13B and large models of similar sizes can be found here. The command for training is given below:

`HL_HOSTSFILE=scripts/hostsfile HL_NUM_NODES=8 HL_PP=2 HL_TP=4 HL_DP=8 scripts/run_llama13b.sh`

HL_NUM_NODES specifies the number of nodes involved in the run, HL_PP sets the pipeline parallelism factor, HL_TP sets the tensor parallelism factor and HL_DP is the data parallelism factor. HL_HOSTSFILE contains the IP addresses of the respective HPU nodes.
The run script for training sets up the important variables as shown below:

`NUM_NODES=${HL_NUM_NODES}`
`DP=${HL_DP}`
`TP=${HL_TP}`
`PP=${HL_PP}`

With 64 Gaudi2 devices and BF16 precision, we were able to achieve a throughput of 55.48 sentences per second in training the Llama-13B model with SynapseAI 1.11.

### Training Bloom-13B on Habana Gaudi2 with 3D Parallelism

While we discussed 3D-parallelism-enabled training for the Llama-13B model, we can use the same setup to train other large models, including the Bloom-13B model, as shown here.

For multi-card configurations, for example to run Bloom on 32 Gaudi2 devices with BF16 precision, the run command would be as given below, with HL_HOSTSFILE containing the IP addresses of the respective HPU nodes:

`HL_HOSTSFILE=scripts/hostsfile HL_NUM_NODES=4 HL_PP=2 HL_TP=4 HL_DP=4 scripts/run_bloom13b.sh`

For running training with 64 devices, we modify this command by changing HL_NUM_NODES and HL_DP as shown below:

`HL_HOSTSFILE=scripts/hostsfile HL_NUM_NODES=8 HL_PP=2 HL_TP=4 HL_DP=8 scripts/run_bloom13b.sh`

With 128 devices and BF16 precision, the throughput was 114.1 samples/sec. This corresponds to 15.6 days to complete the training run to convergence. The figure below shows the training loss (lm-loss) plotted against the number of training iterations. The blue curve is the training loss measured on HPU (with BF16), whereas the orange curve is the training loss from the BigScience Bloom reference implementation, which used FP16.

If you want to train a large model using Megatron-DeepSpeed, but the model you want is not included in the implementation, you can port it to the Megatron-DeepSpeed framework. For details on porting your own model to the Megatron-DeepSpeed framework, please refer to this document.

Get started on your Generative AI journey on the Habana Gaudi2 platform today.
If you would like to get access to Gaudi2 for your LLM training needs, sign up on the Intel Developer Cloud following the instructions here or contact Supermicro regarding Gaudi2 Server infrastructure.\n\nHappy & Speedy Model Training with Habana Gaudi2, SynapseAI and DeepSpeed!\n\nReferences"
https://metanumbers.com/124306
# 124306 (number)

124,306 (one hundred twenty-four thousand three hundred six) is an even six-digit composite number following 124305 and preceding 124307. In scientific notation, it is written as 1.24306 × 10^5. The sum of its digits is 16. It has a total of 4 prime factors and 16 positive divisors. There are 49,104 positive integers (up to 124306) that are relatively prime to 124306.

## Basic properties

- Is Prime? No
- Number parity: Even
- Number length: 6
- Sum of Digits: 16
- Digital Root: 7

## Name

- Short name: 124 thousand 306
- Full name: one hundred twenty-four thousand three hundred six

## Notation

- Scientific notation: 1.24306 × 10^5
- Engineering notation: 124.306 × 10^3

## Prime Factorization of 124306

Prime factorization: 2 × 7 × 13 × 683

- ω(n) = 4: total number of distinct prime factors
- Ω(n) = 4: total number of prime factors
- rad(n) = 124306: product of the distinct prime factors
- λ(n) = 1: parity of Ω(n), such that λ(n) = (−1)^Ω(n)
- μ(n) = 1: Möbius function (1 because n is square-free with an even number of prime factors)
- Λ(n) = 0: von Mangoldt function (log p if n is a prime power p^k, else 0)

The prime factorization of 124,306 is 2 × 7 × 13 × 683. Since it has a total of 4 prime factors, 124,306 is a composite number.

## Divisors of 124306

124,306 has 16 divisors: 8 even and 8 odd (of the odd divisors, 4 are ≡ 1 (mod 4) and 4 are ≡ 3 (mod 4)).

- τ(n) = 16: total number of positive divisors of n
- σ(n) = 229824: sum of all positive divisors of n
- s(n) = 105518: sum of the proper positive divisors of n
- A(n) = 14364: arithmetic mean of the divisors, σ(n)/τ(n)
- G(n) ≈ 352.571: geometric mean of the divisors (the τ(n)-th root of their product)
- H(n) ≈ 8.654: harmonic mean of the divisors

The number 124,306 can be divided by 16 positive divisors (out of which 8 are even, and 8 are odd). The sum of these divisors (counting 124,306) is 229,824, and the average is 14,364.

## Other Arithmetic Functions (n = 124306)

- φ(n) = 49104: Euler totient, the number of positive integers not greater than n that are coprime to n
- λ(n) = 4092: Carmichael lambda, the smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n
- π(n) ≈ 11645: total number of primes less than or equal to n
- r2(n) = 0: the number of ways n can be represented as the sum of 2 squares

There are 49,104 positive integers (less than 124,306) that are coprime with 124,306. And there are approximately 11,645 prime numbers less than or equal to 124,306.

## Divisibility of 124306

| m       | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---------|---|---|---|---|---|---|---|---|
| n mod m | 0 | 1 | 2 | 1 | 4 | 0 | 2 | 7 |

The number 124,306 is divisible by 2 and 7.

- Arithmetic
- Deficient
- Polite
- Square Free

## Base conversion (124306)

| Base | System     | Value             |
|------|------------|-------------------|
| 2    | Binary     | 11110010110010010 |
| 3    | Ternary    | 20022111221       |
| 4    | Quaternary | 132112102         |
| 5    | Quinary    | 12434211          |
| 6    | Senary     | 2355254           |
| 8    | Octal      | 362622            |
| 10   | Decimal    | 124306            |
| 12   | Duodecimal | 5bb2a             |
| 20   | Vigesimal  | faf6              |
| 36   | Base36     | 2nwy              |

## Basic calculations (n = 124306)

- Multiplication (n×2 … n×5): 248612, 372918, 497224, 621530
- Division (n÷2 … n÷5): 62153, 41435.3, 31076.5, 24861.2
- Exponentiation (n^2 … n^5): 15451981636, 1920774029244616, 238763736479281236496, 29679765026793533383871776
- Roots (square, cube, 4th, 5th): 352.571, 49.9073, 18.7769, 10.4448

## 124306 as geometric shapes

- Circle (radius n): diameter 248612, circumference ≈ 781038, area ≈ 4.85438 × 10^10
- Sphere (radius n): volume ≈ 8.04572 × 10^15, surface area ≈ 1.94175 × 10^11, circumference ≈ 781038
- Square (side n): perimeter 497224, area ≈ 1.5452 × 10^10, diagonal ≈ 175795
- Cube (edge n): surface area ≈ 9.27119 × 10^10, volume ≈ 1.92077 × 10^15, space diagonal ≈ 215304
- Equilateral triangle (side n): perimeter 372918, area ≈ 6.6909 × 10^9, height ≈ 107652
- Triangular pyramid (edge n): surface area ≈ 2.67636 × 10^10, volume ≈ 2.26365 × 10^14, height ≈ 101495
https://stanselmscanterbury.org.uk/Curriculum/A-Level/
(Figure: Sine waves superimposed to create a square wave)

# A Level

"Go down deep enough into anything and you will find mathematics"

Dean Schlicter
We start by deepening the knowledge of the algebraic concepts learned previously: learning to rationalise the denominator; proving that completing the square and the quadratic formula are one and the same; expanding our knowledge of polynomials to cubics and quartics; seeing Euclidean geometry and Cartesian graphs merge.

Proof becomes a topic in its own right as we look at different methods: by exhaustion, by deduction, and by contradiction. We look at Pascal's triangle and how we can generalise with factorial notation, leading to the binomial expansion. We study arithmetic and geometric sequences and learn of Gauss' schoolboy trick for summing.

Our study of trigonometry introduces radian measure and calculation, as well as trigonometric identities and proofs of the double angle formulae. Work on exponential relationships is extended by the learning of Napier's logarithms.

The completely new topic of calculus is introduced as we learn of Newton and Leibniz's concurrent discoveries, allowing us to differentiate and integrate. We expand our repertoire of techniques to cover the differentiation of almost all functions.

We also look at numerical methods such as the trapezium rule and the Newton-Raphson method. We change from Cartesian to parametric form.

The applied area of mechanics puts context to the pure side of the course, allowing for calculations between displacement, velocity and acceleration, bringing in calculus and applying vector notation. We study Newton's laws of forces and consider projectiles and moments.

In Statistics, we look at the whole data collection cycle: sampling, calculation, representation and analysis.

In our study of probability, we build on prior knowledge to start looking at particular distributions: the binomial and Gauss' Normal distribution. This leads to hypothesis testing to aid our analysis of the statistics by understanding the significance of results.

30/07/2021
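The Newton-Raphson method mentioned above can be sketched in a few lines; the example root and starting point are chosen only for illustration.

```python
def newton_raphson(f, fprime, x0, steps=20):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Root of x^2 - 2 starting from x0 = 1: converges to sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(round(root, 6))  # 1.414214
```

Each step replaces the curve with its tangent line and jumps to where that line crosses zero, which is why convergence is so fast near a simple root.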
https://it.mathworks.com/matlabcentral/cody/problems/43213-create-tangent-function-out-of-sine-function-only/solutions/1023484
# Cody

## Problem 43213. Create tangent function out of sine function only

Solution 1023484, submitted on 19 Oct 2016 by SirSteve26. This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.

### Test Suite

Test 1: Pass
`x = 0; y_correct = 0; assert(abs(your_fcn_name(x)-y_correct)<0.01)`
Output: `y = 0`, `ans = 2`

Test 2: Pass
`x = pi/4; y_correct = 1; assert(abs(your_fcn_name(x)-y_correct)<0.01)`
Output: `y = 1`, `ans = 3`

Test 3: Pass
`x = -pi/4; y_correct = -1; assert(abs(your_fcn_name(x)-y_correct)<0.01)`
Output: `y = -1.0000`
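One way to pass tests like these is the co-function identity cos x = sin(π/2 − x), which gives tan x = sin x / sin(π/2 − x) using sine only. A Python sketch of the idea (the Cody solution itself would be written in MATLAB):

```python
import math

def tan_from_sine(x):
    """tan(x) built from sine only, via cos(x) = sin(pi/2 - x)."""
    return math.sin(x) / math.sin(math.pi / 2 - x)

for x in (0.0, math.pi / 4, -math.pi / 4):
    print(round(tan_from_sine(x), 4))  # 0.0, then 1.0, then -1.0
```

These three inputs mirror the problem's test suite, which checks tan at 0, π/4 and −π/4 to within 0.01.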
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50697345,"math_prob":0.98572254,"size":482,"snap":"2020-24-2020-29","text_gpt3_token_len":178,"char_repetition_ratio":0.15690376,"word_repetition_ratio":0.0,"special_character_ratio":0.41493776,"punctuation_ratio":0.13,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868038,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T04:40:46Z\",\"WARC-Record-ID\":\"<urn:uuid:9441e6a4-200f-4559-ad96-bdba3c7db298>\",\"Content-Length\":\"73673\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bbc59569-3bd5-4f2a-852a-a4ec5a76f18f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f5005b76-ba8e-4625-a843-f92cbe23e693>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://it.mathworks.com/matlabcentral/cody/problems/43213-create-tangent-function-out-of-sine-function-only/solutions/1023484\",\"WARC-Payload-Digest\":\"sha1:TTXILQU56LU2ZRGAGXO5HANHAIHRE2QZ\",\"WARC-Block-Digest\":\"sha1:PRV7UAFRHUXIKR3R5IX6JD6PRRTLNL3I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347428990.62_warc_CC-MAIN-20200603015534-20200603045534-00511.warc.gz\"}"} |
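The Cody problem above can be attacked with the identity cos(x) = sin(x + π/2), which gives tan(x) = sin(x)/sin(x + π/2). A Python sketch of that approach (an illustration only, not the locked MATLAB solution):

```python
import math

def tan_from_sin(x):
    # cos(x) = sin(x + pi/2), so tan(x) = sin(x) / sin(x + pi/2)
    return math.sin(x) / math.sin(x + math.pi / 2)

print(tan_from_sin(math.pi / 4))  # approximately 1
```

The sketch passes the same tolerance (< 0.01) used by the problem's test suite.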
https://le.ac.uk/modules/2022/md7442 | [
"# Statistical Modelling (Full-time)\n\nModule code: MD7442\n\nThis module introduces the theory and application of Linear and Generalised Linear Models. The module covers all stages in the modelling process, from selecting an initial model, through fitting and model checking, to interpretation and communication of the results; at each stage the necessary theory is developed.\n\nIn this module we aim to provide students with the tools to answer the following questions:\n\n• How to select an appropriate model given data from a clinical study?\n• How to assess whether a model fits data well?\n• How to interpret the results of the statistical modelling?\n\n## Linear Models\n\nThis module will provide a review of basic regression techniques in the context of simple linear regression, introducing the mathematical formulation and software implementations for fitting simple regression models. The module then goes on to include the fundamentals of defining a purpose for a statistical model and also introduces the concepts of model building and model selection. The material also covers the inclusion of different types of covariate data in statistical models and introduces the ideas of statistical interaction and capturing non-linear effects of continuous covariates. We will conclude with further discussion of checking the assumptions of statistical models and talk about practical issues of fitting models in the context of a real-life application.\n\n## Generalised Linear Models\n\nWe will introduce the theory of Generalised Linear Models (GLMs) in terms of the exponential family of distributions and discuss special cases of GLMs, such as normal, Poisson or binomial regression. This will cover identifying elements of GLMs, including the canonical and dispersion parameters and the mean and variance, and defining the linear predictor, offset and link functions. 
Selection and checking of a model for a given clinical problem will be discussed in lectures followed by fitting models and running checks for a range of examples using software in computer practical classes. Further extensions will include log-linear models for multinomial distributions, over-dispersion, quasi-likelihood."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86373603,"math_prob":0.87411565,"size":2176,"snap":"2022-27-2022-33","text_gpt3_token_len":381,"char_repetition_ratio":0.12983425,"word_repetition_ratio":0.0,"special_character_ratio":0.1672794,"punctuation_ratio":0.06552707,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99306786,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T13:16:40Z\",\"WARC-Record-ID\":\"<urn:uuid:12e555c6-a3d6-4e13-9dbb-f21f41ee2d63>\",\"Content-Length\":\"179818\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c42294ae-9a9c-44af-9994-a0af1f933bd7>\",\"WARC-Concurrent-To\":\"<urn:uuid:a53b76af-21fa-4171-b525-d7cb94231155>\",\"WARC-IP-Address\":\"143.210.133.109\",\"WARC-Target-URI\":\"https://le.ac.uk/modules/2022/md7442\",\"WARC-Payload-Digest\":\"sha1:MZVKWMN3TGLAJE2HHEMSCCK4Q5OJ3IJH\",\"WARC-Block-Digest\":\"sha1:QMSWOQWOAZOZVHMSZKZYAGDZ4BDKBTCK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104672585.89_warc_CC-MAIN-20220706121103-20220706151103-00617.warc.gz\"}"} |
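As a toy illustration of the module's starting point — fitting a simple linear regression — the ordinary-least-squares slope and intercept have a closed form: b = cov(x, y)/var(x), a = mean(y) − b·mean(x). A Python sketch (generic statistics, not course material; the function name is mine):

```python
def ols_fit(xs, ys):
    # Closed-form OLS fit for y = a + b*x
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Points generated from y = 1 + 2x are recovered exactly
print(ols_fit([0, 1, 2, 3], [1, 3, 5, 7]))  # (1.0, 2.0)
```

Model checking — the other half of the module — would then examine the residuals of such a fit.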
https://www.clutchprep.com/physics/practice-problems/139130/a-penny-is-placed-a-distance-r-from-the-center-of-a-record-spinning-at-90x-rad-m | [
"# Problem: A penny is placed a distance r from the center of a record spinning at ω = 90π rad/min. The coefficient of static friction between the penny and the record is μs = 0.11 on the horizontal plane. Part (a) Select an expression for the maximum distance, r, the penny can be placed from the center and not move. Part (b) What is the distance, r, in meters?\n\n###### FREE Expert Solution\n\nPart (a)\n\nEquilibrium of forces:",
null,
"###### Problem Details\n\nA penny is placed a distance r from the center of a record spinning at ω = 90π rad/min. The coefficient of static friction between the penny and the record is μs = 0.11 on the horizontal plane.\n\nPart (a) Select an expression for the maximum distance, r, the penny can be placed from the center and not move.",
null,
"Part (b) What is the distance, r in meters?"
] | [
null,
"https://cdn.clutchprep.com/assets/button-view-text-solution.png",
null,
"https://lightcat-files.s3.amazonaws.com/problem_images/a77526adbfa2fb1b-1587489176107.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91314876,"math_prob":0.96157306,"size":974,"snap":"2020-34-2020-40","text_gpt3_token_len":214,"char_repetition_ratio":0.1185567,"word_repetition_ratio":0.083333336,"special_character_ratio":0.21355236,"punctuation_ratio":0.104166664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9746306,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-08T01:20:40Z\",\"WARC-Record-ID\":\"<urn:uuid:a7a8d7c2-a2ff-4fe8-9d27-4b9e86a5a5fa>\",\"Content-Length\":\"92380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:174a41e7-8aaa-4adc-9068-0f1f17b90d0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0983c76f-f5ac-476c-8e9f-18bcb5609fa8>\",\"WARC-IP-Address\":\"52.21.175.83\",\"WARC-Target-URI\":\"https://www.clutchprep.com/physics/practice-problems/139130/a-penny-is-placed-a-distance-r-from-the-center-of-a-record-spinning-at-90x-rad-m\",\"WARC-Payload-Digest\":\"sha1:3OC454QC6WSD7WS52QX66QEJGIUOMLZB\",\"WARC-Block-Digest\":\"sha1:XZ4XJWQUBZVXLXKAEJLZWWLH3NCFMZEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737233.51_warc_CC-MAIN-20200807231820-20200808021820-00578.warc.gz\"}"} |
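For part (a), balancing the maximum static friction against the required centripetal force, μs·m·g = m·ω²·r, gives r = μs·g/ω². A quick numeric check for part (b), assuming g = 9.8 m/s² (the value of g is not stated in the problem, and this is not the site's worked solution):

```python
import math

mu_s = 0.11
omega = 90 * math.pi / 60.0  # 90*pi rad/min converted to rad/s
g = 9.8                      # assumed value of gravitational acceleration, m/s^2

# friction supplies the centripetal force: mu_s * m * g >= m * omega**2 * r
r_max = mu_s * g / omega**2
print(round(r_max, 4))  # about 0.0485 m
```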
https://mathlearnit.com/ratio-in-math.html | [
"# Ratio in Math\n\nA ratio is a way of comparing values of the same kind: how much of one thing there is compared to another, measured in the same units.\n\nValues such as lengths, weights and areas, for example.\n\nThe units of measure being compared need to be the same.\n\nRatio in Math Introduction\n\nA bag contains 5 billiard or pool balls.\n\nInside the bag are 4 red balls and 1 yellow ball.\n\nRED BALLS\n\nYELLOW BALL\n\nThe ratio of red balls to yellow balls is 4 : 1.\n\nThere are 4 times as many red balls as yellow balls.\n\n## Ratio in Math Notation\n\na : b is the common notation for ratios.\n\nThough sometimes ratios can be represented as a fraction, {\\frac{\\tt{a}}{\\tt{b}}}.\n\nSo the red and yellow ball ratio above could also be written as \\bf{\\frac{4}{1}}.\n\nAlso, the ratio could easily be written in a different order,\nas yellow to red balls instead => 1 : 4.\n\n## Simplified Ratios\n\nThe ratio 4 : 1 doesn't have to represent a group of only 5 things though, like with the bag of pool balls.\n\nIt can be a simplified ratio, representing a group larger than 5.\n\nIf the bag contained 8 red balls and 2 yellow balls, the ratio of red balls to yellow balls would be 8 red balls to 2 yellow balls,\n\n8 : 2, which can be simplified to 4 : 1.\n\nEven in this bag with more pool balls, the ratio is still 4 : 1,\n\nas there are still 4 times as many red balls as yellow balls overall.\n\nLike fractions, ratios are generally simplified where possible, to a neater equivalent ratio.\n\n8 : 2 and 4 : 1 are equivalent ratios; they represent the same scale.\n\n## Ratio in Math Examples\n\n(1.1)\n\nExpress 6 : 30 as a ratio in its simplest form.\n\nSolution\n\nThe largest common factor of both numbers is 6.\n\n6 ÷ 6 = 1 , 30 ÷ 6 = 5\n\n6 : 30 can be simplified to 1 : 5.\n\n(1.2)\n\nExpress 18 : 9 as a ratio in its simplest form.\n\nSolution\n\n18 ÷ 9 = 2 , 9 ÷ 9 = 1\n\n18 : 9 can be simplified to 2 : 1.\n\n(1.3)\n\nExpress 1 YEAR to 8 MONTHS as a ratio in its simplest form.\n\nSolution\n\n1 YEAR : 8 MONTHS\n\nFirst look to make both measurements/units the same.\n\nHere they are not initially, as a month and a year are different values.\n\nWhat we can do is change 1 YEAR into 12 MONTHS.\n\n1 YEAR : 8 MONTHS = 12 MONTHS : 8 MONTHS = 12 : 8\n\nRatio 12 : 8 can now be simplified to 3 : 2.\n\n(1.4)\n\nShare $2500 in the ratio 2 : 3.\n\nSolution\n\n2 : 3 is 5 parts in total.\n\nThe first step is to split $2500 into 5 equal parts.\n\n$2500 ÷ 5 = $500\n\nIt can often help to draw an appropriate number line to illustrate the situation and understand what's going on.",
null,
"2 parts = $1000 , 3 parts = $1500\n\nFor $2500: Ratio 2 : 3 = $1000 : $1500.\n\n## Ratios Extra Case\n\nThere is also the case that sometimes a ratio can compare a part of a group to the larger whole group.\n\nPart of Group : Larger Whole Group\n\nIn the pool balls in the bag example, the 4 red balls alone could be given as the ratio 4 : 5.\n\nMeaning that there are 4 red pool balls in the bag, in relation to all pool balls in the bag.\n\nExample\n\n(2.1)\n\nA fish tank has a capacity of 100 liters, but has only been filled with 20 liters of water.\n\nWhat ratio of the tank has been filled?\n\nSolution\n\nPart of Group : Larger Whole Group = 20 : 100\n\n20 : 100 can be simplified to the ratio 1 : 5.\n\nOr if we wish to write the ratio in fraction form:\n\n\\boldsymbol{\\tt\\frac{Part \\space of \\space Group}{Larger \\space Whole \\space Group}} = \\bf{\\frac{20}{100}} = \\bf{\\frac{1}{5}}\n\n\\bf{\\frac{1}{5}} of the fish tank has been filled with water."
] | [
null,
"https://mathlearnit.com/static/images/ratio-math-2500-dollars.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9309776,"math_prob":0.98358256,"size":3135,"snap":"2021-43-2021-49","text_gpt3_token_len":886,"char_repetition_ratio":0.1351006,"word_repetition_ratio":0.05965463,"special_character_ratio":0.3043062,"punctuation_ratio":0.13817663,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992011,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T07:44:55Z\",\"WARC-Record-ID\":\"<urn:uuid:7dc0e15f-afb0-4854-bf21-1fd9ffdee4ca>\",\"Content-Length\":\"47435\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f950287f-cc1b-4828-9063-4f706ef919e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:55e8b087-9751-4837-a5f0-ee0e06ab6df2>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://mathlearnit.com/ratio-in-math.html\",\"WARC-Payload-Digest\":\"sha1:DVRUIXNUBP2QORWOQ5X7MYLHKLB5FI6O\",\"WARC-Block-Digest\":\"sha1:WANECLK5FW4JXUVBV2RQYBUMUEQ6UTTC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363290.59_warc_CC-MAIN-20211206072825-20211206102825-00560.warc.gz\"}"} |
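The two operations in the lesson — simplifying a ratio and sharing an amount in a given ratio — come down to a greatest common factor and a proportional split. A small illustrative sketch:

```python
from math import gcd

def simplify(a, b):
    # divide both terms by their greatest common factor
    g = gcd(a, b)
    return a // g, b // g

def share(total, parts):
    # split `total` proportionally to the ratio given by `parts`
    whole = sum(parts)
    return [total * p // whole for p in parts]

print(simplify(8, 2))       # (4, 1)
print(simplify(12, 8))      # (3, 2)
print(share(2500, (2, 3)))  # [1000, 1500]
```

The `share` call reproduces example (1.4): $2500 in the ratio 2 : 3 is $1000 : $1500.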
https://mailman.nanog.org/pipermail/nanog/1999-August/027422.html | [
"# more internic fun, and an idea\n\nmelinda b. thompson ima at badhabit.org\nWed Aug 4 13:36:52 UTC 1999\n\n```have been now waiting since july 17 for a simple NS change on a domain.\n\nthis is getting a bit ridiculous.\n\ni suggest that internic should prorate the yearly charge and have to\nsubtract the prorated days that it takes them to process change orders from\nthe yearly bill for the domain.\n\neach single one would be just a small ammount.\n\nhowever, with as many domains that have been held up in \"manual processing\"\nor whatever, it could add up.\n\nopinions?\n\nmelinda b thompson\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou have moved your mouse, Windows must be restarted for the\nchanges to take effect.\nima at badhabit.org habit at sexualgoddess.com habit at dal.net habit at galaxynet.org\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7735257,"math_prob":0.99492705,"size":939,"snap":"2020-34-2020-40","text_gpt3_token_len":237,"char_repetition_ratio":0.2342246,"word_repetition_ratio":0.0,"special_character_ratio":0.34824282,"punctuation_ratio":0.11728395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9902889,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-04T05:47:15Z\",\"WARC-Record-ID\":\"<urn:uuid:0a467c07-97af-4e36-8727-5d107278244e>\",\"Content-Length\":\"4077\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9420965f-b760-4f14-84bb-05fdfcc2695e>\",\"WARC-Concurrent-To\":\"<urn:uuid:416eb548-94bc-44b4-b902-9153dcbafa7d>\",\"WARC-IP-Address\":\"104.20.199.50\",\"WARC-Target-URI\":\"https://mailman.nanog.org/pipermail/nanog/1999-August/027422.html\",\"WARC-Payload-Digest\":\"sha1:IVXZIRPYKTCMQYNZFFNXMZA32VRMVOKW\",\"WARC-Block-Digest\":\"sha1:7Z5E3UVK4Z36YJTFLJAAXNXJ6ASWFDQO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735860.28_warc_CC-MAIN-20200804043709-20200804073709-00514.warc.gz\"}"} |
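The proposal in the post is simple pro-rating. As a sketch only — taking an assumed $35/year fee (the post names no amount) and the July 17 to August 4 delay it describes:

```python
from datetime import date

ANNUAL_FEE = 35.0  # assumed yearly fee in dollars; the post does not state one

def prorated_credit(start, end, fee=ANNUAL_FEE):
    # credit the registrant for each day a change order sat unprocessed
    days = (end - start).days
    return round(fee * days / 365, 2)

print(prorated_credit(date(1999, 7, 17), date(1999, 8, 4)))  # 18 days -> 1.73
```

As the post says, each individual credit is small; it only adds up across many delayed domains.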
https://open.kattis.com/problems/permutedarithmeticsequence | [
"# Permuted Arithmetic Sequence\n\nAn arithmetic sequence is a list of values where the difference between consecutive values is always the same. For example, $3, 7, 11, 15$ qualifies and so does $25, 15, 5, -5, -15$. However $2, 4, 7$ and $3, 6, 9, 6$ are not arithmetic sequences.\n\n## Input\n\nInput begins with an integer, $1 \\leq n \\leq 100$, on a line by itself. Following this are $n$ lines, each describing a sequence. Each line begins with an integer, $3 \\leq m \\leq 100$, giving the length of the sequence. This is followed by the $m$ integer values that actually make up the sequence. Each of the sequence integers is in the range $[-10^6,10^6]$.\n\n## Output\n\nFor each sequence, output a line that says “arithmetic” if the sequence is an arithmetic sequence. Output “permuted arithmetic” if the sequence can be reordered to make an arithmetic sequence. Otherwise, output “non-arithmetic”.\n\nSample Input 1 Sample Output 1\n3\n5 1 2 3 4 5\n3 20 6 13\n4 5 9 15 19\n\narithmetic\npermuted arithmetic\nnon-arithmetic\n\nCPU Time limit 1 second\nMemory limit 1024 MB\nDifficulty 2.1 (easy)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8656173,"math_prob":0.99864095,"size":998,"snap":"2022-27-2022-33","text_gpt3_token_len":292,"char_repetition_ratio":0.20321931,"word_repetition_ratio":0.0,"special_character_ratio":0.30661324,"punctuation_ratio":0.1509434,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992905,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T12:47:33Z\",\"WARC-Record-ID\":\"<urn:uuid:150d952c-a13c-4259-868f-7aec032d15b1>\",\"Content-Length\":\"24183\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cd2ffc1-d8e5-4c80-9494-d366914a367b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b534896b-f7e6-43af-8746-6e084ff70b39>\",\"WARC-IP-Address\":\"104.26.10.67\",\"WARC-Target-URI\":\"https://open.kattis.com/problems/permutedarithmeticsequence\",\"WARC-Payload-Digest\":\"sha1:OKKWUZOWKCUH5ZDOXVELU5E3TF7C5O6T\",\"WARC-Block-Digest\":\"sha1:CIOSIEQLGXSONRJISEY35YPMVIRV4LTK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570827.41_warc_CC-MAIN-20220808122331-20220808152331-00716.warc.gz\"}"} |
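One straightforward approach to the classification above — a sketch, not an official reference solution — is to check the raw sequence for a constant difference, then check its sorted version:

```python
def is_arithmetic(seq):
    # constant difference between every pair of consecutive values
    d = seq[1] - seq[0]
    return all(b - a == d for a, b in zip(seq, seq[1:]))

def classify(seq):
    if is_arithmetic(seq):
        return "arithmetic"
    if is_arithmetic(sorted(seq)):
        return "permuted arithmetic"
    return "non-arithmetic"

print(classify([1, 2, 3, 4, 5]))  # arithmetic
print(classify([20, 6, 13]))      # permuted arithmetic
print(classify([5, 9, 15, 19]))   # non-arithmetic
```

If a reordering into an arithmetic sequence exists at all, the sorted order is one such reordering, so checking the sorted sequence suffices.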
http://doc.raqsoft.com/esproc/tutorial/fenzuyuhuizong.html | [
"# Grouping and Summarizing Data\n\nIt is a common task during statistics and data analysis to group the records of a table sequence as needed and to summarize the data in each group. This includes data aggregation tasks like calculating sums and averages, as well as listing the top n records, etc. This part explores how to group and summarize the data of a table sequence in esProc.\n\n## 3.2.1 Finding distinct values\n\nSome fields, like sequence numbers, contain values that are all different from each other, but many fields contain duplicate values. Sometimes all the distinct values need to be listed; in this case, the id function is used. For example:\n\n A 1 =file(\"Order_Books.txt\").import@t() 2 =A1.id(SalesID) 3 =A1.id(PName)\n\nBelow is a selection of A1’s table sequence:",
null,
"A2 gets all the distinct IDs of salespersons, i.e. SalesID. A3 gets all the distinct names of books, i.e. PName. Results of A2 and A3 are as follows:",
null,
"",
null,
"As can be seen from the results, id function will sort values in ascending order when computing distinct values.\n\nActually, id function returns a result similar to that returned by the SQL distinct statement.\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.id(STATE) 3 =demo.query(\"select distinct STATE from EMPLOYEE\")\n\nThe following is a table sequence of employee information A1 retrieves from the database:",
null,
"A2 finds out all distinct states from the table sequence in A1 using id function, while A3 gets them from the database using the distinct statement. Results of A2 and A3 are as follows:",
null,
"",
null,
"The two methods return the same result. The difference is that id function returns the result as a sequence, while the SQL statement returns a result of table sequence from the database.\n\nBut sometimes you might want to keep the original order of the data. To achieve this, use @o option. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.id@o(STATE)\n\nNow A2’s result is this:",
null,
"The id@o function doesn’t sort the data but directly removes neighboring data of the same value. So there can be duplicate values in the result, like Texas and California in the above result.\n\nThe id function can also list all distinct results of a certain expression rather than all distinct values of a field:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.id@u(age(BIRTHDAY))\n\nA2 lists the ages of employees with the @u option. This will remove duplicate values from the result set and keep values in the order of their first appearance:",
null,
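For readers more familiar with Python, the three flavors of id shown in this section — sorted distinct values, removal of only adjacent duplicates (id@o), and distinct values in order of first appearance (id@u) — correspond roughly to the following (an analogy only, not esProc code; the state list is made up):

```python
from itertools import groupby

states = ["Texas", "Texas", "California", "Texas", "Ohio"]

# id(...)  -> distinct values, sorted ascending
print(sorted(set(states)))              # ['California', 'Ohio', 'Texas']

# id@o(...) -> only adjacent duplicates removed, original order kept
print([k for k, _ in groupby(states)])  # ['Texas', 'California', 'Texas', 'Ohio']

# id@u(...) -> duplicates removed in order of first appearance
print(list(dict.fromkeys(states)))      # ['Texas', 'California', 'Ohio']
```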
"## 3.2.2 Equi-grouping\n\nYou often need to group data according to a certain value, such as grouping employees by department and grouping people by gender. esProc provides group function to group data of a table sequence or a record sequence and return a sequence composed of groups. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.group(DEPT) 3 =A1.group@o(DEPT) 4 =A1.group(year(BIRTHDAY))\n\nA2 groups records of employees by DEPT. The result is as follows:",
null,
"The result is a sequence whose members are groups sorted by department in alphabetical order. Each group is a record sequence consisting of records of employees in the same department.\n\nThe group function can also work with the @o option to keep the original order, only putting neighboring data with the same values into one group, like what A3 does. It only puts neighboring employees of the same department into one group when grouping data by department. The result is as follows:",
null,
"You can also group data by a given expression by putting data that make the expression produce the same result into one group. For example, A4 groups employees by the birth year. Here’s the result:",
null,
"The group function is similar to the group by operation in SQL. Its return value is a sequence consisting of groups. It is a sequence composed of sequences. The grouping result can be used repeatedly for further data grouping and aggregation. This is different from SQL that hasn’t the explicit set data types and, therefore, cannot store the grouping result. With SQL, each group of data must be summarized immediately when group by is executed, and after that the grouping result will be dumped and thus cannot be reused.\n\nesProc allows summarizing the data of each group after data grouping repeatedly and in different ways. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.group(DEPT) 3 =A2.new(DEPT,count(~):Count,~.sum(SALARY):TotalSalary) 4 =A2.new(DEPT,~.count(GENDER==\"M\"):Male, ~.count(GENDER==\"F\"):Female)\n\nUsing the grouping result, A3 computes the number of employees and the total salary of each department. A4 computes the number of male employees and the number of female employees in each department. Both return the summarizing result as a table sequence, as shown below:",
null,
"",
null,
"A3 and A4 perform different summarizing operations based on the same grouping result of A2. Reusing the grouping result is one of the important features of esProc.\n\nAn explanation of the expression =A2.new(DEPT,count(~):Count,~.sum(SALARY):TotalSalary) in A3: As A2 is the result of grouping A1’s table sequence, every one of its members is a set of records, that is, a record sequence. Therefore, when the new function performs loop computation on A2, the targets of its expressions are the record sequences. For example, DEPT means getting the value of the DEPT field of the first record of the current record sequence; ~.sum(SALARY) means summing up the SALARY field of the current record sequence.\n\nSimilar to the group by clause in SQL, the group function also supports grouping data by multiple fields (expressions) simultaneously. In that case, only records in which all the specified fields have the same values will be grouped together.\n\nSometimes you need to divide an amount of data into a specified number of groups. In that case, the z(i,n) function is useful. For example:\n\n A 1 =z(123,40) 2 =demo.query(\"select * from EMPLOYEE\") 3 =A2.group(z(#,60)) 4 =A3.new(#:GID,~.count():Count,~:Emps)\n\nThe function finds, assuming every group contains n members, which group the ith member falls in. A1’s expression finds, with every 40 members per group, which group the 123rd member falls in. Here’s the result:",
null,
"A3 uses z function to group the employees with 60 in each. A4 calculates the number of employees in each group. Here’s A4’s result:",
null,
"To separate n records evenly into k groups, use the z(i,k,n) function:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.len() 3 =A1.group(z(#,6,A2)) 4 =A3.new(#:GID,~.count():Count,~:Emps)\n\nThe function finds which group the ith record will fall in. A3 tries to separate the employee records into 6 groups evenly. A4 calculates the number of employees in each group. Below is A4’s result:",
null,
"When records can’t be evenly separated, the extra records will be put into groups from front to back. The counterpart SQL function is NTILE.\n\n## 3.2.3 Group and aggregate functions\n\nThe previous section illustrates how to group the records in a table sequence and then summarize the records of each group using the grouping result. You can also use the group function to get the aggregate result directly, saving the trouble of computing it step by step. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.group(DEPT;~.count():Count,~.sum(SALARY):TotalSalary) 3 =A1.group(DEPT;~.count(GENDER==\"M\"):Male, ~.count(GENDER==\"F\"):Female) 4 =A1.group(DEPT;round(~.avg(age(BIRTHDAY)),2):AverageAge)\n\nBoth A2 and A3 group and summarize the records by the DEPT field directly. They get the same result as that obtained by step-by-step computation:",
null,
"",
null,
"A4 computes the average age of employees in each department:",
null,
"esProc also provides groups function to compute the grouping and summarizing result by accumulation. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.groups(DEPT;count():Count,sum(SALARY):TotalSalary) 3 =A1.groups(DEPT;count(GENDER==\"M\"):Male, count(GENDER==\"F\"):Female)\n\nA2 and A3 group and summarize the data by DEPT field simultaneously. Notice that \"~.\" is omitted here. This is different from the code in the above, but the result is the same as that got by grouping first and then summarizing:",
null,
"",
null,
"The groups function also returns a table sequence when grouping and summarizing data simultaneously. It won’t record the data in each group, but only calculates the accumulated value according to the grouping expression. Compared with the group function, the groups function is more efficient.\n\nNote: The groups function can only work with simple aggregate functions, such as sum/count/max/min/top/avg/iterate/icount/median, to perform grouping and aggregation at the same time. To perform complicated grouping and aggregation operations, you should use the group function only.\n\nIn particular, top, one of the aggregate functions, can be used to compute the n smallest values. For example:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.groups(DEPT;top(3,age(BIRTHDAY)))\n\nA2 computes the ages of the three youngest persons in each department through data grouping:",
null,
"top can only compute the n smallest values. To find the ages of the three oldest persons, you can use the following method:\n\n A 1 =demo.query(\"select * from EMPLOYEE\") 2 =A1.groups(DEPT;top(3,BIRTHDAY)) 3 =A2.new(DEPT,#2.(age(~))) 4 =A1.groups(DEPT;top(-3,age(BIRTHDAY)))\n\nA2 first computes the 3 smallest birthdays in each department, and then A3 further computes the corresponding ages. Results of A2 and A3 are as follows:",
null,
"",
null,
"A more convenient method is to add a negative sign inside top: top(-n) directly gets the n largest values. A4 uses it to get the three eldest employees in each department directly. Here’s the result:",
null,
"Notice that A3 and A4 get the same results."
] | [
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image809.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image811.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image813.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image815.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image817.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image819.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image821.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image823.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image825.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image827.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image829.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image831.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image833.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image835.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image837.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image839.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image831.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image833.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image841.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image842.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image843.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image845.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image847.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image849.jpg",
null,
"http://doc.raqsoft.com/esproc/tutorial/Tutorial.files/image851.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.855476,"math_prob":0.96736866,"size":9815,"snap":"2022-27-2022-33","text_gpt3_token_len":2365,"char_repetition_ratio":0.15350117,"word_repetition_ratio":0.105919,"special_character_ratio":0.247784,"punctuation_ratio":0.13861893,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904484,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,2,null,2,null,1,null,1,null,2,null,2,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T21:01:08Z\",\"WARC-Record-ID\":\"<urn:uuid:3a38d3fe-7e30-4ec9-a1b0-849e84cba2dc>\",\"Content-Length\":\"124245\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:023fdab8-c9fa-4b8e-9514-624f888fc813>\",\"WARC-Concurrent-To\":\"<urn:uuid:4beb4c81-7a5e-4f5c-b3de-0bb82a3b32dc>\",\"WARC-IP-Address\":\"72.14.178.53\",\"WARC-Target-URI\":\"http://doc.raqsoft.com/esproc/tutorial/fenzuyuhuizong.html\",\"WARC-Payload-Digest\":\"sha1:XHB4UNYDM2IEHCYVVBG7TN5BCJWFJRUF\",\"WARC-Block-Digest\":\"sha1:HJIZGOQ56B6FM537WKSEDXMAR5S2RG3N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571987.60_warc_CC-MAIN-20220813202507-20220813232507-00632.warc.gz\"}"} |
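The group-once, aggregate-many pattern that the tutorial emphasizes can be mimicked in plain Python. The field names below follow the tutorial's EMPLOYEE table (DEPT, SALARY, GENDER), but the sample rows are made up for illustration — this is an analogy, not esProc:

```python
from collections import defaultdict

employees = [
    {"name": "Rebecca", "dept": "R&D",     "salary": 7000,  "gender": "F"},
    {"name": "Ashley",  "dept": "Finance", "salary": 11000, "gender": "F"},
    {"name": "Matthew", "dept": "Sales",   "salary": 11000, "gender": "M"},
    {"name": "Ryan",    "dept": "R&D",     "salary": 13000, "gender": "M"},
]

# like A1.group(DEPT): build the groups once...
groups = defaultdict(list)
for e in employees:
    groups[e["dept"]].append(e)

# ...then reuse the same grouping result for several aggregations,
# as A3 and A4 do in the tutorial
totals = {d: sum(e["salary"] for e in g) for d, g in groups.items()}
males = {d: sum(1 for e in g if e["gender"] == "M") for d, g in groups.items()}

print(totals)  # {'R&D': 20000, 'Finance': 11000, 'Sales': 11000}
print(males)   # {'R&D': 1, 'Finance': 0, 'Sales': 1}
```

As in esProc, keeping the grouped records around (rather than aggregating immediately, SQL-style) is what makes the second and third aggregations cheap.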
https://www.w3cschool.cn/julia/gpte1jfi.html | [
"# Julia 控制流\n\n2022-02-25 16:47 更新\n\nJulia 提供一系列控制流:\n\n## 复合表达式\n\n``````julia> z = begin\nx = 1\ny = 2\nx + y\nend\n3``````\n\n``````julia> z = (x = 1; y = 2; x + y)\n3``````\n\n``````julia> begin x = 1; y = 2; x + y end\n3\n\njulia> (x = 1;\ny = 2;\nx + y)\n3``````\n\n## 条件求值\n\n``````if x < y\nprintln(\"x is less than y\")\nelseif x > y\nprintln(\"x is greater than y\")\nelse\nprintln(\"x is equal to y\")\nend``````\n\n``````julia> function test(x, y)\nif x < y\nprintln(\"x is less than y\")\nelseif x > y\nprintln(\"x is greater than y\")\nelse\nprintln(\"x is equal to y\")\nend\nend\ntest (generic function with 1 method)\n\njulia> test(1, 2)\nx is less than y\n\njulia> test(2, 1)\nx is greater than y\n\njulia> test(1, 1)\nx is equal to y``````\n\n`elseif``else` 块是可选的。\n\n``````julia> if 1\nprintln(\"true\")\nend\nERROR: type: non-boolean (Int64) used in boolean context``````\n\n“问号表达式”语法 `?:``if-elseif-else` 语法相关,但是适用于单个表达式:\n\n``a ? b : c``\n\n`?` 之前的 `a` 是条件表达式,如果为 `true` ,就执行 : 之前的 `b` 表达式,如果为 `false` ,就执行 `:``c` 表达式。\n\n``````julia> x = 1; y = 2;\n\njulia> println(x < y ? \"less than\" : \"not less than\")\nless than\n\njulia> x = 1; y = 0;\n\njulia> println(x < y ? \"less than\" : \"not less than\")\nnot less than``````\n\n``````julia> test(x, y) = println(x < y ? \"x is less than y\" :\nx > y ? \"x is greater than y\" : \"x is equal to y\")\ntest (generic function with 1 method)\n\njulia> test(1, 2)\nx is less than y\n\njulia> test(2, 1)\nx is greater than y\n\njulia> test(1, 1)\nx is equal to y``````\n\n`if-elseif-else` 类似,`:` 前后的表达式,只有在对应条件表达式为 `true``false` 时才执行:\n\n``````julia> v(x) = (println(x); x)\nv (generic function with 1 method)\n\njulia> 1 < 2 ? v(\"yes\") : v(\"no\")\nyes\n\"yes\"\n\njulia> 1 > 2 ? 
v(\"yes\") : v(\"no\")\nno\n\"no\"``````\n\n## Short-Circuit Evaluation\n\nThe Boolean operators `&&` and `||` perform short-circuit evaluation: they connect a chain of Boolean expressions, and only the minimum number of expressions needed to determine the value of the whole chain is evaluated. This means that in the expression `a && b`, the subexpression `b` is evaluated only if `a` is `true`, and in the expression `a || b`, the subexpression `b` is evaluated only if `a` is `false`. Both `&&` and `||` associate to the right, but `&&` has a higher precedence than `||`:\n\n``````julia> t(x) = (println(x); true)\nt (generic function with 1 method)\n\njulia> f(x) = (println(x); false)\nf (generic function with 1 method)\n\njulia> t(1) && t(2)\n1\n2\ntrue\n\njulia> t(1) && f(2)\n1\n2\nfalse\n\njulia> f(1) && t(2)\n1\nfalse\n\njulia> f(1) && f(2)\n1\nfalse\n\njulia> t(1) || t(2)\n1\ntrue\n\njulia> t(1) || f(2)\n1\ntrue\n\njulia> f(1) || t(2)\n1\n2\ntrue\n\njulia> f(1) || f(2)\n1\n2\nfalse``````\n\nThis behavior is frequently used as an alternative to very short `if` statements:\n\n``````julia> function factorial(n::Int)\nn >= 0 || error(\"n must be non-negative\")\nn == 0 && return 1\nn * factorial(n-1)\nend\nfactorial (generic function with 1 method)\n\njulia> factorial(5)\n120\n\njulia> factorial(0)\n1\n\njulia> factorial(-1)\nERROR: n must be non-negative\nin factorial at none:2``````\n\nBoolean operations without short-circuit evaluation can be done with the bitwise Boolean operators `&` and `|`:\n\n``````julia> f(1) & t(2)\n1\n2\nfalse\n\njulia> t(1) | t(2)\n1\n2\ntrue``````\n\nThe operands of `&&` and `||` must also be Boolean values (`true` or `false`). Using a non-Boolean value anywhere except as the last entry in a conditional chain is an error:\n\n``````julia> 1 && true\nERROR: type: non-boolean (Int64) used in boolean context``````\n\nAt the end of a chain, however, any type of expression may be used; it is evaluated and returned if the chain gets that far:\n\n``````julia> true && (x = rand(2,2))\n2x2 Array{Float64,2}:\n0.768448 0.673959\n0.940515 0.395453\n\njulia> false && (x = rand(2,2))\nfalse``````\n\n## Repeated Evaluation: Loops\n\nThere are two constructs for repeated evaluation: the `while` loop and the `for` loop:\n\n``````julia> i = 1;\n\njulia> while i <= 5\nprintln(i)\ni += 1\nend\n1\n2\n3\n4\n5``````\n\nThe `for` loop makes the common pattern of iterating over a range easier to write:\n\n``````julia> for i = 1:5\nprintln(i)\nend\n1\n2\n3\n4\n5``````\n\nIf the loop variable is introduced by the `for` loop itself, it is visible only inside the loop and not afterwards:\n\n``````julia> for j = 1:5\nprintln(j)\nend\n1\n2\n3\n4\n5\n\njulia> j\nERROR: j not defined``````\n\nIn general, a `for` loop can iterate over any container; in that case the keyword `in` is typically used instead of `=`:\n\n``````julia> for i in [1,4,0]\nprintln(i)\nend\n1\n4\n0\n\njulia> for s in [\"foo\",\"bar\",\"baz\"]\nprintln(s)\nend\nfoo\nbar\nbaz``````\n\nIt is sometimes convenient to terminate a loop early with `break`:\n\n``````julia> i = 1;\n\njulia> while true\nprintln(i)\nif i >= 5\nbreak\nend\ni += 1\nend\n1\n2\n3\n4\n5\n\njulia> for i = 1:1000\nprintln(i)\nif i >= 5\nbreak\nend\nend\n1\n2\n3\n4\n5``````\n\nIn other circumstances, it is useful to stop an iteration immediately and move on to the next one with `continue`:\n\n``````julia> 
for i = 1:10\nif i % 3 != 0\ncontinue\nend\nprintln(i)\nend\n3\n6\n9``````\n\nMultiple nested `for` loops can be combined into a single outer loop, which iterates over the Cartesian product of the ranges:\n\n``````julia> for i = 1:2, j = 3:4\nprintln((i, j))\nend\n(1,3)\n(1,4)\n(2,3)\n(2,4)``````\n\n## Exception Handling\n\n### Built-in Exceptions (`Exception`)\n\nWhen an unexpected condition occurs, a function may be unable to return a reasonable value to its caller and may instead throw an exception that interrupts the normal flow of control. The built-in exceptions include:\n\nException\nArgumentError\nBoundsError\nDivideError\nDomainError\nEOFError\nErrorException\nInexactError\nInterruptException\nKeyError\nMemoryError\nMethodError\nOverflowError\nParseError\nSystemError\nTypeError\nUndefRefError\nUndefVarError\n\nFor example, `sqrt` throws a `DomainError` when applied to a negative real value:\n\n``````julia> sqrt(-1)\nERROR: DomainError\nsqrt will only return a complex result if called with a complex argument.\ntry sqrt(complex(x))\nin sqrt at math.jl:131``````\n\nYou can define your own exceptions as well:\n\n``julia> type MyCustomException <: Exception end``\n\n### The `throw` Function\n\nExceptions can be created explicitly with `throw`. For example, a function defined only for non-negative numbers can `throw` a `DomainError` if the argument is negative:\n\n``````julia> f(x) = x>=0 ? exp(-x) : throw(DomainError())\nf (generic function with 1 method)\n\njulia> f(1)\n0.36787944117144233\n\njulia> f(-1)\nERROR: DomainError\nin f at none:1``````\n\nNote that `DomainError` must be called with parentheses to obtain an `Exception` object; without them it is merely the exception type, not an instance:\n\n``````julia> typeof(DomainError()) <: Exception\ntrue\n\njulia> typeof(DomainError) <: Exception\nfalse``````\n\nSome exception types take one or more arguments that are used for error reporting:\n\n``````julia> throw(UndefVarError(:x))\nERROR: x not defined``````\n\nCustom exception types can follow the same pattern as `UndefVarError`:\n\n``````julia> type MyUndefVarError <: Exception\nvar::Symbol\nend\njulia> Base.showerror(io::IO, e::MyUndefVarError) = print(io, e.var, \" not defined\");``````\n\n### The `error` Function\n\nThe `error` function is used to produce an `ErrorException` that interrupts the normal flow of execution.\n\nSuppose we want to stop execution immediately if the square root of a negative number is taken:\n\n``````julia> fussy_sqrt(x) = x >= 0 ? 
sqrt(x) : error(\"negative x not allowed\")\nfussy_sqrt (generic function with 1 method)\n\njulia> fussy_sqrt(2)\n1.4142135623730951\n\njulia> fussy_sqrt(-1)\nERROR: negative x not allowed\nin fussy_sqrt at none:1``````\n\nIf `fussy_sqrt` is called with a negative value from another function, the calling function is interrupted as well, and the error propagates up to the interactive session:\n\n``````julia> function verbose_fussy_sqrt(x)\nprintln(\"before fussy_sqrt\")\nr = fussy_sqrt(x)\nprintln(\"after fussy_sqrt\")\nreturn r\nend\nverbose_fussy_sqrt (generic function with 1 method)\n\njulia> verbose_fussy_sqrt(2)\nbefore fussy_sqrt\nafter fussy_sqrt\n1.4142135623730951\n\njulia> verbose_fussy_sqrt(-1)\nbefore fussy_sqrt\nERROR: negative x not allowed\nin verbose_fussy_sqrt at none:3``````\n\n### The `warn` and `info` Functions\n\nJulia also provides functions that write messages to the standard error I/O stream but do not throw an exception, and therefore do not interrupt execution:\n\n``````julia> info(\"Hi\"); 1+1\nINFO: Hi\n2\n\njulia> warn(\"Hi\"); 1+1\nWARNING: Hi\n2\n\njulia> error(\"Hi\"); 1+1\nERROR: Hi\nin error at error.jl:21``````\n\n### The `try/catch` Statement\n\nThe `try/catch` statement can be used to handle some of the anticipated `Exception`s. For example, the square-root function below correctly handles either a real or a complex number:\n\n``````julia> f(x) = try\nsqrt(x)\ncatch\nsqrt(complex(x, 0))\nend\nf (generic function with 1 method)\n\njulia> f(1)\n1.0\n\njulia> f(-1)\n0.0 + 1.0im``````\n\nA `try/catch` statement can also bind the exception to a variable. For example:\n\n``````julia> sqrt_second(x) = try\nsqrt(x)\ncatch y\nif isa(y, DomainError)\nsqrt(complex(x, 0))\nelseif isa(y, BoundsError)\nsqrt(x)\nend\nend\nsqrt_second (generic function with 1 method)\n\njulia> sqrt_second([1 4])\n2.0\n\njulia> sqrt_second([1 -4])\n0.0 + 2.0im\n\njulia> sqrt_second(9)\n3.0\n\njulia> sqrt_second(-9)\nERROR: DomainError\nin sqrt_second at none:7``````\n\nNote that the symbol immediately following `catch` is always interpreted as a name for the exception, so care is needed when writing a `try/catch` expression on a single line. The following does not evaluate `x` after an error; it binds the caught exception to `x`:\n\n``try bad() catch x end``\n\nTo evaluate `x` as a statement instead, insert a semicolon or a line break after `catch`:\n\n``````try bad() catch; x end\n\ntry bad()\ncatch\nx\nend``````\n\nJulia also provides the more advanced exception-handling functions `rethrow`, `backtrace`, and `catch_backtrace`.\n\n### The `finally` Statement\n\nIn code that changes state or uses resources such as files, clean-up work (for example, closing a file) usually has to be done when the code finishes. A `finally` clause runs regardless of how the `try` block exits:\n\n``````f = open(\"file\")\ntry\n# operate on file f\nfinally\nclose(f)\nend``````\n\n## Tasks (aka Coroutines)\n\nTasks allow computations to be suspended and resumed flexibly. A classic example is the producer-consumer problem, in which one procedure generates values and another consumes them. Julia provides the `produce` and `consume` functions to solve this problem. A producer calls `produce` to emit values:\n\n``````julia> function producer()\nproduce(\"start\")\nfor 
n=1:4\nproduce(2n)\nend\nproduce(\"stop\")\nend;``````\n\nTo consume the values, wrap the producer in a `Task` and call `consume` on it repeatedly:\n\n``````julia> p = Task(producer);\n\njulia> consume(p)\n\"start\"\n\njulia> consume(p)\n2\n\njulia> consume(p)\n4\n\njulia> consume(p)\n6\n\njulia> consume(p)\n8\n\njulia> consume(p)\n\"stop\"``````\n\nA `Task` can also be used as an iterable object in a `for` loop, in which case the loop variable takes on all the produced values:\n\n``````julia> for x in Task(producer)\nprintln(x)\nend\nstart\n2\n4\n6\n8\nstop``````\n\nA producer that requires arguments can be wrapped in a zero-argument anonymous function when the task is created:\n\n``````function mytask(myarg)\n...\nend\n\n# equivalently``````\n\nNote that `produce` and `consume` do not launch threads on different CPU cores. True kernel threads are discussed under Parallel Computing.\n\n### Core Task Operations\n\n`yieldto` is powerful, but it is rarely called directly. When you switch away from the current task, you will probably want to switch back to it at some point, and knowing when and to which task to switch requires considerable coordination. For example, `produce` needs to maintain some state in order to remember who the consumer is. Not needing to manually keep track of the consuming task is what makes `produce` easier to use than `yieldto`.\n\n### Task States\n\nTasks can be in one of the following states:\n\n:runnable - currently running, or able to be switched to\n:waiting - blocked, waiting for a specific event\n:queued - in the scheduler's run queue, about to be restarted\n:done - finished executing successfully\n:failed - terminated due to an unhandled exception",
null,
"",
null,
""
] | [
null,
"https://7n.w3cschool.cn/statics/images/w3c/app-qrcode2.png",
null,
"https://7n.w3cschool.cn/statics/images/w3c/mp-qrcode.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.6827881,"math_prob":0.95160353,"size":10363,"snap":"2022-40-2023-06","text_gpt3_token_len":5961,"char_repetition_ratio":0.1408437,"word_repetition_ratio":0.14008322,"special_character_ratio":0.2984657,"punctuation_ratio":0.09155314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99411184,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T08:18:12Z\",\"WARC-Record-ID\":\"<urn:uuid:5daa7525-648c-4ca1-a03b-5a74458cf30e>\",\"Content-Length\":\"71848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3985dba6-bc87-4107-a638-383c60673581>\",\"WARC-Concurrent-To\":\"<urn:uuid:240084d5-3ba6-4f93-8cdf-bb2f33b77c0e>\",\"WARC-IP-Address\":\"120.79.88.157\",\"WARC-Target-URI\":\"https://www.w3cschool.cn/julia/gpte1jfi.html\",\"WARC-Payload-Digest\":\"sha1:6IBSROPL6P4DIJVNP54LYQW2HBYHPV7W\",\"WARC-Block-Digest\":\"sha1:6ULWIKVCCUJ4GIIM3XHVHUH42NZSZTZS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338001.99_warc_CC-MAIN-20221007080917-20221007110917-00419.warc.gz\"}"} |
https://jewelryrepairfayetteville.com/karte/word-equations-worksheet-instructional-fair-inc.php | [
"# Word Equations Worksheet Instructional Fair Inc\n\nTrace My Name Worksheets #0 Worksheet. Linear equations and inequalities worksheet mcmxciv instructional fair inc 76 mcmxciv instructional fair inc answers geometry if8763. 8. jun 8, 2011 - introduced. How Do You Find Answers to Questions in Instructional Fair? A: The site features answers to Math percents Instructional Fair. Instructional Fair Inc Worksheets;\n\n### Balancing Chemical Equations AP Chemistry\n\nEquilibrium Constant Worksheet Instructional Fair. ... chemical reactions, word equations worksheet instructional fair inc cavalcade instructional fair inc, word equations worksheet chemistry. Instructional Fair Inc Chemistry If8766 balancing chemical equations worksheet answers if8766 nuclear. to Instructional Fair Inc science worksheets.\n\nMath Coloring Worksheets 7th Grade #5. View image Print. Image Info: File Size: Free Printable Math Worksheets 2Nd Grade; Instructional Fair Inc Worksheets; Chemistry If8766 Instructional Fair Answer Key Chemistry if8766 instructional fair inc metrics and related to astounding balancing chemical equations worksheet\n\n85 algebra if8762 mcmxciv instructional fair inc algebra if8762 worksheet mcmxciv instructional fair. variables and equations, Word Equations Worksheet Instructional Fair Word Equations Instructional Fair Inc Downloads WORKSHEET ON CHEMICAL VS PHYSICAL PROPERTIES AND CHANGES 1 months\n\nIf8767 Worksheets - showing all 8 printables. Worksheets are If8767 answer key word equations, Physical science if8767 answer key instructional fair inc, Instructional Fair Worksheet Answers If you find your instructional fair inc math math if8748 answers so overwhelming, word banks, small-group instruction,\n\nInstructional Fair Inc Physical Science If8767 Balancing equations instructional fair inc worksheet 61 physical science if8767 pdf. Related site:\n\nInstructional Fair Inc Worksheets Answers related to astounding balancing chemical equations worksheet worksheet instructional fair inc biology biology and Balancing Redox Equations Instructional Fair Metrics and Measurements 7 Word Equations 59. Instructional Fair Inc Balancing Redox & Worksheets\n\nOur word problem worksheets force students to carefully read and digest problems, Get your head in the game with these sports math word problems! Inc. All naming other. instructional fair, inc s strategies equations 87 Title Chemistry Worksheet oxidation These worksheets are in *.pdf format or as microsoft word\n\nWord Equations Answer Key 1. Zinc and Lead (II) nitrate react to form Zinc Nitrate and Lead. Word Equations Worksheet Balancing Redox Equations Instructional Fair Inc equations worksheet chemistry if8766 chemistry if8766 balancing redox equations balancing equations page 61\n\nDetermining Empirical Formulas Worksheet Instructional Fair Solving Quadratic Equations Using the Page 61 Instructional Fair, Inc. Gas Law Worksheet Answer\n\nInstructional Fair Chemistry Answer Key KS3 Science multiple Choice Quizzes for chemistry, worksheets and Instructional fair inc biology\n\nInstructional Fair Inc Chemistry If8766 amazing periodic table worksheet instructional fair archived calendars related to astounding Word Search Answers,\n\n### Gram Formula Mass Worksheet Instructional Fair",
null,
"Instructional Fair Inc Physical Science If8767.\n\n### Worksheet Word Equations Name",
null,
"Balancing Chemical Equations Worksheet Instructional Fair. equations, instructional fair inc balancing chemical equations answers answer balancing chemical equations chapter 7 worksheet 1 word instructional fair inc. Balancing Redox Equations Instructional Fair Inc equations worksheet chemistry Huge Chemistry IF8766 Page 47 Page 46 Instructional Fair Inc Title Microsoft Word.",
null,
"Balancing Redox Equations Instructional Fair Inc. Word Equations Worksheet Cadrecorner Com. Balancing Equations Worksheet Instructional Fair Inc Breadandhearth. Balancing Equations Worksheet Answers Page 61\n\nBalancing chemical equations is a math-like skill that can be achieved Home Professional Learning Instructional Strategies Balancing Equations Worksheet Instructional Fair Inc Algebra Answer Key.pdf HALF-LIFE WORKSHEET • Introduce “is” as a key word for equals. Aligned Instructional\n\nPhysical Science If8767 Worksheet Physical Science If8767 Answer Key Word Equations Download ebook Physical Science If8767 Answer Key Word Equations in …\n\nBalancing Chemical Equations – Answer Key Balance the equations below: 1) 1 N2 + 3 H2\n\nAnswer to Instructional Fair, Inc Biology IF8765, page 15 Balancing Equations\n\ngpb. 13.6a. key. If you find your instructional fair inc math math if8748 answers so overwhelming, you are able to take the INSTRUCTIONAL FAIR WORKSHEETS …\n\nInstructional Fair Inc Worksheet Answers If you find your instructional fair inc math math if8748 answers so Instructional Fair Inc Worksheet Answers Chemistry\n\nWord Equations Worksheet - Solutions 1) When dissolved beryllium chloride reacts with dissolved silver nitrate in water, aqueous beryllium nitrate and silver chloride\n\nWrite the word equations below Reviewed Worksheets Instructional Fair Inc Lesson Plans Redox Reactions Balancing Redox Equations Instructional Fair Get Instant Access to eBook Chemistry If8766 Worksheet Answers PDF at - Word Equations Worksheet Chemistry - Chemistry If8766 Instructional Fair Inc\n\nSolving Systems of Equations Introduction Instructional Activities When a fair skinned person visits “I Want a Tan Tanning Salon” they are advised to\n\n## System of 3 Equations Word Problem Examples",
null,
"Balancing Redox Reactions Worksheet Instructional Fair Inc. Worksheet: Word Equations Name Substitute symbols and formulas for words, then balance each equation. 1. sodium chloride + lead (II, Balancing Chemical Equations – Answer Key Balance the equations below: 1) 1 N2 + 3 H2.\n\n### Gram Formula Mass Worksheet\n\nInstructional Fair Inc Answers Math If8744 YouTube. Balancing Chemical Equations Worksheet Word Kidz Activities.\n\nHalf Life Of Radioactive Isotopes Worksheet Answers Instructional 108. ©Instructional Fair, inc. Elements Isotopes Worksheet Answers Instructional Fair This Order of Operations Worksheet will produce advanced problems for practicing order of operations calculations. You may introduce positive, negative, or both types\n\n16/09/2016 · Instructional Fair Inc Answers Math If8744 Ansel Zhigulyovsky. Loading... Unsubscribe from Ansel Zhigulyovsky? Cancel Unsubscribe. Working...\n\n... word equations PDF instructional fair inc Equations Worksheet -. Balancing Chemical Equations Worksheet One of the most useful devices for communicating Instructional Fair Inc Answers Chemistry Gram Formula Mass If8766. Molar Mass. 12525 Chemical Equations worksheet was designed for just learning about\n\nLe Chatelier Principle Worksheet Answers Instructional Fair Inc bdsm blandishment cosmology swollen inc curses radical facebook hairpin charity you\n\nChemistry IF8766 Page 27 Instructional Fair, Inc. Title: Microsoft Word http://www.scienceclassonline.com/atom/atomic_structure_worksheet Instructional Fair\n\nRally Coach: Pupil A solves the first problem Pupil B, watches, checks and listens, checks and coaches if necessary and praises Pupil B then solves the next problem\n\n### Balancing Chemical Equations AP Chemistry",
null,
"Gram Formula Mass Worksheet Instructional Fair.\n\nInstructional Fair Inc Math Answers WordPress.com.\n\n### Worksheet Word Equations Name",
null,
"System of 3 Equations Word Problem Examples.",
null,
"• System of 3 Equations Word Problem Examples\n• Physical Science If8767 Answer Key Word Equations\n\nbalance equations worksheet answers and worksheets chemistry word equations worksheet balancing equations worksheet answers instructional fair inc page 61\n\nPhysical Science If8767 Instructional Fair Inc the answer to the worksheet instructional fair inc Oct 28 2013. 23 physical science if8767 instructional fair\n\nInstructional Fair Inc Biology provides. worksheet instructional fair inc biology garlic.science/pdf/if8767-answer-key-page-word-equations.pdf weekly\n\nInstructional Fair Inc Math Answers What are the answers to worksheet instructional fair math if8772. Shop Staples® for Instructional Fair Math Books.\n\nQuiz & Worksheet - System of 3 Equations Word Problems Quiz; System of 3 Equations Word Problem Examples Related Study Materials. Instructional …\n\nWord Equations Worksheet Answers If8767 Save Chemistry. Balancing Equations Worksheet Answers Instructional Fair Inc Proga. Word Equations To Formula Worksheet New 49"
] | [
null,
"https://jewelryrepairfayetteville.com/pictures/word-equations-worksheet-instructional-fair-inc.jpg",
null,
"https://jewelryrepairfayetteville.com/pictures/eecf833cb90d40de7114780d5e65c282.gif",
null,
"https://jewelryrepairfayetteville.com/pictures/7edb9e95f4ca208a4e4a119490586825.jpg",
null,
"https://jewelryrepairfayetteville.com/pictures/ba5d0350e03bc89c568d5b3f331dde47.jpg",
null,
"https://jewelryrepairfayetteville.com/pictures/a5d22ea4f8d6bb2e2f695326b11a30b7.jpg",
null,
"https://jewelryrepairfayetteville.com/pictures/bf16dcfcf97cdee4176b3dfe007775d0.gif",
null,
"https://jewelryrepairfayetteville.com/pictures/word-equations-worksheet-instructional-fair-inc-2.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7784258,"math_prob":0.8210985,"size":22515,"snap":"2023-14-2023-23","text_gpt3_token_len":4509,"char_repetition_ratio":0.3246413,"word_repetition_ratio":0.7545245,"special_character_ratio":0.18463247,"punctuation_ratio":0.105339944,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.98676926,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T02:29:21Z\",\"WARC-Record-ID\":\"<urn:uuid:8883fa44-cb9e-44c2-a220-a1f75aa6bd47>\",\"Content-Length\":\"48541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09d43b91-077c-4b4d-b311-9fa95cc3aefd>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad500ca1-7ec5-4fe5-8c42-7f56c7dc6924>\",\"WARC-IP-Address\":\"88.119.175.185\",\"WARC-Target-URI\":\"https://jewelryrepairfayetteville.com/karte/word-equations-worksheet-instructional-fair-inc.php\",\"WARC-Payload-Digest\":\"sha1:XWENLK422GVRKSJVG34X6TJZW2RT2YQS\",\"WARC-Block-Digest\":\"sha1:HZ63IZAZUXSI333SX5HIUMT2DLGMZG6R\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949694.55_warc_CC-MAIN-20230401001704-20230401031704-00184.warc.gz\"}"} |
https://datapoly.org/tag/datascience/ | [
"# Handling missing values in a Dataset before training\n\nHow to impute missing values in a dataset before feeding it to a classifier is often a difficult decision. Imputing with a wrong value can significantly skew the data and result in a wrong classifier. The ideal solution is to get a clean data set without any NULL values, but then we might have to throw out most of the data. There are no perfect workarounds, as most classifiers are built from the information in the data, and lack thereof results in a wrong classifier. Continue reading “Handling missing values in a Dataset before training”\n\n# Creating ROC curve in R\n\nAlthough there are multiple packages which plot ROC curves, this one seems to be the most convenient.\n\n```library(caTools)\n# Predict on test: p\np <- predict(model, test, type = \"response\")\n# create ROC Curve\ncolAUC(p, test[[\"Class\"]], plotROC = T)\n```\n\n# Extracting top feature names, in order, for a trained scikit-learn classifier\n\nThis post describes how to extract the top feature names from a supervised learning classifier in sklearn.\n\nNote: The training dataset `X_train` and `y_train` are pandas DataFrames with column names.\n\nAfter fitting/training a classifier `clf`, the scoring for features can be accessed (the method varies depending on the classifier used).\n\n• For example, for logistic regression it is the magnitude of the coefficients and can be accessed as `clf.coef_`\n• For DecisionTree, it is `clf.feature_importances_`\n\nSort the scores in descending order using `np.argsort()` and pass the result as an index into the column names `X_train.columns`.\n\n```\n# For Decision Tree classifier\n\nfrom sklearn.tree import DecisionTreeClassifier\nimport numpy as np\n\nclf = DecisionTreeClassifier(random_state=42)\nclf.fit(X_train, y_train)\n\nimportances = clf.feature_importances_\n\n# printing top 5 features of fitted classifier\nprint(X_train.columns[np.argsort(importances)[::-1]][:5])\nOR\nprint(sorted(zip(X_train.columns, importances), key=lambda x: x[1])[::-1])\n```\n\n# Serialize python object to JSON\n\nThis is a wonderful article on how to serialize a python object into JSON"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8028276,"math_prob":0.76641333,"size":2226,"snap":"2019-26-2019-30","text_gpt3_token_len":469,"char_repetition_ratio":0.11971197,"word_repetition_ratio":0.0,"special_character_ratio":0.20934412,"punctuation_ratio":0.113065325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97226566,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T07:12:40Z\",\"WARC-Record-ID\":\"<urn:uuid:1a183a10-8798-457d-8863-4124c34bf35d>\",\"Content-Length\":\"53607\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22852976-e9a9-4672-be01-032d6cda9151>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4c2c023-a7ce-4b3b-9f48-583cccca0a85>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://datapoly.org/tag/datascience/\",\"WARC-Payload-Digest\":\"sha1:X57LYAT4LIFFYCAOXVAXUBNQQQTJOA5D\",\"WARC-Block-Digest\":\"sha1:G3PM3OQI2NZQ3R4TFAQ6V5NXETA647LW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525094.53_warc_CC-MAIN-20190717061451-20190717083451-00221.warc.gz\"}"} |
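The "Handling missing values" post above discusses imputation without showing code. Here is a minimal mean-imputation sketch in plain Python (my own illustration; a real pipeline would more likely use pandas `fillna` or scikit-learn's `SimpleImputer`):

```python
def impute_mean(rows):
    """Replace None entries with the mean of that column's observed values."""
    columns = list(zip(*rows))
    means = []
    for col in columns:
        observed = [v for v in col if v is not None]
        # Fall back to 0.0 for a column with no observed values.
        means.append(sum(observed) / len(observed) if observed else 0.0)
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

data = [[1.0, 4.0], [None, 6.0], [3.0, None]]
print(impute_mean(data))  # [[1.0, 4.0], [2.0, 6.0], [3.0, 5.0]]
```

As the post notes, mean imputation can still skew the data; it is only one of several imperfect options.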
https://isabelle.in.tum.de/repos/isabelle/rev/ddd4aefc540f?revcount=15 | [
"author immler Tue, 05 Nov 2019 10:02:09 -0500 changeset 71039 ddd4aefc540f parent 71038 bd3d4702b4f2 (current diff) parent 71033 c1b63124245c (diff) child 71041 fdb6c5034c24\nmerged\n```--- a/src/HOL/Analysis/Cauchy_Integral_Theorem.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Cauchy_Integral_Theorem.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -3,7 +3,11 @@\ntext\\<open>By John Harrison et al. Ported from HOL Light by L C Paulson (2015)\\<close>\n\ntheory Cauchy_Integral_Theorem\n-imports Complex_Transcendental Henstock_Kurzweil_Integration Weierstrass_Theorems Retracts\n+imports\n+ Complex_Transcendental\n+ Henstock_Kurzweil_Integration\n+ Weierstrass_Theorems\n+ Retracts\nbegin\n\nlemma leibniz_rule_holomorphic:\n@@ -4306,7 +4310,7 @@\nby (auto simp: open_dist)\nqed\n\n-subsection\\<open>Winding number is zero \"outside\" a curve, in various senses\\<close>\n+subsection\\<open>Winding number is zero \"outside\" a curve\\<close>\n\nproposition winding_number_zero_in_outside:\nassumes \\<gamma>: \"path \\<gamma>\" and loop: \"pathfinish \\<gamma> = pathstart \\<gamma>\" and z: \"z \\<in> outside (path_image \\<gamma>)\"\n@@ -6054,7 +6058,7 @@\nlemma has_complex_derivative_funpow_1:\n\"\\<lbrakk>(f has_field_derivative 1) (at z); f z = z\\<rbrakk> \\<Longrightarrow> (f^^n has_field_derivative 1) (at z)\"\napply (induction n, auto)\n- apply (metis DERIV_ident DERIV_transform_at id_apply zero_less_one)\n+ apply (simp add: id_def)\nby (metis DERIV_chain comp_funpow comp_id funpow_swap1 mult.right_neutral)\n\nlemma higher_deriv_uminus:\n@@ -6068,7 +6072,7 @@\nhave *: \"((deriv ^^ n) f has_field_derivative deriv ((deriv ^^ n) f) z) (at z)\"\nusing Suc.prems assms has_field_derivative_higher_deriv by auto\nhave \"((deriv ^^ n) (\\<lambda>w. - f w) has_field_derivative - deriv ((deriv ^^ n) f) z) (at z)\"\n- apply (rule DERIV_transform_within_open [of \"\\<lambda>w. 
-((deriv ^^ n) f w)\"])\n+ apply (rule has_field_derivative_transform_within_open [of \"\\<lambda>w. -((deriv ^^ n) f w)\"])\napply (rule derivative_eq_intros | rule * refl assms)+\napply (auto simp add: Suc)\ndone\n@@ -6090,7 +6094,7 @@\nusing Suc.prems assms has_field_derivative_higher_deriv by auto\nhave \"((deriv ^^ n) (\\<lambda>w. f w + g w) has_field_derivative\nderiv ((deriv ^^ n) f) z + deriv ((deriv ^^ n) g) z) (at z)\"\n- apply (rule DERIV_transform_within_open [of \"\\<lambda>w. (deriv ^^ n) f w + (deriv ^^ n) g w\"])\n+ apply (rule has_field_derivative_transform_within_open [of \"\\<lambda>w. (deriv ^^ n) f w + (deriv ^^ n) g w\"])\napply (rule derivative_eq_intros | rule * refl assms)+\napply (auto simp add: Suc)\ndone\n@@ -6133,7 +6137,7 @@\nhave \"((deriv ^^ n) (\\<lambda>w. f w * g w) has_field_derivative\n(\\<Sum>i = 0..Suc n. (Suc n choose i) * (deriv ^^ i) f z * (deriv ^^ (Suc n - i)) g z))\n(at z)\"\n- apply (rule DERIV_transform_within_open\n+ apply (rule has_field_derivative_transform_within_open\n[of \"\\<lambda>w. (\\<Sum>i = 0..n. of_nat (n choose i) * (deriv ^^ i) f w * (deriv ^^ (n - i)) g w)\"])\napply (simp add: algebra_simps)\napply (rule DERIV_cong [OF DERIV_sum])\n@@ -6847,7 +6851,7 @@\nusing summable_sums centre_in_ball \\<open>0 < d\\<close> \\<open>summable h\\<close> le_h\nby (metis (full_types) Int_iff gg' summable_def that)\nmoreover have \"((\\<lambda>x. \\<Sum>n. f n x) has_field_derivative g' z) (at z)\"\n- proof (rule DERIV_transform_at)\n+ proof (rule has_field_derivative_transform_within)\nshow \"\\<And>x. dist x z < r \\<Longrightarrow> g x = (\\<Sum>n. 
f n x)\"\nby (metis subsetD dist_commute gg' mem_ball r sums_unique)\nqed (use \\<open>0 < r\\<close> gg' \\<open>z \\<in> S\\<close> \\<open>0 < d\\<close> in auto)\n@@ -6975,7 +6979,7 @@\ndone\nqed\nhave \"(f has_field_derivative g' w) (at w)\"\n- by (rule DERIV_transform_at [where d=\"(r - norm(z - w))/2\"])\n+ by (rule has_field_derivative_transform_within [where d=\"(r - norm(z - w))/2\"])\n(use w gg' [of w] in \\<open>(force simp: dist_norm)+\\<close>)\nthen show ?thesis ..\nqed```\n```--- a/src/HOL/Analysis/Complex_Analysis_Basics.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Complex_Analysis_Basics.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -55,59 +55,9 @@\nshows \"uniformly_continuous_on s (\\<lambda>x. c * f x)\"\nby (metis assms bounded_linear.uniformly_continuous_on bounded_linear_mult_right)\n\n-lemma continuous_within_norm_id [continuous_intros]: \"continuous (at x within S) norm\"\n- by (rule continuous_norm [OF continuous_ident])\n-\nlemma continuous_on_norm_id [continuous_intros]: \"continuous_on S norm\"\nby (intro continuous_on_id continuous_on_norm)\n\n-lemma DERIV_zero_unique:\n- assumes \"convex S\"\n- and d0: \"\\<And>x. x\\<in>S \\<Longrightarrow> (f has_field_derivative 0) (at x within S)\"\n- and \"a \\<in> S\"\n- and \"x \\<in> S\"\n- shows \"f x = f a\"\n- by (rule has_derivative_zero_unique [OF assms(1) _ assms(4,3)])\n- (metis d0 has_field_derivative_imp_has_derivative lambda_zero)\n-\n-lemma DERIV_zero_connected_unique:\n- assumes \"connected S\"\n- and \"open S\"\n- and d0: \"\\<And>x. x\\<in>S \\<Longrightarrow> DERIV f x :> 0\"\n- and \"a \\<in> S\"\n- and \"x \\<in> S\"\n- shows \"f x = f a\"\n- by (rule has_derivative_zero_unique_connected [OF assms(2,1) _ assms(5,4)])\n- (metis has_field_derivative_def lambda_zero d0)\n-\n-lemma DERIV_transform_within:\n- assumes \"(f has_field_derivative f') (at a within S)\"\n- and \"0 < d\" \"a \\<in> S\"\n- and \"\\<And>x. 
x\\<in>S \\<Longrightarrow> dist x a < d \\<Longrightarrow> f x = g x\"\n- shows \"(g has_field_derivative f') (at a within S)\"\n- using assms unfolding has_field_derivative_def\n- by (blast intro: has_derivative_transform_within)\n-\n-lemma DERIV_transform_within_open:\n- assumes \"DERIV f a :> f'\"\n- and \"open S\" \"a \\<in> S\"\n- and \"\\<And>x. x\\<in>S \\<Longrightarrow> f x = g x\"\n- shows \"DERIV g a :> f'\"\n- using assms unfolding has_field_derivative_def\n-by (metis has_derivative_transform_within_open)\n-\n-lemma DERIV_transform_at:\n- assumes \"DERIV f a :> f'\"\n- and \"0 < d\"\n- and \"\\<And>x. dist x a < d \\<Longrightarrow> f x = g x\"\n- shows \"DERIV g a :> f'\"\n- by (blast intro: assms DERIV_transform_within)\n-\n-(*generalising DERIV_isconst_all, which requires type real (using the ordering)*)\n-lemma DERIV_zero_UNIV_unique:\n- \"(\\<And>x. DERIV f x :> 0) \\<Longrightarrow> f x = f a\"\n- by (metis DERIV_zero_unique UNIV_I convex_UNIV)\n-\nlemma\nshows open_halfspace_Re_lt: \"open {z. Re(z) < b}\"\nand open_halfspace_Re_gt: \"open {z. Re(z) > b}\"\n@@ -185,6 +135,43 @@\nassumes \"eventually (\\<lambda>x. norm(f x) \\<le> Re(g x)) F\" \"(g \\<longlongrightarrow> 0) F\" shows \"(f \\<longlongrightarrow> 0) F\"\nby (rule Lim_null_comparison[OF assms(1)] tendsto_eq_intros assms(2))+ simp\n\n+\n+lemma closed_segment_same_Re:\n+ assumes \"Re a = Re b\"\n+ shows \"closed_segment a b = {z. 
Re z = Re a \\<and> Im z \\<in> closed_segment (Im a) (Im b)}\"\n+proof safe\n+ fix z assume \"z \\<in> closed_segment a b\"\n+ then obtain u where u: \"u \\<in> {0..1}\" \"z = a + of_real u * (b - a)\"\n+ by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n+ from assms show \"Re z = Re a\" by (auto simp: u)\n+ from u(1) show \"Im z \\<in> closed_segment (Im a) (Im b)\"\n+ by (force simp: u closed_segment_def algebra_simps)\n+next\n+ fix z assume [simp]: \"Re z = Re a\" and \"Im z \\<in> closed_segment (Im a) (Im b)\"\n+ then obtain u where u: \"u \\<in> {0..1}\" \"Im z = Im a + of_real u * (Im b - Im a)\"\n+ by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n+ from u(1) show \"z \\<in> closed_segment a b\" using assms\n+ by (force simp: u closed_segment_def algebra_simps scaleR_conv_of_real complex_eq_iff)\n+qed\n+\n+lemma closed_segment_same_Im:\n+ assumes \"Im a = Im b\"\n+ shows \"closed_segment a b = {z. Im z = Im a \\<and> Re z \\<in> closed_segment (Re a) (Re b)}\"\n+proof safe\n+ fix z assume \"z \\<in> closed_segment a b\"\n+ then obtain u where u: \"u \\<in> {0..1}\" \"z = a + of_real u * (b - a)\"\n+ by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n+ from assms show \"Im z = Im a\" by (auto simp: u)\n+ from u(1) show \"Re z \\<in> closed_segment (Re a) (Re b)\"\n+ by (force simp: u closed_segment_def algebra_simps)\n+next\n+ fix z assume [simp]: \"Im z = Im a\" and \"Re z \\<in> closed_segment (Re a) (Re b)\"\n+ then obtain u where u: \"u \\<in> {0..1}\" \"Re z = Re a + of_real u * (Re b - Re a)\"\n+ by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n+ from u(1) show \"z \\<in> closed_segment a b\" using assms\n+ by (force simp: u closed_segment_def algebra_simps scaleR_conv_of_real complex_eq_iff)\n+qed\n+\nsubsection\\<open>Holomorphic functions\\<close>\n\ndefinition\\<^marker>\\<open>tag important\\<close> holomorphic_on :: \"[complex \\<Rightarrow> complex, complex set] 
\\<Rightarrow> bool\"\n@@ -418,7 +405,7 @@\n\\<Longrightarrow> deriv f z = deriv g z\"\nunfolding holomorphic_on_def\nby (rule DERIV_imp_deriv)\n- (metis DERIV_deriv_iff_field_differentiable DERIV_transform_within_open at_within_open)\n+ (metis DERIV_deriv_iff_field_differentiable has_field_derivative_transform_within_open at_within_open)\n\nlemma deriv_compose_linear:\n\"f field_differentiable at (c * z) \\<Longrightarrow> deriv (\\<lambda>w. f (c * w)) z = c * deriv f (c * z)\"\n@@ -431,7 +418,7 @@\nassumes df: \"DERIV f \\<xi> :> df\" and S: \"open S\" \"\\<xi> \\<in> S\" and \"df \\<noteq> 0\"\nshows \"\\<not> f constant_on S\"\nunfolding constant_on_def\n-by (metis \\<open>df \\<noteq> 0\\<close> DERIV_transform_within_open [OF df S] DERIV_const DERIV_unique)\n+by (metis \\<open>df \\<noteq> 0\\<close> has_field_derivative_transform_within_open [OF df S] DERIV_const DERIV_unique)\n\nlemma holomorphic_nonconstant:\nassumes holf: \"f holomorphic_on S\" and \"open S\" \"\\<xi> \\<in> S\" \"deriv f \\<xi> \\<noteq> 0\"```\n```--- a/src/HOL/Analysis/Complex_Transcendental.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Complex_Transcendental.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -2269,7 +2269,7 @@\ncase False\nshow ?thesis\nunfolding powr_def\n- proof (rule DERIV_transform_at)\n+ proof (rule has_field_derivative_transform_within)\nshow \"((\\<lambda>z. exp (s * Ln z)) has_field_derivative s * (if z = 0 then 0 else exp ((s - 1) * Ln z)))\n(at z)\"\napply (intro derivative_eq_intros | simp add: assms)+\n@@ -2761,7 +2761,7 @@\nwith z have \"((\\<lambda>z. exp (Ln z / 2)) has_field_derivative inverse (2 * csqrt z)) (at z)\"\nby (force intro: derivative_eq_intros * simp add: assms)\nthen show ?thesis\n- proof (rule DERIV_transform_at)\n+ proof (rule has_field_derivative_transform_within)\nshow \"\\<And>x. 
dist x z < cmod z \\<Longrightarrow> exp (Ln x / 2) = csqrt x\"\nby (metis csqrt_exp_Ln dist_0_norm less_irrefl)\nqed (use z in auto)```\n```--- a/src/HOL/Analysis/Great_Picard.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Great_Picard.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -1136,7 +1136,7 @@\napply (metis Suc_pred mult.commute power_Suc)\ndone\nthen show ?thesis\n- apply (rule DERIV_imp_deriv [OF DERIV_transform_within_open [where S = \"ball z0 r\"]])\n+ apply (rule DERIV_imp_deriv [OF has_field_derivative_transform_within_open [where S = \"ball z0 r\"]])\nusing that \\<open>m > 0\\<close> \\<open>0 < r\\<close>\napply (simp_all add: hnz geq)\ndone```\n```--- a/src/HOL/Analysis/Lindelof_Spaces.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Lindelof_Spaces.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -1,4 +1,4 @@\n-section\\<open>Lindelof spaces\\<close>\n+section\\<open>Lindel\\\"of spaces\\<close>\n\ntheory Lindelof_Spaces\nimports T1_Spaces```\n```--- a/src/HOL/Analysis/Retracts.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Retracts.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -1,10 +1,12 @@\nsection \\<open>Absolute Retracts, Absolute Neighbourhood Retracts and Euclidean Neighbourhood Retracts\\<close>\n\ntheory Retracts\n- imports Brouwer_Fixpoint Continuous_Extension Ordered_Euclidean_Space\n+imports\n+ Brouwer_Fixpoint\n+ Continuous_Extension\n+ Ordered_Euclidean_Space\nbegin\n\n-\ntext \\<open>Absolute retracts (AR), absolute neighbourhood retracts (ANR) and also Euclidean neighbourhood\nretracts (ENR). We define AR and ANR by specializing the standard definitions for a set to embedding\nin spaces of higher dimension.```\n```--- a/src/HOL/Analysis/Riemann_Mapping.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Riemann_Mapping.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -234,7 +234,7 @@\nhave False if \"\\<And>x. 
x \\<in> S \\<Longrightarrow> f x = c\" for c\nproof -\nhave \"deriv f 0 = 0\"\n- by (metis that \\<open>open S\\<close> \\<open>0 \\<in> S\\<close> DERIV_imp_deriv [OF DERIV_transform_within_open [OF DERIV_const]])\n+ by (metis that \\<open>open S\\<close> \\<open>0 \\<in> S\\<close> DERIV_imp_deriv [OF has_field_derivative_transform_within_open [OF DERIV_const]])\nwith no_df0 have \"l = 0\"\nby auto\nwith eql [OF _ idF] show False by auto\n@@ -420,7 +420,7 @@\nhave \"norm (deriv (k \\<circ> power2 \\<circ> q) 0) < 1\"\nusing that by simp\nmoreover have eq: \"deriv f 0 = deriv (k \\<circ> (\\<lambda>z. z ^ 2) \\<circ> q) 0 * deriv (p \\<circ> \\<psi> \\<circ> h \\<circ> f) 0\"\n- proof (intro DERIV_imp_deriv DERIV_transform_within_open [OF DERIV_chain])\n+ proof (intro DERIV_imp_deriv has_field_derivative_transform_within_open [OF DERIV_chain])\nshow \"(k \\<circ> power2 \\<circ> q has_field_derivative deriv (k \\<circ> power2 \\<circ> q) 0) (at ((p \\<circ> \\<psi> \\<circ> h \\<circ> f) 0))\"\nusing \"1\" holomorphic_derivI p0 by auto\nshow \"(p \\<circ> \\<psi> \\<circ> h \\<circ> f has_field_derivative deriv (p \\<circ> \\<psi> \\<circ> h \\<circ> f) 0) (at 0)\"```\n```--- a/src/HOL/Analysis/Topology_Euclidean_Space.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Topology_Euclidean_Space.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -32,7 +32,7 @@\nby (auto intro!: real_le_rsqrt)\nqed\n\n-subsection \\<open>Continuity of the representation WRT an orthogonal basis\\<close>\n+subsection\\<^marker>\\<open>tag unimportant\\<close> \\<open>Continuity of the representation WRT an orthogonal basis\\<close>\n\nlemma orthogonal_Basis: \"pairwise orthogonal Basis\"\nby (simp add: inner_not_same_Basis orthogonal_def pairwise_def)\n@@ -353,8 +353,8 @@\n\nsubsection \\<open>Boxes\\<close>\n\n-abbreviation One :: \"'a::euclidean_space\"\n- where \"One \\<equiv> \\<Sum>Basis\"\n+abbreviation\\<^marker>\\<open>tag important\\<close> One :: \"'a::euclidean_space\" 
where\n+\"One \\<equiv> \\<Sum>Basis\"\n\nlemma One_non_0: assumes \"One = (0::'a::euclidean_space)\" shows False\nproof -\n@@ -366,14 +366,14 @@\nwith independent_Basis show False by force\nqed\n\n-corollary One_neq_0[iff]: \"One \\<noteq> 0\"\n+corollary\\<^marker>\\<open>tag unimportant\\<close> One_neq_0[iff]: \"One \\<noteq> 0\"\nby (metis One_non_0)\n\n-corollary Zero_neq_One[iff]: \"0 \\<noteq> One\"\n+corollary\\<^marker>\\<open>tag unimportant\\<close> Zero_neq_One[iff]: \"0 \\<noteq> One\"\nby (metis One_non_0)\n\n-definition\\<^marker>\\<open>tag important\\<close> (in euclidean_space) eucl_less (infix \"<e\" 50)\n- where \"eucl_less a b \\<longleftrightarrow> (\\<forall>i\\<in>Basis. a \\<bullet> i < b \\<bullet> i)\"\n+definition\\<^marker>\\<open>tag important\\<close> (in euclidean_space) eucl_less (infix \"<e\" 50) where\n+\"eucl_less a b \\<longleftrightarrow> (\\<forall>i\\<in>Basis. a \\<bullet> i < b \\<bullet> i)\"\n\ndefinition\\<^marker>\\<open>tag important\\<close> box_eucl_less: \"box a b = {x. a <e x \\<and> x <e b}\"\ndefinition\\<^marker>\\<open>tag important\\<close> \"cbox a b = {x. \\<forall>i\\<in>Basis. a \\<bullet> i \\<le> x \\<bullet> i \\<and> x \\<bullet> i \\<le> b \\<bullet> i}\"\n@@ -811,6 +811,18 @@\n(Re a < Re b \\<and> Im a < Im b) \\<longrightarrow> Re a \\<ge> Re c \\<and> Im a \\<ge> Im c \\<and> Re b \\<le> Re d \\<and> Im b \\<le> Im d\"\nby (subst subset_box; force simp: Basis_complex_def)+\n\n+lemma in_cbox_complex_iff:\n+ \"x \\<in> cbox a b \\<longleftrightarrow> Re x \\<in> {Re a..Re b} \\<and> Im x \\<in> {Im a..Im b}\"\n+ by (cases x; cases a; cases b) (auto simp: cbox_Complex_eq)\n+\n+lemma box_Complex_eq:\n+ \"box (Complex a c) (Complex b d) = (\\<lambda>(x,y). 
Complex x y) ` (box a b \\<times> box c d)\"\n+ by (auto simp: box_def Basis_complex_def image_iff complex_eq_iff)\n+\n+lemma in_box_complex_iff:\n+ \"x \\<in> box a b \\<longleftrightarrow> Re x \\<in> {Re a<..<Re b} \\<and> Im x \\<in> {Im a<..<Im b}\"\n+ by (cases x; cases a; cases b) (auto simp: box_Complex_eq)\n+\nlemma Int_interval:\nfixes a :: \"'a::euclidean_space\"\nshows \"cbox a b \\<inter> cbox c d =```\n```--- a/src/HOL/Analysis/Winding_Numbers.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Analysis/Winding_Numbers.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -3,7 +3,10 @@\ntext\\<open>By John Harrison et al. Ported from HOL Light by L C Paulson (2017)\\<close>\n\ntheory Winding_Numbers\n-imports Polytope Jordan_Curve Riemann_Mapping\n+imports\n+ Polytope\n+ Jordan_Curve\n+ Riemann_Mapping\nbegin\n\nlemma simply_connected_inside_simple_path:\n@@ -785,60 +788,6 @@\n\nsubsection \\<open>Winding number for rectangular paths\\<close>\n\n-(* TODO: Move *)\n-lemma closed_segmentI:\n- \"u \\<in> {0..1} \\<Longrightarrow> z = (1 - u) *\\<^sub>R a + u *\\<^sub>R b \\<Longrightarrow> z \\<in> closed_segment a b\"\n- by (auto simp: closed_segment_def)\n-\n-lemma in_cbox_complex_iff:\n- \"x \\<in> cbox a b \\<longleftrightarrow> Re x \\<in> {Re a..Re b} \\<and> Im x \\<in> {Im a..Im b}\"\n- by (cases x; cases a; cases b) (auto simp: cbox_Complex_eq)\n-\n-lemma box_Complex_eq:\n- \"box (Complex a c) (Complex b d) = (\\<lambda>(x,y). Complex x y) ` (box a b \\<times> box c d)\"\n- by (auto simp: box_def Basis_complex_def image_iff complex_eq_iff)\n-\n-lemma in_box_complex_iff:\n- \"x \\<in> box a b \\<longleftrightarrow> Re x \\<in> {Re a<..<Re b} \\<and> Im x \\<in> {Im a<..<Im b}\"\n- by (cases x; cases a; cases b) (auto simp: box_Complex_eq)\n-(* END TODO *)\n-\n-lemma closed_segment_same_Re:\n- assumes \"Re a = Re b\"\n- shows \"closed_segment a b = {z. 
Re z = Re a \\<and> Im z \\<in> closed_segment (Im a) (Im b)}\"\n-proof safe\n- fix z assume \"z \\<in> closed_segment a b\"\n- then obtain u where u: \"u \\<in> {0..1}\" \"z = a + of_real u * (b - a)\"\n- by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n- from assms show \"Re z = Re a\" by (auto simp: u)\n- from u(1) show \"Im z \\<in> closed_segment (Im a) (Im b)\"\n- by (intro closed_segmentI[of u]) (auto simp: u algebra_simps)\n-next\n- fix z assume [simp]: \"Re z = Re a\" and \"Im z \\<in> closed_segment (Im a) (Im b)\"\n- then obtain u where u: \"u \\<in> {0..1}\" \"Im z = Im a + of_real u * (Im b - Im a)\"\n- by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n- from u(1) show \"z \\<in> closed_segment a b\" using assms\n- by (intro closed_segmentI[of u]) (auto simp: u algebra_simps scaleR_conv_of_real complex_eq_iff)\n-qed\n-\n-lemma closed_segment_same_Im:\n- assumes \"Im a = Im b\"\n- shows \"closed_segment a b = {z. Im z = Im a \\<and> Re z \\<in> closed_segment (Re a) (Re b)}\"\n-proof safe\n- fix z assume \"z \\<in> closed_segment a b\"\n- then obtain u where u: \"u \\<in> {0..1}\" \"z = a + of_real u * (b - a)\"\n- by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n- from assms show \"Im z = Im a\" by (auto simp: u)\n- from u(1) show \"Re z \\<in> closed_segment (Re a) (Re b)\"\n- by (intro closed_segmentI[of u]) (auto simp: u algebra_simps)\n-next\n- fix z assume [simp]: \"Im z = Im a\" and \"Re z \\<in> closed_segment (Re a) (Re b)\"\n- then obtain u where u: \"u \\<in> {0..1}\" \"Re z = Re a + of_real u * (Re b - Re a)\"\n- by (auto simp: closed_segment_def scaleR_conv_of_real algebra_simps)\n- from u(1) show \"z \\<in> closed_segment a b\" using assms\n- by (intro closed_segmentI[of u]) (auto simp: u algebra_simps scaleR_conv_of_real complex_eq_iff)\n-qed\n-\ndefinition\\<^marker>\\<open>tag important\\<close> rectpath where\n\"rectpath a1 a3 = (let a2 = Complex (Re a3) (Im a1); a4 = 
Complex (Re a1) (Im a3)\nin linepath a1 a2 +++ linepath a2 a3 +++ linepath a3 a4 +++ linepath a4 a1)\"```\n```--- a/src/HOL/Deriv.thy\tMon Nov 04 21:41:55 2019 -0500\n+++ b/src/HOL/Deriv.thy\tTue Nov 05 10:02:09 2019 -0500\n@@ -278,6 +278,23 @@\nshow ?thesis .\nqed\n\n+lemma has_field_derivative_transform_within:\n+ assumes \"(f has_field_derivative f') (at a within S)\"\n+ and \"0 < d\"\n+ and \"a \\<in> S\"\n+ and \"\\<And>x. \\<lbrakk>x \\<in> S; dist x a < d\\<rbrakk> \\<Longrightarrow> f x = g x\"\n+ shows \"(g has_field_derivative f') (at a within S)\"\n+ using assms unfolding has_field_derivative_def\n+ by (metis has_derivative_transform_within)\n+\n+lemma has_field_derivative_transform_within_open:\n+ assumes \"(f has_field_derivative f') (at a)\"\n+ and \"open S\" \"a \\<in> S\"\n+ and \"\\<And>x. x \\<in> S \\<Longrightarrow> f x = g x\"\n+ shows \"(g has_field_derivative f') (at a)\"\n+ using assms unfolding has_field_derivative_def\n+ by (metis has_derivative_transform_within_open)\n+\n\nsubsection \\<open>Continuity\\<close>\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54843944,"math_prob":0.95533776,"size":19960,"snap":"2021-21-2021-25","text_gpt3_token_len":6505,"char_repetition_ratio":0.15689518,"word_repetition_ratio":0.39721847,"special_character_ratio":0.36793587,"punctuation_ratio":0.10966374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988895,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T02:28:26Z\",\"WARC-Record-ID\":\"<urn:uuid:b8bd461b-cabc-414d-a3f7-ed94cb46e424>\",\"Content-Length\":\"54650\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6270672a-9ee9-4571-af0e-559521d671a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:121e8326-cf42-42ce-9707-44182243a23a>\",\"WARC-IP-Address\":\"131.159.46.82\",\"WARC-Target-URI\":\"https://isabelle.in.tum.de/repos/isabelle/rev/ddd4aefc540f?revcount=15\",\"WARC-Payload-Digest\":\"sha1:WIPMRJ3WQDQ4TZL52K3MN5HUJJAFWSYZ\",\"WARC-Block-Digest\":\"sha1:JU5N3VCSCAU3M5O5VNFZBBJ3NU2XTKT7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488550571.96_warc_CC-MAIN-20210624015641-20210624045641-00200.warc.gz\"}"} |
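The diff above renames DERIV_transform_within to has_field_derivative_transform_within. In conventional notation, the lemma being moved says that a function agreeing with f near a inherits f's derivative there (my paraphrase of the Isabelle statement, not part of the changeset):

```latex
\[
\begin{aligned}
& f'(a) = c \ \text{within } S, \qquad d > 0, \qquad a \in S, \\
& \bigl(\forall x \in S.\ \lvert x - a \rvert < d \implies f(x) = g(x)\bigr) \\
& \Longrightarrow\ g'(a) = c \ \text{within } S.
\end{aligned}
\]
```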
https://codereview.stackexchange.com/questions/219753/checking-against-a-specific-sequence-in-python | [
"# Checking against a specific sequence in python\n\nI'm trying to learn python and I'm modeling a simple dice game that a friend and I invented.\n\nOn any one turn of the game, you must roll a specific combination to stay in the game.\n\nThe particular rule I'm trying to model is this:\n\nGiven a previous roll (where the dice are contiguous) make another contiguous roll either above the previous roll or before it. 6 and 1 are considered contiguous. Outside of being contiguous the order of the dice does not matter...\n\nsome examples\n\nexisting roll (3, 4) valid subsequent rolls: (1, 2) (5, 6)\n\nexisting roll (5, 6) valid subsequent rolls: (1, 2) (3, 4)\n\nexisting roll (6, 1) valid subsequent rolls: (2, 3) (4, 5)\n\nI have written the below python3 code to deal with this aspect of the game. However, being new to python, I'd like to know how experienced programmers would deal with this problem. Is there a smarter way to do this programmatically without all the sorting and comparisons and having to make the two arbitrary evaluations? All comments are welcome, readability, efficiency etc. 
Thanks for your time.\n\narguments:\n\nprev = previous/ existing roll\n\ncurr = current roll to check\n\nsides = number of sides of the dice\n\nProgram:\n\nfrom itertools import cycle, islice\n\ndef chkStk(prev, curr, sides):\n    def srt(a):\n        return sorted(a, reverse=True if (min(a), max(a)) == (1, sides) else False)\n\n    def cmp(a):\n        return a == list(x for x in islice(cycle(range(1, sides+1)), a[0]-1, a[0]+len(a)-1))\n\n    curr = srt(curr)\n    prev = srt(prev)\n\n    return cmp(prev+curr) or cmp(curr+prev)\n\n# examples\nprint(chkStk([3, 4], [1, 2], 6))\nprint(chkStk([3, 4], [5, 6], 6))\n\nprint(chkStk([5, 6], [1, 2], 6))\nprint(chkStk([5, 6], [3, 4], 6))\n\nprint(chkStk([6, 1], [2, 3], 6))\nprint(chkStk([6, 1], [4, 5], 6))\n\n# counter examples\nprint(chkStk([6, 1], [6, 1], 6))\nprint(chkStk([1, 1], [1, 1], 6))\nprint(chkStk([1, 2], [4, 5], 6))\n\n\noutput:\n\nTrue\nTrue\nTrue\nTrue\nTrue\nTrue\nFalse\nFalse\nFalse\n[Finished in 0.1s]\n\n• You should look at the modulo operation, which uses the % binary operator in most C-inspired languages. You can test for (a - b) % 6 == 1 to check contiguity, and abs(a - b) % 6 == 2 to test for adjacency. – Austin Hastings May 5 '19 at 16:25\n\n# Specification\n\nI'm having trouble understanding the problem. I think you roll two dice at once, and the results must be distinct and adjacent to each other and to the previous roll.\n\nYour question includes some explanation before the program:\n\nI'm modeling a simple dice game that a friend and I invented.\n\nOn any one turn of the game, you must roll a specific combination to stay in the game.\n\nThe particular rule I'm trying to model is this:\n\nGiven a previous roll (where the dice are contiguous) make another contiguous roll either above the previous roll or before it. 6 and 1 are considered contiguous. Outside of being contiguous the order of the dice does not matter...\n\nThis should be part of the program! 
It could be a docstring at the beginning of the file.\n\narguments:\n\nprev = previous/ existing roll\n\ncurr = current roll to check\n\nsides = number of sides of the dice\n\nThese should also be part of the program! They should be part of chkStk's docstring, or better yet, included in the argument names. prev and curr could be called previous_roll and current_roll.\n\nsrt returns its argument in order, unless it contains both bounds, in which case it's reversed. This is surprising, so it requires an explanation in a comment or docstring.\n\n## Names\n\nAll three function names are inscrutably short.\n\n• srt sorts its argument (which should be a two-element list) in a cyclic order. So it could be called cyclic_order.\n• srt's argument a is a 2-die roll (i.e. a pair), so it should be called pair or roll.\n• cmp checks whether its argument is a contiguous ascending sequence (in the same cyclic order). So it could be called contiguous or is_contiguous or is_ascending or even in_order.\n• cmp's argument a is a list of (1-die) rolls, so it should be called rolls.\n• chkStk checks whether curr is a valid roll after prev, so it should be called something like valid_roll or is_valid_roll or \n\n(It's confusing to have roll mean a pair of 1-die rolls, so maybe the whole program should switch to something consistent, such as \"roll\" for one die and \"pair\" for two dice.)\n\n# Small simplifications\n\nTrue if boolean_expression else False can be simplified to just boolean_expression.\n\n[debatable] (min(a), max(a)) == (1, sides) is short, but most people are accustomed to reading min(a) == 1 and max(a) == sides.\n\nEven better: since 1 and sides are the minimum and maximum values possible, you can skip the min and max and just check whether the values are present: 1 in a and sides in a.\n\nlist(x for x in foo) can be simplified to just list(foo).\n\n# Simpler ways\n\nIn cmp, instead of building a sequence in cyclic order and comparing to it, it might be simpler to check 
that each successive pair is in cyclic order.\n\nThere are easier ways to solve this problem.\n\nIf my restatement of the problem above (\"the results must be adjacent to each other and to the previous roll\") is correct, it can be easily turned into a program. You can simply check each part (possibly in a helper function):\n\n• whether two dice of a pair are adjacent to each other\n• whether two pairs are adjacent to each other\n\nYou can do both of these without any list operations.\n\n# Tests\n\nInstead of printing out results for you to check, your test cases can check them for you! The simplest way to do this is with assert:\n\nassert chkStk([5, 6], [1, 2], 6)\nassert not chkStk([5, 6], [1, 4], 6)\nassert not chkStk([6, 1], [2, 1], 6), 'overlapping pairs should fail'\n\n\nThis won't print anything unless the test fails. You can include an optional error message.\n\n• Thanks, this is a great answer. – mAndroid May 6 '19 at 10:22\n\nHere is how you might do it.\n\ndef tuples_adjacent(a, b, modulus) -> bool:\n    def sequential(x, y):\n        return (y-x) % modulus == 1\n    assert sequential(*a) and sequential(*b)\n    return sequential(a[1], b[0]) or sequential(b[1], a[0])\n\n\nThis will raise an AssertionError on tuples_adjacent((1,1), (1,1), 6) because the tuples do not meet the precondition of being consecutive pairs. I'm not sure if that is exactly what you want without seeing the surrounding program. You can decide if you actually just want to return False if that precondition is not met.\n\nThe other commenter mentioned abs(a-b)%6==2 for checking adjacency, but this is incorrect and fails for the case a=5, b=1. You instead have to do (a-b)%m in {2, m-2}. In general, absolute value and modulus do not play well with each other."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7966614,"math_prob":0.8937124,"size":2093,"snap":"2020-10-2020-16","text_gpt3_token_len":670,"char_repetition_ratio":0.13068454,"word_repetition_ratio":0.017142856,"special_character_ratio":0.33492595,"punctuation_ratio":0.18297872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95546824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-16T20:43:49Z\",\"WARC-Record-ID\":\"<urn:uuid:266be2a7-be8c-4294-84b4-50c10860017d>\",\"Content-Length\":\"153025\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a9c4172-744d-4ba5-b16e-0908615b94ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:001f87b6-aa33-420a-9eb3-43185cbcf95e>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/219753/checking-against-a-specific-sequence-in-python\",\"WARC-Payload-Digest\":\"sha1:HWN2K5VJKD7HQ3V4232J6LUIJ4S4WLVR\",\"WARC-Block-Digest\":\"sha1:M6OEI4VUR2T4R4T5HYVFPBDBWBYZDPIO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875141396.22_warc_CC-MAIN-20200216182139-20200216212139-00304.warc.gz\"}"} |
https://slideum.com/doc/44537/scientific-method-and-measurement-final-review | [
"#### Transcript Scientific Method and Measurement Final Review\n\n```Topic: Final Exam Review\nAim: Let’s review Scientific Method and Measurement\nDo Now: Fill in Measurement Units Chart\nHW: Cells and Cell Transport Review Sheet\n\nMeasurement                    Units\nLength                         mm, cm, m, km\nMass                           g\nVolume of a regular solid      cm3\nVolume of a liquid             mL or L\nDensity of a solid             g/cm3\nDensity of a liquid            g/mL\n\n[ruler diagram: 0 cm to 4 cm]\nWhat is the length of the object in centimeters? Millimeters?\n2.8cm\n28mm\nMass = 137.4 gm\nMass = 25.4 gm\n6cm x 4cm x 3cm = 72cm3\nA student set up the experiment shown below to\ndetermine whether seeds take in oxygen as they\ngerminate. Methylene blue is a chemical that is\nblue when oxygen is present, but is colorless\nwhen oxygen is not present. Containers A and B\neach contained 200 mL of water and 10 drops of\nmethylene blue. Container A also contained\nseeds.\nA student examined a rock sample and\ndescribed it as having particles of various\ncolors that were 1 mm to 12 mm in size.\nThe student was making\n(1) an inference\n(2) a prediction\n(3) a hypothesis\n(4) an observation\nThe purpose of container B in this experiment is to\n(1) serve as the control container\n(2) serve as the experimental container\n(3) show that seeds do not give off oxygen\n(4) show that seeds do not give off carbon dioxide\nWhich statement is an inference?\n(1) A thermometer shows that the air temperature\nis 56°F.\n(2) A mineral sample of galena produced a\ngray-black streak when tested.\n(3) Based on previous data, ten hurricanes may\noccur in the year 2013.\n(4) A weather vane indicates the wind is coming\nfrom the west.\nAll of the liquid from a test tube is poured into a\nbeaker, as shown in the diagram below.\nCompared to the liquid that was in the test tube,\nthe liquid in the beaker has\n(1) a different volume, but the same shape\n(2) a different volume and a different shape\n(3) the same volume, but a different shape\n(4) the same volume and the same shape\nThe diagram below shows water in a graduated\ncylinder.\nA
student states that the graduated cylinder\ncontains 150 mL of water. This statement is\n(1) a prediction\n(3) a theory\n(2) an observation\n(4) a hypothesis\nThe diagram below shows 20 grams of two different\nmaterials, A and B, on a laboratory balance.\nCompared to material A, material B has a different\n(1) density\n(3) phase\n(2) mass\n(4) shape\nThe diagram below shows a triple-beam balance.\nWhat is the maximum mass, in grams, that could be\nmeasured by this balance?\n(1) 110\n(3) 610\n(2) 500\n(4) 1510\nThe diagram below shows a rock\nsuspended above an overflow\ncontainer filled with water up\nto the overflow spout. A\nnext to the container to collect\nwater that comes out of an\noverflow spout.\nWhich property of the rock can\nbe directly determined when the\nrock is placed in the overflow\ncontainer?\n(1) mass\n(3) volume\n(2) density\n(4) hardness\nA student wanted to study the amount of mold\ngrowing on pizza at different temperatures. In the\nexperiment, the student set up four identical pans of\npizza. Each pan contained the same amount of pizza.\nThe temperatures and light conditions are shown in\nthe data table below.\nOne error made in setting up the experiment was that\nthe four pans of pizza\n(1) were at different temperatures\n(3) were different sizes\n(4) received different amounts of light\nThe results of one experiment carried\nout by a research team would be\nconsidered valid if\n1. the experiment had no control\nsetup\n2. all the members of the research\nteam came to the same conclusion\n3. the experiment had more than one\nvariable\n4. the experiment was repeated and\nthe same results were obtained\neach time\nWhy do scientists make graphs of data from\nexperiments?\n1. to observe general trends and patterns in\nthe data\n2. to make the observed data more accurate\n3. to prevent errors in measuring data\n4. to help change the original data tables\nA new idea that is tested in a\nscientific experiment is known as a(an)\n1. theory\n2. 
hypothesis\n3. inference\n4. observation\nWhich statement about the use of independent\nvariables in controlled experiments is correct?\n1. A different independent variable must be used\neach time an experiment is repeated.\n2. The independent variables must involve time.\n3. Only one independent variable is used for each\nexperiment.\n4. The independent variables state the problem\nbeing tested.\nWhy do scientists consider any hypothesis valuable?\n1. A hypothesis requires no further investigation.\n2. A hypothesis may lead to further investigation\neven if it is disproved by the experiment.\n3. A hypothesis requires no further investigation if it\nis proved by the experiment.\n4. A hypothesis can be used to explain a conclusion\neven if it is disproved by the experiment.\n```"
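The measurement arithmetic used in the review can be double-checked in a few lines of Python; note the mass/volume pairing in the density line is chosen purely for illustration, not taken from the slides:

```python
# cm -> mm conversion from the ruler question
length_cm = 2.8
assert abs(length_cm * 10 - 28) < 1e-9

# volume of a regular solid: length x width x height
volume_cm3 = 6 * 4 * 3
assert volume_cm3 == 72

# density = mass / volume, in g/cm3 (pairing is illustrative only)
mass_g = 137.4
print(round(mass_g / volume_cm3, 2))
```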
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8882683,"math_prob":0.8851436,"size":4833,"snap":"2022-27-2022-33","text_gpt3_token_len":1222,"char_repetition_ratio":0.12072893,"word_repetition_ratio":0.028235294,"special_character_ratio":0.2331885,"punctuation_ratio":0.08395324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9720794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T19:57:10Z\",\"WARC-Record-ID\":\"<urn:uuid:87967e8d-e025-4133-8fd0-0adaab109407>\",\"Content-Length\":\"21370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e34d9d92-aee6-4d11-9b9e-fded346a8beb>\",\"WARC-Concurrent-To\":\"<urn:uuid:87a07582-cfeb-48ce-9af7-143a3e709f64>\",\"WARC-IP-Address\":\"172.67.189.245\",\"WARC-Target-URI\":\"https://slideum.com/doc/44537/scientific-method-and-measurement-final-review\",\"WARC-Payload-Digest\":\"sha1:E5KMBKA4QYRQYJVNDLHNUDYO35PY2K7G\",\"WARC-Block-Digest\":\"sha1:APOBBVNGHJF4Y3KCPB3G2W7LF5MHUWPR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570692.22_warc_CC-MAIN-20220807181008-20220807211008-00430.warc.gz\"}"} |
https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/transform | [
"# transform\n\nThe transform attribute defines a list of transform definitions that are applied to an element and the element's children.\n\n<svg viewBox=\"-40 0 150 100\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n<g fill=\"grey\"\ntransform=\"rotate(-10 50 100)\ntranslate(-36 45.5)\nskewX(40)\nscale(1 0.5)\">\n<path id=\"heart\" d=\"M 10,30 A 20,20 0,0,1 50,30 A 20,20 0,0,1 90,30 Q 90,60 50,90 Q 10,60 10,30 z\" />\n</g>\n\n</svg>\n\n\nNote: As of SVG2, transform is a presentation attribute, meaning it can be used as a CSS property. However, be aware that there are some differences in syntax between the CSS property and the attribute. See the documentation for the CSS property transform for the specific syntax to use in that case.\n\nAs a presentation attribute, transform can be used by any element (in SVG 1.1, only these 16 elements were allowed to use it: <a>, <circle>, <clipPath>, <defs>, <ellipse>, <foreignObject>, <g>, <image>, <line>, <path>, <polygon>, <polyline>, <rect>, <switch>, <text>, and <use>).\n\nAlso, as a legacy from SVG 1.1, <linearGradient> and <radialGradient> support the gradientTransform attribute, and <pattern> supports the patternTransform attribute, both of which act exactly like the transform attribute.\n\n| Value | Default value | Animatable |\n| --- | --- | --- |\n| <transform-list> | none | Yes |\n\n## Transform functions\n\nThe following transform functions can be used by the transform attribute <transform-list>.\n\nWarning: As per the spec you should be able to also use CSS transform functions, however, the compatibility isn't guaranteed.\n\n### Matrix\n\nThe matrix(<a> <b> <c> <d> <e> <f>) transform function specifies a transformation in the form of a transformation matrix of six values. 
matrix(a,b,c,d,e,f) is equivalent to applying the transformation matrix $\begin{pmatrix} a & c & e \\ b & d & f \\ 0 & 0 & 1 \end{pmatrix}$, which maps coordinates from a previous coordinate system into a new coordinate system by the following matrix equalities: $\begin{pmatrix} x_{\mathrm{newCoordSys}} \\ y_{\mathrm{newCoordSys}} \\ 1 \end{pmatrix} = \begin{pmatrix} a & c & e \\ b & d & f \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{\mathrm{prevCoordSys}} \\ y_{\mathrm{prevCoordSys}} \\ 1 \end{pmatrix} = \begin{pmatrix} a x_{\mathrm{prevCoordSys}} + c y_{\mathrm{prevCoordSys}} + e \\ b x_{\mathrm{prevCoordSys}} + d y_{\mathrm{prevCoordSys}} + f \\ 1 \end{pmatrix}$\n\n#### Example\n\n<svg viewBox=\"0 0 200 200\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect x=\"10\" y=\"10\" width=\"30\" height=\"20\" fill=\"green\" />\n\n<!--\nIn the following example we are applying the matrix:\n[a c e] [3 -1 30]\n[b d f] => [1 3 40]\n[0 0 1] [0 0 1]\n\nwhich transforms the rectangle as follows:\n\ntop left corner: oldX=10 oldY=10\nnewX = a * oldX + c * oldY + e = 3 * 10 - 1 * 10 + 30 = 50\nnewY = b * oldX + d * oldY + f = 1 * 10 + 3 * 10 + 40 = 80\n\ntop right corner: oldX=40 oldY=10\nnewX = a * oldX + c * oldY + e = 3 * 40 - 1 * 10 + 30 = 140\nnewY = b * oldX + d * oldY + f = 1 * 40 + 3 * 10 + 40 = 110\n\nbottom left corner: oldX=10 oldY=30\nnewX = a * oldX + c * oldY + e = 3 * 10 - 1 * 30 + 30 = 30\nnewY = b * oldX + d * oldY + f = 1 * 10 + 3 * 30 + 40 = 140\n\nbottom right corner: oldX=40 oldY=30\nnewX = a * oldX + c * oldY + e = 3 * 40 - 1 * 30 + 30 = 120\nnewY = b * oldX + d * oldY + f = 1 * 40 + 3 * 30 + 40 = 170\n-->\n<rect x=\"10\" y=\"10\" width=\"30\" height=\"20\" fill=\"red\"\ntransform=\"matrix(3 1 -1 3 30 40)\" />\n</svg>\n\n### Translate\n\nThe translate(<x> [<y>]) transform function moves 
the object by x and y (i.e. xnew = xold + <x>, ynew = yold + <y>). If y is not provided, it is assumed to be zero.\n\n#### Example\n\n<svg viewBox=\"0 0 100 100\" xmlns=\"http://www.w3.org/2000/svg\">\n<!-- No translation -->\n<rect x=\"5\" y=\"5\" width=\"40\" height=\"40\" fill=\"green\" />\n\n<!-- Horizontal translation -->\n<rect x=\"5\" y=\"5\" width=\"40\" height=\"40\" fill=\"blue\"\ntransform=\"translate(50)\" />\n\n<!-- Vertical translation -->\n<rect x=\"5\" y=\"5\" width=\"40\" height=\"40\" fill=\"red\"\ntransform=\"translate(0 50)\" />\n\n<!-- Both horizontal and vertical translation -->\n<rect x=\"5\" y=\"5\" width=\"40\" height=\"40\" fill=\"yellow\"\ntransform=\"translate(50,50)\" />\n</svg>\n\n### Scale\n\nThe scale(<x> [<y>]) transform function specifies a scale operation by x and y. If y is not provided, it is assumed to be equal to x.\n\n#### Example\n\n<svg viewBox=\"-50 -50 100 100\" xmlns=\"http://www.w3.org/2000/svg\">\n<!-- uniform scale -->\n<circle cx=\"0\" cy=\"0\" r=\"10\" fill=\"red\"\ntransform=\"scale(4)\" />\n\n<!-- vertical scale -->\n<circle cx=\"0\" cy=\"0\" r=\"10\" fill=\"yellow\"\ntransform=\"scale(1,4)\" />\n\n<!-- horizontal scale -->\n<circle cx=\"0\" cy=\"0\" r=\"10\" fill=\"pink\"\ntransform=\"scale(4,1)\" />\n\n<!-- No scale -->\n<circle cx=\"0\" cy=\"0\" r=\"10\" fill=\"black\" />\n</svg>\n\n### Rotate\n\nThe rotate(<a> [<x> <y>]) transform function specifies a rotation by a degrees about a given point. If optional parameters x and y are not supplied, the rotation is about the origin of the current user coordinate system. 
If optional parameters x and y are supplied, the rotation is about the point (x, y).\n\n#### Example\n\n<svg viewBox=\"-12 -2 34 14\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect x=\"0\" y=\"0\" width=\"10\" height=\"10\" />\n\n<!-- rotation is done around the point 0,0 -->\n<rect x=\"0\" y=\"0\" width=\"10\" height=\"10\" fill=\"red\"\ntransform=\"rotate(100)\" />\n\n<!-- rotation is done around the point 10,10 -->\n<rect x=\"0\" y=\"0\" width=\"10\" height=\"10\" fill=\"green\"\ntransform=\"rotate(100,10,10)\" />\n</svg>\n\n### SkewX\n\nThe skewX(<a>) transform function specifies a skew transformation along the x axis by a degrees.\n\n#### Example\n\n<svg viewBox=\"-5 -5 10 10\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect x=\"-3\" y=\"-3\" width=\"6\" height=\"6\" />\n\n<rect x=\"-3\" y=\"-3\" width=\"6\" height=\"6\" fill=\"red\"\ntransform=\"skewX(30)\" />\n</svg>\n\n### SkewY\n\nThe skewY(<a>) transform function specifies a skew transformation along the y axis by a degrees.\n\n#### Example\n\n<svg viewBox=\"-5 -5 10 10\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect x=\"-3\" y=\"-3\" width=\"6\" height=\"6\" />\n\n<rect x=\"-3\" y=\"-3\" width=\"6\" height=\"6\" fill=\"red\"\ntransform=\"skewY(30)\" />\n</svg>\n\n## Specification\n\n| Specification | Status | Comment |\n| --- | --- | --- |\n| CSS Transforms Level 2 ('transform') | Editor's Draft | |\n| CSS Transforms Level 1 ('transform') | Working Draft | |\n| Scalable Vector Graphics (SVG) 2 ('transform') | Candidate Recommendation | |\n| Scalable Vector Graphics (SVG) 1.1 (Second Edition) ('transform') | Recommendation | Initial definition |"
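The corner arithmetic worked out in the matrix example's comments can be reproduced in a few lines of Python (an illustration of the mapping, not part of the MDN page):

```python
def apply_matrix(a, b, c, d, e, f, x, y):
    # SVG matrix(a b c d e f) applied to the point (x, y)
    return (a * x + c * y + e, b * x + d * y + f)

# matrix(3 1 -1 3 30 40) applied to the corners of the example rect
corners = [(10, 10), (40, 10), (10, 30), (40, 30)]
print([apply_matrix(3, 1, -1, 3, 30, 40, x, y) for x, y in corners])
# -> [(50, 80), (140, 110), (30, 140), (120, 170)]
```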
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51622385,"math_prob":0.99334913,"size":6199,"snap":"2019-51-2020-05","text_gpt3_token_len":1983,"char_repetition_ratio":0.14931396,"word_repetition_ratio":0.2768305,"special_character_ratio":0.3937732,"punctuation_ratio":0.12612613,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980874,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T14:02:56Z\",\"WARC-Record-ID\":\"<urn:uuid:189d33da-36b7-4729-8771-91decdb1072f>\",\"Content-Length\":\"139076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d1b7ecf6-8882-4c9d-b81f-967109e57e3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe3bdb1d-00a4-431d-88e8-dda6eb081985>\",\"WARC-IP-Address\":\"99.84.181.64\",\"WARC-Target-URI\":\"https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/transform\",\"WARC-Payload-Digest\":\"sha1:HQQZ6YEDKRAX2Z7ZALQ2Y6C5H4ZLP63A\",\"WARC-Block-Digest\":\"sha1:6VQONQYKGLWTII4SA7HJXHAD2UVKVCAX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540518882.71_warc_CC-MAIN-20191209121316-20191209145316-00121.warc.gz\"}"} |
http://cleamc11.vub.ac.be/QUASIS.html | [
"",
null,
"# Quasispecies\n\nQuasispecies is a model of the evolution of informational sequences [1,2]. The evolved population is a set {S_k} of n sequences, k = 1, ..., n. Each sequence is a string of N symbols S_i^k, i = 1, ..., N. The symbols are taken from an alphabet containing l letters. For example, we can consider a two-letter alphabet (l = 2; S_i^k = 1, -1 or S_i^k = G, C) or a four-letter alphabet (l = 4; S_i^k = G, C, A, U). The sequence length N and the population size n are assumed to be large: N, n >> 1.\n\nSequences are the model \"organisms\"; they have certain (nonnegative) selective values f_k = f(S_k). We assume here that there is a master sequence S_m having the maximal selective value. The selective value of any sequence depends only on the Hamming distance (the number of differing symbols at corresponding positions) between a given S and the master sequence S_m: f(S) = f(r(S, S_m)); the smaller the distance r, the greater the selective value f. For simplicity we assume here that the values f are not greater than 1.\n\nThe evolution process consists of successive generations. The new generation {S_k(t+1)} is obtained from the old one {S_k(t)} by selection and mutations of the sequences S_k(t); here t is the generation number. The model evolution process can be described formally in the following computer-program-like manner.\n\nStep 0. (Formation of an initial population {S_k(0)}.) For every k = 1, ..., n and every i = 1, ..., N, set the symbol S_i^k to a randomly chosen symbol from the given alphabet.\n\nStep 1. (Selection.)\n  Substep 1.1. (Selection of a particular sequence.) Choose a sequence number k* at random, and select the sequence S_{k*}(t) (without removing it from the old population) into the new population {S_k(t+1)} with probability f_{k*} = f(S_{k*}(t)).\n  Substep 1.2. (Iteration of the sequence selection; control of the population size.) Repeat substep 1.1 until the number of sequences in the new population reaches the value n.\n\nStep 2. 
(Mutations.) For every k = 1, ..., n and every i = 1, ..., N, change the symbol S_i^k(t+1) with probability P to an arbitrary other symbol of the alphabet.\n\nStep 3. (Organization of the iterative evolution.) Repeat steps 1 and 2 for t = 0, 1, 2, ...\n\nThe evolution character depends strongly on the population size n. If n is very large (n >> l^N), the numbers of all sequences in a population are large and the evolution can be considered a deterministic process. In this case the population dynamics can be described in terms of ordinary differential equations and analyzed by well-known methods. The main results of such an analysis [1-4] are the following conclusions: 1) the evolution process always converges, and 2) the final population is a quasispecies, that is, a distribution of sequences in the neighborhood of the master sequence S_m.\n\nIn the opposite case (l^N >> n), the evolution process is essentially stochastic, and computer simulations as well as reasonable quantitative estimations can be used to characterize the main evolution features [1,2,5]. At large sequence length N (N > 50) we have just this case for any real population size.\n\nThe main evolution features and the estimations in the stochastic case for a two-letter alphabet (l = 2; S_i^k = 1, -1) are described in the child node Estimation of the evolution rate. It is shown that the total number of generations T needed to converge to a quasispecies at sufficiently large selection intensity can be estimated by the value\n T ~ (N/2)(PN)^(-1), (1)\n\nwhere P is the mutation intensity. This estimation implies a sufficiently large population size\n n > T, (2)\n\nat which the effect of neutral selection can be neglected (see Estimation of the evolution rate, Neutral evolution game for details).\n\nIt is interesting to estimate how effective an evolutionary search algorithm can be. 
Namely, what is the minimal value of the total number of participants n_total = nT needed to find the master sequence in the evolution process? According to (1) and (2), to minimize n_total we should maximize the mutation intensity P. But at large P, already found \"good\" sequences could be lost. The \"optimal\" mutation intensity P ~ N^(-1) corresponds approximately to one mutation in any sequence per generation. Consequently, we can conclude that an \"optimal\" evolution process should involve of the order of\n n_total = nT ~ N^2 (3)\n\nparticipants to find the master sequence.\n\nThis value can be compared with the participant number in deterministic and purely random methods of search. A simple deterministic (sequential) method of search (for the considered Hamming-distance-type selective value and two-letter alphabet, S_i = 1, -1) can be constructed as follows: 1) start with an arbitrary sequence S; 2) try to change its symbols one after another, S_1 --> -S_1, S_2 --> -S_2, ..., fixing only those symbol changes that increase the sequence selective value. The total number of sequences that should be tested in order to find the master sequence S_m in such a manner is equal to N: n_total = N. In a pure random search, to find S_m we need to inspect of the order of 2^N sequences: n_total ~ 2^N.\n\nSo, we have the following estimations:\n\n Deterministic search: n_total = N\n Evolutionary search:  n_total ~ N^2\n Random search:        n_total ~ 2^N\n\nThus, for these simple assumptions (Hamming-distance-type selective value and two-letter alphabet), the evolutionary method of search is essentially more effective than the random one, but somewhat worse than the deterministic search.\n\nThe Hamming-distance-type model implies that there is a unique maximum of the selective value. This is a strong restriction. 
Using the spin-glass concept (see Spin-glass model of evolution), it is possible to construct a similar model of informational sequence evolution for the case of a very large number of local maxima of the selective value. The evolution rate, the restriction on population size, and the total number of evolution participants in that model can also be roughly estimated by formulas (1) - (3). But unlike the Hamming-distance model, the spin-glass-type evolution converges to one of the local selective value maxima, which depends on the particular evolution realization.\n\nConclusion. Quasispecies describes quantitatively a simple informational sequence evolution in terms of sequence length, population size, and mutation and selection intensities. This model can be used to characterize roughly the hypothetical prebiotic polynucleotide sequence evolution and to illustrate mathematically general features of biological evolution.\n\nReferences:\n\n1. M. Eigen. Naturwissenschaften. 1971. Vol. 58. P. 465.\n\n2. M. Eigen, P. Schuster. \"The hypercycle: A principle of natural self-organization\". Springer-Verlag: Berlin etc. 1979.\n\n3. C.J. Thompson, J.L. McBride. Math. Biosci. 1974. Vol. 21. P. 127.\n\n4. B.L. Jones, R.H. Enns, S.S. Kangnekar. Bull. Math. Biol. 1976. Vol. 38. N. 1. P. 15.\n\n5. V.G. Red'ko. Biofizika. 1986. Vol. 31. N. 3. P. 511. V.G. Red'ko. Biofizika. 1990. Vol. 35. N. 5. P. 831 (in Russian).\n\n6. M. Kimura. \"The neutral theory of molecular evolution\". Cambridge University Press. 1983.",
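The generation loop described in Steps 0-3 can be sketched in Python. The Hamming-distance-type selective value below is one illustrative choice, and all names are ours rather than the article's:

```python
import random

def evolve(n, N, f, P, generations):
    # n: population size, N: sequence length, f: selective value (0 < f <= 1),
    # P: per-symbol mutation probability; two-letter alphabet {1, -1}
    # Step 0: random initial population
    pop = [[random.choice((1, -1)) for _ in range(N)] for _ in range(n)]
    for _ in range(generations):
        # Step 1: select with probability f(S), sampling with replacement
        new_pop = []
        while len(new_pop) < n:
            s = random.choice(pop)
            if random.random() < f(s):
                new_pop.append(list(s))
        # Step 2: mutate each symbol with probability P
        for s in new_pop:
            for i in range(N):
                if random.random() < P:
                    s[i] = -s[i]
        pop = new_pop
    return pop

# Hamming-distance-type selective value toward a master sequence of all 1s
N = 12
master = [1] * N
f = lambda s: 0.05 + 0.95 * sum(x == y for x, y in zip(s, master)) / N
final = evolve(n=50, N=N, f=f, P=1 / N, generations=100)
```

With P of order 1/N (roughly one mutation per sequence per generation), the population drifts toward a cloud around the master sequence, in line with estimate (3).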
null,
""
] | [
null,
"http://cleamc11.vub.ac.be/Images/header.jpg",
null,
"http://cleamc11.vub.ac.be/Images/up.gif",
null,
"http://cleamc11.vub.ac.be/Images/up.gif",
null,
"http://cleamc11.vub.ac.be/Images/up.gif",
null,
"http://cleamc11.vub.ac.be/Images/up.gif",
null,
"http://cleamc11.vub.ac.be/Images/4arrows.gif",
null,
"http://cleamc11.vub.ac.be/Images/space.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8163694,"math_prob":0.99242336,"size":7073,"snap":"2022-27-2022-33","text_gpt3_token_len":1825,"char_repetition_ratio":0.15009195,"word_repetition_ratio":0.025231287,"special_character_ratio":0.25166124,"punctuation_ratio":0.18245614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908469,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T05:59:53Z\",\"WARC-Record-ID\":\"<urn:uuid:0fcad7fe-3925-40c1-9193-cc3e3c98bade>\",\"Content-Length\":\"15924\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e7085e4-61db-4519-ae83-868729596670>\",\"WARC-Concurrent-To\":\"<urn:uuid:497359d6-b88b-4041-8025-c5c051ecf7db>\",\"WARC-IP-Address\":\"134.184.0.88\",\"WARC-Target-URI\":\"http://cleamc11.vub.ac.be/QUASIS.html\",\"WARC-Payload-Digest\":\"sha1:RZJU7DJHG6QNAVAYEUVXRDERQDS7AY52\",\"WARC-Block-Digest\":\"sha1:NOM66MX7CETSGJUEP36A7SOOVL7MS2UQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103624904.34_warc_CC-MAIN-20220629054527-20220629084527-00716.warc.gz\"}"} |
https://scholars.duke.edu/display/pub795308 | [
"Deriving efficient graph algorithms\n\nPublished\n\nJournal Article\n\nTwo case studies are presented that demonstrate the systematic derivation of efficient algorithms from simple combinatorial definitions. These case studies contribute to an exploration of evolutionary approaches to the explanation, proof, adaptation, and possibly the design of complex algorithms. The algorithms derived are the linear-time depth-first-search algorithms developed by Tarjan and Hopcroft for strong connectivity and biconnectivity. These algorithms are generally considered by students to be complex and difficult to understand. The problems they solve, however, have simple combinatorial definitions that can themselves be considered inefficient algorithms. The derivations employ systematic program manipulation techniques combined with appropriate domain-specific knowledge. The derivation approach offers evolutionary explanations of the algorithms that make explicit the respective roles of programming knowledge (embodied as program manipulation techniques) and domain-specific knowledge (embodied as graph-theoretic lemmas). Because the steps are rigorous and can potentially be formalized, the explanations are also proofs of correctness. We consider the merits of this approach to proof as compared with the usual a posteriori proofs. These case studies also illustrate how significant algorithmic derivations can be accomplished with a relatively small set of core program manipulation techniques. © Springer-Verlag Berlin Heidelberg 2003.\n\nCited Authors\n\n• Reif, JH; Scherlis, WL\n\nPublished Date\n\n• December 1, 2004\n\nVolume / Issue\n\n• 2772 /\n\nStart / End Page\n\n• 645 - 681\n\nElectronic ISSN\n\n• 1611-3349\n\nISSN\n\n• 0302-9743\n\nCitation Source\n\n• Scopus",
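For readers who want the destination rather than the derivation, here is a compact Python sketch of Tarjan's linear-time strongly-connected-components algorithm (an ordinary textbook-style implementation, not the derived version from the paper):

```python
def tarjan_scc(graph):
    # graph: dict mapping a vertex to an iterable of its successors
    index, low, stack, on_stack, sccs = {}, {}, [], set(), []
    counter = 0

    def visit(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:        # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

print(sorted(sorted(c) for c in tarjan_scc({1: [2], 2: [3], 3: [1], 4: [3]})))
# -> [[1, 2, 3], [4]]
```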
null,
""
] | [
null,
"https://scholars.duke.edu/themes/duke/images/duke-footer-logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8775575,"math_prob":0.7811652,"size":1963,"snap":"2019-43-2019-47","text_gpt3_token_len":370,"char_repetition_ratio":0.1107708,"word_repetition_ratio":0.0,"special_character_ratio":0.17931737,"punctuation_ratio":0.07446808,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9543712,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T13:18:13Z\",\"WARC-Record-ID\":\"<urn:uuid:4ffbb434-5711-4159-871d-465673f9bf1a>\",\"Content-Length\":\"11384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97b25c67-a003-4909-a1e1-bc74f48120bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:80c550f5-3708-459e-8421-232c412385e5>\",\"WARC-IP-Address\":\"152.3.72.205\",\"WARC-Target-URI\":\"https://scholars.duke.edu/display/pub795308\",\"WARC-Payload-Digest\":\"sha1:67F3ASV3AETAVZKJFTU2KJ3HD7NXUC4E\",\"WARC-Block-Digest\":\"sha1:N2PVSOZH23DW557BBEOSSDPXQJFOYB3Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986653247.25_warc_CC-MAIN-20191014124230-20191014151730-00397.warc.gz\"}"} |
https://math.au.dk/aktuelt/aktiviteter/event/item/6008/ | [
"# Institut for Matematik",
null,
"# Classification Results for Expanding and Shrinking Gradient Kähler-Ricci Solitons\n\nRonan Conlon (Florida International University)\nSeminar\nThursday, 27 June 2019, at 10:30-11:30, in Aud. D3 (1531-215)\nDescription:\nA complete Kähler metric $$g$$ on a Kähler manifold $$M$$ is a \"gradient Kähler-Ricci soliton\" if there exists a smooth real-valued function $$f : M \to \mathbb{R}$$ with $$\nabla f$$ holomorphic such that $$Ric(g)-Hess(f)+\lambda g=0$$ for $$\lambda$$ a real number. I will present some classification results for such manifolds. This is joint work with Alix Deruelle (Université Paris-Sud) and Song Sun (UC Berkeley).\nOrganised by: QGM\nContact person: Cristiano Spotti & Martin de Borbon"
] | [
null,
"https://www.aucdn.dk/2016/assets/img/au_segl-inv.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.68005913,"math_prob":0.90820223,"size":632,"snap":"2020-24-2020-29","text_gpt3_token_len":194,"char_repetition_ratio":0.087579615,"word_repetition_ratio":0.0,"special_character_ratio":0.2800633,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99008065,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-25T07:33:53Z\",\"WARC-Record-ID\":\"<urn:uuid:51845845-fb37-4fa6-8ab0-8f39bb9fc51c>\",\"Content-Length\":\"21866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b598070-0fec-4099-9451-3fa5fc323dc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a350c00-5159-4e87-82a8-f79631e92fd3>\",\"WARC-IP-Address\":\"185.45.20.48\",\"WARC-Target-URI\":\"https://math.au.dk/aktuelt/aktiviteter/event/item/6008/\",\"WARC-Payload-Digest\":\"sha1:P5CLVDILYUV37HBJOSNZVTX7PVTVCM7A\",\"WARC-Block-Digest\":\"sha1:UGH6UM3R4DGGUAEJ6I3IW5OFHICM7ABH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347388012.14_warc_CC-MAIN-20200525063708-20200525093708-00072.warc.gz\"}"} |
https://physics.stackexchange.com/questions/344720/does-newtonian-f-ma-imply-the-least-action-principle-in-mechanics | [
"# Does Newtonian $F=ma$ imply the least action principle in mechanics?\n\nI've learned that Newtonian mechanics and Lagrangian mechanics are equivalent, and Newtonian mechanics can be deduced from the least action principle.\n\nCould the least action principle $\min\int L(t,q,q')dt$ in mechanics be deduced from Newtonian $F=ma$?\n\nSorry if the question sounds beginnerish.\n\nNewton's second law implies the Least Action Principle under two assumptions:\n\n1. Virtual Work Principle holds.\n2. There are no dissipative forces.\n\nWhen applied to a system of particles, Newton's second law can be written as $$\sum_i(\vec F_i-\dot{\vec p}_i)=0.$$ Decomposing each force into a constraint force $\vec F_i^c$ and an applied force $\vec F_i^a$, and projecting onto the virtual displacements $\delta\vec r_i$, we obtain $$\sum_i(\vec F_i^a+\vec F_i^c-\dot{\vec p}_i)\cdot\delta\vec r_i=0,$$ where $\delta \vec r_i$ is the virtual displacement of particle $i$. The Principle of Virtual Work says that the total work of constraint forces is zero along virtual displacements, $$\sum_i\vec F_i^c\cdot\delta\vec r_i=0.\tag1$$ The last two equations can be combined into the d'Alembert Principle, $$\sum_i(\vec F_i^a-\dot{\vec p}_i)\cdot\delta\vec r_i=0,\tag2$$ which gives the equations of motion of the system. The last step is to integrate Eqn. (2) from time $t_1$ to $t_2$, $$\int_{t_1}^{t_2}\sum_i(\vec F_i^a-\dot{\vec p}_i)\cdot\delta\vec r_idt=0$$ and assume that the applied forces are derived from a potential (no dissipative forces). The first term gives rise to a potential energy whereas the second leads to a kinetic energy. Moreover, the variation $\delta$ commutes with the integral and the above equation can be written as $$\delta\int_{t_1}^{t_2}(T-V)dt=0,$$ which is the Least Action Principle.\n\n• But the Principle of Least Action and the Principle of Virtual Work are equivalent. Your answer simply redirects the question to \"how does Newton's law imply the Principle of Virtual Work\"? 
Unless I am missing something huge, this isn't very useful.\n– Max\nJul 21, 2020 at 12:42\n• @Max Least Action and Virtual Work are not equivalent. The latter holds even for dissipative forces whereas the former does not. I don't know any proof relating Newton's law and Virtual Work. Both, Newton's law and Virtual Work together, lead to Least Action. Jul 23, 2020 at 0:17\n\nYou also need an expression for the Lagrangian, which in classical mechanics is $$L = T - U$$\n\nwhere $T$ is the kinetic energy and $U$ is the potential energy.\n\nProvided that you can associate a potential $U$ to the force $\\vec{F}$ such that $\\vec{F} = - \\vec{\\nabla} U$ (such a force is said to be conservative), the principle of least action and Newton's second law are equivalent.\n\nThe demonstration for a single particle in 1D ($T = m v_x^2 /2$, $F = -dU(x)/dx$) is actually a good exercise.\n\n• Here are some hints: First rewrite Newton's 2nd law in terms of the Lagrangian notation: $F = - dU/dq$ , $a = d\\dot{q}/dt$ Jul 8, 2017 at 23:45"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8856531,"math_prob":0.99937147,"size":294,"snap":"2023-14-2023-23","text_gpt3_token_len":66,"char_repetition_ratio":0.1724138,"word_repetition_ratio":0.0,"special_character_ratio":0.1904762,"punctuation_ratio":0.09615385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T00:27:31Z\",\"WARC-Record-ID\":\"<urn:uuid:6bd25c5c-fb1d-4be5-b271-5016eee5da85>\",\"Content-Length\":\"171016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4705bc1-0c4b-4c35-9ca5-a34602b7775d>\",\"WARC-Concurrent-To\":\"<urn:uuid:a351c648-806b-466a-908d-b4bde0e00da1>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/344720/does-newtonian-f-ma-imply-the-least-action-principle-in-mechanics\",\"WARC-Payload-Digest\":\"sha1:DOMVPG5KJM562KVG2WZHLILQG6PGMWZT\",\"WARC-Block-Digest\":\"sha1:WHB7TZFGMRJUEVCE2EJCRCGGTQMZ7PNF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655244.74_warc_CC-MAIN-20230609000217-20230609030217-00324.warc.gz\"}"} |
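The 1D demonstration suggested in the answers above can be checked symbolically. Below is a minimal sketch using Python's sympy; the harmonic potential U = k·x²/2 is my own choice of illustrative example (not from the thread), and `euler_equations` is sympy's built-in Euler–Lagrange helper:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

# Lagrangian of a 1D harmonic oscillator: L = T - U, with U = k*x^2/2
L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2 - sp.Rational(1, 2) * k * x(t)**2

# The stationary-action condition gives the Euler-Lagrange equation,
# which for this L should reproduce Newton's second law m*x'' = -k*x.
eom = euler_equations(L, [x(t)], [t])[0]
print(eom)  # equivalent to m*x''(t) + k*x(t) = 0
```

Running this confirms, for a conservative force, that the equation extremizing the action is exactly the Newtonian equation of motion.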
https://www.nuclear-power.net/nuclear-engineering/thermodynamics/thermodynamic-properties/what-is-pressure-physics/pressure-scales-pressure-units/ | [
"# Pressure Scales – Pressure Units\n\n## Pascal – Unit of Pressure\n\nAs was discussed, the SI unit of pressure and stress is the pascal.\n\n• 1 pascal = 1 N/m² = 1 kg/(m·s²)\n\nThe pascal is defined as one newton per square metre. However, for most engineering problems it is a fairly small unit, so it is convenient to work with multiples of the pascal: the kPa, the bar, and the MPa.\n\n• 1 MPa = 10⁶ N/m²\n• 1 bar = 10⁵ N/m²\n• 1 kPa = 10³ N/m²\n\nThe unit of measurement called standard atmosphere (atm) is defined as:\n\n• 1 atm = 101.33 kPa\n\nThe standard atmosphere approximates the average pressure at sea level at the latitude 45° N. Note that there is a difference between the standard atmosphere (atm) and the technical atmosphere (at).\n\nA technical atmosphere is a non-SI unit of pressure equal to one kilogram-force per square centimeter.\n\n• 1 at = 98.07 kPa",
null,
"## Pounds per square inch – psi\n\nThe standard unit in the English system is the pound-force per square inch (psi). It is the pressure resulting from a force of one pound-force applied to an area of one square inch.\n\n• 1 psi = 1 lbf/in² = 4.45 N / (0.0254 m)² ≈ 6895 N/m²\n\nTherefore, one pound per square inch is approximately 6895 Pa.\n\nThe unit of measurement called standard atmosphere (atm) is defined as:\n\n• 1 atm = 14.7 psi\n\nThe standard atmosphere approximates the average pressure at sea level at the latitude 45° N. Note that there is a difference between the standard atmosphere (atm) and the technical atmosphere (at).\n\nA technical atmosphere is a non-SI unit of pressure equal to one kilogram-force per square centimeter.\n\n• 1 at = 14.2 psi\n\n## Bar – Unit of Pressure\n\nThe bar is a metric unit of pressure, but it is not part of the International System of Units (SI). The bar is commonly used in industry and in meteorology; an instrument used in meteorology to measure atmospheric pressure is called a barometer.\n\nOne bar is exactly equal to 100 000 Pa, and is slightly less than the average atmospheric pressure on Earth at sea level (1 bar = 0.9869 atm). Atmospheric pressure is often given in millibars, where standard sea-level pressure is defined as 1013 mbar, 1.013 bar, or 101.3 kPa.\n\nSometimes, “Bar(a)” and “bara” are used to indicate absolute pressures, and “bar(g)” and “barg” gauge pressures.\n\n## Typical Pressures in Engineering – Examples\n\nThe pascal (Pa) as a unit of pressure measurement is widely used throughout the world and has largely replaced the pounds per square inch (psi) unit, except in some countries that still use the Imperial measurement system, including the United States. For most engineering problems the pascal (Pa) is a fairly small unit, so it is convenient to work with multiples of the pascal: the kPa, the MPa, or the bar.
The following list summarizes a few examples:\n\n• Typically, most nuclear power plants operate multi-stage condensing steam turbines. These turbines exhaust steam at a pressure well below atmospheric (e.g. at 0.08 bar, 8 kPa, or 1.16 psia) and in a partially condensed state. In relative units this is a negative gauge pressure of about –0.92 bar, –92 kPa, or –13.54 psig.\n• The Standard Atmospheric Pressure approximates the average pressure at sea level at the latitude 45° N. The Standard Atmospheric Pressure is defined at sea level at 273.15 K (0 °C) and is:\n• 101325 Pa\n• 1.01325 bar\n• 14.696 psi\n• 760 mmHg\n• 760 torr\n• Car tire overpressure is about 2.5 bar, 0.25 MPa, or 36 psig.\n• Steam locomotive fire tube boiler: 150–250 psig\n• A high-pressure stage of a condensing steam turbine at a nuclear power plant operates at steady state with inlet conditions of 6 MPa (60 bar, or 870 psig), t = 275.6 °C, x = 1.\n• A boiling water reactor is cooled and moderated by water like a PWR, but at a lower pressure (e.g. 7 MPa, 70 bar, or 1015 psig), which allows the water to boil inside the pressure vessel, producing the steam that runs the turbines.\n• Pressurized water reactors are cooled and moderated by high-pressure liquid water (e.g. 16 MPa, 160 bar, or 2320 psig). At this pressure water boils at approximately 350 °C (662 °F), which provides a subcooling margin of about 25 °C.\n• The supercritical water reactor (SCWR) is operated at supercritical pressure. The term supercritical in this context refers to the thermodynamic critical point of water (T_cr = 374 °C; p_cr = 22.1 MPa).\n• Common rail direct fuel injection: on diesel engines, it features a high-pressure (over 1000 bar, 100 MPa, or 14500 psi) fuel rail.\n\nReferences:\nReactor Physics and Thermal Hydraulics:\n1. J. R. Lamarsh, Introduction to Nuclear Reactor Theory, 2nd ed., Addison-Wesley, Reading, MA (1983).\n2. J. R. Lamarsh, A. J. Baratta, Introduction to Nuclear Engineering, 3rd ed., Prentice-Hall, 2001, ISBN: 0-201-82498-1.\n3. W. M. Stacey, Nuclear Reactor Physics, John Wiley & Sons, 2001, ISBN: 0-471-39127-1.\n4. Glasstone, Sesonske. Nuclear Reactor Engineering: Reactor Systems Engineering, Springer; 4th edition, 1994, ISBN: 978-0412985317.\n5. Todreas Neil E., Kazimi Mujid S. Nuclear Systems Volume I: Thermal Hydraulic Fundamentals, Second Edition. CRC Press; 2nd edition, 2012, ISBN: 978-0415802871.\n6. Zohuri B., McDaniel P. Thermodynamics in Nuclear Power Plant Systems. Springer; 2015, ISBN: 978-3-319-13419-2.\n7. Moran Michael J., Shapiro Howard N. Fundamentals of Engineering Thermodynamics, Fifth Edition, John Wiley & Sons, 2006, ISBN: 978-0-470-03037-0.\n8. Kleinstreuer C. Modern Fluid Dynamics. Springer, 2010, ISBN: 978-1-4020-8670-0.\n9. U.S. Department of Energy, THERMODYNAMICS, HEAT TRANSFER, AND FLUID FLOW. DOE Fundamentals Handbook, Volume 1, 2 and 3. June 1992."
] | [
null,
"https://www.nuclear-power.net/wp-content/uploads/2016/12/Pressure-Units-pascal-bar-psi-atmosphere.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8588283,"math_prob":0.9457163,"size":4411,"snap":"2020-24-2020-29","text_gpt3_token_len":1150,"char_repetition_ratio":0.14091219,"word_repetition_ratio":0.19329897,"special_character_ratio":0.27431422,"punctuation_ratio":0.11085973,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9655075,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T21:20:34Z\",\"WARC-Record-ID\":\"<urn:uuid:e5808f26-9ae9-4a96-aebd-0db0e6e86860>\",\"Content-Length\":\"443957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf0f510f-9b6b-4ea1-8057-e6d6e4a30ba3>\",\"WARC-Concurrent-To\":\"<urn:uuid:05e53ef3-aa49-408b-af2d-b3ef375aa2a9>\",\"WARC-IP-Address\":\"104.24.124.229\",\"WARC-Target-URI\":\"https://www.nuclear-power.net/nuclear-engineering/thermodynamics/thermodynamic-properties/what-is-pressure-physics/pressure-scales-pressure-units/\",\"WARC-Payload-Digest\":\"sha1:B4P7DYE5WCB5WDB7LXQZYC3JH537U7W2\",\"WARC-Block-Digest\":\"sha1:XQJFZHFNB2KBSJRCR2QEM4AJXYXQT7Z6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347391309.4_warc_CC-MAIN-20200526191453-20200526221453-00533.warc.gz\"}"} |
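The unit relations above are easy to get wrong by a factor, so it helps to encode them once as conversion factors to pascals. A minimal sketch (the constants follow the definitions quoted in the article: 1 atm = 101325 Pa, 1 at = 1 kgf/cm², 1 psi ≈ 6895 Pa):

```python
# Conversion factors to pascals, taken from the definitions above.
PA_PER = {
    "Pa":  1.0,
    "kPa": 1.0e3,
    "bar": 1.0e5,
    "MPa": 1.0e6,
    "atm": 101_325.0,   # standard atmosphere
    "at":  98_066.5,    # technical atmosphere, 1 kgf/cm^2
    "psi": 6_894.757,   # pound-force per square inch
}

def convert(value, frm, to):
    """Convert a pressure reading between the units listed above."""
    return value * PA_PER[frm] / PA_PER[to]

print(round(convert(1, "atm", "psi"), 2))   # ~14.7, as in the text
print(convert(16, "MPa", "bar"))            # PWR coolant: 160.0 bar
print(round(convert(1, "bar", "atm"), 4))   # ~0.9869
```

This also makes the reactor examples quick to cross-check (e.g. 16 MPa is indeed 160 bar).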
https://yihui.org/formatr/ | [
"# 1. Installation\n\nYou can install formatR from CRAN, or XRAN if you want to test the latest development version:\n\n``````install.packages(\"formatR\", repos = \"http://cran.rstudio.com\")\n#' to install the development version, run\n#' install.packages('formatR', repos = 'https://xran.yihui.org')\n``````\n\nOr check out the Github repository and install from source if you know what this means. This page is always based on the development version.\n\n``````library(formatR)\nsessionInfo()\n``````\n``````## R version 4.0.2 (2020-06-22)\n## Platform: x86_64-apple-darwin17.0 (64-bit)\n## Running under: macOS Catalina 10.15.6\n##\n## Matrix products: default\n## BLAS: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib\n## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib\n##\n## locale:\n## en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8\n##\n## attached base packages:\n## stats graphics grDevices utils datasets methods\n## base\n##\n## other attached packages:\n## formatR_1.7\n##\n## loaded via a namespace (and not attached):\n## compiler_4.0.2 magrittr_1.5 tools_4.0.2 stringi_1.5.3\n## knitr_1.29.5 stringr_1.4.0 xfun_0.17.1 evaluate_0.14\n``````\n\n# 2. Reformat R code\n\nThe formatR package was designed to reformat R code to improve readability; the main workhorse is the function `tidy_source()`. Features include:\n\n• long lines of code and comments are reorganized into appropriately shorter ones\n• spaces and indent are added where necessary\n• comments are preserved in most cases\n• the number of spaces to indent the code (i.e. tab width) can be specified (default is 4)\n• an `else` statement in a separate line without the leading `}` will be moved one line back\n• `=` as an assignment operator can be replaced with `<-`\n• the left brace `{` can be moved to a new line\n\nBelow is an example of what `tidy_source()` can do. 
The source code is:\n\n``````## comments are retained;\n# a comment block will be reflowed if it contains long comments;\n#' roxygen comments will not be wrapped in any case\n1+1\n\nif(TRUE){\nx=1 # inline comments\n}else{\nx=2;print('Oh no... ask the right bracket to go away!')}\n1*3 # one space before this comment will become two!\n2+2+2 # only 'single quotes' are allowed in comments\n\nlm(y~x1+x2, data=data.frame(y=rnorm(100),x1=rnorm(100),x2=rnorm(100))) ### a linear model\n1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1 # comment after a long line\n## here is a long long long long long long long long long long long long long comment that may be wrapped\n``````\n\nWe can copy the above code to the clipboard, and type `tidy_source(width.cutoff = 50)` to get:\n\n``````## comments are retained; a comment block will be\n## reflowed if it contains long comments;\n#' roxygen comments will not be wrapped in any case\n1 + 1\n\nif (TRUE) {\nx = 1 # inline comments\n} else {\nx = 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n1 * 3 # one space before this comment will become two!\n2 + 2 + 2 # only 'single quotes' are allowed in comments\n\nlm(y ~ x1 + x2, data = data.frame(y = rnorm(100), x1 = rnorm(100),\nx2 = rnorm(100))) ### a linear model\n1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +\n1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 # comment after a long line\n## here is a long long long long long long long long\n## long long long long long comment that may be\n## wrapped\n``````\n\nTwo applications of `tidy_source()`:\n\n• `tidy_dir()` can reformat all R scripts under a directory\n• `usage()` can reformat the usage of a function, e.g.
compare `usage()` with the default output of `args()`:\n\n``````library(formatR)\nusage(glm, width = 40) # can set arbitrary width here\n## glm(formula, family = gaussian, data,\n## weights, subset, na.action,\n## start = NULL, etastart, mustart,\n## offset, control = list(...),\n## model = TRUE, method = \"glm.fit\",\n## x = FALSE, y = TRUE,\n## singular.ok = TRUE,\n## contrasts = NULL, ...)\nargs(glm)\n## function (formula, family = gaussian, data, weights, subset,\n## na.action, start = NULL, etastart, mustart, offset, control = list(...),\n## model = TRUE, method = \"glm.fit\", x = FALSE, y = TRUE, singular.ok = TRUE,\n## contrasts = NULL, ...)\n## NULL\n``````\n\n# 3. The Graphical User Interface\n\nIf the shiny package has been installed, the function `tidy_app()` can launch a Shiny app to reformat R code like this (live demo):\n\n``````formatR::tidy_app()\n``````",
null,
"After hitting the `Tidy` button:",
null,
"It is often a pain to copy R code that has been run in R, because the prompt characters (usually `>`) are attached at the beginning of each line and all the prompts `>` and `+` have to be removed manually before the code can be run. However, it is convenient for the reader if the output of the code can be attached. This motivates the function `tidy_eval()`, which uses `tidy_source()` to reformat the source code, evaluates the code in chunks, and attaches the output of each chunk as comments, which will not actually break the original source code. Here is an example:\n\n``````set.seed(123)\ntidy_eval(text = c(\"a<-1+1;a # print the value\", \"matrix(rnorm(10),5)\"))\n``````\n``````a <- 1 + 1\na # print the value\n## 2\n\nmatrix(rnorm(10), 5)\n## [,1] [,2]\n## [1,] -0.56048 1.7151\n## [2,] -0.23018 0.4609\n## [3,] 1.55871 -1.2651\n## [4,] 0.07051 -0.6869\n## [5,] 0.12929 -0.4457\n``````\n\nThe default source of the code is the clipboard, like `tidy_source()`, so we can copy our code to the clipboard and simply run this in R:\n\n``````library(formatR)\ntidy_eval()\n# without specifying any arguments, it reads code from clipboard\n``````\n\n# 5. Showcase\n\nWe continue the example code in Section 2, using different arguments in `tidy_source()` such as `arrow`, `blank`, `indent`, `brace.newline` and `comment`, etc.\n\n## Replace `=` with `<-`\n\n``````if (TRUE) {\nx <- 1 # inline comments\n} else {\nx <- 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n``````\n\nNote the 5th line (an empty line) was discarded:\n\n``````## comments are retained; a comment block will be reflowed if it\n## contains long comments;\n#' roxygen comments will not be wrapped in any case\n1 + 1\nif (TRUE) {\nx = 1 # inline comments\n} else {\nx = 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n1 * 3 # one space before this comment will become two!\n``````\n\n## Reindent code (2 spaces instead of 4)\n\n``````if (TRUE) {\nx = 1 # inline comments\n} else {\nx = 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n``````\n\n## Move left braces `{` to new lines\n\n``````if (TRUE)\n{\nx = 1 # inline comments\n} else\n{\nx = 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n``````\n\n``````1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +\n1 + 1 + 1 + 1 + 1 # comment after a long line\n## here is a long long long long long long long long long long long long long comment that may be wrapped\n``````\n\n``````1 + 1\nif (TRUE) {\nx = 1\n} else {\nx = 2\nprint(\"Oh no... ask the right bracket to go away!\")\n}\n1 * 3\n2 + 2 + 2\nlm(y ~ x1 + x2, data = data.frame(y = rnorm(100), x1 = rnorm(100),\nx2 = rnorm(100)))\n1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +\n1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1\n``````\n\n# 6. Further notes\n\nThe tricks used in this package are very dirty. There might be dangers in using the functions in formatR. Please read the next section carefully to know exactly how comments are preserved. The best strategy to avoid failure is to put comments in complete lines or after complete R expressions. Below are some known cases in which `tidy_source()` fails.\n\n## In-line comments after an incomplete expression or ;\n\n``````1 + 2 + ## comments after an incomplete line\n3 + 4\nx <- ## this is not a complete expression\n5\nx <- 1; # you should not use ; here!\n``````\n\nCode with comments after an incomplete R expression cannot be reformatted by formatR.
By the way, `tidy_source()` will move comments after `{` to the next line, e.g.,\n\n``````if (TRUE) {## comments\n}\n``````\n\nwill become\n\n``````if (TRUE) {\n## comments\n}\n``````\n\n## Inappropriate blank lines\n\nBlank lines are often used to separate complete chunks of R code, and arbitrary blank lines may cause failures in `tidy_source()` as well when the argument `blank = TRUE`, e.g.\n\n``````if (TRUE)\n\n{'this is a BAD style of R programming!'} else 'failure!'\n``````\n\nThere should not be a blank line after the `if` statement. Of course `blank = FALSE` will not fail in this case.\n\n## `?` with comments\n\nWe can use the question mark (`?`) to view the help page, but the formatR package is unable to correctly format code using `?` with comments, e.g.\n\n``````?sd # help on sd()\n``````\n\nIn this case, it is recommended to use the function `help()` instead of the short-hand version `?`.\n\n## `->` with comments\n\nWe can also use the right arrow `->` for assignment, e.g. `1:10 -> x`. I believe this flexibility is worthless, and it is amazing that a language has three assignment operators: `<-`, `=` and `->` (whereas almost all other languages use `=` for assignment). Bad news for formatR is that it is unable to format code using both `->` and comments in a line, e.g.\n\n``````1:10 -> x # assignment with right arrow\n``````\n\nI recommend using `<-` or `=` consistently. What is more important is consistency. I always use `=` because it causes me no confusion (I do not believe it is ever possible for people to interpret `fun(a = 1)` as assigning `1` to a variable `a` instead of passing an argument value) and `<-` is more dangerous because it works everywhere (you might have unconsciously created a new variable `a` in `fun(a <- 1)`; see an example here).
The only disadvantage is that most R people use `<-` so it may be difficult to collaborate with other people.\n\n## The pipe operator `%>%`\n\nAlthough `tidy_source()` won’t ruin your code that contains the pipes, you won’t be happy with it: your line breaks after the pipes won’t be preserved. See #54.\n\n# 7. How does `tidy_source()` actually work?\n\nIn a nutshell, `tidy_source(text = code)` is basically `deparse(parse(text = code))`, but actually it is more complicated only because of one thing: `deparse()` drops comments, e.g.,\n\n``````deparse(parse(text = \"1+2-3*4/5 # a comment\"))\n``````\n``````## \"expression(1 + 2 - 3 * 4/5)\"\n``````\n\nThe method to preserve comments is to protect them as strings in R expressions. For example, there is a single line of comments in the source code:\n\n`````` # asdf\n``````\n\nIt will be first masked as\n\n``````invisible(\".IDENTIFIER1 # asdf.IDENTIFIER2\")\n``````\n\nwhich is a legal R expression, so `base::parse()` can deal with it and will no longer remove the disguised comments. In the end the identifiers will be removed to restore the original comments, i.e. the strings `invisible(\".IDENTIFIER1` and `.IDENTIFIER2\")` are replaced with empty strings.\n\nInline comments are handled differently: two spaces will be added before the hash symbol `#`, e.g.\n\n``````1+1# comments\n``````\n\nwill become\n\n``````1+1 # comments\n``````\n\nInline comments are first disguised as a weird operation with its preceding R code, which is essentially meaningless but syntactically correct! For example,\n\n``````1+1 %InLiNe_IdEnTiFiEr% \"# comments\"\n``````\n\nthen `base::parse()` will deal with this expression; again, the disguised comments will not be removed. 
In the end, inline comments will be freed as well (remove the operator `%InLiNe_IdEnTiFiEr%` and surrounding double quotes).\n\nAll these special treatments to comments are due to the fact that `base::parse()` and `base::deparse()` can tidy the R code only at the price of dropping all the comments.\n\n# 8. Global options\n\nThere are global options which can override some arguments in `tidy_source()`:\n\n| argument | global option | default |\n|---|---|---|\n| `comment` | `options('formatR.comment')` | `TRUE` |\n| `blank` | `options('formatR.blank')` | `TRUE` |\n| `arrow` | `options('formatR.arrow')` | `FALSE` |\n| `indent` | `options('formatR.indent')` | `4` |\n| `brace.newline` | `options('formatR.brace.newline')` | `FALSE` |\n\nAlso note that single lines of long comments will be wrapped into shorter ones automatically, but roxygen comments will not be wrapped (i.e., comments that begin with `#'`)."
] | [
null,
"https://db.yihui.org/imgur/lUgtEAb.png",
null,
"https://db.yihui.org/imgur/TBZm0B8.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76546323,"math_prob":0.8771624,"size":11608,"snap":"2021-04-2021-17","text_gpt3_token_len":3254,"char_repetition_ratio":0.1521889,"word_repetition_ratio":0.22642413,"special_character_ratio":0.31486905,"punctuation_ratio":0.14548567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9656541,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-15T20:26:56Z\",\"WARC-Record-ID\":\"<urn:uuid:8fdb985d-0b03-4471-ac49-6659cfe538a4>\",\"Content-Length\":\"24046\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9a14b1b-caf1-4748-b57a-1d1e56e7b130>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6f9ca99-d66c-4c5e-a3fa-725598f496e3>\",\"WARC-IP-Address\":\"162.243.166.170\",\"WARC-Target-URI\":\"https://yihui.org/formatr/\",\"WARC-Payload-Digest\":\"sha1:W36BEJW2AWGRKY3BUPPNCU45WAEAYX2R\",\"WARC-Block-Digest\":\"sha1:37QXC5H2XPTZA5U5IN5BGZYI3AYMZ7GM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703496947.2_warc_CC-MAIN-20210115194851-20210115224851-00561.warc.gz\"}"} |
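The comment-masking round trip described in section 7 can be illustrated outside R as well. Below is a hypothetical Python analogue of the same general technique (protect whole-line comments as string literals so a parse/deparse cycle keeps them, then strip the identifiers); this is only a sketch of the idea, not formatR's actual implementation:

```python
MASK_OPEN = 'invisible(".IDENTIFIER1 '
MASK_CLOSE = '.IDENTIFIER2")'

def mask_comments(code):
    """Wrap whole-line comments in a dummy string call so that a parser
    treats them as ordinary string literals instead of dropping them."""
    out = []
    for line in code.splitlines():
        if line.lstrip().startswith("#"):
            out.append(MASK_OPEN + line.strip() + MASK_CLOSE)
        else:
            out.append(line)
    return "\n".join(out)

def unmask_comments(code):
    """Remove the identifiers again, restoring the original comments."""
    return code.replace(MASK_OPEN, "").replace(MASK_CLOSE, "")

src = "# asdf\n1 + 1"
masked = mask_comments(src)
print(masked)
print(unmask_comments(masked) == src)  # True: the round trip is lossless
```

Note that a double quote inside a comment would break the disguised string literal, which is exactly why formatR only allows single quotes in comments.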
https://www.kseebsolutions.com/kseeb-solutions-for-class-10-maths-chapter-1-ex-1-4/ | [
"# KSEEB Solutions for Class 10 Maths Chapter 1 Arithmetic Progressions Ex 1.4\n\nStudents can download Class 10 Maths Chapter 1 Arithmetic Progressions Ex 1.4 Questions and Answers, Notes Pdf. KSEEB Solutions for Class 10 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.\n\n## Karnataka State Syllabus Class 10 Maths Chapter 1 Arithmetic Progressions Ex 1.4\n\nQuestion 1.\nWhich term of the AP 121, 117, 113, …… is its first negative term?\n[Hint: Find n for aₙ < 0]\nThe given AP is 121, 117, 113, ……\na = 121, d = 117 – 121 = –4\nLet the nth term of the AP be the first negative term.\n∴ aₙ < 0\na + (n – 1)d < 0\n121 + (n – 1)(–4) < 0\n121 – 4n + 4 < 0\n125 < 4n\n4n > 125",
null,
"∴ n > 125/4 = 31.25, and since n must be a positive integer, n = 32.\nHence, the 32nd term of the given AP is the first negative term.",
null,
"Question 2.\nThe sum of the third and the seventh terms of an AP is 6 and their product is 8. Find the sum of first sixteen terms of the AP\nSolution:",
null,
"",
null,
"",
null,
"",
null,
"Question 3.\nA ladder has rungs 25 cm apart (see Fig. 1.7). The rungs decrease uniformly in length from 45 cm at the bottom to 25 cm at the top. If the top and the bottom rungs are 2 $$\\frac{1}{2}$$ m apart, what is the length of the wood required for the rungs?\n[Hint: Number of rungs = $$\\frac{250}{25}$$ + 1].",
null,
"Distance between top and bottom rung",
null,
"Distance between every two rungs\n= 25 cm [given]\nNumber of rungs (n) = $$\\frac{250}{25}$$ + 1 [+ 1 for the topmost rung]\n= 10 + 1 = 11\n∴ n = 11\nBecause the rungs decrease uniformly in length from 45 cm at the bottom to 25 cm at the top",
null,
"Question 4.\nThe houses of a row are numbered consecutively from 1 to 49. Show that there is a value of x such that the sum of the number of the houses preceding the house numbered x is equal to the sum of the number of the houses following it. Find this value of x. [Hint: Sx-1 = S49 – Sx]\nSolution:\nWe have the following consecutive numbers on the houses of a row; 1, 2, 3, 4, 5, ….. , 49.\nThese numbers are in AP, such that a = 1, d = 2 – 1 = 1, n = 49\nLet one of the houses be numbered as x.\n∴ Number of houses preceding it = x – 1\nNumber of houses following it = 49 – x\nNow, the sum of the house-numbers preceding",
null,
"",
null,
"",
null,
"",
null,
"Question 5.\nA small terrace at a football ground comprises 15 steps, each of which is 50 m long and built of solid concrete. Each step has a rise of $$\\frac{1}{4}$$ m and a tread of $$\\frac{1}{2}$$ m (see Fig 5.8). Calculate the total volume of concrete required to build the terrace.\nHint: Volume of concrete required to build the first step = $$\\frac{1}{4} \\times \\frac{1}{2} \\times 50 \\mathrm{m}^{3}$$",
null,
"",
null,
"",
null,
"",
null,
""
] | [
null,
"https://live.staticflickr.com/65535/49613247037_1bc6c7fb73_o.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2019/11/KSEEB-Solutions-300x28.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2020/03/KSEEB-Solutions-for-Class-10-Maths-Chapter-1-Arithmetic-Progressions-Ex-1.4-Q2.png",
null,
"https://live.staticflickr.com/65535/48742823167_19dde5be0b_o.png",
null,
"https://live.staticflickr.com/65535/48742823217_a68757d5e0_o.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2019/11/KSEEB-Solutions-300x28.png",
null,
"https://live.staticflickr.com/65535/49613246902_9cdba62303_o.png",
null,
"https://live.staticflickr.com/65535/49613256242_096fd45564_o.png",
null,
"https://live.staticflickr.com/65535/49612988136_877d230cc9_o.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2020/03/KSEEB-Solutions-for-Class-10-Maths-Chapter-1-Arithmetic-Progressions-Ex-1.4-Q4.png",
null,
"https://live.staticflickr.com/65535/48742312378_3c77eaa2f3_o.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2020/03/KSEEB-Solutions-for-Class-10-Maths-Chapter-1-Arithmetic-Progressions-Ex-1.4-Q4-1.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2019/11/KSEEB-Solutions-300x28.png",
null,
"https://live.staticflickr.com/65535/49613246817_c34a244053_o.png",
null,
"https://live.staticflickr.com/65535/49612988046_1db5d31c2b_o.png",
null,
"https://live.staticflickr.com/65535/49612988061_3272ac34c8_o.png",
null,
"https://www.kseebsolutions.com/wp-content/uploads/2019/11/KSEEB-Solutions-300x28.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8689454,"math_prob":0.99807245,"size":2621,"snap":"2020-45-2020-50","text_gpt3_token_len":811,"char_repetition_ratio":0.12571646,"word_repetition_ratio":0.07102804,"special_character_ratio":0.33574972,"punctuation_ratio":0.114583336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994191,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,1,null,null,null,1,null,1,null,1,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T06:35:11Z\",\"WARC-Record-ID\":\"<urn:uuid:521ed094-80bd-4691-a76e-13a59224f785>\",\"Content-Length\":\"49883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa38c1b5-1a00-484f-a59d-e19ca2ee51dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba2af28b-2c0d-4cb8-9b79-5a8d519ce1fa>\",\"WARC-IP-Address\":\"128.199.22.39\",\"WARC-Target-URI\":\"https://www.kseebsolutions.com/kseeb-solutions-for-class-10-maths-chapter-1-ex-1-4/\",\"WARC-Payload-Digest\":\"sha1:TJT6UTNFQTSKHOZIUDUJJFUW32LLIHAN\",\"WARC-Block-Digest\":\"sha1:ERTCI3V2KBWEQTL6HKUC3TFHK2LYUIG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890586.57_warc_CC-MAIN-20201026061044-20201026091044-00185.warc.gz\"}"} |
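Questions 1 and 4 above are easy to verify with a few lines of code; a small sketch (the numbers a = 121, d = -4 and the house range 1 to 49 come straight from the problems):

```python
def ap_term(a, d, n):
    """nth term of an arithmetic progression: a_n = a + (n - 1)*d."""
    return a + (n - 1) * d

# Question 1: first negative term of the AP 121, 117, 113, ...
a, d = 121, -4
n = 1
while ap_term(a, d, n) >= 0:
    n += 1
print(n, ap_term(a, d, n))  # 32 -3 -> the 32nd term is the first negative one

# Question 4: house number x with equal sums of house numbers on both sides
x = next(x for x in range(1, 50)
         if sum(range(1, x)) == sum(range(x + 1, 50)))
print(x)  # 35
```

Both results match the textbook answers (n = 32 and x = 35).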
https://www.easycalculation.com/factors-of-1178.html | [
"# Factors of 1178\n\nFactors of 1178 are 1, 2, 19, 31, 38, 62, 589, 1178. So, 1178 can be derived from smaller numbers in 4 possible ways using multiplication.\n\nFactors of 1178\n1, 2, 19, 31, 38, 62, 589, 1178\nFactor Pairs of 1178\n1 x 1178 = 1178\n2 x 589 = 1178\n19 x 62 = 1178\n31 x 38 = 1178"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7840434,"math_prob":0.9921271,"size":275,"snap":"2023-40-2023-50","text_gpt3_token_len":100,"char_repetition_ratio":0.11070111,"word_repetition_ratio":0.9166667,"special_character_ratio":0.44727272,"punctuation_ratio":0.2777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99240565,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T16:29:44Z\",\"WARC-Record-ID\":\"<urn:uuid:1086200c-977f-400f-9e63-b7684ee14abe>\",\"Content-Length\":\"28030\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22a78689-75bc-4944-b98d-ea8201e5df42>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7da9d9f-e302-4fd9-9400-a59122f5d261>\",\"WARC-IP-Address\":\"66.228.40.80\",\"WARC-Target-URI\":\"https://www.easycalculation.com/factors-of-1178.html\",\"WARC-Payload-Digest\":\"sha1:XILXMBPKRUL26IQZLZER6WXIKCBETU4U\",\"WARC-Block-Digest\":\"sha1:B2EB6VTFZA6EZY2AJAMIP5O4ESL6R65T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100762.64_warc_CC-MAIN-20231208144732-20231208174732-00851.warc.gz\"}"} |
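The factor list above is straightforward to reproduce; a small sketch using trial division up to √n:

```python
def factors(n):
    """Return the sorted list of positive divisors of n.

    Trial division only needs to run up to sqrt(n): each small divisor i
    pairs with the large divisor n // i."""
    small, large = [], []
    i = 1
    while i * i <= n:
        if n % i == 0:
            small.append(i)
            if i != n // i:
                large.append(n // i)
        i += 1
    return small + large[::-1]

print(factors(1178))  # [1, 2, 19, 31, 38, 62, 589, 1178]
```

The 8 divisors correspond to the 4 factor pairs listed above (1178 = 2 × 19 × 31).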