https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9A_Lab/Lab_5%3A_Energy_Forms/5.1%3A_Background_Material
[ "# 5.1: Background Material\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$\n\n## Non-Ideal Springs\n\nWe found the potential energy function for a spring by computing the work done by that spring. This assumes that the spring is \"ideal,\" which means that it perfectly obeys Hooke's law. It turns out that many real springs don't closely approximate this behavior. The springs used in this lab are no exception.\n\nThe spring used in this lab pulls its coils tightly together. That is, the pull of the spring is not zero when the spring is at its minimum length – the pull is balanced by the repulsive normal force between the coils that are in contact. If we hang a small weight from these springs, the coils don't separate. With no change in coil separation, the spring force doesn't change – the contact forces between them just get a bit smaller. 
Eventually, when we add sufficient weight, the coils do separate, causing the spring force to increase (and of course the contact force between coils vanishes, as they are no longer in contact). But the force exerted on the spring does not follow Hooke's law, which means that the potential energy stored in the spring cannot be computed in the usual way.\n\nFigure 5.1.1 Coil Behavior for Our Non-Ideal Spring", null, "This doesn't mean we can't do anything with these springs, because we can still measure the displacement for various forces, which means that we can still compute the work done on the spring by stretching it. Of course, we need to know how the force changes as a function of the displacement to do the work integral – we can't simply multiply the force by the displacement.\n\nFigure 5.1.2 Graph of Applied Force vs. Spring Stretch", null, "The graph above shows how the spring behaves when certain forces are applied to it (in our experiment, we will do this by hanging weights). Imagine applying force $$F_A$$ first, and noting the amount that the spring stretches $$x_A$$. When we reduce the force to $$F_B$$, naturally the spring stretch decreases (to $$x_B$$). As long as the coils don't touch each other, this follows a linear relationship, as we would expect for any spring. But when we reduce the force to the point where the coils are in contact ($$F_C$$), then every applied force from there down to zero produces the same stretch – zero.\n\nWhen we store potential energy in a spring, we do this by doing work on the spring, and for a perfect spring, this work happened to equal the potential energy function $$\\frac{1}{2}kx^2$$ that we are so familiar with. 
But this is not such a spring, so to determine the potential energy stored in the spring, we must do a new calculation of the work done on it, using the function $$F(x)$$ that we will determine experimentally.\n\n## Gravitational Potential Energy of Extended Objects\n\nWhen an object changes height, its gravitational potential energy changes in proportion to its change in height. When the object is not a point mass (i.e. it has extension in space), different parts of the object can be at different heights. How do we measure the change in potential energy, when there are so many points to choose from? What happens if the object rotates as it rises? The answer (which we will prove later in the course) is that this potential energy is computed using the change in height of the object's center of mass. Keeping this in mind will be useful for the spring, which actually changes length during its journey.\n\nThis page titled 5.1: Background Material is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Tom Weideman directly on the LibreTexts platform." ]
[ null, "https://phys.libretexts.org/@api/deki/files/17362/non-ideal_spring.png", null, "https://phys.libretexts.org/@api/deki/files/17368/Screen_Shot_2020-04-28_at_2.30.18_PM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9392498,"math_prob":0.99559915,"size":3400,"snap":"2022-40-2023-06","text_gpt3_token_len":735,"char_repetition_ratio":0.15312132,"word_repetition_ratio":0.010186757,"special_character_ratio":0.21882352,"punctuation_ratio":0.095022626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9929125,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T15:35:17Z\",\"WARC-Record-ID\":\"<urn:uuid:4a4aca56-3aac-4063-8be3-6cc6e6424af7>\",\"Content-Length\":\"103939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0c49794-209e-45de-9e39-4e09a4089877>\",\"WARC-Concurrent-To\":\"<urn:uuid:67324207-35e8-4eed-aca8-fc2cf262d810>\",\"WARC-IP-Address\":\"18.160.46.78\",\"WARC-Target-URI\":\"https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_9A_Lab/Lab_5%3A_Energy_Forms/5.1%3A_Background_Material\",\"WARC-Payload-Digest\":\"sha1:HUR5TMIQXCAUHIG6TADG6ETUIZZX6MEB\",\"WARC-Block-Digest\":\"sha1:IN7OULEQCK6FEGPUZJ472IEOA7DKEBXL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335034.61_warc_CC-MAIN-20220927131111-20220927161111-00513.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/8178
[ "# Gravitation\n\nFrom the top of the 80m high tower, the body is thrown horizontally with an initial speed of 15 m/s. At what time and at what distance from the foot of the tower does the body hit the horizontal surface of the Earth? (use g = 10 ms-2)\n\nt =  4 s\nx =  60 m\n\n### Step-by-step explanation:", null, "Did you find an error or inaccuracy? Feel free to write us. Thank you!", null, "Tips to related online calculators\nDo you want to convert velocity (speed) units?\nDo you want to convert time units like minutes to seconds?\n\n## Related math problems and questions:\n\n• Collision", null, "The two bodies, whose initial distance is 240 m, move evenly against each other consistently. The first body has an initial velocity of 4 m/s and an acceleration of 3 m/s2, the second body has an initial speed of 6 m/s and an acceleration of 2 m/s2. Fin\n• Up and down motion", null, "We throw the body from a height h = 5 m above the Earth vertically upwards v0 = 10 m/s. How long before we have to let the second body fall freely from the same height to hit the Earth at the same time?\n• Free fall", null, "The free fall body has gone 10m in the last 0.5s. Find the body speed at the moment of impact.\n• Athletic competition", null, "In a 400 meter athletic competition, a participant covers the distance as given below. find the average speed? first 80 meters 10 m/s next 240 meters 7.5 m/s last 80 meters 10 m/s\n• Bomber", null, "The aircraft flies at an altitude of 4100 m above the ground at speed 777 km/h. At what horizontal distance from the point B should be release any body from the aircraft body to fall into point B? (g = 9.81 m/s2)\n• Free fall", null, "How long does the stone fall freely into a depth of 80m? What speed will it hit the bottom of the abyss?\n• Brakes", null, "The braking efficiency of a passenger car is required to stop at 12.5 m at an initial speed of 40 km/h. 
What is the acceleration braking by brakes?\n• Braking distance", null, "The car travels at an average speed of 12 km/h and detects an obstacle 10 m in front of it. At 1 m in front of the obstacle it already runs 2 km/h. What is the braking distance? What is the required deceleration for stop in: A) 1m B) 1s?\n• The tram", null, "The tram is moving with acceleration a = 0.3m/s2. How long will it pass the first meter of the track? How long does it take 10 meters? What is its speed at the end of the 10 meters track?\n• Train speed", null, "The train speed is decreased during 50 sec from 72 km/h to 36 km/h. Assuming that the train movement is equally slowing, find the the acceleration and the distance that it travels at.\n• An acceleration", null, "The car goes on a straight road at a speed of 72 km/h. At some point, the driver starts to brake and stops the car in 5 seconds. Find: (a) the acceleration during braking (b) the distance traveled during braking.\n• Acceleration", null, "The car accelerates at rate 0.5m/s2. How long travels 400 meters and what will be its speed?\n• Free fall", null, "For how long and at what speed does the body fall to the ground during a free fall from a height of 35 m?\n• Two trains meet", null, "From A started at 7:15 express train at speed 85 km/h to B. From B started passenger train at 8:30 in the direction to A and at speed 55 km/h. The distance A and B are 386 1/4 km. At what time and at what distance from B the two trains meet?\n• A car", null, "A car weighing 1.05 tonnes driving at the maximum allowed speed in the village (50 km/h) hit a solid concrete bulkhead. Calculate height would have to fall on the concrete surface to make the impact intensity the same as in the first case!\n• Acceleration 2", null, "if a car traveling at a velocity of 80 m/s/south accelerated to a velocity of 100 m/s east in 5 seconds, what is the cars acceleration? using Pythagorean theorem\n• Free fall", null, "Lloyd fall from height 7 m. 
Calculate the speed he hit the ground when falling with acceleration g = 9.81 m/s2" ]
[ null, "https://www.hackmath.net/img/78/vodorovny_vrh.jpg", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/61/t_7061.jpg", null, "https://www.hackmath.net/thumb/51/t_33951.jpg", null, "https://www.hackmath.net/thumb/31/t_7231.jpg", null, "https://www.hackmath.net/thumb/97/t_8097.jpg", null, "https://www.hackmath.net/thumb/45/t_1045.jpg", null, "https://www.hackmath.net/thumb/50/t_5450.jpg", null, "https://www.hackmath.net/thumb/76/t_7076.jpg", null, "https://www.hackmath.net/thumb/41/t_8441.jpg", null, "https://www.hackmath.net/thumb/16/t_4916.jpg", null, "https://www.hackmath.net/thumb/70/t_7070.jpg", null, "https://www.hackmath.net/thumb/73/t_7073.jpg", null, "https://www.hackmath.net/thumb/64/t_2164.jpg", null, "https://www.hackmath.net/thumb/71/t_38471.jpg", null, "https://www.hackmath.net/thumb/23/t_2623.jpg", null, "https://www.hackmath.net/thumb/68/t_7268.jpg", null, "https://www.hackmath.net/thumb/45/t_4945.jpg", null, "https://www.hackmath.net/thumb/48/t_448.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91661036,"math_prob":0.9733226,"size":3861,"snap":"2021-31-2021-39","text_gpt3_token_len":987,"char_repetition_ratio":0.1366347,"word_repetition_ratio":0.033377837,"special_character_ratio":0.26107225,"punctuation_ratio":0.08243728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9833334,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T22:41:59Z\",\"WARC-Record-ID\":\"<urn:uuid:9aed27cd-9c6c-4be8-ae33-277288bd4f26>\",\"Content-Length\":\"52418\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7d120f0-7826-48ba-b41d-890df751947b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bfacaefa-ff9f-4fa9-bc7b-399d445087c4>\",\"WARC-IP-Address\":\"104.21.55.14\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/8178\",\"WARC-Payload-Digest\":\"sha1:ZEJ6E5O7S7VWCEIG37YXHIL5M54A7D2L\",\"WARC-Block-Digest\":\"sha1:6JPO5OV7YM6YPYH7M5DMAEJK4UP6IHBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057274.97_warc_CC-MAIN-20210921221605-20210922011605-00665.warc.gz\"}"}
https://xiith.com/python-program-to-check-number-is-armstrong-or-not/
[ "# Python Program to check number is Armstrong or not\n\nIn this program, You will learn how to check number is Armstrong or not in Python.\n\n``Some list are: 153 370 371 407``\n\n## Example: How to check number is Armstrong or not in Python.\n\n``````n = int(input(\"Enter a number:\"))\n\nnum = n\nrev = 0\n\nwhile n > 0:\nr = n % 10\nrev = rev + r * r * r\nn = int(n / 10)\n\nif rev == num:\nprint(\"Number is Armstrong:\", num)\nelse:\nprint(\"Number is not Armstrong:\", num)``````\n\n#### Output:\n\n``````Enter a number:153\nNumber is Armstrong: 153``````" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6230491,"math_prob":0.9917038,"size":432,"snap":"2021-21-2021-25","text_gpt3_token_len":132,"char_repetition_ratio":0.18691589,"word_repetition_ratio":0.11627907,"special_character_ratio":0.36805555,"punctuation_ratio":0.16161616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97619224,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-20T09:55:31Z\",\"WARC-Record-ID\":\"<urn:uuid:2f00df5d-898e-4427-a357-a9a6796b70bc>\",\"Content-Length\":\"226857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:877a3b8f-7748-4bf4-bc64-c48771aff8e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7b888ed-9f5e-4ded-8dc0-e4ce96bf6e95>\",\"WARC-IP-Address\":\"172.67.144.5\",\"WARC-Target-URI\":\"https://xiith.com/python-program-to-check-number-is-armstrong-or-not/\",\"WARC-Payload-Digest\":\"sha1:4FJIJUXQMIF6GBJH4EJG5SZFEMHSVRFA\",\"WARC-Block-Digest\":\"sha1:VJYUH56LVQOACPRKDNTSBR5MYXOYBBQL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487660269.75_warc_CC-MAIN-20210620084505-20210620114505-00336.warc.gz\"}"}
https://git.dynare.org/DoraK/dynare/-/commit/25ca5d3ee8cee19ac1e125d027fe1f56f0807c62
[ "### New implementation of block decomposition & feedback variables using Boost for DynamicModel\n\n`git-svn-id: https://www.dynare.org/svn/dynare/trunk@2671 ac1d8469-bf42-47a9-8791-bf33cf982152`\nparent 2eed8a5f\nThis diff is collapsed.\n ... ... @@ -34,31 +34,39 @@ //! Matrix of doubles for representing jacobian //! Sparse matrix of double to store the values of the Jacobian typedef map,double> jacob_map; typedef vector > t_type; //! Create the incidence matrix, computes prologue & epilogue, normalizes the model and computes the block decomposition //! Creates the incidence matrix, computes prologue & epilogue, normalizes the model and computes the block decomposition class BlockTriangular { //friend class IncidenceMatrix; private: //! Find equations and endogenous variables belonging to the prologue and epilogue of the model void Prologue_Epilogue(bool* IM, int &prologue, int &epilogue, int n, vector &Index_Var_IM, vector &Index_Equ_IM, bool* IM0); //! Allocates and fills the Model structure describing the content of each block void Allocate_Block(int size, int *count_Equ, int count_Block, BlockType type, BlockSimulationType SimType, Model_Block * ModelBlock); //! Finds a matching between equations and endogenous variables bool Compute_Normalization(bool *IM, int equation_number, int prologue, int epilogue, bool verbose, bool *IM0, vector &Index_Var_IM) const; //! Decomposes into recurive blocks the non purely recursive equations and determines for each block the minimum feedback variables void Compute_Block_Decomposition_and_Feedback_Variables_For_Each_Block(bool *IM, int nb_var, int prologue, int epilogue, vector &Index_Equ_IM, vector &Index_Var_IM, vector > &blocks, bool verbose_) const; //! Tries to merge the consecutive blocks in a single block and determine the type of each block: recursive, simultaneous, ... 
t_type Reduce_Blocks_and_type_determination(int prologue, int epilogue, vector > &blocks, vector equations ); public: const SymbolTable &symbol_table; BlockTriangular(const SymbolTable &symbol_table_arg); //! Frees the Model structure describing the content of each block void Free_Block(Model_Block* ModelBlock) const; //BlockTriangular(const IncidenceMatrix &incidence_matrix_arg); //const SymbolTable &symbol_table; Blocks blocks; Normalization normalization; IncidenceMatrix incidencematrix; void Normalize_and_BlockDecompose_Static_0_Model(const jacob_map &j_m, vector equations); bool Normalize_and_BlockDecompose(bool* IM, Model_Block* ModelBlock, int n, int* prologue, int* epilogue, simple* Index_Var_IM, simple* Index_Equ_IM, bool Do_Normalization, bool mixing, bool* IM_0 , jacob_map j_m, vector equations); void Prologue_Epilogue(bool* IM, int* prologue, int* epilogue, int n, simple* Index_Var_IM, simple* Index_Equ_IM, bool* IM0); void Allocate_Block(int size, int *count_Equ, int count_Block, BlockType type, BlockSimulationType SimType, Model_Block * ModelBlock); void Free_Block(Model_Block* ModelBlock) const; t_type Reduce_Blocks_and_type_determination(int prologue, int epilogue, block_result_t* res, vector equations ); simple *Index_Equ_IM; simple *Index_Var_IM; void Normalize_and_BlockDecompose(bool* IM, Model_Block* ModelBlock, int n, int &prologue, int &epilogue, vector &Index_Var_IM, vector &Index_Equ_IM, bool* IM_0 , jacob_map j_m, vector equations); vector Index_Equ_IM; vector Index_Var_IM; int prologue, epilogue; bool bt_verbose; //int endo_nbr, exo_nbr; ... ...\n ... ... @@ -21,6 +21,7 @@ #include #include #include \"DynamicModel.hh\" // For mkdir() and chdir() ... ... 
@@ -124,7 +125,7 @@ DynamicModel::computeTemporaryTermsOrdered(Model_Block *ModelBlock) it->second->computeTemporaryTerms(reference_count, temporary_terms, first_occurence, j, ModelBlock, ModelBlock->Block_List[j].Size-1, map_idx); } } for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) /*for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) { lag=m-ModelBlock->Block_List[j].Max_Lag; for (i=0;iBlock_List[j].IM_lead_lag[m].size_exo;i++) ... ... @@ -134,7 +135,7 @@ DynamicModel::computeTemporaryTermsOrdered(Model_Block *ModelBlock) it=first_derivatives.find(make_pair(eq,getDerivID(symbol_table.getID(eExogenous, var), lag))); it->second->computeTemporaryTerms(reference_count, temporary_terms, first_occurence, j, ModelBlock, ModelBlock->Block_List[j].Size-1, map_idx); } } }*/ //jacobian_max_exo_col=(variable_table.max_exo_lag+variable_table.max_exo_lead+1)*symbol_table.exo_nbr; for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) { ... ... @@ -172,7 +173,7 @@ DynamicModel::computeTemporaryTermsOrdered(Model_Block *ModelBlock) it->second->collectTemporary_terms(temporary_terms, ModelBlock, j); } } for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) /*for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) { lag=m-ModelBlock->Block_List[j].Max_Lag; for (i=0;iBlock_List[j].IM_lead_lag[m].size_exo;i++) ... ... @@ -183,7 +184,7 @@ DynamicModel::computeTemporaryTermsOrdered(Model_Block *ModelBlock) //it=first_derivatives.find(make_pair(eq,variable_table.getID(var, lag))); it->second->collectTemporary_terms(temporary_terms, ModelBlock, j); } } }*/ //jacobian_max_exo_col=(variable_table.max_exo_lag+variable_table.max_exo_lead+1)*symbol_table.exo_nbr; for (m=0;m<=ModelBlock->Block_List[j].Max_Lead+ModelBlock->Block_List[j].Max_Lag;m++) { ... ... 
@@ -1736,7 +1737,7 @@ DynamicModel::writeDynamicModel(ostream &DynamicOutput) const << endl << jacobian_output.str() << \"end\" << endl; if (second_derivatives.size()) { // Writing initialization instruction for matrix g2 ... ... @@ -1780,7 +1781,7 @@ DynamicModel::writeDynamicModel(ostream &DynamicOutput) const << \" {\" << endl << jacobian_output.str() << \" }\" << endl; if (second_derivatives.size()) { DynamicOutput << \" /* Hessian for endogenous and exogenous variables */\" << endl ... ... @@ -2150,6 +2151,9 @@ DynamicModel::BlockLinear(Model_Block *ModelBlock) } } void DynamicModel::computingPass(bool jacobianExo, bool hessian, bool thirdDerivatives, bool paramsDerivatives, const eval_context_type &eval_context, bool no_tmp_terms) ... ... @@ -2210,6 +2214,8 @@ DynamicModel::computingPass(bool jacobianExo, bool hessian, bool thirdDerivative block_triangular.incidencematrix.Print_IM(eEndogenous); } block_triangular.Normalize_and_BlockDecompose_Static_0_Model(j_m, equations); BlockLinear(block_triangular.ModelBlock); if (!no_tmp_terms) computeTemporaryTermsOrdered(block_triangular.ModelBlock); ... ... @@ -2372,7 +2378,7 @@ DynamicModel::computeDynJacobianCols(bool jacobianExo) const int &deriv_id = it->second; SymbolType type = symbol_table.getType(symb_id); int tsid = symbol_table.getTypeSpecificID(symb_id); switch(type) { case eEndogenous: ... ...\n ... ... @@ -110,6 +110,7 @@ private: //! Computes temporary terms for the file containing parameters derivatives void computeParamsDerivativesTemporaryTerms(); public: DynamicModel(SymbolTable &symbol_table_arg, NumericalConstants &num_constants); //! Adds a variable node ... ...\n ... ... 
@@ -211,16 +211,16 @@ IncidenceMatrix::Print_IM(SymbolType type) const //------------------------------------------------------------------------------ // Swap rows and columns of the incidence matrix void IncidenceMatrix::swap_IM_c(bool *SIM, int pos1, int pos2, int pos3, simple* Index_Var_IM, simple* Index_Equ_IM, int n) const IncidenceMatrix::swap_IM_c(bool *SIM, int pos1, int pos2, int pos3, vector &Index_Var_IM, vector &Index_Equ_IM, int n) const { int tmp_i, j; bool tmp_b; /* We exchange equation (row)...*/ if(pos1 != pos2) { tmp_i = Index_Equ_IM[pos1].index; Index_Equ_IM[pos1].index = Index_Equ_IM[pos2].index; Index_Equ_IM[pos2].index = tmp_i; tmp_i = Index_Equ_IM[pos1]; Index_Equ_IM[pos1] = Index_Equ_IM[pos2]; Index_Equ_IM[pos2] = tmp_i; for(j = 0;j < n;j++) { tmp_b = SIM[pos1 * n + j]; ... ... @@ -231,9 +231,9 @@ IncidenceMatrix::swap_IM_c(bool *SIM, int pos1, int pos2, int pos3, simple* Inde /* ...and variables (column)*/ if(pos1 != pos3) { tmp_i = Index_Var_IM[pos1].index; Index_Var_IM[pos1].index = Index_Var_IM[pos3].index; Index_Var_IM[pos3].index = tmp_i; tmp_i = Index_Var_IM[pos1]; Index_Var_IM[pos1] = Index_Var_IM[pos3]; Index_Var_IM[pos3] = tmp_i; for(j = 0;j < n;j++) { tmp_b = SIM[j * n + pos1]; ... ...\n ... ... @@ -46,7 +46,7 @@ public: void Free_IM() const; void Print_IM(SymbolType type) const; void Print_SIM(bool* IM, SymbolType type) const; void swap_IM_c(bool *SIM, int pos1, int pos2, int pos3, simple* Index_Var_IM, simple* Index_Equ_IM, int n) const; void swap_IM_c(bool *SIM, int pos1, int pos2, int pos3, vector &Index_Var_IM, vector &Index_Equ_IM, int n) const; int Model_Max_Lead, Model_Max_Lag; int Model_Max_Lead_Endo, Model_Max_Lag_Endo, Model_Max_Lead_Exo, Model_Max_Lag_Exo; private: ... ...\n ... ... @@ -28,6 +28,7 @@ MAIN_OBJS = \\ ExprNode.o \\ ModelNormalization.o \\ ModelBlocks.o \\ MinimumFeedbackSet.o \\ IncidenceMatrix.o \\ BlockTriangular.o \\ ModelGraph.o \\ ... ...\nThis diff is collapsed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.656407,"math_prob":0.9387985,"size":300,"snap":"2022-27-2022-33","text_gpt3_token_len":86,"char_repetition_ratio":0.10810811,"word_repetition_ratio":0.0,"special_character_ratio":0.27,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T16:43:31Z\",\"WARC-Record-ID\":\"<urn:uuid:adeb621f-8ff6-41d9-b761-cba91652395c>\",\"Content-Length\":\"798995\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e82ee1a3-64f0-43b3-aebe-dd9b98d57b45>\",\"WARC-Concurrent-To\":\"<urn:uuid:f22fc716-cbd3-49b8-97b5-73ca2599612e>\",\"WARC-IP-Address\":\"217.70.191.81\",\"WARC-Target-URI\":\"https://git.dynare.org/DoraK/dynare/-/commit/25ca5d3ee8cee19ac1e125d027fe1f56f0807c62\",\"WARC-Payload-Digest\":\"sha1:MG73JEBIJOIGJCMPE4XCAN7VKUMJ2MFE\",\"WARC-Block-Digest\":\"sha1:GJHPUYRQRIGX3UQRRXREB3PIO5AX25IY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036077.8_warc_CC-MAIN-20220625160220-20220625190220-00298.warc.gz\"}"}
https://dba.stackexchange.com/questions/195809/why-does-outer-apply-cause-a-broadcast-move
[ "# Why does outer apply cause a broadcast move?\n\nI've got an outer apply with a condition on the distribution keys.\n\n``````select e.a\n,e.b\n,p1.c\nfrom e\nouter apply\n(\nselect top 1\np.DateStamp\nfrom p\nwhere e.distributionKey = p.distributionKey\nand p.client = e.client\nand p.DateStamp > e.DateStamp\norder by p.DateStamp\n)\nas p1;\n``````\n\nUsing `Explain` I can see that this causes a broadcast move\n\n``````<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<dsql_query number_nodes=\"1\" number_distributions=\"60\" number_distributions_per_node=\"60\">\n<sql></sql>\n<dsql_operations total_cost=\"9047512.32\" total_number_operations=\"5\">\n<dsql_operation operation_type=\"RND_ID\">\n<identifier>TEMP_ID_1231</identifier>\n</dsql_operation>\n<dsql_operation operation_type=\"ON\">\n<location permanent=\"false\" distribution=\"AllComputeNodes\" />\n<sql_operations>\n<sql_operation type=\"statement\"></sql_operation>\n</sql_operations>\n</dsql_operation>\n<operation_cost cost=\"9047512.32\" accumulative_cost=\"9047512.32\" average_rowsize=\"308\" output_rows=\"122396000\" GroupNumber=\"12\" />\n<source_statement></source_statement>\n<destination_table>[TEMP_ID_1231]</destination_table>\n</dsql_operation>\n<dsql_operation operation_type=\"RETURN\">\n<location distribution=\"AllDistributions\" />\n<select></select>\n</dsql_operation>\n<dsql_operation operation_type=\"ON\">\n<location permanent=\"false\" distribution=\"AllComputeNodes\" />\n<sql_operations>\n<sql_operation type=\"statement\">DROP TABLE [tempdb].[dbo].[TEMP_ID_1231]</sql_operation>\n</sql_operations>\n</dsql_operation>\n</dsql_operations>\n</dsql_query>\n``````\n\nHowever, the first line of my where clause should mean the query is distribution aligned\n\n``````where e.distributionKey = p.distributionKey\n``````\n\nWhy is this not the case?\n\n• One for @JRJ... – wBob Jan 22 '18 at 22:42\n• Have you tried simplifying the query by removing the predicates on client and date stamp. 
That will help you determine if the outer apply is the cause or something else. You may want to try looking into the sys.dm_pdw_sql_requests DMV for more insight into the SQL that is being pushed to the nodes. My guess is that the outer apply is opaque to the DMS – JasonHorner Feb 14 '18 at 13:27\n• @JasonHorner Thanks, I needed a little nudge. Interestingly, it doesn't move data when the greater-than is removed, so that appears to be the cause – Neil P Feb 14 '18 at 13:56\n• Makes sense: the theta join, rather than the outer apply, is the cause. Recall that data movement will occur when incompatible joins are used; there is a section on this in the APS chm file: microsoft.com/en-us/download/details.aspx?id=51610 – JasonHorner Feb 14 '18 at 14:00" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5082498,"math_prob":0.71350205,"size":1682,"snap":"2019-51-2020-05","text_gpt3_token_len":424,"char_repetition_ratio":0.24851014,"word_repetition_ratio":0.0729927,"special_character_ratio":0.2764566,"punctuation_ratio":0.1273585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9543856,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T13:58:26Z\",\"WARC-Record-ID\":\"<urn:uuid:e5ce8bb4-20b3-4bb4-a8f7-83e73f5bed47>\",\"Content-Length\":\"130560\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bd3a9e7-b4dc-4915-a24e-beee0a7129fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:b07acd57-9cbd-4cbd-b400-1c275e466179>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://dba.stackexchange.com/questions/195809/why-does-outer-apply-cause-a-broadcast-move\",\"WARC-Payload-Digest\":\"sha1:OMBLUEATQFTUV7F4LLWKHWNJTQ6R3S6E\",\"WARC-Block-Digest\":\"sha1:IK2QHMP44HVFKLNGQ4B2XCXPU3WU4232\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540555616.2_warc_CC-MAIN-20191213122716-20191213150716-00000.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/60-75-plus-30-98
[ "Solutions by everydaycalculation.com\n\n60/75 + 30/98 is 271/245.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 75 and 98 is 7350\n2. For the 1st fraction, since 75 × 98 = 7350,\n60/75 = 60 × 98/75 × 98 = 5880/7350\n3. Likewise, for the 2nd fraction, since 98 × 75 = 7350,\n30/98 = 30 × 75/98 × 75 = 2250/7350\n5880/7350 + 2250/7350 = 5880 + 2250/7350 = 8130/7350\n5. 8130/7350 simplified gives 271/245\n6. So, 60/75 + 30/98 = 271/245\nIn mixed form: 126/245\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77629,"math_prob":0.9968818,"size":691,"snap":"2020-10-2020-16","text_gpt3_token_len":271,"char_repetition_ratio":0.14410481,"word_repetition_ratio":0.0,"special_character_ratio":0.536903,"punctuation_ratio":0.090277776,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99789256,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T22:41:17Z\",\"WARC-Record-ID\":\"<urn:uuid:56c54616-177f-4c05-8a83-dc4d2712d7ab>\",\"Content-Length\":\"7175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb548bdb-b3d1-44a5-8308-81c6f05df0bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9752f7e-0deb-4294-8d6f-81c303707c06>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/60-75-plus-30-98\",\"WARC-Payload-Digest\":\"sha1:HN7SCKSNIHNP3MYYKZA3MVWPCXO23XW2\",\"WARC-Block-Digest\":\"sha1:RPPUYNWTPUXERSD7ZKMFNFLAPAO3RXSK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497309.31_warc_CC-MAIN-20200330212722-20200331002722-00319.warc.gz\"}"}
https://math.stackexchange.com/questions/2969804/finitely-generated-module-over-mathbb-z
[ "# Finitely generated module over $\\mathbb Z$\n\nLet $$\\alpha,\\beta\\in\\mathbb C$$ be algebraic integers, so there exist monic $$p,q\\in\\mathbb Z[x]$$ such that $$p(\\alpha)=q(\\beta)=0$$. It follows that $$\\mathbb Z[\\alpha,\\beta]$$ is finitely-generated as a $$\\mathbb Z$$-module.\n\nI want to show directly that any submodule of $$\\mathbb Z[\\alpha,\\beta]$$ is finitely generated.\n\nI'm aware that the result is true in general, since $$\\mathbb Z$$ is a PID and all submodules of a finitely generated module over a PID are finitely generated. But I'm curious if there is a particular direct way in the above case, without going through the general argument.\n\n• When $\\beta=0$ this amounts to proving that any submodule of $\\mathbb{Z}^n$ is finitely generated (where $n$ is the degree of the minimal polynomial of $\\alpha$). If you know that, it follows immediately that a submodule of a finitely generated module is finitely generated, by writing your finitely generated module as a quotient of $\\mathbb{Z}^n$ for some $n$. So, it seems that your special case should be just as difficult as the general case. – Eric Wofsey Oct 24 '18 at 23:00\n• That's a very good observation. Thanks, Eric! – Martin Argerami Oct 25 '18 at 0:44\n\n## 1 Answer\n\n[Converting my comment into an answer.]\n\nWhen $$\\beta=0$$ this amounts to proving that any submodule of $$\\mathbb{Z}^n$$ is finitely generated (where $$n$$ is the degree of the minimal polynomial of $$\\alpha$$). If you know that, it follows immediately that a submodule of a finitely generated $$\\mathbb{Z}$$-module is finitely generated, by writing your finitely generated module as a quotient of $$\\mathbb{Z}^n$$ for some $$n$$.\n\nSo, if you had a simple proof of your special case, you could very easily get a similarly simple proof that every submodule of a finitely generated $$\\mathbb{Z}$$-module is finitely generated. As a result, I would not expect any easier proof to exist in your special case." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81489086,"math_prob":0.9991088,"size":586,"snap":"2021-21-2021-25","text_gpt3_token_len":160,"char_repetition_ratio":0.16151203,"word_repetition_ratio":0.0,"special_character_ratio":0.25085324,"punctuation_ratio":0.10169491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998765,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T06:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:d3326c07-562e-4216-8d88-287bcc2c4bd5>\",\"Content-Length\":\"161967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2ab9087-996a-4eb8-89c2-b5f80325c36c>\",\"WARC-Concurrent-To\":\"<urn:uuid:859d756b-0fd1-4519-a598-e8c33b38de12>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2969804/finitely-generated-module-over-mathbb-z\",\"WARC-Payload-Digest\":\"sha1:G7HVZ2GETQTR2ZZQ43356BGYK7ACECAX\",\"WARC-Block-Digest\":\"sha1:HOM3ER7MDDRR4Y3IPGPSBQ4USDGNTGYG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622113.11_warc_CC-MAIN-20210625054501-20210625084501-00534.warc.gz\"}"}
https://blender.stackexchange.com/questions/64495/update-object-with-simulink-data
[ "# Update object with Simulink data\n\nI'm trying to simulate a simple dynamic system with Simulink (a bouncing ball) but the Simulink's 3D really sucks (because I'm on Linux and don't have the V-Realm Builder) so I was thinking to use Blender as a 3D world. I made a ball above a floor and I was thinking to change the ball's position using the data generated from a Simulink model. I'm using UDP to exchange data (and it works fine) but I can't achieve to update the scene in real-time because the refresh is really really slow. I'd like to move on the next step using data from an IMU and update the scene in real-time but I'm stuck on that.\n\nI'd like to achieve something like this\n\nHow can I obtain a decent refresh rate?\n\nPython code\n\nimport socket\nimport struct\nimport bpy\n\nport = 8888\ns = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\naddress = \"127.0.0.1\"\ns.bind((address, port))\nprint(\"Waiting for data...\\n\")\nsphere = bpy.data.objects[\"Sphere\"]\nsphere.location = (0, 0, 20)\n\nfor i in range(1000):\ndata, addr = s.recvfrom(1024)\ndata = struct.unpack('!d', data)\nz = data\nsphere.location = (0, 0, z)\nprint(\"Location:\", sphere.location)\nbpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=1)\n\n\nEDIT\n\nSolution #1\n\nI found a possible solution in this related question, which says that you'll have to edit the template. 
In my case, it looks like this (and it works fine; the only problem now is some delay, maybe because of the UDP):\n\nimport bpy\nimport math\nimport socket\nimport struct\n\nport = 8888\naddress = \"127.0.0.1\"\nbase = 20\ns = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\ns.bind((address, port))\nsphere = bpy.data.objects['Sphere']\nsphere.location = (0, 0, base)\n\nclass ModalTimerOperator(bpy.types.Operator):\n    \"\"\"Operator which runs itself from a timer\"\"\"\n    bl_idname = \"wm.modal_timer_operator\"\n    bl_label = \"Modal Timer Operator\"\n\n    _timer = None\n\n    def modal(self, context, event):\n        if event.type in {'RIGHTMOUSE', 'ESC'}:\n            self.cancel(context)\n            s.close()\n            sphere.location = (0, 0, base)\n            return {'CANCELLED'}\n\n        if event.type == 'TIMER':\n            data, addr = s.recvfrom(16)\n            data = struct.unpack('!d', data)  # unpack returns a tuple\n            z = data[0]\n            sphere.location = (0, 0, z)\n            print(\"Z:\", z)\n\n        return {'PASS_THROUGH'}\n\n    def execute(self, context):\n        wm = context.window_manager\n        self._timer = wm.event_timer_add(0.1, context.window)\n        wm.modal_handler_add(self)\n        return {'RUNNING_MODAL'}\n\n    def cancel(self, context):\n        wm = context.window_manager\n        wm.event_timer_remove(self._timer)\n\ndef register():\n    bpy.utils.register_class(ModalTimerOperator)\n\ndef unregister():\n    bpy.utils.unregister_class(ModalTimerOperator)\n\nif __name__ == \"__main__\":\n    register()\n\n    # test call\n    bpy.ops.wm.modal_timer_operator()\n\n\n(I'm confident that there is a simpler way to achieve that)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7643995,"math_prob":0.86666703,"size":2689,"snap":"2021-21-2021-25","text_gpt3_token_len":710,"char_repetition_ratio":0.108379886,"word_repetition_ratio":0.053908356,"special_character_ratio":0.28263295,"punctuation_ratio":0.19889502,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97809494,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T11:43:13Z\",\"WARC-Record-ID\":\"<urn:uuid:37c2ab75-ace2-49f0-bf63-9337dc12b8ca>\",\"Content-Length\":\"156617\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f24351c6-860e-4d36-90c2-9c1b1e0511d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf72a7ef-d920-42a5-8d8a-1a9bc1ac4a71>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://blender.stackexchange.com/questions/64495/update-object-with-simulink-data\",\"WARC-Payload-Digest\":\"sha1:Z7FXLYBR45LRN7EEJLR2VAP62X6P2ZGH\",\"WARC-Block-Digest\":\"sha1:T5F2POM5RADQ74K5RGEKTMJS2Q6X4VLR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488553635.87_warc_CC-MAIN-20210624110458-20210624140458-00178.warc.gz\"}"}
https://rdrr.io/cran/nlme/man/lme.html
[ "# lme: Linear Mixed-Effects Models In nlme: Linear and Nonlinear Mixed Effects Models\n\n lme R Documentation\n\n## Linear Mixed-Effects Models\n\n### Description\n\nThis generic function fits a linear mixed-effects model in the formulation described in Laird and Ware (1982) but allowing for nested random effects. The within-group errors are allowed to be correlated and/or have unequal variances.\n\nThe methods `lme.lmList` and `lme.groupedData` are documented separately.\n\n### Usage\n\n```lme(fixed, data, random, correlation, weights, subset, method,\nna.action, control, contrasts = NULL, keep.data = TRUE)\n\n## S3 method for class 'lme'\nupdate(object, fixed., ..., evaluate = TRUE)\n```\n\n### Arguments\n\n `object` an object inheriting from class `lme`, representing a fitted linear mixed-effects model. `fixed` a two-sided linear formula object describing the fixed-effects part of the model, with the response on the left of a `~` operator and the terms, separated by `+` operators, on the right, an `\"lmList\"` object, or a `\"groupedData\"` object. There is limited support for formulae such as `resp ~ 1` and `resp ~ 0`, and less prior to version 3.1-112. `fixed.` Changes to the fixed-effects formula – see `update.formula` for details. `data` an optional data frame containing the variables named in `fixed`, `random`, `correlation`, `weights`, and `subset`. By default the variables are taken from the environment from which `lme` is called. `random` optionally, any of the following: (i) a one-sided formula of the form `~ x1 + ... + xn | g1/.../gm`, with `x1 + ... + xn` specifying the model for the random effects and `g1/.../gm` the grouping structure (`m` may be equal to 1, in which case no `/` is required). The random effects formula will be repeated for all levels of grouping, in the case of multiple levels of grouping; (ii) a list of one-sided formulas of the form `~ x1 + ... + xn | g`, with possibly different random effects models for each grouping level. 
The order of nesting will be assumed the same as the order of the elements in the list; (iii) a one-sided formula of the form `~ x1 + ... + xn`, or a `pdMat` object with a formula (i.e. a non-`NULL` value for `formula(object)`), or a list of such formulas or `pdMat` objects. In this case, the grouping structure formula will be derived from the data used to fit the linear mixed-effects model, which should inherit from class `\"groupedData\"`; (iv) a named list of formulas or `pdMat` objects as in (iii), with the grouping factors as names. The order of nesting will be assumed the same as the order of the order of the elements in the list; (v) an `reStruct` object. See the documentation on `pdClasses` for a description of the available `pdMat` classes. Defaults to a formula consisting of the right hand side of `fixed`. `correlation` an optional `corStruct` object describing the within-group correlation structure. See the documentation of `corClasses` for a description of the available `corStruct` classes. Defaults to `NULL`, corresponding to no within-group correlations. `weights` an optional `varFunc` object or one-sided formula describing the within-group heteroscedasticity structure. If given as a formula, it is used as the argument to `varFixed`, corresponding to fixed variance weights. See the documentation on `varClasses` for a description of the available `varFunc` classes. Defaults to `NULL`, corresponding to homoscedastic within-group errors. `subset` an optional expression indicating the subset of the rows of `data` that should be used in the fit. This can be a logical vector, or a numeric vector indicating which observation numbers are to be included, or a character vector of the row names to be included. All observations are included by default. `method` a character string. If `\"REML\"` the model is fit by maximizing the restricted log-likelihood. If `\"ML\"` the log-likelihood is maximized. Defaults to `\"REML\"`. 
`na.action` a function that indicates what should happen when the data contain `NA`s. The default action (`na.fail`) causes `lme` to print an error message and terminate if there are any incomplete observations. `control` a list of control values for the estimation algorithm to replace the default values returned by the function `lmeControl`. Defaults to an empty list. `contrasts` an optional list. See the `contrasts.arg` of `model.matrix.default`. `keep.data` logical: should the `data` argument (if supplied and a data frame) be saved as part of the model object? `...` some methods for this generic require additional arguments. None are used in this method. `evaluate` If `TRUE` evaluate the new call else return the call.\n\n### Value\n\nAn object of class `\"lme\"` representing the linear mixed-effects model fit. Generic functions such as `print`, `plot` and `summary` have methods to show the results of the fit. See `lmeObject` for the components of the fit. The functions `resid`, `coef`, `fitted`, `fixed.effects`, and `random.effects` can be used to extract some of its components.\n\n### Note\n\nThe function does not do any scaling internally: the optimization will work best when the response is scaled so its variance is of the order of one.\n\n### Author(s)\n\nJosé Pinheiro and Douglas Bates [email protected]\n\n### References\n\nThe computational methods follow the general framework of Lindstrom and Bates (1988). The model formulation is described in Laird and Ware (1982). The variance-covariance parametrizations are described in Pinheiro and Bates (1996). The different correlation structures available for the `correlation` argument are described in Box, Jenkins and Reinsel (1994), Littell et al (1996), and Venables and Ripley (2002). The use of variance functions for linear and nonlinear mixed effects models is presented in detail in Davidian and Giltinan (1995).\n\nBox, G.E.P., Jenkins, G.M., and Reinsel G.C. 
(1994) \"Time Series Analysis: Forecasting and Control\", 3rd Edition, Holden–Day.\n\nDavidian, M. and Giltinan, D.M. (1995) \"Nonlinear Mixed Effects Models for Repeated Measurement Data\", Chapman and Hall.\n\nLaird, N.M. and Ware, J.H. (1982) \"Random-Effects Models for Longitudinal Data\", Biometrics, 38, 963–974.\n\nLindstrom, M.J. and Bates, D.M. (1988) \"Newton-Raphson and EM Algorithms for Linear Mixed-Effects Models for Repeated-Measures Data\", Journal of the American Statistical Association, 83, 1014–1022.\n\nLittell, R.C., Milliken, G.A., Stroup, W.W., and Wolfinger, R.D. (1996) \"SAS Systems for Mixed Models\", SAS Institute.\n\nPinheiro, J.C. and Bates., D.M. (1996) \"Unconstrained Parametrizations for Variance-Covariance Matrices\", Statistics and Computing, 6, 289–296.\n\nPinheiro, J.C., and Bates, D.M. (2000) \"Mixed-Effects Models in S and S-PLUS\", Springer.\n\nVenables, W.N. and Ripley, B.D. (2002) \"Modern Applied Statistics with S\", 4th Edition, Springer-Verlag.\n\n`corClasses`, `lme.lmList`, `lme.groupedData`, `lmeControl`, `lmeObject`, `lmeStruct`, `lmList`, `pdClasses`, `plot.lme`, `predict.lme`, `qqnorm.lme`, `residuals.lme`, `reStruct`, `simulate.lme`, `summary.lme`, `varClasses`, `varFunc`\n\n### Examples\n\n```fm1 <- lme(distance ~ age, data = Orthodont) # random is ~ age\nfm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)\nsummary(fm1)\nsummary(fm2)\n```\n\nnlme documentation built on March 26, 2022, 1:07 a.m." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7075767,"math_prob":0.8091576,"size":4030,"snap":"2022-05-2022-21","text_gpt3_token_len":1063,"char_repetition_ratio":0.10457029,"word_repetition_ratio":0.0033003301,"special_character_ratio":0.25384617,"punctuation_ratio":0.22111802,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95576715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T11:45:09Z\",\"WARC-Record-ID\":\"<urn:uuid:1fdc188a-e961-4462-bc20-44f5dbc51124>\",\"Content-Length\":\"46906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b689133f-c7a7-4265-a6a1-477619a0f476>\",\"WARC-Concurrent-To\":\"<urn:uuid:728112bb-4852-45fb-abbb-518103b7bfd6>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/nlme/man/lme.html\",\"WARC-Payload-Digest\":\"sha1:Z6NMQ5LTUNGYGFMG5BNTJI4I55RIT4QN\",\"WARC-Block-Digest\":\"sha1:DTSFZTEYD4YZJBV4ISDJK5LOFM32AYQL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662558015.52_warc_CC-MAIN-20220523101705-20220523131705-00653.warc.gz\"}"}
https://pytorch.org/tutorials/recipes/recipes/changing_default_device.html?utm_source=whats_new_tutorials&utm_medium=changing_default_device
[ "Shortcuts\n\n# Changing default device¶\n\nIt is common practice to write PyTorch code in a device-agnostic way, and then switch between CPU and CUDA depending on what hardware is available. Typically, to do this you might have used if-statements and cuda() calls to do this:\n\nNote\n\nThis recipe requires PyTorch 2.0.0 or later.\n\nimport torch\n\nUSE_CUDA = False\n\nmod = torch.nn.Linear(20, 30)\nif USE_CUDA:\nmod.cuda()\n\ndevice = 'cpu'\nif USE_CUDA:\ndevice = 'cuda'\ninp = torch.randn(128, 20, device=device)\nprint(mod(inp).device)\n\ncpu\n\n\nPyTorch now also has a context manager which can take care of the device transfer automatically. Here is an example:\n\nwith torch.device('cuda'):\nmod = torch.nn.Linear(20, 30)\nprint(mod.weight.device)\nprint(mod(torch.randn(128, 20)).device)\n\ncuda:0\ncuda:0\n\n\nYou can also set it globally like this:\n\ntorch.set_default_device('cuda')\n\nmod = torch.nn.Linear(20, 30)\nprint(mod.weight.device)\nprint(mod(torch.randn(128, 20)).device)\n\ncuda:0\ncuda:0\n\n\nThis function imposes a slight performance cost on every Python call to the torch API (not just factory functions). If this is causing problems for you, please comment on this issue\n\nTotal running time of the script: ( 0 minutes 0.005 seconds)\n\nGallery generated by Sphinx-Gallery", null, "## Docs\n\nAccess comprehensive developer documentation for PyTorch\n\nView Docs\n\n## Tutorials\n\nGet in-depth tutorials for beginners and advanced developers\n\nView Tutorials" ]
[ null, "https://www.googleadservices.com/pagead/conversion/795629140/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75752515,"math_prob":0.49252492,"size":1232,"snap":"2023-40-2023-50","text_gpt3_token_len":318,"char_repetition_ratio":0.13273616,"word_repetition_ratio":0.057471264,"special_character_ratio":0.26461038,"punctuation_ratio":0.17938931,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9503241,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T00:21:33Z\",\"WARC-Record-ID\":\"<urn:uuid:c65be96a-c597-4152-9945-25fe2ae374b8>\",\"Content-Length\":\"63127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:115b1724-bc79-460a-aeef-e3f3e199a390>\",\"WARC-Concurrent-To\":\"<urn:uuid:613c78bd-9462-475c-8973-9ec8d245b98c>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://pytorch.org/tutorials/recipes/recipes/changing_default_device.html?utm_source=whats_new_tutorials&utm_medium=changing_default_device\",\"WARC-Payload-Digest\":\"sha1:GQS3X6MQEGMZI6G6J6653Q3VNCRUDB7V\",\"WARC-Block-Digest\":\"sha1:QE7B5WMLGSGIQQ2OMFZFZQPA2NAW3PYM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510462.75_warc_CC-MAIN-20230928230810-20230929020810-00180.warc.gz\"}"}
https://en.wikiversity.org/wiki/Advanced_elasticity/Mooney-Rivlin_material
[ "A Mooney-Rivlin solid is a generalization of the w:Neo-Hookean solid model, where the strain energy W is a linear combination of two invariants of the w:Finger tensor $\\mathbf {B}$", null, ":\n\n$W=C_{1}({\\overline {I}}_{1}-3)+C_{2}({\\overline {I}}_{2}-3)$", null, ",\n\nwhere ${\\overline {I}}_{1}$", null, "and ${\\overline {I}}_{2}$", null, "are the first and the second invariant of w:deviatoric component of the w:Finger tensor:\n\n$I_{1}=\\lambda _{1}^{2}+\\lambda _{2}^{2}+\\lambda _{3}^{2}$", null, ",\n$I_{2}=\\lambda _{1}^{2}\\lambda _{2}^{2}+\\lambda _{2}^{2}\\lambda _{3}^{2}+\\lambda _{3}^{2}\\lambda _{1}^{2}$", null, ",\n$I_{3}=\\lambda _{1}^{2}\\lambda _{2}^{2}\\lambda _{3}^{2}$", null, ",\n\nwhere: $C_{1}$", null, "and $C_{2}$", null, "are constants.\n\nIf $C_{1}={\\frac {1}{2}}G$", null, "(where G is the w:shear modulus) and $C_{2}=0$", null, ", we obtain a w:Neo-Hookean solid, a special case of a Mooney-Rivlin solid.\n\nThe stress tensor $\\mathbf {T}$", null, "depends upon Finger tensor $\\mathbf {B}$", null, "by the following equation:\n\n$\\mathbf {T} =-p\\mathbf {I} +2C_{1}\\mathbf {B} +2C_{2}\\mathbf {B} ^{-1}$", null, "The model was proposed by w:Melvin Mooney and w:Ronald Rivlin in two independent papers in 1952.\n\n## Uniaxial extension", null, "Comparison of experimental results (dots) and predictions for w:Hooke's law(1, blue line), w:Neo-Hookean solid(2, red line) and Mooney-Rivlin solid models(3, green line)\n\nFor the case of uniaxial elongation, true stress can be calculated as:\n\n$T_{11}=\\left(2C_{1}+{\\frac {2C_{2}}{\\alpha _{1}}}\\right)\\left(\\alpha _{1}^{2}-\\alpha _{1}^{-1}\\right)$", null, "and w:engineering stress can be calculated as:\n\n$T_{11eng}=\\left(2C_{1}+{\\frac {2C_{2}}{\\alpha _{1}}}\\right)\\left(\\alpha _{1}-\\alpha _{1}^{-2}\\right)$", null, "The Mooney-Rivlin solid model usually fits experimental data better than w:Neo-Hookean solid does, but requires an additional empirical constant.\n\n## Rubber\n\nElastic 
response of rubber-like materials is often modelled based on the Mooney-Rivlin model.\n\n## Source\n\n• C. W. Macosko Rheology: principles, measurement and applications, VCH Publishers, 1994, ISBN 1-56081-579-5\n\n## Notes and References\n\n1. The characteristic polynomial of the linear operator corresponding to the second rank three-dimensional Finger tensor is usually written\n$p_{B}(\\lambda )=\\lambda ^{3}-a_{1}\\,\\lambda ^{2}+a_{2}\\,\\lambda -a_{3}$", null, "In this article, the trace $a_{1}$", null, "is written $I_{1}$", null, ", the next coefficient $a_{2}$", null, "is written $I_{2}$", null, ", and the determinant $a_{3}$", null, "would be written $I_{3}$", null, "." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cafb0ef39b0f5ffa23c170aa7f7b4e718327c4d1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5182b2c83b731279763a1d9dca93501ae2949efe", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a27efdfedebd44d2aec539512c48dbfa5055a41e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/28feaea9acb554b8dc6aed3ddff69bdc88dff62d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4e6d1a2195a67d647220a351423ed7f8b72d55f0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c9a07c8f091014634ef3a37bba615e049a49d155", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6c746fd61f05cdde84619b1121aad5ffd524ce1c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/babf569931f1a7b5182b9bec51873c2f5692fbb8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ec545f7870665e1028b7492746848d149878808", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2316ef5b037d26fafd38da3a1af4ee68df5009dd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7eb6f21ac2690dc4ec9294db5d018379bf7c1c9d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9593e3b995a1b57c078873a5ea186c7012e1a5ee", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cafb0ef39b0f5ffa23c170aa7f7b4e718327c4d1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/752ae651b7796df3729deb3f497c40621134aa37", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/c/ca/Moonie-Rivlin.PNG/350px-Moonie-Rivlin.PNG", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/14087adc1804e4a6b11a38a542daff59abd92439", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/74d492d507705092963a7ae2968113b1a0195cb7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/bfdd851dd33fd5ebb77b23c78e578b551063b7c6", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bbf42ecda092975c9c69dae84e16182ba5fe2e07", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/03f18d041b2df30adef07164dbf285878893dedc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/270580da7333505d9b73697417d0543c43c98b9f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5e3506ae39df854f347365bae6f326ef4f565be5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/602d08dd865689204f563ce6f0de095c8ca67410", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/becba5d3350c4dd244f3cda48eb13439f21ed350", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8296613,"math_prob":0.9998072,"size":1319,"snap":"2023-14-2023-23","text_gpt3_token_len":315,"char_repetition_ratio":0.119391635,"word_repetition_ratio":0.009708738,"special_character_ratio":0.22137983,"punctuation_ratio":0.15891473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999429,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,null,null,2,null,2,null,null,null,null,null,2,null,3,null,2,null,2,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T16:36:05Z\",\"WARC-Record-ID\":\"<urn:uuid:7a6909d2-6070-4e2d-a1d6-b0d465d596cb>\",\"Content-Length\":\"71040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb6b0dd5-aff6-47d1-aebf-edf5d0c282fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:c92fb428-da7d-4c73-8553-a4f36698556e>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikiversity.org/wiki/Advanced_elasticity/Mooney-Rivlin_material\",\"WARC-Payload-Digest\":\"sha1:CRKALJ6OLWHBLSUMR4R3ZJJZAFSLIYHJ\",\"WARC-Block-Digest\":\"sha1:AUZ4TUMUDZH4BIEY74ND2E2YXLKU22WV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945473.69_warc_CC-MAIN-20230326142035-20230326172035-00116.warc.gz\"}"}
https://warwick.ac.uk/fac/sci/maths/currentstudents/ughandbook/ext/ma142/
[ "# MA142 Calculus 1\n\nLecturer: Roger Tribe\n\nTerm(s): Term 1\n\nStatus for Mathematics students: Core for M.O.R.S.E., Data Science and Discrete Maths students\n\nCommitment: 20 lectures, written assignments\n\nAssessment: 15% from assignments and 85% from 2 hour January exam\n\nRecommended prerequisites: MA1K2 Refresher Mathematics\n\nSynergies: specifically:\n\nAims: This is the first half of a year long, rigorous, one variable calculus course.\n\nContent: Convergence and divergence for sequences. Completeness of the real numbers. Infinite Series. Continuity for functions on R.\n\nApplications to computational problems and to calculations in probability.\n\nObjectives: Along with the theory listed above, confidence and understanding of careful proof writing.\n\nBooks:\n\nReal Analysis by Howie, John M, 2001\n\nMathematical Analysis and Proof, Stirling, David S. G., 1997\n\nAnalysis by its History, Hairer, E. and Wanner, G. 2008" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8262639,"math_prob":0.59477097,"size":1035,"snap":"2023-40-2023-50","text_gpt3_token_len":250,"char_repetition_ratio":0.11251213,"word_repetition_ratio":0.0,"special_character_ratio":0.2289855,"punctuation_ratio":0.21465969,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95339984,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T11:23:51Z\",\"WARC-Record-ID\":\"<urn:uuid:eb32f595-fd7d-4fd8-8db3-69f9f89a0920>\",\"Content-Length\":\"31427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afb78e42-7e72-4a29-9db4-b2e4066fcbec>\",\"WARC-Concurrent-To\":\"<urn:uuid:1e9b6f6c-5289-45cc-af37-07ab29a11eb3>\",\"WARC-IP-Address\":\"137.205.28.41\",\"WARC-Target-URI\":\"https://warwick.ac.uk/fac/sci/maths/currentstudents/ughandbook/ext/ma142/\",\"WARC-Payload-Digest\":\"sha1:DOP65MOBX4KCTW5ZOJLVE2F57BKXU4DJ\",\"WARC-Block-Digest\":\"sha1:MIQTFUL4F3BII3UWZKNLVLY52SL3K4BW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100399.81_warc_CC-MAIN-20231202105028-20231202135028-00607.warc.gz\"}"}
https://questions.llc/questions/591296/first-term-and-larst-term-of-a-geometric-progression-is-42-if-the-forth-term-is-greater
[ "# first term and larst term of a geometric progression is 42.if the forth term is greater than the second term by 168, find the first term.the forth term.\n\n1. 👍\n2. 👎\n3. 👁\n4. ℹ️\n5. 🚩\n\n1. check your typing.\n\nIf the first and last term of a GS are the same, then r = 1, that is, the terms do not change.\nBut then how can there be a difference between the fourth and second term?\n\nDid you mean..\n\"The sum of the 1st and last terms is 42\" ?\n\nEither way, you would have a major problem:\nyou have 3 unknowns, a, r, and n\nbut only two sets of information.\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩\n2. first term and larst term of a geometric progression is 42.if the forth term is greater than the second term by 168, find the first term.the forth term\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77398473,"math_prob":0.9770319,"size":353,"snap":"2022-40-2023-06","text_gpt3_token_len":139,"char_repetition_ratio":0.18911175,"word_repetition_ratio":0.2962963,"special_character_ratio":0.3286119,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97813225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T06:09:27Z\",\"WARC-Record-ID\":\"<urn:uuid:0a711aa0-a3a1-4d14-b2d1-e0e069205b35>\",\"Content-Length\":\"24098\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63429f63-57f0-40e7-a95e-dd88a79fad4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7cdb8fd5-5c5f-4755-a8bb-eae1461e4173>\",\"WARC-IP-Address\":\"45.79.29.166\",\"WARC-Target-URI\":\"https://questions.llc/questions/591296/first-term-and-larst-term-of-a-geometric-progression-is-42-if-the-forth-term-is-greater\",\"WARC-Payload-Digest\":\"sha1:QXWEKRD7E2EDJSPHFEKY52BNVGITUNTW\",\"WARC-Block-Digest\":\"sha1:XEZKHSXZAFUFPKTYAXK2WVD54UP3N6XD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337480.10_warc_CC-MAIN-20221004054641-20221004084641-00674.warc.gz\"}"}
https://reference.wolfram.com/language/ref/StandardDeviation.html
https://www.aanda.org/articles/aa/full_html/2011/05/aa15955-10/aa15955-10.html
A&A, Volume 529, A93 (May 2011)
Cosmology (including clusters of galaxies)
https://doi.org/10.1051/0004-6361/201015955
Published online: 08 April 2011

## 1. Introduction

Weak gravitational lensing is a unique technique that allows us to probe the distribution of dark matter in the Universe. It measures the very small distortions in the shapes of faint background galaxies that are caused by foreground mass structures. The technique requires a very accurate measurement of the shape parameters, as well as the removal of the systematic effects that affect them. In addition, the galaxies used in a weak-lensing analysis must be carefully selected so that they do not include a significant fraction of unlensed sources with redshift lower than that of the lens. Such contamination would dilute, and therefore underestimate, the signal, in particular toward the cluster center (see Broadhurst et al. 2005): this effect may be the reason why weak lensing under-predicts the observed Einstein radius by a factor of ~2.5 (Smith et al. 2001; Bardeau et al. 2005). Ideally, photometric redshifts would be available. Even though weak lensing does not require highly accurate redshift estimates for individual galaxies, on average we need at least σz/(1 + z) < 0.1: this implies having observations in several bands, spanning a wide wavelength range. If few bands are available, an uncontaminated background sample can be obtained by selecting only galaxies redder than the cluster red sequence (Broadhurst et al. 2005). However, this method often does not yield a number density of background sources high enough for an accurate weak-lensing measurement. Including galaxies bluer than the red sequence (Okabe et al. 2010) requires a careful selection of the color offset, because bluer galaxies can still be contaminated by late-type members of the cluster. Finally, if more than two bands are available, Medezinski et al.
(2010) discussed how to identify cluster members and the foreground population as overdensities in color-color space.

In this paper we exploit deep uBVRIz images of the cluster Abell 383, taken with the MEGACAM and SUPRIME cameras mounted on the 3.6 m CFHT and the 8 m SUBARU telescopes, respectively, which are publicly available. The mass of the cluster is derived by weak lensing, and the values obtained by different selection methods are compared. The properties of the cluster are reviewed in Sect. 2. The data reduction is discussed in Sect. 3. In Sect. 4 we describe the algorithm used for the shape measurement and some improvements for the removal of biases. The accuracy of the mass estimate is derived by a comparison with simulations. In Sect. 5 we first summarize the different methods for the selection of the background galaxies from which the lensing signal is measured. These methods are applied to Abell 383, and the masses derived in this way are then compared. Finally, in Sect. 6 we compare the mass derived in this paper with literature values, both from X-rays and from weak lensing; we also compare our results with the mass expected for the R-band luminosity, which is derived from the luminosity function of Abell 383.

A standard cosmology was adopted in this paper: ΩΛ = 0.7, ΩM = 0.3, H0 = 70 km s-1 Mpc-1, giving a scale of 2.92 kpc/arcsec at the redshift of Abell 383.

## 2. Abell 383

Abell 383 is an apparently fairly relaxed cluster of galaxies of richness class 2 and of Bautz-Morgan type II-III (Abell et al. 1989), located at z = 0.187 (Fetisova et al. 1993). It is dominated by the central cD galaxy, a blue-core emission-line brightest cluster galaxy (BCG) that is aligned with the X-ray peak (Smith et al. 2001).
Abell 383 belongs to the XBACs sample (X-ray-Brightest Abell-type Clusters), observed in the ROSAT All-Sky Survey (RASS; Voges 1992): its X-ray luminosity is 8.03 × 10^44 erg s-1 in the 0.1-2.4 keV band and its X-ray temperature is 7.5 keV (Ebeling et al. 1996). A small core radius, a steep surface brightness profile, and an inverted deprojected temperature profile show evidence of the presence of a cooling flow, as supported by the strong emission lines in the optical spectra of its BCG (Rizza et al. 1998).

An extensive study of this cluster was carried out by Smith et al. (2001), in which lensing and X-ray properties were analyzed in deep optical HST images and ROSAT HRI data, respectively. A complex system of strongly lensed features (a giant arc, two radial arcs in the center, and numerous arclets) was identified in its HST images, some of which are also visible in the deep SUBARU data used here.

## 3. Data retrieval and reduction

The cluster Abell 383 was observed with the SUPRIME camera mounted on the 8 m SUBARU telescope: SUPRIME is a ten-CCD mosaic with a 34 × 27 arcmin2 field of view (Miyazaki et al. 2002). The data are publicly available in the BVRIz filters, with total exposure times of 7800 s (R), ~6000 s (B, V), 3600 s (I), and 1500 s (z); they were retrieved using the SMOKA Science Archive facility. The data were collected from seven different runs and amount to ~55 GB. Details about the observation nights and exposure times for each band are given in Table 1. We reduced the data with the VST-Tube imaging pipeline, which was specifically developed for the VLT Survey Telescope (VST, Capaccioli et al. 2005) but is adaptable to other existing or future multi-CCD cameras (Grado et al. 2011).

The field of Abell 383 was also observed in the u* band with the MEGACAM camera attached to the Canada-France-Hawaii Telescope (CFHT), with a total exposure time of 10541 s.
The preprocessed images were retrieved from the CADC archive.

Table 1. Summary of observations with the MEGACAM (u) and SUPRIME (BVRIz) cameras used in this paper.

The basic reduction steps were performed for each frame, namely overscan correction, flat fielding, correction of the geometric distortion caused by the optics, and sky background subtraction. To improve the photometric accuracy, a sky superflat was used.

Geometric distortions were first removed from each exposure with the Scamp tool, taking the USNO-B1 as the astrometric reference catalog. The internal accuracy provided by Scamp, as measured by the positions of the same sources in different exposures, is ~0.05 arcsec. The different exposures were then stacked using SWarp. The coaddition was made in such a way that all the images had the same scale and size.

The photometric calibration of the BVRI bands was performed using the standard Stetson fields, which were observed in the same nights as the data. For the z band we used a pointing that is also covered by the Stripe 82 scans of the Sloan Digital Sky Survey (SDSS); the SDSS photometry of sources identified as point-like was used to derive the zero point of the SUBARU image.

For the MEGACAM-CFHT u* band, reduced images and photometric zero points are already available from the CADC public archive, hence only astrometric calibration and stacking were required.

Table 2 summarizes the photometric properties of the final coadded images (average FWHMs and limiting magnitudes for point-like sources) for each band. All magnitudes were converted to the AB system; magnitudes of sources classified as galaxies were corrected for Galactic extinction using the Schlegel maps (Schlegel et al. 1998).

The weak-lensing analysis was done on the R-band image. The masking of reflection haloes and diffraction spikes near bright stars was performed by ExAM, a code developed for this purpose.
In short, ExAM takes the SExtractor catalog as input, locates the stellar locus in the size-magnitude diagram (see Sect. 4.1), picks out stars with spike-like features from the isophotal shape analysis, derives a mask region file that may be visualized in the ds9 software, and finally creates a mask image in FITS format. The reflection haloes are masked by estimating the background contrast near the bright stars, whose positions are obtained from the USNO-B1. The effective area available after the removal of the regions masked in this way was 801 arcmin2. Catalogs for the other bands were extracted with SExtractor in dual-image mode, with the R-band image used as the detection image.

Table 2. Photometric properties of the coadded images.

Fig. 1. Density plots comparing the photometric redshifts in the Abell 383 field available from the SDSS and those computed here from the uBVRIz photometry.

Photometric redshifts were computed from the uBVRIz photometry using the zebra code (Feldmann et al. 2006). This software allows one to define six basic templates (elliptical, Sbc, Sbd, irregular, and two starburst SEDs), and to compute log-interpolations between each pair of adjacent templates. We first applied the offsets derived within the COSMOS survey by Capak et al. (2007) to the BVz magnitudes, that is +0.19 (B), +0.04 (V), -0.04 (z). We then convolved the stellar spectra from the Pickles library (Pickles 1985) with the transmission curves used for each filter and derived the offsets for the other filters, that is: 0.0 (u), 0.0 (R), +0.05 (I). Figure 3 shows the comparison of the model colors with those derived for the stars in our catalogs after the above offsets were applied.
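Applying such per-band offsets before template fitting is simple magnitude arithmetic; a minimal sketch, where the sign convention (corrected = observed + offset) and the example magnitudes are assumptions for illustration:

```python
# Per-band magnitude offsets quoted in the text; the sign convention
# (corrected = observed + offset) is an assumption of this sketch.
OFFSETS = {"u": 0.0, "B": +0.19, "V": +0.04, "R": 0.0, "I": +0.05, "z": -0.04}

def apply_offsets(mags):
    """mags: dict of band -> observed AB magnitude; returns corrected dict."""
    return {band: m + OFFSETS[band] for band, m in mags.items()}

# Hypothetical source magnitudes:
corrected = apply_offsets({"B": 24.10, "V": 23.55, "z": 22.80})
```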
We also verified that the offsets derived in this way are consistent with those obtained by running zebra in the so-called photometry-check mode, which computes the magnitude offsets that minimize the average residuals of the observed versus the template magnitudes.

We then removed from the catalog those galaxies with a photometric redshift zph > 3 and those with σz/(1 + z) > 0.1, where σz was derived from the 68%-level errors computed by zebra. The distribution of these photometric redshifts is displayed in Fig. 4. The accuracy of these photometric redshifts was estimated from the comparison with the photometric redshifts of galaxies in the SDSS (DR7), whose rms error is ~0.025 for r' < 20 mag. We derived (Fig. 1) a systematic offset Δz/(1 + z) = 0.003 and an rms error σΔz/(1 + z) = 0.07. As an additional check, we extracted from our catalog those galaxies classified by ZEBRA as early-type, with R < 23 mag and |zphot − 0.187| < 0.1. As expected, these galaxies define a red sequence (Fig. 2): this was fitted as V − R = a + bR, with a = 0.5, b = −6 × 10^-3.

Fig. 2. V − R vs. R color plot: red and blue points are galaxies at zphot = 0.187 ± 0.1, classified as early- and late-type, respectively; dashed lines show the ±1σ levels.

Fig. 3. Observed (red dots) and model colors (black dots) for stars after the offsets given in the text were applied. Model colors were derived by convolving the Pickles library of stellar spectra with the filter transmission curves.

## 4. Shape measurement

Ellipticities of galaxies were estimated with the KSB approach (Luppino & Kaiser 1997): even if this algorithm does not allow one to achieve accurate measurements of very low shear signals, γ ≲ 10^-3, it is nevertheless adequate for weak lensing by clusters, as discussed e.g. by Gill et al. (2009) and Romano et al.
(2010).

In our KSB implementation, the SExtractor software was modified to compute all the relevant quantities, namely the raw ellipticity e, the smear polarizability Psm, and the shear polarizability Psh. The centers of the detected sources were measured with the windowed centroids in SExtractor.

The KSB approach assumes that the point spread function (PSF) can be described as the sum of an isotropic component (simulating the effect of seeing) and an anisotropic part. The correction of the observed ellipticity eobs for the anisotropic part is computed as

e_aniso = e_obs − P^sm p,   (1)

where (starred terms indicate that they are derived from measurements of stars)

p = (P^sm*)^-1 e_obs*.   (2)

The intrinsic ellipticity e of a galaxy and the reduced shear, g = γ/(1 − κ), are then related by

e_aniso = e + P^γ g.   (3)

The term P^γ, introduced by Luppino & Kaiser (1997) as the pre-seeing shear polarizability, describes the effect of seeing and is defined to be

P^γ = P^sh − P^sm (P^sm*)^-1 P^sh* ≡ P^sh − P^sm q.   (4)

The final output of the pipeline is the quantity e_iso = e_aniso/P^γ, from which the average reduced shear is obtained as ⟨g⟩ = ⟨e_iso⟩, provided that the average intrinsic ellipticity vanishes, ⟨e⟩ = 0.

Fig. 4. Distribution of the photometric redshifts computed from the uBVRIz data for R < 25 mag.

We calculated the ellipticity by using a window function to suppress the outer, noisy part of a galaxy: the function is usually chosen to be a Gaussian of size θ. The size of the window function is commonly taken as the radius containing 50% of the total flux of the galaxy (as given e.g. by the FLUX_RADIUS parameter in SExtractor). In our case, we proceeded as follows. We defined a set of bins with θ varying between 2 and 10 pixels (sources with smaller or larger sizes are rejected in our analysis), with a step of 0.5 pixel. For each bin we computed eobs, Psh, and Psm, and the ellipticity signal-to-noise ratio defined by Eq. (16) in Erben et al.
(2001):

SN_e = Σ_i I(θ_i) W(θ_i) / (σ_sky √(Σ_i W²(θ_i))),   (5)

where I is the surface brightness distribution of the object, W the window function, and σ_sky the sky background noise. The optimal size of the window function, θmax, is then defined as the value that maximizes SNe. Figure 5 shows the typical trend of SNe, normalized for display purposes, as a function of θ. Evidently (Fig. 6) there is on average a constant offset between θmax and FLUX_RADIUS. Below SNe ~ 5, the FLUX_RADIUS starts to decrease: this provides an estimate of the limit on SNe below which the shape measurement is no longer meaningful.

Fig. 5. SNe as a function of the window function size, θ, used to measure ellipticities. For display purposes, galaxies were selected to have the same value of θmax, and SNe was normalized so that min(SNe) = 0, max(SNe) = 1. The vertical lines indicate the average (solid) and standard deviation (dashed) of FLUX_RADIUS for the same galaxies.

Fig. 6. Running median of FLUX_RADIUS − θmax as a function of SNe. The vertical line shows the limit chosen for the selection of background galaxies.

Fig. 7. PSF anisotropy correction derived with the GAM algorithm: in the first three panels we show the ellipticity pattern (measured, fitted, and residuals; X and Y are in pixels). The scale is displayed by the arrows in the upper right part of each panel (e = 0.05). In the next panel, black dots are the measured values and green dots are the values after the correction; values rejected during the fitting are marked in red. The last row shows for comparison the corrected ellipticities obtained using a third-degree polynomial for the fit.

The terms p and q, derived from stars, must be evaluated at each galaxy position: this is usually done by fitting them with a polynomial (see e.g. Radovich et al. 2008), whose degree must be chosen to follow the observed trend without overfitting. The usage of the window function introduces a calibration factor, which is compensated for by the Pγ term.
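The KSB correction chain (Eqs. (1)-(4)) can be sketched with scalars standing in for the two-component ellipticities and 2×2 polarizability tensors; this simplification, and all the numerical values below, are assumptions for illustration only:

```python
def ksb_correct(e_obs, P_sm, P_sh, e_star, P_sm_star, P_sh_star):
    """Scalar sketch of the KSB correction chain.

    Stars provide the anisotropy kernel p and the q term entering
    P_gamma; in the real pipeline p and q are fitted smoothly over
    the field and evaluated at each galaxy position.
    """
    p = e_star / P_sm_star          # Eq. (2): anisotropy kernel from stars
    q = P_sh_star / P_sm_star       # stellar term used in Eq. (4)
    e_aniso = e_obs - P_sm * p      # Eq. (1): remove the PSF anisotropy
    P_gamma = P_sh - P_sm * q       # Eq. (4): pre-seeing shear polarizability
    return e_aniso / P_gamma        # e_iso, whose average estimates <g>

# Hypothetical values: a galaxy with e_obs = 0.21 and a 2% stellar anisotropy.
e_iso = ksb_correct(e_obs=0.21, P_sm=0.8, P_sh=1.1,
                    e_star=0.02, P_sm_star=0.9, P_sh_star=0.95)
```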
This implies that the stellar terms must be computed and fitted with the same value of θ used for each galaxy (Hoekstra et al. 1998). An alternative approach, not based on a polynomial of constant (and somewhat arbitrary) degree, is given e.g. by generalized additive models: we found that a good result is provided by the function gam in the mgcv library (Wood 2011) of the R language. Figure 7 shows the fit and the residuals of the anisotropic PSF component: from the comparison between the results obtained with polynomial and GAM fitting, we see that in the latter case we obtain lower residuals, in particular at the borders of the image. To quantify the improvement with respect to the polynomial, we obtain ⟨eaniso,1⟩ = (1 ± 5) × 10^-4, ⟨eaniso,2⟩ = (−2 ± 9) × 10^-4 with a third-degree polynomial, and ⟨eaniso,1⟩ = (2 ± 4) × 10^-4, ⟨eaniso,2⟩ = (1 ± 7) × 10^-4 with the GAM algorithm. The values of the fitted terms p and q at the positions of the galaxies are predicted by gam, which also provides an estimate of the standard errors of the predictions, Δp and Δq. From error propagation, the uncertainty on eiso was computed as

(Δe_iso)² = (Δe_aniso/P^γ)² + (e_aniso ΔP^γ/(P^γ)²)²,   (6)

where (Δe_aniso)² = (Δe_obs)² + (P^sm Δp)² and (ΔP^γ)² = (P^sm Δq)²; uncertainties on the measured values of Psm and Psh were not considered.

For each galaxy, a weight is defined as

w = 1/((Δe_iso)² + (Δe_0)²),   (7)

where Δe0 ~ 0.3 is the typical intrinsic rms of galaxy ellipticities.

### 4.1. Star-galaxy classification

Stars and galaxies were separated in the magnitude (MAG_AUTO) vs. size plot. Instead of using e.g. FLUX_RADIUS as the estimator of size, we used the quantity δ = MU_MAX − MAG_AUTO, where MU_MAX is the peak surface brightness above the background. Saturated stars were found in the locus of sources with constant MU_MAX; in the δ vs. MAG_AUTO plot, stars are identified as the sources in the vertical branch. Sources with δ lower than that of stars were classified as spurious detections.
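The selection of the stellar vertical branch in the δ = MU_MAX − MAG_AUTO plane, including the ~2σ clip about its median, can be sketched as follows; the bright-magnitude cut used to locate the branch and the toy catalog are assumptions of this sketch:

```python
from statistics import median, stdev

def select_stars(delta, mag, bright_cut=21.0):
    """Sketch of star selection in the delta = MU_MAX - MAG_AUTO plane.

    For point sources delta is nearly constant (the vertical branch),
    so we locate it with the median delta of bright sources and keep
    objects within ~2 sigma of it. bright_cut is a hypothetical limit.
    """
    branch = [d for d, m in zip(delta, mag) if m < bright_cut]
    mu, sigma = median(branch), stdev(branch)
    return [abs(d - mu) < 2.0 * sigma for d in delta]

# Toy catalog: five point-like sources near delta = -1, two extended ones.
flags = select_stars(
    delta=[-1.00, -0.99, -1.01, -1.02, -0.98, 0.5, 1.2],
    mag=[18.0, 19.0, 20.0, 18.5, 19.5, 22.0, 23.0])
```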
In addition, we rejected those sources for which δ is ~2σ higher than the median value. This excludes from the sample of stars used to compute the PSF correction terms those sources whose shape measurement may be wrong owing to close blended sources, noise, etc.

We additionally excluded those galaxies with w < 1 or SNe < 5, for which the ellipticity measurement is not meaningful.

### 4.2. Error estimate in shear and mass measurement

For the faint galaxies that we used for the weak-lensing analysis, the ellipticity is underestimated because of noise. This effect is not included in the Pγ term, which can only be computed for stars with a high signal-to-noise ratio. Schrabback et al. (2010) proposed the following parametrization of this bias as a function of the signal-to-noise ratio:

e_k = e_m/(1 + m), with m = a SN_e^b,   (8)

where em and ek are the ellipticities before and after the correction, respectively. These parameters were derived with the STEP1 (Heymans et al. 2006) and STEP2 (Massey et al. 2007) simulations, where both the PSF and the shear are constant in each simulated image. We obtain a = −0.1 and b = −0.45, which corresponds to a bias m changing from ~5% for SNe = 5 to <2% for SNe > 50. After this correction was applied, we again computed the average shear from the STEP1 and STEP2 simulations, and obtained a typical bias of ~3% for SNe = 5.

We then estimated the accuracy of the mass that can be obtained from an image with the same noise and depth as the R-band SUBARU image. To this end we dropped the assumption of constant shear and produced more realistic simulations: the effect of weak lensing by a galaxy cluster on galaxy shapes was produced with the shuff code, which will be described in a separate paper (Huang et al., in prep.). To summarize, the code takes as input a catalog of galaxies produced by the Stuff tool; it computes the shear produced by a standard mass profile (e.g.
Navarro-Frenk-White, NFW hereafter) and applies it to the ellipticities of the galaxies behind the cluster. This catalog is then fed to the SkyMaker software, configured with telescope parameters suitable for the SUBARU telescope and with the exposure time of the R-band image, which produces a simulated image; the background rms of this image was set to be as close as possible to that of the real image. We considered for the lens a range of masses, log Mvir/M⊙ = 13.5, 14, 14.5, 15.0, and an NFW mass profile with cvir = 6. Each simulation was repeated 50 times for each mass value, randomly changing the morphology, position, and redshift of the galaxies.

For each of these images, we ran our lensing pipeline with the same configuration as for the real data. The density of the background galaxies used for the lensing analysis was ~20 gals arcmin^-2. The fit of the mass was performed as described in Radovich et al. (2008) and Romano et al. (2010): the expressions for the radial dependence of the tangential shear γT derived by Bartelmann (1996) and Wright & Brainerd (2000) were used, and the NFW parameters (Mvir, cvir) were derived using a maximum likelihood approach. In addition, the 2D projected mass can be derived in a non-parametric way by aperture densitometry, where the mass profile of the cluster is computed by the ζ statistic (Fahlman et al. 1994; Clowe et al. 1998):

ζ(θ1) = κ̄(θ ≤ θ1) − κ̄(θ2 < θ ≤ θout)
      = 2 ∫_θ1^θ2 ⟨γT⟩ d ln θ + 2/(1 − θ2²/θout²) ∫_θ2^θout ⟨γT⟩ d ln θ.   (9)

The mass is estimated as M2D(< θ1) = π (Dl θ1)² ζ(θ1) Σcr, and θout is chosen so that the mean convergence in the annulus θ2 < θ ≤ θout is negligible.

The average errors on the mass estimate obtained in this way are displayed in Table 3, showing that masses can be estimated within an uncertainty of <20% for M ≥ 10^14 M⊙. This accuracy only includes the contribution of the shape measurement and of the mass-fitting method; it does not include the uncertainty owing to the selection of the lensed galaxies.

Finally, the masses derived by aperture densitometry are ~1.3 times higher than those obtained by mass fitting: this agrees with Okabe et al.
(2010), who find M2D/M3D = 1.34 for the virial overdensity.

Table 3. Masses derived from simulations with the NFW model fitting (M3D) and aperture densitometry (M2D).

Table 4. Best-fit NFW parameters.

Fig. 8. Shear profiles obtained with the different selection methods (see Table 4, where the parameters derived for each model are given). The fitting was done using the maximum likelihood approach. Binned points are shown for display purposes only. In panel a, the curves obtained by the two models (nfw/mnfw) overlap.

## 5. Results

One of the most critical sources of systematic error, which can lead to an underestimation of the true weak-lensing signal, is the dilution of the distortion owing to the contamination of the background galaxy catalog by unlensed foreground and cluster member galaxies (see e.g. Broadhurst et al. 2005). The dilution effect increases as the cluster-centric distance decreases, because the number density of cluster galaxies that contaminate the faint galaxy catalog is expected to roughly follow the underlying density profile of the cluster. Thus, correcting for the dilution effect is important to obtain unbiased, accurate constraints on the cluster parameters and mass profile.

As discussed by Broadhurst et al. (2005), Okabe et al. (2010), and Oguri et al. (2010), the selection of background galaxies to be used for the weak-lensing analysis can be made by keeping only those galaxies redder than the cluster red sequence. However, this selection produces a low number density (10 galaxies/arcmin2 in our case) and correspondingly high uncertainties in the derived parameters. Below we compare the results obtained by different methods. We first assumed that no redshift information is available, and that photometry is available from only one band (magnitude cut) or from more than two bands (color selection). Finally, we included the photometric redshifts in our analysis.
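A toy classifier illustrates these selection regimes, using the red-sequence fit quoted in Sect. 3 (V − R = 0.5 − 6 × 10^-3 R); the magnitude range and color margin here are illustrative choices, not the paper's tuned cuts:

```python
# Red-sequence line fitted earlier in the text: V - R = a + b * R.
A_RS, B_RS = 0.5, -6e-3

def classify(R, VmR, mag_range=(21.0, 26.0), color_margin=0.1):
    """Toy selection relative to the cluster red sequence.

    Galaxies within `color_margin` of the sequence are treated as likely
    cluster members, redder ones as background candidates, bluer ones as
    an ambiguous 'blue' class. Margins are hypothetical.
    """
    if not (mag_range[0] < R < mag_range[1]):
        return "rejected"
    offset = VmR - (A_RS + B_RS * R)
    if abs(offset) <= color_margin:
        return "cluster"
    return "background" if offset > color_margin else "blue"
```

For example, at R = 23 the sequence sits at V − R = 0.362, so a galaxy with V − R = 0.36 is flagged as a likely member, while one with V − R = 0.70 is a red background candidate.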
The density of background galaxies is then ~25-30 galaxies/arcmin2, see Table 4.

To derive the mass, we need to know the critical surface density:

Σcr = c²/(4πG) × Ds/(Dl Dls),   (10)

with Dls, Ds, and Dl the angular diameter distances between lens and source, observer and source, and observer and lens, respectively. This quantity should be computed for each lensed galaxy. Because the reliability of the photometric redshifts of the faint background galaxies is not well known, we prefer to adopt the single-sheet approximation, where all background galaxies are assumed to lie at the same redshift zs, defined by β(zs) = ⟨β(z)⟩, with β = Dls/Ds. For the selections based only on magnitude or colors, this value was derived from the COSMOS catalog of photometric redshifts (Capak et al. 2007), to which the same cuts as those used for the Abell 383 catalog were applied. Later on, we computed β(zs) from the photometric redshifts themselves and compared the two values.

The mass was computed by fitting an NFW profile (M3D = Mvir) and by aperture densitometry (M2D). In addition to a two-parameter fit (virial mass Mvir and concentration cvir) of the NFW profile, we also show (MNFW) the results obtained using the relation proposed by Bullock et al. (2001) between Mvir, cvir, and the cluster redshift zcl:

cvir = K/(1 + zcl) × (Mvir/M*)^α,   (11)

with M* = 1.5 × 10^13/h M⊙, K = 9, α = −0.13.

The shear profiles obtained with the different methods discussed below are displayed in Fig. 8; also displayed are the average values of the tangential and radial components of the shear, computed in bins selected to contain at least 200 galaxies and centered on the BCG: this is also where the peak of the X-ray emission is located (Rizza et al. 1998). To check the possible error introduced by a wrong center, we considered a grid around the position of the BCG, with a step of 2 arcsec: taking each position in the grid as the center, we performed the fit and derived the mass. Within 30 arcsec, we obtain an rms of σ(Mvir) < 5%.
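The Bullock et al. (2001) relation of Eq. (11) is a direct one-liner; a sketch using the constants quoted in the text, with the example mass chosen arbitrarily at cluster scale:

```python
def c_vir(M_vir_msun, z_cl, h=0.7, K=9.0, alpha=-0.13, M_star_h=1.5e13):
    """Bullock et al. (2001) concentration-mass relation, Eq. (11):
    c_vir = K/(1 + z_cl) * (M_vir / M_*)**alpha, with M_* = 1.5e13/h Msun.
    Masses are in solar units (an assumption of this sketch).
    """
    M_star = M_star_h / h
    return K / (1.0 + z_cl) * (M_vir_msun / M_star) ** alpha

# Hypothetical cluster-scale halo at the redshift of Abell 383:
c = c_vir(5e14, 0.187)  # concentration of order 5
```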
The NFW parameters obtained by model fitting, and the reduced χ2 computed from the binned average tangential shear, are given in Table 4.

Fig. 9. Color selection of foreground galaxies (zphot < 0.2). The red contour is the density level chosen for the selection; the points display the model colors computed by ZEBRA for this redshift range.

The magnitude cut is the simplest approach, because it only requires photometry in the same band in which the lensing measurement is done. Taking galaxies in the range 23 < R < 26 mag (a) produces a sample dominated by faint background galaxies, but the inner regions of the cluster may still suffer an unknown contamination by cluster galaxies.

To improve the selection, we proceeded as follows. The locus of foreground galaxies was first found, which allows a better separation of the different galaxy populations (see Medezinski et al. 2010, and references therein) compared e.g. with methods based on the selection of red galaxies only. Here we considered two color selections, namely B − z vs. V − z (b) and B − V vs. V − I (c), with 21 < R < 26 mag. For intermediate redshifts (z ~ 0.2), foreground and background objects are well separated in these two color diagrams. We explored the possibility of finding the best selection criteria based only on the observed colors, without any information on the redshifts of the galaxies. To this end, we developed a semi-automatic procedure, implemented in the R language. We first selected bright (R < 21 mag) galaxies, which are expected to lie mainly at zph < 0.2. A kernel density estimate, obtained with the kde2d package in R (Venables & Ripley 2002), was then applied to these points, giving the plots displayed in Fig. 9: the normalization was made in such a way that the maximum density in the binned data was equal to one. The boundary of the foreground-galaxy region was then defined by the points within the same density level l (e.g., l = 0.2).
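The density-level selection can be sketched outside R as well; here a binned 2D histogram stands in for the kernel density estimate (a crude approximation), with the grid, ranges, and toy color-color points all hypothetical:

```python
def density_mask(xs, ys, nbins=25, level=0.2,
                 xr=(-0.5, 2.5), yr=(-0.5, 2.5)):
    """Crude stand-in for the kde2d + density-level selection.

    Builds a binned 2D density on the color-color plane, normalizes it
    to a peak of 1, and returns a membership test for the region whose
    normalized density exceeds `level`.
    """
    dx = (xr[1] - xr[0]) / nbins
    dy = (yr[1] - yr[0]) / nbins
    grid = [[0] * nbins for _ in range(nbins)]

    def cell(x, y):
        i = min(max(int((x - xr[0]) / dx), 0), nbins - 1)
        j = min(max(int((y - yr[0]) / dy), 0), nbins - 1)
        return i, j

    for x, y in zip(xs, ys):
        i, j = cell(x, y)
        grid[i][j] += 1
    peak = max(max(row) for row in grid) or 1
    return lambda x, y: grid[cell(x, y)[0]][cell(x, y)[1]] / peak >= level

# Toy sample: a foreground clump at (1, 1) plus one stray point.
inside = density_mask([1.0] * 20 + [2.0], [1.0] * 20 + [0.0])
```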
This region was converted to a polygon using the splancs package in R, which also allows one to select, in a given catalog, those sources whose colors lie inside or outside the polygon. A comparison with the model colors obtained by ZEBRA from the convolution of the spectral templates with the filter transmission curves shows that the colors inside the area selected in this way are consistent with those expected for galaxies at redshift <0.2. Galaxies classified as foreground in this way were therefore excluded from the weak-lensing analysis.

We finally used the photometric redshifts (d) to select the background galaxies, defined as those with 21 < R < 26 mag and 0.3 < zph < 3, and to compute the average value of β: we obtain in this way β(zs) = 0.74, in good agreement with the value obtained from the COSMOS catalog with the same magnitude and redshift selection, β(zs) = 0.73.

The effect of the different selections on the residual presence of cluster galaxies is displayed in Fig. 11, which shows the density of background galaxies computed in different annuli around the cluster: a clear increase of the density in the inner regions is visible in case a, which indicates that the magnitude selection alone does not completely remove the contamination by cluster galaxies. This contamination is greatly reduced by the color selection, and the optimal result is given by the photometric redshifts, as expected. As a further check, we also found for each method that the tangential shear signals of the rejected "foreground/cluster" galaxies average out. In the following discussion, we take the results from case d as the reference; case d is very close to case c in terms of the uncertainties on the fitted parameters, the density of background galaxies, and the residuals in the radial component of the shear.

Hoekstra (2003) and Hoekstra et al.
(2011) pointed out that large-scale structures along the line of sight provide a source of uncertainty on cluster masses derived by weak-lensing, which is usually ignored and increases as a larger radius (θmax) is used in the fitting. The uncertainty introduced by this component on the mass estimate can be  ~10–20% for a cluster with M = 1015   M at z ~ 0.2, θmax = 10 arcmin as in our case (see Figs. 6 and 7 in Hoekstra 2003), which is comparable with the uncertainties derived in the fitting.\n\nAlso displayed in Table 4 are the projected masses computed by aperture densitometry from Eq. (10), at a distance from the cluster center r = rvir, and θ2 = 900″, θout = 1000″. A good agreement is found between these masses and the values computed from parametric fits, if we take into account the expected ratio M2D/M3D = 1.34 (see Sect. 4.2).\n\nFrom the catalog based on the photometric redshift selection (case d), we finally derived the S-map introduced by Schirmer et al. (2004), that is: S = Map/σMap, where", null, "(12)The map was obtained by defining a grid of points along the image; the tangential components et,i of the lensed galaxy ellipticities were computed taking each point in this grid as center. The weight wi was defined in Eq. (7), and Q is a Gaussian function as in Radovich et al. (2008):", null, "(13)where θ0 and θs are the center and size of the aperture (θs ~ 1.5 arcmin). The S-map is displayed in Fig. 10, showing a quite circular mass distribution centered on the BCG.", null, "Fig. 10Weak-lensing S-map showing the mass distribution derived by weak-lensing; overlaid is the central region of the Abell 383 field. Open with DEXTER", null, "Fig. 11Density (gal/arcmin2) of background galaxies used for the lensing analysis as a function of the distance from the cluster for the different selection methods considered here. The case with no selection is also displayed for comparison. Open with DEXTER\n\n## 6. 
Discussion

Several mass measurements of this cluster are available in the literature, based on different data and/or methods. Schmidt & Allen (2007) used Chandra data and modeled the dark matter halo by a generalized NFW profile, obtaining a mass value […] and a concentration value […].

A weak-lensing analysis of Abell 383 was performed by Bardeau et al. (2007) using CFH12K data in the B, R, I filters. For the shape measurements they used a Bayesian method implemented in the IM2SHAPE software. To retrieve the weak-lensing signal they selected the background galaxies as those within 21.6 < R < 24.9 mag and (R − I) ≳ 0.7, obtaining a number density of ~10 gal arcmin-2. Their fit of the shear profile by an NFW profile gave a mass of […] at […] Mpc and a concentration value of c = 2.62 ± 0.69.

Another weak-lensing mass estimate of Abell 383 from CFH12K data was obtained by Hoekstra (2007) using two bands, B (7200 s) and R (4800 s). The background sample was selected by a magnitude cut 21 < R < 24.5, from which cluster red-sequence galaxies were discarded. The remaining contamination was estimated from the stacking of several clusters by assuming that the fraction of cluster galaxies fgc was a function of radius ∝ r-1. This function was used to correct the tangential shear measurements. As discussed in Okabe et al. (2010), this kind of calibration does not allow one to perform an unbiased cluster-by-cluster correction. Assuming an NFW profile, the fitted virial mass of Abell 383 was […].

Abell 383 belongs to the cluster sample selected for the Local Cluster Substructure Survey (LoCuSS) project (P.I. Smith). Within this project, a weak-lensing analysis of this cluster was recently performed by Okabe et al. (2010) using SUBARU data in two filters, i′ (36 min) and V (30 min).
In addition to a magnitude cut 22 < i′ < 26 mag, they used the color information to select galaxies redder and bluer than the cluster red sequence. Looking at the trend of the lensing signal as a function of the color offset, they selected the sample where the dilution was minimized, obtaining a background sample of ~34 gal arcmin-2 for the computation of the tangential shear profile of the cluster. Fitting this profile with an NFW model did not yield an acceptable fit: the virial mass computed assuming an NFW profile was […] with a high concentration parameter […]. The same authors also derived the projected mass, obtaining M2D = 8.69 × 1014 h-1 M at the virial radius.

Scaling these mass values to h = 0.7, we derive Mvir ~ (4−5) × 1014 M. This value is still consistent within uncertainties with the value derived in the present analysis ([…] M, […], case d). In our case, however, we obtain a better agreement with both the values of Mvir and cvir given by the X-ray data, and a better consistency between parametric and non-parametric mass estimates once the projection factor is taken into account. This may be due either to how the selection of background/foreground galaxies was made, or to a higher accuracy in the shape measurement resulting from the combination of the deeper image and the calibration of the bias caused by the SNR (Sect. 4).

Fig. 12. R-band LF of the galaxies in Abell 383. Data points are derived by binning the data in magnitude bins of 0.5 mag, with Poissonian error bars; the curve traces the best-fitting LF and the shaded area marks the model uncertainty, obtained by a bootstrap technique.

Finally, we computed the luminosity function (LF hereafter) in the R-band and derived the total luminosity by integrating the fitted Schechter (Schechter 1976) function as described in Radovich et al. (2008).
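For integration down to L = 0, the integral of L·φ(L) over a Schechter function has the closed form Ltot = φ∗ L∗ Γ(α + 2). As an illustration only (this is not the authors' fitting code, and the parameter values below are placeholders, not the values of Table 5), a minimal Python sketch:

```python
from math import gamma

def schechter_total_luminosity(phi_star, l_star, alpha):
    """Total luminosity of a Schechter LF, integrated from L = 0 to infinity.

    With x = L/L_star, the integral of L * phi(L) dL reduces to
    phi_star * L_star * Integral x**(alpha+1) exp(-x) dx = phi_star * L_star * Gamma(alpha + 2).
    The integral converges only for alpha > -2.
    """
    if alpha <= -2:
        raise ValueError("total luminosity diverges for alpha <= -2")
    return phi_star * l_star * gamma(alpha + 2)

# Placeholder parameters, purely illustrative:
print(schechter_total_luminosity(phi_star=1.0, l_star=1.0e10, alpha=-1.0))
```

For α = −1 the gamma factor is Γ(1) = 1, so the total luminosity is simply φ∗ L∗.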
To obtain the cluster LF, that is the number of cluster galaxies per unit luminosity and volume, we need to remove from our catalog all the background and foreground galaxies. Usually this is done by statistically subtracting the galaxy counts in a control field from the galaxy counts in the cluster direction. Here we take advantage of the selection in the color-color diagrams described in Sect. 5, and extract a catalog that includes cluster, foreground, and residual background galaxies. The last two components were then removed by statistical subtraction, where we defined as cluster area the circular region around the cluster center of radius r = 10.2 arcmin (1.3 Mpc); the control field was instead defined as the area outside the circle of radius r = 15.3 arcmin. For the fit to the Schechter function, we adopted conventional minimization routines on the binned distributions. Best-fit parameters are listed in Table 5. The R-band total luminosity, calculated as the Schechter integral, is Ltot = (2.14 ± 0.5) × 1012 L. The errors were estimated by propagating the 68%-confidence errors of each parameter.

For comparison, we then used the relation in Popesso et al. (2007) between M200 and the optical luminosity, Lop, to check whether the mass obtained in this paper is consistent with the value expected for this luminosity. According to this relation, the mass expected for this luminosity is M200 = (4.73 ± 1.3) × 1014 M, which agrees well with the value derived by our weak-lensing analysis, M200 ~ 6.3 × 1014 M (case d), corresponding to M/L ~ 300 M/L.

## 7. Conclusions

We have computed the mass of the cluster Abell 383 by weak-lensing, using a deep R-band image taken with the Suprime camera on the Subaru telescope. Catalogs extracted from combined CFHT+SUBARU uBVRIz images were used to derive photometric redshifts and improve the weak-lensing analysis.
The data were reduced with a pipeline developed in-house, specifically designed for wide-field imaging data. The ellipticities from which the shear signal was derived were measured with a pipeline based on the KSB approach. We discussed some aspects that may improve the results, namely the size of the window used to suppress the noise from the outer parts of the galaxies, the selection of a limit on SNR below which the ellipticity measurement is not sufficiently accurate, and a weighting scheme in which the uncertainties on the spatial fitting of the PSF correction terms are taken into account.

Table 5

Luminosity function parameters and uncertainties.

The accuracy of the weak-lensing mass estimate achievable with our KSB pipeline was first derived on simulated images, built to mimic the background noise and the depth of the real image as closely as possible. From these simulations we conclude that the mass can be measured with an uncertainty of ~5–10% for log M/M ≥ 14.5. This accuracy takes into account the measurement errors on the ellipticity, but not the errors caused by e.g. the foreground/background galaxy separation, which may introduce an underestimate of the mass. The impact of this selection was evaluated by comparing three methods for the foreground/background galaxy separation, namely a magnitude cut in one band, color selection, and the usage of photometric redshifts. All methods gave consistent estimates of the total virial mass, but the shear profile shows that a dilution of the signal in the inner regions is still present for a simple magnitude cut. Color selection and photometric redshifts provide better results, even if the accuracy of the photometric redshifts is not high owing to the few available bands. The virial mass of Abell 383 obtained here by NFW model fitting agrees with the value obtained from the non-parametric mass estimate, that is Mvir ~ 7 × 1014 M.
Other previous weak-lensing analyses give Mvir ~ (4−5) × 1014 M: the value found in this paper agrees better with the value found by X-ray data, and we also obtain a better agreement between parametric and non-parametric estimates compared with e.g. Okabe et al. (2010).

Finally, we estimated the R-band LF of Abell 383 and derived the total R-band luminosity of the cluster: starting from this value and using the relation between mass and luminosity found for clusters by Popesso et al. (2007), we conclude that the mass derived by weak-lensing is consistent with the value expected for this luminosity.

2

Stuff, Skymaker, SWarp and SExtractor are part of the Astromatic software developed by Bertin, see http://www.astromatic.net

## Acknowledgments

L.F., Z.H., and M.R. acknowledge the support of the European Commission Programme 6th framework, Marie Curie Training and Research Network “DUEL”, contract number MRTN-CT-2006-036133. L.F. was partly supported by the Chinese National Science Foundation Nos. 10878003 & 10778725, 973 Program No. 2007CB 815402, Shanghai Science Foundations and Leading Academic Discipline Project of Shanghai Normal University (DZL805), and the Chen Guang project No. 10CG46 of the Shanghai Municipal Education Commission and Shanghai Education Development Foundation. A.R. acknowledges support from the Italian Space Agency (ASI) contract Euclid-IC I/031/10/0. We are grateful to the referee for the useful comments that improved this paper.

## All Tables

Table 1. Summary of observations with the MEGACAM (u) and SUPRIME (BVRIz) cameras used in this paper.

Table 2. Photometric properties of the coadded images.

Table 3. Masses derived by simulations with the NFW model fitting (M3D) and aperture densitometry (M2D).

Table 4. Best-fit NFW parameters.

Table 5. Luminosity function parameters and uncertainties.

## All Figures

Fig.
1. Density plots comparing the photometric redshifts in the Abell 383 field available from the SDSS with those computed here from the uBVRIz photometry.

Fig. 2. V − R vs. R color plot: red and blue points are galaxies at zphot = 0.187 ± 0.1, classified as early and late-type, respectively; dashed lines show the ±1σ levels.

Fig. 3. Observed (red dots) and model colors (black dots) for stars after the offsets given in the text were applied. Model colors were derived by convolving the Pickles library of stellar spectra with the filter transmission curves.

Fig. 4. Distribution of the photometric redshifts computed from the uBVRIz data for R < 25 mag.

Fig. 5. SNe as a function of the window function size, θ, used to measure ellipticities. For display purposes, galaxies were selected to have the same value of θmax, and SNe was normalized so that min(SNe) = 0, max(SNe) = 1. The vertical lines indicate the average (solid) and standard deviation (dashed) of FLUX_RADIUS for the same galaxies.

Fig. 6. Running median of FLUX_RADIUS − θmax as a function of SNe. The vertical line shows the limit chosen for the selection of background galaxies.

Fig. 7. PSF anisotropy correction derived with the GAM algorithm: the first three panels show the ellipticity pattern (measured, fitted, and residuals; X and Y are in pixels). The scale is displayed by the arrows in the upper right part of each panel (e = 0.05). In the next panel, black dots are the measured values and green dots the values after the correction; values rejected during fitting are marked in red. The last row shows for comparison the corrected ellipticities obtained using a third-degree polynomial for the fit.

Fig.
8. Shear profiles obtained with the different selection methods (see Table 4, where the parameters derived for each model are given). The fitting was done using the maximum likelihood approach. Binned points are shown for display purposes only. In panel a, the curves obtained by the two models (nfw/mnfw) overlap.

Fig. 9. Color selection of foreground galaxies (zphot < 0.2). The contour in red is the density level chosen for the selection; the points display the model colors computed by ZEBRA for this redshift range.

Fig. 10. Weak-lensing S-map showing the mass distribution derived by weak-lensing; overlaid is the central region of the Abell 383 field.

Fig. 11. Density (gal/arcmin2) of background galaxies used for the lensing analysis as a function of the distance from the cluster for the different selection methods considered here. The case with no selection is also displayed for comparison.

Fig. 12. R-band LF of the galaxies in Abell 383. Data points are derived by binning the data in magnitude bins of 0.5 mag, with Poissonian error bars; the curve traces the best-fitting LF and the shaded area marks the model uncertainty, obtained by a bootstrap technique.
Blog

# How to Return in Excel?

Do you need help returning in Excel? Excel is a powerful program used by many people to store, organize, and analyze data. Knowing how to use Excel correctly can save you time and make your work more efficient. In this guide, we will cover the basics of how to return in Excel: how to create a return, how to edit it, and how to save it.

## Returning in Excel: Overview

Returning in Excel is the process of entering values in a spreadsheet that are calculated from other values in the same sheet. This technique is used to calculate a variety of different types of data, from simple summaries to complex models. Excel provides a wide range of functions that can be used to calculate return values, including SUM, AVERAGE, MAX, MIN, and more. In this article, we'll discuss how to return in Excel and provide some tips for getting the most out of your returns.

Returning in Excel is a fairly straightforward process. To start, identify the values you want to return. Then use the functions available in Excel to calculate the return values. When entering the function, you'll need to specify the range of cells that contain the values you want to return. Once you've entered the function, the return value will be displayed in the cell where you entered it.

In some cases, you may need to adjust the range of cells in order to get the desired return value. This can be done with the OFFSET function, which allows you to specify a range of cells relative to the cell where the function was entered.
This can be useful when the range of cells you want to use for the return value is not the same as the range of cells that contain the values you want to return.

## Returning in Excel: Tips for Success

When returning in Excel, a few tips can help you get the most out of your returns. First, double-check your functions and ranges to make sure you're getting the desired return value. Also make sure that any formulas you use are accurate and up-to-date. Finally, remember that you can use the OFFSET function to adjust the range of cells used for the return value.

When entering functions, remember that function names must be entered exactly as they appear in the function list. If you enter a function incorrectly, it may not return the desired value. Additionally, make sure that the range of cells you specify for the return value is accurate: if you specify an incorrect range, the return value will not be accurate.

## Returning in Excel: Troubleshooting

When returning in Excel, a few common issues can arise. If you find that your return values are not accurate, it's possible that you've entered a function incorrectly or have specified an incorrect range of cells.
Additionally, it's possible that the data in the cells you're using for the return value is not up-to-date or accurate.

If you're having trouble getting the return value you want, double-check the range of cells you're using for the return value, and check the data in those cells to make sure it is accurate and up-to-date. It may also help to use the OFFSET function to adjust the range of cells used for the return value.

### Checking Your Functions and Ranges

Double-check your functions and ranges to make sure you're getting the desired return value. Make sure the functions you've entered are accurate and up-to-date, and that the range of cells you specify for the return value is correct. If you enter an incorrect function or specify an incorrect range of cells, the return value may not be accurate.

### Checking Your Data

Make sure that the data in the cells you're using for the return value is accurate and up-to-date. If the data is not, the return value won't be accurate either.

### Using the OFFSET Function

If you're having trouble getting the return value you want, it may help to use the OFFSET function to adjust the range of cells used for the return value. The OFFSET function specifies a range of cells relative to the cell where the function was entered. This can be useful when the range of cells you want to use for the return value is not the same as the range of cells that contain the values you want to return.
Additionally, the OFFSET function can be used to return values from multiple sheets or tables, which can be useful in more complex calculations.

## Related FAQ

### What is a Return in Excel?

A Return in Excel is a mathematical operation that subtracts one number from another. It is used to calculate the difference between two numbers, and is often used to measure the rate of change. In Excel, the Return is written as =A2-A1, where A1 is the starting number and A2 is the ending number.

### What are some common uses of a Return in Excel?

Returns in Excel are commonly used to measure the rate of change in a given set of data over time. They can also be used to calculate the percentage change in a given value between two points in time, and to measure the performance of investments, such as stocks and bonds, over time.

### What is the syntax for calculating a Return in Excel?

The syntax for calculating a Return in Excel is =A2-A1, where A1 is the starting number and A2 is the ending number. For example, if A1 is the opening price of a stock and A2 is the closing price of the same stock, the Return for that stock is calculated by subtracting A1 from A2.

### What is an absolute Return in Excel?

An absolute Return in Excel is a Return expressed as an absolute value, ignoring its sign. It is calculated by subtracting the starting number from the ending number and then taking the absolute value of the result, e.g. =ABS(A2-A1). This is sometimes used when comparing numbers that have different signs, such as positive and negative numbers.

### What is a cumulative Return in Excel?

A cumulative Return in Excel is a Return value that is calculated by adding up the individual Returns over a series of time periods.
It is a way of measuring the overall rate of change over a longer period of time, and is often used to measure the performance of investments or portfolios over time.

### What is a year-over-year Return in Excel?

A year-over-year Return in Excel is calculated by subtracting the value of an item at the end of the previous year from the value of the same item at the end of the current year. It measures the rate of change between the two years, and is often used to track the performance of investments or portfolios over time.

### Microsoft Excel – Adding a Return in a Cell

Returning in Excel can be a great way to streamline your workflows and make your data more organized. With the help of the many tools available in Excel, you can quickly and easily return your data. By following the steps outlined in this article, and with a bit of practice and patience, you can soon become a pro at returning data in Excel.
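The FAQ formulas above translate directly into code. Purely as an illustration (not part of the original article), here is a minimal Python sketch of the simple, absolute, cumulative, and year-over-year returns described above:

```python
def simple_return(start, end):
    # Mirrors the Excel formula =A2-A1
    return end - start

def absolute_return(start, end):
    # Mirrors =ABS(A2-A1)
    return abs(end - start)

def cumulative_return(values):
    # Sum of the period-to-period returns; the sum telescopes to last - first
    return sum(simple_return(a, b) for a, b in zip(values, values[1:]))

def year_over_year(prev_year_end, curr_year_end):
    # Current year-end value minus previous year-end value
    return curr_year_end - prev_year_end

prices = [100, 104, 101, 110]
print(simple_return(prices[0], prices[-1]))   # 10
print(cumulative_return(prices))              # 10 (telescoping sum)
```

Note that because the individual differences telescope, the cumulative return of a price series equals the simple return from the first value to the last.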
# Project Euler Problem 27: Quadratic Primes

Question:

Euler discovered the remarkable quadratic formula:

n² + n + 41

It turns out that the formula will produce 40 primes for the consecutive values n = 0 to 39. However, when n = 40, 40² + 40 + 41 = 40(40 + 1) + 41 is divisible by 41, and certainly when n = 41, 41² + 41 + 41 is clearly divisible by 41.

The incredible formula n² − 79n + 1601 was discovered, which produces 80 primes for the consecutive values n = 0 to 79. The product of the coefficients, −79 and 1601, is −126479.

Considering quadratics of the form:

n² + an + b, where |a| < 1000 and |b| < 1000

where |n| is the modulus/absolute value of n, e.g. |11| = 11 and |−4| = 4.

Find the product of the coefficients, a and b, for the quadratic expression that produces the maximum number of primes for consecutive values of n, starting with n = 0.

## Brainstorm to brute force

When approaching a problem like this, start by talking out loud. Keep brainstorming ideas to get your brain going. If nothing appears in your head, write down the brute force approach.

Let's go through this step by step.

One way of brainstorming is to notice the quadratic expression and think of the quadratic formula. Would the formula help in this case? Here we aren't solving for n, and the expression isn't set to 0, so the quadratic formula is irrelevant.
However, it's still good to keep your brain rolling by thinking of things like this.

## Determining if a value is prime

We'll definitely need a way to check whether a value is prime, so let's write that helper method first.

We've covered several ways to check if a value is prime earlier, so let's use this method:

```java
public static boolean isPrime(int input) {

    if (input == 2)
        return true;
    if (input <= 1)
        return false;
    if (input % 2 == 0)
        return false;

    /*
     * Only check odd candidates up to the square root:
     * if input is composite, it has a factor no larger than sqrt(input).
     */
    for (int i = 3; i <= Math.sqrt(input); i += 2) {
        if (input % i == 0)
            return false;
    }

    return true;
}
```

This uses the property that if our input value is composite (not prime), then it has at least one factor between 2 and sqrt(input): if input = p·q with p ≤ q, then p ≤ sqrt(input), so checking up to the square root is enough.

## Brute Force

There comes a time when you need to start jotting down the brute force method just to gauge your understanding and help brainstorm even more. If we write this out, we have values from −999 to 999 for a and the same for b. For each pair we evaluate the quadratic starting from n = 0 until we hit a non-prime.

We have to do this for all pairs, so we have O(a·b·n) time, where n here is the largest value n can reach before the quadratic produces a composite number.

## Plug and chug!

Let's try plugging in numbers to see if we can find any shortcuts. We start with n = 0:

n² + an + b = b

We find that b itself must be prime in order for the chain to pass n = 0. Since b has to be prime, and the lone even prime b = 2 fails by n = 2 (4 + 2a + 2 is even), we can say that b must be a positive odd number.

Let's try n = 1:

n² + an + b = 1 + a + b

We learned earlier that b must be odd. If we plug in any odd number for b into the equation above, we must have an odd value for a to make the entire sum prime.
This is because all primes except 2 are odd.\n\nSo for the chain of primes to extend beyond n = 1, both a and b must be odd.\n\n## Full code implementation in Java\n\nHere's the full code in Java. We broke it down into three methods: one that searches over a and b for the longest chain, which calls another that counts consecutive prime values of the quadratic, which in turn calls another to check whether a value is prime.\n\npublic class QuadraticPrimes {\n\npublic static void main(String[] args) {\nSystem.out.println(findProduct());\n}\n\npublic static int findProduct() {\n\nint maxN = 0;\nint maxA = 0;\nint maxB = 0;\n\nint currentMax = 0;\n\nfor (int a = -999; a < 1000; a += 2) {\nfor (int b = 3; b < 1000; b += 2) {\ncurrentMax = findMaxN(a,b);\nif (currentMax > maxN) {\nmaxN = currentMax;\nmaxA = a;\nmaxB = b;\n}\n}\n}\n\nreturn maxA * maxB;\n}\n\npublic static int findMaxN(int a, int b) {\n\n// count how many consecutive values, starting at n = 0, are prime\nint n = 0;\n\nwhile (isPrime(n*n + a*n + b))\nn++;\n\nreturn n;\n}\n\npublic static boolean isPrime(int input) {\n\nif (input == 2)\nreturn true;\nif (input <= 1)\nreturn false;\nif (input % 2 == 0)\nreturn false;\n\n/**\n* Trial division by odd numbers only, up to the square root.\n* If input is composite, it has a factor d with 2 <= d <= sqrt(input)\n* and a corresponding cofactor input/d with sqrt(input) <= input/d < input.\n*/\nfor (int i = 3; i <= Math.sqrt(input); i += 2) {\nif (input % i == 0)\nreturn false;\n}\n\nreturn true;\n}\n}\n\n## References\n\nProject Euler - Question 27\n\nCame up with a better solution or have a question? Comment below!" ]
[ null, "https://snipcademy.com/code/img/challenges/math/quadratic-primes.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8388755,"math_prob":0.99711424,"size":4442,"snap":"2020-34-2020-40","text_gpt3_token_len":1182,"char_repetition_ratio":0.120775126,"word_repetition_ratio":0.18993135,"special_character_ratio":0.29648808,"punctuation_ratio":0.111597374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996187,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T05:55:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a8ffbf6a-f2d7-48a7-888a-d2b0a07fe91e>\",\"Content-Length\":\"20286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2f8566f-0ba3-4d88-a295-b3d0fc4a3b07>\",\"WARC-Concurrent-To\":\"<urn:uuid:39f08621-7f00-41d7-92c3-57f54b15b0bf>\",\"WARC-IP-Address\":\"198.199.92.236\",\"WARC-Target-URI\":\"https://snipcademy.com/challenges/quadratic-primes\",\"WARC-Payload-Digest\":\"sha1:DNNDBG6BP2UGQDAL4KBEM5NTV5ZYHOXY\",\"WARC-Block-Digest\":\"sha1:AZNVE4BEPJY7GUPKLVJGLV2NU7FABXXQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738960.69_warc_CC-MAIN-20200813043927-20200813073927-00329.warc.gz\"}"}
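The deductions in the post above (b must be prime, and both a and b must be odd for any long chain) can be cross-checked with a short brute-force script. The sketch below is Python rather than the post's Java, written for this page as an independent check; it assumes the classic bounds |a| < 1000 and |b| < 1000, with chains counted from n = 0.

```python
def is_prime(m):
    """Trial division by odd numbers up to sqrt(m)."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    i = 3
    while i * i <= m:
        if m % i == 0:
            return False
        i += 2
    return True

def chain_length(a, b):
    """Number of consecutive n = 0, 1, 2, ... with n*n + a*n + b prime."""
    n = 0
    while is_prime(n * n + a * n + b):
        n += 1
    return n

# b must be prime (the n = 0 case), so only prime b are tried.
prime_bs = [b for b in range(2, 1000) if is_prime(b)]
length, a, b = max((chain_length(a, b), a, b)
                   for a in range(-999, 1000)
                   for b in prime_bs)
print(length, a, b, a * b)   # → 71 -61 971 -59231
```

The maximizer is n² - 61n + 971, giving 71 consecutive primes (n = 0 to 70) and the well-known product -59231.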
https://fr.maplesoft.com/support/help/maple/view.aspx?path=Physics%2FPerformOnAnticommutativeSystem
[ "Perform a Maple command originally programmed to work only with commutative variables, in a system of equations with anticommutative variables - Maple Programming Help\n\nPhysics[PerformOnAnticommutativeSystem] - Perform a Maple command originally programmed to work only with commutative variables, in a system of equations with anticommutative variables\n\n Calling Sequence\n\n PerformOnAnticommutativeSystem(command, list_of_arguments, other_arguments)\n\n ToSuperfields(expression)\n\nParameters\n\n command - a Maple command (not part of the Physics package) that is programmed to work only with commutative variables\n\n list_of_arguments - a list with one or many algebraic expressions or relations, possibly including sets or lists of them, involving anticommutative variables; these are the arguments that command cannot handle directly\n\n other_arguments - the rest of the arguments to be sent to command, which command can handle regardless of the presence of anticommutative variables in list_of_arguments\n\nDescription\n\n • The PerformOnAnticommutativeSystem command performs, in a system of expressions/relations involving anticommutative variables, an operation (Maple command) originally programmed to work only with commutative variables.\n • For Maple 16, only commands having a syntax similar to the Maple differential equation solvers dsolve or pdsolve are handled properly, although PerformOnAnticommutativeSystem will accept any other command and operation, and attempt to perform the operation as described below.\n • The first argument is the command that will perform the operation. The second argument is a list of arguments that will be sent to command after being pre-processed as explained in the itemization below. 
The remaining arguments will also be sent to command but are not supposed to require any pre-processing; command is expected to handle them regardless of the presence of anticommutative variables within them or in list_of_arguments.\n • The output of PerformOnAnticommutativeSystem is then what the output of command would be if it were capable of handling anticommutative variables. In this sense PerformOnAnticommutativeSystem extends the ability of existing Maple commands, originally programmed only to work on commutative domains, to handle extended domains involving anticommutative variables. The strategy used in PerformOnAnticommutativeSystem is as follows:\n 1 The list_of_arguments involving anticommutative variables is viewed as a system of relations (equations being a special case of them), where each relation is expanded as a polynomial in its anticommutative variables (see ToFieldComponents).\n 2 Each polynomial is split into the Coefficients of the anticommutative variables, transforming each relation into a system of relations involving only commutative variables. At this point the given problem has been mapped into one that command is expected to handle.\n 3 The resulting system of step 2 is sent to command, which performs the operation.\n 4 The result of step 3 is processed to reconstruct the original functions of anticommutative variables from their commutative components (see ToSuperfields), and the result is returned.\n • PerformOnAnticommutativeSystem was written as an experimental command by the research team at Maplesoft, aiming at bridging the gap between thousands of programs originally written for commutative domains and the computational needs of noncommutative geometry and its applications in Mathematics and Physics. There is a great deal of scope for changing and improving things in PerformOnAnticommutativeSystem. 
You are welcome to contribute your ideas by email to [email protected].\n\nExamples\n\n > $\mathrm{with}\left(\mathrm{Physics}\right):$\n > $\mathrm{Setup}\left(\mathrm{mathematicalnotation}=\mathrm{true}\right)$\n $\left[{\mathrm{mathematicalnotation}}{=}{\mathrm{true}}\right]$ (1)\n\nSet first $\mathrm{\theta }$ and $Q$ as prefixes for variables of type/anticommutative (see Setup)\n\n > $\mathrm{Setup}\left(\mathrm{anticommutativepre}=\left\{\mathrm{θ},Q\right\}\right)$\n $\mathrm{* Partial match of \text{'}}{}\mathrm{anticommutativepre}{}\mathrm{\text{'} against keyword \text{'}}{}\mathrm{anticommutativeprefix}{}\text{'}$\n $\mathrm{_______________________________________________________}$\n $\left[{\mathrm{anticommutativeprefix}}{=}\left\{{Q}{,}{\mathrm{\theta }}\right\}\right]$ (2)\n\nConsider this partial differential equation for the anticommutative function $Q$ of commutative and anticommutative variables $x,\mathrm{\theta }$\n\n > $\frac{{\partial }^{2}}{\partial \mathrm{θ}\partial x}Q\left(x,y,\mathrm{θ}\right)=0$\n $\frac{{{\partial }}^{{2}}}{{\partial }{x}{\partial }{\mathrm{\theta }}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{Q}{}\left({x}{,}{y}{,}{\mathrm{\theta }}\right){=}{0}$ (3)\n\nIts solution using pdsolve, originally written to handle problems in a commutative domain\n\n > $\mathrm{PerformOnAnticommutativeSystem}\left(\mathrm{pdsolve},\left[(3)\right]\right)$\n ${Q}{}\left({x}{,}{y}{,}{\mathrm{\theta }}\right){=}{\mathrm{_F1}}{}\left({x}{,}{y}\right){}{\mathrm{_λ1}}{+}{\mathrm{_F3}}{}\left({y}\right){}{\mathrm{\theta }}$ (4)\n\nNote the presence of the anticommutative arbitrary constant $\mathrm{_λ1}$, introduced by dsolve when solving intermediate ordinary differential equations. 
In fact, both dsolve and pdsolve in Maple 16 have this approach (a call to PerformOnAnticommutativeSystem) coded within them, so they can tackle the problem directly:\n\n > $\mathrm{pdsolve}\left((3)\right)$\n ${Q}{}\left({x}{,}{y}{,}{\mathrm{\theta }}\right){=}{\mathrm{_F1}}{}\left({x}{,}{y}\right){}{\mathrm{_λ1}}{+}{\mathrm{_F3}}{}\left({y}\right){}{\mathrm{\theta }}$ (5)\n\nTo avoid redundant typing in the input that follows and redundant display of information on the screen, let's use PDEtools:-declare and PDEtools:-diff_table\n\n > $\mathrm{PDEtools}:-\mathrm{declare}\left(Q\left(x,y,{\mathrm{θ}}_{1},{\mathrm{θ}}_{2}\right)\right)$\n ${Q}{}\left({x}{,}{y}{,}{{\mathrm{\theta }}}_{{1}}{,}{{\mathrm{\theta }}}_{{2}}\right){}{\mathrm{will now be displayed as}}{}{Q}$ (6)\n > $q≔\mathrm{PDEtools}:-\mathrm{diff_table}\left(Q\left(x,y,{\mathrm{θ}}_{1},{\mathrm{θ}}_{2}\right)\right):$\n\nNow you can enter derivatives directly as the function's name indexed by the differentiation variables, and see them displayed the same way. Consider these two PDEs:\n\n > ${\mathrm{pde}}_{1}≔{q}_{x,y,{\mathrm{θ}}_{1}}+{q}_{x,y,{\mathrm{θ}}_{2}}-{q}_{y,{\mathrm{θ}}_{1},{\mathrm{θ}}_{2}}=0$\n ${\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({Q}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{x}\right){,}{y}\right){,}{{\mathrm{θ}}}_{{1}}\right){+}{\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({Q}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{x}\right){,}{y}\right){,}{{\mathrm{θ}}}_{{2}}\right){-}{\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({Q}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{y}\right){,}{{\mathrm{θ}}}_{{1}}\right){,}{{\mathrm{θ}}}_{{2}}\right){=}{0}$ (7)\n > ${\mathrm{pde}}_{2}≔{q}_{{\mathrm{θ}}_{1}}=0$\n 
${\\mathrm{diff}}{}\\left({Q}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}$ (8)\n\nReduce pde using pde (see PDEtools:-ReducedForm)\n\n > $\\mathrm{PerformOnAnticommutativeSystem}\\left(\\mathrm{PDEtools}:-\\mathrm{ReducedForm},\\left[{\\mathrm{pde}}_{1},{\\mathrm{pde}}_{2}\\right]\\right)$\n ${\\mathrm{casesplit/ans}}{}\\left(\\left[{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({Q}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){,}{y}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right)\\right]{,}\\left[{}\\right]\\right)$ (9)\n\nSet $\\mathrm{\\Upsilon }$ and $\\mathrm{Κ}$ to also be prefixes for anticommutative names\n\n > $\\mathrm{Setup}\\left(\\mathrm{anticommutativepre}=\\left\\{\\mathrm{Υ},\\mathrm{Κ}\\right\\},\\mathrm{additionally}\\right)$\n $\\mathrm{* Partial match of \\text{'}}{}\\mathrm{anticommutativepre}{}\\mathrm{\\text{'} against keyword \\text{'}}{}\\mathrm{anticommutativeprefix}{}\\text{'}$\n $\\mathrm{_______________________________________________________}$\n $\\left[{\\mathrm{anticommutativeprefix}}{=}\\left\\{{\\mathrm{Κ}}{,}{Q}{,}{\\mathrm{\\Upsilon }}{,}{\\mathrm{_λ}}{,}{\\mathrm{\\theta }}\\right\\}\\right]$ (10)\n\nDeclare the anticommutative functions $\\mathrm{\\Upsilon }\\left(x,y,{\\mathrm{\\theta }}_{1},{\\mathrm{\\theta }}_{2}\\right)$ and $\\mathrm{Κ}\\left(x,y,{\\mathrm{\\theta }}_{1},{\\mathrm{\\theta }}_{2}\\right)$ as well as the commutative function $\\mathrm{\\Xi }\\left(x,y,{\\mathrm{\\theta }}_{1},{\\mathrm{\\theta }}_{2}\\right)$ and $\\mathrm{Τ}\\left(x,y,{\\mathrm{\\theta }}_{1},{\\mathrm{\\theta }}_{2}\\right)$, and use corresponding diff_table for all of them\n\n > $\\mathrm{PDEtools}:-\\mathrm{declare}\\left(\\left(\\mathrm{Υ},\\mathrm{Κ},\\mathrm{Ξ},\\mathrm{Τ}\\right)\\left(x,y,{\\mathrm{θ}}_{1},{\\mathrm{θ}}_{2}\\right)\\right)$\n ${\\mathrm{Upsilon}}{}\\left({x}{,}{y}{,}{{\\mathrm{\\theta 
}}}_{{1}}{,}{{\\mathrm{\\theta }}}_{{2}}\\right){}{\\mathrm{will now be displayed as}}{}{\\mathrm{\\Upsilon }}$\n ${\\mathrm{Kappa}}{}\\left({x}{,}{y}{,}{{\\mathrm{\\theta }}}_{{1}}{,}{{\\mathrm{\\theta }}}_{{2}}\\right){}{\\mathrm{will now be displayed as}}{}{\\mathrm{Κ}}$\n ${\\mathrm{Xi}}{}\\left({x}{,}{y}{,}{{\\mathrm{\\theta }}}_{{1}}{,}{{\\mathrm{\\theta }}}_{{2}}\\right){}{\\mathrm{will now be displayed as}}{}{\\mathrm{\\Xi }}$\n ${\\mathrm{Tau}}{}\\left({x}{,}{y}{,}{{\\mathrm{\\theta }}}_{{1}}{,}{{\\mathrm{\\theta }}}_{{2}}\\right){}{\\mathrm{will now be displayed as}}{}{\\mathrm{Τ}}$ (11)\n > $U≔\\mathrm{PDEtools}:-\\mathrm{diff_table}\\left(\\mathrm{Υ}\\left(x,y,{\\mathrm{θ}}_{1},{\\mathrm{θ}}_{2}\\right)\\right):$\n > $K≔\\mathrm{PDEtools}:-\\mathrm{diff_table}\\left(\\mathrm{Κ}\\left(x,y,{\\mathrm{θ}}_{1},{\\mathrm{θ}}_{2}\\right)\\right):$\n > $X≔\\mathrm{PDEtools}:-\\mathrm{diff_table}\\left(\\mathrm{Ξ}\\left(x,y,{\\mathrm{θ}}_{1},{\\mathrm{θ}}_{2}\\right)\\right):$\n > $T≔\\mathrm{PDEtools}:-\\mathrm{diff_table}\\left(\\mathrm{Τ}\\left(x,y,{\\mathrm{θ}}_{1},{\\mathrm{θ}}_{2}\\right)\\right):$\n\nA large PDE system involving these four anticommutative and commutative functions $\\mathrm{\\Upsilon },\\mathrm{Κ},\\mathrm{\\Xi },\\mathrm{Τ}$\n\n > 
$\\mathrm{sys}≔\\left[{U}_{x,x,x}-{U}_{y}+a{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}{U}_{x,x}+a{U}_{x,{\\mathrm{θ}}_{1}}{\\mathrm{θ}}_{2}=0,-{T}_{x,x,x}-3{X}_{x}+{T}_{y}-a{\\mathrm{θ}}_{2}{T}_{x,{\\mathrm{θ}}_{1}}-a{T}_{x,x}{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}=0,-{K}_{x,x,x}+{K}_{y}+6{U}_{x}-3a{U}_{x}+a{K}_{x,{\\mathrm{θ}}_{1}}{\\mathrm{θ}}_{2}-a{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}{K}_{x,x}+a{U}_{x,{\\mathrm{θ}}_{2}}{\\mathrm{θ}}_{2}=0,-6{T}_{x}+3a{T}_{x}-a{\\mathrm{θ}}_{2}{T}_{x,{\\mathrm{θ}}_{2}}=0,-6{K}_{x}+a{K}_{x,{\\mathrm{θ}}_{2}}{\\mathrm{θ}}_{2}+3a{K}_{x}=0,-{X}_{x,x,x}+3{U}_{x,x,{\\mathrm{θ}}_{2}}+{X}_{y}+6{U}_{{\\mathrm{θ}}_{1}}+a{\\mathrm{θ}}_{2}{U}_{{\\mathrm{θ}}_{2},{\\mathrm{θ}}_{1}}-3a{U}_{{\\mathrm{θ}}_{1}}-a{\\mathrm{θ}}_{2}{X}_{x,{\\mathrm{θ}}_{1}}-a{X}_{x,x}{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}+2a{U}_{x,{\\mathrm{θ}}_{2}}{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}=0,-3{T}_{x,x,{\\mathrm{θ}}_{2}}-3{X}_{{\\mathrm{θ}}_{2}}-a{T}_{{\\mathrm{θ}}_{2},{\\mathrm{θ}}_{1}}{\\mathrm{θ}}_{2}-6{T}_{{\\mathrm{θ}}_{1}}+3a{T}_{{\\mathrm{θ}}_{1}}-2a{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}{T}_{x,{\\mathrm{θ}}_{2}}=0,-3{K}_{x,x,{\\mathrm{θ}}_{2}}+6{U}_{{\\mathrm{θ}}_{2}}+12{X}_{x}-6{K}_{{\\mathrm{θ}}_{1}}-3a{U}_{{\\mathrm{θ}}_{2}}-6a{X}_{x}+3a{K}_{{\\mathrm{θ}}_{1}}+a{\\mathrm{θ}}_{2}{K}_{{\\mathrm{θ}}_{2},{\\mathrm{θ}}_{1}}-2a{K}_{x,{\\mathrm{θ}}_{2}}{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}-a{\\mathrm{θ}}_{2}{X}_{x,{\\mathrm{θ}}_{2}}=0,-6{T}_{{\\mathrm{θ}}_{2}}+3a{T}_{{\\mathrm{θ}}_{2}}=0,-6{K}_{{\\mathrm{θ}}_{2}}+3a{K}_{{\\mathrm{θ}}_{2}}=0,-a{T}_{x}{\\mathrm{θ}}_{2}=0,a{\\mathrm{θ}}_{2}{T}_{{\\mathrm{θ}}_{2}}=0,-3{T}_{x,x}-2a{T}_{x}{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}-a{\\mathrm{θ}}_{2}{T}_{{\\mathrm{θ}}_{1}}=0,-a{\\mathrm{θ}}_{2}{T}_{{\\mathrm{θ}}_{2}}=0,6{T}_{x,{\\mathrm{θ}}_{2}}+2a{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}{T}_{{\\mathrm{θ}}_{2}}=0,-3{K}_{x,x}-2a{\\mathrm{θ}}_{2}{\\mathrm{θ}}_{1}{K}_{x}+a{K}_{{\\mathrm{θ}}_{1}}{\\mathrm{θ}}_{2}+a{U}_{[]}+2a{X}_{x}{\\mathrm{θ}}_{2}=0,a{K}_{{\\mathrm{θ}}_{2}
}{\mathrm{θ}}_{2}=0,-6{K}_{x,{\mathrm{θ}}_{2}}-2a{\mathrm{θ}}_{2}{X}_{{\mathrm{θ}}_{2}}-2a{K}_{{\mathrm{θ}}_{2}}{\mathrm{θ}}_{2}{\mathrm{θ}}_{1}=0,-a{\mathrm{θ}}_{2}{K}_{[]}+3{U}_{x,{\mathrm{θ}}_{2}}-3{X}_{x,x}-a{\mathrm{θ}}_{2}{X}_{{\mathrm{θ}}_{1}}-a{\mathrm{θ}}_{1}{U}_{[]}+a{X}_{x}{\mathrm{θ}}_{2}{\mathrm{θ}}_{1}=0,-3{T}_{x,{\mathrm{θ}}_{2}}=0,-3{K}_{x,{\mathrm{θ}}_{2}}-a{\mathrm{θ}}_{2}{X}_{{\mathrm{θ}}_{2}}=0,3{X}_{x,{\mathrm{θ}}_{2}}-a{\mathrm{θ}}_{2}{\mathrm{θ}}_{1}{X}_{{\mathrm{θ}}_{2}}=0,-3{T}_{{\mathrm{θ}}_{2}}=0,-3{K}_{{\mathrm{θ}}_{2}}=0,-3{T}_{x}=0,3{T}_{{\mathrm{θ}}_{2}}=0\right]$\n (12)\n\nNote that the notation used in this display is compact, but the actual content is there. For example, for the first equation in sys\n\n > ${\mathrm{sys}}_{1}$\n ${\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{Υ}}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{x}\right){,}{x}\right){,}{x}\right){-}\left({\mathrm{diff}}{}\left({\mathrm{Υ}}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{y}\right)\right){-}{a}{}{\mathrm{*}}{}\left({{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}{,}{\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{Υ}}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{x}\right){,}{x}\right)\right){+}{a}{}{\mathrm{*}}{}\left({\mathrm{diff}}{}\left({\mathrm{diff}}{}\left({\mathrm{Υ}}{}\left({x}{,}{y}{,}{{\mathrm{θ}}}_{{1}}{,}{{\mathrm{θ}}}_{{2}}\right){,}{x}\right){,}{{\mathrm{θ}}}_{{1}}\right){,}{{\mathrm{θ}}}_{{2}}\right){=}{0}$ (13)\n\nShow the contents:\n\n > $\mathrm{lprint}\left({\mathrm{sys}}_{1}\right)$\n diff(diff(diff(Upsilon(x,y,theta[1],theta[2]),x),x),x)-diff(Upsilon(x,y,theta[1],theta[2]),y)-a*Physics:-`*`(theta[1],theta[2],diff(diff(Upsilon(x,y,theta[1],theta[2]),x),x))+a*Physics:-`*`(Physics:-diff(diff(Upsilon(x,y,theta[1],theta[2]),x),theta[1]),theta[2]) = 0\n\nThe simplification 
of sys taking into account its integrability conditions (see PDEtools:-casesplit)\n\n > $\\mathrm{PerformOnAnticommutativeSystem}\\left(\\mathrm{PDEtools}:-\\mathrm{casesplit},\\left[\\mathrm{sys}\\right]\\right)$\n ${\\mathrm{casesplit/ans}}{}\\left(\\left[{-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{1}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right)\\right){-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{2}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right)\\right){+}{\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){=}{0}{,}{-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{1}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right)\\right){-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{2}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right)\\right){+}{\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\le
ft({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{1}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right)\\right){-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{2}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right)\\right){+}{\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{x}\\right){=}{0}{,}{-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{1}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right)\\right){-}{\\mathrm{*}}{}\\left({{\\mathrm{θ}}}_{{2}}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right)\\right){+}{\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{y}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}
}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}{,}{\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}{,}{-}{\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){+}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{-}{\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){=}{0}{,}{-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){-}{\\mathrm{diff}}{}\\left({\\mathrm{Ξ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Υ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\lef
t({\\mathrm{Κ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}{,}{-}{\\mathrm{*}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{1}}\\right){-}{\\mathrm{diff}}{}\\left({\\mathrm{Τ}}{}\\left({x}{,}{y}{,}{{\\mathrm{θ}}}_{{1}}{,}{{\\mathrm{θ}}}_{{2}}\\right){,}{{\\mathrm{θ}}}_{{2}}\\right){=}{0}\\right]{,}\\left[{}\\right]\\right)$ (14)\n > \n\nCompatibility\n\n • The Physics[PerformOnAnticommutativeSystem] command was introduced in Maple 16." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6544651,"math_prob":1.0000077,"size":8925,"snap":"2019-51-2020-05","text_gpt3_token_len":3861,"char_repetition_ratio":0.21454994,"word_repetition_ratio":0.016607355,"special_character_ratio":0.24459384,"punctuation_ratio":0.2192706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T07:25:09Z\",\"WARC-Record-ID\":\"<urn:uuid:567ab00b-31ea-4068-a0f7-fecbd14a38df>\",\"Content-Length\":\"845420\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b875a316-9ee2-458a-b5d2-c920e53e9f6f>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc641293-3344-4ab4-aaf2-3f22f790e1fd>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/maple/view.aspx?path=Physics%2FPerformOnAnticommutativeSystem\",\"WARC-Payload-Digest\":\"sha1:EQ4C3CIF67GNNSN77O53TIFFATT4OHXA\",\"WARC-Block-Digest\":\"sha1:3YRLUZB3UJ6YN75ONQYOSWH7WC5PRLF6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606872.19_warc_CC-MAIN-20200122071919-20200122100919-00316.warc.gz\"}"}
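The component-splitting idea in steps 1 and 2 of the Description above can be illustrated outside Maple. The following pure-Python toy (written for this page, not Maple code and not part of the help page) stores elements of a Grassmann algebra as coefficient maps over ordered products of generators θ_i; a relation involving such elements then splits into one commutative relation per product, which is the kind of pre-processing PerformOnAnticommutativeSystem performs before calling command.

```python
class Grassmann:
    """Toy algebra: sums of c * θ_{i1}...θ_{ik} with i1 < ... < ik,
    stored as {tuple_of_indices: coefficient}. Generators anticommute."""
    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}
    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1})
    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0) + v
        return Grassmann(t)
    def __neg__(self):
        return Grassmann({k: -v for k, v in self.terms.items()})
    def __mul__(self, other):
        if isinstance(other, (int, float)):
            return Grassmann({k: v * other for k, v in self.terms.items()})
        out = {}
        for k1, v1 in self.terms.items():
            for k2, v2 in other.terms.items():
                if set(k1) & set(k2):
                    continue  # repeated generator: θ_i θ_i = 0
                merged, sign = _sort_with_sign(k1 + k2)
                out[merged] = out.get(merged, 0) + sign * v1 * v2
        return Grassmann(out)
    __rmul__ = __mul__
    def __eq__(self, other):
        return self.terms == other.terms

def _sort_with_sign(idx):
    """Sort generator indices, flipping the sign per transposition."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

t1, t2 = Grassmann.gen(1), Grassmann.gen(2)
assert t1 * t2 == -(t2 * t1)     # anticommutativity
assert (t1 * t1).terms == {}     # nilpotency
expr = 3 * t1 * t2 + 5 * t1 + Grassmann({(): 7})
print(expr.terms)   # → {(1, 2): 3, (1,): 5, (): 7}
```

Reading off `expr.terms` key by key is the analogue of splitting a relation into the coefficients of 1, θ1, θ2 and θ1θ2, yielding a purely commutative system.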
https://stats.stackexchange.com/questions/589821/lstm-gru-and-k-foldcv
[ "# LSTM/GRU and K-Fold CV\n\nI am applying LSTM and GRU models to a financial problem where the dataset is a time series composed of 365 rows (365 consecutive days) and 23 columns (my outcome is the closing price of a financial asset and the remaining 22 are other financial features).\n\nI have two questions:\n\n1. Is it correct to use K-Fold cross-validation? Assuming 5-fold, I would be training and predicting on non-contiguous data.\n2. If K-Fold is a good approach, should I make each fold contiguous or shuffled?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93385416,"math_prob":0.762729,"size":487,"snap":"2022-40-2023-06","text_gpt3_token_len":111,"char_repetition_ratio":0.11801242,"word_repetition_ratio":0.0,"special_character_ratio":0.23819302,"punctuation_ratio":0.072164945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900567,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T18:56:24Z\",\"WARC-Record-ID\":\"<urn:uuid:8dd7e805-e092-4ef3-9461-54e9a98e1c62>\",\"Content-Length\":\"130556\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1059fb9-df62-4f45-ab2d-ee2486aba8a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:22ee09bc-3285-459a-86f8-fd99be6a9514>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/589821/lstm-gru-and-k-foldcv\",\"WARC-Payload-Digest\":\"sha1:FWR2ZGF4LH7WMSZ7K5EPGD5TYYDGVH6O\",\"WARC-Block-Digest\":\"sha1:436SIEQJHZ2RAT6Z62H4CQHXS7WENXH5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337663.75_warc_CC-MAIN-20221005172112-20221005202112-00672.warc.gz\"}"}
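Regarding the contiguous option mentioned in question 2: for time series, the usual contiguous scheme is forward-chaining (expanding-window) splits, where every validation fold is a contiguous block of days and the training set contains only earlier days, so no future information leaks into training. Below is a stdlib-only Python sketch; the function name and the equal-width fold scheme are illustrative choices, not taken from any particular library.

```python
def forward_chaining_splits(n_samples, n_splits=5):
    """Yield (train_idx, test_idx) pairs: each test fold is a contiguous
    block, and the training set holds only the samples that precede it."""
    fold = n_samples // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = list(range(0, k * fold))
        test = list(range(k * fold, (k + 1) * fold))
        yield train, test

# 365 daily rows, 5 folds: the training window grows,
# while the contiguous test block slides forward in time.
for train, test in forward_chaining_splits(365, 5):
    print(len(train), test[0], test[-1])   # first line: 60 60 119
```

Shuffled K-fold, by contrast, would place future days in the training set of earlier test days, which is optimistic for forecasting problems.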
https://deepai.org/publication/advances-in-learning-bayesian-networks-of-bounded-treewidth
[ "", null, "# Advances in Learning Bayesian Networks of Bounded Treewidth\n\nThis work presents novel algorithms for learning Bayesian network structures with bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed-integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in uniformly sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. Some properties of these methods are discussed and proven. The approaches are empirically compared to each other and to a state-of-the-art method for learning bounded treewidth structures on a collection of public data sets with up to 100 variables. The experiments show that our exact algorithm outperforms the state of the art, and that the approximate approach is fairly accurate.\n\n## 1 Introduction\n\nBayesian networks are graphical models widely used to represent joint probability distributions on complex multivariate domains. A Bayesian network comprises two parts: a directed acyclic graph (the structure) describing the relationships among variables in the model, and a collection of conditional probability tables from which the joint distribution can be reconstructed. As the number of variables in the model increases, specifying the underlying structure becomes a tedious and difficult task, and practitioners often resort to learning Bayesian networks directly from data. 
Here, learning a Bayesian network refers to inferring the underlying graphical structure from data, a task well-known to be NP-hard.\n\nLearned Bayesian networks are commonly used for drawing inferences such as querying the posterior probability of some variable after evidence is entered (a task known as belief updating), finding the mode of the joint distribution (known as most probable explanation or MAP inference), or selecting a configuration of a subset of the variables that maximizes their conditional probability (known as marginal MAP inference). All those inferences are NP-hard to compute even approximately [18, 38, 1, 19, 21], and all known (exact and provably good) algorithms have worst-case time complexity that is exponential in the treewidth [31, 19, 34, 24], which is a measure of connectedness of the graph. Polynomial-time algorithms for such inferences do exist, but they provide no guarantees on the quality of the solution they deliver, which raises doubts as to whether occasional bad results are a consequence of suboptimal structure learning or of approximate inference. In fact, under widely believed assumptions from complexity theory, exponential time complexity in the treewidth is inevitable for any algorithm that provides provably good inferences [11, 33]. Thus, learning network structures of small treewidth is essential if one wishes to perform reliable and efficient inference. This is particularly important in the presence of missing data, as learning methods usually resort to some kind of Expectation-Maximization procedure that requires performing belief updating in the network at every iteration. 
In those cases inefficient inference leads to great computational cost of learning; unreliable inference leads to learning underfitted/overfitted structures.\n\nSince estimating a network’s treewidth is itself an NP-hard task, extending current methods for learning Bayesian networks to the case of bounded treewidth while maintaining their relative efficiency and accuracy is not trivial. In comparison to unconstrained Bayesian network learning, few algorithms have been designed for the bounded treewidth case. Korhonen and Parviainen showed that learning bounded treewidth Bayesian networks is NP-hard, and developed an exact algorithm based on dynamic programming that learns optimal n-node structures of treewidth at most k in time 3^n n^(k+1), which is above the 2^n poly(n) time required by the best worst-case algorithms for learning optimal Bayesian networks with no constraint on treewidth. Elidan and Gould combined several heuristics for treewidth computation and network structure learning in order to design approximate methods. Others have addressed the similar (but not equivalent) problem of learning undirected models of bounded treewidth [3, 42, 12]. Very recently, there seems to be an increase of interest in the topic. Berg et al. showed that the problem of learning bounded treewidth Bayesian networks can be reduced to a weighted maximum satisfiability problem, and subsequently solved by weighted MAX-SAT solvers. They report experimental results showing that their approach outperforms Korhonen and Parviainen’s dynamic programming approach. In the same year, Parviainen et al. showed that the problem can be reduced to a mixed-integer linear program (MILP), and then solved by off-the-shelf MILP optimizers (e.g. CPLEX). Their reduced MILP problem, however, has exponentially many constraints in the number of variables. 
Following the work of Cussens, the authors avoid creating such large programs by a cutting-plane generation mechanism, which iteratively includes a new constraint while the optimum has not been found. The generation of each new constraint (cutting plane) requires solving another MILP problem. These works were developed independently of and simultaneously with the work presented here; for this reason, we do not compare our methods with theirs. We intend to do so in the near future.

In this paper, we present two novel ideas for score-based Bayesian network structure learning with a hard constraint on treewidth. We first introduce a mixed-integer linear programming formulation of the problem (Section 3) that builds on existing MILP formulations for unconstrained structure learning of Bayesian networks [16, 17] and for computing the treewidth of a graph. The designed formulation is able to find a score-maximizing Bayesian network of treewidth at most a given constant for models containing many more variables than Korhonen and Parviainen's method can handle, as we empirically demonstrate in Section 5. Unlike the MILP formulation of Parviainen et al., the MILP problem we generate is of polynomial size in the number of variables, and does not require the use of cutting-plane techniques. This makes for a clean and succinct formulation that can be solved with a single call to a MILP optimizer. A better understanding of the cases where one approach is preferred to the other is yet to be achieved.

Since linear programming relaxations are used for solving the MILP problem, any MILP formulation can be used to provide approximate solutions and error estimates in an anytime fashion (i.e., the method can be stopped at any time during the computation and still return a feasible solution). However, the MILP formulations (both ours and the one proposed by Parviainen et al.) cannot cope with very large domains, even if we agreed on obtaining only approximate solutions.
This is because the minimum size of the MILP problems is cubic in the number of variables (hence it is difficult even to start the MILP solver for large domains), and there is probably little we can do to considerably improve this situation (a further discussion is given in Section 3). This limitation is observed in the experiments reported in Section 5, where our MILP formulation requires far more time to obtain far poorer solutions for networks with over 50 variables.

In order to deal with large domains, we devise (in Section 4) an approximate method based on uniform sampling of $k$-trees (maximal triangulated graphs of treewidth $k$), which is achieved by using a fast computable bijection between $k$-trees and Dandelion codes. For each sampled $k$-tree, we either run an exact algorithm similar to Korhonen and Parviainen's (when computationally appealing) to learn the score-maximizing network whose moral graph is a subgraph of that $k$-tree, or we resort to a much more efficient method that samples partial variable orderings uniformly at random from a (relatively small) space of orderings that are compatible with the $k$-tree. We discuss the time and sample complexity of both variants, and compare them to those of similar schemes for learning unconstrained networks. We show empirically (in Section 5) that the double sampling scheme (of $k$-trees and partial variable orderings) is very effective in learning close-to-optimal structures on a selected set of data sets. We conclude in Section 6 by noting that the methods we propose can be considered state-of-the-art, and by suggesting possible improvements.
To start, Section 2 presents some background knowledge on learning Bayesian networks.

## 2 Preliminaries

A Bayesian network is a concise graphical representation of a multivariate domain, where each random variable is associated with a node of its underlying directed acyclic graph (DAG) and local conditional probability distributions are specified for the variable given its parents in the graph (we often refer to variables and nodes in the graph interchangeably).

Let $N=\{1,\dots,n\}$ and consider a finite set $\mathcal{X}=\{X_1,\dots,X_n\}$ of categorical random variables taking values in finite sets $\mathcal{X}_1,\dots,\mathcal{X}_n$. Formally, a Bayesian network is a triple $(G,\mathcal{X},\theta)$, where $G$ is a DAG whose nodes are in one-to-one correspondence with variables in $\mathcal{X}$, and $\theta$ is a set of numerical parameters specifying (conditional) probability values $\theta_i(x_i,x_{\pi_i})=P(x_i\mid x_{\pi_i})$, for every node $i$ in $G$, value $x_i$ of $X_i$ and assignment $x_{\pi_i}$ to the parents $\pi_i$ of $X_i$, according to $G$. The structure $G$ (that is, the DAG of the network) represents a set of stochastic independence assessments among variables in $\mathcal{X}$. In particular, $G$ represents a set of graphical Markov conditions: every variable is conditionally independent of its nondescendant nonparents given its parents. As a consequence, a Bayesian network uniquely defines a joint probability distribution over $\mathcal{X}$ as the product of its parameters [31, Chapter 3.2.3]:

$$P(x_1,\dots,x_n;G,\theta)=\prod_{i\in N}\theta_i(x_i,x_{\pi_i}).\tag{1}$$

Learning the structure $G$ from data is a challenging problem. One approach is to identify, for each variable, the minimal set of variables that makes that variable conditionally independent of the others (its Markov blanket), which is usually done by means of statistical tests of stochastic independence or information-theoretic measures. Alternatively, structural learning can be posed as a combinatorial optimization problem in which one seeks the structure that maximizes a score function that relates to the data likelihood, while avoiding excessive model complexity.
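The factorization in Eq. (1) can be illustrated with a minimal sketch; the network, its parent sets, and the CPT values below are hypothetical, used only to show how the joint probability is assembled as a product of local parameters:

```python
# Minimal sketch of Eq. (1): the joint probability of a Bayesian network is
# the product of local conditional probabilities theta_i(x_i, x_pi_i).

def joint_probability(assignment, parents, theta):
    """assignment: dict node -> value; parents: dict node -> tuple of parent nodes;
    theta: dict node -> dict (value, parent_values) -> probability."""
    p = 1.0
    for i in parents:
        pa_vals = tuple(assignment[j] for j in parents[i])
        p *= theta[i][(assignment[i], pa_vals)]
    return p

# Tiny hypothetical example: A -> B, both binary.
parents = {"A": (), "B": ("A",)}
theta = {
    "A": {(0, ()): 0.6, (1, ()): 0.4},
    "B": {(0, (0,)): 0.9, (1, (0,)): 0.1, (0, (1,)): 0.2, (1, (1,)): 0.8},
}
print(joint_probability({"A": 1, "B": 1}, parents, theta))  # 0.4 * 0.8 = 0.32
```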
Commonly used score functions include the Minimum Description Length (which is equivalent to the Bayesian Information Criterion) and the Bayesian Dirichlet (likelihood) equivalent uniform score [9, 15, 28]. These functions follow different rationales, but they all satisfy two properties: (i) they can be written as a sum of local score functions that depend only on the parent set of each node and on the data, and (ii) the local score functions can be efficiently computed and stored. Score-based structure learning is a difficult task, and research on this topic has been very active [30, 29, 44, 17, 4, 45, 32].

In score-based Bayesian network learning we seek a DAG structure $G^*$ such that

$$G^*=\operatorname*{argmax}_{G\in\mathcal{G}_n}\sum_{i\in N}s_i(\pi_i),\tag{2}$$

where $\mathcal{G}_n$ is the class of all DAGs with $n$ nodes, and the $s_i$ are local score functions that depend only on the parent set $\pi_i$ of node $i$ as given by $G$ (i.e., the computation of each $s_i(\pi_i)$ depends only on the values that $X_i$ and $X_{\pi_i}$ take in the data set). We assume (unless otherwise stated) that local scores have been previously computed and can be retrieved in constant time. Despite the decomposability of the score functions, the optimization cannot be performed locally, lest it almost certainly introduce directed cycles in the graph.

We say that a cycle in an undirected graph has a chord if there are two nodes in the cycle which are connected by an edge outside the cycle. A chordal graph is an undirected graph in which all cycles of length four or more have a chord. Any graph can be made chordal by inserting edges, a process called chordalization [2, 8]. The treewidth of a chordal graph is the size of its largest clique minus one. The treewidth of an arbitrary undirected graph is the minimum treewidth over all chordalizations of it. The moral graph of a DAG is the undirected graph obtained by connecting any two nodes with a common child and dropping arc directions. The treewidth of a DAG is the treewidth of its corresponding moral graph.
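The moralization step just described can be sketched in a few lines; the encoding of a DAG as a mapping from each node to its parent set is our own convention, not the paper's notation:

```python
# Sketch of moralization: connect any two nodes with a common child ("marry"
# the co-parents) and drop arc directions.

def moral_graph(parent_sets):
    """parent_sets: dict node -> iterable of parents. Returns a set of
    frozenset edges of the (undirected) moral graph."""
    edges = set()
    for child, parents in parent_sets.items():
        parents = list(parents)
        for p in parents:                      # keep original arcs, undirected
            edges.add(frozenset((p, child)))
        for a in range(len(parents)):          # marry co-parents pairwise
            for b in range(a + 1, len(parents)):
                edges.add(frozenset((parents[a], parents[b])))
    return edges

# v-structure 0 -> 2 <- 1: moralization adds the edge 0-1.
print(moral_graph({0: [], 1: [], 2: [0, 1]}))
```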
The treewidth of a Bayesian network is the treewidth of its DAG $G$.

An elimination order is a linear ordering of the nodes in a graph. We say that an elimination order is perfect if, for every node in the order, its higher-ordered neighbors form a clique (i.e., are pairwise connected). A graph admits a perfect elimination order if and only if it is chordal. Perfect elimination orders can be computed in linear time, if they exist. The elimination of a node according to an elimination order is the process of pairwise connecting all of its higher-ordered neighbors. Thus, the elimination of all nodes produces a chordal graph for which the elimination order used is perfect. The edges inserted by the elimination process are called fill-in edges. Given a perfect elimination order, the treewidth of the graph can be computed as the maximum number of higher-ordered neighbors over all nodes in the graph.

The reason why most score functions penalize model complexity (as given by the number of free numerical parameters) is that data likelihood always increases by augmenting the number of parents of a variable (and hence the number of free parameters in the model), which leads to overfitting and poor generalization.
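The elimination process described above can be sketched directly: eliminating a node pairwise-connects its higher-ordered neighbors (inserting fill-in edges), and the width of the order is the maximum number of higher-ordered neighbors encountered. The graph encoding below is a hypothetical convention for illustration:

```python
# Eliminate nodes in the given order, adding fill-in edges; the width of the
# order (max number of higher-ordered neighbors) upper-bounds the treewidth,
# and equals it when the order is optimal.

def width_of_elimination_order(edges, order):
    """edges: set of frozenset node pairs; order: list of nodes, eliminated
    first-to-last. Returns the width of the order."""
    edges = set(edges)
    position = {v: i for i, v in enumerate(order)}
    width = 0
    for v in order:
        higher = [u for u in order
                  if position[u] > position[v] and frozenset((u, v)) in edges]
        width = max(width, len(higher))
        for i in range(len(higher)):           # fill-in edges
            for j in range(i + 1, len(higher)):
                edges.add(frozenset((higher[i], higher[j])))
    return width

# 4-cycle: any elimination order yields width 2, the cycle's treewidth.
c4 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
print(width_of_elimination_order(c4, [0, 1, 2, 3]))  # 2
```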
The way scores penalize model complexity generally leads to structures of bounded in-degree and helps in preventing overfitting, but even bounded in-degree graphs can have large treewidth (for instance, directed square grids have treewidth equal to the square root of the number of nodes, yet have maximum in-degree equal to two), which poses a serious problem for subsequent probabilistic inferences with the model.

There are at least two direct reasons to aim at learning Bayesian networks of bounded treewidth: (i) as discussed previously, all known exact algorithms for probabilistic inference have exponential time complexity in the treewidth, and networks with very high treewidth are usually the most challenging for approximate methods; (ii) previous empirical results [37, 23] suggest that bounding the treewidth might improve model performance on held-out data. There is also evidence that bounding the treewidth does not impose a great burden on the expressivity of the model for real data sets.

The goal of learning Bayesian networks of bounded treewidth is to search for $G^*$ such that

$$G^*=\operatorname*{argmax}_{G\in\mathcal{G}_{n,k}}\sum_{i\in N}s_i(\pi_i),\tag{3}$$

where $\mathcal{G}_{n,k}$ is the class of all DAGs with $n$ nodes and treewidth not greater than $k$. From a theoretical point of view, this is no easy task. Korhonen and Parviainen adapted Srebro's complexity result for Markov networks to show that learning the structure of Bayesian networks of treewidth bounded by any constant greater than one is NP-hard. Dasgupta's results also prove this hardness if the score maximizes data likelihood (in the case of networks of treewidth one, that is, directed trees with at most one parent per node, learning can be performed efficiently by Chow and Liu's algorithm).

## 3 Mixed integer linear programming

The first contribution of this work is the mixed-integer linear programming (MILP) formulation that we design to exactly solve the problem of structure learning with bounded treewidth.
MILP formulations have been shown to be very effective for learning Bayesian networks without the treewidth bound [16, 4], surpassing other attempts in a range of data sets. Moreover, the expressive power of the MILP language allows us to encode the treewidth constraint in a natural manner, which might not be easy with other structure learning approaches [45, 44, 29, 22, 35]. We note that computing the treewidth of a graph is an NP-hard problem itself, even if there are algorithms that are linear in the number of nodes but exponential in the treewidth (these algorithms might be seen mostly as theoretical results, since their practical use is shadowed by very large hidden constants). Hence, one should not hope to enforce a bound on the treewidth (which should work for any chosen bound) without machinery that is at least as powerful as NP.

The novel formulation is based on combining a MILP formulation for structure learning with a MILP formulation for computing the treewidth of an undirected graph. There are, however, crucial differences, which we highlight later on. We have avoided the use of sophisticated techniques for MILP in the context of structure learning, such as constraint generation [16, 4], because we are interested in providing a clean and succinct MILP formulation that can be run using off-the-shelf solvers without additional coding.

Since our formulation is a combination of two previous MILP formulations of distinct problems, we will present each formulation separately, and then describe how to combine them into a concise MILP problem.

### 3.1 A MILP formulation for bounding the treewidth

Consider a graph $G=(N,E)$. We begin with a MILP formulation of the class of all supergraphs of $G$ that have treewidth less than or equal to a given value $w$:

$$\begin{aligned}
\sum_{j\in N} y_{ij} &\le w, &&\forall i\in N, &\text{(4a)}\\
(n+1)\cdot y_{ij} &\le n+z_j-z_i, &&\forall i,j\in N, &\text{(4b)}\\
y_{ij}+y_{ji} &= 1, &&\forall (i,j)\in E, &\text{(4c)}\\
y_{ij}+y_{ik}-(y_{jk}+y_{kj}) &\le 1, &&\forall i,j,k\in N, &\text{(4d)}\\
z_i &\in [0,n], &&\forall i\in N, &\text{(4e)}\\
y_{ij} &\in \{0,1\}, &&\forall i,j\in N. &\text{(4f)}
\end{aligned}$$

The formulation above is based on encoding all possible elimination orders of the nodes of $G$. A chordalization $H=(N,E_H)$ of $G$ of treewidth at most $w$ can be obtained from a feasible solution of the program (if one exists) by setting $E_H=\{(i,j): y_{ij}=1 \text{ or } y_{ji}=1\}$. Constraint (4a) ensures $H$ has treewidth at most $w$ by bounding the number of higher-ordered neighbors of every node (which is an alternative way of defining the treewidth of chordal graphs). The variables $z_i$, $i\in N$, take (real) values in $[0,n]$ (Constraint (4e)) and partially define an elimination order of the nodes: a node $i$ is eliminated before node $j$ if $z_i<z_j$ (the specification is partial, since it allows two nodes $i$ and $j$ with $z_i=z_j$). This order does not need to be linear, because there are cases where multiple linearizations of the partial order are equally good in building a chordalization of $G$ (i.e., in minimizing the maximum clique size of $H$). In such cases, two nodes $i$ and $j$ might be assigned the same value, indicating that eliminating $i$ before $j$ and the converse result in chordal graphs with the same treewidth. The variables $y_{ij}$, $i,j\in N$, are 0/1-valued (Constraint (4f)) and indicate whether node $i$ precedes node $j$ in the order (i.e., whether $z_i<z_j$) and an edge exists between them in the resulting chordal graph (recall that an elimination process always produces a chordal graph). Although the values $z_i$ are not forced to be integers in our formulation, in practice they will most likely be so. Constraint (4b) allows $y_{ij}$ to be one only if $j$ appears after $i$ in the order (it in fact requires $z_j\ge z_i+1$ to allow $y_{ij}$ to be one). Constraint (4c) ensures $H$ is a supergraph of $G$. Constraint (4d) guarantees that the elimination ordering induced by the $z_i$ is perfect for $H$: if $j$ and $k$ are higher-ordered neighbors of $i$ in $H$, then $j$ and $k$ are also neighbors in $H$, that is, either $y_{jk}$ or $y_{kj}$ must be one. The practical difference of this formulation with respect to previous ones lies in the fact that we allow partial elimination orders, and we do not need integer variables to enforce such orders. A bottleneck is the specification of Constraint (4d), as there are $\Theta(n^3)$ such constraints.
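To make Constraints (4a)–(4f) concrete, the following is a hypothetical feasibility checker (not part of the paper's method) that tests a candidate assignment of the $y$ and $z$ variables on a small graph without invoking a MILP solver; the dict-based variable encoding is our own convention, and distinct $i,j,k$ are assumed in Constraint (4d):

```python
# Check a candidate (y, z) assignment against Constraints (4a)-(4f).
# y: dict-of-dicts of 0/1 values; z: dict of reals in [0, n]; E: edge list.

def satisfies_constraints_4(n, E, w, y, z):
    N = range(n)
    if any(sum(y[i][j] for j in N) > w for i in N):                      # (4a)
        return False
    if any((n + 1) * y[i][j] > n + z[j] - z[i] for i in N for j in N):   # (4b)
        return False
    if any(y[i][j] + y[j][i] != 1 for (i, j) in E):                      # (4c)
        return False
    for i in N:                                                          # (4d), distinct i, j, k
        for j in N:
            for k in N:
                if len({i, j, k}) == 3 and y[i][j] + y[i][k] - (y[j][k] + y[k][j]) > 1:
                    return False
    return all(0 <= z[i] <= n for i in N)                                # (4e); (4f) implicit for 0/1 input

# 4-cycle with elimination order 0,1,2,3 and fill-in edge 1-3: treewidth 2.
n, E, w = 4, [(0, 1), (1, 2), (2, 3), (0, 3)], 2
y = {i: {j: 0 for j in range(n)} for i in range(n)}
for i, j in [(0, 1), (0, 3), (1, 2), (1, 3), (2, 3)]:
    y[i][j] = 1
z = {0: 0, 1: 1, 2: 2, 3: 3}
print(satisfies_constraints_4(n, E, w, y, z))  # True; with w=1, (4a) fails
```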
The following result is an immediate conclusion of the above reasoning.

###### Proposition 1

The graph $G$ has treewidth at most $w$ if and only if the set defined by Constraints (4) is non-empty.

###### Proposition 2

Let $y_{ij}$, $z_i$, $i,j\in N$, be variables satisfying Constraints (4a)–(4f). Then the graph $H=(N,E_H)$, where $E_H=\{(i,j): y_{ij}=1 \text{ or } y_{ji}=1\}$, is a chordalization of $G$ with treewidth at most $w$, and any elimination order consistent with the partial order induced by the $z_i$ is perfect for $H$.

### 3.2 A MILP formulation for structure learning

We now turn our attention to the MILP formulation of the structure learning part. Consider a chordal (undirected) graph $H=(N,E_H)$, a perfect elimination order for $H$, and let $y_{ij}$, $i,j\in N$, be 0/1-valued variables such that $y_{ij}=1$ if and only if $E_H$ contains $(i,j)$ and $i$ is eliminated before $j$. For each node $i$ in $N$, let $F_i$ be the collection of all allowed parent sets for that node (these sets can be specified manually by the user or simply defined as the subsets of $N\setminus\{i\}$ with cardinality less than a given bound). We denote an element of $F_i$ as $F_{it}$, with $t\in\{1,\dots,|F_i|\}$ (hence $F_{it}\subseteq N\setminus\{i\}$). The following MILP formulation specifies the class of all DAGs over $N$ that are consistent with the parent sets $F_i$ and whose moral graph is a subgraph of $H$:

$$\begin{aligned}
\sum_{t}\pi_{it} &= 1, &&\forall i\in N, &\text{(5a)}\\
(n+1)\,\pi_{it} &\le n+v_j-v_i, &&\forall i\in N,\forall t,\forall j\in F_{it}, &\text{(5b)}\\
\pi_{it} &\le y_{ij}+y_{ji}, &&\forall i\in N,\forall t,\forall j\in F_{it}, &\text{(5c)}\\
\pi_{it} &\le y_{jk}+y_{kj}, &&\forall i\in N,\forall t,\forall j,k\in F_{it}, &\text{(5d)}\\
v_i &\in [0,n], &&\forall i\in N, &\text{(5e)}\\
\pi_{it} &\in \{0,1\}, &&\forall i\in N,\forall t, &\text{(5f)}
\end{aligned}$$

where the scope of each $t$ is $\{1,\dots,|F_i|\}$. A DAG $G$ can be obtained from a solution to the above program by setting $\pi_i=F_{it}$ for the unique $t$ such that $\pi_{it}=1$. The variables $v_i$, $i\in N$, take values in $[0,n]$ (Constraint (5e)) and partially specify a topological order of the nodes in $G$: if $v_j\le v_i$ then $j$ is not an ancestor of $i$. The variables $\pi_{it}$, $i\in N$, $t\in\{1,\dots,|F_i|\}$, are 0/1-valued (Constraint (5f)) and indicate whether the $t$-th parent set in $F_i$ was chosen for node $i$. Constraint (5a) enforces that exactly one parent set is chosen for each node. Constraint (5b) forces those choices to be acyclic, that is, to respect the topological order induced by the variables $v_i$ (with ties broken arbitrarily for nodes with $v_i=v_j$).
Here too, the order does not need to be linear. In fact, only the relative ordering of nodes that are connected in $H$ is relevant, because Constraints (5c) and (5d) ensure that arcs appear in $G$ only if the corresponding edges of the moral graph of $G$ exist in $H$ (Constraint (5d) is responsible for having the moralization of the graph fall inside $H$).

###### Proposition 3

Let $\pi_{it}$, $v_i$, $y_{ij}$ be variables satisfying Constraints (5). Then the directed graph $G=(N,A)$, where $A=\{(j,i): j\in\pi_i\}$, is acyclic and consistent with every set $F_i$. Moreover, the moral graph of $G$ is a subgraph of $H$.

A corollary of the above result is that the treewidth of $G$ is at most the treewidth of $H$.

### 3.3 Combining the MILP formulations

We can now put together the two previous MILP formulations to reach the following MILP formulation for the problem of learning DAGs of treewidth bounded by a constant $w$:

$$\begin{aligned}
\text{maximize:}\quad & \sum_{i,t}\pi_{it}\cdot s_i(F_{it}) & & &\text{(6b)}\\
\text{subject to:}\quad \sum_{j\in N} y_{ij} &\le w, &&\forall i\in N, &\text{(6c)}\\
(n+1)\cdot y_{ij} &\le n+z_j-z_i, &&\forall i,j\in N, &\text{(6d)}\\
y_{ij}+y_{ik}-(y_{jk}+y_{kj}) &\le 1, &&\forall i,j,k\in N, &\text{(6e)}\\
\sum_t \pi_{it} &= 1, &&\forall i\in N, &\text{(6f)}\\
(n+1)\,\pi_{it} &\le n+v_j-v_i, &&\forall i\in N,\forall t,\forall j\in F_{it}, &\text{(6g)}\\
\pi_{it} &\le y_{ij}+y_{ji}, &&\forall i\in N,\forall t,\forall j\in F_{it}, &\text{(6h)}\\
\pi_{it} &\le y_{jk}+y_{kj}, &&\forall i\in N,\forall t,\forall j,k\in F_{it}, &\text{(6i)}\\
z_i\in[0,n],\ v_i &\in [0,n], &&\forall i\in N, &\text{(6j)}\\
y_{ij} &\in \{0,1\}, &&\forall i,j\in N, &\text{(6k)}\\
\pi_{it} &\in \{0,1\}, &&\forall i\in N,\forall t. &\text{(6l)}
\end{aligned}$$

As the following result shows, the MILP formulation above specifies DAGs of bounded treewidth:

###### Theorem 1

Let $\pi_{it}$, $v_i$, $y_{ij}$, $z_i$ be variables satisfying Constraints (6c)–(6l), and define a directed graph $G=(N,A)$, where $A=\{(j,i): j\in\pi_i\}$. Then $G$ is acyclic, consistent with the parent sets $F_i$, and has treewidth at most $w$.

###### Corollary 1

If the variables additionally maximize (6b) subject to (6c)–(6l), then the DAG $G$ as defined above is a solution to the optimization in (3).

The MILP formulation (6) can be directly fed into any off-the-shelf MILP optimizer. According to Corollary 1, the outcome will always be an optimum structure if enough resources (memory and time) are given. Standard MILP optimizers (e.g.
CPLEX) often employ branch-and-bound (or branch-and-cut) procedures, which can be halted prematurely at any time and still provide a valid solution and an outer bound for the maximum score. Hence, the MILP formulation also provides an anytime algorithm for learning Bayesian networks of bounded treewidth: the procedure can be stopped at any time and still provide an approximate solution and an error bound. Moreover, the quality of the approximate solution returned increases with time, while the error bounds monotonically decrease and eventually converge to zero.

### 3.4 Comparison with the dynamic programming approach

To validate the practical feasibility of our MILP formulation, we compare it against the dynamic programming method proposed previously for this problem, which we call K&P from now on. (We used the freely available code provided by the authors at http://www.cs.helsinki.fi/u/jazkorho/aistats-2013/.) Table 1 shows the time performance of our MILP formulation and that of K&P on a collection of reasonably small data sets from the UCI repository (obtained from http://archive.ics.uci.edu/ml/, discretized over the median value when needed) and small values of the treewidth bound. More details about these data are presented in Section 5. The experiments have been run with a limit of 64GB of memory and a maximum number of parents per node equal to three (the latter restriction facilitates the experiments and does not constrain the possible treewidths that can be found). While one should be careful when directly comparing the times between methods, as the implementations use different languages (we are running CPLEX 12.4, while K&P uses Cython-compiled Python code), we note that our MILP formulation is orders of magnitude faster than K&P, and able to solve many problems which the latter could not (in Section 5 we show the results of experiments with much larger domains).
A time limit of 3 hours was given to the MILP, in which case its own error estimate is reported (in fact, it found the optimal structure in all instances, but was not able to certify optimality within 3 hours).

The results in the table show that our MILP formulation largely outperforms K&P, being able to handle much larger problems. Yet we see from these experiments that both algorithms scale poorly in the number of variables. In particular, K&P cannot cope with data sets containing more than a dozen variables. The results suggest that the MILP problems become easier as the treewidth bound increases. This is likely a consequence of the enlarged space of feasible solutions, which makes the linear relaxations used for solving the MILP problem tighter, thus reducing the computational load. This is probably aggravated by the small number of variables in these data sets (hence, by increasing the treewidth we effectively approximate an unbounded learning situation).

We shall demonstrate empirically in Section 5 that the quality of solutions found by the MILP approach in a reasonable amount of time degrades quickly as the number of variables reaches several dozen. Indeed, the MILP formulation is unable to find reasonable solutions for data sets containing 100 variables, which is not surprising given that the number of Constraints (6e) and (6i) is cubic in the number of variables; thus, as $n$ increases, even the linear relaxations of the MILP problem become hard to solve. In the next section, we present a sampling algorithm over the space of $k$-trees to overcome such limitations and handle large domains.
The MILP formulation just described sets a baseline for the performance of such an approximate approach.

## 4 Sampling k-trees using Dandelion codes

In this section we develop an approximate method for learning bounded treewidth Bayesian networks that is based on sampling graphs of bounded treewidth and subsequently finding DAGs whose moral graph is a subgraph of the sampled graph. The approach is designed for data sets with large domains, which cannot be handled by the MILP formulation.

A naive approach to designing an approximate method would be to extend one of the sampling methods for unconstrained Bayesian network learning. For instance, we could envision a rejection sampling approach, which would sample structures using some available procedure (for instance, by sampling topological orderings and then greedily finding a DAG structure consistent with each order), and verify their treewidth, discarding the structure when the test fails. There are two great issues with this approach: (i) the computation of treewidth is a hard problem, and even if there are linear-time algorithms (but exponential in the treewidth), they perform poorly in practice; (ii) virtually all structures would be discarded, due to the fact that complex structures tend to have larger scores than simple ones, at least for the most used score functions (their penalizations reduce the local complexity of the model, but are not able to constrain a global property such as treewidth). We empirically verified these facts, but will not report further on them here.

Another natural approach to the problem is to consider both an elimination order for the variables (from which the treewidth can be computed) and a topological order (from which one can greedily search for parent sets without creating cycles in the graph).
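The order-based greedy search just mentioned can be sketched as follows: given a topological order, each node independently picks its best-scoring parent set among its predecessors, so the result is acyclic by construction. The score function below is a hypothetical toy, not one of the paper's scores:

```python
# Greedy DAG construction consistent with a given topological order: a node's
# parents are chosen only among its predecessors, so no cycle can appear.
from itertools import combinations

def greedy_dag_for_order(order, local_score, max_parents=2):
    """order: list of nodes; local_score(node, parent_tuple) -> float.
    Returns dict node -> chosen parent tuple."""
    dag = {}
    for pos, node in enumerate(order):
        predecessors = order[:pos]
        candidates = [ps for r in range(min(max_parents, len(predecessors)) + 1)
                      for ps in combinations(predecessors, r)]
        dag[node] = max(candidates, key=lambda ps: local_score(node, ps))
    return dag

# Toy decomposable score: rewards node 2 for having both 0 and 1 as parents.
def toy_score(node, parents):
    return 1.0 if (node == 2 and set(parents) == {0, 1}) else 0.0

print(greedy_dag_for_order([0, 1, 2], toy_score))  # {0: (), 1: (), 2: (0, 1)}
```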
It is straightforward to sample uniformly from the space of orderings, but the combined overall number of such orderings is quite high: $(n!)^2$, which grows superexponentially (by the Stirling approximation). We propose an approach that is more efficient in terms of the size of the sampling space, and yet can still be sampled uniformly (uniform sampling is a desirable property, as it ensures a good coverage of the space and is superior to other options if one has no prior information about the search space). This approach is based on the set of $k$-trees.

###### Definition 1

A $k$-tree is defined in the following recursive way:
(1) A $(k+1)$-clique is a $k$-tree.
(2) If $G=(V,E)$ is a $k$-tree, $K\subseteq V$ is a $k$-clique of $G$ and $v\notin V$, then $G'=(V\cup\{v\},\ E\cup\{(v,u):u\in K\})$ is a $k$-tree.

We denote by $\mathcal{T}_{n,k}$ the set of all $k$-trees over $n$ nodes. In fact, a Bayesian network of treewidth bounded by $k$ is closely related to a $k$-tree. Because $k$-trees are exactly the maximal graphs with treewidth $k$ (graphs to which no more edges can be added without increasing their treewidth), we know that the moral graph of the optimal structure has to be a subgraph of a $k$-tree.

The idea is to sample $k$-trees and then search for the best structure whose moral graph is one of the subgraphs of the $k$-tree. While directly sampling a $k$-tree might not be trivial, Caminiti et al. proposed a linear-time method for coding and decoding $k$-trees into what are called Dandelion codes (the set of such codes is denoted by $\mathcal{A}_{n,k}$). Moreover, they established a bijective mapping between codes in $\mathcal{A}_{n,k}$ and $k$-trees in $\mathcal{T}_{n,k}$. A code is a pair $(Q,S)$, where $Q\subseteq N$ with $|Q|=k$ and $S$ is a list of $n-k-2$ pairs of integers, one symbol of which (usually denoted $\epsilon$) is an arbitrary number not in $N$.
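A hypothetical sketch of uniform Dandelion-code sampling follows. The exact alphabet of the pairs in $S$ is fixed by Caminiti et al.'s construction; here we only assume that each of the $n-k-2$ entries ranges over a set of $k(n-k)+1$ symbols (represented below by a single integer index), an assumption consistent with the $k$-tree count $|\mathcal{T}_{n,k}|=\binom{n}{k}(k(n-k)+1)^{n-k-2}$:

```python
# Uniformly sample a (Q, S)-style code and count k-trees.
import random
from math import comb

def sample_dandelion_code(n, k, rng=random):
    Q = sorted(rng.sample(range(1, n + 1), k))       # uniform k-subset of N
    # one index per pair of S, over the k(n-k)+1 possible values (assumed encoding)
    S = [rng.randrange(k * (n - k) + 1) for _ in range(n - k - 2)]
    return Q, S

def num_k_trees(n, k):
    """|T_{n,k}| = C(n,k) * (k(n-k)+1)^(n-k-2)."""
    return comb(n, k) * (k * (n - k) + 1) ** (n - k - 2)

# Sanity check against Cayley's formula: 1-trees are labeled trees, n^(n-2) of them.
print(num_k_trees(5, 1))  # 125 == 5**3
```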
Dandelion codes can be sampled uniformly at random by a trivial linear-time algorithm that uniformly chooses $k$ elements out of $N$ to build $Q$, and then uniformly samples the $n-k-2$ pairs of integers in $S$.

###### Theorem 2

There is a bijection mapping elements of $\mathcal{A}_{n,k}$ and $\mathcal{T}_{n,k}$ that is computable in time linear in $n$ and $k$.

Given a $k$-tree $T$, we can use a dynamic programming algorithm similar to K&P's to find the optimal structure whose moral graph is a subgraph of $T$. Our implementation follows those ideas, but can also be seen as extending a divide-and-conquer method to account for all possible divisions of nodes. This results in the following theorem.

###### Theorem 3

For any fixed $k$, given a $k$-tree $T$ and the scoring functions $s_i$ for each node $i$, we can find a DAG maximizing the score whose moral graph is a subgraph of $T$ in time and space linear in $n$ (but exponential in $k$).

We can combine the linear-time sampling of $k$-trees described in Theorem 2 with the linear-time learning of bounded structures consistent with a graph in the above theorem to obtain an algorithm for learning bounded treewidth Bayesian networks. The algorithm is described in Algorithm 1 [Version 1].

###### Theorem 4

The sampling space of Algorithm 1 [Version 1] is less than $e^{n\log n+(n-2k)\log k}$. Each of its iterations runs in time linear in $n$ (but exponential in $k$).

Proof. The following equality holds:

$$|\mathcal{T}_{n,k}|=\binom{n}{k}\cdot(k(n-k)+1)^{n-k-2}.\tag{7}$$

It is not hard to see where the maximum over $k$ is attained, because of the symmetry of $\binom{n}{k}$ around $k=n/2$ and the behavior of the second factor as $k$ increases. By manipulating this number and applying Stirling's approximation to the factorials, we obtain:

$$|\mathcal{T}_{n,k}| \;\le\; \sqrt{n}\, e^{n\log n+1-n}\left(\frac{e}{n-k}\right)^{n-k}\left(\frac{e}{k}\right)^{k} k^{n-k-2}(n-k)^{n-k-2} \;\le\; \frac{e\sqrt{n}}{(n-k)^2}\, e^{n\log n}\, k^{n-2k-2} \;\le\; e^{n\log n+(n-2k)\log k},
$$

which is the bound in the statement.
The decoding algorithm has complexity linear in $n$ and $k$ (Theorem 2), as does the method to uniformly sample a Dandelion code, and the method to find the best DAG consistent with a $k$-tree is linear in $n$ (Theorem 3).

While the running time of Algorithm 1 [Version 1] is linear in $n$, the computational complexity of step 2.c is exponential in the treewidth. Hence, one cannot hope to use it with moderately high treewidth bounds (say, larger than 8). Regarding the sample space, according to the above theorem it is slightly larger than that of order-based learning of unconstrained Bayesian networks, especially when $k$ is small compared to $n$. However, each iteration of step 2.c needs considerably more effort than the corresponding iteration in the unbounded case (yet, as the method is theoretically linear in $n$, more efficient implementations of the algorithm that searches within a given $k$-tree might bring an additional boost to this approach in the future).

As just explained, the main practical drawback of Algorithm 1 [Version 1] is step 2.c, which processes each sampled $k$-tree. In the sequel we propose a new approach ([Version 2]) that is much faster per iteration, at the price of a slight increase in the sampling space. We will empirically compare these approaches in the next section.

Let $\prec$ define a partial order of the nodes. We say that a DAG $G$ is consistent with $\prec$ if, whenever $i\prec j$, there is no directed path from $j$ to $i$ in $G$. In other words, $\prec$ constrains the valid topological orderings for the nodes in $G$. We do not force $\prec$ to be a linear order, because we are only interested in orderings that specify, for each edge in a $k$-tree $T$, which of the two endpoints precedes the other (in other words, we are only interested in the possible ways of orienting the edges of the $k$-tree).
There are multiple linear orderings that achieve the very same result for $T$, and our goal is to sample from the smallest possible space of orderings (if we used a linear order, the sampling space would be $n!$).

A partial order can be represented as a DAG: $i$ is smaller than $j$ in the order if and only if node $i$ is an ancestor of node $j$ in that DAG. Given a $k$-tree $T$, we sample such a DAG by following the same recursive process as in Definition 1. This is described in Algorithm 2. The procedure produces partial orders (i.e., DAGs) whose underlying graph (obtained by ignoring arc directions) is exactly the graph $T$. Note that the treewidth of the DAG representing the order might exceed the treewidth of $T$. This does not affect the correctness of Algorithm 1, as the order is only used to specify which node precedes which in the order, and hence which are the possible parents; the actual parents are chosen so that the treewidth bound is respected, which can be done efficiently.

###### Theorem 5

Algorithm 2 samples DAGs from a sample space of size $(k+1)!\,(k+1)^{n-k-1}$ and runs in time linear in $n$ and $k$.

Proof. The sampling of the nodes in the root clique takes $O(k)$ time, by sampling one of the $(k+1)!$ ways to choose the arcs without creating cycles. We assume that an appropriate structure representing $T$ is known (e.g., a tree-decomposition with $O(n)$ nodes), so Steps 1 and 3 can be done in linear time. For each iteration of Step 4, we spend $O(k)$ time, because there are only $k+1$ ways to direct the edges, as this is equivalent to placing the new node in its relative order with respect to its $k$ already ordered neighbors. Hence the total running time is $O(nk)$ and the sampling space has the claimed size.

The following result shows that the sampling space of this version of the sampling algorithm remains reasonably small, especially when $k$ is much smaller than $n$ (it would also be small if $k$ were close to $n$, since then $|\mathcal{T}_{n,k}|$ decreases drastically, so the total sampling space would also decrease).

###### Theorem 6

The sampling space of Algorithm 1 [Version 2] is less than $e^{n\log n+(n-2k)\log k}\,(k+1)!\,(k+1)^{n-k-1}$. Each of its iterations runs in time linear in $n$ and $k$.

Proof.
As before, the decoding algorithm (Theorem 2) and the method to uniformly sample a Dandelion code run in time linear in both $n$ and $k$. Algorithm 2 samples the ordering in linear time too. Finally, finding the best DAG consistent with a $k$-tree $T$ and an order $\prec$ is a greedy procedure over all nodes (choosing the parent set of one node at a time): the treewidth cannot exceed $k$ because we take a subgraph of $T$, and no cycles can be formed if we respect $\prec$.

Although the sampling space of Version 2 is larger than that of Version 1, Version 2 is much faster per iteration. This allows us to explore a much larger region of the space of $k$-trees than Version 1 can within a fixed amount of time. Moreover, one can run Version 2 without pre-computing the score function: when scores are needed, they are computed and stored in a hash table for further accesses, thus closely matching another desirable characteristic of order-based learning methods for unbounded treewidth (namely, avoiding computing all scores a priori).

## 5 Experiments

We empirically analyze the accuracy of Algorithm 1 by comparing its two versions with each other and with the values obtained by the MILP method. As before, we use a collection of data sets from the UCI repository of varying dimensionality, with variables discretized over the median value when needed. The number of (binary) variables and samples in each data set are described in Table 2. Some columns of the original data sets audio and community were discarded: 7 variables of audio were always constant, 5 variables of community had almost a different value per sample (such as personal data), and 22 variables had missing data (Table 2 shows dimensions after this pre-processing).
In all experiments, we maximize the Bayesian Dirichlet likelihood equivalent uniform (BDeu) score with equivalent sample size equal to one.

We use treewidth bounds of 4 and 10, and a maximum parent set size of 3 (for hill and community, it was set to 2; nevertheless, the MILP formulation is the one with a strong dependency on the maximum parent set size, as scores need to be pre-computed). To be fair across runs, we pre-computed all scores and considered them as input of the problem. The MILP has been optimized by CPLEX 12.4 with a memory limit of 64GB. We allowed it to run for up to three hours, and also collected the incumbent solution after 10 minutes. Algorithm 1 has been given only 10 minutes (in either version).

Figure 1: Performance of methods relative to the solution found by Version 2 of Algorithm 1 with a treewidth limit of four. MILP results are missing for community and hill because it was not able to produce a solution for those cases.

Figure 2: Performance of methods relative to the solution found by Version 2 of Algorithm 1 with a treewidth limit of ten. MILP results after 10 minutes are missing for community and hill because it was not able to produce a solution within that time.

To account for the variability of the performance of the sampling methods with respect to the sampling seed, we ran each version of Algorithm 1 ten times on each data set with different seeds. We report the minimum, median and maximum obtained values over those runs for each data set. We show the relative scores (in percentage) of the approximate methods (Versions 1 and 2 of Algorithm 1 and the best score found by the MILP formulation within 10 minutes and 3 hours) with respect to Version 2's median score, for treewidth bounds of four (Figure 1) and ten (Figure 2). The relative score is computed as the ratio of the obtained value to the median score of Version 2, so higher values are better.
Moreover, a value higher than 100% shows that the method outperformed Version 2, whereas a value smaller than 100% shows the converse. The raw data used in the figures appear in Tables 3 (for Figure 1) and 4 (for Figure 2). The exponential dependence on treewidth of Version 1 made it intractable to run with a treewidth bound greater than 8. We see from the top plot that Version 2 is largely superior to Version 1, even if the former might only find suboptimal networks for a given k-tree. This is probably a consequence of the much lower running time per iteration, which allows Version 2 to explore a much larger set of k-trees. It also suggests that spending time finding good k-trees is more worthwhile than optimizing network structures for a given k-tree. We also see that the MILP formulation scales poorly with the number of variables, being unable to obtain satisfactory solutions for data sets with more than 50 variables. On the hill data set with treewidth , CPLEX running the MILP formulation was not able to output any solution within 10 minutes, and the solution obtained within 3 hours is far to the left of the zoomed area of the graph in Figure 1; on the community data set with treewidth , CPLEX did not find any solution within 3 hours. Regarding the treewidth bound of ten (Figure 2), we observe that Version 2 is very accurate and outperforms the MILP formulation on the larger data sets.

It is worth noting that both versions of Algorithm 1 were implemented in Matlab; hence, the comparison with the approximate solution of running the MILP formulation with the same amount of time (10 minutes) might be unfair, as we expect to produce better results by an appropriate re-coding of our sampling methods in a more efficient language (one could also try to improve the MILP formulation, although it would eventually suffer from the problems discussed in Section 3).
Nevertheless, the results show that Version 2 is very competitive even in this scenario.

## 6 Conclusions

We have created new exact and approximate procedures to learn Bayesian networks of bounded treewidth. They perform well and are of immediate practical use. The designed mixed-integer linear programming (MILP) formulation improves on MILP formulations for related tasks, especially regarding the specification of treewidth-related constraints. It solves the problem exactly and surpasses a state-of-the-art method both in the size of networks and in the treewidth that it can handle. Even though the results indicate it is better than the state of the art, the MILP approach is not so accurate and might fail in large domains. To address this, we have proposed a double sampling idea that provides the means to learn Bayesian networks in large domains and with high treewidth limits, and is empirically shown to perform very well on a collection of public data sets. It scales well, because its complexity is linear both in the domain size and in the treewidth bound. There are certainly other search methods that can be integrated with our sampling approach, for instance a local search after every iteration of sampling, local permutations of orderings that are compatible with the k-trees, etc. We leave the study of these and other avenues for future work.

During the making of this work, two closely related works appeared in the literature. Berg et al. developed an exact learning procedure based on maximum satisfiability. Parviainen et al. developed an alternative MILP formulation of the problem with exponentially many constraints, and used cutting plane generation techniques to improve performance. These works were developed independently and simultaneously with the work presented here; future work should compare their performance empirically against the methods proposed here.

## 7 Acknowledgments

This work was partly supported by the grant N00014-12-1-0868 from the US Office of Navy Research, the Swiss NSF grant n.
200021_146606/1, and the FAPESP grant n. 2013/23197-4.

## References

• Abdelbar and Hedetniemi A. M. Abdelbar and S. M. Hedetniemi. Approximating MAPs for belief networks is NP-hard and other theorems. Artif. Intell., 102(1):21–38, 1998.
• Arnborg et al. S. Arnborg, D. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM J. on Matrix Analysis and Applications, 8(2):277–284, 1987.
• Bach and Jordan F. R. Bach and M. I. Jordan. Thin junction trees. In Advances in Neural Inf. Proc. Systems 14, pages 569–576, 2001.
• Bartlett and Cussens M. Bartlett and J. Cussens. Advances in Bayesian Network Learning using Integer Programming. In Proc. 29th Conf. on Uncertainty in AI, pages 182–191, 2013.
• Beineke and Pippert L. W. Beineke and R. E. Pippert. On the number of k-dimensional trees. J. of Comb. Theory, 6:200–205, 1969.
• Berg et al. J. Berg, M. Järvisalo, and B. Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In Proc. 17th Int. Conf. on AI and Stat., pages 86–95, 2014. JMLR W&CP 33.
• Beygelzimer and Rish A. Beygelzimer and I. Rish. Approximability of probability distributions. In Advances in Neural Inf. Proc. Systems 16, pages 377–384, 2003.
• Bodlaender H. L. Bodlaender. A linear time algorithm for finding tree-decompositions of small treewidth. SIAM J. on Computing, 25(6):1305–1317, 1996.
• Buntine W. Buntine. Theory refinement on Bayesian networks. In Proc. 7th Conf. on Uncertainty in AI, pages 52–60, 1991.
• Caminiti et al. S. Caminiti, E. G. Fusco, and R. Petreschi. Bijective linear time coding and decoding for k-trees. Theory of Comp. Systems, 46(2):284–300, 2010.
• Chandrasekaran et al. V. Chandrasekaran, N. Srebro, and P. Harsha. Complexity of inference in graphical models. In Proc. 24th Conf. on Uncertainty in AI, pages 70–78, 2008.
• Chechetka and Guestrin A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In Advances in Neural Inf. Proc.
Systems, pages 273–280, 2007.
• Chickering D. M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: AI and Stat. V, pages 121–130. Springer-Verlag, 1996.
• Chow and Liu C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. Inf. Theory, IEEE Trans. on, 14(3):462–467, 1968.
• Cooper and Herskovits G. F. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic networks from data. Mach. Learning, 9(4):309–347, 1992.
• Cussens J. Cussens. Bayesian network learning with cutting planes. In Proc. 27th Conf. on Uncertainty in AI, pages 153–160, 2011.
• Cussens et al. J. Cussens, M. Bartlett, E. M. Jones, and N. A. Sheehan. Maximum Likelihood Pedigree Reconstruction using Integer Linear Programming. Genetic Epidemiology, 37(1):69–83, 2013.
• Dagum and Luby P. Dagum and M. Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artif. Intell., 60(1):141–153, 1993.
• Darwiche A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
• Dasgupta S. Dasgupta. Learning polytrees. In Proc. 15th Conf. on Uncertainty in AI, pages 134–141, 1999.
• de Campos C. P. de Campos. New Complexity Results for MAP in Bayesian Networks. In Proc. Int. Joint Conf. on AI, pages 2100–2106, 2011.
• de Campos et al. C. P. de Campos, Z. Zeng, and Q. Ji. Structure learning of Bayesian networks using constraints. In Proc. 26th Int. Conf. on Mach. Learning, pages 113–120, 2009.
• Elidan and Gould G. Elidan and S. Gould. Learning Bounded Treewidth Bayesian Networks. J. of Mach. Learning Res., 9:2699–2731, 2008.
• Ermon et al. S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Taming the curse of dimensionality: Discrete integration by hashing and optimization. In Proc. 30th Int. Conf. on Mach. Learning, pages 334–342, 2013.
• Friedman N. Friedman. The Bayesian structural EM algorithm. In Proc. 14th Conf.
on Uncertainty in AI, pages 129–138, 1998.
• Friedman et al. N. Friedman, I. Nachman, and D. Pe'er. Learning Bayesian network structure from massive datasets: The "sparse candidate" algorithm. In Proc. 15th Conf. on Uncertainty in AI, pages 206–215, 1999.
• Grigoriev et al. A. Grigoriev, H. Ensinck, and N. Usotskaya. Integer linear programming formulations for treewidth. Technical report, Maastricht Res. School of Economics of Tech. and Organization, 2011.
• Heckerman et al. D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Mach. Learning, 20(3):197–243, 1995.
• Hemmecke et al. R. Hemmecke, S. Lindner, and M. Studený. Characteristic imsets for learning Bayesian network structure. Int. J. of Approx. Reasoning, 53(9):1336–1349, 2012.
• Jaakkola et al. T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian network structure using LP relaxations. In Proc. 13th Int. Conf. on AI and Stat., pages 358–365, 2010. JMLR W&CP 9.
• Koller and Friedman D. Koller and N. Friedman. Probabilistic Graphical Models. MIT press, 2009.
• Korhonen and Parviainen J. H. Korhonen and P. Parviainen. Exact learning of bounded tree-width Bayesian networks. In Proc. 16th Int. Conf. on AI and Stat., pages 370–378, 2013. JMLR W&CP 31.
• Kwisthout et al. J. H. P. Kwisthout, H. L. Bodlaender, and L. C. van der Gaag. The Necessity of Bounded Treewidth for Efficient Inference in Bayesian Networks. In Proc. 19th European Conf. on AI, pages 237–242, 2010.
• Mauá and de Campos D. D. Mauá and C. P. de Campos. Anytime marginal MAP inference. In Proc. 28th Int. Conf. on Mach. Learning, pages 1471–1478, 2012.
• Parviainen and Koivisto P. Parviainen and M. Koivisto. Exact structure discovery in Bayesian networks with less space. In Proc. 25th Conf. on Uncertainty in AI, pages 436–443, 2009.
• Parviainen et al. P. Parviainen, H. S. Farahani, and J. Lagergren.
Learning bounded tree-width Bayesian networks using integer linear programming. In Proc. 17th Int. Conf. on AI and Stat., pages 751–759, 2014. JMLR W&CP 33.
• Perrier et al. E. Perrier, S. Imoto, and S. Miyano. Finding optimal Bayesian network given a super-structure. J. of Mach. Learning Res., 9(2):2251–2286, 2008.
• Roth D. Roth. On the hardness of approximate reasoning. Artif. Intell., 82(1–2):273–302, 1996.
• Schwarz G. Schwarz. Estimating the dimension of a model. Annals of Stat., 6(2):461–464, 1978.
• Silander and Myllymaki T. Silander and P. Myllymaki. A simple approach for finding the globally optimal Bayesian network structure. In Proc. 22nd Conf. on Uncertainty in AI, pages 445–452, 2006.
• Spirtes and Meek P. Spirtes and C. Meek. Learning Bayesian networks with discrete variables from data. In Proc. 1st Int. Conf. on Knowledge Discovery and Data Mining, pages 294–299, 1995.
• Srebro N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artif. Intell., 143(1):123–138, 2003.
• Teyssier and Koller M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proc. 21st Conf. on Uncertainty in AI, pages 584–590, 2005.
• Yuan and Malone C. Yuan and B. Malone. An Improved Admissible Heuristic for Learning Optimal Bayesian Networks. In Proc. 28th Conf. on Uncertainty in AI, pages 924–933, 2012.
• Yuan and Malone C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective. J. of Artif. Intell. Res., 48:23–65, 2013.
http://talks.cam.ac.uk/talk/index/161908
# Pi-CAPM: The Classical CAPM with Probability Weighting and Skewed Assets

We study asset prices in a generalized mean-variance framework that allows for probability weighting (the idea that investors overweight rare, high-impact events). The resulting model – the Pi-CAPM – allows for a unique and homogeneous pricing equilibrium with skewed and correlated assets and a tractable analysis thereof. We find that even symmetric probability weighting has asymmetric pricing implications. For example, the price impact of volatility is skewness-dependent: negative for left-skewed assets but potentially positive for right-skewed assets. We further find that probability weighting translates into an exaggerated dependence between the assets. Finally, we make an empirical contribution and show that the option-implied premiums on variance and skewness depend on the underlying asset's skewness, in the very way that is predicted by the Pi-CAPM.

This talk is part of the Cambridge Finance Workshop Series.
https://textbook.prob140.org/notebooks-md/23_03_Linear_Combinations.html
## Linear Combinations

Let $\mathbf{X}$ be multivariate normal with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. Definition 3 says that all linear combinations of elements of $\mathbf{X}$ are normal too. This makes many calculations straightforward. Here is an example in two dimensions.

### Sum and Difference

Let $\mathbf{X} = [X_1 ~ X_2]^T$ have the bivariate normal distribution with mean vector $\boldsymbol{\mu} = [\mu_1 ~ \mu_2]^T$ and covariance matrix $\boldsymbol{\Sigma}$.

Then the sum $S = X_1 + X_2$ has the normal distribution with mean $\mu_1 + \mu_2$ and variance

$$Var(S) ~ = ~ Var(X_1) + Var(X_2) + 2Cov(X_1, X_2)$$

which you can calculate based on $\boldsymbol{\Sigma}$.

The difference $D = X_1 - X_2$ has the normal distribution with mean $\mu_1 - \mu_2$ and variance

$$Var(D) ~ = ~ Var(X_1) + Var(X_2) - 2Cov(X_1, X_2)$$

No matter what the linear combination of elements of $\mathbf{X}$, its distribution is normal. To identify the parameters of the distribution, work out the mean and variance using properties of means and variances, and then find the necessary components from the mean vector and covariance matrix of $\mathbf{X}$. Once you have the mean and variance, you are all set to find probabilities by using the normal curve as usual.

### Joint Distribution of Linear Combinations

Definition 2 implies that the joint distribution of a finite number of linear combinations of $\mathbf{X}$ is multivariate normal. In the example above, not only does each of $S$ and $D$ have a normal distribution, the joint distribution of $S$ and $D$ is bivariate normal. We found the mean vector and all but one element of the covariance matrix in the calculations above.
The remaining element is

$$Cov(S, D) ~ = ~ Cov(X_1 + X_2, X_1 - X_2) ~ = ~ Var(X_1) - Var(X_2)$$

by bilinearity and symmetry of covariance.

### Marginals

Each $X_i$ is a linear combination of elements of $\mathbf{X}$: the combination that has coefficient 1 at index $i$ and 0 everywhere else. So each $X_i$ has the normal distribution. The parameters of this normal distribution can be read off the mean vector and covariance matrix: $E(X_i) = \boldsymbol{\mu}(i)$ and $Var(X_i) = \boldsymbol{\Sigma}(i, i)$.

But be warned: the converse is not true. If all the marginals of a random vector are normal, the joint distribution need not be multivariate normal.

### A Cautionary Tale

The cells below show the empirical joint and marginal distributions of an interesting data set. Read the comment at the top of each cell to see what is being computed and displayed.

```python
# Generate 100,000 iid standard normal points

x = stats.norm.rvs(size=100000)
y = stats.norm.rvs(size=100000)
t = Table().with_column(
    'X', x,
    'Y', y
)

# Select just those where both elements have the same sign

new = t.where(t.column(0) * t.column(1) > 0)

# The restricted pairs are not jointly normal;
# that shape isn't an ellipse

new.scatter(0, 1)
```

```python
# Empirical distribution of horizontal coordinate

new.hist(0, bins=25, ec='w')
plt.xticks(np.arange(-5, 6));
```

```python
# Empirical distribution of vertical coordinate

new.hist(1, bins=25, ec='w')
plt.xticks(np.arange(-5, 6));
```

Both marginals are normal but the joint distribution is far from bivariate normal.

To get the formula for the joint density of these variables, start with the circularly symmetric joint density of two i.i.d. standard normals and restrict it to Quadrants 1 and 3.
This leaves out half of the volume under the original surface, so remember to multiply by 2 to make the total volume under the new surface equal to 1.

```python
def new_density(x, y):
    if x*y > 0:
        return 1/np.pi * np.exp(-0.5*(x**2 + y**2))
    else:
        return 0

Plot_3d((-4, 4), (-4, 4), new_density, rstride=4, cstride=5)
```
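The sum/difference formulas in this section are easy to check empirically. The cell below is not part of the original notebook; it uses plain NumPy instead of the `Table`/`stats` helpers above, with an arbitrarily chosen mean vector and covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# Draw bivariate normal samples and form the sum and difference
X = rng.multivariate_normal(mu, Sigma, size=200_000)
S = X[:, 0] + X[:, 1]
D = X[:, 0] - X[:, 1]

# Theoretical values from the formulas above
var_S = Sigma[0, 0] + Sigma[1, 1] + 2 * Sigma[0, 1]   # Var(S)   = 4.6
var_D = Sigma[0, 0] + Sigma[1, 1] - 2 * Sigma[0, 1]   # Var(D)   = 1.4
cov_SD = Sigma[0, 0] - Sigma[1, 1]                    # Cov(S,D) = 1.0

print(S.var(), var_S)              # sample vs. theory, ≈ 4.6
print(D.var(), var_D)              # ≈ 1.4
print(np.cov(S, D)[0, 1], cov_SD)  # ≈ 1.0
```

With 200,000 samples, each sample statistic should land within a few hundredths of its theoretical value.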
https://developer.unigine.com/en/docs/future/api/library/objects/class.objectvolumesphere
# Unigine::ObjectVolumeSphere Class

This class is used to create a volume sphere. Depending on the assigned material, it can be used to render fog or a visible volume of light around a light source. A volume sphere can also be of an ellipsoid shape.

A set of UnigineScript API samples is located in the <UnigineSDK>/data/samples/objects/ folder:

• volumes_01
• volumes_02

## static ObjectVolumeSpherePtr create ( const Math::vec3 & radius ) #

Constructor. Creates a new volume sphere object with the given radius values.

Notice: If a volume light material is assigned to an object, it is rendered based only on the X-axis radius value. If its radius values along the Y or Z axes are smaller, then the object is cut along them.

### Arguments

• const Math::vec3 & radius - Radius values of the new volume sphere object in units. If a negative value is provided, 0 will be used instead.

Updates the volume sphere radius values. If a volume light material is assigned to an object, it is rendered based only on the radius value along the X axis. If its radius values along the Y or Z axes are smaller than along the X axis, the object is cut along them.

### Arguments

• const Math::vec3 & radius - New radius values of the volume sphere in units. If a negative value is provided, 0 will be used instead.

Returns the volume sphere radius values.

### Return value

The radius values of the volume sphere in units.

## static int type ( ) #

Returns the type of the node.

### Return value

Object type identifier.

Last update: 2019-12-25
https://fr.mathworks.com/matlabcentral/cody/problems/20-summing-digits/solutions/1737796
Cody

# Problem 20. Summing digits

Solution 1737796

Submitted on 27 Feb 2019 by ESSRA EMHIDI.
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.

### Test Suite

Test 1 — Pass
a = 1; b = 2; out = sumDigits(a); assert(isequal(out, b))
n = 2

Test 2 — Pass
a = 10; b = 7; out = sumDigits(a); assert(isequal(out, b))
n = 1024

Test 3 — Pass
a = 16; b = 25; out = sumDigits(a); assert(isequal(out, b))
n = 65536
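From the test suite one can read off what `sumDigits` must compute: each test also prints `n` (2, 1024, 65536), i.e. 2 raised to the input, and the expected answer is the digit sum of that power of two. The locked MATLAB solution is hidden above, but the same idea is a one-liner in Python (hypothetical name `sum_digits`):

```python
def sum_digits(a):
    # digit sum of 2**a
    return sum(int(d) for d in str(2 ** a))

print(sum_digits(1), sum_digits(10), sum_digits(16))  # 2 7 25
```

This reproduces all three test cases: 2 → 2, 1024 → 1+0+2+4 = 7, 65536 → 6+5+5+3+6 = 25.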
https://tech.forums.softwareag.com/t/java-service-developer-issue/152595
# Java Service Developer Issue

Hello, I have the following problem and would like it if anyone can help me.

The array "a" is filled with random numbers that I generate in another method called "dice". The problem is that I can't get the output to work. When I run it, the output is a code or something, for example: [I@5e8a49

These are the input and output of the java service:

input: cant (integer)
out: glass (integer list)

This is the code of the java service in the Developer IDE:

```java
// pipeline
IDataCursor pipelineCursor = pipeline.getCursor();

int cant = IDataUtil.getInt( pipelineCursor, "cant", -1 );

int a[] = new int[cant];

for(int i = 0; i < a.length; i++){
    a[i] = dice();
}

IDataUtil.put( pipelineCursor, "glass", a );
pipelineCursor.destroy();
```

any idea?

I have to agree I did not completely understand what you were asking for. As far as I understood from your above post, it looks like you are having an issue printing out the integer list from the dice method, is that right? Hope this helps!

```java
IDataCursor pipelineCursor = pipeline.getCursor();
Integer cant = (Integer)IDataUtil.get( pipelineCursor, "cant" );
pipelineCursor.destroy();

List<Integer> tempOutput = new ArrayList<Integer>();
tempOutput = dice(cant);

IDataUtil.put(pipelineCursor, "glass", tempOutput.toArray(new Integer[tempOutput.size()]));
```

Shared:

```java
private static List<Integer> dice(Integer count){

    Random randomGenerator = new Random();
    List<Integer> temp = new ArrayList<Integer>();

    for(int i = 0; i <= count-1; i++){
        int randomInt = randomGenerator.nextInt(100);
        temp.add(randomInt);
    }
    return temp;
}
```

Cheers,
Akshith

Thank you Akshith. I am going to test it, and then I will tell you if it works.

The original code is working.
It is correctly generating an array of integers (assuming the dice() method is accessible somewhere).

What you're seeing in the results pane is the JVM address of the array itself. ~~Developer doesn't convert arrays of anything to strings for you. If that's what you want, then you need to convert the array of integers to an array of strings.~~

Based upon the post from akki_84 below, the statement I made above is wrong. Thanks akki_84 for showing the code and the screen shots.

Hello, today I tested it and modified your code a little, but I didn't get it to work. I need the service to show me all the values in the list (which I fill with ten random numbers), but it only shows one value in the result panel. Could you help me, please?

```java
IDataCursor pipelineCursor = pipeline.getCursor();
Integer cant = (Integer)IDataUtil.get( pipelineCursor, "cant" );

List<Integer> tempOutput = new ArrayList<Integer>();
for(int i = 0; i < 10; i++){
    tempOutput.add(dice());
}

IDataUtil.put(pipelineCursor, "glass", tempOutput.toArray(new Integer[tempOutput.size()]));

pipelineCursor.destroy();
```

Hello, yes I need the full result from the list, so I can see it in the result pane, but I only see one value. If you know the solution please let me know. Thank you.

I think there might be an issue with your dice method, can you please post it here? The changes that you made should have added all the outputs from the dice method to the out list, and an Integer list (glass) should have been generated as output.

Cheers,
Akshith

```java
public static int dice(){

    int dice = (int) (Math.random()*6+1);

    return dice;
}
```

Hmm, that looks good. Can you please create a new java service with the following signature?
A service with the below details is working fine for me.

Inputs:
None (no inputs)

Outputs:
glass (Integer List)

IS Code:

```java
IDataCursor pipelineCursor = pipeline.getCursor();
List<Integer> tempOutput = new ArrayList<Integer>();
for(int i = 0; i < 10; i++){
    tempOutput.add(dice());
}

IDataUtil.put(pipelineCursor, "glass", tempOutput.toArray(new Integer[tempOutput.size()]));
pipelineCursor.destroy();
```

Shared Code:

```java
public static int dice(){

    int dice = (int) (Math.random()*6+1);
    return dice;
}
```

I copied the code that you posted and finally it works. I didn't see the result below the glass integer list, my fault. Thank you very much!
{"ft_lang_label":"__label__en","ft_lang_prob":0.71711636,"math_prob":0.76723146,"size":709,"snap":"2020-34-2020-40","text_gpt3_token_len":186,"char_repetition_ratio":0.12907802,"word_repetition_ratio":0.0,"special_character_ratio":0.27926657,"punctuation_ratio":0.18,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9710155,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T06:14:47Z\",\"WARC-Record-ID\":\"<urn:uuid:0f20d416-600a-45cc-b199-74c14dd45da0>\",\"Content-Length\":\"45573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a6f7451-3410-489c-995d-b94a81465ee9>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e827e7f-9048-4982-94c4-43d6075584c7>\",\"WARC-IP-Address\":\"64.71.168.201\",\"WARC-Target-URI\":\"https://tech.forums.softwareag.com/t/java-service-developer-issue/152595\",\"WARC-Payload-Digest\":\"sha1:LYDV6KVITV4ONZEA26KAI4NDLKW35SVV\",\"WARC-Block-Digest\":\"sha1:XKRV6TBPXUIF52TIKFOYGDY4P4OXO7UR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400222515.48_warc_CC-MAIN-20200925053037-20200925083037-00521.warc.gz\"}"}
https://yowusa.info/elsgolts-calculus-of-variations-70/
By using variational calculus, the optimum length l can be obtained by imposing a transversality condition at the bottom end (Elsgolts). Therefore, if F is the … Download for free the file Elsgolts-Differential-Equations-and-the-Calculus-of- uploaded by Aran for the Physics course at USP. About: Presentation. Download Differential Equations and the Calculus of Variations PDF Book by L. Elsgolts – The connection between the sought-for quantities will be found if …

Author: Mikajora Kajidal
Country: Netherlands
Language: English (Spanish)
Genre: Life
Published (Last): 2 September 2010
Pages: 69
PDF File Size: 2.74 Mb
ePub File Size: 1.46 Mb
ISBN: 128-1-61150-853-1
Downloads: 10340
Price: Free* [*Free Registration Required]
Uploader: Akisho

Linear Differential Equations of the nth Order. The Moving-Boundary Problem for a Functional. We now turn to methods of integrating differential equations and the most elementary ways of investigating their solutions.

Fundamentals.

This text is meant for students of higher schools and deals with the most important sections of mathematics: differential equations and the calculus of variations. The book contains a large number of examples and problems with solutions involving applications of mathematics to physics and mechanics.

First-order differential equations. Every vector equation in three-dimensional space may be replaced by three scalar equations by projecting onto the coordinate axes. In other words, it is necessary to solve equation 1.

## Differential Equations And The Calculus Of Variations

The book here is the third reprint. Analytical and Numerical Methods. Extremals with Corners.

In this method, the desired solution R(t) is approximately replaced by a piecewise linear vector function, the graph of which is a certain polygonal line called Euler's polygonal curve. And so, for t … Variation and Its Properties. The Ritz Method. Variational Problems in Parametric Form. Stability Under Constantly Operating Perturbations. Lyapunov's Second Method.

Variational Problems Involving a Conditional Extremum. In applied problems, the initial values r0 and ṙ0 are almost always the result of measurement and, consequently, are unavoidably determined with a certain error.

The relation between the sought-for quantities will be found if methods are indicated for finding the unknown functions which are defined by differential equations. It is obvious that the differential equation 1. … The procedure of finding the solutions of a differential equation is called integration of the differential equation.

Note that the second-order vector equation 1. … We thus come to a problem, important in applications, of finding the conditions under which a small change in the initial values r0 and ṙ0 gives rise only to a small change in the solution r(t) which they determine.

In the study of physical phenomena one is frequently unable to find directly the laws relating the quantities that characterize a phenomenon, whereas a relationship between the quantities and their derivatives or differentials can readily be established.

Equation I has the solution …, where c is an arbitrary constant. If arbitrarily small changes in the initial values are capable of giving rise to appreciable changes in the solution, then the solution determined by the initial values r0 and ṙ0 usually has no applied value at all, since it does not describe the motion of the body under consideration even in an approximate fashion.

Today, however, high-speed computers are able to accomplish such work at the rate of several hundreds of thousands of operations per second.

The radius vector R(t) in this space has the coordinates rx, ry, rz, vx, vy, vz. A solution of a differential equation is a function which, when substituted into the differential equation, reduces it to an identity. If we apply the above approximate method to 1. … In applications, the problem for equation 1. … The following are some examples of differential equations. We take the interval of time t …
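The "Euler polygonal curve" idea mentioned above (replacing the unknown solution by a piecewise linear function) can be sketched in a few lines of Python (illustrative only; not from the book):

```python
def euler_polygon(f, t0, y0, h, steps):
    """Euler's polygonal-curve approximation for y' = f(t, y):
    the unknown solution is replaced by a piecewise linear function
    whose slope on each step is given by the right-hand side."""
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(steps):
        y = y + h * f(t, y)  # follow the tangent for one small step
        t = t + h
        points.append((t, y))
    return points

# Example: y' = y, y(0) = 1; ten steps of h = 0.1 give 1.1**10 ≈ 2.594,
# a crude approximation of e ≈ 2.718 at t = 1.
approx = euler_polygon(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Shrinking the step h makes the polygonal curve approach the true solution, which is the sense in which the text calls it an approximate method.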
https://unix.stackexchange.com/questions/34305/how-to-divide-a-list-of-values-by-a-number-in-command-line/34347
# How to divide a list of values by a number in command line?

I am trying to translate a simple program to the command line using unix utilities. For example, if I have a frequency list (after piping through uniq and sort)

``````
5 x
4 y
1 z
``````

I want to print out, instead of the frequencies, the fraction of the times they occur:

``````
0.5 x
0.4 y
0.1 z
``````

(I have a python program that does this, but I wanted to know if this could be done through the command line itself.)

So far, I have tried to compute the sum

``````
<...>| awk -F" " '{print $1}' | tr '\n' +; echo 0 | bc
``````

but this is just giving me the output `5+1+4+0` without computing it.

EDIT: I got the sum. I modified the above command to

``````
<...>| awk -F" " '{print $1}' | echo $(tr '\n' +; echo 0) | bc > sum
``````

and the correct result is stored in sum. Now I just want to divide the original list by sum and display it.

## 2 Answers

``````
awk '{ f[$2] = $1; SUM += $1 } END { for (i in f) { print f[i]/SUM, i } }' </tmp/data
``````

You can do the summing in awk, and the dividing as well. This will be simpler than invoking `bc`, since you have other data on each line.

This prints the sum of the first field of the input lines:

``````
awk '{sum += $1} END {print sum}'
``````

So you can save the input data, compute the sum, and continue processing the data.

``````
data=$(…)
sum=$(printf '%s\n' "$data" | awk '{sum += $1} END {print sum}')
printf '%s\n' "$data" | awk -v sum="$sum" '{ $1 /= sum; print }'
``````
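The asker mentions already having a Python program for this; a rough sketch of what such a program might look like (hypothetical code, not the asker's actual script):

```python
def fractions(lines):
    """Turn 'count word' lines into 'fraction word' lines."""
    pairs = [line.split() for line in lines if line.strip()]
    total = sum(int(count) for count, _ in pairs)
    return ["%g %s" % (int(count) / total, word) for count, word in pairs]

# Reading sys.stdin.readlines() instead would make this a pipeline filter.
print("\n".join(fractions(["5 x", "4 y", "1 z"])))
```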
http://spmaddmaths.blog.onlinetuition.com.my/2013/05/inverse-function-2.html
# Inverse Function

### Inverse Functions

Consider the function \(f: x \mapsto x - 2\) with domain A = {1, 3, 4, 7}. Then the range of the function is B = {-1, 1, 2, 5}. The arrow diagram representing this function is shown below.

If the arrows of (a) are reversed, the arrow diagram in (b) is obtained. A new function having domain B and range A is formed from the function f. This new function is called the inverse function of f and is denoted by \(f^{-1}\).
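The reversed-arrow idea can be mimicked with a small dictionary sketch (illustrative code; not part of the original lesson):

```python
# f: x -> x - 2 on the domain A = {1, 3, 4, 7}
f = {x: x - 2 for x in [1, 3, 4, 7]}      # arrows from A to B
f_inverse = {y: x for x, y in f.items()}  # reverse every arrow: B to A

# The inverse undoes f: f_inverse[f[x]] == x for every x in A.
```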
https://algorithms.tutorialhorizon.com/add-two-numbers-represented-by-a-linked-list-numbers-are-stored-in-forward-order/?replytocom=5087
# Add two numbers represented by a linked list, Numbers are Stored in FORWARD order

This post is the extension of – Two numbers represented by a linked list, Numbers Stored in REVERSE order

Objective: Two numbers are represented by linked lists, where each node contains a single digit. The digits are stored in forward order, meaning the head points to the first (most significant) digit of the number.

Input: Two numbers represented by linked lists.

Example:

```
First Number : 1007
Second Number : 93
```

Approach:

- Get the length of both lists.
- If the lengths are not equal, make them equal by adding nodes with value 0 in front of the shorter linked list.
- Create a global variable carry = 0.
- Create a newHead = null. newHead will be the starting node of our result linked list, and curr will be the reference to the current node we are working on in the result linked list.
- Now, using recursion, travel in both lists till the end, so the nodes are stored on the call stack.
- While coming back, each node will pop off the stack in reverse order.
- Take the node data from both lists and add it along with the carry.
- If the sum is >= 10, set carry to 1 and create a new node with sum - 10. Else, just create a new node with the sum.

Complete Code:

Output:

```
First Number : ->1->0->0->7
Second Number : ->9->3
```

### 4 thoughts on "Add two numbers represented by a linked list, Numbers are Stored in FORWARD order"

1. Above solution gives wrong output for inputs like the one below:

   First Number : ->1->1->1->7
   Second Number : ->9->9->9->9

   There is no extra last carry bit added according to the existing code. Please modify the logic accordingly.

   - Thanks a lot indra, I have corrected the code. Let me know if you see errors in this or other posts.
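The "Complete Code" block did not survive in this copy of the page. Below is an illustrative Python sketch of the approach described above (the original post is in Java), including the final-carry fix discussed in the comments:

```python
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def to_list(head):
    out = []
    while head:
        out.append(head.data)
        head = head.next
    return out

def from_digits(digits):
    head = None
    for d in reversed(digits):
        head = Node(d, head)
    return head

def length(head):
    n = 0
    while head:
        n, head = n + 1, head.next
    return n

def add_lists(a, b):
    # Pad the shorter list with leading zeros so both have equal length.
    diff = length(a) - length(b)
    for _ in range(abs(diff)):
        if diff > 0:
            b = Node(0, b)
        else:
            a = Node(0, a)

    def helper(x, y):
        # Recurse to the tail first; the call stack plays the role of the stack.
        if x is None:
            return None, 0
        rest, carry = helper(x.next, y.next)
        s = x.data + y.data + carry
        return Node(s % 10, rest), s // 10

    head, carry = helper(a, b)
    if carry:  # the extra final carry (the fix from the comments)
        head = Node(carry, head)
    return head

# 1007 + 93 = 1100
result = add_lists(from_digits([1, 0, 0, 7]), from_digits([9, 3]))
```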
https://math.stackexchange.com/questions/41311/how-to-prove-that-the-cantor-ternary-function-is-not-weakly-differentiable
# How to prove that the Cantor ternary function is not weakly differentiable?

I am using the standard Cantor ternary function $f$ here, as cited in this Wikipedia page.

It is an example of a continuous, monotone increasing, but not strictly monotone increasing function with zero derivative almost everywhere. But how should I prove that its weak/distributional derivative does not exist? I guess I start off with assuming that there exists $g \in L^1_\text{loc}(R)$ such that $\int_R {f\phi'} = - \int_R{g\phi}$ for all $\phi\in C_c^\infty (R)$. And then I probably have to choose appropriate mollifiers $\phi_\epsilon$ and let $\epsilon \to 0$. But I am kind of stuck here; could you give me a detailed proof?

Also, is the derivative of $f$ a measure in the distributional sense?

Thank you!

If possible, assume that the Cantor ternary function $f$ is weakly differentiable on $[0,1]$. Then the continuous function $f$ is absolutely continuous on $[0,1]$ (any theory-of-PDE book has the proof: Sobolev functions in $W^{1,p}(\text{interval }I)$ are AC on that interval $I$ for $p<\infty$), and hence maps sets of measure zero to sets of measure zero. But the Cantor ternary function $f$ maps the Cantor set to a set of measure 1 (since on the complement of the Cantor set, $f$ is constant, and $f$ takes every value in between $0$ and $1$), which is a contradiction.

- The last part of the argument doesn't seem correct. The function $f(x) = 1/2$ for $x\in[0,1/2)$ and $f(x) = 2x-1/2$ seems to be a counterexample to the last claim. (May 30, 2011)
- I modified the answer a bit: I used the fact that $f$ takes every value on $[0,1]$, but I guess my answer might still be incomplete, since I need to justify why the complement of the Cantor set gets mapped onto a set of measure zero, which is, I guess, intuitively clear. (May 31, 2011)
- The complement of the Cantor set is a countable union of intervals, and $f$ is constant on each of these intervals. So $f(C^c)$ is a countable set. (Jun 29, 2011)

Show that if $I\subset[0,1]$ is one of the intervals on which $f$ is constant, then $g$ (the presumed weak derivative of $f$) must be equal to 0 at almost every point of $I$. Do this by considering the various $\phi\in C^\infty$ vanishing outside of $I$. Since the union of these intervals has measure $1$, the derivative $g$ must vanish almost everywhere on $[0,1]$, which cannot be, because $0=f(0)<f(1)=1$.
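The contradiction in the last answer can be written out in one display (a sketch, assuming for contradiction that a weak derivative $g$ exists):

```latex
% Weak differentiability would make f absolutely continuous, so the
% fundamental theorem of calculus applies, while g = 0 a.e. on [0,1]
% forces the integral to vanish:
\[
  1 \;=\; f(1) - f(0) \;=\; \int_0^1 g(t)\,dt \;=\; 0 .
\]
```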
https://avesis.metu.edu.tr/yayin/6aee7e27-2865-4814-b960-f57e0a361279/optimal-path-tracking-control-of-a-quadrotor-uav
## Optimal Path Tracking Control of a Quadrotor UAV

International Conference on Unmanned Aircraft Systems (ICUAS), Florida, United States of America, 27-30 May 2014, pp. 115-125

• Publication Type: Conference Paper / Full Text
• City: Florida
• Country: United States of America
• Page Numbers: pp. 115-125
• Keywords: Quadrotor UAV, Discrete Time, Path Tracking, Riccati Equation, Nonlinear Dynamic Model, LQT, LQR, Optimal Control, Energy Consumption, Disturbance Rejection
• Middle East Technical University Affiliated: Yes

#### Abstract

This paper presents the linear quadratic tracking (LQT) control of a quadrotor UAV by solving the discrete-time matrix difference Riccati equation. First, the nonlinear dynamic model of the quadrotor is obtained by using Newton's equations of motion. Then, the nonlinear dynamic model is linearized around the hover condition. The linearized dynamic model is used to solve the optimal control problem. A trade-off between good tracking performance and energy consumption is made while defining the performance index (cost function). Time-variant optimal control gains are found off-line by solving the discrete-time matrix difference Riccati equation backwards in time. Finally, to validate the optimal control system, simulations are performed by using the nonlinear dynamic model as the plant and the time-variant optimal control gains as state feedback control. The optimal control algorithm used in this paper uses time-variant control gains instead of the fixed (time-invariant) control gains used in classical LQR control. Simulations show that good tracking performance is achieved while decreasing energy consumption compared to fixed-gain LQR control. Some other advantageous properties of the optimal control system used in this paper compared to fixed-gain LQR control are also analyzed. In addition, disturbance rejection properties of the optimal control system are also studied. All algorithms and simulations are done by using MATLAB software.
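The backward-in-time gain computation the abstract describes can be sketched as a generic discrete-time Riccati recursion (illustrative Python, not the paper's MATLAB code; the matrices are placeholders, not the quadrotor model):

```python
import numpy as np

def backward_riccati_gains(A, B, Q, R, Qf, N):
    """Time-variant LQR gains K_0 .. K_{N-1} obtained by solving the
    discrete-time matrix difference Riccati equation backwards in time,
    starting from the terminal weight Qf."""
    P = Qf
    gains = [None] * N
    for k in reversed(range(N)):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati difference step: P_k = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains[k] = K
    return gains
```

Applying each K_k as state feedback at step k gives the time-variant gain schedule the paper contrasts with a fixed LQR gain.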
https://nrich.maths.org/1157/solution
#### You may also like

### A Mean Tetrahedron

Can you number the vertices, edges and faces of a tetrahedron so that the number on each edge is the mean of the numbers on the adjacent vertices and the mean of the numbers on the adjacent faces?

### Rhombicubocts

Each of these solids is made up with 3 squares and a triangle around each vertex. Each has a total of 18 square faces and 8 faces that are equilateral triangles. How many faces, edges and vertices does each solid have?

### Icosian Game

This problem is about investigating whether it is possible to start at one vertex of a platonic solid and visit every other vertex once only, returning to the vertex you started at.

# Triangles to Tetrahedra

##### Age 11 to 14, Challenge Level

Students attending a masterclass at the Thomas Deacon Academy in Peterborough tried to work on this problem systematically. Here are examples of how they went about it.

I think their ideas are excellent and give an insight into how you might make a convincing argument that you have all possibilities. Well done to you all for trying to describe your approaches to this problem.

Susannah, Adam, Maria and Erin adopted an approach like the one illustrated in this table. Can you see how they have been systematic, and can you continue their argument? Thanks for this neat idea.

Stephen, Hugh, Daniel and Deepak adopted a similar approach. Firstly, they identified and coded each of the four triangles, identifying the long and the short sides of each. Then they considered sets of four triangles in a systematic way. Can you see how they worked systematically from the table below?

- All the same: E,E,E,E; SE,SE,SE,SE; I,I,I,I; R,R,R,R - no
- 3 and 1: E,E,E,SE - no; E,E,E,I - no; E,E,E,R - no; SE,SE,SE,E - no; SE,SE,SE,I - no; SE,SE,SE,R - no; I,I,I,E - no; I,I,I,SE; I,I,I,R - no; R,R,R,E; R,R,R,SE - no; R,R,R,I - no
- 2 and 2: ...

Emily, Clara, Lizzie and Kieran chose an approach using some further ideas to help them be more efficient:

1. There are only two lengths of sides (long, as in the length of the sides of the large equilateral triangle and the hypotenuse of the right-angled triangle; short, as in the length of the sides of the small equilateral triangle and the short side, or base, of the isosceles triangle).
2. There must be an even number of long and an even number of short sides in the sets of four triangles that form the tetrahedron if the triangles fit together.
3. The even number of sides has to be spread across the triangles; for example, four shorts are no good if three of them are all on a small equilateral triangle.

We coded the triangles as follows:
Big equilaterals - EB
Small equilaterals - ES
Isosceles - I
Right-angled - R

We tabulated all possibilities and quickly crossed out those without an even number of long and an even number of short sides. We then considered whether we could actually make those left, like this (I have not included the whole of their table, but can you see how they were systematically listing all the possibilities?):

Even number of longs and shorts / does it work?

- I I I I: YES/YES
- EB EB EB EB: YES/YES
- ES ES ES ES: YES/YES
- R R R R: YES/NO
- EB EB EB I: NO
- EB EB EB ES: NO
- EB EB EB R: NO
- I I I ES: YES/YES
- I I I R: NO
- I I I EB: NO
- R R R I: NO
- R R R ES: NO
- R R R EB: YES/YES
- ES ES ES I: YES/NO
- ES ES ES EB: NO
- ES ES ES R: NO
- EB EB I I: YES/YES
- EB EB I ES: YES/NO

and so on...

Chris, Rana, Nabil and Indrajeet took a quite different approach, using tree diagrams. Here is one of the diagrams they produced. They needed to do a tree diagram starting with each of the triangles and look carefully for duplications.

Other solutions we have received included one from Mark Johnson, who found tetrahedra that use these triangles:

1 small equilateral, 3 isosceles
1 big equilateral, 3 right-angled
2 big equilateral, 2 isosceles
4 small equilateral
4 big equilateral
2 small equilateral, 2 right-angled
2 isosceles, 1 right-angled, 1 big equilateral
2 isosceles, 2 right-angled (two different arrangements produce tetrahedra that are reflections of each other)

Yanqing from Lipson Community College can add one more to Mark's list:

1 small equilateral, 1 isosceles, 2 right-angled

Altogether we have found a total of 10 different tetrahedra.
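As a sanity check on the students' systematic listings, the full search space (multisets of four triangles drawn from the four types) can be counted mechanically (illustrative only):

```python
from itertools import combinations_with_replacement

# EB = big equilateral, ES = small equilateral, I = isosceles, R = right-angled
types = ["EB", "ES", "I", "R"]
multisets = list(combinations_with_replacement(types, 4))

# 35 candidate sets of four triangles to test by hand,
# of which (per the solution above) 10 give tetrahedra.
print(len(multisets))  # -> 35
```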
https://zbmath.org/1433.94093
## Compact implementation of modular multiplication for special modulus on MSP430X. (English) Zbl 1433.94093

Lee, Kwangsu (ed.), Information security and cryptology – ICISC 2018. 21st international conference, Seoul, South Korea, November 28–30, 2018. Revised selected papers. Cham: Springer. Lect. Notes Comput. Sci. 11396, 55-66 (2019).

Summary: For pre/post-quantum public key cryptography (PKC), such as elliptic curve cryptography (ECC) and supersingular isogeny Diffie-Hellman key exchange (SIDH), modular multiplication is the most expensive operation among the basic arithmetic of these cryptographic schemes. For this reason, the execution timing of such cryptographic schemes at the implementation level, which may largely determine the service availability for low-end microprocessors (e.g., 8-bit AVR and 16-bit MSP430X), mainly relies on the efficiency of modular multiplication on the target processors.

In this paper, we present new optimal modular multiplication techniques based on interleaved Montgomery multiplication on 16-bit MSP430X microprocessors, where the multiplication part is performed in a hardware multiplier and the reduction part is performed in the basic arithmetic logic unit (ALU) with an optimal modular multiplication routine, respectively. This approach is effective for the special moduli of NIST curves, SM2 curves, and SIDH. In order to demonstrate the superiority of the proposed Montgomery multiplication, we applied the proposed method to the NIST P-256 curve; the implementation improves the previous modular multiplication and squaring operations by 39% and 37.1% on 16-bit MSP430X microprocessors, respectively. Moreover, secure countermeasures against timing attacks and simple power analysis are also applied to the scalar multiplication of NIST P-256, which achieves 9,285,578 clock cycles and only requires 0.575 s (@16 MHz). The proposed Montgomery multiplication has broad applications to other cryptographic schemes and microprocessors.

For the entire collection see [Zbl 1407.68039].

### MSC:

94A60 Cryptography

Keywords: MSP430X

Full Text:
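As a rough illustration of the Montgomery reduction underlying the paper's technique, here is a textbook REDC in Python (the paper's optimized interleaved 16-bit routines are quite different; the modulus and parameters below are arbitrary examples, not the NIST P-256 or SIDH moduli):

```python
def montgomery_redc(T, N, R, N_neg_inv):
    """Montgomery reduction: returns T * R^{-1} mod N for 0 <= T < R*N,
    where R is a power of two coprime to the odd modulus N and
    N_neg_inv = -N^{-1} mod R. The division by R is a cheap shift,
    which is the whole point of the method."""
    m = (T * N_neg_inv) % R   # mod R is just masking the low bits
    t = (T + m * N) // R      # T + m*N is divisible by R by construction
    return t - N if t >= N else t

# Multiplying in the Montgomery domain: redc(aR * bR) = (a*b)*R mod N.
N = 97                       # toy odd modulus
R = 1 << 8                   # 256, a power of two coprime to N
N_neg_inv = (-pow(N, -1, R)) % R   # pow(N, -1, R) needs Python 3.8+
a_bar, b_bar = (5 * R) % N, (7 * R) % N
ab_bar = montgomery_redc(a_bar * b_bar, N, R, N_neg_inv)
assert montgomery_redc(ab_bar, N, R, N_neg_inv) == (5 * 7) % N
```

Interleaved variants fold the reduction into the schoolbook multiplication word by word, which is what makes the split between a hardware multiplier and the ALU attractive on MSP430X.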
https://pawpeds.com/pawacademy/genetics/genetics/genefrequency.html
[ "Genetics", null, "## An important word: Gene frequency\n\nUsing the gene for dilute color as an example, let us assume that we have a breed population of 100 cats. Since every cat has a double set of chromosomes, this population will have 200 loci for the dilution gene - meaning 200 places where the D or d gene can be situated. Now assume that 40 of these loci are filled with a d-gene, while the remaining 160 are filled with D-genes. Then the gene frequency for d in this population is 40/200 = 0.20 = 20%. In the same way we find that the gene frequency for D is 160/200 = 0.80 = 80%.\n\nNext..." ]
[ null, "https://pawpeds.com/images/courses.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91054946,"math_prob":0.90872025,"size":589,"snap":"2020-24-2020-29","text_gpt3_token_len":157,"char_repetition_ratio":0.12649573,"word_repetition_ratio":0.0,"special_character_ratio":0.30390492,"punctuation_ratio":0.109375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98342884,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T05:23:36Z\",\"WARC-Record-ID\":\"<urn:uuid:b455b056-1a57-4c96-8010-a7937b3b4623>\",\"Content-Length\":\"8521\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99711d11-dfe7-46bf-aa51-66a16b5fb05e>\",\"WARC-Concurrent-To\":\"<urn:uuid:30ea9758-a2a8-4242-a8fe-f0b02201815b>\",\"WARC-IP-Address\":\"5.150.254.176\",\"WARC-Target-URI\":\"https://pawpeds.com/pawacademy/genetics/genetics/genefrequency.html\",\"WARC-Payload-Digest\":\"sha1:2MMIDLO6K3WJ2FF5HU2DWLZFGOMXVMJL\",\"WARC-Block-Digest\":\"sha1:OAR7YD3RXW6WO5XX2X4T2DI4R4MIM4AF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655878519.27_warc_CC-MAIN-20200702045758-20200702075758-00201.warc.gz\"}"}
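The gene-frequency arithmetic in the passage above is straightforward to mirror in code; a minimal sketch using the article's own numbers (100 cats, so 200 loci, 40 of them carrying d):

```python
def gene_frequencies(d_count, total_loci):
    """Return (frequency of d, frequency of D) for a two-allele locus.

    In a population of N cats, each cat carries two alleles at the
    dilution locus, so total_loci = 2 * N.
    """
    d_freq = d_count / total_loci
    D_freq = (total_loci - d_count) / total_loci
    return d_freq, D_freq

# The article's example: 100 cats -> 200 loci, 40 filled with d.
d_freq, D_freq = gene_frequencies(40, 200)
print(d_freq, D_freq)  # 0.2 0.8
```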
https://usatt.simplycompete.com/t/exp2?tri=10734&uai=13130
[ "", null, "for", null, "", null, "Shemar Britton\n\nUSATT#: 91729\n\nIntroduction\nThis page explains how Shemar Britton (USATT# 91729)'s rating went from 2275 to 2273 at the Westchester 2017 June Open held on 25 Jun 2017 - 25 Jun 2017. These ratings are calculated by the ratings processor which goes through 4 passes over the match results data for a tournament. The following values are produced at the end of each of the 4 passes of the ratings processor for Shemar Britton for this tournament.\n\nInitial Rating Pass 1 Pass 2 Pass 3 Final Rating (Pass 4)\n2275 2273 2275 2275 2273\n\nYou can click here to view a table of all the resultant values from each of the 4 passes (and the initial rating) of the ratings processor for all of the 134 players in this tournament. Sections below for further details on the initial rating and the 4 passes of the ratings processor.\n\nNote: We use mathematical notation to express the exact operations carried out in each pass of the ratings processor below. Whenever you see a variable/symbol such as for example ${X}_{i}^{3}$, we are following the convention that the superscript part of the variable (in this case \"3\") indicates an index (such as in a series), and it should not be misconstrued to be an exponent (which is how it is used by default).\n\nInitial Rating\nThe initial rating of a player for a tournament is the rating the player received at the end of the most recent tournament prior to the current tournament. 
If this is the first tournament the player has ever participated in (based on our records), then the player has no initial rating.\n\nThe initial rating for Westchester 2017 June Open held on 25 Jun 2017 - 25 Jun 2017 for Shemar Britton, and its source tournament, are as follows:\nInitial Rating From Tournament Start Day End Day\n2275 2015 JOOLA North American Teams Championships n/a 29 Nov 2015\n\nClick here to view the details of the initial ratings for all the players in this tournament.\n\nPass 1 Rating\nIn Pass 1, we only consider the players that come into this tournament with an initial rating, ignoring all the unrated players. If a rated player has a match against an unrated player, then that match result is ignored in the Pass 1 calculations as well. We apply the point exchange table shown below to all the matches participated in by the rated players:\n\nPoint Spread Expected Result Upset Result\n0 - 12 8 8\n13 - 37 7 10\n38 - 62 6 13\n63 - 87 5 16\n88 - 112 4 20\n113 - 137 3 25\n138 - 162 2 30\n163 - 187 2 35\n188 - 212 1 40\n213 - 237 1 45\n238 and up 0 50\n\nSuppose player A has an initial rating of 2000 and player B has an initial rating of 2064, and they played a match against each other. When computing the impact of this match on their ratings, the \"Point Spread\" (as it is referred to in the table above) between these two players is the absolute value of the difference between their initial ratings. When the player with the higher rating wins, presumably the better player won, which is the expected outcome of a match, and therefore the \"Expected Result\" column applies. If the player with the lower rating wins the match, then presumably this is not expected, and therefore it is deemed an \"Upset Result\" and the value from that column in the table above is used. 
So, in our example of player A vs player B, if player B wins the match, then the expected outcome happens, and 5 points are added to player B's rating and 5 points are deducted from player A's rating. Looking at Shemar Britton's match results and applying the point exchange table gives us the following result:\n\nShemar Britton's Wins\nWinner Loser\nPoint Spread Outcome Gain Player USATT # Rating Player USATT # Rating\n92 EXPECTED 4 Shemar Britton 91729 2275 Dennis Stephenson 20484 2183\n416 EXPECTED 0 Shemar Britton 91729 2275 Nicholas Wetzler 61233 1859\n98 EXPECTED 4 Shemar Britton 91729 2275 Adnan Medunjanin 83321 2177\n\nShemar Britton's Losses\nWinner Loser\nPoint Spread Outcome Loss Player USATT # Rating Player USATT # Rating\n21 UPSET -10 David Cui 86737 2254 Shemar Britton 91729 2275\n\nYou can click here to view a table of outcomes and points gained/lost from all the matches with all the players in this tournament.\n\nThe \"Outcome\" column above shows whether the match had an expected (player with the higher rating wins the match) or an upset (player with the higher rating loses the match) outcome. Based on this outcome, and using both players' initial ratings, we apply the point exchange table from above and show the rating points earned and lost by Shemar Britton in the \"Gain\" column. Matches are separated out into two tables for wins and losses, where points are gained and lost respectively. We get the following math to calculate the Pass 1 Rating for Shemar Britton:\n\nInitial Rating Gains/Losses Pass 1 Rating\n2275 + 4 + 0 - 10 + 4 $=\\mathrm{2273}$\n\nYou can click here to view a table of Pass 1 calculations for all the rated players in this tournament.\n\nPass 2 Rating\nThe purpose of this pass is solely to determine ratings for unrated players. To do this, we first look at the ratings for rated players that came out of Pass 1 to determine a “Pass 2 Adjustment”. The logic for this is as follows:\n\n1. 
We calculate the points gained in Pass 1. Points gained is simply the difference between the Pass 1 Rating and the Initial Rating of a player:\n\n${\\rho }_{i}^{2}={P}_{i}^{1}-{P}_{i}^{0}$\nwhere,\n\n Symbol Universe Description ${P}_{i}^{0}$ ${P}_{i}^{0}\\in \\mathrm{{ℤ}^{+}}$ the initial rating for the $i$-th player. We use the symbol $P$ and the superscript $0$ to represent the idea that we sometimes refer to the process of identifying the initial rating of the given player as Pass 0 of the ratings processor. ${P}_{i}^{1}$ ${P}_{i}^{1}\\in \\mathrm{{ℤ}^{+}}$ the Pass 1 rating for the $i$-th player. ${\\rho }_{i}^{2}$ ${\\rho }_{i}^{2}\\in ℤ$ the points gained by the $i$-th player in this tournament. Note here that we use the superscript $2$ to denote that this value is calculated and used in Pass 2 of the ratings processor. Further, ${\\rho }_{i}^{2}$ only exists for players who have a well defined Pass 1 Rating. For Players with an undefined Pass 1 Rating (unrated players), will have an undefined ${\\rho }_{i}^{2}$. $i$ $i\\in \\left[1,\\mathrm{134}\\right]\\cap ℤ$ the index of the player under consideration. $i$ can be as small as $1$ or as large as $\\mathrm{134}$ for this tournament and the i-th player must be a rated player.\n\n2. For rated players, Pass 1 points gained, ${\\rho }_{i}^{2}$, is used to calculate the Pass 2 Adjustment in the following way:\n1. If a player gained less than 50 points (exclusive) in pass 1, then we set that player's Pass 2 Adjustment to his/her Initial Rating.\n2. If a player gained between 50 and 74 (inclusive) points in pass 1, then we set the player's Pass 2 Adjustment to his/her Final Pass1 Rating.\n3. 
If a player gains 75 or more points (inclusive) in pass 1, then the following formula applies:\n• If the player has won at least one match, and lost at least 1 match in the tournament, then the player's Pass 2 Adjustment is the average of his/her Final Pass 1 Rating and the average of his/her opponents rating from the best win and the worst loss, represented using the formula below:\n\n$\\mathrm{{\\alpha }_{i}^{2}}=⌊\\mathrm{\\frac{\\mathrm{{P}_{i}^{1}}+\\mathrm{\\frac{\\mathrm{{B}_{i}}+\\mathrm{{W}_{i}}}{2}}}{2}}⌋$\n\nwhere ${\\alpha }_{i}^{2}$ is the Pass 2 Adjustment for the current player, ${P}_{i}^{1}$ is the Pass 1 Rating, ${B}_{i}$ is the rating of the highest rated opponent against which the current player won a match, and ${W}_{i}$ is the rating of the lowest rated opponent against which the current player lost a match.\n• If a player has not lost any of his/her matches in the current tournament, the mathematical median (rounded down to the nearest integer) of all of the player's opponents initial rating is used as his/her Pass 2 Adjustment:\n\n$\\mathrm{{\\alpha }_{i}^{2}}=\\mathrm{⌊\\stackrel{\\sim }{\\mathrm{\\left\\{\\mathrm{{P}_{k}^{0}}\\right\\}}}⌋}$\n\nwhere ${P}_{k}^{0}$ is the initial rating of the player who was the i-th player's opponent from the k-th match.\nSymbol Universe Description\n$i$ $i\\in \\left[1,\\mathrm{134}\\right]\\cap ℤ$ the index of the player under consideration. $i$ can be as small as $1$ or as large as $\\mathrm{134}$ for this tournament and the i-th player must be a rated player.\n$q$ $q\\in \\left[1,\\mathrm{543}\\right]\\cap ℤ$ the index of the match result under consideration. $q$ can be as small as $1$ or as large as $\\mathrm{543}$ for this tournament and the q-th match must be have both rated players as opponents.\n$g$ $g\\in \\left[1,5\\right]\\cap ℤ$ the g-th game of the current match result under consideration. 
$q$ can be as small as $1$ or as large as $5$ for this tournament assuming players play up to 5 games in a match.\n${P}_{k}^{0}$ ${P}_{k}^{0}\\in \\mathrm{{ℤ}^{+}}$ initial rating of the i-th player's opponent from the k-th match.\n\n• Therefore, the Pass 2 Adjustment for Shemar Britton is calculated as follows:\n• Given the initial rating of 2275,\n• and the Pass 1 rating of 2273,\n• the Pass 1 gain is 2273 - 2275 = -2.\n• Since the Pass 1 gain of -2 is less than 50, the Pass 2 Rating (also referred to as Pass 2 Adjustment) is reset back to the initial rating.\n• Therefore the Pass 2 Adjustment for Shemar Britton is 2275.\n\nYou can click here to view a table of Pass 2 Adjustments for all the rated players in this tournament.\n\n3. After calculating the Pass 2 Adjustment for all the rated players as described above, we can now calculate the Pass 2 Rating for all the unrated players in this tournament (which is the main purpose of Pass 2). Pass 2 Rating is calculated using the following formula:\n1. If all of the matches of an unrated player are against other unrated players, then the Pass 2 Rating for that player is simply set to 1200. You can click here to view these players who received a 1200 Pass 2 Rating. Not all of Shemar Britton's matches were against unrated players. So this rule does not apply to him.\n2. For unrated players with wins and losses, where at least 1 of the opponents has an initial rating, the Pass 2 Rating is the average of the best win and the worst loss (using the Pass 2 Adjustment of all rated players) as defined by this formula here:\n\n$\\mathrm{{P}_{i}^{2}}=⌊\\mathrm{\\frac{\\mathrm{{B}_{i}^{2}}+\\mathrm{{W}_{i}^{2}}}{2}}⌋$\n\nwhere ${P}_{i}^{2}$ is the Pass 2 Rating for the i-th player, ${B}_{i}^{2}$ is the largest Pass 2 Adjustment (best win) of the opponenet against whom the i-th player won a match, and ${W}_{i}^{2}$ is the smallest Pass 2 Adjustment (worst loss) of the opponent against whom the i-th player lost a match.\n3. 
For unrated players with all wins and no losses, where at least 1 of the opponents has an initial rating, the Pass 2 Rating is calculated using the following formula:\n$P_i^2 = B_i^2 + \\sum_{k=0}^{M_i-1} I(B_i^2 - \\alpha_k^2)$\nwhere the function $I\\left(x\\right)$ is defined as, \\begin{equation} I(x)=\\left\\{ \\begin{array}{ll} 10, & \\text{if}\\ x >= 1, x <= 50 \\\\ 5, & \\text{if}\\ x >= 51, x <=100 \\\\ 1, & \\text{if}\\ x >= 101, x <= 150 \\\\ 0, & \\text{otherwise} \\end{array}\\right. \\end{equation}\nwhere,\nSymbol Universe Description\n${P}_{i}^{2}$ ${P}_{i}^{2}\\in \\mathrm{{ℤ}^{+}}$ the Pass 2 Rating of the i-th player in this tournament, only applicable to unrated players, where ${P}_{i}^{0}$ is not defined\n${B}_{i}^{2}$ ${B}_{i}^{2}\\in \\mathrm{{ℤ}^{+}}$ the largest of the Pass 2 Adjustments of opponents of the i-th player against whom he/she won a match.\n${\\alpha }_{k}^{2}$ ${\\alpha }_{k}^{2}\\in \\mathrm{{ℤ}^{+}}$ the Pass 2 Adjustment of the player who was the opponent of the i-th player in the k-th match\n$I\\left(x\\right)$ $I:ℤ↦\\mathrm{{ℤ}^{+}}$ a function that maps all integers to one of the values 0, 1, 5, 10.\n${M}_{i}$ ${M}_{i}\\in \\mathrm{{ℤ}^{+}}$ total number of matches played by the i-th player in this tournament\nk $k\\in \\mathrm{\\left[0,\\mathrm{{M}_{i}}-1\\right]\\cap {ℤ}^{+}}$ the index of the match of the i-th player, ranging from 0 to ${M}_{i}-1$\n4. 
For unrated players with all losses and no wins, where at least 1 of the opponents has an initial rating, the Pass 2 Rating is calculated using the following formula:\n$P_i^2 = W_i^2 + \\sum_{k=0}^{M_i-1} I(W_i^2 - \\alpha_k^2)$\nwhere $I\\left(x\\right)$ is defined above and,\n\nSymbol Universe Description\n${P}_{i}^{2}$ ${P}_{i}^{2}\\in \\mathrm{{ℤ}^{+}}$ the Pass 2 Rating of the i-th player in this tournament, only applicable to unrated players, where ${P}_{i}^{0}$ is not defined\n${W}_{i}^{2}$ ${W}_{i}^{2}\\in \\mathrm{{ℤ}^{+}}$ the smallest of the Pass 2 Adjustments of opponents of the i-th player against whom he/she lost a match.\n${\\alpha }_{k}^{2}$ ${\\alpha }_{k}^{2}\\in \\mathrm{{ℤ}^{+}}$ the Pass 2 Adjustment of the player who was the opponent of the i-th player in the k-th match\n$I\\left(x\\right)$ $I:ℤ↦\\mathrm{{ℤ}^{+}}$ a function that maps all integers to one of the values 0, 1, 5, 10.\n${M}_{i}$ ${M}_{i}\\in \\mathrm{{ℤ}^{+}}$ total number of matches played by the i-th player in this tournament\nk $k\\in \\mathrm{\\left[0,\\mathrm{{M}_{i}}-1\\right]\\cap {ℤ}^{+}}$ the index of the match of the i-th player, ranging from 0 to ${M}_{i}-1$\n5. For the rated players, all the work done in Pass 1 and Pass 2 is undone and they have their ratings reset back to their initial ratings, while the unrated players keep their Pass 2 Adjustment as their final Pass 2 Rating. Since Shemar Britton is a rated player, his Pass 2 Adjustment of 2275 will be ignored, along with his Pass 1 Rating of 2273, and his Pass 2 Rating will be set to his initial rating of 2275 with which he came into this tournament.\n\nClick here to see detailed information about the Pass 2 Ratings of all the other players in this tournament.\n\nPass 3 Rating\nAny of the unrated players who have all wins or all losses are skipped in Pass 3. Since Shemar Britton has an initial rating of 2275, he is not an unrated player, and therefore this rule does not apply to him. 
You can click here to view a list of all the players that are skipped in this Pass 3.\n\nPass 3 Rating is calculated using the 2 steps described below:\n1. In the first part of Pass 3, we apply the point exchange table described in Pass 1 above, except this time using all the players' Pass 2 Ratings. Looking at Shemar Britton's wins and losses and applying the point exchange table gives us the following result:\nShemar Britton's Wins\nWinner Loser\nPoint Spread Outcome Gain Player USATT # Rating Player USATT # Rating\n92 EXPECTED 4 Shemar Britton 91729 2275 Dennis Stephenson 20484 2183\n416 EXPECTED 0 Shemar Britton 91729 2275 Nicholas Wetzler 61233 1859\n98 EXPECTED 4 Shemar Britton 91729 2275 Adnan Medunjanin 83321 2177\n\nShemar Britton's Losses\nWinner Loser\nPoint Spread Outcome Loss Player USATT # Rating Player USATT # Rating\n21 UPSET -10 David Cui 86737 2254 Shemar Britton 91729 2275\n\nYou can click here to view a table of outcomes and points gained/lost from all the matches with all the players in this tournament for Pass 3 Part 1.\n\nThe \"Outcome\" column above shows whether the match had an expected (player with the higher rating wins the match) or an upset (player with the higher rating loses the match) outcome. Based on this outcome, and using both players' Pass 2 Ratings, we apply the point exchange table from above and show the rating points earned and lost by Shemar Britton in the \"Gain\" column. Matches are divided up into two tables for wins and losses, where points are gained for wins and lost for losses. Putting all the gains and losses together, we get the following math to calculate the rating for Shemar Britton in this first part of Pass 3:\n\nPass 2 Rating Gains/Losses Pass 3 Part 1 Rating\n2275 + 4 + 0 - 10 + 4 $=\\mathrm{2273}$\n\nYou can click here to view a table of these calculations for all the players in this tournament.\n\n2. 
Given the Pass 3 Part 1 rating calculated above, the second part of Pass 3 looks very similar to the part of Pass 2 that deals with rated players where we calculate their Pass 2 Adjustment.\n1. First, we calculate the points gained in Pass 3 Part 1. Points gained is simply the difference between the Pass 3 Part 1 Rating and the Pass 2 Rating of a player:\n\n${\\rho }_{i}^{3}={p}_{i}^{3}-{P}_{i}^{2}$\nwhere,\n\n Symbol Universe Description ${P}_{i}^{2}$ ${P}_{i}^{2}\\in \\mathrm{{ℤ}^{+}}$ the Pass 2 Rating for the $i$-th player. ${p}_{i}^{3}$ ${p}_{i}^{3}\\in \\mathrm{{ℤ}^{+}}$ the Pass 3 Part 1 rating for the $i$-th player. (Note that since this is an intermediate result, we are using a lower case p instead of the upper case P that we use to indicate final result from each pass of the ratings processor. ${\\rho }_{i}^{3}$ ${\\rho }_{i}^{3}\\in ℤ$ the points gained by the $i$-th player in this tournament in Pass 3. $i$ $i\\in \\left[1,\\mathrm{134}\\right]\\cap ℤ$ the index of the player under consideration. $i$ can be as small as $1$ or as large as $\\mathrm{134}$ for this tournament.\n\n3. Pass 3 points gained, ${\\rho }_{i}^{3}$, is then used to calculate the Pass 3 Part 2 Rating in the following way:\n1. If a player gained less than 50 points (exclusive) in Pass 3 Part 1, then we set that player's Pass 3 Part 2 Rating to his/her Pass 2 Rating.\n2. If a player gained between 50 and 74 (inclusive) points in Pass 3 Part 1, then we set the player's Pass 3 Part 2 Rating to his/her Pass 3 Part 1 Rating.\n3. 
If a player gains 75 or more points (inclusive) in Pass 3 Part 1, then the following formula applies:\n• If the player has won at least one match, and lost at least 1 match in the tournament, then the player's Pass 3 Part 2 Rating is the average of his/her Pass 3 Part 1 Rating and the average of his/her opponents rating from the best win and the worst loss, represented using the formula below:\n\n$\\mathrm{{\\alpha }_{i}^{3}}=⌊\\mathrm{\\frac{\\mathrm{{p}_{i}^{3}}+\\mathrm{\\frac{\\mathrm{{B}_{i}^{3}}+\\mathrm{{W}_{i}^{3}}}{2}}}{2}}⌋$\n\nwhere ${\\alpha }_{i}^{3}$ is the Pass 3 Part 2 Rating for the current player, ${p}_{i}^{3}$ is the Pass 3 Part 1 Rating, ${B}_{i}^{3}$ is the rating of the highest rated opponent against which the current player won a match, and ${W}_{i}$ is the rating of the lowest rated opponent against which the current player lost a match.\n• If a player has not lost any of his/her matches in the current tournament, the mathematical median (rounded down to the nearest integer) of all of the player's opponents rating is used as his/her Pass 3 Part 2 Rating:\n$\\mathrm{{\\alpha }_{i}^{3}}=\\mathrm{⌊\\stackrel{\\sim }{\\mathrm{\\left\\{\\mathrm{{p}_{k}^{3}}\\right\\}}}⌋}$\n\nwhere ${p}_{k}^{3}$ is the Pass 3 Part 1 Rating of the i-th player's opponent from the k-th match.\n\n• Therefore, the Pass 3 Part 2 Rating for Shemar Britton is calculated as follows:\n• Given the Pass 2 Rating of 2275,\n• and the Pass 3 Part 1 rating of 2273,\n• the Pass 3 Part 1 gain is 2273 - 2275 = -2.\n• Since the Pass 3 Gain of -2 is less than 50, the Pass 3 Part 2 Rating is reset back to the Pass 2 Rating.\n• Therefore the Pass 3 Part 2 Rating for Shemar Britton is 2275.\n\nThe Pass 3 Part 2 rating ends up becoming the final Pass 3 rating (also referred to as the Pass 3 Adjustment) except as follows:\n• In the cases where the Pass 3 Part 2 rating is less than the players' initial rating ${P}_{i}^{0}$, the Pass 3 rating is reset back to that players initial rating. 
Shemar Britton's Pass 3 Part 2 Rating came out to 2275. Since this value is not less than Shemar Britton's initial rating of 2275, his Pass 3 Adjustment is set to his Pass 3 Part 2 Rating of 2275.\n\n• It is possible for the admin of this tournament to override the Pass 3 Adjustment calculated above with a value they deem appropriate. Shemar Britton does not have a manually overridden value for his Pass 3 Adjustment, therefore the value remains at 2275.\nYou can click here to view a table of Pass 3 Part 2 Ratings for all the players in this tournament along with any manually overridden values.\n\nPass 4 Rating\nPass 4 is the final pass of the ratings processor. In this pass, we take the adjusted ratings (Pass 3 Adjustment) of all the rated players, and the assigned rating of unrated players (Pass 2 Rating), and apply the point exchange table to the match results based on these ratings to arrive at a final rating. Looking at Shemar Britton's match results and applying the point exchange table, gives us the following result:\n\nShemar Britton's Wins\nWinner Loser\nPoint Spread Outcome Gain Player USATT # Rating Player USATT # Rating\n92 EXPECTED 4 Shemar Britton 91729 2275 Dennis Stephenson 20484 2183\n416 EXPECTED 0 Shemar Britton 91729 2275 Nicholas Wetzler 61233 1859\n98 EXPECTED 4 Shemar Britton 91729 2275 Adnan Medunjanin 83321 2177\n\nShemar Britton's Losses\nWinner Loser\nPoint Spread Outcome Loss Player USATT # Rating Player USATT # Rating\n21 UPSET -10 David Cui 86737 2254 Shemar Britton 91729 2275\n\nYou can click here to view a table of outcomes and points gained/lost from all the matches with all the players in this tournament.\n\nThe \"Outcome\" column above shows whether the match had an expected (player with the higher rating wins the match) or an upset (player with the higher rating loses the match) outcome. 
Based on this outcome, and using both the players' Pass 3 Adjustment, we apply the point exchange table from above and show the ratings points earned and lost by Shemar Britton in the \"Gain\" and \"Loss\" columns. Matches are separated out into two tables for wins and losses where points are gained and lost respectively. We get the following math to calculate the Pass 4 Rating for Shemar Britton:\n\nPass 3 Rating Gains/Losses Pass 4 Rating\n2275 + 4 + 0 - 10 + 4 $=\\mathrm{2273}$\n\nYou can click here to view a table of Pass 4 calculations for all the players in this tournament." ]
[ null, "https://usatt.simplycompete.com/static/images/sc-logo-dark-small.png", null, "https://usatt.simplycompete.com/static/images/USATT_logo.jpg", null, "https://usatt.simplycompete.com/static/images/no-photo-male.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75851846,"math_prob":0.98846763,"size":15143,"snap":"2022-05-2022-21","text_gpt3_token_len":5925,"char_repetition_ratio":0.36230928,"word_repetition_ratio":0.2568039,"special_character_ratio":0.59208876,"punctuation_ratio":0.10788519,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99060875,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T18:47:54Z\",\"WARC-Record-ID\":\"<urn:uuid:e29cfaf8-70d2-4a1f-9d50-47e5a26f8839>\",\"Content-Length\":\"1049451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:176a5b44-d8d4-4c25-885b-a1ed17593ae9>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c458f28-beef-434e-88ff-54d901e99fc8>\",\"WARC-IP-Address\":\"52.34.235.248\",\"WARC-Target-URI\":\"https://usatt.simplycompete.com/t/exp2?tri=10734&uai=13130\",\"WARC-Payload-Digest\":\"sha1:SK5D53RCTWAH5IF7VZBX7ZWMMSRI7MNQ\",\"WARC-Block-Digest\":\"sha1:EGPNHN3CE5Q7JSSA6SV3YTJXB24MSVZT\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304309.5_warc_CC-MAIN-20220123172206-20220123202206-00260.warc.gz\"}"}
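The point exchange table and the gain/loss bookkeeping used in Passes 1, 3, and 4 above can be sketched as a small lookup function. This is a reconstruction from the table in the text, not USATT's actual ratings-processor code:

```python
# Rows: (upper bound of point spread, expected-result points, upset-result points),
# taken from the exchange table in the text.
EXCHANGE_TABLE = [
    (12, 8, 8), (37, 7, 10), (62, 6, 13), (87, 5, 16), (112, 4, 20),
    (137, 3, 25), (162, 2, 30), (187, 2, 35), (212, 1, 40), (237, 1, 45),
]

def points_exchanged(winner_rating, loser_rating):
    """Points the winner gains (and the loser loses) for one match."""
    spread = abs(winner_rating - loser_rating)
    upset = winner_rating < loser_rating  # lower-rated player won
    for upper, expected, upset_pts in EXCHANGE_TABLE:
        if spread <= upper:
            return upset_pts if upset else expected
    return 50 if upset else 0  # the "238 and up" row

# Shemar Britton's four matches, as listed in the Pass 1 tables:
gains = [
    points_exchanged(2275, 2183),    # win, spread 92  -> +4
    points_exchanged(2275, 1859),    # win, spread 416 -> +0
    points_exchanged(2275, 2177),    # win, spread 98  -> +4
    -points_exchanged(2254, 2275),   # loss, upset, spread 21 -> -10
]
print(2275 + sum(gains))  # 2273, the Pass 1 Rating
```

Summing the exchanges reproduces the document's arithmetic: 2275 + 4 + 0 + 4 − 10 = 2273.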
https://learnhtml.foobrdigital.com/how-does-regression-analysis-work/
[ "Categories\n\n# How does regression analysis work?\n\nIn order to conduct a regression analysis, you’ll need to define a dependent variable that you hypothesize is being influenced by one or several independent variables.\n\nYou’ll then need to establish a comprehensive dataset to work with. Administering surveys to your audiences of interest is a terrific way to establish this dataset. Your survey should include questions addressing all of the independent variables that you are interested in.\n\nLet’s continue using our application training example. In this case, we’d want to measure the historical levels of satisfaction with the events from the past three years or so (or however long you deem statistically significant), as well as any information possible in regards to the independent variables.\n\nPerhaps we’re particularly curious about how the price of a ticket to the event has impacted levels of satisfaction.\n\nTo begin investigating whether or not there is a relationship between these two variables, we would begin by plotting these data points on a chart, which would look like the following theoretical example.\n\n(Plotting your data is the first step in figuring out if there is a relationship between your independent and dependent variables)\n\nOur dependent variable (in this case, the level of event satisfaction) should be plotted on the y-axis, while our independent variable (the price of the event ticket) should be plotted on the x-axis.\n\nOnce your data is plotted, you may begin to see correlations. If the theoretical chart above did indeed represent the impact of ticket prices on event satisfaction, then we’d be able to confidently say that the higher the ticket price, the higher the levels of event satisfaction.\n\nBut how can we tell the degree to which ticket price affects event satisfaction?\n\nTo begin answering this question, draw a line through the middle of all of the data points on the chart. 
This line is referred to as your regression line, and it can be precisely calculated using a standard statistics program like Excel.\n\nWe’ll use a theoretical chart once more to depict what a regression line should look like.\n\nThe regression line represents the relationship between your independent variable and your dependent variable.\n\nExcel will even provide a formula for the slope of the line, which adds further context to the relationship between your independent and dependent variables.\n\nThe formula for a regression line might look something like Y = 100 + 7X + error term.\n\nThis tells you that if there is no “X”, then Y = 100. If X is our increase in ticket price, this informs us that with no increase in ticket price, event satisfaction will still sit at 100 points.\n\nYou’ll notice that the slope formula calculated by Excel includes an error term. Regression lines always consider an error term because in reality, independent variables are never perfect predictors of dependent variables. This makes sense while looking at the impact of ticket prices on event satisfaction — there are clearly other variables that are contributing to event satisfaction outside of price.\n\nYour regression line is simply an estimate based on the data available to you. So, the larger your error term, the less definitively certain your regression line is.
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.930146,"math_prob":0.93251264,"size":3237,"snap":"2023-40-2023-50","text_gpt3_token_len":601,"char_repetition_ratio":0.14599444,"word_repetition_ratio":0.034285713,"special_character_ratio":0.18751931,"punctuation_ratio":0.076124564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T12:48:18Z\",\"WARC-Record-ID\":\"<urn:uuid:89130b13-4302-41bb-9386-9b87ab55bb7a>\",\"Content-Length\":\"243399\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d55db716-d2fa-435c-8207-4d06ecbdf2e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:2e46b60d-7a0e-46ab-ae0f-11cb85f4d89b>\",\"WARC-IP-Address\":\"172.67.191.6\",\"WARC-Target-URI\":\"https://learnhtml.foobrdigital.com/how-does-regression-analysis-work/\",\"WARC-Payload-Digest\":\"sha1:P3QZBQOI5NK2OOVYK7PZZI2NGOYRHXM6\",\"WARC-Block-Digest\":\"sha1:PZSIXRZQLEEHRDJJ24SNNVR3SWYNZAMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510297.25_warc_CC-MAIN-20230927103312-20230927133312-00053.warc.gz\"}"}
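The regression line the article asks Excel for is an ordinary least-squares fit, which can be written out directly. The data below are hypothetical points constructed to lie exactly on the article's example line Y = 100 + 7X:

```python
def least_squares(xs, ys):
    """Fit y = intercept + slope * x by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical ticket-price increases (x) and satisfaction scores (y).
xs = [0, 10, 20, 30]
ys = [100 + 7 * x for x in xs]  # noiseless, so the fit is exact
intercept, slope = least_squares(xs, ys)
print(intercept, slope)  # 100.0 7.0
```

With real survey data the points would not sit exactly on the line; the residuals left over are the error term the article describes.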
https://www.oreilly.com/library/view/math-for-the/9780133597639/book1_go01.html
[ "Glossary\n\nAlternative hypothesis (H1)\n\nThe opposite of the null hypothesis (H0).\n\nAnalysis of variance (ANOVA)\n\nA statistical method that tests the effect of different factors on a variable of interest.\n\nBar chart\n\nA chart containing rectangles (“bars”) in which the length of each bar represents the count, amount, or percentage of responses in each category.\n\nBinomial distribution\n\nA distribution that finds the probability of a given number of successes for a given probability of success and sample size.\n\nBox-and-whisker plot\n\nAlso known as a boxplot; a graphical representation of the five-number summary that consists of the smallest value, the first quartile (or 25th percentile), the median, the third quartile (or 75th percentile), and the largest ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8816411,"math_prob":0.82605666,"size":767,"snap":"2019-43-2019-47","text_gpt3_token_len":158,"char_repetition_ratio":0.093053736,"word_repetition_ratio":0.0,"special_character_ratio":0.20208605,"punctuation_ratio":0.1037037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9768524,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T13:52:32Z\",\"WARC-Record-ID\":\"<urn:uuid:df0786aa-c485-48b0-97b2-0edc48da5b64>\",\"Content-Length\":\"26769\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:296e2217-922f-412c-a4f0-79eddbee75f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9395308-4e3b-4a5d-bca9-a0e7861bf4fe>\",\"WARC-IP-Address\":\"104.119.16.36\",\"WARC-Target-URI\":\"https://www.oreilly.com/library/view/math-for-the/9780133597639/book1_go01.html\",\"WARC-Payload-Digest\":\"sha1:W367GGXIRS7RGBOMKOMYBML2KWROB2UJ\",\"WARC-Block-Digest\":\"sha1:WI64D44IFNITVSUFX25IPFAJ4QES45TE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986675316.51_warc_CC-MAIN-20191017122657-20191017150157-00508.warc.gz\"}"}
https://d2mvzyuse3lwjc.cloudfront.net/pdfs/NAG26/Manual/html/d02/d02gbc.html
[ "NAG Library Function Document\n\n1Purpose\n\nnag_ode_bvp_fd_lin_gen (d02gbc) solves a general linear two-point boundary value problem for a system of ordinary differential equations using a deferred correction technique.\n\n2Specification\n\n #include #include\nvoid  nag_ode_bvp_fd_lin_gen (Integer neq,\n void (*fcnf)(Integer neq, double x, double f[], Nag_User *comm),\n void (*fcng)(Integer neq, double x, double g[], Nag_User *comm),\ndouble a, double b, double c[], double d[], double gam[], Integer mnp, Integer *np, double x[], double y[], double tol, Nag_User *comm, NagError *fail)\n\n3Description\n\nnag_ode_bvp_fd_lin_gen (d02gbc) solves the linear two-point boundary value problem for a system of neq ordinary differential equations in the interval $\\left[a,b\\right]$. The system is written in the form\n $y' = F(x)y + g(x)$ (1)\nand the boundary conditions are written in the form\n $Cy(a) + Dy(b) = γ$ (2)\nHere $F\\left(x\\right)$, $C$ and $D$ are neq by neq matrices, and $g\\left(x\\right)$ and $\\gamma$ are neq component vectors. The approximate solution to (1) and (2) is found using a finite difference method with deferred correction. The algorithm is a specialisation of that used in the function nag_ode_bvp_fd_nonlin_gen (d02rac) which solves a nonlinear version of (1) and (2). The nonlinear version of the algorithm is described fully in Pereyra (1979).\nYou need to supply an absolute error tolerance and may also supply an initial mesh for the construction of the finite difference equations (alternatively a default mesh is used). The algorithm constructs a solution on a mesh defined by adding points to the initial mesh. This solution is chosen so that the error is everywhere less than your tolerance and so that the error is approximately equidistributed on the final mesh. The solution is returned on this final mesh.\nIf the solution is required at a few specific points then these should be included in the initial mesh. 
If, on the other hand, the solution is required at several specific points, then you should use the interpolation functions provided in Chapter e01 if these points do not themselves form a convenient mesh.\n\n4References\n\nPereyra V (1979) PASVA3: An adaptive finite-difference Fortran program for first order nonlinear, ordinary boundary problems Codes for Boundary Value Problems in Ordinary Differential Equations. Lecture Notes in Computer Science (eds B Childs, M Scott, J W Daniel, E Denman and P Nelson) 76 Springer–Verlag\n\n5Arguments\n\n1:    $\\mathbf{neq}$IntegerInput\nOn entry: the number of equations; that is neq is the order of system (1).\nConstraint: ${\\mathbf{neq}}\\ge 2$.\n2:    $\\mathbf{fcnf}$function, supplied by the userExternal Function\nfcnf must evaluate the matrix $F\\left(x\\right)$ in (1) at a general point $x$.\nThe specification of fcnf is:\n void fcnf (Integer neq, double x, double f[], Nag_User *comm)\n1:    $\\mathbf{neq}$IntegerInput\nOn entry: the number of differential equations.\n2:    $\\mathbf{x}$doubleInput\nOn entry: the value of the independent variable $x$.\n3:    $\\mathbf{f}\\left[{\\mathbf{neq}}×{\\mathbf{neq}}\\right]$doubleOutput\nOn exit: the $\\left(i,j\\right)$th element of the matrix $F\\left(x\\right)$, for $i,j=1,2,\\dots ,{\\mathbf{neq}}$ where ${F}_{ij}$ is set by ${\\mathbf{f}}\\left[\\left(i-1\\right)×{\\mathbf{neq}}+\\left(j-1\\right)\\right]$. (See Section 10 for an example.)\n4:    $\\mathbf{comm}$Nag_User *\nPointer to a structure of type Nag_User with the following member:\npPointer\nOn entry/exit: the pointer $\\mathbf{comm}\\mathbf{\\to }\\mathbf{p}$ should be cast to the required type, e.g., struct user *s = (struct user *)comm → p, to obtain the original object's address with appropriate type. (See the argument comm below.)\nNote: fcnf should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by nag_ode_bvp_fd_lin_gen (d02gbc). 
If your code inadvertently does return any NaNs or infinities, nag_ode_bvp_fd_lin_gen (d02gbc) is likely to produce unexpected results.\n3:    $\\mathbf{fcng}$function, supplied by the userExternal Function\nfcng must evaluate the vector $g\\left(x\\right)$ in (1) at a general point $x$.\nThe specification of fcng is:\n void fcng (Integer neq, double x, double g[], Nag_User *comm)\n1:    $\\mathbf{neq}$IntegerInput\nOn entry: the number of differential equations.\n2:    $\\mathbf{x}$doubleInput\nOn entry: the value of the independent variable $x$.\n3:    $\\mathbf{g}\\left[{\\mathbf{neq}}\\right]$doubleOutput\nOn exit: the $\\mathit{i}$th element of the vector $g\\left(x\\right)$, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{neq}}$. (See Section 10 for an example.)\n4:    $\\mathbf{comm}$Nag_User *\nPointer to a structure of type Nag_User with the following member:\npPointer\nOn entry/exit: the pointer $\\mathbf{comm}\\mathbf{\\to }\\mathbf{p}$ should be cast to the required type, e.g., struct user *s = (struct user *)comm → p, to obtain the original object's address with appropriate type. (See the argument comm below.)\nNote: fcng should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by nag_ode_bvp_fd_lin_gen (d02gbc). 
If your code inadvertently does return any NaNs or infinities, nag_ode_bvp_fd_lin_gen (d02gbc) is likely to produce unexpected results.\nIf you do not wish to supply fcng the actual argument fcng must be the NAG defined null function pointer NULLFN.\n4:    $\\mathbf{a}$doubleInput\nOn entry: the left-hand boundary point, $a$.\n5:    $\\mathbf{b}$doubleInput\nOn entry: the right-hand boundary point, $b$.\nConstraint: ${\\mathbf{b}}>{\\mathbf{a}}$.\n6:    $\\mathbf{c}\\left[{\\mathbf{neq}}×{\\mathbf{neq}}\\right]$doubleInput/Output\n7:    $\\mathbf{d}\\left[{\\mathbf{neq}}×{\\mathbf{neq}}\\right]$doubleInput/Output\n8:    $\\mathbf{gam}\\left[{\\mathbf{neq}}\\right]$doubleInput/Output\nOn entry: the arrays c and d must be set to the matrices $C$ and $D$ in (2). gam must be set to the vector $\\gamma$ in (2).\nOn exit: the rows of c and d and the components of gam are re-ordered so that the boundary conditions are in the order:\n (i) conditions on $y\\left(a\\right)$ only; (ii) condition involving $y\\left(a\\right)$ and $y\\left(b\\right)$; and (iii) conditions on $y\\left(b\\right)$ only.\nThe function will be slightly more efficient if the arrays c, d and gam are ordered in this way before entry, and in this event they will be unchanged on exit.\nNote that the boundary conditions must be of boundary value type, that is neither $C$ nor $D$ may be identically zero. Note also that the rank of the matrix $\\left[C,D\\right]$ must be neq for the problem to be properly posed. 
Any violation of these conditions will lead to an error exit.\n9:    $\\mathbf{mnp}$IntegerInput\nOn entry: the maximum permitted number of mesh points.\nConstraint: ${\\mathbf{mnp}}\\ge 32$.\n10:  $\\mathbf{np}$Integer *Input/Output\nOn entry: determines whether a default or user-supplied initial mesh is used.\n${\\mathbf{np}}=0$\nnp is set to a default value of 4 and a corresponding equispaced mesh ${\\mathbf{x}}\\left[0\\right],{\\mathbf{x}}\\left[1\\right],\\dots ,{\\mathbf{x}}\\left[{\\mathbf{np}}-1\\right]$ is used.\n${\\mathbf{np}}\\ge 4$\nYou must define an initial mesh using the array x as described.\nConstraint: ${\\mathbf{np}}=0$ or $4\\le {\\mathbf{np}}\\le {\\mathbf{mnp}}$.\nOn exit: the number of points in the final (returned) mesh.\n11:  $\\mathbf{x}\\left[{\\mathbf{mnp}}\\right]$doubleInput/Output\nOn entry: if ${\\mathbf{np}}\\ge 4$ (see np above), the first np elements must define an initial mesh. Otherwise the elements of x need not be set.\nConstraint:\n $a = x[0] < x[1] < ⋯ < x[np-1] = b ,$ (3)\nfor ${\\mathbf{np}}\\ge 4$.\nOn exit: ${\\mathbf{x}}\\left[0\\right],{\\mathbf{x}}\\left[1\\right],\\dots ,{\\mathbf{x}}\\left[{\\mathbf{np}}-1\\right]$ define the final mesh (with the returned value of np) satisfying the relation (3).\n12:  $\\mathbf{y}\\left[{\\mathbf{neq}}×{\\mathbf{mnp}}\\right]$doubleOutput\nOn exit: the approximate solution ${z}_{j}\\left({x}_{i}\\right)$ satisfying (5), on the final mesh, that is\n $y[(j-1)×mnp+(i-1)] = {z}_{j}\\left({x}_{i}\\right) , i = 1 , 2 , … , np ; j = 1 , 2 , … , neq ,$\nwhere np is the number of points in the final mesh.\nThe remaining columns of y are not used.\n13:  $\\mathbf{tol}$doubleInput\nOn entry: a positive absolute error tolerance.\nIf\n $a = {x}_{1} < {x}_{2} < ⋯ < {x}_{np} = b$ (4)\nis the final mesh, ${z}_{j}\\left({x}_{i}\\right)$ is the $j$th component of the approximate solution at ${x}_{i}$, and ${y}_{j}\\left({x}_{i}\\right)$ is the $j$th component of the true solution of equation (1) (see Section 3) and the boundary conditions, then, except in extreme cases, it is expected that\n $\\left|{z}_{j}\\left({x}_{i}\\right) - {y}_{j}\\left({x}_{i}\\right)\\right| ≤ tol , i = 1 , 2 , … , np ; j = 1 , 2 , … , neq$ (5)\nConstraint: ${\\mathbf{tol}}>0.0$.\n14:  $\\mathbf{comm}$Nag_User *\nPointer to a structure of type Nag_User with the following member:\npPointer\nOn entry/exit: the pointer $\\mathbf{comm}\\mathbf{\\to }\\mathbf{p}$, of type Pointer, allows you to communicate information to and from fcnf and fcng. An object of the required type should be declared, e.g., a structure, and its address assigned to the pointer $\\mathbf{comm}\\mathbf{\\to }\\mathbf{p}$ by means of a cast to Pointer in the calling program, e.g., comm.p = (Pointer)&s. The type pointer will be void * with a C compiler that defines void * and char * otherwise.\n15:  $\\mathbf{fail}$NagError *Input/Output\nThe NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).\n\n6Error Indicators and Warnings\n\nNE_2_REAL_ARG_LE\nOn entry, ${\\mathbf{b}}=〈\\mathit{\\text{value}}〉$ while ${\\mathbf{a}}=〈\\mathit{\\text{value}}〉$. These arguments must satisfy ${\\mathbf{b}}>{\\mathbf{a}}$.\nNE_ALLOC_FAIL\nDynamic memory allocation failed.\nNE_BOUND_COND_COL\nMore than neq columns of the neq by $2×{\\mathbf{neq}}$ matrix $\\left[C,D\\right]$ are identically zero, i.e., the boundary conditions are rank deficient. The number of non-identically zero columns is $〈\\mathit{\\text{value}}〉$.\nNE_BOUND_COND_LC\nAt least one row of the neq by $2×{\\mathbf{neq}}$ matrix $\\left[C,D\\right]$ is a linear combination of the other rows, i.e., the boundary conditions are rank deficient. 
The index of the first such row is $〈\\mathit{\\text{value}}〉$.\nNE_BOUND_COND_MAT\nOne of the matrices $C$ or $D$ is identically zero, i.e., the problem is of initial value and not of the boundary type.\nNE_BOUND_COND_NLC\nAt least one row of the neq by $2×{\\mathbf{neq}}$ matrix $\\left[C,D\\right]$ is a linear combination of the other rows determined up to a numerical tolerance, i.e., the boundary conditions are rank deficient. The index of first such row is $〈\\mathit{\\text{value}}〉$. There is some doubt as to the rank deficiency of the boundary conditions. However even if the boundary conditions are not rank deficient they are not posed in a suitable form for use with this function. For example, if\n $C = \\begin{pmatrix} 1 & 0 \\\\ 1 & \\epsilon \\end{pmatrix} , D = \\begin{pmatrix} 1 & 0 \\\\ 1 & 0 \\end{pmatrix} , \\gamma = \\begin{pmatrix} {\\gamma}_{1} \\\\ {\\gamma}_{2} \\end{pmatrix}$\nand $\\epsilon$ is small enough, this error exit is likely to be taken. A better form for the boundary conditions in this case would be\n $C = \\begin{pmatrix} 1 & 0 \\\\ 0 & 1 \\end{pmatrix} , D = \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} , \\gamma = \\begin{pmatrix} {\\gamma}_{1} \\\\ {\\epsilon}^{-1}\\left({\\gamma}_{2}-{\\gamma}_{1}\\right) \\end{pmatrix}$\nNE_BOUND_COND_ROW\nRow $〈\\mathit{\\text{value}}〉$ of the array c and the corresponding row of array d are identically zero, i.e., the boundary conditions are rank deficient.\nNE_CONV_MESH\nA finer mesh is required for the accuracy requested; that is mnp is not large enough.\nNE_CONV_MESH_INIT\nThe Newton iteration failed to converge on the initial mesh. This may be due to the initial mesh having too few points or the initial approximate solution being too inaccurate. Try using nag_ode_bvp_fd_nonlin_gen (d02rac).\nNE_CONV_ROUNDOFF\nSolution cannot be improved due to roundoff error. Too much accuracy might have been requested.\nNE_INT_ARG_LT\nOn entry, ${\\mathbf{mnp}}=〈\\mathit{\\text{value}}〉$.\nConstraint: ${\\mathbf{mnp}}\\ge 32$.\nOn entry, ${\\mathbf{neq}}=〈\\mathit{\\text{value}}〉$.\nConstraint: ${\\mathbf{neq}}\\ge 2$.\nNE_INT_RANGE_CONS_2\nOn entry, ${\\mathbf{np}}=〈\\mathit{\\text{value}}〉$ and ${\\mathbf{mnp}}=〈\\mathit{\\text{value}}〉$. 
The argument np must satisfy either $4\\le {\\mathbf{np}}\\le {\\mathbf{mnp}}$ or ${\\mathbf{np}}=0$.\nNE_INTERNAL_ERROR\nAn internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.\nNE_LF_B_MESH\nOn entry, the left boundary value a, has not been set to ${\\mathbf{x}}\\left[0\\right]$: ${\\mathbf{a}}=〈\\mathit{\\text{value}}〉$, ${\\mathbf{x}}\\left[0\\right]=〈\\mathit{\\text{value}}〉$.\nNE_NOT_STRICTLY_INCREASING\nThe sequence x is not strictly increasing: ${\\mathbf{x}}\\left[〈\\mathit{\\text{value}}〉\\right]=〈\\mathit{\\text{value}}〉$, ${\\mathbf{x}}\\left[〈\\mathit{\\text{value}}〉\\right]=〈\\mathit{\\text{value}}〉$.\nNE_REAL_ARG_LE\nOn entry, tol must not be less than or equal to 0.0: ${\\mathbf{tol}}=〈\\mathit{\\text{value}}〉$.\nNE_RT_B_MESH\nOn entry, the right boundary value b, has not been set to ${\\mathbf{x}}\\left[{\\mathbf{np}}-1\\right]$: ${\\mathbf{b}}=〈\\mathit{\\text{value}}〉$, ${\\mathbf{x}}\\left[{\\mathbf{np}}-1\\right]=〈\\mathit{\\text{value}}〉$.\n\n7Accuracy\n\nThe solution returned by the function will be accurate to your tolerance as defined by the relation (5) except in extreme circumstances. 
If too many points are specified in the initial mesh, the solution may be more accurate than requested and the error may not be approximately equidistributed.\n\n8Parallelism and Performance\n\nnag_ode_bvp_fd_lin_gen (d02gbc) is not threaded in any implementation.\n\n9Further Comments\n\nThe time taken by the function depends on the difficulty of the problem, the number of mesh points (and meshes) used and the number of deferred corrections.\nIn the case where you wish to solve a sequence of similar problems, the use of the final mesh from one case is strongly recommended as the initial mesh for the next.\n\n10Example\n\nWe solve the problem (written as a first order system)\n $\\epsilon y'' + y' = 0$\nwith boundary conditions\n $y(0) = 0 , y(1) = 1$\nfor the cases $\\epsilon ={10}^{-1}$ and $\\epsilon ={10}^{-2}$ using the default initial mesh in the first case, and the final mesh of the first case as initial mesh for the second (more difficult) case. We give the solution and the error at each mesh point to illustrate the accuracy of the method given the accuracy request ${\\mathbf{tol}}=\\text{1.0e−3}$.\n\n10.1Program Text\n\nProgram Text (d02gbce.c)\n\n10.2Program Data\n\nNone.\n\n10.3Program Results\n\nProgram Results (d02gbce.r)\n\n© The Numerical Algorithms Group Ltd, Oxford, UK. 2017" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8001708,"math_prob":0.9989226,"size":11117,"snap":"2022-05-2022-21","text_gpt3_token_len":2832,"char_repetition_ratio":0.12984793,"word_repetition_ratio":0.21088083,"special_character_ratio":0.24835837,"punctuation_ratio":0.1388889,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999821,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T15:13:26Z\",\"WARC-Record-ID\":\"<urn:uuid:4f7b561d-a6d5-4e41-a683-958bd2fbb19d>\",\"Content-Length\":\"50760\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3cf4bdd8-dc7d-4160-be76-7e4ad89f236b>\",\"WARC-Concurrent-To\":\"<urn:uuid:39482d33-c0c8-4db3-afaf-98d803d1406b>\",\"WARC-IP-Address\":\"13.249.46.83\",\"WARC-Target-URI\":\"https://d2mvzyuse3lwjc.cloudfront.net/pdfs/NAG26/Manual/html/d02/d02gbc.html\",\"WARC-Payload-Digest\":\"sha1:WFLRS3F4RJNX3A6AGPU4IPEXA4MDMG2I\",\"WARC-Block-Digest\":\"sha1:KLWXEN3CNLVXJCLT6CGKIU3K2WW2O7J5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305266.34_warc_CC-MAIN-20220127133107-20220127163107-00402.warc.gz\"}"}
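The Section 10 example above ($\epsilon y'' + y' = 0$, $y(0)=0$, $y(1)=1$) is easy to cross-check outside the NAG Library. The sketch below is not d02gbc's deferred-correction algorithm, just a plain central finite-difference solve on a uniform mesh (all names mine), compared against the analytic solution $y(x) = (1 - e^{-x/\epsilon})/(1 - e^{-1/\epsilon})$:

```python
import numpy as np

def solve_bvp_fd(eps, n=201):
    """Central differences for eps*y'' + y' = 0, y(0) = 0, y(1) = 1, on a uniform mesh."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    m = n - 2                            # number of interior unknowns
    lo = eps / h**2 - 1.0 / (2.0 * h)    # coefficient of y[i-1]
    di = -2.0 * eps / h**2               # coefficient of y[i]
    up = eps / h**2 + 1.0 / (2.0 * h)    # coefficient of y[i+1]
    A = (np.diag(np.full(m, di))
         + np.diag(np.full(m - 1, lo), -1)
         + np.diag(np.full(m - 1, up), 1))
    rhs = np.zeros(m)
    rhs[-1] = -up * 1.0                  # known boundary value y(1) = 1 moved to the RHS
    y = np.empty(n)
    y[0], y[-1] = 0.0, 1.0
    y[1:-1] = np.linalg.solve(A, rhs)
    return x, y

x, y = solve_bvp_fd(0.1)
exact = (1.0 - np.exp(-x / 0.1)) / (1.0 - np.exp(-1.0 / 0.1))
print(np.max(np.abs(y - exact)))  # small for eps = 0.1; eps = 0.01 needs a finer mesh
```

The adaptive meshing in d02gbc is exactly what this naive uniform-mesh version lacks: for $\epsilon = 10^{-2}$ the boundary layer at $x = 0$ demands locally refined points.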
https://linux-blog.anracom.com/tag/update-plots/
[ "# Matplotlib, Jupyter and updating multiple interactive plots\n\nFor experiments in Machine Learning [ML] it is quite useful to see the development of some characteristic quantities during optimization processes for algorithms - e.g. the behaviour of the cost function during the training of Artificial Neural Networks. Beginners in Python then look for an option to continuously update plots by interactively changing or extending data from a running Python code.\n\nDoes Matplotlib offer an option for interactively updating plots? In a Jupyter notebook? Yes, it does. It is even possible to update multiple plot areas simultaneously. The magic (meta) commands are \"%matplotlib notebook\" and \"matplotlib.pyplot.ion()\".\n\nThe following code for a Jupyter cell demonstrates the basic principles. I hope it is useful for other ML and Python beginners like me.\n\n```# Tests for dynamic plot updates\n#-------------------------------\n%matplotlib notebook\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\n\nx = np.linspace(0, 10*np.pi, 100)\ny = np.sin(x)\n\n# The really important command for interactive plot updating\nplt.ion()\n\n# sizing of the plot figures\nfig_size = plt.rcParams[\"figure.figsize\"]\nfig_size[0] = 8\nfig_size[1] = 3\n\n# Two figures\n# -----------\nfig1 = plt.figure(1)\nfig2 = plt.figure(2)\n\n# first figure with two plot-areas with axes\n# --------------------------------------------\nax1_1 = fig1.add_subplot(121)\nax1_2 = fig1.add_subplot(122)\nfig1.canvas.draw()\n\n# second figure with just one plot area with axes\n# -------------------------------------------------\nax2 = fig2.add_subplot(111)\nline1, = ax2.plot(x, y, 'b-')\nfig2.canvas.draw()\n\nz = 32\nb = np.zeros(1)\nc = np.zeros(1)\nc[0] = 1000\n\nfor i in range(z):\n    # update data\n    phase = np.pi / z * i\n    line1.set_ydata(np.sin(0.5 * x + phase))\n    b = np.append(b, [i**2])\n    c = np.append(c, [1000.0 - i**2])\n\n    # re-plot area 1 of fig1\n    ax1_1.clear()\n    ax1_1.set_xlim(0, 100)\n    ax1_1.set_ylim(0, 1000)\n    ax1_1.plot(b)\n\n    # re-plot area 2 of fig1\n    ax1_2.clear()\n    ax1_2.set_xlim(0, 100)\n    ax1_2.set_ylim(0, 1000)\n    ax1_2.plot(c)\n\n    # redraw fig 1\n    fig1.canvas.draw()\n\n    # redraw fig 2 with updated data\n    fig2.canvas.draw()\n\n    time.sleep(0.1)\n```\n\nAs you see clearly we defined two different \"figures\" to be plotted - fig1 and fig2. The first figure is split horizontally into two plotting areas with axes \"ax1_1\" and \"ax1_2\". Such a plotting area is created via the \"fig1.add_subplot()\" function and suitable parameters. The second figure contains only one plotting area \"ax2\".\n\nThen we update data for the plots within a loop with a timer of 0.1 secs. We clear the respective areas, redefine the axes and perform the plot for the updated data via the function \"plt.figure.canvas.draw()\".\n\nIn our case we see two parabolas develop in the upper figure; the lower figure shows a sinus-wave moving slowly from the right to the left.\n\nThe following plots show screenshots of the output in a Jupyter notebook in the middle of the loop and at its end:", null, "You see that we can deal with 3 plots at the same time. Try it yourself!\n\nHint:\nThere is a small problem with the plot sizing when you have used the zoom-functionality of Chrome, Chromium or Firefox. You should work with interactive plots with the browser-zoom set to 100%." ]
[ null, "https://linux-blog.anracom.com/wp-content/uploads/2019/12/interactive_plots_1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7491261,"math_prob":0.9462893,"size":3170,"snap":"2021-31-2021-39","text_gpt3_token_len":801,"char_repetition_ratio":0.14339861,"word_repetition_ratio":0.0,"special_character_ratio":0.31735015,"punctuation_ratio":0.14169382,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99678475,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T13:58:47Z\",\"WARC-Record-ID\":\"<urn:uuid:3cea313b-b546-4bf1-8cc9-7d1103930f3c>\",\"Content-Length\":\"45637\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e2a2a75-c868-4952-afc1-e5612d6fe74a>\",\"WARC-Concurrent-To\":\"<urn:uuid:975bb1e2-a274-45c4-a79b-dbbacf4c11ee>\",\"WARC-IP-Address\":\"217.160.0.93\",\"WARC-Target-URI\":\"https://linux-blog.anracom.com/tag/update-plots/\",\"WARC-Payload-Digest\":\"sha1:5SNYAMLHUU2VF4EJW6QVGVAT5CQVLA3C\",\"WARC-Block-Digest\":\"sha1:6CCYLJXX5ATQGFYUO4CWS2RXNHVJC3YT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057039.7_warc_CC-MAIN-20210920131052-20210920161052-00674.warc.gz\"}"}
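An aside not from the blog post: in plain scripts (outside Jupyter) the same effect is usually achieved with matplotlib's FuncAnimation, which owns the timer/redraw loop itself. A sketch animating only the sine figure; the Agg backend is chosen here purely so the snippet also runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for portability; pick a GUI backend for live display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

x = np.linspace(0, 10 * np.pi, 100)
fig, ax = plt.subplots(figsize=(8, 3))
line, = ax.plot(x, np.sin(x), 'b-')

z = 32
def update(i):
    # same phase shift per frame as the manual loop in the post
    line.set_ydata(np.sin(0.5 * x + np.pi / z * i))
    return (line,)

ani = FuncAnimation(fig, update, frames=z, interval=100, blit=True)
# plt.show()  # starts the animation's timer loop when a GUI backend is active
```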
https://cdn.programiz.com/java-programming/library/math/sin
[ "", null, "# Java Math sin()\n\n#### The Java Math sin() returns the trigonometric sine of the specified angle.\n\nThe syntax of the `sin()` method is:\n\n``Math.sin(double angle)``\n\nHere, `sin()` is a static method. Hence, we are accessing the method using the class name, `Math`.\n\n## sin() Parameters\n\nThe `sin()` method takes a single parameter.\n\n• angle - angle whose trigonometric sine is to be returned\n\nNote: The value of the angle is in radians.\n\n## sin() Return Value\n\n• returns the trigonometric sine of the specified angle\n• returns NaN if the specified angle is NaN or infinity\n\nNote: If the argument is zero, then the result of the `sin()` method is also zero with the same sign as the argument.\n\n## Example 1: Java Math sin()\n\n``````import java.lang.Math;\n\nclass Main {\npublic static void main(String[] args) {\n\n// create variables in degrees\ndouble a = 30;\ndouble b = 45;\n\n// print the sine value\nSystem.out.println(Math.sin(Math.toRadians(a))); // 0.49999999999999994\nSystem.out.println(Math.sin(Math.toRadians(b))); // 0.7071067811865475\n\n// sin() with 0 as its argument\nSystem.out.println(Math.sin(0.0)); // 0.0\n}\n}``````\n\nIn the above example, we have imported the `java.lang.Math` package. It is a good practice to import the package. Notice the expression,\n\n``Math.sin(Math.toRadians(a))``\n\nHere, we have directly used the class name to call the method. It is because `sin()` is a static method.\n\nNote: We have used the Java Math.toRadians() method to convert all the values into radians. 
It is because as per the official Java documentation, the `sin()` method takes the parameter as radians.\n\n## Example 2: Math sin() Returns NaN\n\n``````import java.lang.Math;\n\nclass Main {\npublic static void main(String[] args) {\n\n// create variable\n// square root of negative number\n// results in not a number (NaN)\ndouble a = Math.sqrt(-5);\n\n// Using Double to implement infinity\ndouble infinity = Double.POSITIVE_INFINITY;\n\n// print the sine value\nSystem.out.println(Math.sin(a)); // NaN\nSystem.out.println(Math.sin(infinity)); // NaN\n}\n}``````\n\nHere, we have created a variable named a.\n\n• Math.sin(a) - returns NaN because square root of a negative number (-5) is not a number\n\nThe `Double.POSITIVE_INFINITY` is a field of `Double` class. It is used to implement infinity in Java.\n\nNote: We have used the Java Math.sqrt() method to compute the square root of a number." ]
[ null, "https://www.facebook.com/tr", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5681008,"math_prob":0.9530941,"size":2190,"snap":"2020-45-2020-50","text_gpt3_token_len":536,"char_repetition_ratio":0.12305581,"word_repetition_ratio":0.09090909,"special_character_ratio":0.2757991,"punctuation_ratio":0.17439294,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997836,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T11:14:09Z\",\"WARC-Record-ID\":\"<urn:uuid:d71587f9-930a-4dd4-9a40-445634c6ce79>\",\"Content-Length\":\"75780\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d40ecc97-37af-4ffb-9ee9-49c023c1a917>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e321e5c-ad38-4739-bb22-03effebb6a4b>\",\"WARC-IP-Address\":\"45.35.205.78\",\"WARC-Target-URI\":\"https://cdn.programiz.com/java-programming/library/math/sin\",\"WARC-Payload-Digest\":\"sha1:VRTRTLMGF33JP75B7YUI7P7PPYBNYEVH\",\"WARC-Block-Digest\":\"sha1:A3M2NHTW7XTC7RTORGAGQ36FUSUUXF4T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191692.20_warc_CC-MAIN-20201127103102-20201127133102-00334.warc.gz\"}"}
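For comparison (my own aside, not from the page): Python's math.sin has the same radians convention, but handles infinity differently — Java's `Math.sin` returns NaN, while Python raises ValueError:

```python
import math

# radians conversion first, exactly as with Java's Math.toRadians()
print(math.sin(math.radians(30)))  # 0.49999999999999994
print(math.sin(math.radians(45)))  # 0.7071067811865475

# NaN propagates, as in Java
print(math.isnan(math.sin(float('nan'))))  # True

# but an infinite argument raises instead of returning NaN
try:
    math.sin(float('inf'))
except ValueError:
    print("ValueError")  # Java's Math.sin would return NaN here
```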
https://psychosystems.org/2018/05/
[ "New features in qgraph 1.5\n\nWritten by Sacha Epskamp.\n\nWhile the developmental version is routinely updated, I update the stable qgraph releases on CRAN less often. The last major update (version 1.4) was 1.5 years ago. After several minor updates since, I have now completed work on a new larger version of the package, version 1.5, which is now available on CRAN. The full list of changes can be read in the NEWS file. In this blog post, I will describe some of the new functionality.\n\nNew conservative GGM estimation algorithms\n\nRecently, there has been some debate on the specificity of EBICglasso in exploratory estimation of Gaussian graphical models (GGM). While EBIC selection of regularized glasso networks works well in retrieving network structures at low sample sizes, Donald Williams and Philippe Rast recently showed that specificity can be lower than expected in dense networks with many small edges, leading to an increase in false positives. These false edges are nearly invisible under default qgraph fading options (also due to regularization), and should not influence typical interpretations of these models. However, some lines of research focus on discovering the smallest edges (e.g., bridge symptoms or environmental edges), and there have been increasing concerns regarding the replicability of such small edges. 
To this end, qgraph 1.5 now includes a warning when a dense network is selected, and includes two new more conservative estimation algorithms: thresholded EBICglasso estimation and unregularized model selection.\n\nThresholded EBICglasso\n\nBased on recent work by Jankova and Van de Geer (2018), a low false positive rate is guaranteed for off-diagonal ($$i \\not= j$$) precision matrix elements (proportional to partial correlation coefficients) $$\\kappa_{ij}$$ for which:\n\n$|\\kappa_{ij}| > \\frac{\\log\\left(p(p-1)/2\\right)}{\\sqrt{n}}.$ The option threshold = TRUE in EBICglasso and qgraph(..., graph = \"glasso\") now employs this thresholding rule by setting edge-weights that are not larger than the threshold to zero, both in the returned final model and in the EBIC computation of all considered models. Preliminary simulations indicate that with this thresholding rule, high specificity is guaranteed for many cases (an exception is the case in which the true model is not in the glassopath, at very high sample-sizes such as $$N > 10{,}000$$). A benefit of this approach over the unregularized option described below is that edge parameters are still regularized, preventing large visual overrepresentations due to sampling error.\n\nThe following codes showcase non-thresholded vs thresholded EBICglasso:\n\nlibrary(\"qgraph\")\nlibrary(\"psych\")\ndata(bfi)\nbfiSub <- bfi[,1:25]\nlayout(t(1:2))\ng1 <- qgraph(cor_auto(bfiSub), graph = \"glasso\", sampleSize = nrow(bfi),\nlayout = \"spring\", theme = \"colorblind\", title = \"EBICglasso\",\ncut = 0)\ng2 <- qgraph(cor_auto(bfiSub), graph = \"glasso\", sampleSize = nrow(bfi),\nthreshold = TRUE, layout = g1$layout, theme = \"colorblind\", title = \"Thresholded EBICglasso\", cut = 0)", null, "While the thresholded graph is much sparser, that does not mean all removed edges are false positives. Many are likely reflecting true edges. 
Unregularized Model Search\n\nWhile the LASSO has mostly been studied in high-dimensional low-sample cases, in many situations research focuses on relatively low-dimensional (e.g., 20 nodes) settings with high sample size (e.g., $$N > 1{,}000$$). To this end, it is arguable whether regularization techniques are really needed. In the particular case of GGMs, one could also use model selection on unregularized models in which some pre-defined edge-weights are set to zero. It has been shown that (extended) Bayesian information criterion (EBIC) selection of such unregularized models selects the true model as $$N$$ grows to $$\\infty$$ (Foygel and Drton, 2010). The new function ggmModSelect now supports model search of unregularized GGM models, using the EBIC exactly as it is computed in the lavaan package. The EBIC hyperparameter is set by default to $$0$$ (BIC selection) rather than $$0.5$$, as preliminary simulations indicate $$\\gamma = 0$$ shows much better sensitivity while retaining high specificity. By default, ggmModSelect will first run the glasso algorithm for $$100$$ different tuning parameters to obtain $$100$$ different network structures. Next, the algorithm refits all those networks without regularization and picks the best. Subsequently, the algorithm adds and removes edges until EBIC can no longer be improved. The full algorithm is:\n\n1. Run glasso to obtain 100 models\n2. Refit all models without regularization\n3. Choose the best according to EBIC\n4. Test all possible models in which one edge is changed (added or removed)\n5. If no edge can be added or changed to improve EBIC, stop here\n6. Change the edge that best improved EBIC, now test all other edges that would have also led to an increase in EBIC again\n7. If no edge can be added or changed to improve EBIC, go to 4, else, go to 6.\n\nWhen stepwise = FALSE, steps 4 to 7 are ignored, and when considerPerStep = \"all\", all edges are considered at every step. 
The following codes showcase the algorithm:\n\nmodSelect_0 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0, nCores = 8)\nmodSelect_0.5 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0.5, nCores = 8)\nlayout(t(1:2))\ng3 <- qgraph(modSelect_0$graph, layout = g1$layout, theme = \"colorblind\",\ntitle = \"ggmModSelect (gamma = 0)\", cut = 0)\ng4 <- qgraph(modSelect_0.5$graph, layout = g1$layout, theme = \"colorblind\",\ntitle = \"ggmModSelect (gamma = 0.5)\", cut = 0)", null, "Note that this algorithm is very slow in higher dimensions (e.g., above 30-40 nodes), in which case only the regular EBICglasso, thresholded EBICglasso, or setting stepwise = FALSE are feasible. Of note, centrality analyses, especially of the more stable strength metric, are hardly impacted by the estimation method:\n\ncentralityPlot(\nlist(EBICglasso=g1,\nEBICglasso_threshold=g2,\nggmModSelect_0 = g3,\nggmModSelect_0.5 = g4\n))", null, "Both thresholded EBICglasso and ggmModSelect are implemented in the development version of bootnet, which will soon be updated on CRAN as well. Preliminary simulations show that both guarantee high specificity, while losing sensitivity. Using ggmModSelect with $$\\gamma = 0$$ (BIC selection) shows better sensitivity and works well in detecting small edges, but is slow when coupled with stepwise model search, which may make bootstrapping hard. I encourage researchers to investigate these and competing methods in large-scale simulation studies.\n\nWhich estimation method to use?\n\nBoth new methods are much more conservative than the EBICglasso, leading to drops in sensitivity and possible misrepresentations of the true sparsity of the network structure. For exploratory hypothesis generation purposes in relatively low sample sizes, the original EBICglasso is likely to be preferred. In higher sample sizes and with a focus on identifying small edges, the conservative methods may be preferred instead. 
There are many more GGM estimation procedures available in other R packages, and detailed simulation studies investigating which estimator works best in which case are now being performed in multiple labs. I have also implemented simulation functions in the developmental version of bootnet to aid in studying these methods, which I will describe in an upcoming blog post.\n\nFlow diagrams\n\nSometimes, researchers are interested in the connectivity of one node in particular, which can be hard to see in the Fruchterman-Reingold algorithm, especially when the connections to that one node are weak. The new flow function, which I developed together with Adela Isvoranu, can be used to place nodes in such a way that connections of one node are clearly visible. The function will place the node of interest to the left, then, in vertical levels, the nodes connected to the node of interest with 1, 2, 3, etcetera edges. Edges between nodes in the same level are displayed as curved edges. For example:\n\nflow(g2, \"N3\", theme = \"colorblind\", vsize = 4)", null, "Expected influence\n\nThe centrality index expected influence is now returned by centrality() and can be plotted using centralityPlot(), although it has to be requested using include. In addition, the plots can now be ordered by one of the indices:\n\ncentralityPlot(g2, include = c(\"Strength\",\"ExpectedInfluence\"),\norderBy = \"ExpectedInfluence\")", null, "Note, however, that the BFI network is not an optimal network to compute expected influence on, as some variables are (arbitrarily) scored negatively. It is best to compute expected influence on a network in which higher scores on all nodes have the same interpretation (e.g., symptoms in which higher = more severe).\n\nFuture developments\n\nAs always, I highly welcome bug reports and code suggestions on Github. 
In addition, I will also update the bootnet package soon and write a separate blog post on its latest additions.

References

Foygel, R., & Drton, M. (2010). Extended Bayesian information criteria for Gaussian graphical models. In Advances in Neural Information Processing Systems (pp. 604-612).

Jankova, J., & van de Geer, S. (2018). Inference for high-dimensional graphical models. In M. Drton, M. Maathuis, S. Lauritzen, & M. Wainwright (Eds.), Handbook of Graphical Models. Boca Raton, FL: CRC Press.
https://www.techsalerator.com/sub-data-categories/confusion-matrix
### Confusion Matrix

1. What is a confusion matrix?
A confusion matrix, also known as an error matrix, is a tabular representation of the performance of a classification model. It summarizes the predictions made by the model on a test dataset and compares them to the actual labels or ground truth values.

2. What are the components of a confusion matrix?
A confusion matrix consists of four components: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). These components represent the number of correctly classified positive instances, correctly classified negative instances, instances that are falsely classified as positive, and instances that are falsely classified as negative, respectively.

3. How is a confusion matrix used?
A confusion matrix provides valuable insights into the performance of a classification model. It allows the calculation of various performance metrics such as accuracy, precision, recall, and F1 score. It helps identify the types of errors made by the model, such as false positives and false negatives, and assesses the model's ability to correctly classify different classes.

4. What metrics can be derived from a confusion matrix?
Several performance metrics can be calculated using a confusion matrix, including accuracy, precision, recall (sensitivity), specificity, F1 score, and the area under the receiver operating characteristic (ROC) curve. These metrics provide different aspects of the model's performance, such as overall correctness, ability to predict positive instances, ability to predict negative instances, and the trade-off between precision and recall.

5. How is a confusion matrix interpreted?
The interpretation of a confusion matrix depends on the specific problem and the desired outcome. Generally, a higher number of true positives and true negatives indicates better model performance. However, the interpretation may vary depending on the relative importance of false positives and false negatives in the specific context. For example, in medical diagnosis, false negatives (missing actual positive cases) may be more critical than false positives.

6. Can a confusion matrix handle multi-class classification?
Yes, a confusion matrix can be extended to handle multi-class classification problems. In this case, the matrix is expanded to include cells representing each class's true positives, true negatives, false positives, and false negatives. The performance metrics derived from the confusion matrix, such as precision and recall, can be calculated for each class individually or summarized using macro- or micro-averaging techniques.

7. What are the limitations of a confusion matrix?
While a confusion matrix provides valuable insights into the performance of a classification model, it has some limitations. It assumes a fixed threshold for classification, which may not be optimal for all scenarios. Additionally, it does not capture the uncertainty associated with predicted probabilities. Further evaluation measures like precision-recall curves or ROC curves may be necessary to fully assess a model's performance.
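As an illustration of questions 2–4, here is a small, dependency-free sketch that tallies the four cells for a binary classifier and derives the usual metrics from them (the example labels are made up):

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Tally (TP, FP, FN, TN) for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 derived from the four cells."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)   # (2, 1, 2, 3)
```

Note the guards against division by zero: precision is undefined when the model never predicts the positive class, which is exactly the fixed-threshold limitation raised in question 7.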
https://en.wikipedia.org/wiki/Credible_interval
Credible interval

In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular subjective probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution. The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals in frequentist statistics, although they differ on a philosophical basis: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.

For example, in an experiment that determines the distribution of possible values of the parameter $\mu$, if the subjective probability that $\mu$ lies between 35 and 45 is 0.95, then $35 \leq \mu \leq 45$ is a 95% credible interval.

Choosing a credible interval

Credible intervals are not unique on a posterior distribution. Methods for defining a suitable credible interval include:

• Choosing the narrowest interval, which for a unimodal distribution will involve choosing those values of highest probability density including the mode. This is sometimes called the highest posterior density interval.
• Choosing the interval where the probability of being below the interval is as likely as being above it. This interval will include the median. This is sometimes called the equal-tailed interval.
• Assuming that the mean exists, choosing the interval for which the mean is the central point.

It is possible to frame the choice of a credible interval within decision theory and, in that context, an optimal interval will always be a highest probability density set.

Contrasts with confidence interval

A frequentist 95% confidence interval means that with a large number of repeated samples, 95% of such calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).

Bayesian credible intervals can be quite different from frequentist confidence intervals for two reasons:

• credible intervals incorporate problem-specific contextual information from the prior distribution whereas confidence intervals are based only on the data;
• credible intervals and confidence intervals treat nuisance parameters in radically different ways.

For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval will coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form $\mathrm{Pr}(x \mid \mu) = f(x - \mu)$), with a prior that is a uniform flat distribution; and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form $\mathrm{Pr}(x \mid s) = f(x/s)$), with a Jeffreys' prior $\mathrm{Pr}(s \mid I) \propto 1/s$, the latter following because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution. But these are distinctly special (albeit important) cases; in general no such equivalence can be made.
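The first two constructions described under "Choosing a credible interval" are easy to compute from posterior draws. A minimal, sample-based sketch (pure Python; the endpoints are only as accurate as the number of draws):

```python
import random

def equal_tailed(samples, mass=0.95):
    """Equal-tailed interval: cut (1 - mass)/2 of the draws off each tail."""
    s = sorted(samples)
    lo = s[int((1 - mass) / 2 * len(s))]
    hi = s[min(int((1 + mass) / 2 * len(s)), len(s) - 1)]
    return lo, hi

def narrowest(samples, mass=0.95):
    """Highest-density interval: the shortest window holding `mass` draws."""
    s = sorted(samples)
    k = max(1, int(mass * len(s)))
    start = min(range(len(s) - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[start], s[start + k - 1]

# 10,000 draws from a N(40, 2.5) "posterior": both intervals should land
# near the analytic 95% interval 40 +/- 1.96 * 2.5 = (35.1, 44.9),
# matching the worked example with mu between 35 and 45.
random.seed(0)
draws = [random.gauss(40.0, 2.5) for _ in range(10_000)]
et = equal_tailed(draws)
hpd = narrowest(draws)
```

For a symmetric unimodal posterior the two intervals nearly coincide; for a skewed posterior the highest-density interval is shorter, which is why it is the optimal choice in the decision-theoretic sense mentioned above.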
https://eprint.iacr.org/2009/621
### On the Analysis of Cryptographic Assumptions in the Generic Ring Model

Tibor Jager and Jörg Schwenk

##### Abstract

The generic ring model considers algorithms that operate on elements of an algebraic ring by performing only the ring operations and without exploiting properties of a given representation of ring elements. It is used to analyze the hardness of computational problems defined over rings. For instance, it is known that breaking RSA is equivalent to factoring in the generic ring model (Aggarwal and Maurer, Eurocrypt 2009). Do hardness results in the generic ring model support the conjecture that solving the considered problem is also hard in the standard model, where elements of $\mathbb{Z}_n$ are represented by integers modulo $n$? We prove in the generic ring model that computing the Jacobi symbol of an integer modulo $n$ is equivalent to factoring. Since there are simple and efficient non-generic algorithms which compute the Jacobi symbol, this provides an example of a natural computational problem which is hard in the generic ring model, but easy to solve if elements of $\mathbb{Z}_n$ are given in their standard representation as integers. Thus, a proof in the generic ring model is unfortunately not a very strong indicator for the hardness of a computational problem in the standard model. Despite this negative result, generic hardness results still provide a lower complexity bound for a large class of algorithms, namely all algorithms solving a computational problem independent of a given representation of ring elements. Thus, from this point of view, results in the generic ring model are still interesting. Motivated by this fact, we show also that solving the quadratic residuosity problem generically is equivalent to factoring.

Note: Revision includes some simplifications and corrections.

Category: Foundations
Publication info: Published elsewhere. Full version of Asiacrypt 2009 paper.
Keywords: generic ring model, analysis of cryptographic assumptions

BibTeX:

@misc{cryptoeprint:2009/621,
  author = {Tibor Jager and Jörg Schwenk},
  title = {On the Analysis of Cryptographic Assumptions in the Generic Ring Model},
  howpublished = {Cryptology ePrint Archive, Paper 2009/621},
  year = {2009},
  note = {\url{https://eprint.iacr.org/2009/621}},
  url = {https://eprint.iacr.org/2009/621}
}
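The "simple and efficient non-generic algorithms" the abstract alludes to are the classical reciprocity-based computations of the Jacobi symbol, which work directly on the integer representation of elements of $\mathbb{Z}_n$ — exactly what a generic-ring algorithm is forbidden to exploit. A sketch of the standard algorithm:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for a positive odd modulus n.

    Uses the factor-two rule and quadratic reciprocity, so it runs in
    O(log a * log n) bit operations without knowing the factorization
    of n -- the representation-dependent shortcut noted in the abstract.
    """
    if n <= 0 or n % 2 == 0:
        raise ValueError("n must be a positive odd integer")
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # gcd(a, n) > 1 gives symbol 0
```

For example, `jacobi(1001, 9907)` evaluates to -1 without ever factoring 9907.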
http://git.uio.no/git/?p=u/mrichter/AliRoot.git;a=blobdiff;f=ITS/ITSReadPlotData.C;h=9a382dbffdd6df3f8b51ea6ba86c1d613c374357;hp=44ac6c73bfec884de58bfce6188a5a3192003569;hb=43c39b722323955064944312fe7d57d8fa091991;hpb=4913f68a5276119b9e72e2686185c767de5b09e7;ds=sidebyside
index 44ac6c73bfec884de58bfce6188a5a3192003569..9a382dbffdd6df3f8b51ea6ba86c1d613c374357 100644
@@ -117,7 +117,7 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 // Detector type: 0 --> SPD, 1 --> SDD, 2 --> SSD.
 // Layer 1,2 --> 0 / Layer 3,4 --> 1 / Layer 5,6 --> 2
-                               dtype = ID / 3;
+                               dtype = (ID - 1) / 2;
 // Once fixed the layer number, the macro calculates the max number
 // for ladder and detector from geometry, and accepts only suitable values.
@@ -159,7 +159,8 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 // Defines the histograms inside the `for' cycle, so they are destroyed at the end
 // of every read sequqnce, in order to mek another withour segmentation faults
-               Text_t msg, xm = 0.0, ym = 0.0;
+               Text_t msg;
+               Float_t xm = 0.0, ym = 0.0, zm = 0.0;
 switch (dtype) {
 case 0: xm = 1.5; zm = 7.0; break;
 case 1: xm = 7.5; zm = 8.0; break;
@@ -178,6 +179,8 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 cout << "No hits in module!" << endl;
 continue;
 }
+               else
+                       cout << "Hits scanned..." << endl;
 for (Int_t i = 0; i < hits; i++) if (!St[i]) hhits->Fill(x[i], z[i]);
@@ -186,6 +189,8 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 cout << "No recpoints in module!" << endl;
 continue;
 }
+               else
+                       cout << "Recpoints scanned..." << endl;
 for (Int_t i = 0; i < recs; i++) hrecs->Fill(x[i], z[i]);
@@ -194,6 +199,8 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 cout << "No digits in module!" << endl;
 //continue;
 }
+               else
+                       cout << "Digits scanned..." << endl;
 for (Int_t i = 0; i < digits; i++) hdigits->Fill(x[i], z[i]);
@@ -233,6 +240,22 @@ Int_t ITSReadPlotData(char *filename = "galice.root", Int_t evNum = 0) {
 legend->Draw();
 viewer->Update();
+
+               Text_t fname, ans;
+               cout << "Do you want to save the current canvas on a file (y/n) ? ";
+               cin >> ans;
+               if(ans == 'y' || ans == 'Y') {
+                  cout << "Enter filename: ";
+                  cin >> fname;
+                  TString *control = new TString(fname);
+                  Bool_t ok = control->Contains(".C") || control->Contains(".root") || control->Contains(".ps") || control->Contains(".eps") || control->Contains(".gif");
+                  if(!ok){
+                     cout << "File extension is not recognized. The canvas will be saved as Postscript file";
+                     strcat(fname, ".ps");
+                  }
+                  viewer->SaveAs(fname);
+               }
 }
 cout << "Done. Goodbye" << endl;
@@ -322,9 +345,16 @@ Int_t GetModuleDigits(TObject *its, Int_t ID, Int_t dtype, Float_t*& X, Float_t*
 // while, if it doesn't, the first thing to do is dimensioning
 // the coordinate and energy loss arrays, and then the loop can start.
+        if(dtype==2){
+           seg->SetLayer(layer);
+        }
+
 if (!digits_num)
 return 0;
 else {
+               cout << "Digits to scan: " << digits_num << endl;
 if (X) delete [] X;
 if (Z) delete [] Z;
 X = new Float_t[digits_num];
@@ -339,13 +369,18 @@ Int_t GetModuleDigits(TObject *its, Int_t ID, Int_t dtype, Float_t*& X, Float_t*
 }
 }
 for (Int_t j = 0; j < digits_num; j++) {
-       cout << j << endl;
 digit = (AliITSdigit*)digits_array->UncheckedAt(j);
 Int_t iz=digit->fCoord1;  // cell number z
 Int_t ix=digit->fCoord2;  // cell number x
 // Get local coordinates of the element (microns)
-               if(dtype < 2)
+               // *************************** PARTE CORRETTA (corrected part) ***************************
+               if(dtype < 2) {
+                       Float_t xx, zz; // aggiunta (added)
+                       seg->DetToLocal(ix, iz, xx, zz);
+                       X[j] = xx; // aggiunta (added)
+                       Z[j] = zz; // aggiunta (added)
+               }
+               // *************************** FINE PARTE CORRETTA (end of corrected part) ***************************
 else {
 // SSD: if iz==0 ---> N side; if iz==1 P side
 if (ssdone[j] == 0) {
@@ -368,17 +403,10 @@ Int_t GetModuleDigits(TObject *its, Int_t ID, Int_t dtype, Float_t*& X, Float_t*
 }
 }
 if (!impaired) seg->GetPadCxz(pstrip, nstrip, X[j], Z[j]);
+                               X[j] /= 10000.0;  // convert microns to cm
+                               Z[j] /= 10000.0;  // convert microns to cm
 }
 }
-               if (dtype == 0) {
-                       // !!!THIS CONVERSION TO HIT LRS SHOULD BE REMOVED AS SOON AS THE CODE IS FIXED
-                       X[j] = X[j]-seg->Dx() / 2.0;
-                       Z[j] = Z[j]-seg->Dz() / 2.0;
-               }
-               if (dtype != 1) {
-                       X[j] /= 10000.0;
-                       Z[j] /= 10000.0;
-               }
 }
 return digits_num;
 }
https://gomathanswerkey.com/texas-go-math-kindergarten-lesson-5-1-answer-key/
# Texas Go Math Kindergarten Lesson 5.1 Answer Key Model and Count 9

Refer to our Texas Go Math Kindergarten Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Kindergarten Lesson 5.1 Answer Key Model and Count 9.

## Texas Go Math Kindergarten Lesson 5.1 Answer Key Model and Count 9

Explore

DIRECTIONS: Model 8 objects. Show one more object. How many are there? Tell a friend how you know. Draw the objects.

Share and Show

DIRECTIONS: 1. Place counters as shown. Count and tell how many counters. 2. 5 are yellow. How many are red? Write the number. 3. Place counters in the ten frame to model nine. Trace the counters. Tell a friend what you know about the number 9.

Question 1.
In the above ten-frame there are 5 yellow counters and 4 red counters.

Question 2.
Explanation:
There are 5 yellow counters and 4 red counters in the above ten-frame, so I wrote the number 4 in the blank.

Question 3.
Explanation:
I drew 9 counters to model the number 9.

DIRECTIONS: 4. Use two-color counters to model the different ways to make 9. Write to show some pairs of numbers that make 9.

Question 4.
Explanation:
I used two-color counters, 7 yellow and 2 red, to model the different ways to make 9, and I wrote some pairs of numbers that make 9: 7 and 2, 6 and 3, 5 and 4, and 4 and 5.

HOME ACTIVITY • Ask your child to show a set of eight objects. Have him or her show one more object and tell how many.

DIRECTIONS: 5. Count the flags in each set. Which sets show nine flags? Circle those sets. 6. Choose the correct answer. Which number does the model show?

Problem Solving

Question 5.
Explanation:
I circled those groups of flags that are 9 in number.

Question 6.
Explanation:
I counted and marked the number 9 as there are 9 counters in the above ten-frame.

### Texas Go Math Kindergarten Lesson 5.1 Homework and Practice Answer Key

DIRECTIONS: 1. Draw some red and yellow counters to make 9. Write to show the numbers. 2. Draw some red and yellow counters to show another way to make 9. Write to show the numbers.

Question 1.
Explanation:
I drew 3 red and 6 yellow counters to make 9 and wrote the numbers 3 and 6.

Question 2.
Explanation:
I drew 2 red and 7 yellow counters to make 9 and wrote the numbers 2 and 7.

DIRECTIONS: Choose the correct answer. 3-4. Which number does the model show?

Lesson Check

Question 3.
Explanation:
I counted and marked the number 8 as there are 8 counters in the above ten-frame.

Question 4.
https://www.coastalwiki.org/wiki/Sediment_transport_formulas_for_the_coastal_environment
[ "# Sediment transport formulas for the coastal environment\n\n## Introduction\n\nCalculating nearshore sediment transport is a challenge due to the complexity of the hydrodynamics and the variety of the governing phenomena. Indeed, it is very difficult to estimate sediment fluxes on beaches due to the combination of steady flows (currents) and oscillatory flows (waves). Moreover, many other effects should be integrated, such as the variations in mean water level (tide, set-up, set-down), breaking wave effects (turbulence, undertow), and topographic influence (mean slope and bed forms). Furthermore, these parameters induce different types of transport (bed load, suspended load and sheet flow), with very different physical implications for the movement of sand. Most of the sediment transport formulas are functions of the bed shear stress and have been developed and calibrated on specific data sets. For example, Bijker and Bailard mainly validated their formulas against field data for littoral drift; Van Rijn or Camenen compared their formulas to a large variety of laboratory and field data; Dibajnia and Ribberink compared and fitted their formulas to experimental flume data, simulating cross-shore dynamics (current opposite to incoming waves), especially for sheet-flow conditions.\n\nA selection of several sediment transport formulas is presented here to model bedload, suspended load and typical phenomena observed on the nearshore such as phase-lag effects in sheet-flow transport.\n\n## Bedload transport under waves and currents\n\n### Bijker (1971) formula\n\nOne of the first sediment transport formulations that is still often used in engineering applications was proposed by Bijker. It is derived from Frijlink's formula for a current alone, with a modification of the bottom shear stress using a wave-current model.
The direction of sediment fluxes is always that of the current since this formula was proposed to estimate longshore transport rate. Bedload transport $q_{sb}$ is expressed such as:\n\n$q_{sb} = C_b \\ d_{50} \\ \\sqrt{\\Large \\frac{\\mu_c \\tau_c}{\\rho}} \\normalsize \\exp\\left( -0.27 \\ \\Large \\frac{(\\rho_s-\\rho)gd}{\\mu_c \\tau_{cw}} \\normalsize \\right) \\qquad (1)$\n\nwhere $d_{50}$ is the median grain size diameter, $h$ the water depth, $C_b$ a breaking wave parameter, $\\mu_c$ a ripple parameter, $\\tau_c$ the skin shear stress due to current only, $\\tau_{cw}$ the shear stress due to wave-current interaction, and $\\rho_s$, $\\rho$ the sediment and water densities, respectively.\n\nThe shear stress due to the wave-current interaction is computed following the method proposed by Bijker introducing waves as a stirring factor:\n\n$\\tau_{cw} = \\left[ 1+0.5 (\\xi_B \\ \\Large \\frac{U_w}{U_c} \\normalsize )^2 \\right] \\ \\tau_{cf} \\qquad (2)$\n\nwith $\\xi_B = \\sqrt{f_{wt}/f_{ct}}$ a parameter due to the wave-current interaction, $f_{wt}$ the total friction coefficient due to waves (including bedform effects), $U_w$ the peak value of the wave orbital velocity at the bottom, $U_c$ the depth-averaged current velocity and $\\tau_{cf}$ the bed shear stress (including form drag).\n\nThe ripple parameter introduced by Bijker is defined by the following equation:\n\n$\\mu_c = \\left(f_{ct}/f_c \\right)^{3/2} \\qquad (3)$\n\nwhere $f_{ct}$ is the total friction coefficient due to current (including bedform effects) and $f_c$ is the skin friction coefficient due to current.\n\nThe breaking wave coefficient is defined by:\n\n$C_b = 2 , \\; H_w/h \\lt 0.05 ; \\quad C_b = 2 + 3 \\ (H_w/h-0.05) , \\; 0.05\\lt H_w/h\\lt 0.4 ; \\quad C_b = 5 , \\; 0.4\\lt H_w/h . 
\\qquad (4)$\n\nwhere $H_w$ is the wave height, and $h$ the water depth.\n\n### Bailard and Inman (1981) formula\n\nBagnold introduced the energetics model in which the main idea is that the sediment flux is proportional to the energy flux $\\Omega$ (local rate of energy dissipation):\n\n$\\Omega = 0.5 \\ \\rho \\ f_{cw} \\ | \\overrightarrow{u(t)} |^3 \\qquad (5)$\n\nwith $f_{cw}$ the friction coefficient due to the wave-current interaction, $\\vec{u(t)}$ the instantaneous velocity vector, $\\vec{u(t)} = \\vec{U_c} + \\vec{u_w(t)}$, $U_c$ the depth-averaged current velocity, and $u_w(t)$ instantaneous wave velocity over the bed.\n\nThe Bailard and Inman formula is derived directly from the Bagnold model. For a horizontal bed, it can ultimately be written as a vector of sediment volume transport:\n\n$\\overrightarrow{q_{sb}} = \\Large \\frac{0.5 \\ f_{cw}}{g \\ (s-1)} \\normalsize \\left( \\Large \\frac{\\epsilon_b}{\\tan\\phi} \\normalsize \\lt \\mid\\vec{u}\\mid^2\\vec{u}\\gt \\right) \\qquad (6)$\n\nwhere $\\epsilon_b$ is the bed load efficiency, $\\phi$ the friction angle of the sediment, $s$ the ratio of sediment and water densities, and $\\lt \\ \\gt$ yields an average over several periods of the wave.\n\nThe bed load efficiency was found slightly different from the one given by Bagnold . Bailard suggested from a calibration with field data that $\\epsilon_b = 0.1$. One difficulty for this formulation is the estimation of the friction coefficient due to the wave-current interaction as Bailard did not specify any expression for this friction factor.\n\n### Van Rijn (1989) formula\n\nThe Van Rijn formula is expressed in the same way as the Bijker formula, as a bed load formula taking into account the influence of waves as a stirring effect. The direction of sediment fluxes is also that of the current. 
Bedload transport can be written as follows:\n\n$q_{sb} = 0.25 \\ d_{50} \\ d_*^{-0.3} \\ (\\tau_{cw}/ \\rho)^{0.5} \\ \\left( \\tau_{cw} / \\tau_{cr} \\ - 1 \\right) \\qquad (7)$\n\nwhere $d_* = [(s-1)g/\\nu^2]^{1/3}d_{50}$ the dimensionless sediment diameter, $\\tau_{cw} = 0.5 \\ \\rho \\ f_{cw} \\ {U_{cw}}^2$ the skin bed shear stress due to current and waves with $f_{cw}=\\alpha \\beta f_c + (1-\\alpha) f_w$ (a weighted skin friction coefficient), $\\alpha = u_c/(u_c+U_w)$, $u_c$ is the current close to the bottom as defined by Van Rijn, $\\beta$ an offset coefficient for bedload, and $\\tau_{cr}$ the critical bed shear stress for inception of movement.\n\nVan Rijn updated his bedload formula. He proposed a new simplified bedload transport formula for steady flow (with or without waves):\n\n$q_{sb} = 0.015 \\ U_c \\ h \\; (\\large \\frac{d_{50}}{h})^{1.2} \\normalsize \\ \\Psi^{1.5} \\qquad (8)$\n\nwhere $\\Psi=(U_e-U_{cr})/\\sqrt{(s-1)gd_{50}}$ is the mobility parameter, $U_e=U_c+\\gamma U_w$ the effective velocity with $\\gamma=0.4$ for irregular waves and $\\gamma=0.8$ for regular waves, $U_{cr}$ the critical effective velocity for inception of movement.\n\n### Ribberink (1998) formula\n\nRibberink proposed a quasi-steady model of bed load transport where the instantaneous solid flux is assumed to be proportional to a function of the difference between the actual time-dependent bed shear stress and the critical bed shear stress (see Fig. 1).
This formulation has been calibrated against several flume data sets including wave-current interaction in a plane regime (suspended load negligible) and field data (unidirectional flows in rivers).\n\nFigure 1: Profile of the time-dependent velocity (a) and bed shear stress (b) in the wave direction (an angle $\\varphi$ exists between the current and wave directions).\n\nThe following expression for the sand transport rate was obtained:\n\n$\\vec{q_{sb}} = m_{Rib} \\ \\sqrt{(s-1) \\ g \\ d_{50}^3} \\ \\lt ( |\\vec{\\theta(t)}|- \\theta_{cr})^{n_{Rib}} \\Large \\frac{\\overrightarrow{\\theta(t)}}{|\\theta(t)|} \\normalsize \\gt \\qquad (9)$\n\nwhere $\\overrightarrow{\\theta(t)} = 0.5 \\ f_{cw} \\ |u(t)|\\overrightarrow{u(t)} \\ / \\ [(s-1) \\ g \\ d_{50}]$ is the time-dependent Shields parameter (cf. Fig. 1) with the instantaneous velocity $\\overrightarrow{u(t)} = \\vec{U_c} + \\overrightarrow{u_w(t)}$ and the wave-current friction factor $f_{cw}$, $\\theta_{cr}$ the critical Shields parameter, $\\lt \\ \\gt$ yields a time-averaging over several wave periods, and $m_{Rib}=11$, $n_{Rib}=1.65$ the adjusted coefficients.\n\nIn the same way as the Bailard formula, an equivalent wave-current friction coefficient has to be computed. Ribberink used the Madsen and Grant model, for which the friction coefficient $f_{cw}$ due to the wave-current interaction is defined as:\n\n$f_{cw}=X_v \\ f_c +(1-X_v) \\ f_w \\qquad(10)$\n\nwith $X_v=|U_c|/(|U_c|+U_w)$.\n\nRibberink also proposed to compute total roughness values as follows:\n\n$k_{st} = {\\rm max} \\left( k_s;d_{50} \\ [1+6 \\ (\\lt |\\theta(t)|\\gt / \\theta_{cr}-1)] \\right) \\qquad (11)$\n\nwhere $k_s$ is the skin roughness height.\n\nFormulas with a similar approach have been suggested by Soulsby and Damgaard and Gonzalez and Madsen.\n\n### Camenen and Larson (2005) formula\n\nCamenen and Larson developed a formula for bed load transport following a similar approach to Ribberink.
They introduce an exponential function for the effect of inception of motion following the probabilistic approach introduced by Einstein . The bed load transport $q_{sb}$ may be expressed as follows:\n\n$q_{sbw} = a_w \\ \\sqrt{(s-1)g \\ {d_{50}}^3} \\ \\sqrt{\\theta_{cw,net}} \\ \\theta_{cw,m} \\ \\exp\\left( -b \\Large \\frac{\\theta_{cr}}{\\theta_{cw}} \\normalsize \\right) ,$\n\n$q_{sbn} = a_n \\ \\sqrt{(s-1)g \\ {d_{50}}^3} \\ \\sqrt{\\theta_{cn}} \\ \\theta_{cw,m} \\ \\exp\\left( -b \\Large \\frac{\\theta_{cr}}{\\theta_{cw}} \\normalsize \\right) . \\qquad (12)$\n\nwhere the subscripts $w$ and $n$ correspond, respectively, to the wave direction and the direction normal to the wave direction and $a_w$ $a_n$ and $b$ are empirical coefficients. $\\theta_{cw,m}$ is the mean Shields parameter and $\\theta_{cw}$ the maximum Shields parameter due to wave-current interaction, and $\\theta_{cn}= \\frac{1}{2} f_c (U_c \\sin\\varphi)^2 / ((s-1)g d_{50})$. In order to simplify the calculations, the mean and maximum Shields parameter due to wave-current interaction is obtained by straightforward addition:\n\n$\\theta_{cw,m} = ( {\\theta_c}^2 + {\\theta_{w,m}}^2 + 2 \\theta_{w,m} \\theta_c \\cos\\varphi)^{1/2}$\n\n$\\theta_{cw} = ( {\\theta_c}^2 + {\\theta_w}^2 + 2 \\theta_w \\theta_c \\cos\\varphi)^{1/2}$,\n\nrespectively, where $\\theta_c$, $\\theta_{wm}$, and $\\theta_w$ are the current, mean wave, and maximum wave Shields number, and $\\theta_{w,m}=0.5 \\theta_w$ for a sinusoidal wave profile.\n\nThe net sediment transporting velocity $\\theta_{cw,net}$ in Eq. 12 is given by,\n\n$\\theta_{cw,net} = \\theta_{cw,on}+\\theta_{cw,off} \\qquad (13)$\n\nwhere $\\theta_{cw,on}$ and $\\theta_{cw,off}$ are the mean values of the instantaneous shear stress over the two half periods $T_{wc}$ and $T_{wt}$ ($T_w=T_{wc}+T_{wt}$, in which $T_w$ is the wave period) defined as follows (see Fig. 
1),\n\n$\\theta_{cw,on} = \\Large \\frac{1}{2T_{wc}} \\int_0^{T_{wc}} \\frac{f_{cw} (u_w(t)+U_c\\cos\\varphi)^2}{(s-1)g d_{50}} \\normalsize dt ,$\n\n$\\theta_{cw,off} = \\Large \\frac{1}{2T_{wt}} \\int_{T_{wc}}^{T_w} \\frac{f_{cw} (u_w(t)+U_c\\cos\\varphi)^2}{(s-1)g d_{50}} \\normalsize dt \\qquad (14)$\n\nwhere $u_w(t)$ is the instantaneous wave orbital velocity.\n\nBased on comparison with an extensive data set, the following relationship is proposed for the transport coefficient $a_w$:\n\n$a_w = 6 + 6 \\ X_t \\qquad (15)$\n\nin which $X_t = \\theta_c/(\\theta_c + \\theta_w)$, where $\\theta_c$ and $\\theta_w$ are the Shields parameter for current and waves, respectively. The coefficient perpendicular to the waves, where only the current transports sediment, is set to $a_n = 12$, and the coefficient in the term describing initiation of motion is $b = 4.5$.\n\n## Sheet-flow transport\n\nSheet-flow sediment transport refers to transport of sandy sediments as a fluidized thin surface layer (thickness of ten to a few tens of grains). It is a form of bedload transport. Since the experiments of Dibajnia and Watanabe and Ribberink and Al Salem, it has been recognized that for intense sheet-flow transport, net bedload in the wave direction could be reduced or even switched to the direction opposite to the waves due to the phase-lag between sediment concentrations and fluid velocities. Several authors have attempted to model these effects.\n\n### Dibajnia and Watanabe (1992) formula\n\nThe sediment transport formulation of Dibajnia and Watanabe is the first one to include phase-lag effects. Similar to the Bailard and Ribberink models, it breaks down the sediment transport into two half-cycles due to the presence of waves (see Fig. 2). During the first half-cycle, sediment moves in the direction of the wave, just as it moves in the opposite direction during the second half-cycle.
An interesting aspect of the formula is that it takes into account a possible quantity of sand still in suspension after each half-cycle, and hence moving in the other direction. This formula enables transport under a non-linear wave to be described.\n\nFigure 2: Bottom velocity profile in the direction of the wave propagation.\n\nThe solid volume flux is given by the following equation:\n\n$\\vec{q_s} = A_{dw} \\ W_s \\ d \\ \\Large \\frac{\\vec{\\Gamma}}{\\Gamma} \\normalsize \\ \\Gamma^{B_{dw}} \\qquad (16)$\n\nwith $A_{dw} = 0.001$ and $B_{dw} = 0.55$ the calibration coefficients, and\n\n$\\vec{\\Gamma} = \\Large \\frac{ T_{wc} \\ \\vec{u_{wc}} \\ ( \\Omega_c^{\\ 3} + \\Omega_t^{'3} ) + T_{wt} \\ \\vec{u_{wt}} \\ ( \\Omega_t^{\\ 3} + \\Omega_c^{'3} )} {(u_{wc} + u_{wt}) \\ T_w} \\normalsize , \\qquad (17)$\n\nwhere $T_w$, $T_{wc}$, $T_{wt}$ are the period and half-periods of the wave taking into account the effect of a current (cf. Fig. 2); $\\Omega_c$, $\\Omega_t$ are the amount of sand entrained and settled during the half-period $T_{wc}$ and $T_{wt}$, respectively; $\\Omega'_c$, $\\Omega'_t$ are the amount of suspended sand remaining from the positive and the negative half-cycle, respectively. $u_{wc}^2$ and $u_{wt}^2$ are the average quadratic velocities (wave + current) over each half-period, expressed as:\n\n${u_{wj}}^2 = \\Large \\frac{2}{T_{wj}}\\int_t^{t+T_{wj}} \\normalsize u^2(t) \\ dt + 2 \\ {U_c}^2 \\ sin^2\\varphi \\qquad (18)$\n\nwhere $j$ can be $c$ or $t$, $u(t) = U_c \\ \\cos\\varphi + u_w(t)$.
$u_w(t)$ is the instantaneous wave orbital velocity, and $\\varphi$ the angle between wave direction and current direction.\n\nIf $\\omega_j \\leq \\omega_{cr}$ then $\\Omega_j = \\omega_j \\ \\Large \\frac{2 \\ W_s \\ T_{wj}}{d} \\normalsize$ and $\\Omega'_j=0,$\n\nIf $\\omega_j \\gt \\omega_{cr}$ then $\\Omega_j = \\Large \\frac{2 \\ W_s \\ T_{wj}}{d} \\normalsize$ and $\\Omega'_j= (\\omega_j-1) \\ \\Large \\frac{2 \\ W_s \\ T_{wj}}{d} \\normalsize , \\qquad (19)$\n\nwith:\n\n$\\omega_j = \\Large \\frac{{u_{wj}}^2}{2 \\ (s-1) \\ g \\ W_s \\ T_{wj}} , \\normalsize \\qquad (20)$\n\nwhere $j$ can be $c$ or $t$.\n\n$\\omega_{cr}$ is a ripple parameter defined as:\n\n$\\omega_{cr} = 0.03 \\;$ if $\\; \\theta_{cw(max)} \\leq 0.2 ;$\n\n$\\omega_{cr} = 1-0.97 \\ [1- 6.25 \\ (\\theta_{cw(max)}-0.2)^2 ]^{0.5} \\;$ if $\\; 0.2 \\lt \\theta_{cw(max)} \\lt 0.6 ;$\n\n$\\omega_{cr} = 1 \\;$ if $\\; 0.6 \\lt \\theta_{cw(max)} \\qquad (21)$\n\nwhere $\\theta_{cw(max)}$ is the maximum Shields parameter due to the wave-current interaction.\n\nSeveral authors developed a model based on the work of Dibajnia and Watanabe by introducing the effects of acceleration and by introducing the Shields parameter in Eq. 17.\n\n### Camenen and Larson (2006) formula\n\nFigure 3: Schematic view of the instantaneous velocity and acceleration variation for a bore over a wave period and in the direction of the waves.\n\nFollowing the approach proposed by Dibajnia and Watanabe, Camenen and Larson introduced a parameter in Eq. 13 of the bedload formula to take into account phase-lag effects in bedload transport. This modification was eventually extended to acceleration effects. The net sediment transporting velocity $\\theta_{cw,net}$ in Eq.
12 is then given by,\n\n$\\theta_{cw,net} = (1-\\alpha_{pl,b})(1+\\alpha_a)\\theta_{cw,on}+(1+\\alpha_{pl,b})(1-\\alpha_a)\\theta_{cw,off} \\qquad (22)$\n\nin which $\\alpha_{pl,b} = \\alpha_{onshore} - \\alpha_{offshore}$ and,\n\n$\\alpha_j = \\Large \\frac{\\nu^{0.25} \\ {U_{wj}}^{0.5}}{{W_s} \\ {T_j}^{0.75}} \\normalsize \\exp\\left[ - \\left(\\Large \\frac{U_{w,crsf}}{U_{wj}} \\right)^2 \\normalsize \\right] \\qquad (23)$\n\nwhere $U_{w,crsf}$ is the critical velocity for inception of sheet-flow transport:\n\n$U_{w,crsf} = 8.35 \\ [(s-1) g \\ (d_{50} \\ \\delta_w)^{1/2}]^{1/2} \\ (1 + r_w) \\qquad (24)$\n\nwhere $\\delta_w = \\sqrt{\\nu T_w / \\pi}$ is the Stokes boundary layer thickness, and $r_w$ the wave asymmetry coefficient, $r_w = u_{w,max}/U_w-1$, with $u_{w,max}$ being the maximum wave velocity.\n\nBased on the work by Watanabe and Sato, the coefficient $\\alpha_a$ is given by (see Fig. 3):\n\n$\\alpha_a = \\Large \\frac{1-R_{ac}}{1+R_{ac}} \\normalsize$ with $R_{ac} = T_{ac}/T_{dc} \\qquad (25)$\n\n## Suspended load under waves and currents\n\n### Bijker (1971) formula\n\nBijker related suspended load to bedload, using the bedload transport to set the bed reference concentration.
Suspended load is then estimated using the Einstein integrals:\n\n$q_{ss} = 1.83 \\ q_{sb} \\ \\left( I_1\\ln \\left[ \\Large \\frac{33 h}{\\delta_c} \\normalsize \\right] +I_2 \\right) , \\qquad (26)$\n\nwhere $\\delta_c=100d/h$ is the dimensionless thickness of the bed load layer.\n\nThe Einstein integrals $I_1$ and $I_2$ for the suspended load are given by:\n\n$I_1 = \\Large \\int_{\\delta}^{1} (\\frac{1-y}{y} )^A \\normalsize dy ,$\n\n$I_2 = \\Large \\int_{\\delta}^{1} (\\frac{1-y}{y} )^A \\normalsize \\ln y \\ dy , \\qquad (27)$\n\nwhere $A=\\Large \\frac{W_s}{\\kappa} \\normalsize (\\tau_{cw}/\\rho)^{-1/2}$ is a function determining the rate of the suspension, $\\kappa=0.41$ is the Von Karman constant, and $W_s$ the settling velocity.\n\n### Bailard (1981) formula\n\nFollowing the work of Bailard and Inman, Bailard developed a total load formula, including a specific term for suspended load:\n\n$\\vec{q_{ss}} = \\Large \\frac{0.5 \\ f_{cw}}{g \\ (s-1)} \\normalsize \\left( \\Large \\frac{\\epsilon_s}{W_s} \\normalsize \\lt \\mid\\vec{u}\\mid^3\\vec{u}\\gt \\right) \\qquad (28)$\n\nwhere $\\epsilon_s$ is the suspended load efficiency, and $\\lt \\ \\gt$ yields an average over several periods of the wave.\n\nThe suspended load efficiency coefficient is also slightly different from the one given by Bagnold. Bailard suggested from a calibration with field data that $\\epsilon_s = 0.02$.\n\n### Van Rijn (1989) formula\n\nThe Van Rijn formula for suspended load is obtained by solving the equation for the concentration over depth:\n\n$\\Large \\frac{dc}{dz} \\normalsize = - \\Large \\frac{(1-c)^5 \\ c \\ W_s}{\\epsilon_{scw}} \\normalsize , \\qquad (29)$\n\nwhere $c(z)$ is the mean volume concentration (time averaged) at height $z$, $(1-c)^5$ corresponds to the decrease of the settling velocity due to high concentrations, and $\\epsilon_{scw}$ is the mixing coefficient in case of a wave-current interaction.
Then, integrating sediment fluxes over depth:\n\n$q_{ss} = \\int^h_{z_a} \\overline{u(z)} \\ c(z) \\ dz \\qquad (30)$\n\nwhere $h$ is the water depth, $z_a={\\rm max}(k_{sct},k_{swt})$ the reference level, $k_{sct},k_{swt}$ total roughness values due to current and waves, respectively, and $\\overline{u(z)}$ is the mean velocity (time averaged) at height $z$.\n\nThe reference concentration is estimated at the level $z_a$ based on the Van Rijn bedload formula:\n\n$c_a = 0.015 \\ \\Large \\frac{d_{50}}{z_a} \\normalsize \\ d_{*}^{-0.3} \\ (\\tau_{cw} / \\tau_{cr} \\ -1 )^{1.5} . \\qquad (31)$\n\nThe sediment diffusion coefficient for a wave-current interaction is given by:\n\n$\\epsilon_{scw}(z) = [\\epsilon_{sc}(z)^2+\\epsilon_{sw}(z)^2]^{1/2} \\qquad (32)$\n\nwith:\n\n$\\epsilon_{sc}(z) = \\epsilon_{sc,max} = 0.25 \\kappa \\beta_s u_* h \\;$ if $\\; z \\gt h/2$ ,\n\n$\\epsilon_{sc}(z) = \\epsilon_{sc,max} \\ \\left[1-\\left(1-2 \\Large \\frac{z}{h} \\right)^2 \\normalsize \\right] \\;$ if $\\; z \\leq h/2 \\qquad (33)$\n\nwhere $\\beta_s={\\rm min}(1.5,1+2(W_s/u_*)^2)$, and $u_*=\\sqrt{\\tau_{cw}/\\rho}$ is the shear velocity, and\n\n$\\epsilon_{sw}(z) = \\epsilon_{sw,b} = 0.004 \\ a_{br} \\ d_* \\ \\delta_s \\ U_w \\;$ if $\\; z \\leq \\delta_s ,$\n\n$\\epsilon_{sw}(z) = \\epsilon_{sw,max} = 0.035 \\ a_{br} \\ h \\Large \\frac{H_w}{T_w} \\normalsize \\;$ if $\\; z \\gt h/2 ,$\n\n$\\epsilon_{sw}(z) = \\epsilon_{sw,b}+(\\epsilon_{sw,max}-\\epsilon_{sw,b}) \\ \\Large \\frac{z-\\delta_s}{h/2-\\delta_s} \\normalsize \\;$ if $\\; \\delta_s \\lt z \\leq h/2 \\qquad (34)$\n\nwith $\\delta_s=0.3 h (H_w/h)^{0.5}$ the thickness of the boundary layer, and $a_{br}={\\rm max}(3 H_w/h-0.8,1)$ an empirical coefficient.
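Equations 32-34 translate almost line for line into code. The sketch below is illustrative only (it is not part of the original article); argument names mirror the symbols defined above, and all inputs are assumed to be known:

```python
def epsilon_scw(z, h, u_star, W_s, H_w, T_w, U_w, d_star, kappa=0.41):
    """Van Rijn's wave-current sediment mixing coefficient at height z
    above the bed (Eqs. 32-34). Illustrative sketch only."""
    # Current-related mixing, Eq. 33: parabolic below mid-depth,
    # constant above it.
    beta_s = min(1.5, 1.0 + 2.0 * (W_s / u_star) ** 2)
    eps_c_max = 0.25 * kappa * beta_s * u_star * h
    if z > h / 2:
        eps_c = eps_c_max
    else:
        eps_c = eps_c_max * (1.0 - (1.0 - 2.0 * z / h) ** 2)

    # Wave-related mixing, Eq. 34: three-layer profile with a linear
    # transition between the near-bed layer and the upper half-depth.
    a_br = max(3.0 * H_w / h - 0.8, 1.0)
    delta_s = 0.3 * h * (H_w / h) ** 0.5
    eps_w_b = 0.004 * a_br * d_star * delta_s * U_w
    eps_w_max = 0.035 * a_br * h * H_w / T_w
    if z <= delta_s:
        eps_w = eps_w_b
    elif z > h / 2:
        eps_w = eps_w_max
    else:
        eps_w = eps_w_b + (eps_w_max - eps_w_b) * (z - delta_s) / (h / 2 - delta_s)

    # Combined coefficient, Eq. 32.
    return (eps_c ** 2 + eps_w ** 2) ** 0.5
```

Note that both contributions are constant above mid-depth, so the combined coefficient is depth-independent in the upper half of the water column.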
The estimation of the time-averaged velocity is based on the logarithmic velocity profile:\n\n$\\overline{u(z)} = U_c \\ \\Large \\frac{\\log (30 \\delta_w/k_a)}{\\log(30 h/k_a)-1} \\ \\frac{\\log(30z/k_{sc})}{\\log(30\\delta_w/k_{sc})-1} \\; \\normalsize$ if $\\; z \\leq \\delta_w ,$\n\n$\\overline{u(z)} = U_c \\ \\Large \\frac{\\log(30z/k_a)}{\\log(30h/k_a)-1} \\normalsize \\;$ if $\\; z \\gt \\delta_w \\qquad (35)$\n\nwith $\\delta_w = 0.072 A_w (A_w/k_{sw})^{-0.25}$ the thickness of the wave boundary layer, $A_w=U_w T_w/(2\\pi)$ the wave half-excursion.\n\nVan Rijn updated his suspended-load formula. He proposed a new simplified suspended-load transport formula for steady flow (with or without waves):\n\n$q_{ss} = 0.015 \\ U_c \\ \\Large \\frac{d_{50}}{d_*^{0.6}} \\normalsize \\ \\Psi^{2.0} \\qquad (36)$.\n\n### Camenen and Larson (2008) formula\n\nIn determining the suspended load $q_{ss}$, following the simplified approach by Madsen and Madsen et al., the vertical variation in the horizontal velocity was neglected and an exponential-law profile assumed for the sediment concentration. The suspended sediment load is written (components along the wave direction and perpendicular to it):\n\n$q_{ssw} = U_{cw,net} \\ c_R \\Large \\frac{\\epsilon}{W_s} \\normalsize \\left[ 1 - \\exp \\left( -\\Large \\frac{W_s h}{\\epsilon} \\normalsize \\right)\\right] ,$\n\n$q_{ssn} = U_c \\sin\\varphi \\ c_R \\Large \\frac{\\epsilon}{W_s} \\normalsize \\left[ 1 - \\exp \\left( -\\Large \\frac{W_s h}{\\epsilon} \\normalsize \\right)\\right] \\qquad (37)$\n\nwhere $h$ is the water depth, $U_{cw,net}$ is the net mean current after a wave period, $c_R$ the reference concentration at the bottom, $W_s$ the sediment fall speed, and $\\epsilon$ the sediment diffusivity. In solving the integral, the ratio $W_s h / \\epsilon$ may often be assumed large, implying that the exponential term is close to zero.
However, the assumption that integrating to infinity or to $h$ produces about the same result may not be valid when strong mixing due to wave breaking is present.\n\nThe bed reference concentration is obtained from\n\n$c_R = A_{cR} \\ \\theta_{cw,m} \\ \\exp \\left( -4.5 \\ \\Large \\frac{\\theta_{cr}}{\\theta_{cw}} \\normalsize \\right) \\qquad (38)$\n\nin which the coefficient $A_{cR}$ is given by\n\n$A_{cR} = 1.5 \\times 10^{-3} \\exp (-0.2 d_*) \\qquad (39)$\n\nwhere $d_*=[(s-1)g/\\nu^2]^{1/3} \\ d_{50}$ is the dimensionless grain size.\n\nThe sediment diffusivity is related to the energy dissipation,\n\n$\\epsilon = h (D /\\rho)^{1/3} \\qquad (40)$\n\nin which $D$ is the total effective dissipation expressed as\n\n$D = {k_b}^3 \\ D_b + {k_c}^3 \\ D_c + {k_w}^3 \\ D_w \\qquad (41)$\n\nwhere the energy dissipation from wave breaking ($D_b$) and from bottom friction due to current ($D_c$) and waves ($D_w$) were simply added, and $k_b$, $k_c$ and $k_w$ are coefficients. The coefficient $k_b$ corresponds to an efficiency coefficient ($k_b=0.010$), whereas $k_c$ and $k_w$ are related to the Schmidt number. Assuming a parabolic profile for the vertical sediment diffusivity, its mean value over the depth (for a steady current or waves, respectively) may be written as follows:\n\n$\\epsilon_j = h (D_j / \\rho)^{1/3} = k_j \\ \\kappa \\ u_{*j} \\ h \\qquad (42)$\n\nwhere $k_j$ is a function of the Schmidt number $\\sigma_j$, i.e. the ratio between the vertical eddy diffusivity of particles and the vertical eddy viscosity of water. $u_{*j}$ is the shear velocity due to current or waves only, with subscript $j$ taking on the values $c$ (current) or $w$ (waves), respectively. In case of a steady current, $k_c =\\sigma_c/6 \\kappa$ whereas for waves $k_w =\\pi \\sigma_w /3 \\kappa$.
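Once the Shields parameters and the diffusivity $\epsilon$ are known, Eqs. 37-39 chain into a short computation. The following sketch (illustrative only; all arguments are treated as known inputs, and only the wave-direction component $q_{ssw}$ is evaluated) goes from the reference concentration to the depth-integrated suspended load:

```python
import math

def suspended_load_cl(U_net, h, W_s, eps, theta_cwm, theta_cw, theta_cr, d_star):
    """Camenen & Larson suspended load in the wave direction
    (Eqs. 37-39), sketched for illustration."""
    A_cR = 1.5e-3 * math.exp(-0.2 * d_star)                        # Eq. 39
    c_R = A_cR * theta_cwm * math.exp(-4.5 * theta_cr / theta_cw)  # Eq. 38
    # Eq. 37: exponential concentration profile integrated over depth.
    return U_net * c_R * (eps / W_s) * (1.0 - math.exp(-W_s * h / eps))
```

The result is linear in the net transporting velocity, so doubling $U_{cw,net}$ doubles the load while the concentration-profile factor is unchanged.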
The following expression was developed for the Schmidt number:\n\n$\\sigma_j = A_1 + A_2 \\ \\sin^{2} \\left( \\Large \\frac{\\pi}{2} \\frac{W_s}{u_{*j}} \\right) \\; \\normalsize$ if $\\; W_s/u_{*j} \\leq 1 ,$\n\n$\\sigma_j = 1 + (A_1+A_2-1) \\ \\sin^{2} \\left( \\Large \\frac{\\pi}{2} \\frac{u_{*j}}{W_s} \\right) \\normalsize \\;$ if $\\; W_s/u_{*j} \\gt 1 \\qquad (43)$\n\nwhere $j$ is a subscript equal to $c$ or $w$. $A_{c1}=0.4$, $A_{c2}=3.5$, $A_{w1}=0.15$ and $A_{w2}=1.5$. Recent measurements in large rivers showed however that $\\sigma_c$ may be overestimated using Eq. 43 for large water depth . For wave-current interaction, a weighted value is employed for the Schmidt number:\n\n$\\sigma_{cw}=X_t \\ \\sigma_c +(1-X_t) \\ \\sigma_w \\qquad (44) .$\n\nThe net mean current is defined in a similar way to the net Shields parameter for the bed load in order to take into account a possible sediment transport due to wave asymmetry, as well as a possible phase-lag effects on the suspended concentration,\n\n$U_{cw,net} = (1-\\alpha_{pl,s})U_{cw,on}+(1+\\alpha_{pl,s})U_{cw,off} \\qquad (45)$\n\nwhere $\\alpha_{pl,s}$ is the coefficient describing phase-lag effects on the suspended load, and $U_{cw,j}$ is the root-mean-square value of the velocity (wave+current) over the half period $T_{wj}$, where the subscript $j$ should be replaced either by $on$ (onshore) or $off$ (offshore) (see also Fig. 1) according to:\n\n$U_{cw,on} = [\\Large \\frac{1}{T_{wc}} \\int_0^{T_{wc}} \\normalsize (u_w(t)+U_c \\cos\\varphi)^2 dt ]^{1/2},$\n\n$U_{cw,off} = [\\Large \\frac{1}{T_{wt}} \\int_{T_{wc}}^{T_w} \\normalsize (u_w(t)+U_c\\cos\\varphi)^2 dt]^{1/2}. 
\\qquad (46)$\n\nIn case of a steady current $U_{cw,net}=U_c$.\n\n## Related articles\n\nSand transport\nSediment deposition and erosion processes\nCoastal Hydrodynamics And Transport Processes\nLittoral drift and shoreline modelling\nCoastal and marine sediments\nDefinitions, processes and models in morphology\nManual Sediment Transport Measurements in Rivers, Estuaries and Coastal Seas\nProcess-based morphological models" ]
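As a closing illustration of how compact these parameterizations are in practice, the simplified Van Rijn bedload formula (Eq. 8) is sketched below. This is not part of the original article; the critical velocity $U_{cr}$ is passed in as a fixed placeholder value, whereas in applications it depends on grain size and water depth:

```python
import math

def bedload_van_rijn_2007(U_c, U_w, h, d50, U_cr=0.40, gamma=0.4, s=2.65, g=9.81):
    """Simplified Van Rijn bedload transport (Eq. 8), illustrative
    sketch. U_cr is a placeholder; gamma = 0.4 assumes irregular waves."""
    U_e = U_c + gamma * U_w                                      # effective velocity
    psi = max(U_e - U_cr, 0.0) / math.sqrt((s - 1.0) * g * d50)  # mobility parameter
    return 0.015 * U_c * h * (d50 / h) ** 1.2 * psi ** 1.5       # volumetric rate per unit width
```

Clamping the mobility parameter at zero reproduces the threshold behaviour: below the critical effective velocity no bedload is predicted.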
https://docs.scipy.org/doc/numpy-1.9.1/reference/generated/numpy.s_.html
[ "# numpy.s_\n\nnumpy.s_ = <numpy.lib.index_tricks.IndexExpression object at 0x41611d4c>\n\nA nicer way to build up index tuples for arrays.\n\nNote\n\nUse one of the two predefined instances index_exp or s_ rather than directly using IndexExpression.\n\nFor any index combination, including slicing and axis insertion, a[indices] is the same as a[np.index_exp[indices]] for any array a. However, np.index_exp[indices] can be used anywhere in Python code and returns a tuple of slice objects that can be used in the construction of complex index expressions.\n\nParameters\n\nmaketuple : bool\n    If True, always returns a tuple.\n\nindex_exp\nPredefined instance that always returns a tuple: index_exp = IndexExpression(maketuple=True).\ns_\nPredefined instance without tuple conversion: s_ = IndexExpression(maketuple=False).\n\nNotes\n\nYou can do all this with slice() plus a few special objects, but there’s a lot to remember and this version is simpler because it uses the standard array indexing syntax.\n\nExamples\n\n```>>> np.s_[2::2]\nslice(2, None, 2)\n>>> np.index_exp[2::2]\n(slice(2, None, 2),)\n```\n```>>> np.array([0, 1, 2, 3, 4])[np.s_[2::2]]\narray([2, 4])\n```\n\nSee also\n\nnumpy.r_\n\nnumpy.nonzero" ]
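A common pattern (not from the documentation page above, added for illustration): because np.s_ builds slice objects with ordinary indexing syntax, a complex index can be named once and reused, which keeps multi-dimensional indexing readable:

```python
import numpy as np

# Name a reusable index: everything except the one-cell border of a
# 2-D array.
interior = np.s_[1:-1, 1:-1]

a = np.arange(16).reshape(4, 4)
a[interior] = 0   # zero out the interior block; the border is untouched
```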
https://krishnakitchen.org/trayning/application-of-graph-theory-in-real-life-ppt.php
# Application Of Graph Theory In Real Life Ppt

Application of Graph Theory in the real world, by Sanjay Pandey.

Graph theory can be traced back to Euler, who spent most of his life in Russia and Germany and created the first graph to model a real place and situation: the seven bridges of Königsberg. The mathematics of graph theory lies behind the applications people use every day; the internet is one of the largest graphs in existence. Finding the shortest route is an incredibly important part of modern-day life, and graphs offer many features, techniques and flexibility while defining and solving a real-life problem. The strength of graph theory is its power to abstract such a vast array of real problems.

### Some applications of graph theory, combinatorics and number theory

- Route planning as an application of graph theory: in many applications, such as transportation, path finding is searching a graph, which arises naturally as a mathematical model of the observed real network.
- Graph coloring (introduction and applications): a now-classical concept in graph theory, used in many real-time applications in modern computer science. Scheduling is a typical application of the graph coloring problem; a related exercise is to construct a graph from the given degrees of all vertices.
- Edge coloring: "Real World Applications of Edge Coloring?" (tagged graph-theory, discrete-mathematics, coloring), alongside "What are some applications of loops in real life?"
- Bipartite graphs G = (V1, V2; E): concrete and simple applications, for readers who want a real-life application.
- Application of graph theory in communication networks.
- Complex systems and network theory: techniques for analysing graphs and applying network theory to a system; the real world exhibits a lot of clustering.
- Graph data structures: a few important real-life applications; graph theory is also used to study molecules.
- Directed graphs (Princeton University, Computer Science): digraph search and digraph applications (vertices, edges, financial stock networks and other real-world examples).
- "Digraphs: Theory, Algorithms and Applications": graph theory is a very popular area of discrete mathematics, and this book argues there is a real necessity for such a treatment.
- Sociology and real-time systems: graph theory is widely used in sociology; the choice of data structures matters when programming a real-time system that can be interrupted.
- "Applications of Graph Theory in Human Life": the now-classical application of graph coloring, used in many real-time systems.
- Matrices and their applications: graph theory and the adjacency matrix; applications of matrices in real life.
- Group theory and its applications in robotics, computer vision/graphics and medical imaging, with concrete, real-world applications (including graph theory).
- Fuzzy graph theory applications in real life.
- Emphasizing their application to real-world systems: rocs, a graph theory IDE, and "The Social Life of Routers", a non-technical paper discussing graphs of people.

### Books, notes and lectures

- Tero Harju, "Graph Theory" lecture notes (which cite "Graph Theory with Applications", Macmillan, 1978, and R. Diestel, "Graph Theory").
- "Lecture 09: Basic Graph Theory" (PowerPoint presentation: "Graph Theory - Varying Applications").
- Kenneth Kuttler, "Linear Algebra, Theory and Applications" (Brigham Young University; section 1.3, "The Number Line and Algebra of the Real Numbers").
- Martin Grötschel (http://www.zib.de/groetschel): practice and some new theory.
- Video, 30/04/2013: "Applications of Graphs to real life problems", including "Lecture 11: The Graph Theory Approach for Electrical Circuits".

### Related real-life mathematics

- Applications of topology to real life (12/04/2013): a question asking for real-life uses of topology, with a related graph application.
- How are graphs used in the real world? In the real world, graphs are used to help people quickly understand and use information.
- Real-life straight line graphs game: an activity where students match a description of something in the real world with a straight-line graph.
- "Real Life Applications of Trigonometric Graphs" by Chelsea Kaye Punzalan: trigonometry studies relationships involving lengths and angles of triangles.
- "The Real Life Applications of Probability in Mathematics": another significant application of probability theory in everyday life.
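Several of the snippets above name scheduling as a typical application of graph coloring. As an illustrative sketch (my own, not taken from any of the cited sources; the conflict graph is made up), a greedy coloring assigns each task the smallest "slot" not already used by a conflicting task:

```python
# Greedy graph coloring: give each vertex the smallest color not used by
# any already-colored neighbor. Scheduling reading: vertices are exams,
# edges join exams that share a student, colors are time slots.

def greedy_coloring(adjacency):
    """adjacency: dict mapping each vertex to an iterable of its neighbors."""
    color = {}
    for v in adjacency:                       # visit order affects quality, not correctness
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:                     # smallest free color
            c += 1
        color[v] = c
    return color

# Hypothetical exam conflict graph (symmetric adjacency lists).
conflicts = {
    "math":      ["physics", "chemistry"],
    "physics":   ["math", "biology"],
    "chemistry": ["math"],
    "biology":   ["physics"],
}
slots = greedy_coloring(conflicts)
```

Greedy coloring is not optimal in general, but it always produces a proper coloring: no two conflicting exams share a slot, so the number of distinct colors is a usable schedule length.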
https://dsp.stackexchange.com/questions/36831/hht-and-how-to-plot-the-hilbert-spectrum/36846
# HHT and how to plot the Hilbert spectrum

NOTE: The following is based on an unanswered question posted here: "How to plot the Hilbert Spectrum in Hilbert-Huang transform?", and on the usage of "plot_hht" from the MATLAB File Exchange (https://www.mathworks.com/matlabcentral/fileexchange/19681-hilbert-huang-transform).

From what I understand, "plot_hht" does the following: after applying EMD to a signal, the IMFs of the signal remain. The Hilbert transform is then applied to each IMF, and the resulting phase angles are used to find the instantaneous frequency (with respect to time).

Is it possible to represent these instantaneous frequencies vs. time, with a third dimension for their strength/amplitude?

EDIT P.S.: For clarity, the desired result is a graph that looks like this (https://i.stack.imgur.com/oa4q7.jpg), as taken from an article titled "Detecting position dependent tremor with the Empirical mode decomposition".

• Plu, can you please help with how to plot the curves as you have posted above? I have been trying for over a week, but no success. I even checked the link hilbertspectrum.com, but couldn't locate the file that helps achieve this. – Tahm Aug 17 '17 at 9:42
• Did you figure out how to plot the 2D picture? I am lost... – Annie Liang li Oct 24 '17 at 21:22
• @Tahm I probably won't be able to help you directly, but to get results you first need to do the empirical mode decomposition; then, from each IMF obtained, you use the Hilbert transform to get the instantaneous frequency and amplitude at every point of the IMF. – plu Oct 25 '17 at 22:11
• @AnnieLiangli FYI, it's actually a 3D picture: you've got time, frequency, and amplitude (strength indicated by colours). – plu Oct 25 '17 at 22:12
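To make the "instantaneous frequency plus amplitude" idea concrete, here is a minimal NumPy-only sketch (my own illustration, not the plot_hht code itself): it builds the analytic signal of one IMF via the FFT, then reads off the amplitude envelope and the instantaneous frequency from the phase. The Hilbert-spectrum plot is then just a scatter of time vs. instantaneous frequency, colored by amplitude.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double positives."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

fs = 400.0                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
imf = 1.5 * np.sin(2 * np.pi * 5.0 * t)     # stand-in for one IMF: a 5 Hz tone

z = analytic_signal(imf)
amplitude = np.abs(z)                       # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(z))              # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

# A Hilbert spectrum is then e.g.:
#   plt.scatter(t[1:], inst_freq, c=amplitude[1:])
# repeated for every IMF on the same axes.
```

For a pure tone the interior samples should read back the tone's frequency (5 Hz) and amplitude (1.5); for real IMFs both curves vary over time, which is exactly what the color-coded tremor plot in the question displays.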
https://tutorme.com/tutors/15468/interview/
# Tutor profile: Julia F.

Actuarial Science Major, Economics Minor

## Questions

### Subject: Calculus

Question: find the derivative: y = (x + 1/x)(x - 1/x + 1)

Derivative rules needed for this problem:
- product rule: the derivative of the first factor times the second, plus the first factor times the derivative of the second
- d/dx(x) = 1
- d/dx(x^n) = n·x^(n-1)
- d/dx(c) = 0

1. Take the derivative of the first factor, (x + 1/x): the derivative of x is 1; for 1/x, rewrite it as x^-1, so its derivative is -1·x^(-1-1) = -x^-2 = -1/x^2. Answer: 1 - 1/x^2.
2. Multiply the derivative of the first factor by the second factor: (1 - 1/x^2)(x - 1/x + 1).
3. Take the derivative of the second factor, (x - 1/x + 1): the derivative of x is 1; the derivative of -1/x is +1/x^2 (the minus sign flips the result of the same power-rule step as above); the derivative of the constant 1 is 0. Answer: 1 + 1/x^2.
4. Multiply the first factor by the derivative of the second factor: (x + 1/x)(1 + 1/x^2).
5. Combine the results of parts 2 and 4 for the final answer:
y' = (1 - 1/x^2)(x - 1/x + 1) + (x + 1/x)(1 + 1/x^2)

### Subject: Algebra

Question: multiply out: 3(4x - 3) - 2(3x - 4).

1. Distribute the 3 and the -2. For the 3: 3·4x + 3·(-3). For the -2, don't forget the negative sign: (-2)·3x + (-2)·(-4).
2. Multiply each pair: 3·4x = 12x; 3·(-3) = -9; (-2)·3x = -6x; (-2)·(-4) = 8 (remember, a negative times a negative gives a positive).
3. Combine like terms: 12x - 6x = 6x; -9 + 8 = -1.
4. Put it all together: 6x - 1.

In one line: 3·4x + 3·(-3) + (-2)·3x + (-2)·(-4) = 12x - 9 - 6x + 8 = 6x - 1.

### Subject: Microeconomics

Question: What is the difference between a change in demand and a change in quantity demanded?

A change in demand causes a shift of the demand curve. This shift can be caused by income, the price of related goods, tastes, expectations, or the number of buyers. A change in quantity demanded means there is movement along the demand curve; the only thing that causes this movement is a change in price. For example, if someone starts to make more money they will demand more of some good, causing that good's demand curve to shift to the right. That is the small scale of just one person; in economics a similar process is carried out over many people to determine the market demand curve.
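The product-rule answer above can be sanity-checked numerically. This small standalone check (mine, not part of the original tutorial) compares the claimed derivative against a central finite difference:

```python
# y = (x + 1/x)(x - 1/x + 1); product rule gives y' = f'g + f g'
# with f = x + 1/x (f' = 1 - 1/x^2) and g = x - 1/x + 1 (g' = 1 + 1/x^2).

def y(x):
    return (x + 1/x) * (x - 1/x + 1)

def y_prime(x):
    return (1 - 1/x**2) * (x - 1/x + 1) + (x + 1/x) * (1 + 1/x**2)

def numeric_derivative(f, x, h=1e-6):
    """Central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

Expanding y = x^2 + x + 1/x - 1/x^2 also gives y' = 2x + 1 - 1/x^2 + 2/x^3, so for instance y'(2) = 5 exactly, which the check below confirms.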
https://nl.mathworks.com/matlabcentral/cody/problems/6-select-every-other-element-of-a-vector/solutions/1721191
Cody

# Problem 6. Select every other element of a vector

Solution 1721191, submitted on 5 Feb 2019 by Kristóf Nagy.

### Test Suite

Test 1 — Pass:

    x = rand(1,10);
    actual = everyOther(x);
    expected = x(1:2:length(x));
    assert(isequal(actual, expected))

Test 2 — Pass:

    x = rand(1,100);
    actual = everyOther(x);
    expected = x(1:2:length(x));
    assert(isequal(actual, expected))

Test 3 — Pass:

    x = ['A' 'long' 'time' 'ago' 'in' 'a' 'galaxy' 'far' 'far' 'away'];
    actual = everyOther(x);
    expected = x(1:2:length(x));
    assert(isequal(actual, expected))
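As a side note (mine, not part of the Cody problem), the MATLAB idiom `x(1:2:length(x))` — take elements 1, 3, 5, … starting from the first — has a direct counterpart in Python as a stride-2 slice:

```python
# Python equivalent of MATLAB's x(1:2:end): keep every other element,
# starting with the first. Works on lists, tuples, and strings alike
# (strings match test 3's character-array case).
def every_other(x):
    return x[::2]
```

The slice handles empty inputs and odd/even lengths with no special cases, just as the MATLAB range `1:2:length(x)` does.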
https://physics.stackexchange.com/questions/345507/what-is-the-difference-between-poisitive-and-negative-charge?noredirect=1
[ "What is the difference between positive and negative charge? [duplicate]\n\nThis question already has an answer here:\n\nI would like to know what charge actually IS. Not the 'flow of electrons' charge but the charge because of which protons and electrons attract. I want to know why these attract and what the difference is between them. Why do we put a positive on a proton and a negative on an electron? They are 'positive' and 'negative' but what is the difference?\n\nmarked as duplicate by Emilio Pisanty, Diracology, John Rennie, Jon Custer, AccidentalFourierTransform Jul 14 '17 at 8:13\n\nWhen countless experiments pointed towards the existence of only two types of charge, what would we call them? Maybe just type 1 and type 2, for example? No, that leaves open the possibility that a type 3 might exist. Rather, let's pick some binary terms. Something like plus and minus. Or day and night. Or positive and negative. It doesn't matter which is called what; we can't see the difference anyway. We just make a choice.\n\nThere is no difference between positive and negative charges. The naming could have been opposite, had history been different. The only important thing we know about them is that same types repel and different types attract. That repulsion and attraction phenomenon is hard to explain - it exists, and we don't know why. And we have chosen the name charge to describe these \"things\" that show this phenomenon.\n\n• The +/- notation does great things for such formulas as $F = \\frac{1}{4\\pi\\epsilon}\\frac{q_1q_2}{r^2}$ – Devsman Jul 13 '17 at 17:30\n• @Devsman Sure it does. Luckily the two types fit perfectly with a binary set of force reactions - they point either way depending on type. Very fortunate for a mathematical formula where +/- flips the direction.
– Steeven Jul 13 '17 at 18:14\n\nI glanced through the proposed duplicates, and I want to put in the point of view of an experimentalist.\n\nThere exists an everyday word called \"electricity.\" The root comes from the greek word for amber ηλεκτρον. Amber is a naturally found \"stone\", fossilized tree resin, and its property of attracting stuff was known from ancient times.\n\nAround 585 BC, Thales discovered that if he rubbed amber (ilektron) with a piece of fur, that amber could attract lightweight objects (like feathers) to itself. Thales had discovered the principle of static electricity.\n\nBecause he lacked the tools to investigate further - as did subsequent thinkers and experimenters for more than 2,000 additional years - no one really followed-up on Thales’ ideas until the late-17th and early-18th centuries.\n\nIt is an observational fact that some matter, when rubbed, displays attraction and repulsion. This is two states, and mathematically easily described by assigning a positive sign and a negative sign to the variables eventually used to measure the observed effects.\n\nThat charge is carried by particles was found experimentally in the cathode ray tubes, and the assignment of the charges to particles follows the history of physics from then on. Consistency in assignments is important, but whether the electron was dubbed with a negative charge giving the proton a positive one is just a historical fluke.\n\nThese observations were organized into laws, which were unified in the electromagnetic theory so well modeled with the mathematics of Maxwell's equations. . The quantum mechanical framework of nature is consistent with the macroscopic observations and incorporates the effect in the mathematics.\n\nI just want to stress that physics is about describing observations and data with mathematical models. To do that there are certain postulates, laws, principles that are assumed so that the mathematics fits the observations. 
The existence of two charges is one of the basic observational facts incorporated into the mathematical models of nature, the sign is arbitrary but consistent, and historically it is the electrons that are called negative.\n\n>\n\n• You'll get in trouble for this answer, Anna. I didn't downvote because I recently did the exact same thing (wrote a placeholder answer) for a different question that I knew would be closed (which it was). – David Hammen Jul 13 '17 at 17:05\n• @DavidHammen I find that most of the answers are dominated by going into the theoretical explanations even for very basic stuff, which are obviously an input from experiment so I wanted to put in my two experimentalists's cents. A number of people are platonists at heart , \"mathematics defines physics\" , but imo physics chooses mathematics that fit data. – anna v Jul 13 '17 at 17:46\n• I understand your motivation in wanting to write an experimental-based answer and your motivation for initially writing a placeholder, Anna. Nicely filled in placeholder, by the way. – David Hammen Jul 13 '17 at 18:05" ]
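The sign convention discussed in the comments can be made concrete with Coulomb's law. A short sketch (Python; the SI value k ≈ 8.99×10⁹ N·m²/C² is standard, but the helper name `coulomb_force` is ours): with signed charges, a positive result means repulsion, a negative one attraction, and swapping *both* signs changes nothing — which is exactly why the positive/negative labels are an arbitrary convention.

```python
K = 8.9875e9          # Coulomb constant, N*m^2/C^2
E = 1.602176634e-19   # elementary charge, C

def coulomb_force(q1, q2, r):
    """Signed magnitude of the electrostatic force between two point
    charges: positive = repulsive, negative = attractive. Only the
    *relative* sign of q1 and q2 matters."""
    return K * q1 * q2 / r**2

r = 5.29e-11  # roughly the Bohr radius, m
print(coulomb_force(+E, +E, r) > 0)   # like charges repel
print(coulomb_force(+E, -E, r) < 0)   # unlike charges attract
# Relabelling all charges (flipping both signs) leaves physics unchanged:
print(coulomb_force(+E, -E, r) == coulomb_force(-E, +E, r))
```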
https://mathindemand.com/products/8-ee-assessment
[ "# 8.EE Assessment\n\nIncluded is a pre-test, post-test, and 2 vocabulary quizzes on the 8th grade Common Core Standards Expressions & Equations (8.EE).\n\nStudents will:\n\n1.) Use the law of exponents to simplify expressions\n2.) Solve square roots\n3.) Convert numbers to scientific notation\n4.) Convert numbers to standard form\n5.) Determine the slope, y-intercept, and slope-intercept form of a line\n6.) Solve a system of equations\n7.) Graphing a system of equations\n\nThe vocabulary included is base, exponent, expression, monomial, coefficient, scientific notation, standard form, slope-intercept form, variable, linear, distributive property, system of equations, point of intersection, elimination, substitution, unit rate, slope, and y-intercept.\n\nTotal Pages: 8 (16 including answer key)" ]
https://web2.0calc.com/questions/elimination-system-of-equations
[ "+0\n\n# Elimination system of equations\n\n+1\n270\n1\n\n3x+11y=4\n\n-2x-6y=0\n\nThis is for algebra two and I need to use elimination so idk :/\n\nOct 5, 2017\n\n#1\n0\n\nSolve the following system by elimination:\n{3 x + 11 y = 4 | (equation 1)\n-2 x - 6 y = 0 | (equation 2)\n\nAdd 2/3 × (equation 1) to equation 2:\n{3 x + 11 y = 4 | (equation 1)\n0 x+(4 y)/3 = 8/3 | (equation 2)\n\nMultiply equation 2 by 3/4:\n{3 x + 11 y = 4 | (equation 1)\n0 x+y = 2 | (equation 2)\n\nSubtract 11 × (equation 2) from equation 1:\n{3 x+0 y = -18 | (equation 1)\n0 x+y = 2 | (equation 2)\n\nDivide equation 1 by 3:\n{x+0 y = -6 | (equation 1)\n0 x+y = 2 | (equation 2)\n\nx = -6      and       y = 2\n\nOct 5, 2017" ]
https://www.arxiv-vanity.com/papers/cond-mat/0601453/
[ "# N-dimensional electron in a spherical potential: the large-N limit\n\nAmit K Chattopadhyay Dipartimento di Fisica “Galileo Galilei”, Universita’ di Padova, via Marzolo 8, 35131 Padova, Italy\nFebruary 8, 2021\n###### Abstract\n\nWe show that the energy levels predicted by a -expansion method for an N-dimensional Hydrogen atom in a spherical potential are always lower than the exact energy levels but monotonically converge towards their exact eigenstates for higher ordered corrections. The technique allows a systematic approach for quantum many body problems in a confined potential and explains the remarkable agreement of such approximate theories when compared with the exact numerical spectrum.\n\nKeywords: -expansion, hyperspherical coordinates.\n\nPACS: 03.65.-w, 73.21.La\n\nA fundamental theoretical problem in the realm of many body physics concerns the technical difficulty in making precise theoretical evaluations of physical observations, even more so in problems involving quantum systems. In most cases, this is compounded by the practical difficulty in establishing a suitable approximation scheme that conjoins simplicity with effectiveness. An interesting observation in connection suggests that an increase in the number of degrees of freedom often simplifies the theoretical analysis witten . A perturbative approach requires at least one dimensionless parameter and if we couple this fact with the previous statement, it effectively implies that as we go on increasing the dimensionality of this parameter, the perturbation analysis becomes more and more simple wilson ; hooft . Often it is found that a problem of inherently quantum mechanical origin can be mapped on to a classical phase space in the limit thereby reducing a quantum problem to a classical one yaffe . In other words, one then has a limit where quantum interference effects simply die out paving the way for a simple classical analysis. 
The excited states for such a system can be obtained as an expansion in around the minimum of the effective classical potential . Such an approach is not at all uncommon in statistical physics Berlin_Kac ; spin in problems which allow for at least a minimum. In many of those cases, the large-N limit has been fruitfully utilized in dealing with equilibrium as well as non-equilibrium problems in classical critical phenomena Ma . In quantum mechanics too, expansion method has a long precedence. Detailed accounts of related applications can be obtained from review articles like the one due to Chatterjee moshe ; aharony ; chatterjee ; cohen as also from more informal narratives like witten ; yaffe . The versatility and flexibility of this technique has allowed it to be used in a range of diverse topics, starting from field theoretic studies in high energy physics gravity ; yang_mills to problems on earthquake dynamics earthquake as well as on problems in colloidal physics colloids .\n\nIn this brief report, we shall use the -expansion method to study the problem of an N-dimensional Hydrogen atom confined in a Harmonic oscillator potential. Although the model is nothing new chatterjee , however our objective here is. We intend to study the efficacy of this expansion method by calculating the energy eigenvalues and showing that to each order of correction, the large-N expansion method always predicts a slightly lower potential as compared to the exact eigenvalue obtained numerically. This is remarkable since this implies a certain monotonicity in these perturbation corrections which tells us that the corrections are always positive, a fact that has often been tacitly assumed in related calculations el-said ; garcia-castellan . We argue that this is the underlying reason which makes this method more dynamic compared to standard perturbation technique which is limited strictly to a weak-coupling regime. 
In a following work, we build on this principle and analytically solve the three-body problem of interacting electrons using an exact Coulomb potential chattopadhyay .\n\nIn the first part of the paper, we review the N-dimensional quantum mechanics of a single electron in a spherical confining potential and then, defining the potential in the relative frame of reference, solve the stationary-state problem using the large-N expansion method. As already stated, we then calculate the energy corrections due to this method for both ground and excited states and show that all higher-ordered corrections have a steady monotonicity that keeps the large-N eigenstate below its exact (meaning experimental) counterpart.\n\nTaking cues from standard literature chatterjee ; moshe , we begin with the Hamiltonian for the center of mass of an N-dimensional electron in a spherical potential\n\n H = \\frac{\\vec p^{\\,2}}{2m_e} + V_N(\\vec r) \\qquad (1)\n\nUsing standardized units (\\hbar = Planck's constant and m_e = mass of the electron, both set to unity), the Hamiltonian can be rewritten as\n\n H = -\\frac{1}{2}\\nabla_N^2 + V_N(\\vec r) \\qquad (2)\n\nwhere the terms have their usual meaning. The potential being radial, this gives the eigenvalue equation\n\n H\\psi(\\vec r) = \\left[-\\frac{1}{2}\\nabla_N^2 + V_N(\\vec r)\\right]\\psi(\\vec r) = E\\psi(\\vec r) \\qquad (3)\n\nFor a system with spherical symmetry, the curvilinear coordinates can be written as follows (a generalisation of the treatment available in arfken )\n\n x_1 = r\\cos\\theta_1\\sin\\theta_2\\sin\\theta_3\\dots\\sin\\theta_{N-1}\n x_2 = r\\sin\\theta_1\\sin\\theta_2\\sin\\theta_3\\dots\\sin\\theta_{N-1}\n x_3 = r\\cos\\theta_2\\sin\\theta_3\\sin\\theta_4\\dots\\sin\\theta_{N-1}\n \\dots\n x_k = r\\cos\\theta_{k-1}\\sin\\theta_k\\sin\\theta_{k+1}\\dots\\sin\\theta_{N-1}\n \\dots\n x_{N-1} = r\\cos\\theta_{N-2}\\sin\\theta_{N-1}\n x_N = r\\cos\\theta_{N-1} \\qquad (4)\n\nwhere r is the radial distance and \\theta_1,\\dots,\\theta_{N-1} are the angles defining the hyper-spherical space, \\theta_1 being the azimuthal angle; \\psi(\\vec r) is the eigenfunction of this system. The above definition can now be used to obtain the radial equation of motion chatterjee\n\n \\left[-\\frac{1}{2}\\left(\\frac{d^2}{dr^2}+\\frac{N-1}{r}\\frac{d}{dr}\\right)+\\frac{l(l+N-2)}{2r^2}+V_N(r)\\right]R(r) = E\\,R(r) \\qquad (5)\n\nwhere the l's are the angular quantum numbers and R(r) is the radial wave function.
Using the transformation , we can now absorb the first derivative in eqn. (6). The reconstructed radial equation of motion is now given by\n\n −12d2Rdr2+k2[(1−1k)(1−3k)8r2+VN(r)k2]u(r)=Eu(r) (6)\n\nIn the above, we have used . At this point, the meaning of the large-N limit turns out to be pretty obvious. It means that (since N is large) encompasses the idea of a stationarity limit for a very heavy classical particle of effective mass where the particle is localized at the point , the point in turn defining the minimum of the classical potential . The ground state energy of such a localized system is given by .\n\nWe now consider a specific form for the potential function and proceed to calculate the higher order corrections in the large-N limit. The model we choose for the purpose is an oscillator with anharmonic fluctuations. The reason for this choice has been accentuated by the observation that such a description, albeit simple, yet is able to reproduce a good estimate for the energy eigenstates johnson_payne1991 when compared with numerical chakraborty as well as with experimental experiment result. For a simple harmonic oscillator which gives , and eventually . One can now add quantum fluctuations and study the behavior of the system close to the classical minimum chatterjee . We go beyond this description in the sense that we consider a finite sized electron instead of a fixed mass and consider fluctuations around the classical stable minimum. To do this we revoke the original radial equation eq. (6) prior to the large-N limit being imposed on it. Using the expansion technique, we now embark on a stepwise evaluation of the energy eigenvalues due to the quantum fluctuations close to the classical minimum. We define the eigenvalue problem as follows\n\n [H0+^V(r)]ψ(→r)=Eψ(→r) (7)\n\nThe ground state eigenvalue equation has already been defined through equation (3) () while is the part of the Hamiltonian that contributes to the quantum fluctuations. 
Taylor’s expansion allows this perturbation Hamiltonian to be represented as\n\n ^V=^V(r0)+(r−r0)^V′(r0)+(r−r0)22!^V′′(r0)+... (8)\n\nwhere the primes denote derivatives with respect to r. Before proceeding any urther, we make a variable transformation from where and transform eq. (7) likewise. In the translated coordinate system, the complete eigenvalue equation is given by\n\n − 12d2udx2+k[(1−4k+3k2)(1−2x√k+3x2k (9) − 4x3k3/2+...)+r02{^V(r0)+r0^V′(r0)(1+x√k) + r022^V′′(r0)(1+x2k+2x√k)...}]u(x) = (Ek)r02u(x)\n\nIn the analysis of the above equation we consider all terms up to and evaluate coefficients for increasing powers of starting with . A little rearranging now allows us to rewrite the eigenvalue equation in terms of the variable as follows\n\n [H0+^V(x)]ψ(x)=λψ(x) (10)\n\nwhere\n\n H0 = −12d2dx2+12ω2x2+ϵ0 ^V(x) = 1√k(ϵ1x+ϵ3x3)+1k(ϵ2x2+ϵ4x4) (11) + 1k3/2(δ1x+δ3x3+δ5x5)\n\nwhere and for a harmonic oscillator potential the constants and are given by\n\n ϵ0=k8−12+38k+k264 ϵ1=1,ϵ2=−3/2,ϵ3=16r05^V′′′(r0)−1/2 (12)\n\nHigher-ordered parameters like have non-zero values for anharmonic oscillations. The above description allows us to re-frame an effective classical potential in the large-N limit but now including higher-ordered fluctuations. It has the form\n\n Veff(R)=−12ω2kR2+ϵ0k+1kV(R) (13)\n\nwhere represents some oscillator potential having a minimum at , a point which can be obtained from the relation\n\n ∂∂RVeff(R)|R=R0=0 (14)\n\nDefining the potential as in eq. (11) and then applying the optimization criterion as in eq. (14), we arrive at the quadratic equation\n\n 3ϵ3R02−√kω2R0+ϵ1=0 (15)\n\nwhich gives the solution . To check the stability at the point , we evaluate the second derivatives and find that the two roots of eq. (15) satisfy the relation\n\n ∂2∂R2Veff(R)|R=R0(±) (16) = −ω2k+6ϵ3k3/2(√kω2±√kω4−12ϵ1ϵ36ϵ3)\n\nThe above result implies that the minima are subject to the restriction . 
An idea of the exactitude of this analysis can be had from an evaluation of the parameters using a simple harmonic oscillator potential. This gives , thereby naturally validating the restriction. The conclusion remains unchanged even after adding higher ordered anharmonic terms to the potential. To leading order in expansions, we now have the large-N expanded energy eigenvalue for the ground state as follows\n\n E=k2ω2r02R0(+)2+√kr02R0(+)(ϵ1+ϵ3R0(+)2)+ϵ0k (17)\n\nwhere . The above expression for energy conclusively proves that even in the presence of fluctuations, large-N expansion gives positive corrections to energy, monotonically approaching the exact value as one scales up the order. We have checked for a range of such higher ordered fluctuations and have found the previous conclusion sacrosanct. A point of some interest here would be the variation of such an approximated energy with respect to the strength of the anharmonic oscillation for a fixed dimension, N=3 say. Fig. 1 shows this variation and evidently tells us that there is a minimum in the curve much as we would expect it to be. The minimum also signifies the fact that the results of the large-N approximation would be best valid close to the minimum, that is between as per Fig. 1.", null, "Figure 1: Variation of non-dimensionalized energy E as in eq.(17) with the oscillator strength ω for N=3. 
The dotted line in the figure shows the minimum around which the large-N approximation gives the best result.\n\nAs a suggestive example, we might look at the next higher modification in the potential which gives rise to the following cubic equation\n\n 4ϵ4R03+3√kϵ3R02+(2ϵ2−kω2)R0+√kϵ1=0 (18)\n\nOnce again, the above equation can be solved analytically using Cardan’s method and it is rather an easy algebraic exercise to show that the energy corrections are still positive.\n\nTo conclude, we have shown using a perturbed anharmonic oscillator potential that a large-N expansion method provides an effective approximation scheme in tackling quantum mechanical problems. This is evident, since the order of corrections as suggested by this method monotonically converges towards the semi-classical limit as . The results offer favorable comparisons with numerical and experimental data and might be used in more complicated quantum many body problems chattopadhyay involving exact interaction potentials.\n\nThe author acknowledges helpful discussions with A. Chatterjee and is grateful to the Marie Curie Foundation, fellowship MIFI-CT-2005-008608, for research support.\n\n## References\n\n• (1) E. Witten, Phys. Today, July, 38 (1980).\n• (2) K. G. Wilson, Phys. Rev. D 7, 2911 (1973).\n• (3) G. t’ Hooft, Nucl. Phys. 72, 461 (1974).\n• (4) L. G. Yaffe, Phys. Today, August, 50 (1983).\n• (5) T. H. Berlin and M. Kac, Phys. Rev. 86, 821 (1952); H. E. Stanley, Phys. Rev. 176, 718 (1968).\n• (6) Y. Sakamoto, H. Mukaida and C. Itoi, Phys. Rev. B72, 144405 (2005).\n• (7) S. K. Ma in Phase Transitions and Critical Phenomena, vol. 6, Edtd. Academic (New York), 1976.\n• (8) M. Moshe and J. Zinn-Justin, Phys. Reps. 385, 69 (2003).\n• (9) O. Aharony, S. S. Gubser, J. Maldacena et al, Phys. Reps. 323, 184 (2000).\n• (10) A. Chatterjee, Phys. Reps. 186, 249 (1990).\n• (11) T. D. Cohen, Rev. Mod. Phys. 68, 599 (1996).\n• (12) F. Canfora, Nucl. Phys. B 731, 389 (2005).\n• (13) O. V. Lunina, Y. 
Mart and A. S. Gladkov, J. Geodyn. 40, 216 (2005).\n• (14) P. Wette, H. J. Schope and T. Palberg, J. Chem. Phys. 123, 174902 (2005).\n• (15) P. Kovtun, M. Unsal and L. G. Yaffe, Phys. Rev. D 72, 105006 (2005); S. Bellucci, C. Sochichiu, Nucl. Phys. B 726, 233 (2005).\n• (16) M. El-Said, Phys. Rev. B 61, 13026 (2000).\n• (17) R. M. G. Garcia-Castellan, W. S. Choe and Y. C. Lee, Phys. Rev. B 57, 9792 (1998).\n• (18) Amit K Chattopadhyay, unpublished.\n• (19) G. Arfken in Mathematical Methods for Physicists, Academic Press (1985).\n• (20) N. F. Johnson and M. C. Payne, Phys. Rev. Lett. 67, 1157 (1991).\n• (21) P. A. Maksym and T. T. Chakraborty, Phys. Rev. Lett. 65, 108 (1990).\n• (22) Ch. Sikorski and U. Merkt, Phys. Rev. Lett. 62, 2164 (1989)." ]
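The quadratic (15) for the classical minimum R0 can be checked numerically. A sketch (Python; the harmonic values ε1 = 1, ε3 = −1/2 follow from eq. (12) with V''' = 0, while the values of k and ω are arbitrary illustrative choices, not taken from the paper):

```python
import math

def r0_roots(eps1, eps3, k, omega):
    """Roots of 3*eps3*R0**2 - sqrt(k)*omega**2*R0 + eps1 = 0 (eq. 15)."""
    a, b, c = 3 * eps3, -math.sqrt(k) * omega**2, eps1
    disc = b * b - 4 * a * c   # = k*omega**4 - 12*eps1*eps3
    s = math.sqrt(disc)
    return (-b + s) / (2 * a), (-b - s) / (2 * a)

eps1, eps3 = 1.0, -0.5   # harmonic-oscillator values from eq. (12)
k, omega = 9.0, 1.0      # illustrative large-N scale and frequency
for R0 in r0_roots(eps1, eps3, k, omega):
    residual = 3 * eps3 * R0**2 - math.sqrt(k) * omega**2 * R0 + eps1
    print(f"R0 = {R0:+.4f}, residual = {residual:.1e}")
# With eps3 < 0 the discriminant k*omega**4 - 12*eps1*eps3 is always
# positive, so two real roots exist, consistent with the restriction
# k*omega**4 >= 12*eps1*eps3 quoted after eq. (16).
```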
https://dev.to/akg1301/exploratory-data-analysis-on-geolocational-data-270j
[ "# Exploratory Data Analysis On Geolocational Data\n\nExploratory data analysis is used by data scientists to analyse and summarize the main characteristics of a dataset, often with data-visualisation methods.\n\nAs a part of the Crio #ibelieveindoing program, I tried some data analysis on geolocational data using Python.\n\n# Introduction\n\nThis project involves the use of K-Means Clustering to find the best accommodation for students in Bangalore (or any other city of your choice) by classifying accommodation for incoming students on the basis of their preferences on amenities, budget and proximity to the location.\n\nImplementing the project will take you through the daily life of a data science engineer - from data preparation on real-life datasets, to visualising the data and running machine learning algorithms, to presenting the results.\n\nFood delivery apps aside, managers of restaurant chains and hotels can also leverage this information. For example, if a manager of a restaurant already knows the demographic of their current customers, they'd ideally want to open at a location where this demographic is at its highest concentration, ensuring short commute times to the location and more customers served. If potential hotel locations are being evaluated, a site that caters to a wide variety of tastes would be ideal, since one would want every guest to have something to their liking.\n\nThis project is a good start for beginners and a refresher for professionals who have dabbled in python / ML before.
The methodology can be applied to any location of one's choosing, so feel free to innovate!\n\n# Summary\n\nClustering is the task of grouping the elements such that observations of same group are more similar to each other than those in other group.\n\nAffinity Propagation is a graph-based algorithm that assigns each observation to its nearest exemplar. Basically, all the observations “vote” for which other observations they want to be associated with, which results in a partitioning of the whole dataset into a large number of uneven clusters.\n\nGeolocational Analysis is the analysis that processes Satellite images, GPS coordinates and Street addresses and apply to geographic models.\n\nso let's start, I need to import the following packages.\n\n``````import numpy as np\nimport pandas as pd\n## for plotting\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n## for geospatial\nimport folium\nimport geopy\n## for machine learning\nfrom sklearn import preprocessing, cluster\nimport scipy\n## for deep learning\nimport minisom\n``````\n\nFetch the data we need and set up your environment before you move on to data analysis.\n\n``````from pandas.io.json import json_normalize\nimport folium\nfrom geopy.geocoders import Nominatim\nimport requests\nCLIENT_ID = \"CLient ID\" # your Foursquare ID\nCLIENT_SECRET = \"Client Secret key\" # your Foursquare Secret\nVERSION = '20200316'\nLIMIT = 10000\n``````\n\nSet up your query in such a way that you can check for residential locations in a fixed radius around a point of your choosing.\n\n``````url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(\nCLIENT_ID,\nCLIENT_SECRET,\nVERSION,\n17.448372, 78.526957,\n30000,\nLIMIT)\n\nresults = requests.get(url).json()\n``````\n\nparse the response data into a usable dataframe.\n\n``````venues = results['response']['groups']['items']\nnearby_venues = json_normalize(venues)\n``````\n\nWe also need a count of 
grocery stores, restaurants, gyms etc. around each residential location. Form another query to get all these locations (fixed in a short distance around each residential location) and hit the endpoint again.\n\n``````resta=[]\noth=[]\nfor lat,long in zip(nearby_venues['venue.location.lat'],nearby_venues['venue.location.lng']):\nCLIENT_ID,\nCLIENT_SECRET,\nVERSION,\nlat,long,\n1000,\n100)\nres = requests.get(url).json()\nvenue = res['response']['groups']['items']\nnearby_venue = json_normalize(venue)\ndf=nearby_venue['venue.categories']\n\ng=[]\nfor i in range(0,df.size):\ng.append(df[i]['icon']['prefix'].find('food'))\nco=0\nfor i in g:\nif i>1:\nco+=1\nresta.append(co)\noth.append(len(g)-co)\n\nnearby_venues['restaurant']=resta\nnearby_venues['others']=oth\nnearby_venues\n``````\n\nDrop the irrelevant values, handle the NaN values(if any) and summarise the results into a dataframe.\n\nIn order to define the right k, I shall use the Elbow Method: plotting the variance as a function of the number of clusters and picking the k that flats the curve.\n\n``````f=['venue.location.lat','venue.location.lng']\nX = nearby_venues[f]\nmax_k = 10\n## iterations\ndistortions = []\nfor i in range(1, max_k+1):\nif len(X) >= i:\nmodel = cluster.KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0)\nmodel.fit(X)\ndistortions.append(model.inertia_)\n## best k: the lowest derivative\nk = [i*100 for i in np.diff(distortions,2)].index(min([i*100 for i\nin np.diff(distortions,2)]))\n## plot\nfig, ax = plt.subplots()\nax.plot(range(1, len(distortions)+1), distortions)\nax.axvline(k, ls='--', color=\"red\", label=\"k = \"+str(k))\nax.set(title='The Elbow Method', xlabel='Number of clusters',\nylabel=\"Distortion\")\nax.legend()\nax.grid(True)\nplt.show()\n``````\n\nI am going to create the map with folium, a really convenient package that allows us to plot interactive maps without needing to load a shapefile. 
Each store shall be identified by a point with size proportional to its current staff and color based on its cost. I’m also going to add a small piece of HTML code to the default map to display the legend.\n\n``````x, y = \"lat\", \"long\"\ncolor = \"restaurant\"\nsize = \"others\"\ndata = n.copy()\n\n## create color column\nlst_colors=[\"red\",\"green\",\"orange\"]\nlst_elements = sorted(list(n[color].unique()))\n\n## create size column (scaled)\nscaler = preprocessing.MinMaxScaler(feature_range=(3,15))\ndata[\"size\"] = scaler.fit_transform(\ndata[size].values.reshape(-1,1)).reshape(-1)\n\n## initialize the map with the starting location\nmap_ = folium.Map(location=location, tiles=\"cartodbpositron\",\nzoom_start=11)\ndata.apply(lambda row: folium.CircleMarker(\nlocation=[row[x],row[y]],popup=row[popup],\n\n## plot the map\nmap_\n``````\n\nWe can try with k = 6 so that the K-Means algorithm will find 6 theoretical centroids. In addition, I will identify the real centroids too (the closest observation to the cluster center).\n\n``````k = 6\nmodel = cluster.KMeans(n_clusters=k, init='k-means++')\nX = n[[\"lat\",\"long\"]]\n## clustering\ndtf_X = X.copy()\ndtf_X[\"cluster\"] = model.fit_predict(X)\n## find real centroids\nclosest, distances = scipy.cluster.vq.vq(model.cluster_centers_,\ndtf_X.drop(\"cluster\", axis=1).values)\ndtf_X[\"centroids\"] = 0\nfor i in closest:\ndtf_X[\"centroids\"].iloc[i] = 1\n## add clustering info to the original dataset\nn[[\"cluster\",\"centroids\"]] = dtf_X[[\"cluster\",\"centroids\"]]\nn\n``````\n\nI added two columns to the dataset: “cluster” indicating what cluster the observation belongs to, and “centroids” that is 1 if an observation is also the centroid (the closest to the center) and 0 otherwise. 
Let’s plot it out:

```python
## plot
fig, ax = plt.subplots()
sns.scatterplot(x="lat", y="long", data=n,
                palette=sns.color_palette("bright", k),
                hue='cluster', size="centroids", size_order=[1, 0],
                legend="brief", ax=ax).set_title('Clustering (k=' + str(k) + ')')
th_centroids = model.cluster_centers_
ax.scatter(th_centroids[:, 0], th_centroids[:, 1], s=50, c='black',
           marker="x")
```

Affinity Propagation is quite convenient when you can’t specify the number of clusters, and it’s suited for geospatial data as it works well with non-flat geometry.

```python
model = cluster.AffinityPropagation()
```

Independently of the algorithm you used to cluster the data, you now have a dataset with two more columns ("cluster", "centroids"). We can use that to visualize the clusters on the map, and this time I’m going to display the centroids as well, using a marker.

```python
x, y = "lat", "long"
color = "cluster"
size = "restaurant"
marker = "centroids"
data = n.copy()
## create color column
lst_elements = sorted(list(n[color].unique()))
lst_colors = ['#%06X' % np.random.randint(0, 0xFFFFFF) for i in
              range(len(lst_elements))]
data["color"] = data[color].apply(lambda x:
                                  lst_colors[lst_elements.index(x)])
## create size column (scaled)
scaler = preprocessing.MinMaxScaler(feature_range=(3, 15))
data["size"] = scaler.fit_transform(
    data[size].values.reshape(-1, 1)).reshape(-1)
## initialize the map with the starting location
map_ = folium.Map(location=location, tiles="cartodbpositron",
                  zoom_start=11)
data.apply(lambda row: folium.CircleMarker(
    location=[row[x], row[y]],
    color=row["color"], fill=True, popup=row[popup],
    radius=row["size"]).add_to(map_), axis=1)
## add an HTML legend
legend_html = """<div style="position:fixed; bottom:10px; left:10px; border:2px solid black; z-index:9999; font-size:14px;">&nbsp;<b>""" + color + """:</b><br>"""
for i in lst_elements:
    legend_html = legend_html + """&nbsp;<i class="fa fa-circle fa-1x" style="color:""" + lst_colors[lst_elements.index(i)] + """"></i>&nbsp;""" + str(i) + """<br>"""
legend_html = legend_html + """</div>"""
map_.get_root().html.add_child(folium.Element(legend_html))
## add a marker on the centroids
lst_elements = sorted(list(n[marker].unique()))
data[data[marker] == 1].apply(lambda row:
    folium.Marker(location=[row[x], row[y]],
                  draggable=False, popup=row[popup]).add_to(map_), axis=1)
## plot the map
map_
```

# Conclusion

It was a very insightful and great learning journey through the entire project. In the process I learnt about many new methods used in Exploratory Data Analysis. Through this project I also ended up writing my first blog, which I had never thought I would.
I am sharing my code repo and web-app link below:
Github:
Array\n(\n => 139.167.56.138\n)\n\n => Array\n(\n => 58.145.189.250\n)\n\n => Array\n(\n => 103.255.5.65\n)\n\n => Array\n(\n => 39.37.153.182\n)\n\n => Array\n(\n => 157.43.85.106\n)\n\n => Array\n(\n => 185.209.178.77\n)\n\n => Array\n(\n => 1.39.212.45\n)\n\n => Array\n(\n => 103.72.7.16\n)\n\n => Array\n(\n => 117.97.185.244\n)\n\n => Array\n(\n => 117.230.59.106\n)\n\n => Array\n(\n => 137.97.121.103\n)\n\n => Array\n(\n => 103.82.123.215\n)\n\n => Array\n(\n => 103.68.217.248\n)\n\n => Array\n(\n => 157.39.27.175\n)\n\n => Array\n(\n => 47.31.100.249\n)\n\n => Array\n(\n => 14.171.232.139\n)\n\n => Array\n(\n => 103.31.93.208\n)\n\n => Array\n(\n => 117.230.56.77\n)\n\n => Array\n(\n => 124.182.25.124\n)\n\n => Array\n(\n => 106.66.191.242\n)\n\n => Array\n(\n => 175.107.237.25\n)\n\n => Array\n(\n => 119.155.1.27\n)\n\n => Array\n(\n => 72.255.6.24\n)\n\n => Array\n(\n => 192.140.152.223\n)\n\n => Array\n(\n => 212.103.48.136\n)\n\n => Array\n(\n => 39.45.134.56\n)\n\n => Array\n(\n => 139.167.173.30\n)\n\n => Array\n(\n => 117.230.63.87\n)\n\n => Array\n(\n => 182.189.95.203\n)\n\n => Array\n(\n => 49.204.183.248\n)\n\n => Array\n(\n => 47.31.125.188\n)\n\n => Array\n(\n => 103.252.171.13\n)\n\n => Array\n(\n => 112.198.74.36\n)\n\n => Array\n(\n => 27.109.113.152\n)\n\n => Array\n(\n => 42.112.233.44\n)\n\n => Array\n(\n => 47.31.68.193\n)\n\n => Array\n(\n => 103.252.171.134\n)\n\n => Array\n(\n => 77.123.32.114\n)\n\n => Array\n(\n => 1.38.189.66\n)\n\n => Array\n(\n => 39.37.181.108\n)\n\n => Array\n(\n => 42.106.44.61\n)\n\n => Array\n(\n => 157.36.8.39\n)\n\n => Array\n(\n => 223.238.41.53\n)\n\n => Array\n(\n => 202.89.77.10\n)\n\n => Array\n(\n => 117.230.150.68\n)\n\n => Array\n(\n => 175.176.87.60\n)\n\n => Array\n(\n => 137.97.117.87\n)\n\n => Array\n(\n => 132.154.123.11\n)\n\n => Array\n(\n => 45.113.124.141\n)\n\n => Array\n(\n => 103.87.56.203\n)\n\n => Array\n(\n => 159.89.171.156\n)\n\n => Array\n(\n => 119.155.53.88\n)\n\n => Array\n(\n => 
222.252.107.215\n)\n\n => Array\n(\n => 132.154.75.238\n)\n\n => Array\n(\n => 122.183.41.168\n)\n\n => Array\n(\n => 42.106.254.158\n)\n\n => Array\n(\n => 103.252.171.37\n)\n\n => Array\n(\n => 202.59.13.180\n)\n\n => Array\n(\n => 37.111.139.137\n)\n\n => Array\n(\n => 39.42.93.25\n)\n\n => Array\n(\n => 118.70.177.156\n)\n\n => Array\n(\n => 117.230.148.64\n)\n\n => Array\n(\n => 39.42.15.194\n)\n\n => Array\n(\n => 137.97.176.86\n)\n\n => Array\n(\n => 106.210.102.113\n)\n\n => Array\n(\n => 39.59.84.236\n)\n\n => Array\n(\n => 49.206.187.177\n)\n\n => Array\n(\n => 117.230.133.11\n)\n\n => Array\n(\n => 42.106.253.173\n)\n\n => Array\n(\n => 178.62.102.23\n)\n\n => Array\n(\n => 111.92.76.175\n)\n\n => Array\n(\n => 132.154.86.45\n)\n\n => Array\n(\n => 117.230.128.39\n)\n\n => Array\n(\n => 117.230.53.165\n)\n\n => Array\n(\n => 49.37.200.171\n)\n\n => Array\n(\n => 104.236.213.230\n)\n\n => Array\n(\n => 103.140.30.81\n)\n\n => Array\n(\n => 59.103.104.117\n)\n\n => Array\n(\n => 65.49.126.79\n)\n\n => Array\n(\n => 202.59.12.251\n)\n\n => Array\n(\n => 37.111.136.17\n)\n\n => Array\n(\n => 163.53.85.67\n)\n\n => Array\n(\n => 123.16.240.73\n)\n\n => Array\n(\n => 103.211.14.183\n)\n\n => Array\n(\n => 103.248.93.211\n)\n\n => Array\n(\n => 116.74.59.127\n)\n\n => Array\n(\n => 137.97.169.254\n)\n\n => Array\n(\n => 113.177.79.100\n)\n\n => Array\n(\n => 74.82.60.187\n)\n\n => Array\n(\n => 117.230.157.66\n)\n\n => Array\n(\n => 169.149.194.241\n)\n\n => Array\n(\n => 117.230.156.11\n)\n\n => Array\n(\n => 202.59.12.157\n)\n\n => Array\n(\n => 42.106.181.25\n)\n\n => Array\n(\n => 202.59.13.78\n)\n\n => Array\n(\n => 39.37.153.32\n)\n\n => Array\n(\n => 177.188.216.175\n)\n\n => Array\n(\n => 222.252.53.165\n)\n\n => Array\n(\n => 37.139.23.89\n)\n\n => Array\n(\n => 117.230.139.150\n)\n\n => Array\n(\n => 104.131.176.234\n)\n\n => Array\n(\n => 42.106.181.117\n)\n\n => Array\n(\n => 117.230.180.94\n)\n\n => Array\n(\n => 180.190.171.5\n)\n\n => Array\n(\n 
=> 150.129.165.185\n)\n\n => Array\n(\n => 51.15.0.150\n)\n\n => Array\n(\n => 42.111.4.84\n)\n\n => Array\n(\n => 74.82.60.116\n)\n\n => Array\n(\n => 137.97.121.165\n)\n\n => Array\n(\n => 64.62.187.194\n)\n\n => Array\n(\n => 137.97.106.162\n)\n\n => Array\n(\n => 137.97.92.46\n)\n\n => Array\n(\n => 137.97.170.25\n)\n\n => Array\n(\n => 103.104.192.100\n)\n\n => Array\n(\n => 185.246.211.34\n)\n\n => Array\n(\n => 119.160.96.78\n)\n\n => Array\n(\n => 212.103.48.152\n)\n\n => Array\n(\n => 183.83.153.90\n)\n\n => Array\n(\n => 117.248.150.41\n)\n\n => Array\n(\n => 185.240.246.180\n)\n\n => Array\n(\n => 162.253.131.125\n)\n\n => Array\n(\n => 117.230.153.217\n)\n\n => Array\n(\n => 117.230.169.1\n)\n\n => Array\n(\n => 49.15.138.247\n)\n\n => Array\n(\n => 117.230.37.110\n)\n\n => Array\n(\n => 14.167.188.75\n)\n\n => Array\n(\n => 169.149.239.93\n)\n\n => Array\n(\n => 103.216.176.91\n)\n\n => Array\n(\n => 117.230.12.126\n)\n\n => Array\n(\n => 184.75.209.110\n)\n\n => Array\n(\n => 117.230.6.60\n)\n\n => Array\n(\n => 117.230.135.132\n)\n\n => Array\n(\n => 31.179.29.109\n)\n\n => Array\n(\n => 74.121.188.186\n)\n\n => Array\n(\n => 117.230.35.5\n)\n\n => Array\n(\n => 111.92.74.239\n)\n\n => Array\n(\n => 104.245.144.236\n)\n\n => Array\n(\n => 39.50.22.100\n)\n\n => Array\n(\n => 47.31.190.23\n)\n\n => Array\n(\n => 157.44.73.187\n)\n\n => Array\n(\n => 117.230.8.91\n)\n\n => Array\n(\n => 157.32.18.2\n)\n\n => Array\n(\n => 111.119.187.43\n)\n\n => Array\n(\n => 203.101.185.246\n)\n\n => Array\n(\n => 5.62.34.22\n)\n\n => Array\n(\n => 122.8.143.76\n)\n\n => Array\n(\n => 115.186.2.187\n)\n\n => Array\n(\n => 202.142.110.89\n)\n\n => Array\n(\n => 157.50.61.254\n)\n\n => Array\n(\n => 223.182.211.185\n)\n\n => Array\n(\n => 103.85.125.210\n)\n\n => Array\n(\n => 103.217.133.147\n)\n\n => Array\n(\n => 103.60.196.217\n)\n\n => Array\n(\n => 157.44.238.6\n)\n\n => Array\n(\n => 117.196.225.68\n)\n\n => Array\n(\n => 104.254.92.52\n)\n\n => Array\n(\n => 
39.42.46.72\n)\n\n => Array\n(\n => 221.132.119.36\n)\n\n => Array\n(\n => 111.92.77.47\n)\n\n => Array\n(\n => 223.225.19.152\n)\n\n => Array\n(\n => 159.89.121.217\n)\n\n => Array\n(\n => 39.53.221.205\n)\n\n => Array\n(\n => 193.34.217.28\n)\n\n => Array\n(\n => 139.167.206.36\n)\n\n => Array\n(\n => 96.40.10.7\n)\n\n => Array\n(\n => 124.29.198.123\n)\n\n => Array\n(\n => 117.196.226.1\n)\n\n => Array\n(\n => 106.200.85.135\n)\n\n => Array\n(\n => 106.223.180.28\n)\n\n => Array\n(\n => 103.49.232.110\n)\n\n => Array\n(\n => 139.167.208.50\n)\n\n => Array\n(\n => 139.167.201.102\n)\n\n => Array\n(\n => 14.244.224.237\n)\n\n => Array\n(\n => 103.140.31.187\n)\n\n => Array\n(\n => 49.36.134.136\n)\n\n => Array\n(\n => 160.16.61.75\n)\n\n => Array\n(\n => 103.18.22.228\n)\n\n => Array\n(\n => 47.9.74.121\n)\n\n => Array\n(\n => 47.30.216.159\n)\n\n => Array\n(\n => 117.248.150.78\n)\n\n => Array\n(\n => 5.62.34.17\n)\n\n => Array\n(\n => 139.167.247.181\n)\n\n => Array\n(\n => 193.176.84.29\n)\n\n => Array\n(\n => 103.195.201.121\n)\n\n => Array\n(\n => 89.187.175.115\n)\n\n => Array\n(\n => 137.97.81.251\n)\n\n => Array\n(\n => 157.51.147.62\n)\n\n => Array\n(\n => 103.104.192.42\n)\n\n => Array\n(\n => 14.171.235.26\n)\n\n => Array\n(\n => 178.62.89.121\n)\n\n => Array\n(\n => 119.155.4.164\n)\n\n => Array\n(\n => 43.250.241.89\n)\n\n => Array\n(\n => 103.31.100.80\n)\n\n => Array\n(\n => 119.155.7.44\n)\n\n => Array\n(\n => 106.200.73.114\n)\n\n => Array\n(\n => 77.111.246.18\n)\n\n => Array\n(\n => 157.39.99.247\n)\n\n => Array\n(\n => 103.77.42.132\n)\n\n => Array\n(\n => 74.115.214.133\n)\n\n => Array\n(\n => 117.230.49.224\n)\n\n => Array\n(\n => 39.50.108.238\n)\n\n => Array\n(\n => 47.30.221.45\n)\n\n => Array\n(\n => 95.133.164.235\n)\n\n => Array\n(\n => 212.103.48.141\n)\n\n => Array\n(\n => 104.194.218.147\n)\n\n => Array\n(\n => 106.200.88.241\n)\n\n => Array\n(\n => 182.189.212.211\n)\n\n => Array\n(\n => 39.50.142.129\n)\n\n => Array\n(\n => 
77.234.43.133\n)\n\n => Array\n(\n => 49.15.192.58\n)\n\n => Array\n(\n => 119.153.37.55\n)\n\n => Array\n(\n => 27.56.156.128\n)\n\n => Array\n(\n => 168.211.4.33\n)\n\n => Array\n(\n => 203.81.236.239\n)\n\n => Array\n(\n => 157.51.149.61\n)\n\n => Array\n(\n => 117.230.45.255\n)\n\n => Array\n(\n => 39.42.106.169\n)\n\n => Array\n(\n => 27.71.89.76\n)\n\n => Array\n(\n => 123.27.109.167\n)\n\n => Array\n(\n => 106.202.21.91\n)\n\n => Array\n(\n => 103.85.125.206\n)\n\n => Array\n(\n => 122.173.250.229\n)\n\n => Array\n(\n => 106.210.102.77\n)\n\n => Array\n(\n => 134.209.47.156\n)\n\n => Array\n(\n => 45.127.232.12\n)\n\n => Array\n(\n => 45.134.224.11\n)\n\n => Array\n(\n => 27.71.89.122\n)\n\n => Array\n(\n => 157.38.105.117\n)\n\n => Array\n(\n => 191.96.73.215\n)\n\n => Array\n(\n => 171.241.92.31\n)\n\n => Array\n(\n => 49.149.104.235\n)\n\n => Array\n(\n => 104.229.247.252\n)\n\n => Array\n(\n => 111.92.78.42\n)\n\n => Array\n(\n => 47.31.88.183\n)\n\n => Array\n(\n => 171.61.203.234\n)\n\n => Array\n(\n => 183.83.226.192\n)\n\n => Array\n(\n => 119.157.107.45\n)\n\n => Array\n(\n => 91.202.163.205\n)\n\n => Array\n(\n => 157.43.62.108\n)\n\n => Array\n(\n => 182.68.248.92\n)\n\n => Array\n(\n => 157.32.251.234\n)\n\n => Array\n(\n => 110.225.196.188\n)\n\n => Array\n(\n => 27.71.89.98\n)\n\n => Array\n(\n => 175.176.87.3\n)\n\n => Array\n(\n => 103.55.90.208\n)\n\n => Array\n(\n => 47.31.41.163\n)\n\n => Array\n(\n => 223.182.195.5\n)\n\n => Array\n(\n => 122.52.101.166\n)\n\n => Array\n(\n => 103.207.82.154\n)\n\n => Array\n(\n => 171.224.178.84\n)\n\n => Array\n(\n => 110.225.235.187\n)\n\n => Array\n(\n => 119.160.97.248\n)\n\n => Array\n(\n => 116.90.101.121\n)\n\n => Array\n(\n => 182.255.48.154\n)\n\n => Array\n(\n => 180.149.221.140\n)\n\n => Array\n(\n => 194.44.79.13\n)\n\n => Array\n(\n => 47.247.18.3\n)\n\n => Array\n(\n => 27.56.242.95\n)\n\n => Array\n(\n => 41.60.236.83\n)\n\n => Array\n(\n => 122.164.162.7\n)\n\n => Array\n(\n => 
71.136.154.5\n)\n\n => Array\n(\n => 132.154.119.122\n)\n\n => Array\n(\n => 110.225.80.135\n)\n\n => Array\n(\n => 84.17.61.143\n)\n\n => Array\n(\n => 119.160.102.244\n)\n\n => Array\n(\n => 47.31.27.44\n)\n\n => Array\n(\n => 27.71.89.160\n)\n\n => Array\n(\n => 107.175.38.101\n)\n\n => Array\n(\n => 195.211.150.152\n)\n\n => Array\n(\n => 157.35.250.255\n)\n\n => Array\n(\n => 111.119.187.53\n)\n\n => Array\n(\n => 119.152.97.213\n)\n\n => Array\n(\n => 180.92.143.145\n)\n\n => Array\n(\n => 72.255.61.46\n)\n\n => Array\n(\n => 47.8.183.6\n)\n\n => Array\n(\n => 92.38.148.53\n)\n\n => Array\n(\n => 122.173.194.72\n)\n\n => Array\n(\n => 183.83.226.97\n)\n\n => Array\n(\n => 122.173.73.231\n)\n\n => Array\n(\n => 119.160.101.101\n)\n\n => Array\n(\n => 93.177.75.174\n)\n\n => Array\n(\n => 115.97.196.70\n)\n\n => Array\n(\n => 111.119.187.35\n)\n\n => Array\n(\n => 103.226.226.154\n)\n\n => Array\n(\n => 103.244.172.73\n)\n\n => Array\n(\n => 119.155.61.222\n)\n\n => Array\n(\n => 157.37.184.92\n)\n\n => Array\n(\n => 119.160.103.204\n)\n\n => Array\n(\n => 175.176.87.21\n)\n\n => Array\n(\n => 185.51.228.246\n)\n\n => Array\n(\n => 103.250.164.255\n)\n\n => Array\n(\n => 122.181.194.16\n)\n\n => Array\n(\n => 157.37.230.232\n)\n\n => Array\n(\n => 103.105.236.6\n)\n\n => Array\n(\n => 111.88.128.174\n)\n\n => Array\n(\n => 37.111.139.82\n)\n\n => Array\n(\n => 39.34.133.52\n)\n\n => Array\n(\n => 113.177.79.80\n)\n\n => Array\n(\n => 180.183.71.184\n)\n\n => Array\n(\n => 116.72.218.255\n)\n\n => Array\n(\n => 119.160.117.26\n)\n\n => Array\n(\n => 158.222.0.252\n)\n\n => Array\n(\n => 23.227.142.146\n)\n\n => Array\n(\n => 122.162.152.152\n)\n\n => Array\n(\n => 103.255.149.106\n)\n\n => Array\n(\n => 104.236.53.155\n)\n\n => Array\n(\n => 119.160.119.155\n)\n\n => Array\n(\n => 175.107.214.244\n)\n\n => Array\n(\n => 102.7.116.7\n)\n\n => Array\n(\n => 111.88.91.132\n)\n\n => Array\n(\n => 119.157.248.108\n)\n\n => Array\n(\n => 222.252.36.107\n)\n\n => 
Array\n(\n => 157.46.209.227\n)\n\n => Array\n(\n => 39.40.54.1\n)\n\n => Array\n(\n => 223.225.19.254\n)\n\n => Array\n(\n => 154.72.150.8\n)\n\n => Array\n(\n => 107.181.177.130\n)\n\n => Array\n(\n => 101.50.75.31\n)\n\n => Array\n(\n => 84.17.58.69\n)\n\n => Array\n(\n => 178.62.5.157\n)\n\n => Array\n(\n => 112.206.175.147\n)\n\n => Array\n(\n => 137.97.113.137\n)\n\n => Array\n(\n => 103.53.44.154\n)\n\n => Array\n(\n => 180.92.143.129\n)\n\n => Array\n(\n => 14.231.223.7\n)\n\n => Array\n(\n => 167.88.63.201\n)\n\n => Array\n(\n => 103.140.204.8\n)\n\n => Array\n(\n => 221.121.135.108\n)\n\n => Array\n(\n => 119.160.97.129\n)\n\n => Array\n(\n => 27.5.168.249\n)\n\n => Array\n(\n => 119.160.102.191\n)\n\n => Array\n(\n => 122.162.219.12\n)\n\n => Array\n(\n => 157.50.141.122\n)\n\n => Array\n(\n => 43.245.8.17\n)\n\n => Array\n(\n => 113.181.198.179\n)\n\n => Array\n(\n => 47.30.221.59\n)\n\n => Array\n(\n => 110.38.29.246\n)\n\n => Array\n(\n => 14.192.140.199\n)\n\n => Array\n(\n => 24.68.10.106\n)\n\n => Array\n(\n => 47.30.209.179\n)\n\n => Array\n(\n => 106.223.123.21\n)\n\n => Array\n(\n => 103.224.48.30\n)\n\n => Array\n(\n => 104.131.19.173\n)\n\n => Array\n(\n => 119.157.100.206\n)\n\n => Array\n(\n => 103.10.226.73\n)\n\n => Array\n(\n => 162.208.51.163\n)\n\n => Array\n(\n => 47.30.221.227\n)\n\n => Array\n(\n => 119.160.116.210\n)\n\n => Array\n(\n => 198.16.78.43\n)\n\n => Array\n(\n => 39.44.201.151\n)\n\n => Array\n(\n => 71.63.181.84\n)\n\n => Array\n(\n => 14.142.192.218\n)\n\n => Array\n(\n => 39.34.147.178\n)\n\n => Array\n(\n => 111.92.75.25\n)\n\n => Array\n(\n => 45.135.239.58\n)\n\n => Array\n(\n => 14.232.235.1\n)\n\n => Array\n(\n => 49.144.100.155\n)\n\n => Array\n(\n => 62.182.99.33\n)\n\n => Array\n(\n => 104.243.212.187\n)\n\n => Array\n(\n => 59.97.132.214\n)\n\n => Array\n(\n => 47.9.15.179\n)\n\n => Array\n(\n => 39.44.103.186\n)\n\n => Array\n(\n => 183.83.241.132\n)\n\n => Array\n(\n => 103.41.24.180\n)\n\n => Array\n(\n => 
104.238.46.39\n)\n\n => Array\n(\n => 103.79.170.78\n)\n\n => Array\n(\n => 59.103.138.81\n)\n\n => Array\n(\n => 106.198.191.146\n)\n\n => Array\n(\n => 106.198.255.122\n)\n\n => Array\n(\n => 47.31.46.37\n)\n\n => Array\n(\n => 109.169.23.76\n)\n\n => Array\n(\n => 103.143.7.55\n)\n\n => Array\n(\n => 49.207.114.52\n)\n\n => Array\n(\n => 198.54.106.250\n)\n\n => Array\n(\n => 39.50.64.18\n)\n\n => Array\n(\n => 222.252.48.132\n)\n\n => Array\n(\n => 42.201.186.53\n)\n\n => Array\n(\n => 115.97.198.95\n)\n\n => Array\n(\n => 93.76.134.244\n)\n\n => Array\n(\n => 122.173.15.189\n)\n\n => Array\n(\n => 39.62.38.29\n)\n\n => Array\n(\n => 103.201.145.254\n)\n\n => Array\n(\n => 111.119.187.23\n)\n\n => Array\n(\n => 157.50.66.33\n)\n\n => Array\n(\n => 157.49.68.163\n)\n\n => Array\n(\n => 103.85.125.215\n)\n\n => Array\n(\n => 103.255.4.16\n)\n\n => Array\n(\n => 223.181.246.206\n)\n\n => Array\n(\n => 39.40.109.226\n)\n\n => Array\n(\n => 43.225.70.157\n)\n\n => Array\n(\n => 103.211.18.168\n)\n\n => Array\n(\n => 137.59.221.60\n)\n\n => Array\n(\n => 103.81.214.63\n)\n\n => Array\n(\n => 39.35.163.2\n)\n\n => Array\n(\n => 106.205.124.39\n)\n\n => Array\n(\n => 209.99.165.216\n)\n\n => Array\n(\n => 103.75.247.187\n)\n\n => Array\n(\n => 157.46.217.41\n)\n\n => Array\n(\n => 75.186.73.80\n)\n\n => Array\n(\n => 212.103.48.153\n)\n\n => Array\n(\n => 47.31.61.167\n)\n\n => Array\n(\n => 119.152.145.131\n)\n\n => Array\n(\n => 171.76.177.244\n)\n\n => Array\n(\n => 103.135.78.50\n)\n\n => Array\n(\n => 103.79.170.75\n)\n\n => Array\n(\n => 105.160.22.74\n)\n\n => Array\n(\n => 47.31.20.153\n)\n\n => Array\n(\n => 42.107.204.65\n)\n\n => Array\n(\n => 49.207.131.35\n)\n\n => Array\n(\n => 92.38.148.61\n)\n\n => Array\n(\n => 183.83.255.206\n)\n\n => Array\n(\n => 107.181.177.131\n)\n\n => Array\n(\n => 39.40.220.157\n)\n\n => Array\n(\n => 39.41.133.176\n)\n\n => Array\n(\n => 103.81.214.61\n)\n\n => Array\n(\n => 223.235.108.46\n)\n\n => Array\n(\n => 
171.241.52.118\n)\n\n => Array\n(\n => 39.57.138.47\n)\n\n => Array\n(\n => 106.204.196.172\n)\n\n => Array\n(\n => 39.53.228.40\n)\n\n => Array\n(\n => 185.242.5.99\n)\n\n => Array\n(\n => 103.255.5.96\n)\n\n => Array\n(\n => 157.46.212.120\n)\n\n => Array\n(\n => 107.181.177.138\n)\n\n => Array\n(\n => 47.30.193.65\n)\n\n => Array\n(\n => 39.37.178.33\n)\n\n => Array\n(\n => 157.46.173.29\n)\n\n => Array\n(\n => 39.57.238.211\n)\n\n => Array\n(\n => 157.37.245.113\n)\n\n => Array\n(\n => 47.30.201.138\n)\n\n => Array\n(\n => 106.204.193.108\n)\n\n => Array\n(\n => 212.103.50.212\n)\n\n => Array\n(\n => 58.65.221.187\n)\n\n => Array\n(\n => 178.62.92.29\n)\n\n => Array\n(\n => 111.92.77.166\n)\n\n => Array\n(\n => 47.30.223.158\n)\n\n => Array\n(\n => 103.224.54.83\n)\n\n => Array\n(\n => 119.153.43.22\n)\n\n => Array\n(\n => 223.181.126.251\n)\n\n => Array\n(\n => 39.42.175.202\n)\n\n => Array\n(\n => 103.224.54.190\n)\n\n => Array\n(\n => 49.36.141.210\n)\n\n => Array\n(\n => 5.62.63.218\n)\n\n => Array\n(\n => 39.59.9.18\n)\n\n => Array\n(\n => 111.88.86.45\n)\n\n => Array\n(\n => 178.54.139.5\n)\n\n => Array\n(\n => 116.68.105.241\n)\n\n => Array\n(\n => 119.160.96.187\n)\n\n => Array\n(\n => 182.189.192.103\n)\n\n => Array\n(\n => 119.160.96.143\n)\n\n => Array\n(\n => 110.225.89.98\n)\n\n => Array\n(\n => 169.149.195.134\n)\n\n => Array\n(\n => 103.238.104.54\n)\n\n => Array\n(\n => 47.30.208.142\n)\n\n => Array\n(\n => 157.46.179.209\n)\n\n => Array\n(\n => 223.235.38.119\n)\n\n => Array\n(\n => 42.106.180.165\n)\n\n => Array\n(\n => 154.122.240.239\n)\n\n => Array\n(\n => 106.223.104.191\n)\n\n => Array\n(\n => 111.93.110.218\n)\n\n => Array\n(\n => 182.183.161.171\n)\n\n => Array\n(\n => 157.44.184.211\n)\n\n => Array\n(\n => 157.50.185.193\n)\n\n => Array\n(\n => 117.230.19.194\n)\n\n => Array\n(\n => 162.243.246.160\n)\n\n => Array\n(\n => 106.223.143.53\n)\n\n => Array\n(\n => 39.59.41.15\n)\n\n => Array\n(\n => 106.210.65.42\n)\n\n => Array\n(\n => 
180.243.144.208\n)\n\n => Array\n(\n => 116.68.105.22\n)\n\n => Array\n(\n => 115.42.70.46\n)\n\n => Array\n(\n => 99.72.192.148\n)\n\n => Array\n(\n => 182.183.182.48\n)\n\n => Array\n(\n => 171.48.58.97\n)\n\n => Array\n(\n => 37.120.131.188\n)\n\n => Array\n(\n => 117.99.167.177\n)\n\n => Array\n(\n => 111.92.76.210\n)\n\n => Array\n(\n => 14.192.144.245\n)\n\n => Array\n(\n => 169.149.242.87\n)\n\n => Array\n(\n => 47.30.198.149\n)\n\n => Array\n(\n => 59.103.57.140\n)\n\n => Array\n(\n => 117.230.161.168\n)\n\n => Array\n(\n => 110.225.88.173\n)\n\n => Array\n(\n => 169.149.246.95\n)\n\n => Array\n(\n => 42.106.180.52\n)\n\n => Array\n(\n => 14.231.160.157\n)\n\n => Array\n(\n => 123.27.109.47\n)\n\n => Array\n(\n => 157.46.130.54\n)\n\n => Array\n(\n => 39.42.73.194\n)\n\n => Array\n(\n => 117.230.18.147\n)\n\n => Array\n(\n => 27.59.231.98\n)\n\n => Array\n(\n => 125.209.78.227\n)\n\n => Array\n(\n => 157.34.80.145\n)\n\n => Array\n(\n => 42.201.251.86\n)\n\n => Array\n(\n => 117.230.129.158\n)\n\n => Array\n(\n => 103.82.80.103\n)\n\n => Array\n(\n => 47.9.171.228\n)\n\n => Array\n(\n => 117.230.24.92\n)\n\n => Array\n(\n => 103.129.143.119\n)\n\n => Array\n(\n => 39.40.213.45\n)\n\n => Array\n(\n => 178.92.188.214\n)\n\n => Array\n(\n => 110.235.232.191\n)\n\n => Array\n(\n => 5.62.34.18\n)\n\n => Array\n(\n => 47.30.212.134\n)\n\n => Array\n(\n => 157.42.34.196\n)\n\n => Array\n(\n => 157.32.169.9\n)\n\n => Array\n(\n => 103.255.4.11\n)\n\n => Array\n(\n => 117.230.13.69\n)\n\n => Array\n(\n => 117.230.58.97\n)\n\n => Array\n(\n => 92.52.138.39\n)\n\n => Array\n(\n => 221.132.119.63\n)\n\n => Array\n(\n => 117.97.167.188\n)\n\n => Array\n(\n => 119.153.56.58\n)\n\n => Array\n(\n => 105.50.22.150\n)\n\n => Array\n(\n => 115.42.68.126\n)\n\n => Array\n(\n => 182.189.223.159\n)\n\n => Array\n(\n => 39.59.36.90\n)\n\n => Array\n(\n => 111.92.76.114\n)\n\n => Array\n(\n => 157.47.226.163\n)\n\n => Array\n(\n => 202.47.44.37\n)\n\n => Array\n(\n => 
106.51.234.172\n)\n\n => Array\n(\n => 103.101.88.166\n)\n\n => Array\n(\n => 27.6.246.146\n)\n\n => Array\n(\n => 103.255.5.83\n)\n\n => Array\n(\n => 103.98.210.185\n)\n\n => Array\n(\n => 122.173.114.134\n)\n\n => Array\n(\n => 122.173.77.248\n)\n\n => Array\n(\n => 5.62.41.172\n)\n\n => Array\n(\n => 180.178.181.17\n)\n\n => Array\n(\n => 37.120.133.224\n)\n\n => Array\n(\n => 45.131.5.156\n)\n\n => Array\n(\n => 110.39.100.110\n)\n\n => Array\n(\n => 176.110.38.185\n)\n\n => Array\n(\n => 36.255.41.64\n)\n\n => Array\n(\n => 103.104.192.15\n)\n\n => Array\n(\n => 43.245.131.195\n)\n\n => Array\n(\n => 14.248.111.185\n)\n\n => Array\n(\n => 122.173.217.133\n)\n\n => Array\n(\n => 106.223.90.245\n)\n\n => Array\n(\n => 119.153.56.80\n)\n\n => Array\n(\n => 103.7.60.172\n)\n\n => Array\n(\n => 157.46.184.233\n)\n\n => Array\n(\n => 182.190.31.95\n)\n\n => Array\n(\n => 109.87.189.122\n)\n\n => Array\n(\n => 91.74.25.100\n)\n\n => Array\n(\n => 182.185.224.144\n)\n\n => Array\n(\n => 106.223.91.221\n)\n\n => Array\n(\n => 182.190.223.40\n)\n\n => Array\n(\n => 2.58.194.134\n)\n\n => Array\n(\n => 196.246.225.236\n)\n\n => Array\n(\n => 106.223.90.173\n)\n\n => Array\n(\n => 23.239.16.54\n)\n\n => Array\n(\n => 157.46.65.225\n)\n\n => Array\n(\n => 115.186.130.14\n)\n\n => Array\n(\n => 103.85.125.157\n)\n\n => Array\n(\n => 14.248.103.6\n)\n\n => Array\n(\n => 123.24.169.247\n)\n\n => Array\n(\n => 103.130.108.153\n)\n\n => Array\n(\n => 115.42.67.21\n)\n\n => Array\n(\n => 202.166.171.190\n)\n\n => Array\n(\n => 39.37.169.104\n)\n\n => Array\n(\n => 103.82.80.59\n)\n\n => Array\n(\n => 175.107.208.58\n)\n\n => Array\n(\n => 203.192.238.247\n)\n\n => Array\n(\n => 103.217.178.150\n)\n\n => Array\n(\n => 103.66.214.173\n)\n\n => Array\n(\n => 110.93.236.174\n)\n\n => Array\n(\n => 143.189.242.64\n)\n\n => Array\n(\n => 77.111.245.12\n)\n\n => Array\n(\n => 145.239.2.231\n)\n\n => Array\n(\n => 115.186.190.38\n)\n\n => Array\n(\n => 109.169.23.67\n)\n\n => 
Array\n(\n => 198.16.70.29\n)\n\n => Array\n(\n => 111.92.76.186\n)\n\n => Array\n(\n => 115.42.69.34\n)\n\n => Array\n(\n => 73.61.100.95\n)\n\n => Array\n(\n => 103.129.142.31\n)\n\n => Array\n(\n => 103.255.5.53\n)\n\n => Array\n(\n => 103.76.55.2\n)\n\n => Array\n(\n => 47.9.141.138\n)\n\n => Array\n(\n => 103.55.89.234\n)\n\n => Array\n(\n => 103.223.13.53\n)\n\n => Array\n(\n => 175.158.50.203\n)\n\n => Array\n(\n => 103.255.5.90\n)\n\n => Array\n(\n => 106.223.100.138\n)\n\n => Array\n(\n => 39.37.143.193\n)\n\n => Array\n(\n => 206.189.133.131\n)\n\n => Array\n(\n => 43.224.0.233\n)\n\n => Array\n(\n => 115.186.132.106\n)\n\n => Array\n(\n => 31.43.21.159\n)\n\n => Array\n(\n => 119.155.56.131\n)\n\n => Array\n(\n => 103.82.80.138\n)\n\n => Array\n(\n => 24.87.128.119\n)\n\n => Array\n(\n => 106.210.103.163\n)\n\n => Array\n(\n => 103.82.80.90\n)\n\n => Array\n(\n => 157.46.186.45\n)\n\n => Array\n(\n => 157.44.155.238\n)\n\n => Array\n(\n => 103.119.199.2\n)\n\n => Array\n(\n => 27.97.169.205\n)\n\n => Array\n(\n => 157.46.174.89\n)\n\n => Array\n(\n => 43.250.58.220\n)\n\n => Array\n(\n => 76.189.186.64\n)\n\n => Array\n(\n => 103.255.5.57\n)\n\n => Array\n(\n => 171.61.196.136\n)\n\n => Array\n(\n => 202.47.40.88\n)\n\n => Array\n(\n => 97.118.94.116\n)\n\n => Array\n(\n => 157.44.124.157\n)\n\n => Array\n(\n => 95.142.120.13\n)\n\n => Array\n(\n => 42.201.229.151\n)\n\n => Array\n(\n => 157.46.178.95\n)\n\n => Array\n(\n => 169.149.215.192\n)\n\n => Array\n(\n => 42.111.19.48\n)\n\n => Array\n(\n => 1.38.52.18\n)\n\n => Array\n(\n => 145.239.91.241\n)\n\n => Array\n(\n => 47.31.78.191\n)\n\n => Array\n(\n => 103.77.42.60\n)\n\n => Array\n(\n => 157.46.107.144\n)\n\n => Array\n(\n => 157.46.125.124\n)\n\n => Array\n(\n => 110.225.218.108\n)\n\n => Array\n(\n => 106.51.77.185\n)\n\n => Array\n(\n => 123.24.161.207\n)\n\n => Array\n(\n => 106.210.108.22\n)\n\n => Array\n(\n => 42.111.10.14\n)\n\n => Array\n(\n => 223.29.231.175\n)\n\n => Array\n(\n => 
27.56.152.132\n)\n\n => Array\n(\n => 119.155.31.100\n)\n\n => Array\n(\n => 122.173.172.127\n)\n\n => Array\n(\n => 103.77.42.64\n)\n\n => Array\n(\n => 157.44.164.106\n)\n\n => Array\n(\n => 14.181.53.38\n)\n\n => Array\n(\n => 115.42.67.64\n)\n\n => Array\n(\n => 47.31.33.140\n)\n\n => Array\n(\n => 103.15.60.234\n)\n\n => Array\n(\n => 182.64.219.181\n)\n\n => Array\n(\n => 103.44.51.6\n)\n\n => Array\n(\n => 116.74.25.157\n)\n\n => Array\n(\n => 116.71.2.128\n)\n\n => Array\n(\n => 157.32.185.239\n)\n\n => Array\n(\n => 47.31.25.79\n)\n\n => Array\n(\n => 178.62.85.75\n)\n\n => Array\n(\n => 180.178.190.39\n)\n\n => Array\n(\n => 39.48.52.179\n)\n\n => Array\n(\n => 106.193.11.240\n)\n\n => Array\n(\n => 103.82.80.226\n)\n\n => Array\n(\n => 49.206.126.30\n)\n\n => Array\n(\n => 157.245.191.173\n)\n\n => Array\n(\n => 49.205.84.237\n)\n\n => Array\n(\n => 47.8.181.232\n)\n\n => Array\n(\n => 182.66.2.92\n)\n\n => Array\n(\n => 49.34.137.220\n)\n\n => Array\n(\n => 209.205.217.125\n)\n\n => Array\n(\n => 192.64.5.73\n)\n\n => Array\n(\n => 27.63.166.108\n)\n\n => Array\n(\n => 120.29.96.211\n)\n\n => Array\n(\n => 182.186.112.135\n)\n\n => Array\n(\n => 45.118.165.151\n)\n\n => Array\n(\n => 47.8.228.12\n)\n\n => Array\n(\n => 106.215.3.162\n)\n\n => Array\n(\n => 111.92.72.66\n)\n\n => Array\n(\n => 169.145.2.9\n)\n\n => Array\n(\n => 106.207.205.100\n)\n\n => Array\n(\n => 223.181.8.12\n)\n\n => Array\n(\n => 157.48.149.78\n)\n\n => Array\n(\n => 103.206.138.116\n)\n\n => Array\n(\n => 39.53.119.22\n)\n\n => Array\n(\n => 157.33.232.106\n)\n\n => Array\n(\n => 49.37.205.139\n)\n\n => Array\n(\n => 115.42.68.3\n)\n\n => Array\n(\n => 93.72.182.251\n)\n\n => Array\n(\n => 202.142.166.22\n)\n\n => Array\n(\n => 157.119.81.111\n)\n\n => Array\n(\n => 182.186.116.155\n)\n\n => Array\n(\n => 157.37.171.37\n)\n\n => Array\n(\n => 117.206.164.48\n)\n\n => Array\n(\n => 49.36.52.63\n)\n\n => Array\n(\n => 203.175.72.112\n)\n\n => Array\n(\n => 171.61.132.193\n)\n\n => 
Array\n(\n => 111.119.187.44\n)\n\n => Array\n(\n => 39.37.165.216\n)\n\n => Array\n(\n => 103.86.109.58\n)\n\n => Array\n(\n => 39.59.2.86\n)\n\n => Array\n(\n => 111.119.187.28\n)\n\n => Array\n(\n => 106.201.9.10\n)\n\n => Array\n(\n => 49.35.25.106\n)\n\n => Array\n(\n => 157.49.239.103\n)\n\n => Array\n(\n => 157.49.237.198\n)\n\n => Array\n(\n => 14.248.64.121\n)\n\n => Array\n(\n => 117.102.7.214\n)\n\n => Array\n(\n => 120.29.91.246\n)\n\n => Array\n(\n => 103.7.79.41\n)\n\n => Array\n(\n => 132.154.99.209\n)\n\n => Array\n(\n => 212.36.27.245\n)\n\n => Array\n(\n => 157.44.154.9\n)\n\n => Array\n(\n => 47.31.56.44\n)\n\n => Array\n(\n => 192.142.199.136\n)\n\n => Array\n(\n => 171.61.159.49\n)\n\n => Array\n(\n => 119.160.116.151\n)\n\n => Array\n(\n => 103.98.63.39\n)\n\n => Array\n(\n => 41.60.233.216\n)\n\n => Array\n(\n => 49.36.75.212\n)\n\n => Array\n(\n => 223.188.60.20\n)\n\n => Array\n(\n => 103.98.63.50\n)\n\n => Array\n(\n => 178.162.198.21\n)\n\n => Array\n(\n => 157.46.209.35\n)\n\n => Array\n(\n => 119.155.32.151\n)\n\n => Array\n(\n => 102.185.58.161\n)\n\n => Array\n(\n => 59.96.89.231\n)\n\n => Array\n(\n => 119.155.255.198\n)\n\n => Array\n(\n => 42.107.204.57\n)\n\n => Array\n(\n => 42.106.181.74\n)\n\n => Array\n(\n => 157.46.219.186\n)\n\n => Array\n(\n => 115.42.71.49\n)\n\n => Array\n(\n => 157.46.209.131\n)\n\n => Array\n(\n => 220.81.15.94\n)\n\n => Array\n(\n => 111.119.187.24\n)\n\n => Array\n(\n => 49.37.195.185\n)\n\n => Array\n(\n => 42.106.181.85\n)\n\n => Array\n(\n => 43.249.225.134\n)\n\n => Array\n(\n => 117.206.165.151\n)\n\n => Array\n(\n => 119.153.48.250\n)\n\n => Array\n(\n => 27.4.172.162\n)\n\n => Array\n(\n => 117.20.29.51\n)\n\n => Array\n(\n => 103.98.63.135\n)\n\n => Array\n(\n => 117.7.218.229\n)\n\n => Array\n(\n => 157.49.233.105\n)\n\n => Array\n(\n => 39.53.151.199\n)\n\n => Array\n(\n => 101.255.118.33\n)\n\n => Array\n(\n => 41.141.246.9\n)\n\n => Array\n(\n => 221.132.113.78\n)\n\n => Array\n(\n => 
```

# Archive for June, 2016: Journey Towards Financial Freedom

## 52 Week Challenge ... Credit Card Balance ... Ugh!

June 28th, 2016 at 10:59 pm

Hi, everyone. My credit card payment came due again on Friday and despite all the hard work I did at paying a good amount off, about \$30.00 in finance charges were added. It's not even close to maxed out. I think I need to verify the interest rate, because it's supposed to be 12.9%, so the interest amount seems high. This interest brings the balance to \$3157.00. Ugh!!!

Anyway, I recently got a Pinecone check of \$3.00, so I sent it directly to cc #1.

Old 52 Week Challenge Balance: \$2364.23

\$3.00 Pinecone payment applied to cc #1

New 52 Week Challenge Balance: \$2367.23

After the payment, the cc #1 balance is \$3154.00.

I should add to the above that no recent purchases were made to cc #1 which did not get immediately paid for.

## Payday and the 52 Week Challenge

June 17th, 2016 at 12:40 am

Hi, everyone.
I get paid tomorrow, so I am sending more money to savings, debt, and retirement.

Old 52 Week Challenge: $2229.23

$40.00 regular savings deposit
$20.00 car savings deposit
$20.00 house savings deposit
$5.00 hvac savings deposit
$5.00 medical savings deposit
$5.00 tax/aaa savings deposit
$5.00 tax prep savings deposit
$5.00 escrow savings deposit
$50.00 cc #1 payment
$20.00 xfer to cc #1 from last payday's slush

New 52 Week Challenge: $2364.23

This brings cc #1 down to $3123.75!!!

## More Snowflakes for the 52 Week Challenge

June 9th, 2016 at 12:44 am

Old 52 Week Challenge Balance: $2224.23

$5.00 payment to cc #1

New 52 Week Challenge Balance: $2229.23

This brings the cc #1 balance to $3193.75.

## More Snowflakes for the 52 Week Challenge

June 6th, 2016 at 02:04 pm

Hi, everyone. I am celebrating here today! CC #1 balance is under $3200! Woo-hoo!

Old 52 Week Challenge Balance: $2221.23

$3.00 Pinecone check

New 52 Week Challenge Balance: $2224.23

New cc #1 balance: $3198.75

## More Snowflakes for the 52 Week Challenge

June 4th, 2016 at 06:14 pm

Hi, everyone. I sent $20.00 to cc #1. This brings the cc #1 balance to $3201.75.

Old 52 Week Challenge Balance: $2201.23

$20.00 payment

New 52 Week Challenge Balance: $2221.23

## More Snowflakes for the 52 Week Challenge

June 3rd, 2016 at 10:55 pm

Hi, everyone. I used cc #2 recently to buy something and got 1.5% cash back. Woo-Hoo! I have already requested a credit to the account to help pay off the balance that is there.

Old 52 Week Challenge Balance: $2198.63

$2.60 rewards from cc #2

New 52 Week Challenge Balance: $2201.23

## Payday, Snowflakes, and the 52 Week Challenge

June 2nd, 2016 at 11:58 pm

Hi, everyone.
I get paid tomorrow and so more money is going to savings, retirement, and debt.

Old 52 Week Challenge: $2041.93

Regular Savings Deposit $40.00
Car Savings Deposit $20.00
House Savings Deposit $20.00
Medical Savings Deposit $1.00
Tax/AAA Membership Savings Deposit $1.00
Tax Prep Savings Deposit $3.00
Escrow Savings Deposit $5.00
DH's payment to cc #1 $66.70

New 52 Week Challenge: $2198.63

This brings the cc #1 balance to $3221.75.

I have a Pinecone deposit coming in a day or two. I will send it directly to cc #1 when I get it.
https://answers.everydaycalculation.com/lcm/49-210
Solutions by everydaycalculation.com

## What is the LCM of 49 and 210?

The LCM of 49 and 210 is 1470.

#### Steps to find LCM

1. Find the prime factorization of 49:
49 = 7 × 7
2. Find the prime factorization of 210:
210 = 2 × 3 × 5 × 7
3. Multiply each factor the greater number of times it occurs in steps 1) or 2) above to find the LCM:

LCM = 2 × 3 × 5 × 7 × 7
4. LCM = 1470
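The same calculation can be checked programmatically. The short sketch below is not part of the original page; it uses the identity lcm(a, b) × gcd(a, b) = a × b, which gives the same result as the prime-factorization steps above:

```python
from math import gcd

def lcm(a, b):
    # For positive integers, lcm(a, b) * gcd(a, b) == a * b,
    # so dividing out the gcd leaves each prime factor raised to
    # the greater of its exponents in a and b.
    return a * b // gcd(a, b)

print(lcm(49, 210))  # 1470
```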
https://bcxiaobai.eu.org/post/14378.html
# Data Mining Thinking and Practice 15. K-means clustering: to catch bandits, first catch their king; find the center point, and whatever lies near it belongs to the same cluster

### An example

### Algorithm principle

With that, the algorithm principle of K-means has been fully explained; it is very concise and easy to understand. But there are still some problems to be solved.

#### How to determine the value of k

### Strengths and weaknesses of the algorithm

#### Strengths

- Simple and clear, with low computational complexity. The principle of K-means is very easy to understand, and neither the overall computation nor the mathematical reasoning behind it is very difficult.

- Fast convergence. Usually a few rounds of iteration are enough to obtain fairly good results.

#### Weaknesses

- Unstable results. Because the initial centers are set randomly, and depending on how the data are distributed, each run often produces somewhat different results.

- Unable to handle imbalanced samples. It cannot make sound judgments when the amounts of data in the different classes differ greatly.

- Easily converges to a local optimum. At a local optimum, iteration can no longer change the center points, so the iteration ends.

- Strongly affected by noise. Noisy data points distort the calculation of the means, which in turn biases the clustering result.

### Hands-on attempt

```python
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def draw_result(train_x, labels, cents, title):
    """Plot the clustering result.
    labels: cluster labels after clustering, integers starting from 0
    cents: centroid coordinates
    """
    n_clusters = np.unique(labels).shape[0]
    color = ["red", "orange", "yellow"]
    plt.figure()
    plt.title(title)
    for i in range(n_clusters):
        current_data = train_x[labels == i]
        plt.scatter(current_data[:, 0], current_data[:, 1], c=color[i])
        # mark the centroid position with a blue star
        plt.scatter(cents[i, 0], cents[i, 1], c="blue", marker="*", s=100)
    return plt

if __name__ == '__main__':
    iris = datasets.load_iris()
    iris_x = iris.data
    # set the number of clusters to 3
    clf = KMeans(n_clusters=3, max_iter=10, n_init=10, init="k-means++",
                 algorithm="full", tol=1e-4, n_jobs=-1, random_state=1)
    clf.fit(iris_x)
    print("SSE = {0}".format(clf.inertia_))
    draw_result(iris_x, clf.labels_, clf.cluster_centers_, "kmeans").show()

# Output:
# SSE = 78.851441426146
```

### Summary

The complete code is available at: https://github.com/icegomic/GomicDatamining/tree/master/LagouCodes

### Featured comments

##### *明:

THE END
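The "How to determine the value of k" question above is commonly answered with the elbow method: run the clustering for several candidate values of k and look for the point where the SSE stops dropping sharply. Below is a minimal plain-Python sketch of that idea on synthetic data; this is my illustration rather than the author's code, and the farthest-point seeding and all numbers are assumptions made for the example:

```python
import random

def dist2(p, q):
    # squared Euclidean distance between 2-D points
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def init_centers(points, k):
    # deterministic farthest-point seeding, used here to sidestep the
    # unstable random initialization listed under "weaknesses" above
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans_sse(points, k, iters=20):
    # plain Lloyd iterations; returns the final sum of squared errors (SSE)
    centers = init_centers(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assignment step
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):    # update step
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return sum(min(dist2(p, c) for c in centers) for p in points)

# three well-separated synthetic blobs of 30 points each
rng = random.Random(1)
points = [(bx + rng.gauss(0, 1), by + rng.gauss(0, 1))
          for bx, by in [(0, 0), (10, 10), (20, 0)] for _ in range(30)]

for k in range(1, 6):
    print(k, round(kmeans_sse(points, k), 1))  # SSE drops sharply until k = 3
```

The "elbow" is the k after which adding more centers buys little: with three true blobs, the SSE curve flattens after k = 3.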
https://ixtrieve.fh-koeln.de/birds/litie/document/16266
# Document (#16266)

Author
Harken, S.E.
Title
Outsourcing: ready, set, go? : A cataloger's perspective
Source
Cataloging and classification quarterly. 23(1996) no.2, S.67-87
Year
1996
Abstract
Considers the issues involved in outsourcing library cataloguing, including: the need to have dependable, good-quality records available to outsource; suitable vendors; librarians able to communicate their needs; and a means of acquiring bibliographic records and processing them relatively easily, at a reasonable price. Describes the experience of the Fritz Library, North Dakota University, in using PALS, an online system based on OCLC MARC, and outlines the pitfalls to be avoided for successful outsourcing.
Theme
Formalerschließung

## Similar documents (content)

1. Horenstein, B.: Outsourcing copy cataloging at Adelphi University Libraries (1999) [score: 0.30]
2. Lam, V.-T.: Quality control issues in outsourcing cataloging in US and Canadian academic libraries (2005) [score: 0.24]
3. Wilson, K.A.: Vendor-supplied cataloging and contract cataloging services : a report of the ALCTS Creative Ideas in Technical Services Discussion Group, American Library Association, Midwinter Meeting, Los Angeles, February 1994 (1994) [score: 0.20]
4. Simpson, B.; Williams, P.: The cataloger's workstation revisited : utilizing cataloger's desktop (2001) [score: 0.15]
5. Libby, K.A.; Caudle, D.M.: A survey of the outsourcing of cataloging in academic libraries (1997) [score: 0.15]
https://users.rust-lang.org/t/match-returning-range/95465
# Match returning range

Want to iterate over a sequence of integers - several different possible sequences returned by a function. I expected the below would work, but it doesn't - the `.rev()` arm has a different type.
The default arm would also have a different type without `.step_by(1)`.
That doesn't seem reasonable?

```
fn ray(i: usize, ray: usize) -> impl Iterator<Item = usize> {
    match ray {
        0 => (i + 8..7 * 8 + 1 + i % 8).step_by(8), // w
        1 => (i % 8..i - 8 + 1).step_by(8).rev(), // e
        _ => (0..0).step_by(1),
    }
}
```

Iterator combinators take one iterator type and return another, usually as some sort of wrapper. So a different chain of combinators is going to be a different type.

There might be some convolutions in this particular case to make things the same type without type erasure, but that's probably the easiest "out" to code.

```
fn ray(i: usize, ray: usize) -> Box<dyn Iterator<Item = usize>> {
    match ray {
        0 => Box::new((i + 8..7 * 8 + 1 + i % 8).step_by(8)), // w
        1 => Box::new((i % 8..i - 8 + 1).step_by(8).rev()), // e
        _ => Box::new((0..0).step_by(1)),
    }
}
```

2 Likes

With only two types, `Either` is quite straightforward, too.

```
use either::Either;
fn ray(i: usize, ray: usize) -> impl Iterator<Item = usize> {
    match ray {
        0 => Either::Left((i + 8..7 * 8 + 1 + i % 8).step_by(8)), // w
        1 => Either::Right((i % 8..i - 8 + 1).step_by(8).rev()), // e
        _ => Either::Left((0..0).step_by(1)),
    }
}
```

Rust Playground

With more types, it's possible to start nesting it, but the more cases the more you write

```
use either::Either;
fn ray(i: usize, ray: usize) -> impl Iterator<Item = usize> {
    use Either::*;
    match ray {
        0 => Left((i + 8..7 * 8 + 1 + i % 8).step_by(8)), // w
        1 => Right(Left((i % 8..i - 8 + 1).step_by(8).rev())), // e
        _ => Right(Right(0..0)),
    }
}
```

4 Likes

The takeaway here is that `-> impl Trait` allows you to return any type that implements the trait, but
it must be a specific type. It is the same as naming the type, but allows the compiler to infer the name for you. (And sometimes it is impossible to name a type, so you must let the compiler do it.)

`-> dyn Trait` would be the way to allow you to return different types that implement the trait. But since `dyn Trait` is unsized you need to make it sized, which is where the `Box` comes in for the first solution. The problem with this solution is that it does a relatively expensive heap allocation.

The second solution wraps both iterator types in a single type, allowing that type to be the inferred return type. It avoids the allocation, but as @steffahn demonstrates, gets uglier if more than 2 types need to be wrapped.

2 Likes

Thanks for that - it seems to run efficiently so I assume there is zero cost for wrapping in Left/Right and also for tacking on step_by(1) to get the type to match?

I ended up with the below which has 8 arms - but two types are sufficient - it's the squares a queen can move to on an empty chess board. Works - but the extra cruft to get the types to check out is not pretty...

```
use std::cmp::min;

use either::Either;
fn ray(j: usize, ray: usize) -> impl Iterator<Item = isize> {
    let i = j as isize;
    match ray {
        0 => Either::Left((i + 8..7 * 8 + 1 + i % 8).step_by(8)), // w
        1 => Either::Right((i % 8..i - 8 + 1).step_by(8).rev()), // e
        2 => Either::Left((i + 1..(i / 8) * 8 + 8).step_by(1)), // 2=n
        3 => Either::Right(
            ((i / 8) * 8..i).step_by(1).rev(), // 3=s
        ),
        4 => Either::Left((i + 9..min(64, i + (8 - i % 8) * 9)).step_by(9)),
        5 => Either::Right(
            (i - 7 * min(i / 8, 7 - i % 8)..i) // 5=u=ne
                .step_by(7)
                .rev(),
        ),
        6 => Either::Left(
            (i + 7..min(64, i + i % 8 * 7 + 1)) // 6=v=sw
                .step_by(7),
        ),
        7 => Either::Right(
            (i - 9 * min(i / 8, i % 8)..i) // 7=x
                .step_by(9)
                .rev(),
        ),
        _ => Either::Left((0..0).step_by(1)),
    }
}
```

Why do you want to return iterators here?
That's 28 bytes at most, or, maybe even better, 64 bits with a representation of the accessible squares. Much cheaper and simpler to pass around than iterators.

Not even on AVR would this make sense: all the data size savings would be eaten by the increase in code size.

Of course there is overhead. The `next` method for the `Either` iterator will need to `match` on the `Either` variant, so there's potential for overhead in every iteration. This kind of iterator is one of those typical cases where `for_each` (or many other iterator methods that do the whole iteration with one method call) has the potential to be better than a `for` loop, because the `for_each` will also only need to `match` once, but for the whole iteration.

Similarly, `step_by` will make `next()` delegate to usage of `nth()` on the underlying iterator (as far as I can tell, in the current standard library implementation), which for ranges will likely be minimal overhead, as I'd assume this will effectively end up with the `1` stored in the `StepBy` iterator being loaded and added instead of a sort of hard-coded/compiled increment operation.

Now, compared to usage of `Box<dyn Iterator…>`, the overhead from `Either` and `step_by` is likely smaller, so yeah, the observation that it "runs efficiently" is probably an accurate one, too, and after all, "overhead" compared to what alternative would we even be discussing?

1. Unless perhaps if the use site is so specific that the optimizer manages to get rid of all overhead.
↩︎

1 Like

"overhead" compared to what alternative would we even be discussing?

I'm comparing it to iterating over the same sequences in a precomputed `Vec<usize>` - I have not timed it accurately, but it seems to be only a small overhead.

Getting it is easy - shoving it inside your loved one and smuggling it across the border is not.

Why use iterators - because I want to iterate over the individual positions regardless of whether they are precomputed or calculated on the fly.

Precomputing is manageable - but you are underestimating the size. With bitmaps one result fits in a 64-bit word. For the whole board it is 64*64. Bishop and rook would need their own bitmaps - they can't share as they do with the above function - though the queen bitmap could be derived.

That space is manageable - `3*64*64 = 12288 bits` - but it is still not obviously faster when you have to filter out the individual bits to get the same sequences.

Maybe I don't understand something, or you don't understand something.

You do know that on modern CPUs (for some definition of modern… basically anything newer than the 80386, which was presented 38 years ago) there are leading_zeros and trailing_zeros functions, right? On modern CPUs (this time really modern: Ryzen, the latest generations of Intel Core, ARM CPUs made in the last 5-7 years) you can execute these instructions as fast as normal arithmetic instructions. Certainly much faster than these iterators, which would execute half a dozen if not a dozen instructions for each position.

The big question is whether you would push other things out of the L1 cache, but 12288 bits is only 1.5K, thus everything should work fine.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.
https://planetmath.org/RayClassGroup
# ray class group

Let $\mathfrak{m}$ be a modulus for a number field $K$. The ray class group of $K$ mod $\mathfrak{m}$ is the group $\mathbb{I}^{\mathfrak{m}}/K_{\mathfrak{m},1}$, where $\mathbb{I}^{\mathfrak{m}}$ denotes the group of fractional ideals of $K$ that are relatively prime to $\mathfrak{m}$, and $K_{\mathfrak{m},1}$ denotes the subgroup of principal fractional ideals $(\alpha)$ generated by elements $\alpha \in K^{\times}$ with $\alpha \equiv 1 \pmod{\mathfrak{m}}$ (a multiplicative congruence, which also requires $\alpha$ to be positive at the real places dividing $\mathfrak{m}$).

Title: ray class group | Canonical name: RayClassGroup | Date of creation: 2013-03-22 12:50:19 | Last modified: 2013-03-22 12:50:19 | Owner: djao (24) | Last modified by: djao (24) | Numerical id: 4 | Author: djao (24) | Entry type: Definition | Classification: msc 11R29
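A standard worked example (my addition, not part of the original entry) helps make the definition concrete: over $\mathbb{Q}$, ray class groups recover the familiar unit groups modulo $m$.

```latex
% Example: K = \mathbb{Q}, modulus \mathfrak{m} = (m)\infty for a positive integer m.
% Every fractional ideal prime to m is (a/b) with a, b positive integers prime to m,
% and (a/b) lies in K_{\mathfrak{m},1} exactly when a \equiv b \pmod{m}.
% Sending (a/b) to a b^{-1} \bmod m gives an isomorphism
\[
  \mathbb{I}^{\mathfrak{m}}/K_{\mathfrak{m},1} \;\cong\; (\mathbb{Z}/m\mathbb{Z})^{\times}.
\]
% With the trivial modulus \mathfrak{m} = (1), the ray class group is the
% ordinary ideal class group, which for \mathbb{Q} is trivial.
```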
http://www.josephkoonce.org/2016/06/what-processes-drive-soil-moisture.html
## Thursday, June 23, 2016

### What processes drive soil moisture dynamics?

The primary processes driving soil moisture dynamics in the Vacant to Vibrant parcels are rainfall events, which act as sources (both direct rainfall and runoff), and losses due to infiltration, evaporation, and plant-mediated evapotranspiration. In the previous blog post (here), I presented evidence that the patterns of diurnal variation in soil moisture were driven by variation in soil temperature and did not reflect the effects of the primary drivers. The regularity of the diurnal variability and its low amplitude allow the use of smoothing functions to characterize longer-term declines in soil moisture associated with the loss processes (infiltration, evaporation, and evapotranspiration). Figure 1 shows the application of a simple linear regression to a time series between rainfall events. Using the linear fit to the time series, Figure 2 shows that removing the longer-term trend emphasizes the diurnal variability, and Figure 3 shows that the variation in the residuals is correlated with observed soil temperature.

Figure 1. Semi-logarithmic plot of the decline of soil moisture over the period May 25 to June 6, 2016 for the Gary E1 parcel at 3 cm. The blue line is a linear regression.

Figure 2. Plot of the residuals for the regression in Figure 1.

Figure 3. Plot of the residuals from Figure 2 showing association with measured soil temperature at 3 cm depth. The blue line is a regression between the two variables (r = 0.80, p < 0.0001).

The advantage of the linear regression in Figure 1 is that its slope is a first-order estimate (i.e. an exponential decay rate) of the rate of change of soil moisture between rainfall events. The estimated rate for the Gary E1 parcel at 3 cm soil depth is -1.68e-07 (1/s). This method can be applied to trends at other depths or to the weighted-average soil moisture, and it provides data for comparing experimental and control parcels in the different soil types of the neighborhoods of the three cities in the project.

The remaining decline pattern (presented in the previous blog post) is the transient associated with a rainfall event. Figure 4 presents the observed relation between the increment of soil moisture and the subsequent first-order rate of decline.

Figure 4. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden.

Figure 5. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden for soil moisture increments greater than 0.012. The blue line is a regression line and the shaded area represents the standard error bounds.

The slope of the relation in Figure 5 is -1.488e-05 (r = -0.5408624, p = 0.1663, NS). There is a hint of an inverse relation between the increment of soil moisture following a rainfall event and the subsequent decline rate, but it is not statistically significant. The average rate of decline, however, is 1.02e-6 (1/s). An outlier occurs on April 30, 2016. On 4/29 and 4/30 (see Figure 6), there was a double increase. Identifying extrema near this interval is a problem, and it is reasonable to regard the April 30 point as anomalous. Figure 7 shows the result of eliminating this point. The estimate of the decline rate is a statistically significant -1.90e-5, which is nearly two orders of magnitude greater than the rate of decline of the inter-event interval in Figure 1. Clearly, more data are needed to explore this relationship and its drivers, but it seems reasonable to expect that the rate of decline following a rainfall event is a function of the soil moisture gradient and the permeability of the soil.

Figure 6. Pattern of variation in soil moisture (m3/m3) at 3 cm soil depth in the Gary E1 rain garden.

Figure 7. Relation between soil moisture increment and subsequent rate of decline in soil moisture at 3 cm depth in the Gary E1 rain garden for soil moisture increments greater than 0.012 and eliminating the April 30 outlier. The blue line is a regression line (r = -0.96, p < 0.001) and the shaded area represents the standard error bounds.
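The slope estimate described above is just an ordinary least-squares fit of ln(soil moisture) against time; the fitted slope is the first-order decay rate in 1/s. A minimal sketch in pure Python (the function name and the synthetic drying curve are mine, chosen to match the Gary E1 estimate):

```python
import math

def decay_rate(times_s, moisture):
    # Least-squares slope of ln(theta) vs. time: the first-order
    # (exponential) decay rate of soil moisture between rainfall events.
    logs = [math.log(m) for m in moisture]
    n = len(times_s)
    t_mean = sum(times_s) / n
    y_mean = sum(logs) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_s, logs))
    den = sum((t - t_mean) ** 2 for t in times_s)
    return num / den  # units: 1/s (negative while the soil dries)

# Synthetic drying curve with rate -1.68e-7 1/s (the Gary E1 estimate):
# hourly readings over 12 days, starting at 0.30 m3/m3.
rate = -1.68e-7
times = [i * 3600.0 for i in range(24 * 12)]
theta = [0.30 * math.exp(rate * t) for t in times]
print(decay_rate(times, theta))  # recovers ≈ -1.68e-7
```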
https://mathsci.kaist.ac.kr/~sangil/seminar/tag/mariachudnovsky/
## Posts Tagged ‘MariaChudnovsky’

### Paul Seymour and Maria Chudnovsky, Theory of excluding induced subgraphs [KAIST CMC Annual Distinguished Lecture]

Friday, December 5th, 2014

Theory of excluding induced subgraphs
Paul Seymour, Department of Mathematics, Princeton University, Princeton, NJ, USA
Maria Chudnovsky, Columbia University, New York, NY, USA
2014/12/11 Thu 10:30AM-12PM, 1:30PM-3PM
2014/12/12 Fri 10:30AM-12PM, 1:30PM-3PM
E6-1, Room 1501

This will be a series of four lectures, beginning with a general introduction to the area of induced subgraphs and later focusing on several recent results. We will examine the structure of graphs that do not contain certain induced subgraphs, and in particular study relations between the clique number, stability number and chromatic number of these graphs. Later topics will include the strong perfect graph theorem and recent progress on the Erdős–Hajnal conjecture and on various conjectures of Gyárfás.

### KAIST Graph Theory Day 2012

Thursday, November 29th, 2012

2012/12/13 Thursday (Room: 3433, Building E6-1)

List of speakers

- 11AM-12PM Tomáš Kaiser (University of West Bohemia, Czech Republic): Applications of the matroid-complex intersection theorem
- 1:30PM-2:30PM Paul Seymour (Princeton University, USA): Graphs, Tournaments, Colouring and Containment
- 2:45PM-3:45PM Maria Chudnovsky (Columbia University, USA): Excluding paths and antipaths
- 4:30PM-5:30PM Daniel Kráľ (University of Warwick, UK): Quasirandomness and property testing of permutations (Colloquium)

Applications of the matroid-complex intersection theorems
Tomáš Kaiser
The matroid intersection theorem of Edmonds gives a formula for the maximum size of a common independent set in two matroids on the same ground set. Aharoni and Berger generalized this theorem to the ‘topological’ setting, where one of the matroids is replaced by an arbitrary simplicial complex. I will present two applications of this result to graph-theoretical problems. The first application is related to the existence of spanning 2-walks in tough graphs; the other one is more recent and gives a bound on the fractional arboricity of a graph G ensuring that G can be covered by k forests and a matching. In both cases, slightly better results can be obtained by other methods, but there seems to be room for improvement on the topological side as well.

Graphs, tournaments, colouring and containment
Paul Seymour
Some tournaments H are heroes; they have the property that all tournaments not containing H as a subtournament have bounded chromatic number (colouring a tournament means partitioning its vertex set into transitive subsets). In joint work with eight authors, we found all heroes explicitly. That was great fun, and it would be nice to find an analogue for graphs instead of tournaments.
The problem is too trivial for graphs if we only exclude one graph H, but it becomes fun again if we exclude a finite set of graphs. The Gyárfás–Sumner conjecture says that if we exclude a forest and a clique, then chromatic number is bounded. So what other combinations of excluded subgraphs will give bounded chromatic (or cochromatic) number? It turns out (assuming the Gyárfás–Sumner conjecture) that for any finite set S of graphs, the graphs not containing any member of S all have bounded cochromatic number if and only if S contains a complete multipartite graph, the complement of a complete multipartite graph, a forest, and the complement of a forest.
Proving this led us to the following: for every complete multipartite graph H and every disjoint union of cliques J, there is a number n with the following property. For every graph G, if G contains neither of H, J as an induced subgraph, then V(G) can be partitioned into two sets such that the first contains no n-vertex clique and the second no n-vertex stable set.
In turn, this led us (with Alex Scott) to the following stronger result. Let H be the disjoint union of H_1, H_2, and let J be obtained from the disjoint union of J_1, J_2 by making every vertex of J_1 adjacent to every vertex of J_2. Then there is a number n such that for every graph G containing neither of H, J as an induced subgraph, V(G) can be partitioned into n sets such that for each of them, say X, one of H_1, H_2, J_1, J_2 is not contained in G|X.
How about a tournament analogue of this? It exists, and the same (short) proof works; and this leads to a short proof of the most difficult result of the heroes paper that we started with.
There are a number of other related results and open questions. Joint work with Maria Chudnovsky.

Excluding paths and antipaths
Maria Chudnovsky
The Erdős–Hajnal conjecture states that for every graph H, there exists a constant δ(H) > 0 such that every n-vertex graph with no induced subgraph isomorphic to H contains a clique or a stable set of size at least n^{δ(H)}. This conjecture is still open. We consider a variant of the conjecture, where instead of excluding a single graph H as an induced subgraph, a family of graphs is excluded. We prove this modified conjecture for the case when the five-edge path and its complement are excluded. Our second result is an asymmetric version of this: we prove that for every graph G such that G contains no induced six-edge path, and the complement of G contains no induced four-edge path, G contains a polynomial-size clique or stable set. This is joint work with Paul Seymour.

Quasirandomness and property testing of permutations
Daniel Kráľ
A systematic study of large combinatorial objects has recently led to discovering many connections between discrete mathematics and analysis. In this talk, we explore the analytic view of large permutations. We associate every sequence of permutations with a measure on a unit square and show the following: if the density of every 4-element subpermutation in a permutation p is 1/4! + o(1), then the density of every k-element subpermutation is 1/k! + o(1). This solves a question of Graham whether quasirandomness of a permutation is captured by densities of its 4-element subpermutations. At the end of the talk, we present a result related to an area of computer science called property testing. A property tester is an algorithm which determines (with a small error probability) properties of a large input object based on a small sample of it. Specifically, we prove a conjecture of Hoppen, Kohayakawa, Moreira and Sampaio asserting that hereditary properties of permutations are testable with respect to the so-called Kendall's tau distance.
The results in this talk were obtained jointly with Tereza Klimošová or Oleg Pikhurko.

### KAIST Graph Theory Day 2011

Sunday, April 3rd, 2011

2011/5/10 Tuesday (Room: 1501, Building E6-1)

List of speakers

- 11AM-12PM Maria Chudnovsky (Columbia University, USA): Coloring some perfect graphs
- 2PM-3PM Ken-ichi Kawarabayashi (NII, Japan): A separator theorem in minor-closed classes of graphs
- 4PM-5PM Bojan Mohar (SFU, Canada): On the chromatic number of digraphs
- 5PM-6PM Paul Seymour (Princeton University, USA): Colouring Tournaments

Coloring some perfect graphs
Maria Chudnovsky
A graph G is called perfect if for every induced subgraph H of G, the chromatic number and the clique number of H are equal. After the recent proof of the Strong Perfect Graph Theorem, and the discovery of a polynomial-time recognition algorithm, the central remaining open question about perfect graphs is finding a combinatorial polynomial-time coloring algorithm. (There is a polynomial-time algorithm known, using the ellipsoid method.) Recently, we were able to find such an algorithm for a certain class of perfect graphs that includes all perfect graphs admitting no balanced skew-partition. The algorithm is based on finding special "extremal" decompositions in such graphs; we also use the idea of "trigraphs".
This is joint work with Nicolas Trotignon, Théophile Trunck and Kristina Vušković.

A separator theorem in minor-closed classes of graphs
Ken-ichi Kawarabayashi
It is shown that for each t, there is a separator of size $$O(t \sqrt{n})$$ in any n-vertex graph G with no $K_t$-minor.
This settles a conjecture of Alon, Seymour and Thomas (J. Amer. Math. Soc., 1990 and STOC '90), and generalizes a result of Djidjev (1981) and of Gilbert, Hutchinson and Tarjan (J. Algorithms, 1984), who proved independently that every graph with n vertices and genus g has a separator of order $$O(\sqrt{gn})$$, because $K_t$ has genus $\Omega(t^2)$.
Joint work with Bruce Reed.

On the chromatic number of digraphs
Bojan Mohar
Several reasons will be presented why the natural extension of the notion of undirected graph colorings is to partition the vertex set of a digraph into acyclic sets. Additionally, some recent results in this area, the proofs of which use probabilistic techniques, will be outlined.

Colouring Tournaments
Paul Seymour
A tournament is a digraph obtained from a complete graph by directing its edges, and colouring a tournament means partitioning its vertex set into acyclic subsets (acyclic means the subdigraph induced on the subset has no directed cycles). This concept is quite like that for graph-colouring, but different. For instance, there are some tournaments H such that every tournament not containing H as a subdigraph has bounded chromatic number. We call them heroes; for example, all tournaments with at most four vertices are heroes.
It turns out to be a fun problem to figure out exactly which tournaments are heroes. We have recently managed to do this, in joint work with Berger, Choromanski, Chudnovsky, Fox, Loebl, Scott and Thomassé, and this talk is about the solution.

### Maria Chudnovsky, Packing seagulls in graphs with no stable set of size three

Monday, April 13th, 2009

Packing seagulls in graphs with no stable set of size three
Maria Chudnovsky
Department of Industrial Engineering and Operations Research & Department of Mathematics, Columbia University, New York, USA
2009/5/21 Thursday 2PM-3PM

Hadwiger's conjecture is a well-known open problem in graph theory. It states that every graph with chromatic number k contains a certain structure, called a "clique minor", of size k. An interesting special case of the conjecture, which is still wide open, is when the graph G does not contain three pairwise non-adjacent vertices. In this case, it should be true that G contains a clique minor of size t where $$t = \lceil |V(G)|/2 \rceil$$. This remains open, but Jonah Blasiak proved it in the subcase when |V(G)| is even and the vertex set of G is the union of three cliques. Here we prove a strengthening of Blasiak's result: that the conjecture holds if some clique in G contains at least |V(G)|/4 vertices.

This is a consequence of a result about packing "seagulls". A seagull in G is an induced three-vertex path. It is not known in general how to decide in polynomial time whether a graph contains k pairwise disjoint seagulls; but we answer this for graphs with no stable set of size three.

This is joint work with Paul Seymour.
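For very small tournaments, the chromatic number defined in Seymour's "Colouring Tournaments" abstract (the minimum number of acyclic parts in a vertex partition) can be found by brute force. A sketch (the edge representation and exhaustive search strategy are mine, and it is only feasible for a handful of vertices):

```python
import itertools

def is_acyclic(vertices, beats):
    # The induced subtournament on `vertices` is acyclic iff it has a
    # topological order; repeatedly peel off a vertex with no incoming
    # edge inside the subset (Kahn-style elimination).
    verts = set(vertices)
    while verts:
        source = next((v for v in verts
                       if not any((u, v) in beats for u in verts if u != v)),
                      None)
        if source is None:
            return False  # every vertex has an in-edge: a directed cycle exists
        verts.remove(source)
    return True

def tournament_chromatic_number(n, beats):
    # Smallest k such that vertices 0..n-1 can be partitioned into k
    # acyclic (i.e. transitive) subsets; exhaustive over all colourings.
    for k in range(1, n + 1):
        for colouring in itertools.product(range(k), repeat=n):
            classes = [[v for v in range(n) if colouring[v] == c] for c in range(k)]
            if all(is_acyclic(cls, beats) for cls in classes):
                return k
    return n

# A directed 3-cycle 0 -> 1 -> 2 -> 0 needs two colour classes.
print(tournament_chromatic_number(3, {(0, 1), (1, 2), (2, 0)}))  # -> 2
```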
https://math.stackexchange.com/questions/177160/integral-int-infty-infty-frac-lnx21x21dx/177166
# Integral: $\int^\infty_{-\infty}\frac{\ln(x^{2}+1)}{x^{2}+1}\,dx$

How to evaluate:

$$\int^\infty_{-\infty}\frac{\ln(x^{2}+1)}{x^{2}+1}\,dx$$

Maybe we can evaluate it using the well-known result $\int_{0}^{\pi/2} \ln\sin t \,\mathrm{d}t=\int_{0}^{\pi/2} \ln\cos t \,\mathrm{d}t=-\frac{\pi}{2}\ln 2$.

But how do I evaluate it using that?

## 4 Answers

Letting $x=\tan t$ leads to $$-4 \int^{\pi/2}_0 \ln(\cos t)\, dt = 2\pi \ln 2.$$

Alternatively to Ragib's substitution, you could consider $I(s) = \int_\mathbb{R} \left(1+x^2\right)^s \mathrm{d}x$, and then evaluate $I'(-1)$.

$$I(s) = \int_\mathbb{R} \left(1+x^2\right)^s \mathrm{d}x = 2 \int_0^\infty \left(1+x^2\right)^s \mathrm{d}x \stackrel{x^2=\frac{u}{1-u}}{=} \int_0^1 \left(1-u\right)^{-\frac{3}{2}-s}\frac{\mathrm{d} u}{\sqrt{u}} = B\left(\frac{1}{2}, -\frac{1}{2}-s\right)$$

Thus we established $I(s) = \sqrt{\pi}\frac{\Gamma\left(-\frac{1}{2}-s\right)}{\Gamma\left(-s\right)}$. We are now ready to compute the derivative:
$$I'(s) = I(s) \left( \psi(-s) - \psi\left(-s-\frac{1}{2}\right) \right)$$
and
$$I'(-1) = I(-1) \left( \psi(1) - \psi\left(\frac{1}{2}\right) \right) = \sqrt{\pi} \frac{\Gamma\left(\frac{1}{2}\right)}{\Gamma(1)} \left( \psi(1) - \psi\left(\frac{1}{2}\right) \right) = 2 \pi \ln 2,$$
where $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$ was used, as well as the polygamma duplication identity
$$\psi(2s) = \ln 2 + \frac{1}{2}\left(\psi(s) + \psi\left(s+\frac{1}{2}\right)\right),$$
which evaluated at $s=\frac{1}{2}$ gives $\psi(1) - \psi\left(\frac{1}{2}\right) = 2 \ln 2$.

Let $$I(\alpha)=\int_{-\infty}^{\infty}\frac{\ln(\alpha x^2+1)}{x^2+1}\,dx.$$ Then
\begin{eqnarray*}
I'(\alpha)&=&\int_{-\infty}^{\infty}\frac{x^2}{(\alpha x^2+1)(x^2+1)}\,dx\\
&=&\frac{1}{\alpha-1}\int_{-\infty}^{\infty}\left[\frac{1}{x^2+1}-\frac{1}{\alpha x^2+1}\right]dx\\
&=&\frac{\pi}{\sqrt{\alpha}+\alpha}
\end{eqnarray*}
and hence
\begin{eqnarray*}
I(\alpha)&=&\int\frac{\pi}{\sqrt{\alpha}+\alpha}\,d\alpha\\
&=&2\pi\ln(1+\sqrt{\alpha})+C.
\end{eqnarray*}
Clearly $I(0)=0$ implies $C=0$. Thus $$\int_{-\infty}^{\infty}\frac{\ln(x^2+1)}{x^2+1}\,dx=I(1)=2\pi\ln 2.$$

Since $\frac{\ln(x^{2}+1)}{x^{2}+1}$ is an even function,
$$\int^\infty_{-\infty}\frac{\ln(x^{2}+1)}{x^{2}+1}\,dx = 2\int^\infty_{0}\frac{\ln(x^{2}+1)}{x^{2}+1}\,dx.$$
From here, you can follow Ragib Zaman's process.
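The closed form $2\pi\ln 2 \approx 4.35517$ is easy to confirm numerically via the same substitution $x = \tan t$, which turns the integrand into $-2\ln\cos t$ on $(-\pi/2, \pi/2)$. A sketch (the midpoint rule and step count are my choices; midpoints avoid evaluating at the integrable log singularity at $t = \pi/2$):

```python
import math

def integral_via_tan_substitution(n=200_000):
    # x = tan(t) maps (-pi/2, pi/2) onto the real line and turns
    # ln(x^2+1)/(x^2+1) dx into ln(sec^2 t) dt = -2 ln(cos t) dt;
    # by symmetry the integral equals -4 * int_0^{pi/2} ln(cos t) dt.
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    # Composite midpoint rule: midpoints never hit t = pi/2 exactly,
    # so the log singularity there causes no domain error.
    total = sum(-4.0 * math.log(math.cos(a + (k + 0.5) * h)) for k in range(n))
    return total * h

approx = integral_via_tan_substitution()
exact = 2 * math.pi * math.log(2)  # 2*pi*ln 2 ≈ 4.35517
print(approx, exact)
```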
https://www.depanvolets95.fr/2016/6052-how-to-calculate-vibrating-screen.html
# How To Calculate Vibrating Screen

A collection of article snippets on vibrating-screen calculations (duplicated listings condensed):

- How to calculate the size of a vibrating screen: sizing a vibrating screen requires a bed height calculation — the depth of material on the deck is critical sizing information.
- TPH calculation of a vibrating screen: the tonnes-per-hour calculation method used for crushers in the cement industry also applies to vibrating screens.
- Banana-type vibrating screen area calculation: screen area and capacity can be estimated with vibrating-screen capacity software.
- Calculation of g-force in a vibrating screen: basics of determining screen g-force, e.g. for coal surface mining.
- Capacity calculation formula for a vibrating screen: failure and sensitivity analysis of a reconfigurable vibrating screen (RVS), which uses a simple theory of reconfigurability to diminish structural failures under steady loading conditions and to increase capacity.
- Length of vibrating screen / stroke calculation: horizontal vibrating screens are characterized by their stroke length; for a screen with square holes of side length d, the dynamic characteristics can be calculated.
- Power requirement of a vibrating screen: motor power calculation for vibrating screens and vibratory feeders.
- Screen efficiency calculation: classifier and screen efficiency is computed from mass-balance calculations; a separate calculation is required for each deck of a multiple-deck screen.
- Vibrating conveyors: Vibra Screw's line of vibrating conveyors offers a simple and efficient means to meter, screen, and convey virtually any dry bulk material.
- Basic capacity calculation: per the principles of screening and sizing, the calculation gives the basic capacity of each deck and the total capacity of the vibrating screen from the square footage of the screen deck.
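None of the snippets above actually state the g-force formula. For illustration (this is standard simple-harmonic-motion physics, not taken from the source page): a screen with stroke s (peak-to-peak, mm) running at N rpm has peak acceleration (s/2)·ω² with ω = 2πN/60, usually quoted in multiples of g:

```python
import math

def screen_g_force(stroke_mm, speed_rpm):
    # Peak acceleration of simple harmonic screen motion, in g's:
    #   a = (stroke/2) * omega^2,  omega = 2*pi*rpm/60
    omega = 2 * math.pi * speed_rpm / 60.0     # angular frequency, rad/s
    amplitude_m = (stroke_mm / 1000.0) / 2.0   # half the peak-to-peak stroke
    return amplitude_m * omega ** 2 / 9.81

print(round(screen_g_force(10, 900), 2))  # a 10 mm stroke at 900 rpm -> ~4.53 g
```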
https://malaysiaigcse.com/shop/edexcel-level-maths-topical-book-core-mechanics-statistic-question-mark-scheme/
# Edexcel A Level Maths Topical Book: Core, Mechanics, Statistics, Question & Mark Scheme

RM 493.00

A. CORE 1
Section 1: Algebra and Functions
Section 2: Coordinate Geometry in the (x,y) Plane
Section 3: Sequences and Series
Section 4: Differentiation
Section 5: Integration

B. CORE 2
Section 1: Algebra and Functions
Section 2: Coordinate Geometry in the (x,y) Plane
Section 3: Sequences and Series
Section 4: Trigonometry
Section 5: Exponentials and Logarithms
Section 6: Differentiation
Section 7: Integration

C. CORE 3
Section 1: Algebra and Functions
Section 2: Trigonometry
Section 3: Exponentials and Logarithms
Section 4: Differentiation
Section 5: Numerical Methods

D. CORE 4
Section 1: Algebra and Functions
Section 2: Coordinate Geometry in the (x,y) Plane
Section 3: Sequences and Series
Section 4: Differentiation
Section 5: Integration
Section 6: Vectors
Section 7: Trigonometry

E. MECHANICS 1
Section 1: Collisions
Section 2: Dynamics
Section 3: Kinematics
Section 4: Modelling
Section 5: Moments
Section 6: Statics

F. STATISTICS 1
Section 1: Mathematical Models in Probability and Statistics
Section 2: Representation and Summary of Data
Section 3: Probability
Section 4: Correlation and Regression
Section 5: Discrete Random Variables
Section 6: The Normal Distribution

Weight: 6 kg
https://bowaggoner.com/blog/2016/09-20-convexity/
[ "", null, "# The Tiger's Stripes\n\nA technical blog on math, computer science, and game theory.\n\nAuthor: Bo Waggoner RSS feed", null, "# Convexity\n\nPosted: 2016-09-20.\n\nConvexity is a simple, intuitive, yet surprisingly powerful mathematical concept. It shows up repeatedly in the study of efficient algorithms and in game theory. The goal of this article is to review the basics so we can put convexity to use later.\n\n## Convex sets\n\nIn a vector space such as $\mathbb{R}^n$, a set $S$ is convex if, for every two points in $S$, the entire line segment between the points is also in the set.\n\nIn notation, $S$ is convex if for all $x,y \in S$ and all $\alpha \in [0,1]$, $\alpha x + (1-\alpha)y \in S$.", null, "", null, "A convex ($S$) and non-convex ($S'$) set.\n\nMost but not all potatoes are convex. Most but not all bananas are non-convex.\n\nConvex sets don't have to be closed or compact; $\mathbb{R}^n$ itself is a convex set. One important convex set is a halfspace: all points on one side of a hyperplane.\n\nA nice fact is that the intersection of any number of convex sets is convex (can you prove it?). In particular, an intersection of a finite number of halfspaces is called a convex polytope. Examples are cubes and tetrahedra. Convex polytopes are the central object of study in linear programming.\n\nGiven a set of points, their convex hull is the smallest convex set containing all the points. The convex hull of a hula hoop is a disk. The convex hull of the set of four points consisting of the corners of a soccer field is the entire soccer field. (Yes, I'm American.) The convex hull of a soccer ball is a soccer ball.\n\n## Convex functions\n\nGiven a function $G$ from a vector space to scalars, e.g. 
$G: \\mathbb{R}^n \\to \\mathbb{R}$, its epigraph is the set of points lying above the function.\nIn notation, epi $G = \\{(x,y) : y \\geq G(x) \\}$.\n\n$G$ is a convex function if its epigraph is a convex set.", null, "The epigraph of a convex function $G$.\n\nYou'd more often see a definition such as: $G$ is convex if, for all $x,x'$ and all $\\alpha \\in [0,1]$, $G\\left(\\alpha x + (1-\\alpha)x'\\right) \\geq \\alpha G(x) + (1-\\alpha)G(x')$. Check the equivalence!\n\nA very nice fact, or perhaps beautiful definition of convexity, is Jensen's inequality: If $G$ is convex, then for any random variable $X$, $\\mathbb{E}G(X) \\geq G\\left(\\mathbb{E} X\\right)$.", null, "You'll never forget which way Jensen's inequality goes if you keep the picture in mind.\n\nA nice fact is that the pointwise maximum of a set of convex functions is itself a convex function. That is, if $G_i$ is convex for all $i$, then $G(x) = \\max_i G_i(x)$ is convex. Exercise: prove it! (Use our definition of convexity and a nice fact from the previous section.)\n\nThere are many other nice facts about convex functions (I feel like I'm noticing a pattern). They are continuous on the interior of their domain. If $G_1$ and $G_2$ are convex and $\\alpha,\\beta \\gt 0$, then $\\alpha G_1 + \\beta G_2$ is convex.\n\nA subtangent of $G$ is a linear function that shares at least one point with $G$ and lies everywhere below $G$. If $G$ is convex then it has a subtangent at every point $x$ in its interior. Furthermore, this subtangent can be written $f(x') = G(x) + \\langle r, x' - x \\rangle$ for some vector $r$.", null, "A subtangent is a linear approximation to $G$ at $x$.\n\nThe vector $r$ in the above definition is a subgradient of $G$ at $x$. In other words, $G(x') \\geq G(x) + \\langle r, x' - x\\rangle$ for all $x'$.\n\nIf $G$ is differentiable at $x$, then there is only one subgradient at $x$, which is the derivative at $x$. 
You can consider $G(x) = |x|$ at $x=0$ to see the converse; there are many possible subtangents at that point.\n\nBecause it only helps intuition to think of the subgradient as a generalization of the derivative, I like to use the notation $dG_x$ for a subgradient of $G$ at $x$.\n\n## Bregman divergences\n\nThe Bregman divergence is simply a measure of how far a subtangent lies below $G$. The Bregman divergence of $G$ is $D_G(x,y) = G(x) - \Big[ G(y) + \langle dG_y, x-y \rangle \Big] .$", null, "It's easy to remember the order of the arguments: $D_G(x,y)$ is always at least zero because it's $G(x)$ (the first argument) minus the linear approximation at $y$ (the second argument).\n\nDon't memorize the formula; memorize the figure and reconstruct the formula!\n\nBregman divergence is often considered a sort of distance measure (though not technically a metric), because it is nonnegative and equals zero if $x = y$. This is an \"only if\" when $G$ is strictly convex, meaning that no subtangent intersects $G$ at more than one point.\n\nExample/exercise 1.\nConsider the convex function on $\mathbb{R}^n$ defined by $x \mapsto \|x\|_2^2 = \sum_i x_i^2$. Show that its Bregman divergence is just $\|x-y\|_2^2$. Conclude that its Bregman divergence is a distance metric.\n\nExample/exercise 2.\nIf we let $p$ be a probability distribution on a finite domain, then the function $p \mapsto \sum_i p_i \log(p_i)$ is convex (it is the negative of the Shannon entropy function). Show that the Bregman divergence of this function is the relative entropy (also called KL-divergence) $KL(p,q) = \sum_i p_i \log\frac{p_i}{q_i}$.\n\nProblem.\nShow that $\|x-y\|_2^2$ is the unique symmetric Bregman divergence (hence the only Bregman divergence that is also a distance metric).\n\n(1997) R. Tyrrell Rockafellar. Convex analysis." ]
[ null, "https://bowaggoner.com/blog/images/morphog.jpg", null, "https://bowaggoner.com/blog/images/rss.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/convexset.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/nonconvexset.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/epigraph.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/jensens.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/subtangent.png", null, "https://bowaggoner.com/blog/2016/09-20-convexity/images/bregman-divergence.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8770912,"math_prob":0.9997168,"size":5147,"snap":"2023-40-2023-50","text_gpt3_token_len":1446,"char_repetition_ratio":0.13416293,"word_repetition_ratio":0.020293122,"special_character_ratio":0.27569458,"punctuation_ratio":0.10879849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999654,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T03:39:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a2bb6877-ee78-4d7a-a0a5-539b91fe8141>\",\"Content-Length\":\"9674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e4726d3-23a7-49b3-9c45-f476967cb7bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b114e65-12bb-4c1b-9ceb-568cfe137d83>\",\"WARC-IP-Address\":\"45.79.188.27\",\"WARC-Target-URI\":\"https://bowaggoner.com/blog/2016/09-20-convexity/\",\"WARC-Payload-Digest\":\"sha1:W3OKXL2FMKROSW6KB5C7DQO2KNCPHHZV\",\"WARC-Block-Digest\":\"sha1:SQFVDFBILJM3IIFGGPO7A3P7IY5LJ7QO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100523.4_warc_CC-MAIN-20231204020432-20231204050432-00463.warc.gz\"}"}
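Example/exercise 1 from the convexity post can be checked numerically. The sketch below (plain Python; the function names are mine, not the post's) computes $D_G(x,y)$ straight from the definition for $G(x)=\|x\|_2^2$, whose gradient at $y$ is $2y$, and the result matches $\|x-y\|_2^2$.

```python
def bregman(G, gradG, x, y):
    # D_G(x, y) = G(x) - [ G(y) + <dG_y, x - y> ], straight from the definition.
    inner = sum(g * (a - b) for g, a, b in zip(gradG(y), x, y))
    return G(x) - (G(y) + inner)

# Example/exercise 1: G(x) = ||x||_2^2, with gradient 2y at the point y.
G = lambda v: sum(t * t for t in v)
gradG = lambda v: [2.0 * t for t in v]

x = [1.0, 2.0]
y = [3.0, -1.0]
d = bregman(G, gradG, x, y)
# For this G the divergence reduces to ||x - y||_2^2 = (-2)^2 + 3^2 = 13.
```

For a generic convex $G$ the divergence is not symmetric in its arguments; the squared Euclidean norm is the special case where it is.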
https://zbmath.org/?q=an:0619.10046
[ "# zbMATH — the first resource for mathematics\n\nWeyl’s inequality, Hua’s inequality and Waring’s problem. (English) Zbl 0619.10046\nLet $$k\\geq 3$$ be a fixed integer, let $$\\alpha\\in {\\mathbb{R}}$$, and let $$S(\\alpha)=\\sum ^{P}_{n=1}e(\\alpha n^ k)$$. If $$| \\alpha - (a/q)| \\leq q^{-2}$$ with $$(a,q)=1$$ then $$S(\\alpha)\\ll P^{1-2^{1- k}+\\epsilon}$$ providing that $$P\\leq q\\leq P^{k-1}$$. This is Weyl’s inequality. The first result of the paper is the sharper bound $S(\\alpha)\\ll P^{1-(8/3)2^{-k}+\\epsilon},$ valid on the shorter range $$P^ 3\\leq q\\leq P^{k-3}$$, for $$k\\geq 6.$$\nHua’s inequality states that $$\\int ^{1}_{0}| S(\\alpha)| ^{2^ k} d\\alpha \\ll P^{2^ k-k+\\epsilon}$$. The second result of the paper is the better bound $\\int ^{1}_{0}| S(\\alpha)| ^{7.2^{k-3}} d\\alpha \\ll P^{7.2^{k-3}-k+\\epsilon},\\text{ for } k\\geq 6.$ The third result, which is a simple corollary of the second, is that the Hardy-Littlewood asymptotic formula, for sums of s k-th powers, is valid for $$s\\geq (7/8)2^{k-3}+1.$$ The key idea in the proof is to estimate S($$\\alpha)$$ by performing k-3 Weyl steps. This produces a large number of cubic sums, whose mean value is bounded using the integral $\\int ^{1}_{0}\\int ^{1}_{0}| \\sum ^{P}_{1}e(\\alpha n^ 3+\\beta n)| ^ 6 d\\alpha d\\beta.$\n\n##### MSC:\n 11P05 Waring’s problem and variants 11L40 Estimates on character sums 11P55 Applications of the Hardy-Littlewood method\nFull Text:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6658111,"math_prob":0.99999785,"size":1765,"snap":"2021-43-2021-49","text_gpt3_token_len":650,"char_repetition_ratio":0.10732538,"word_repetition_ratio":0.0546875,"special_character_ratio":0.39546743,"punctuation_ratio":0.15291262,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000013,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T15:30:04Z\",\"WARC-Record-ID\":\"<urn:uuid:65ee8c21-eb22-410b-9cbc-4873bf6501ad>\",\"Content-Length\":\"48287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ba140af3-821e-4a73-9c45-30732a19559c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf8f0d98-b75a-43fb-865c-d9083d543d3f>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0619.10046\",\"WARC-Payload-Digest\":\"sha1:XXNDS7GNVBRBP2UCRVDQYJPJ533C7PYO\",\"WARC-Block-Digest\":\"sha1:TUSUKBGQ6JBN34FD47RDWGKABTOAXEBI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363400.19_warc_CC-MAIN-20211207140255-20211207170255-00078.warc.gz\"}"}
https://getcalc.com/convert-1cehextobinary.htm
[ "Hex Arithmetic & Conversion", null, "# Hex 1CE to Binary Conversion\n\nWhat is 1CE hex in binary? - converter, chart & solved example problem with step-by-step workout for how to carry out hex 1CE to binary conversion manually. The base-16 value of 1CE_16 is equal to the base-2 value of 111001110_2.\nIn different representations:\n1CE_16 = 111001110_2\n0x1ce = 0b111001110\n\nHex | Binary | Decimal\n1CC.8 | 111001100.1 | 460.5\n1CD | 111001101 | 461\n1CD.8 | 111001101.1 | 461.5\n1CE | 111001110 | 462\n1CE.8 | 111001110.1 | 462.5\n1CF | 111001111 | 463\n\n## Work to Find What is 1CE Hex in Binary\n\nBelow is the example problem with step-by-step work to find what 1CE hex is in binary.\n1CE_16 Hex to Binary Conversion:\nStep 1: Write each digit of the given hex number 1CE_16 as its equivalent binary group of 4 digits each.\n1 = 0001, C = 1100, E = 1110\n\nStep 2: Arrange the binary groups in the same order:\n1CE_16 = 111001110_2", null, "" ]
[ null, "https://getcalc.com/graphics/loading.gif", null, "https://getcalc.com/cdn/graphics/getcalc-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71783155,"math_prob":0.9538691,"size":1157,"snap":"2023-14-2023-23","text_gpt3_token_len":387,"char_repetition_ratio":0.15524718,"word_repetition_ratio":0.42574257,"special_character_ratio":0.40190148,"punctuation_ratio":0.0913242,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9825325,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T09:05:53Z\",\"WARC-Record-ID\":\"<urn:uuid:31912c6a-c1fb-46c4-b3f1-c4b51d70100f>\",\"Content-Length\":\"36970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ef5b766-0e0c-4222-9796-bdbd675f336a>\",\"WARC-Concurrent-To\":\"<urn:uuid:23c994d0-a5c3-4432-ad33-d046e429882a>\",\"WARC-IP-Address\":\"50.18.123.146\",\"WARC-Target-URI\":\"https://getcalc.com/convert-1cehextobinary.htm\",\"WARC-Payload-Digest\":\"sha1:WP67K3DVWLUCHOUY26Z3R6RRAHZ2RZC3\",\"WARC-Block-Digest\":\"sha1:2N5B4EJQDMBQJOTATQA6JTSSJLZIGBEY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224651815.80_warc_CC-MAIN-20230605085657-20230605115657-00498.warc.gz\"}"}
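The two steps of the worked example translate directly into code. A minimal Python sketch (the function name is mine): expand each hex digit into its 4-bit group, then concatenate the groups in order.

```python
def hex_to_binary(hex_str):
    # Step 1: write each hex digit as its equivalent 4-bit binary group.
    groups = [format(int(digit, 16), '04b') for digit in hex_str]
    # Step 2: arrange the binary groups in the same order.
    bits = ''.join(groups)
    # Drop leading zeros from the first group (hex 1 -> 0001 -> 1).
    return bits.lstrip('0') or '0'

# hex 1CE -> 0001 1100 1110 -> 111001110, i.e. decimal 462.
```

The `lstrip` guard keeps the all-zero input from collapsing to an empty string.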
http://www.academickids.com/encyclopedia/index.php/Finite
[ "# Finite\n\nIn mathematics, a set is called finite if and only if there is a bijection between the set and some set of the form {1, 2, ..., n} where n is a natural number.\n\nIt is a theorem (assuming the axiom of choice) that a set is finite if and only if there exists no bijection between the set and any of its proper subsets. Equivalently, a set is finite if its cardinality, i.e. the number of its elements, is a natural number. For instance, the set of integers between -15 and 3 is finite, since it has 17 elements. The set of all prime numbers is not finite. Sets that are not finite are called infinite.\n\nIn physics, finite additionally means \"non-zero\", for instance in a sentence like \"if the distance of the two objects is finite...\"." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92544353,"math_prob":0.94835305,"size":866,"snap":"2022-40-2023-06","text_gpt3_token_len":229,"char_repetition_ratio":0.1438515,"word_repetition_ratio":0.053691275,"special_character_ratio":0.23787528,"punctuation_ratio":0.16753927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T23:52:39Z\",\"WARC-Record-ID\":\"<urn:uuid:c87a3389-4870-4708-aff9-299bec3b72b1>\",\"Content-Length\":\"24654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:693d5b6d-71fa-4b48-80d2-7613c963b53d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6767782c-ce47-4fea-a09b-07b901d3159e>\",\"WARC-IP-Address\":\"108.62.157.30\",\"WARC-Target-URI\":\"http://www.academickids.com/encyclopedia/index.php/Finite\",\"WARC-Payload-Digest\":\"sha1:3YZWJB7RQSG5SX4TSS74OELBULLFAXNY\",\"WARC-Block-Digest\":\"sha1:ZGAHEKWRNCIP37ZKDBOAY4RKYEVSQW7P\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500983.76_warc_CC-MAIN-20230208222635-20230209012635-00401.warc.gz\"}"}
https://salesforce.stackexchange.com/questions/157142/distance-between-salesforce-ids/167750
[ "# Distance Between Salesforce Ids\n\nDoes anyone have an idea of the distance between two Salesforce Ids of the same object and instance?\n\nExample Account Id:\n\nObject-Instance-Reserved-This Distance\n\n001-30-0-0000xxxxx\n\n• Though I do not know the answer but I am curious to know why you want to find this – Saumya Ranjan Satapathy Jan 23 '17 at 17:02\n• I'm also curious as to why you want this information, and what you plan to do with it. This sounds like an X-Y problem to me. While I'm here, what do you mean by 'distance' between two Ids? For example, is the distance between '001xxxxxxx12345' and '001xxxxxxx1235a' 2 (the Levenshtein distance) or 36 (number of Ids between the two, taking Ids as numbers in a base-36 system)? – Derek F Jan 23 '17 at 17:56\n• What is your meaning with distance. These are in sequence. – Ashwani Jan 23 '17 at 17:56\n• @Ashwani Yes, they are sequenial, but there are \"gaps\". Is the distance 256 until the next Id, or 64? – Steel Reserve 211 Jan 23 '17 at 18:22\n• Where are you getting 256 and 64? Please edit your post to be more specific and clear, and if possible actually explain your end goal at a high level. – Adrian Larson Jan 23 '17 at 19:10\n\nAs per What are Salesforce ID's composed of?, the Ids are base-62 encoded. If you convert the base62 representation to a decimal representation then calculating the decimal \"distance\" between the two records would be easy enough.\n\nLets create a few test records to get a feel for how they run in a sequence:\n\n``````List<Account> accountsToCreate = new List<Account>();\nfor(Integer i = 0; i < 62 * 4; i++) {\naccountsToCreate.add(new Account(Name = 'Account:' + i));\n}\ninsert accountsToCreate;\nfor(Account acc : accountsToCreate) {\nSystem.debug(acc.Id);\n}\n\ndelete accountsToCreate;\n``````\n\nAn abbreviated version of the output:\n\n0017000001WXV8y\n0017000001WXV8z\n0017000001WXV90\n0017000001WXV91\n0017000001WXV92\n... 3 to 8\n0017000001WXV99\n0017000001WXV9A\n... 
B to X\n0017000001WXV9Y\n0017000001WXV9Z\n0017000001WXV9a\n0017000001WXV9b\n... c to x\n0017000001WXV9y\n0017000001WXV9z\n0017000001WXVA0\n0017000001WXVA1\n0017000001WXVA2\n... 3 to 8\n0017000001WXVA9\n0017000001WXVAA\n... B to Y\n0017000001WXVAZ\n0017000001WXVAa\n... b to x\n0017000001WXVAy\n0017000001WXVAz\n0017000001WXVB0\n0017000001WXVB1\n... 2 to 8\n0017000001WXVB9\n0017000001WXVBA\n... B to X\n0017000001WXVBY\n0017000001WXVBZ\n0017000001WXVBa\n... b to x\n0017000001WXVBy\n0017000001WXVBz\n0017000001WXVC0\n...\n0017000001WXVC1\n...\n\nThe sequence is clear enough: run through `0` to `9`, then `A` to `Z`, then finally `a` to `z` before wrapping round and incrementing the higher characters.\n\nWhat we now need is a way to do base62 decoding in Apex. Turns out I did something similar way back in 2011, except it was in T-SQL rather than Apex.\n\n``````public class IdDistance {\n\n    // This is the order ID's were assigned in when tested\n    final static string base62Chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';\n\n    public static long numericDistance(Id firstId, Id secondId) {\n        long firstNumericId = numericId(firstId);\n        long secondNumericId = numericId(secondId);\n        return secondNumericId - firstNumericId;\n    }\n\n    public static long numericId(Id input) {\n        return numericId((string)input);\n    }\n\n    public static long numericId(string input) {\n        string idAsString = (string)input;\n        if(idAsString.length() > 15) {\n            // Drop the case checking suffix for the last 3 characters of an 18 char ID.\n            idAsString = idAsString.substring(0, 15);\n        }\n\n        long returnValue = 0;\n        long multiplier = 1;\n\n        // Walk every character from least significant (rightmost) to most significant.\n        for(integer i = idAsString.length(); i > 0; i--) {\n            // The character being converted\n            string idChar = idAsString.substring(i-1, i);\n            System.debug(idChar);\n            // The index of the character being converted\n            long value = base62Chars.indexOf(idChar);\n\n            returnValue = returnValue + ( value * multiplier );\n            multiplier = multiplier * 62;\n        }\n\n        return returnValue;\n    }\n}\n``````\n\nTest 
class. Add more as required.\n\n``````@IsTest\npublic class IdDistance_Test {\n    @IsTest\n    public static void testValues() {\n        System.assertEquals(0, IdDistance.numericId('000'));\n        System.assertEquals(1, IdDistance.numericId('001'));\n        System.assertEquals(10, IdDistance.numericId('00A'));\n        System.assertEquals(35, IdDistance.numericId('00Z'));\n        System.assertEquals(36, IdDistance.numericId('00a'));\n        System.assertEquals(61, IdDistance.numericId('00z'));\n        System.assertEquals(62, IdDistance.numericId('010'));\n\n        Id testId = '00Q7000001DsqIj';\n        System.assertEquals(911562501854070361L, IdDistance.numericId(testId));\n    }\n\n    @IsTest\n    public static void distance() {\n        System.assertEquals(1, IdDistance.numericDistance('00Q7000001DsqIj', '00Q7000001DsqIk'));\n        System.assertEquals(10, IdDistance.numericDistance('00Q7000001DsqI1', '00Q7000001DsqIB'));\n    }\n}\n``````\n\nSample output:\n\n'0' > 0\n'1' > 1\n'A' > 10\n'Z' > 35\n'a' > 36\n'z' > 61\n'10' > 62\n\nRemember to truncate off any extra characters after the first 15, as we don't want the case-checking suffix. Special consideration might also be required for Orgs that have gone through a pod migration.\n\nOf course, even though you can now do this, the question still remains of why you would want to do this.\n\n• Thank you for the detailed response, this is exactly what I was looking for. A southern defense contractor (my customer) has a use case for this. – Steel Reserve 211 Apr 7 '17 at 16:32" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6604286,"math_prob":0.82008296,"size":4383,"snap":"2020-34-2020-40","text_gpt3_token_len":1265,"char_repetition_ratio":0.19547842,"word_repetition_ratio":0.06620209,"special_character_ratio":0.33949348,"punctuation_ratio":0.19591837,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9571121,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T16:27:30Z\",\"WARC-Record-ID\":\"<urn:uuid:904d941f-5f6b-4a47-ad83-5f2e0bea0f15>\",\"Content-Length\":\"156413\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6eed6403-eba1-4778-bdf9-80a1792ec315>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bef3120-abdf-4069-b002-b71ffa6e79e6>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://salesforce.stackexchange.com/questions/157142/distance-between-salesforce-ids/167750\",\"WARC-Payload-Digest\":\"sha1:YHEL57IMP3DLFGZZILI2QQXSKZSTDNW5\",\"WARC-Block-Digest\":\"sha1:PGKGHTW2HTHK5VPUHW22TX3JM45AWCVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738816.7_warc_CC-MAIN-20200811150134-20200811180134-00145.warc.gz\"}"}
http://www.freescience.info/go.php?pagename=books&id=2106
[ "", null, "", null, "Language/Lingua", null, "", null, "", null, "", null, "Books 3054", null, "", null, "· Book News · Most clicked · Least clicked\n\nSearch for a Book", null, "## Multigrid methods for structured grids and their application in particle simulation", null, "Language:", null, "", null, "Author: Matthias Bolten", null, "", null, "Format: Pdf", null, "Year: 2008", null, "Category: Computational Physics", null, "Pages: 153", null, "Clicks: 2039", null, "Description\nThis work is focussed on the application of multigrid methods to particle simulation methods. Particle simulation is important for a broad range of scientific fields, like biophysics, astrophysics or plasma physics, to name a few. In these fields computer experiments play an important role, either supporting real experiments or replacing them. The first can significantly reduce costs, e.g. in the pharmaceutic industry, where possible agents can be checked for an effect in advance of real and expensive experiments. The latter has an important role in astrophysics, where most experiments just cannot be carried out in a laboratory. In the cases we are interested in, the interaction of particles can be evaluated by pairwise potentials, where short-ranged potentials, e.g. potentials describing chemical bonds, are easy to be implemented efficiently. But the very important Coulomb potential and the gravitational potential are not short-ranged, thus an intuitive implementation has to evaluate all pairwise interactions, yielding an O(N2) algorithm, where N is the number of particles to be simulated. 
The key to reduce this complexity is the use of approximate algorithms for the evaluation of the long-ranged potentials.\n\nSimilar Books\n Numerical Simulations in Cosmology I A practical guide to computer simulations Selected Computational Methods Computational Physics Computational Physics II Introduction to Computational Physics Computational Physics Eléments de programmation et Introduction aux logiciels libres Four lectures on computational statistical physics Introduction to Randomness and Statistics An introduction to finite volumes for gas dynamics Solve the Master Equation by Python-An Introduction to the Python Computing Environment Computational Physics Computational Physics with Python Computational Physics with Python Computational Physics A Practical Introduction to Computational Physics and Scientific Computing Computational Physics: Problem Solving with Computers Computational Physics With Python\n\n```Home | Authors | About | Contact Us | Email" ]
[ null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/freescience.gif", null, "http://www.freescience.info/flags/it.png", null, "http://www.freescience.info/flags/en.png", null, "http://www.freescience.info/flags/fr.png", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.fz-juelich.de/nic-series/volume41/nic-series-41.jpg", null, "http://www.freescience.info/flags/en.png", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null, "http://www.freescience.info/images/punto.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9099485,"math_prob":0.5241963,"size":1230,"snap":"2020-34-2020-40","text_gpt3_token_len":232,"char_repetition_ratio":0.11663948,"word_repetition_ratio":0.0,"special_character_ratio":0.17642276,"punctuation_ratio":0.12037037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785319,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T16:07:22Z\",\"WARC-Record-ID\":\"<urn:uuid:8be7ddb2-1aae-4667-8202-c4dc5209a0c2>\",\"Content-Length\":\"15979\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa03fb85-1255-4a4f-8bae-c9e3a8bc3e94>\",\"WARC-Concurrent-To\":\"<urn:uuid:508f7035-442b-4eb2-8a85-7a855de4289f>\",\"WARC-IP-Address\":\"93.191.242.19\",\"WARC-Target-URI\":\"http://www.freescience.info/go.php?pagename=books&id=2106\",\"WARC-Payload-Digest\":\"sha1:WRZQHFB3OTRYE7ZB63VZP6DQY6XTWQFA\",\"WARC-Block-Digest\":\"sha1:FFVR7PDVC7YLCXK2SBWTZ4JWPW4Q7J5X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439736057.87_warc_CC-MAIN-20200810145103-20200810175103-00378.warc.gz\"}"}
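To make the O(N²) point concrete: a direct evaluation of all pairwise long-ranged interactions looks like the sketch below (a generic illustration, not code from the thesis), with the Coulomb constant set to 1. Every unordered pair (i, j) is visited once, giving ~N²/2 distance evaluations; fast summation methods built on multigrid solvers exist precisely to avoid this cost.

```python
def pairwise_coulomb_energy(charges, positions):
    # Naive O(N^2) sum over all unordered pairs of point charges,
    # in units where the Coulomb constant is 1.
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dz = positions[i][2] - positions[j][2]
            r = (dx * dx + dy * dy + dz * dz) ** 0.5
            energy += charges[i] * charges[j] / r
    return energy

# Two unit charges 2 units apart: energy = 1 * 1 / 2 = 0.5
```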
https://physics.stackexchange.com/questions/98824/how-does-wave-function-collapse-when-i-measure-position
[ "# How does wave function collapse when I measure position?\n\nTextbooks say that when you measure a particle's position, its wave function collapses to one eigenstate, which is a delta function at that location. I'm confused here.\n\n1. A measurement always has limited accuracy. Does the wave function collapse to exactly an eigenstate no matter what accuracy I have?\n\n2. When a particle is in an eigenstate of position, I can represent the state in the momentum basis, and calculate its expected value (average) of kinetic energy. This gives me infinity. Can a particle ever be in such a state?\n\n1. No, it doesn't collapse to an eigenstate. Collapse to an eigenstate is a picture of an ideal measurement. In general the final state will not be describable by a wave function, because it's not a pure state; it is instead a mixed state. See this question, which is about inexact measurements.\n\n2. A position eigenstate in the position representation is $\langle x|x_0\rangle=\delta(x-x_0)$. This gives the following in the momentum representation (up to normalization): $\langle p|x_0\rangle=e^{-\frac{i}{\hbar}px_0}$. For this function the probability density is constant, thus its expectation value is undefined (one can't find the center of an infinite line). Similarly, for a free particle the expectation value of energy will also be undefined. This is because such a state is an abstraction, a useful mathematical tool. Of course, such states can't be prepared in a real experiment, but one can come very close to it, e.g. shoot an electron at a tiny slit and observe the state of the electron at the very exit of that slit.\n\nAs to finding the expectation value of energy in a position eigenstate, the first mistake you make using the formula $\overline E=\langle x|\hat H|x\rangle$ is forgetting to normalize the eigenvector. But the position operator has a continuous spectrum, which makes all its eigenvectors unnormalizable (i.e. if you try to normalize them, you'll get the null vector, which is meaningless as a state). 
Thus you can't directly find the expectation value of energy in a position eigenstate.\n\n• Am I understanding you correctly, that the wave function has truly collapsed into a single eigenstate, and that the density matrix just represents our classical uncertainty given the limited resolution of our measurement? Or, put another way, if we measure a particle's momentum, the position is truly delocalized across all of space? This seems unphysical and inconsistent with what we experience day-to-day. This explanation is also inconsistent with physics.stackexchange.com/questions/301223/… . – Dragonsheep Mar 6 at 8:18\n• @Dragonsheep this is not inconsistent once you consider the Heisenberg uncertainty principle and compare day-to-day measurement errors to the quantum uncertainties. The latter ones will be dwarfed so as to be irrelevant in daily life. – Ruslan Mar 6 at 8:39\n\nThe wave function is \"reduced\", meaning that there is a reduction in size of the continuous range of states (positions) that have non-zero probability. However, it never goes to being a single eigenstate, due to the quantum uncertainty of the probe used in the measurement or of the measurement apparatus itself. That uncertainty can never go to exactly zero.\n\nPosition measurements are of the unfortunate breed that do not refine with successive measurements, due to the back action of the position measurement on the momentum, which, in turn, affects the position that was just measured. That is, measuring the position demolishes the position that was just measured. On the other hand, measurements of momentum can be non-demolition, so that successive measurements further reduce the size of the range of non-zero-probability momentum states.\n\nIt is due to the nature of quantum mechanics.\n\nIn the classical regime, the lowest possible energy is zero. But in QM, the lowest state (ground state) still has energy. Quantum nature is wave-nature. 
In CM, you can pinpoint the location of an object, but in QM it is a distribution (probability density). The best you can do is find the smallest distribution, not a specific point of an object.\n\nFor your question, it is due to wave nature: the Uncertainty Principle says that $\Delta x \: \Delta p \ge \frac{\hbar}{2}$. So when you try to measure a location, $\Delta x$ (deviation in position) becomes zero, and to satisfy this relation your $\Delta p$ has to be really large (like a delta function). So as soon as you do the measurement, you just destroy the wave function. After you do the exact measurement on something in QM, you just destroy that \"something\" and you cannot say anything about it such as its energy.\n\n• Thanks for the answers. What if I calculate <x|H|x>, where x is an eigenstate of position? Does that make any sense? – Purui Feb 12 '14 at 6:10\n• @Anug, I refined my second question a little bit. Can you take a look? Thanks. – Purui Feb 12 '14 at 7:02\n1. No. The position operator does not have normalizable eigenfunctions ($\delta(x-x_0)$ is not normalizable). The closest thing one can do in this formalism is to contract the wave function to some sharp peak with non-zero width and finite height, based on the accuracy of the measurement.\n\n2. With continuous space, a particle cannot be in an \"eigenstate of position\", because there is no such thing there. On a discrete set of admissible positions this would be possible, but the whole physics and formalism would be very different." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92778677,"math_prob":0.9409019,"size":522,"snap":"2020-45-2020-50","text_gpt3_token_len":107,"char_repetition_ratio":0.11969112,"word_repetition_ratio":0.0,"special_character_ratio":0.197318,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9901859,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T05:07:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a9d4b3b6-c0b5-451b-9d33-b85b29acaae7>\",\"Content-Length\":\"180756\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08d0be1c-45df-46ae-b442-dc744e3b6821>\",\"WARC-Concurrent-To\":\"<urn:uuid:5de4633c-a6b9-4b0d-8754-c9fee399ac42>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/98824/how-does-wave-function-collapse-when-i-measure-position\",\"WARC-Payload-Digest\":\"sha1:RGP7DNN4EUG7LX5VZ767HL5YE2EBAYPG\",\"WARC-Block-Digest\":\"sha1:AD72W3WD3H3BCVLT23EMHSYFZYBVSI6J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141205147.57_warc_CC-MAIN-20201130035203-20201130065203-00649.warc.gz\"}"}
https://www.jiskha.com/questions/770505/a-piece-of-wire-3-5m-long-was-cut-into-lengths-each-measuring-0-05m-long-calculate-the
[ "# math\n\nA piece of wire 3.5 m long was cut into lengths each measuring 0.05 m long. Calculate the number of pieces obtained.\n\n1. 3.5 m / 0.05 m = ?\n\n2. 70\n\n3. 70 m\n\n## Similar Questions\n\n1. ### Related rates\n\nA piece of wire 10 feet long is cut into two pieces. One piece is bent into the shape of a circle and the other into the shape of a square. How should the wire be cut so that the combined area of the two figures is as small as\n\n2. ### Calculus Help Please Urgent!!!\n\nA piece of wire 14 m long is cut into two pieces. One piece is bent into a square and the other is bent into an equilateral triangle. (a) How much wire should be used for the square in order to maximize the total area? 14 m this\n\n3. ### Algebra\n\nA wire 20 cm long is cut into two pieces. The longer piece is 4 cm longer than the shorter piece. Find the length of the shorter piece of wire\n\n4. ### algebra\n\na 38 inch piece of steel is cut into three pieces so that the second piece is twice as long as the first piece, and the third piece is three inches more than four times the length of the first piece. find the lengths of the pieces.\n\n1. ### algebra\n\nA 28-inch board is to be cut into three pieces so that the second piece is twice as long as the first piece and the third piece is 4 times as long as the first piece. If x represents the length of the first piece, find\n\n2. ### math\n\nif a wire 20 inches long is to be cut so that one piece is 2/5 as long as the other piece, how many inches long must the shorter piece be?\n\n3. ### Algebra\n\nA piece of lumber two and one fourth feet long is to be cut into three equal pieces. How long will each piece of cut wood be? Give the measurement in feet and in inches.\n\n4. ### math\n\na wire 36 m long is cut into two pieces, each piece is bent to form a rectangle which is 1 centimeter longer than its width. 
how long should each piece be to minimize the sum of the areas of the two rectangles?\n\n1. ### MINIMIZATION PROBLEM (CALC)\n\nA wire 9 meters long is cut into two pieces. One piece is bent into a square for a frame for a stained glass ornament, while the other piece is bent into a circle for a TV antenna. To reduce storage space, where should the wire be\n\n2. ### Math\n\nA piece of wire 9 m long is cut into two pieces. One piece is bent into the shape of a circle of radius r and the other is bent into a square of side s. How should the wire be cut so that the total area enclosed is: I need help\n\n3. ### Math\n\nA 14 inch board is to be cut into 3 pieces so that the second piece is twice as long as the first piece and the third piece is 4 times as long as the first piece. If x represents the length of the first piece find the lengths of\n\n4. ### math\n\nSTRING LENGTHS: You have two pieces of string. One is 27 cm long. The other is 45 cm long. You want to cut each piece of string into smaller pieces of equal length. Each length is to be a whole number of centimeters. List all the" ]
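The headline question (how many 0.05 m lengths fit in 3.5 m) is a single division, but doing it naively with binary floats and truncating can be off by one, because 0.05 has no exact binary representation. A small sketch (the function name is mine) using exact rational arithmetic:

```python
from fractions import Fraction

def count_pieces(total_length, piece_length):
    """Whole pieces of piece_length that fit in total_length, computed exactly."""
    # Fraction accepts decimal strings, avoiding any float rounding.
    return int(Fraction(total_length) / Fraction(piece_length))

print(count_pieces("3.5", "0.05"))  # 70
print(count_pieces("10", "3"))      # 3 whole pieces, with 1 left over
```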
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9484941,"math_prob":0.9943567,"size":2799,"snap":"2021-21-2021-25","text_gpt3_token_len":725,"char_repetition_ratio":0.19248658,"word_repetition_ratio":0.24493243,"special_character_ratio":0.24866024,"punctuation_ratio":0.06322795,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9739311,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-21T11:02:58Z\",\"WARC-Record-ID\":\"<urn:uuid:6ee735bf-b6a9-4ad7-9609-1d4ac7a1b454>\",\"Content-Length\":\"19812\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bacfa50f-3e26-49bc-859e-6a698b1b1c2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:65ccf193-079e-4925-aa32-1a8a22860386>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/770505/a-piece-of-wire-3-5m-long-was-cut-into-lengths-each-measuring-0-05m-long-calculate-the\",\"WARC-Payload-Digest\":\"sha1:RO6SHZ63QAX3LOCL7FPVH4LYXXAKEZCL\",\"WARC-Block-Digest\":\"sha1:IJKXWQIL3ABCFXANGEFS5GNOHMSQMXEP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488269939.53_warc_CC-MAIN-20210621085922-20210621115922-00300.warc.gz\"}"}
https://socratic.org/questions/how-do-you-write-the-vertex-form-equation-of-the-parabola-y-2x-2-12x-13
[ "# How do you write the vertex form equation of the parabola y= -2x^2 + 12x - 13?\n\nMay 12, 2017\n\n$y = - 2 {\\left(x - 3\\right)}^{2} + 5$\n\n#### Explanation:\n\n$y = - 2 {x}^{2} + 12 x - 13$\nx-coordinate of vertex:\n$x = - \\frac{b}{2 a} = - \\frac{12}{-} 4 = 3$\ny-coordinate of vertex:\n$y \\left(3\\right) = - 2 \\left(9\\right) + 12 \\left(3\\right) - 13 = 18 - 13 = 5$\nVertex form:\n$y = - 2 {\\left(x - 3\\right)}^{2} + 5$\n\nCheck:\nDevelop the vertex form:\n$y = - 2 \\left({x}^{2} - 6 x + 9\\right) + 5 = - 2 {x}^{2} + 12 x - 13.$ OK" ]
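The completing-the-square steps above follow a fixed recipe (h = -b/(2a), then evaluate y at h), so they are easy to automate. A minimal sketch (the function name is mine, not from the answer):

```python
def vertex_form(a, b, c):
    """Coefficients (a, h, k) such that a*x^2 + b*x + c == a*(x - h)^2 + k."""
    h = -b / (2 * a)           # x-coordinate of the vertex
    k = a * h * h + b * h + c  # y-coordinate of the vertex
    return a, h, k

a, h, k = vertex_form(-2, 12, -13)
print(a, h, k)  # -2 3.0 5.0

# Spot-check: both forms agree at a few integer points.
for x in (-1, 0, 2, 7):
    assert a * (x - h) ** 2 + k == -2 * x * x + 12 * x - 13
```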
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.531676,"math_prob":1.0000099,"size":335,"snap":"2021-43-2021-49","text_gpt3_token_len":88,"char_repetition_ratio":0.1389728,"word_repetition_ratio":0.0,"special_character_ratio":0.26268658,"punctuation_ratio":0.12307692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999962,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T08:53:12Z\",\"WARC-Record-ID\":\"<urn:uuid:a61f76cf-2921-47ae-bc4e-9a1489e67982>\",\"Content-Length\":\"33088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fca61588-c5ad-4d68-a577-ab05e7d005f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:2805030f-6b5a-477f-b0ed-a263d8cd6033>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-write-the-vertex-form-equation-of-the-parabola-y-2x-2-12x-13\",\"WARC-Payload-Digest\":\"sha1:O25BZ26XC2BCAGO6UWDVPHSJVIJPTJ2G\",\"WARC-Block-Digest\":\"sha1:E7ISP4OGDW55ODEM2TNW7KQG6LT6ACLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358966.62_warc_CC-MAIN-20211130080511-20211130110511-00374.warc.gz\"}"}
https://mulloverthings.com/when-should-you-not-use-floats/
[ "# MullOverThings\n\nUseful tips for everyday\n\nWhen should you not use floats?\n\nMost floating point values that represent a currency amount (in dollars and cents) cannot be stored exactly in memory. So, if we want to store 0.1 dollars (10 cents), float/double cannot store it exactly.\n\nWhich requires more memory, int or float?\n\nAn int and a float usually take up “one word” in memory. While floats can represent numbers of greater magnitude, they cannot represent them with as much accuracy, because they have to account for encoding the exponent. The exponent itself could be quite a large number.\n\nWhat is a limitation of ints compared to floats?\n\nAs you probably know, both of these types are 32 bits. int can hold only integer numbers, whereas float also supports floating point numbers (as the type names suggest). How is it possible then that the max value of int is 2^31, and the max value of float is 3.4*10^38, while both of them are 32 bits?\n\nIs it better to use double or float?\n\nDouble is more precise than float and can store 64 bits, twice the number of bits float can store. Double is more precise, and for storing large numbers we prefer double over float. Unless we need precision up to 15 or 16 decimal digits, we can stick to float in most applications, as double is more expensive.\n\nIs double slower than float?\n\nFloats are faster than doubles when you don’t need double’s precision, you are memory-bandwidth bound, and your hardware doesn’t carry a penalty on floats. They conserve memory bandwidth because they occupy half the space per number. There are also platforms that can process more floats than doubles in parallel.\n\nWhy are floats bad for money?\n\nThe float and double types are particularly ill-suited for monetary calculations because it is impossible to represent 0.1 (or any other negative power of ten) as a float or double exactly. 
For example, suppose you have \$1.03 and you spend 42c. How much money do you have left?\n\nBecause of this ability, floats have been used in web layouts time and time again. Since they weren’t considered for full web layouts when they were built, using floats as such usually leads to layouts breaking unexpectedly, especially when it comes to responsive design, and that can get quite frustrating.\n\nCan I use float instead of int?\n\n4 Answers. Floating point numbers are approximations in many cases. Some integers (and decimals) can be exactly represented by a float, but most can’t. For example, when you’re dealing with money calculations, it’s better to use integers, or (if speed is not an issue) the decimal module.\n\nWhat is the value of 0x80000000 float?\n\n-0.0\nAccording to this online IEEE-754 floating-point format converter, 0x80000000 should be -0.0, since the floating-point format uses sign-magnitude representation, which supports -0.0.\n\nShould I use decimal or float?\n\nFloat stores an approximate value and decimal stores an exact value. In summary, exact values like money should use decimal, and approximate values like scientific measurements should use float. When multiplying a non-integer and dividing by that same number, decimals lose precision while floats do not.\n\nWhen to use int or float in C++?\n\nAs we know, the ‘int’ data type is used to hold integers and whole numbers, while the ‘float’ data type is used to define variables holding real and fractional numbers. However, I do find that I can place an integer into a variable of the ‘float’ data type just as easily, removing the need to use the ‘int’ type?\n\nWhich is faster, an int or a float?\n\nDoing basic math operations with int is around 30% faster than float. If you need to save RAM and your integer numbers are small enough, you can use short (System.Int16) or even byte instead of int; however, int32 is a little faster than both. 
On a desktop CPU anyway; not sure about ARM.\n\nWhat’s the difference between INT AND FLOAT in Excel?\n\nAs we know, the ‘int’ data type is used to hold integers and whole numbers, while the ‘float’ data type is used to define variables holding real and fractional numbers.\n\nWhy do I use INTs instead of floats in Python?\n\nWhen a reader of the code sees that you used an integer, that reader can infer that the quantity is only meant to take integer values. A philosophy of “don’t use what you don’t need”. A lot of programs have no need for non-integer values but use integer values a lot, so an integer type reflects the problem domain." ]
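The "floats are bad for money" point is easy to demonstrate. A minimal Python sketch (my own, using the decimal module the answers mention) showing the drift with binary floats and two exact alternatives:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so summed cents drift.
total_f = 0.10 + 0.10 + 0.10
print(total_f == 0.3)          # False (total_f is 0.30000000000000004)

# Exact alternatives: integer cents, or the decimal module.
total_cents = 10 + 10 + 10
total_dec = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(total_cents, total_dec)  # 30 0.30

# The $1.03 minus 42c example from the text, done exactly.
print(Decimal("1.03") - Decimal("0.42"))  # 0.61
```

Note that `Decimal` is constructed from strings: `Decimal(0.10)` would inherit the float's inexact value.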
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8920993,"math_prob":0.95661634,"size":4970,"snap":"2022-05-2022-21","text_gpt3_token_len":1120,"char_repetition_ratio":0.13129279,"word_repetition_ratio":0.16035634,"special_character_ratio":0.23762575,"punctuation_ratio":0.109161794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9831502,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T21:07:34Z\",\"WARC-Record-ID\":\"<urn:uuid:1c4299f9-da31-4a29-9e43-265b738fc654>\",\"Content-Length\":\"35936\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a27b983-2caa-4bee-aac8-2815e13af789>\",\"WARC-Concurrent-To\":\"<urn:uuid:0692f8af-3fd2-4995-9c1f-066d01514ae5>\",\"WARC-IP-Address\":\"49.12.116.136\",\"WARC-Target-URI\":\"https://mulloverthings.com/when-should-you-not-use-floats/\",\"WARC-Payload-Digest\":\"sha1:7XLBIMGFBBAXDTLVBAH5VDPJFWIW6CJ5\",\"WARC-Block-Digest\":\"sha1:QGIHHEQVWWTODA4QWNPDWTNI2LOZ4SPY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305288.57_warc_CC-MAIN-20220127193303-20220127223303-00424.warc.gz\"}"}
https://physics.stackexchange.com/questions/625196/uncertainty-principle-for-two-compatible-observables-when-do-we-get-the-equalit
[ "# Uncertainty principle for two compatible observables. When do we get the equality and the inequality?\n\nFor two compatible observables A and B, i.e. if $$[A, B]=0$$, the uncertainty principle says that $$(\Delta A)_\psi(\Delta B)_\psi\geq 0$$ in any state $$|\psi\rangle$$, where $$(\Delta A)_\psi=(\langle \psi|A^2|\psi\rangle-(\langle \psi|A|\psi\rangle)^2)^{1/2}$$. I know that these uncertainties have nothing to do with the precision of measurement. It is however not clear to me when we will get equality and when inequality.\n\n• If you look at the derivation of the general uncertainty principle, you should be able to tell which terms are neglected there to obtain this inequality from a more complicated equality, and so this will be an equality when these neglected terms are zero. Do you have some difficulty with doing this on your own? Mar 27, 2021 at 18:02\n• Consider $A=B$, when would $(\Delta A)_\psi$ be non-zero?\n– ACat\nMar 27, 2021 at 18:04\n• I think there exist some states $\psi$ for which either $(\Delta A)_\psi$ or $(\Delta B)_\psi$ or both are zero. In those states, the uncertainty product is zero. Am I right? Mar 27, 2021 at 18:11\n• @DvijD.C. When $\psi$ is not an eigenstate of $A$. Also, see my comment above. Mar 27, 2021 at 18:16\n\nThe general (Schrödinger) uncertainty relation looks like $$(\Delta A)^2(\Delta B)^2\geq \frac{1}{4}\langle \psi|[\hat{A},\hat{B}]_+|\psi\rangle^2+\frac{1}{4}\left|\langle \psi|[A,B]|\psi\rangle\right|^2$$ where the second term vanishes for compatible observables. The above inequality becomes (if you look at the whole derivation) an equality only if\n• $$\hat{A}|\psi\rangle =c\hat{B}|\psi\rangle$$\n• $$\langle \psi|[\hat{A},\hat{B}]_+|\psi\rangle =0$$\nwhere $$\hat{A}=A-\langle A\rangle$$ and $$\hat{B}=B-\langle B\rangle$$." ]
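To see concretely when the uncertainty product for commuting observables is zero versus strictly positive, here is a small numerical sketch (my own example, not from the thread): two observables diagonal in the same basis, evaluated first in a joint eigenstate and then in a superposition.

```python
import math

# Two commuting observables: both diagonal in the same basis.
a_eig = (1.0, 2.0)  # eigenvalues of A
b_eig = (3.0, 4.0)  # eigenvalues of B

def spread(eigenvalues, probs):
    """Standard deviation of an observable given outcome probabilities."""
    mean = sum(p * e for p, e in zip(probs, eigenvalues))
    var = sum(p * (e - mean) ** 2 for p, e in zip(probs, eigenvalues))
    return math.sqrt(var)

# Joint eigenstate: both spreads vanish, so the product is 0 (equality).
eig_state = (1.0, 0.0)
print(spread(a_eig, eig_state) * spread(b_eig, eig_state))  # 0.0

# Equal superposition: both spreads are nonzero (strict inequality).
sup_state = (0.5, 0.5)
print(spread(a_eig, sup_state) * spread(b_eig, sup_state))  # 0.25
```

So even for compatible observables the product can be strictly positive; it is zero exactly when the state is an eigenstate of at least one of them, matching the comments above.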
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.799415,"math_prob":0.99980706,"size":1820,"snap":"2022-27-2022-33","text_gpt3_token_len":560,"char_repetition_ratio":0.14096916,"word_repetition_ratio":0.007905139,"special_character_ratio":0.31318682,"punctuation_ratio":0.09944751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T09:25:13Z\",\"WARC-Record-ID\":\"<urn:uuid:881d39db-2cb0-484b-b873-64c000dad011>\",\"Content-Length\":\"229764\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8a594988-d0c4-4261-a7bc-db5ed0e35724>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c120e0d-a665-4223-9d0a-a8a6e98044f6>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/625196/uncertainty-principle-for-two-compatible-observables-when-do-we-get-the-equalit\",\"WARC-Payload-Digest\":\"sha1:SPWVODWACVYYBZLGU3C2VFJI4Z2725M6\",\"WARC-Block-Digest\":\"sha1:RABJSSYQNWTTSX4WRX73L7MVPXJ6SY4N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104669950.91_warc_CC-MAIN-20220706090857-20220706120857-00205.warc.gz\"}"}
https://www.physicsforums.com/threads/hand-pushes-block.178701/
[ "# Hand pushes block\n\n## Homework Statement\n\nNot an actual HW problem, but related to a lot of problems I'm doing.\nLet's say a hand pushes a block across a surface (with friction).\nYou take a picture at 2 time points:\n1) when the hand pushes the block\n2) after the hand has let go and the block is slowing due to friction\n\nNow when I draw the FBD for situation #2, I am confused about the horizontal forces. (I know that the vertical forces cancel.)\n\nI know that 1 vector (kinetic friction) will point opposite the motion.\n\n## Homework Equations\n\nBUT: Is there a force that goes in the direction of motion???\nIf there isn't, then how is the block moving??\nIf there is, then what force could that be, seeing as the hand is no longer in contact with the block??\n\nThanks in advance for any explanation!\n\nPhanthomJay\nHomework Helper\nGold Member\n\n## Homework Statement\n\nNot an actual HW problem, but related to a lot of problems I'm doing.\nLet's say a hand pushes a block across a surface (with friction).\nYou take a picture at 2 time points:\n1) when the hand pushes the block\n2) after the hand has let go and the block is slowing due to friction\n\nNow when I draw the FBD for situation #2, I am confused about the horizontal forces. (I know that the vertical forces cancel.)\n\nI know that 1 vector (kinetic friction) will point opposite the motion.\nyes, correct.\n\n## Homework Equations\n\nBUT: Is there a force that goes in the direction of motion???\nWhat does your FBD show you?\nIf there isn't, then how is the block moving??\nThe hand sure helped. What would happen to the object after the hand was released if there were no friction? 
Why?\nIf there is, then what force could that be, seeing as the hand is no longer in contact with the block??\nRight, good point.\n\nMy problem is in drawing the correct FBD, so at the moment I'm not relying on the FBD for info.\nBut this is how I think it should look: (W and N are supposed to be equal lengths)\n\nYes, I know the hand got it moving, but if the FBD is supposed to show the forces on it, then there should be no force in the positive x direction (right), because the hand is no longer in contact with the block, right?\nBut if there is no force in positive x, then according to the diagram, Fnet is to the left, and the object is moving to the left (according to the diagram)???\n\nIf there were no friction, then the block would move at constant velocity in the positive x direction (and the acceleration would be zero).\n\nPhanthomJay\nHomework Helper\nGold Member\nMy problem is in drawing the correct FBD, so at the moment I'm not relying on the FBD for info.\nBut this is how I think it should look: (W and N are supposed to be equal lengths)\n\nYes, I know the hand got it moving, but if the FBD is supposed to show the forces on it, then there should be no force in the positive x direction (right), because the hand is no longer in contact with the block, right?\nBut if there is no force in positive x, then according to the diagram, Fnet is to the left, and the object is moving to the left (according to the diagram)???\n\nIf there were no friction, then the block would move at constant velocity in the positive x direction (and the acceleration would be zero).\n\nYes, your FBD is OK. You have correctly noted that there is only one horizontal force acting, the friction force, which is the net force, acting left. But per Newton 2, if the net force is left, then the acceleration is left, not necessarily the motion, which in this case is still to the right until the block stops. As you also have correctly noted, you don't need forces to keep a body moving. 
Forces retard or accelerate the motion; they do not keep it in motion, as noted in Newton 1.\n\nSo am I correct in saying that a FBD tells us nothing about which direction an object is moving (the velocity vector), only the direction (and magnitude) of its acceleration?\nIf that's true, then Fnet says nothing about velocity, and a system with Fnet = 0 could be stationary or moving at any speed in any direction? (Velocity is completely independent of Fnet, right?)\n\nPhanthomJay" ]
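The situation the thread settles on (friction is the only horizontal force; it decelerates the block while the block keeps moving right until it stops) can be simulated in a few lines. The numbers below are made up for illustration:

```python
g = 9.8      # m/s^2, gravitational acceleration
mu_k = 0.3   # kinetic friction coefficient (made-up value)
v = 4.0      # m/s, rightward velocity just after the hand lets go
dt = 0.01    # s, time step

# The only horizontal force is kinetic friction, so the net
# acceleration is leftward while the block still moves right.
a = -mu_k * g
t = 0.0
while v > 0:
    v = max(0.0, v + a * dt)
    t += dt

print(round(t, 2))  # ≈ 1.37 s; analytically t = v0 / (mu_k * g) ≈ 1.36 s
```

At every step the net force points left while the velocity points right, which is exactly the FBD puzzle resolved above: Fnet gives the acceleration, not the direction of motion.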
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9254333,"math_prob":0.93131244,"size":749,"snap":"2021-31-2021-39","text_gpt3_token_len":187,"char_repetition_ratio":0.11543624,"word_repetition_ratio":0.0,"special_character_ratio":0.246996,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9902313,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T00:20:12Z\",\"WARC-Record-ID\":\"<urn:uuid:0bd0730c-0eb1-4ab2-89a0-333a44d6f607>\",\"Content-Length\":\"79001\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35bd248c-391e-400c-949f-8ef9eaa03d99>\",\"WARC-Concurrent-To\":\"<urn:uuid:184b6535-9367-499f-94e9-587c95bbc45e>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/hand-pushes-block.178701/\",\"WARC-Payload-Digest\":\"sha1:KLD3BHW65TCNVDXQXVIAKNJP3IQDTHIM\",\"WARC-Block-Digest\":\"sha1:3GF37M7HVT7TBT7SULESFR7JPZYSP45T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057403.84_warc_CC-MAIN-20210922223752-20210923013752-00673.warc.gz\"}"}
https://accacoach.com/dupont-analysis/
[ "# DuPont Analysis", null, "DuPont analysis is a technique used to analyze a company's ability to increase its return on equity (ROE) based on profit margin, total asset turnover, and financial leverage. The DuPont analysis concludes that a company can raise its ROE by maintaining a high profit margin, increasing asset turnover to generate more sales, or leveraging its assets more efficiently (the equity multiplier).\n\nFormula\n\nThe DuPont model compares ROE to profit margin, asset turnover, and financial leverage (equity multiplier). The simple formula equating ROE is as follows:\n\nReturn on equity (ROE) = Profit margin * Total asset turnover * Financial leverage\n\nThe profit margin is obtained by dividing net income by sales, total asset turnover by dividing net sales by average total assets, and financial leverage by dividing total assets by total equity.\n\nEach figure can be obtained easily from the financial statements. Net income and sales appear on the income statement, while total assets and total equity are found on the balance sheet.\n\nInterpretation\n\nThe DuPont model helps analyze ROE and evaluate which performance measures of the business drive it. Investors care about what produces the current ROE, not just whether the headline number is big or small. Therefore, if investors are not satisfied with a low ROE, management can decompose ROE into its components (profit margin, total asset turnover, and financial leverage) to find the problem area: a low profit margin, low asset turnover, or low financial leverage.\n\nAfter identifying the problem area, management can address the issues and communicate them to shareholders. The picture can also differ case by case: normal operations can temporarily lower ROE, and investors should not decide based on a single small figure. For example, accelerated depreciation artificially lowers ROE in the early periods. 
The result can be evaluated with DuPont analysis, which helps management make decisions based on ROE and its three components.\n\nExample\n\nPP and AA are two retail companies which sell the same products and have the same return on equity ratio of 45 percent. The DuPont model can be used to show the strengths and weaknesses of each company. The companies have the following ratios:\n\nRatio      PP           AA\n\nProfit margin      30%        15%\n\nTotal asset turnover      0.50        6.0\n\nFinancial leverage            3.0          0.50\n\nFrom the above example, both PP and AA have the same overall ROE, but the companies' operations are very different.\n\nDuPont Analysis\n\n0.30 * 0.50 * 3.0 = 45%\n\n0.15 * 6.0 * 0.50 = 45%\n\nPP generates sales while maintaining a lower cost of goods sold and, as a result, shows a higher profit margin. PP, however, struggles to turn over its assets to produce a large volume of sales.\n\nAA, on the other hand, sells products at a smaller margin but turns over a lot of products. From the above we can see that AA has a low profit margin and a very high asset turnover.\n\nThe DuPont model helps investors compare companies with similar ratios. Investors can then apply judgment about the perceived risks in each company's business model." ]
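The decomposition in the example can be written directly in code. A minimal sketch (the function name is mine) reproducing the PP/AA comparison:

```python
def dupont_roe(profit_margin, asset_turnover, leverage):
    """DuPont identity: ROE = profit margin * total asset turnover * financial leverage."""
    return profit_margin * asset_turnover * leverage

pp = dupont_roe(0.30, 0.50, 3.0)   # PP: high margin, low turnover, high leverage
aa = dupont_roe(0.15, 6.0, 0.50)   # AA: low margin, high turnover, low leverage
print(f"PP ROE: {pp:.0%}, AA ROE: {aa:.0%}")  # PP ROE: 45%, AA ROE: 45%
```

The identical 45% outputs despite very different inputs illustrate the article's point: the headline ROE hides which lever each company is pulling.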
[ null, "https://accacoach.com/wp-content/uploads/2021/08/DUPONT-ANALYSIS.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92919225,"math_prob":0.9501969,"size":3147,"snap":"2023-40-2023-50","text_gpt3_token_len":635,"char_repetition_ratio":0.13331212,"word_repetition_ratio":0.0077669905,"special_character_ratio":0.1963775,"punctuation_ratio":0.077057794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97245693,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T19:04:32Z\",\"WARC-Record-ID\":\"<urn:uuid:8d3bcf01-fcee-4c2e-9c5b-3acd76daf70b>\",\"Content-Length\":\"68928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e226531-7f1a-4472-a88f-7ae951876d76>\",\"WARC-Concurrent-To\":\"<urn:uuid:787f1964-74e4-45c5-aa63-847404c8cef2>\",\"WARC-IP-Address\":\"82.180.172.163\",\"WARC-Target-URI\":\"https://accacoach.com/dupont-analysis/\",\"WARC-Payload-Digest\":\"sha1:Q3P3GPSMI54L5RGQNR6EQM4K4XU66VSH\",\"WARC-Block-Digest\":\"sha1:QEH5RQIR4JGS763XQXQKSG5IX62M3BE4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00734.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/profile/authors/13037934
[ "Community Profile", null, "# Martin Lindvald Pedersen\n\nLast seen: 4 months ago Active since 2021\n\n#### Statistics\n\n•", null, "•", null, "•", null, "•", null, "#### Content Feed\n\nView by\n\nSolved\n\nSet some matrix elements to zero\nFirst get the maximum of each *row*, and afterwards set all the other elements to zero. For example, this matrix: 1 2 3 ...\n\n10 months ago\n\nSolved\n\nsurrounded matrix\nWith a given matrix A (size m x n) create a matrix B (size m+2 x n+2) so that the matrix A is surrounded by ones: A = [1 2 ...\n\n10 months ago\n\nSolved\n\nFlip the main diagonal of a matrix\nGiven a n x n matrix, M, flip its main diagonal. Example: >> M=magic(5); >> flipDiagonal(M) 9 24 1 ...\n\n10 months ago\n\nSolved\n\nMake an awesome ramp for a tiny motorcycle stuntman\nOkay, given a vector, say v=[1 3 6 9 11], turn it into a matrix 'ramp' like so: m=[1 3 6 9 11; 3 6 9 11 0; 6 9 ...\n\n10 months ago\n\nSolved\n\nBack to basics 23 - Triangular matrix\nCovering some basic topics I haven't seen elsewhere on Cody. Given an input matrix, return a matrix with all elements above a...\n\n10 months ago\n\nSolved\n\nBack to basics 21 - Matrix replicating\nCovering some basic topics I haven't seen elsewhere on Cody. Given an input matrix, generate an output matrix that consists o...\n\n10 months ago\n\nSolved\n\nRemove NaN ?\ninput -> matrix (n*m) with at least one element equal to NaN; output -> matrix(p*m), the same matrix where we deleted the enti...\n\n10 months ago\n\nSolved\n\nRemove the air bubbles\nGiven a matrix a, return a matrix b in which all the zeros have \"bubbled\" to the top. 
That is, any zeros in a given column shoul...\n\n10 months ago\n\nSolved\n\nMatrix with different incremental runs\nGiven a vector of positive integers a = [ 3 2 4 ]; create the matrix where the *i* th column contains the vector *1:a(i)...\n\n10 months ago\n\nSolved\n\nReference Index Number\nGiven a reference set R of elements (each unique but identical in type), and a list V of elements drawn from the set R, possibly...\n\n11 months ago\n\nSolved\n\nOh Zero Zero Zero!!!\nHello all, So you have to find the largest section of zeros in a vector and then find the length of those zeros and there start...\n\n11 months ago\n\nSolved\n\nInsert zeros into vector\nInsert zeros after each elements in the vector. Number of zeros is specified as the input parameter. For example: x = [1 ...\n\n11 months ago\n\nSolved\n\nGenerate a vector like 1,2,2,3,3,3,4,4,4,4\nGenerate a vector like 1,2,2,3,3,3,4,4,4,4 So if n = 3, then return [1 2 2 3 3 3] And if n = 5, then return [1 2 2...\n\n11 months ago\n\nSolved\n\nUnique values without using UNIQUE function\nYou must return unique values in a vector in *stable* mode without using the unique function. About stable order flag: ...\n\n11 months ago\n\nSolved\n\nDetermine the number of odd integers in a vector\nDetermine the number of unique odd integers in a vector. 
Examples: Input x = [2 5 8 3 7 1]; Output y = 4; Inp...\n\n11 months ago\n\nSolved\n\nChange the sign of even index entries of the reversed vector\nchange the signs of the even index entries of the reversed vector example 1 vec = [4 -1 -2 9] ans = [9 2 -1 -4] example2...\n\n11 months ago\n\nSolved\n\nSet a diagonal\nGiven a matrix M, row vector v of appropriate length, and diagonal index d (where 0 indicates the main diagonal and off-diagonal...\n\n11 months ago\n\nSolved\n\nCreate a vector whose elements depend on the previous element\nThe idea is to create a vector A whose elements depend on the previous element : *A(i+1) = 2*A(i)+1* *2 Inputs*: - A : The...\n\n11 months ago\n\nSolved\n\nFinding peaks\nFind the peak values in the signal. The peak value is defined as the local maxima. For example, x= [1 12 3 2 7 0 3 1 19 7]; ...\n\n11 months ago\n\nSolved\n\nImplement a bubble sort technique and output the number of swaps required\nA bubble sort technique compares adjacent items and swaps them if they are in the wrong order. This is done recursively until al...\n\n11 months ago\n\nSolved\n\nDecimation - Optimized for speed\nThis problem is similar to http://www.mathworks.com/matlabcentral/cody/problems/1092-decimation, only this time the score will b...\n\n11 months ago\n\nSolved\n\nDecimation\nWhen dealing to the Roman Army, the term decimate meant that the entire unit would be broken up into groups of ten soldiers, and...\n\n11 months ago\n\nSolved\n\nFind nth maximum\nFind nth maximum in a vector of integer numbers. Return NaN if no such number exists. x = [2 6 4 9 -10 3 1 5 -10]; So ...\n\n11 months ago\n\nSolved\n\nCreate an index-powered vector\nGiven a input vector x, return y as index-powered vector as shown below. Example x = [2 3 6 9] then y should be [...\n\n11 months ago\n\nSolved\n\nFind last zero for each column\nGiven a numeric array of arbitrary size, return the row index of the last zero for each column. 
If a column contains all nonzero...\n\n11 months ago\n\nSolved\n\nCount consecutive 0's in between values of 1\nSo you have some vector that contains 1's and 0's, and the goal is to return a vector that gives the number of 0's between each ...\n\n11 months ago\n\nSolved\n\nCalculate the Number of Sign Changes in a Row Vector (No Element Is Zero)\nFor a row vector: V=[7 1 2 -3] there is one sign change (from 2 to -3). So, the function you write must return N=1. F...\n\n11 months ago\n\nSolved\n\nSymmetry of vector\nDetermine whether the vector is symmetric or not (vector could be even or odd in length). For example: x = [1 2 3 3 2 1] ...\n\n11 months ago\n\nSolved\n\nCreate an n-by-n null matrix and fill with ones certain positions\nThe positions will be indicated by a z-by-2 matrix. Each row in this z-by-2 matrix will have the row and column in which a 1 has...\n\n11 months ago\n\nSolved\n\nGetting the indices from a vector\nThis is a basic MATLAB operation. It is for instructional purposes. --- You may already know how to <http://www.mathworks....\n\n11 months ago" ]
[ null, "https://au.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/13037934_1529409178507_DEF.jpg", null, "https://au.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/indexing_master_2.png", null, "https://au.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/promoter.png", null, "https://au.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/community_authored_group.png", null, "https://au.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/solver.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.733236,"math_prob":0.982081,"size":5975,"snap":"2022-05-2022-21","text_gpt3_token_len":1690,"char_repetition_ratio":0.1845587,"word_repetition_ratio":0.056756757,"special_character_ratio":0.28585774,"punctuation_ratio":0.14659686,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946713,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T19:43:36Z\",\"WARC-Record-ID\":\"<urn:uuid:2e8241b0-28f3-475f-81a7-b839faa78a15>\",\"Content-Length\":\"102496\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c8d82db-c026-4ca2-949e-2bf0ada5dcac>\",\"WARC-Concurrent-To\":\"<urn:uuid:da348745-67b1-46c8-aaef-d02f1d646e71>\",\"WARC-IP-Address\":\"23.1.9.244\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/profile/authors/13037934\",\"WARC-Payload-Digest\":\"sha1:BY4H5GD223NIYBZQ7FNZ6CDDEDRMQAFT\",\"WARC-Block-Digest\":\"sha1:KT64ETMNTROYHFHT3WF6QBOE3T2KBFM7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512229.26_warc_CC-MAIN-20220516172745-20220516202745-00264.warc.gz\"}"}
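One of the Cody problems listed above asks for a bubble sort that also outputs the number of swaps required. A small stand-alone sketch of that technique in Python (the function name is mine, not part of the Cody problem):

```python
def bubble_sort_with_swaps(values):
    """Bubble sort that also reports how many adjacent swaps were needed.

    The swap count equals the number of inversions in the input.
    """
    a = list(values)        # sort a copy, leave the input untouched
    swaps = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:             # adjacent pair out of order
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
                swapped = True
        if not swapped:                     # no swaps in a full pass: done
            break
    return a, swaps
```

For example, `bubble_sort_with_swaps([3, 2, 1])` returns `([1, 2, 3], 3)`, since `[3, 2, 1]` contains three inversions and each adjacent swap removes exactly one.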
https://math.stackexchange.com/questions/2030560/does-there-exist-a-non-pir-in-which-every-countably-generated-prime-ideal-is-pri
[ "# Does there exist a non-PIR in which every countably generated prime ideal is principal?\n\nIs there a commutative ring $R$ such that all the countably generated primes are principal, but $R$ is not a principal ideal ring?\n\nI know that if all the prime ideals are principal, then all the ideals are principal (see here; this answer doesn't need $R$ to be a domain at all).\n\nOn the other hand, any commutative ring such that countably generated ideals are principal, is a PIR (see here).\n\nNotice that since the ring $R$ of algebraic integers is a non-Noetherian Bezout domain, all the finitely-generated (prime) ideals are principal, but $R$ is not a PIR.\n\nI tried some examples of non-PIR rings without too many prime ideals. The extreme case is only one prime ideal, for instance Artin and local, but the unique prime ideal wasn't principal in the examples I found. As pointed out by Alex Youcis in the comment below, this can't work since any Artin ring is noetherian. I wanted to search for local zero-dimensional rings, but I didn't know how.\n\n• I don't know an example off hand, but let me point out that I'm very confused about your sequence of statements. Namely, if $A$ is Artin local then, as you pointed out, it has only one prime, and so it's a terrible place to look for this property since $A$ is Noetherian, and so your condition forces the maximal ideal to be principal and thus by your first statement a PIR. – Alex Youcis Nov 26 '16 at 10:18\n• @AlexYoucis: you are right. Actually I wanted to search for local zero-dimensional rings, but I didn't know how, so I searched for local Artin rings. – Watson Nov 26 '16 at 10:20\n\nLet $k$ be a field and $X$ be an uncountable set, and let $R$ be the ring of functions $X\to k$ that are constant off of a finite set. If $P\subset R$ is a prime ideal, there are two cases. 
The first case is that $P$ contains a function with cofinite support, in which case by primality it must contain a function whose support is the complement of a singleton $\\{x\\}$. It then follows that $P$ must be equal to the ideal of functions which vanish at $x$, which is principal (generated by the characteristic function of $X\\setminus\\{x\\}$).\n\nThe second case is that $P$ contains no functions of cofinite support. By primality, $P$ must then contain every function of finite support (since if $f$ has finite support, $fg=0$ where $g$ is the characteristic function of the complement of the support of $f$, and $g\\not\\in P$). Thus $P$ is the set of all functions with finite support, which is indeed a prime ideal. This prime ideal is not countably generated, since $X$ is uncountable.\n\nAnother similar example is the ring of all functions $X\\to k$, where $X$ is any infinite set. The proof that no nonprincipal prime is countably generated is more complicated in that case, and is equivalent to the statement that no nonprincipal ultrafilter on a set is countably generated.\n\nFor a totally different kind of example, let $\\{x_i\\}$ be an uncountable set of variables and let $R=k[x_i]/(x_i^2)$. Then the unique prime ideal of $R$ is the ideal generated by all the $x_i$, since they are all nilpotent and the ideal they generate is already maximal (since the quotient is $k$). This ideal is not countably generated since any countable set of generators would only involve countably many of the variables.\n\nHere's one last example, which unlike the previous examples is a domain. First, let me start with a couple lemmas.\n\nLemma 1: Let $A$ be a domain and $a\\in A$ be a nonzero element. Then the ring $B=A[x,y]/(xy-a)$ is a domain.\n\nProof: Note that $B[1/x]=A[x,1/x][y]/(xy-a)=A[x,1/x][z]/(z-a/x)=A[x,1/x]$ by letting $z=y/x$. So $B[1/x]$ is a domain, and to show $B$ is a domain it suffices to show $x$ is not a zero divisor in $B$. 
If $x$ were a zero divisor, then there would be polynomials $f,g\in A[x,y]$ with $xf=g(xy-a)$ and $f$ not divisible by $xy-a$. But $x$ is prime in $A[x,y]$, so if $xf=g(xy-a)$ then $x$ divides $g$, so $f=(g/x)(xy-a)$ is divisible by $xy-a$.\n\nLemma 2: Let $A$ be a domain and let $a\in A$ be a nonzero element. Then the ring $C=A[x,y,s,t]/(xy-a,sx+ty-1)$ is a domain and no nonunit element of $A$ is a unit in $C$.\n\nProof: By Lemma 1, $B=A[x,y]/(xy-a)$ is a domain. Now $C[1/x]=B[1/x][s,t]/(sx+ty-1)=B[1/x][s,u]/(s+uy-1/x)=B[1/x][u]$ (by letting $u=t/x$). So $C[1/x]$ is a domain, and to show $C$ is a domain it suffices to show $x$ is not a zero divisor in $C$. If $x$ were a zero divisor in $C$, there would be polynomials $f,g\in B[s,t]$ such that $xf=g(sx+ty-1)$ but $f$ is not divisible by $sx+ty-1$. But if $x$ divides $g(sx+ty-1)$, then by induction $x$ must divide each of the homogeneous parts of $g$, and thus $x$ divides $g$ (here we use the fact that the constant term of $sx+ty-1$ is a unit). So $f=(g/x)(sx+ty-1)$ is divisible by $sx+ty-1$.
Then $A_\\omega$ is a domain, and is not a field, since $A_0$ was not a field and any non-unit in $A_0$ is still a non-unit in each $A_n$. But I claim that no nonzero prime ideal in $A_\\omega$ is countably generated.\n\nIndeed, suppose $P\\subset A_\\omega$ is prime and $a\\in P$ is a nonzero element. Then $a\\in A_n$ for some $n$, and for each $u\\in U$ in $A_{n+1}$ there are elements $x_{a,u},y_{a,u},s_{a,u},t_{a,u}$ such that $x_{a,u}y_{a,u}=a$ and $s_{a,u}x_{a,u}+t_{a,u}y_{a,u}=1$. Since $P$ is prime, it must contain exactly one of $x_{a,u}$ and $y_{a,u}$ for each $u$ (if it contained both, then $s_{a,u}x_{a,u}+t_{a,u}y_{a,u}=1$ would be in $P$). Now if $P$ were countably generated, its generators would involve only countably many of the elements of $U$, and we could find an automorphism of $A_\\omega$ that fixes each of the generators of $P$ but swaps $x_{a,u}$ and $y_{a,u}$ (and also swaps $s_{a,u}$ and $t_{a,u}$) for some $u$ that is not involved in any of the generators of $P$. This automorphism would fix $P$, and so $P$ contains $x_{a,u}$ iff it contains $y_{a,u}$. This is a contradiction, since $P$ must contain exactly one of them.\n\nThus no nonzero prime in $A_\\omega$ is countably generated. Since $A_\\omega$ is a domain which is not a field, it does contain nonzero primes, so it is not a principal ideal ring.\n\n• Not the OP but I have a followup question. Do you know an example of a domain with this property? I think the examples you gave were all non-domains – Jay Nov 27 '16 at 10:40\n• Good question! I've added an example that is a domain. – Eric Wofsey Nov 27 '16 at 21:51\n• Thanks, that's a nice example. It looks pretty complicated at first but on second glance it seems like each step is straightforward and maybe necessary – Jay Nov 29 '16 at 0:14\n• Incidentally, I suspect that the $s$ and $t$ variables are not actually necessary for the example. 
But without them, I don't know how to prove no prime can be countably generated, since then you only know that $P$ must contain at least one of $x_{a,u}$ and $y_{a,u}$ and I don't quite see how to prove that a countable set of generators couldn't somehow generate both $x_{a,u}$ and $y_{a,u}$ for all $u$. – Eric Wofsey Nov 29 '16 at 0:23" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.895668,"math_prob":0.9996891,"size":5394,"snap":"2020-45-2020-50","text_gpt3_token_len":1759,"char_repetition_ratio":0.13951762,"word_repetition_ratio":0.07483731,"special_character_ratio":0.32424918,"punctuation_ratio":0.11558935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999988,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T18:38:23Z\",\"WARC-Record-ID\":\"<urn:uuid:2462ddac-b1cb-47d7-92eb-380cf318e0f1>\",\"Content-Length\":\"163622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74d875b4-1d18-424c-af9d-db97b06fe34e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2a67899-af79-4750-a325-2869429fb021>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2030560/does-there-exist-a-non-pir-in-which-every-countably-generated-prime-ideal-is-pri\",\"WARC-Payload-Digest\":\"sha1:M23SC3FNV7F4CYIL22INZ65YKDNTPNX5\",\"WARC-Block-Digest\":\"sha1:LRFMYBTJGYBQHKETLXYKQ5SCPUGXD3VZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141748276.94_warc_CC-MAIN-20201205165649-20201205195649-00659.warc.gz\"}"}
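One small step the first example in the thread above leaves implicit is why the ideal $P_x=\{f\in R : f(x)=0\}$ is generated by the characteristic function of $X\setminus\{x\}$. The omitted verification is one line (notation as in the answer):

```latex
% Let chi denote the characteristic function of X \ {x}. For any f with f(x) = 0:
%   at x:       (chi f)(x) = 0 * 0    = 0 = f(x)
%   at t != x:  (chi f)(t) = 1 * f(t) = f(t)
f \;=\; \chi_{X\setminus\{x\}}\cdot f \quad\text{for every } f\in P_x,
\qquad\text{hence}\qquad P_x=\bigl(\chi_{X\setminus\{x\}}\bigr).
```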
https://math.stackexchange.com/questions/757091/alternative-definition-of-hyperbolic-cosine-without-relying-on-exponential-funct?noredirect=1
[ "# Alternative definition of hyperbolic cosine without relying on exponential function\n\nOrdinary trigonometric functions are defined independently of the exponential function, and then shown to be related to it by Euler's formula.\n\nCan one define hyperbolic cosine so that the formula $$\cosh{x}=\dfrac{e^x+e^{-x}}{2}$$ becomes something to be proven?\n\n• This is a definition - definitions can't be proved. – user122283 Apr 16 '14 at 23:53\n• @SanathDevalapurkar: One can define $\cosh u$ and $\sinh u$ geometrically as hyperbolic analogues of $\cos\theta$ and $\sin\theta$, taking $(\cosh u, \sinh u)$ to be points on the \"unit hyperbola\", $x^2 - y^2 = 1$. In that case, the relation between these values and exponentials does require proof. (I may have posted one on MSE at some point.) – Blue Apr 16 '14 at 23:58\n• How exactly have you defined $\cosh x$, if not through this very formula? – user61527 Apr 17 '14 at 0:04\n• I don't understand why this question is causing so much confusion...OP is merely asking if there's another equivalent definition one can work with. $\cosh x$ can be characterized as the function $f:\mathbb{R} \to \mathbb{R}$ satisfying $f'' = f$, $f'(0) = 0$ and $f(0) = 1$. Then after proving existence/uniqueness it's easy to verify that the formula you have works. – MathematicsStudent1122 Jul 18 '16 at 8:00\n\nThe more-geometrically-minded of us take $\cosh u$ and $\sinh u$ to be defined via the \"unit hyperbola\", $x^2 - y^2 = 1$, in a manner directly analogous to $\cos\theta$ and $\sin\theta$. Specifically, given $P$ a point on the hyperbola with vertex $V$, and defining $u$ as twice(?!) 
the area of the hyperbolic sector $OVP$, then $\\cosh u$ and $\\sinh u$ are, respectively the $x$- and $y$-coordinates of $P$.", null, "Just as in circular trig, we can assign measures $u$ (in \"hyperbolic radians\") to angles ---from the flat angle (when $u=0$) to half a right angle (when $u=\\infty$)--- and associate those measures with the lengths of the corresponding $\\cosh$ and $\\sinh$ segments. And, just as in circular trig (prior to the advent of imaginary numbers), we might be forgiven for suspecting that the correspondences $u \\leftrightarrow \\cosh u$ and $u \\leftrightarrow \\sinh u$ are \"non-arithmetical\", which is to say: that no arithmetical formula converts angle measures to their associated trig values.\n\nHowever, it turns out that the correspondences are not non-arithmetical; to find the appropriate arithmetical conversion formula, all we need is a bit of calculus ...\n\nEdit. (Two years later!) Check the edit history for an inelegant argument that I now streamline with the help of this trigonograph, in which lengths from the unit hyperbola have been scaled by $\\sqrt{2}$ (and, thus, areas by $2$):", null, "Because the hyperbola is rectangular, we have that $|\\overline{OX}|\\cdot|\\overline{XY}|$ is a constant (here, $1$), which guarantees that the regions labeled $v$ have the same area (namely, $1/2$), and therefore that the regions labeled $u$ have the same area (namely, $u$). Now, the bit of calculus I promised, to evaluate $u$ as the area under the reciprocal curve: $$u = \\int_1^{|\\overline{OX}|}\\frac{1}{t}dt = \\ln|\\overline{OX}| \\quad\\to\\quad |\\overline{OX}| = e^{u} \\quad\\to\\quad |\\overline{XY}| = \\frac{1}{e^u}$$ With that, we clearly have $$2\\,\\sinh u \\;=\\; e^{u}- e^{-u} \\qquad\\qquad 2\\,\\cosh u \\;=\\; e^{u} + e^{-u}$$ as desired. 
Easy-peasy!\n\nEnd of edit.\n\nThat hyperbolic radians are defined via doubling the area of a hyperbolic sector may seem at odds with the common definition of circular radians in terms of arc-length, but it's hard to argue with success, given the elegance of the formulas above. Even so, the hyperbolic twice-the-sector-area definition can be seen as directly analogous to the circular case, since circular radians are also definable as \"twice-the-sector-area\": In the unit circle, the sector with angle measure $\\pi/2$ radians has area $\\pi/4$ (it's a quarter-circle), the sector with angle measure $\\pi$ radians has area $\\pi/2$ (it's a half-circle), and the \"sector\" with angle measure $2\\pi$ radians has area $\\pi$ (it's the full circle); in these, and all other, cases, the angle measure is twice the sector area.\n\n• This was fascinating! Would I be correct in assuming that, like with the circular trig functions, if $z$ gives the arc length from the vertex to the point $(x,y)$ on the hyperbola $x^2-y^2=r^2$, with a sign of positive or negative according to whether $y$ is positive or negative, then $\\cosh z$ could also be defined as the ratio $\\frac{x}{r}$, and $\\sinh z$ as $\\frac{y}{r}$? And then in the unit hyperbola, these ratios simply reduce to coordinates and the arc length becomes half the sector area? This would be an even nicer analogy to circular trigonometry. – solstafir Apr 17 '14 at 4:03\n• @solstafir: You can define $\\cosh$ and $\\sinh$ based on an arc-length parameter (your $z$); however, hyperbolic arc-length cannot be expressed in terms of elementary functions. (Lengths of curves are almost-always trickier to calculate than the areas they bound; circles (& lines) are the primary exceptions.) The length of arc $V^\\prime P^\\prime$ involves $\\int \\sqrt{1+x^4}/x^2 dx$, which is quite non-trivial, so hyperbolic trig values would effectively be \"non-arithmetical\" functions of an arc-length-based angle measure. 
It's certainly not the case that arc-length is twice the sector area. – Blue Apr 17 '14 at 4:23\n• Okay, that makes sense, thanks for the clarification. – solstafir Apr 17 '14 at 4:34\n• This is great! Loved it. – MycrofD Sep 22 '16 at 9:29\n• Thank you very much. Surely I do my best to bring people's attention to your work whenever I can. – Lee David Chung Lin Apr 20 '18 at 10:31\n\nWell, that is usually simply taken to be the definition, but given that\n\n$$\\cos x=\\cosh ix$$\n\nyou may be asking for a proof that\n\n$$\\cos x=\\frac{e^{ix}+e^{-ix}}{2}$$\n\nFrom Taylor's theorem, we know that\n\n$$e^x=\\sum_{n=0}^{\\infty}\\frac{x^n}{n!}$$\n\nSo\n\n$$e^{ix}=\\sum_{n=0}^{\\infty}\\frac{(ix)^n}{n!}=\\sum_{n=0}^{\\infty}\\frac{(-1)^nx^{2n}}{(2n)!}+i\\sum_{n=0}^{\\infty}\\frac{(-1)^nx^{2n+1}}{(2n+1)!}=\\cos x+i\\sin x$$\n\nUsing $e^{ix}=\\cos x+i\\sin x$, express $e^{ix}+e^{-ix}$ in terms of $\\cos x$, noting that the cosine function is even and the sine function is odd.\n\nThis is often taken as the definition of $\\cosh$ so it can't really be proved.\n\n• From what ? This is often taken as the definition of cosh – mahmoud afefey Apr 16 '14 at 23:51\n• I don't understand what you're trying to ask. – Cameron Williams Apr 16 '14 at 23:51\n• why the downvote? – robjohn Apr 16 '14 at 23:53\n• @robjohn It's sitting at 3 downvotes now lol. I think people do not realize how poorly conceived this post was initially. – Cameron Williams Jul 13 '16 at 0:55\n• At the time of my comment the question simply read like this. – robjohn Sep 22 '16 at 16:50" ]
[ null, "https://i.stack.imgur.com/vuAScm.png", null, "https://i.stack.imgur.com/fH1Ka.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86468124,"math_prob":0.9988704,"size":2843,"snap":"2019-26-2019-30","text_gpt3_token_len":799,"char_repetition_ratio":0.11940824,"word_repetition_ratio":0.01369863,"special_character_ratio":0.27682027,"punctuation_ratio":0.11433757,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999286,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T05:23:21Z\",\"WARC-Record-ID\":\"<urn:uuid:32f480cb-6920-4bf6-8d37-a89c927dfa30>\",\"Content-Length\":\"175621\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26af9722-bdde-48d9-9241-2c9e03cd9905>\",\"WARC-Concurrent-To\":\"<urn:uuid:7879a2d3-7de4-4341-a2f0-f4cfbe18a1a4>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/757091/alternative-definition-of-hyperbolic-cosine-without-relying-on-exponential-funct?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:XYHMVAVWI4ZET32GT464JQQO35JSWS5T\",\"WARC-Block-Digest\":\"sha1:WVTSIJNY7JBF7PCMAORBZSR5BOTRMRTT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526446.61_warc_CC-MAIN-20190720045157-20190720071157-00447.warc.gz\"}"}
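The boxed identities at the end of the accepted answer are easy to sanity-check numerically (a spot check, not a substitute for the area argument):

```python
import math

# Spot-check 2*sinh(u) = e^u - e^(-u) and 2*cosh(u) = e^u + e^(-u),
# together with the defining relation of the unit hyperbola:
# cosh(u)^2 - sinh(u)^2 = 1.
for u in (0.0, 0.5, 1.0, 3.0, -2.0):
    assert math.isclose(2 * math.sinh(u), math.exp(u) - math.exp(-u))
    assert math.isclose(2 * math.cosh(u), math.exp(u) + math.exp(-u))
    assert math.isclose(math.cosh(u) ** 2 - math.sinh(u) ** 2, 1.0)
```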
https://answers.everydaycalculation.com/subtract-fractions/6-15-minus-18-7
[ "Solutions by everydaycalculation.com\n\n## Subtract 18/7 from 6/15\n\n1st number: 6/15, 2nd number: 2 4/7\n\n6/15 - 18/7 is -76/35.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 15 and 7 is 105\n2. For the 1st fraction, since 15 × 7 = 105,\n6/15 = (6 × 7)/(15 × 7) = 42/105\n3. Likewise, for the 2nd fraction, since 7 × 15 = 105,\n18/7 = (18 × 15)/(7 × 15) = 270/105\n4. Subtract the two fractions:\n42/105 - 270/105 = (42 - 270)/105 = -228/105\n5. After reducing the fraction, the answer is -76/35" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6004041,"math_prob":0.984006,"size":400,"snap":"2020-24-2020-29","text_gpt3_token_len":185,"char_repetition_ratio":0.23484848,"word_repetition_ratio":0.0,"special_character_ratio":0.55,"punctuation_ratio":0.07619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983958,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T19:35:57Z\",\"WARC-Record-ID\":\"<urn:uuid:1cb8335f-44fd-44e1-9530-63ad1e5d00ee>\",\"Content-Length\":\"8541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abfd6c25-3a03-4fc0-9508-990d03d1f8bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:93a88c69-0f5a-40da-8b85-7967bf97c5a5>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/6-15-minus-18-7\",\"WARC-Payload-Digest\":\"sha1:AE7G6YA2EQ2PDMFO366YEHXACSPODERR\",\"WARC-Block-Digest\":\"sha1:YTAESKMFNOKMOJ7XZNELSS2VQWWI2266\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657151197.83_warc_CC-MAIN-20200714181325-20200714211325-00406.warc.gz\"}"}
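The five steps above are the textbook algorithm: put both fractions over the least common denominator, subtract numerators, then reduce by the GCD. A compact sketch (the function name is mine; positive denominators assumed):

```python
from math import gcd

def subtract_fractions(n1, d1, n2, d2):
    """Compute n1/d1 - n2/d2 using the LCM-of-denominators steps above."""
    lcm = d1 * d2 // gcd(d1, d2)               # step 1: least common denominator
    num = n1 * (lcm // d1) - n2 * (lcm // d2)  # steps 2-4: rescale and subtract
    g = gcd(abs(num), lcm)                     # step 5: reduce to lowest terms
    return num // g, lcm // g
```

Here `subtract_fractions(6, 15, 18, 7)` gives `(-76, 35)`, matching the worked answer: LCM(15, 7) = 105, 42/105 − 270/105 = −228/105 = −76/35.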
https://gis.stackexchange.com/questions/350574/processing-ndvi-in-google-earth-engine/350662
[ "# Processing NDVI in Google Earth Engine\n\nI would like to analyze NDVI in the region of Bangkok.\n\nHere is the code:\n\n``````{\n//Define time range\nvar startyear = 2006;\nvar endyear = 2016;\n\n//Set date in ee date format\nvar startdate = ee.Date.fromYMD(startyear, 1, 1);\nvar enddate = ee.Date.fromYMD(endyear, 12, 31);\n\n///Import a fusion table of your study area\n// The Ca basin\nvar Ca = ee.FeatureCollection(geometry)\n\n//filter on date and area (bounds)\nvar l5images = l5.filterDate(startdate,enddate).filterBounds( Ca);\nvar l7images = l7.filterDate(startdate,enddate).filterBounds( Ca);\nvar l8images = l8.filterDate(startdate,enddate).filterBounds( Ca);\n\n///Include a function to remove the clouds\n//set cloud threshold\nvar cloud_threshold = 40;\n\n// Select the red and NIR bands\nl5images = l5images.select([\"B4\",\"B3\"]);\nl7images = l7images.select([\"B4\",\"B3\"]);\nl8images = l8images.select([\"B5\",\"B4\"]);\n\n// calculate ndvi from landsat 8\nfunction l8ndvi(img) {\nvar ndvi = img.normalizedDifference(['B5', 'B4']).rename('NDVI');\nreturn img.addBands(ndvi);\n}\n\n// calculate ndvi from landsat 5 & 7\nfunction l57ndvi(img) {\nvar ndvi = img.normalizedDifference(['B4', 'B3']).rename('NDVI');\nreturn img.addBands(ndvi);\n}\n\n// calculate ndvi for each image in imagecollection\nvar l5ndvi = l5images.map(l57ndvi);\nvar l7ndvi = l7images.map(l57ndvi);\nvar l8ndvi = l8images.map(l8ndvi);\n\n// combine all data in single image collection\nvar allcollection = ee.ImageCollection((l5ndvi.merge(l7ndvi)).merge(l8ndvi));\n\n// add map to canvas\nMap.addLayer(allcollection);\nMap.centerObject(allcollection,8);\nprint()\n}\n``````\n\nGoogle Earth Engine Link that I used\n\nBut then there is an error, and I don't really understand what it is about.", null, "So is there anything I can do about this? 
I am still very new to programming.\n\n• The message means that Earth Engine is trying to combine B4, B3 and NDVI into a single collection, but that something is different about them which is preventing that. I suspect that the data type of the NDVI band is different to B4 and B3, or that the object type of NDVI is unknown, so you need to cast NDVI to the right data or object type. See here: developers.google.com/earth-engine/tutorial_js_02#casting – sbphd Feb 13 '20 at 10:33\n\n## 2 Answers\n\n`NormalizedDifference()` returns floating point values. Your bands B3/B4/B5 are probably of some integer type. GEE cannot make composites with different band types. Easiest solution would be casting the input B3-4-5 bands to floating point numbers. Change the following functions of your code, and I think it should work fine (note that your link doesn't work for me).\n\n``````// calculate ndvi from landsat 8\nfunction l8ndvi(img) {\nvar ndvi = img.normalizedDifference(['B5', 'B4']).rename('NDVI');\nreturn img.toFloat().addBands(ndvi);\n}\n\n// calculate ndvi from landsat 5 & 7\nfunction l57ndvi(img) {\nvar ndvi = img.normalizedDifference(['B4', 'B3']).rename('NDVI');\nreturn img.toFloat().addBands(ndvi);\n}\n``````\n\nI'd suggest that you comment out these lines and let us know if it works.\n\n``````// Select the red and NIR bands\nl5images = l5images.select([\"B4\",\"B3\"]);\nl7images = l7images.select([\"B4\",\"B3\"]);\nl8images = l8images.select([\"B5\",\"B4\"]);\n``````\n\nIf it works, I'll explain what I have in mind. I'm also a newbie, so maybe my hypothesis is wrong.\n\n• This isn't an answer, it's a comment – nmtoken Feb 14 '20 at 16:44\n• Thx for the reply @nmtoken. I didn't make myself clear, sorry. My answer was \"comment out these lines of codes... It should work. Let me know.\" – VuTien Khang Feb 15 '20 at 12:46" ]
[ null, "https://i.stack.imgur.com/95ODy.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50976765,"math_prob":0.924306,"size":1721,"snap":"2021-21-2021-25","text_gpt3_token_len":500,"char_repetition_ratio":0.12696564,"word_repetition_ratio":0.01793722,"special_character_ratio":0.292853,"punctuation_ratio":0.21118012,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96765774,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T18:12:33Z\",\"WARC-Record-ID\":\"<urn:uuid:8db1d455-698c-4fd2-a937-d21ae4d0f3a0>\",\"Content-Length\":\"172640\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3b14127-eb1f-448c-a08a-451e4e963979>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf0cc219-0ed4-481b-b388-6018f7d7851f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/350574/processing-ndvi-in-google-earth-engine/350662\",\"WARC-Payload-Digest\":\"sha1:6WHH5UMOE4E3JFHEJTWLIYLDW3SOYCQM\",\"WARC-Block-Digest\":\"sha1:BHZD3FZUY2UAH6L5U2Y6SFYZKSHOXKCY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487586239.2_warc_CC-MAIN-20210612162957-20210612192957-00621.warc.gz\"}"}
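Stripped of the Earth Engine API, the quantity being computed is the normalized difference (NIR − Red)/(NIR + Red), which is a ratio and therefore inherently floating-point — the reason the accepted answer casts the integer reflectance bands with `toFloat()`. A minimal stand-alone sketch in plain Python (not the `ee` API; the function name is mine):

```python
def ndvi(nir, red):
    """Normalized difference (NIR - Red) / (NIR + Red), as a float in [-1, 1].

    The explicit float() casts mirror the toFloat() fix in the accepted
    answer: the ratio is floating-point even when the bands are integers.
    """
    nir, red = float(nir), float(red)
    total = nir + red
    return (nir - red) / total if total != 0.0 else 0.0  # guard the zero pixel
```

Dense vegetation reflects strongly in NIR, so for example `ndvi(0.5, 0.1)` is about 0.67, while bare soil or water gives values near or below zero.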
https://cr.openjdk.java.net/~iris/se/10/latestSpec/api/java/awt/geom/QuadCurve2D.Float.html
[ "Module java.desktop\nPackage java.awt.geom\n\n## Class QuadCurve2D.Float\n\n• All Implemented Interfaces:\n`Shape`, `Serializable`, `Cloneable`\nEnclosing class:\nQuadCurve2D\n\n```public static class QuadCurve2D.Float\nextends QuadCurve2D\nimplements Serializable```\nA quadratic parametric curve segment specified with `float` coordinates.\nSince:\n1.2\nSee Also:\nSerialized Form\n\n• ### Nested classes/interfaces declared in class java.awt.geom.QuadCurve2D\n\n`QuadCurve2D.Double, QuadCurve2D.Float`\n• ### Field Summary\n\nFields\nModifier and Type Field Description\n`float` `ctrlx`\nThe X coordinate of the control point of the quadratic curve segment.\n`float` `ctrly`\nThe Y coordinate of the control point of the quadratic curve segment.\n`float` `x1`\nThe X coordinate of the start point of the quadratic curve segment.\n`float` `x2`\nThe X coordinate of the end point of the quadratic curve segment.\n`float` `y1`\nThe Y coordinate of the start point of the quadratic curve segment.\n`float` `y2`\nThe Y coordinate of the end point of the quadratic curve segment.\n• ### Constructor Summary\n\nConstructors\nConstructor Description\n`Float()`\nConstructs and initializes a `QuadCurve2D` with coordinates (0, 0, 0, 0, 0, 0).\n```Float​(float x1, float y1, float ctrlx, float ctrly, float x2, float y2)```\nConstructs and initializes a `QuadCurve2D` from the specified `float` coordinates.\n• ### Method Summary\n\nAll Methods\nModifier and Type Method Description\n`Rectangle2D` `getBounds2D()`\nReturns a high precision and more accurate bounding box of the `Shape` than the `getBounds` method.\n`Point2D` `getCtrlPt()`\nReturns the control point.\n`double` `getCtrlX()`\nReturns the X coordinate of the control point in `double` precision.\n`double` `getCtrlY()`\nReturns the Y coordinate of the control point in `double` precision.\n`Point2D` `getP1()`\nReturns the start point.\n`Point2D` `getP2()`\nReturns the end point.\n`double` `getX1()`\nReturns the X coordinate of the 
start point in `double` precision.\n`double` `getX2()`\nReturns the X coordinate of the end point in `double` precision.\n`double` `getY1()`\nReturns the Y coordinate of the start point in `double` precision.\n`double` `getY2()`\nReturns the Y coordinate of the end point in `double` precision.\n`void` ```setCurve(double x1, double y1, double ctrlx, double ctrly, double x2, double y2)```\nSets the location of the end points and control point of this curve to the specified `double` coordinates.\n`void` ```setCurve(float x1, float y1, float ctrlx, float ctrly, float x2, float y2)```\nSets the location of the end points and control point of this curve to the specified `float` coordinates.\n• ### Methods declared in class java.lang.Object\n\n`equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`\n• ### Methods declared in class java.awt.geom.QuadCurve2D\n\n`clone, contains, contains, contains, contains, getBounds, getFlatness, getFlatness, getFlatness, getFlatnessSq, getFlatnessSq, getFlatnessSq, getPathIterator, getPathIterator, intersects, intersects, setCurve, setCurve, setCurve, setCurve, solveQuadratic, solveQuadratic, subdivide, subdivide, subdivide`\n• ### Field Detail\n\n• #### x1\n\n`public float x1`\nThe X coordinate of the start point of the quadratic curve segment.\nSince:\n1.2\n• #### y1\n\n`public float y1`\nThe Y coordinate of the start point of the quadratic curve segment.\nSince:\n1.2\n• #### ctrlx\n\n`public float ctrlx`\nThe X coordinate of the control point of the quadratic curve segment.\nSince:\n1.2\n• #### ctrly\n\n`public float ctrly`\nThe Y coordinate of the control point of the quadratic curve segment.\nSince:\n1.2\n• #### x2\n\n`public float x2`\nThe X coordinate of the end point of the quadratic curve segment.\nSince:\n1.2\n• #### y2\n\n`public float y2`\nThe Y coordinate of the end point of the quadratic curve segment.\nSince:\n1.2\n• ### Constructor Detail\n\n• #### Float\n\n`public Float()`\nConstructs 
and initializes a `QuadCurve2D` with coordinates (0, 0, 0, 0, 0, 0).\nSince:\n1.2\n• #### Float\n\n```public Float(float x1,\nfloat y1,\nfloat ctrlx,\nfloat ctrly,\nfloat x2,\nfloat y2)```\nConstructs and initializes a `QuadCurve2D` from the specified `float` coordinates.\nParameters:\n`x1` - the X coordinate of the start point\n`y1` - the Y coordinate of the start point\n`ctrlx` - the X coordinate of the control point\n`ctrly` - the Y coordinate of the control point\n`x2` - the X coordinate of the end point\n`y2` - the Y coordinate of the end point\nSince:\n1.2\n• ### Method Detail\n\n• #### getX1\n\n`public double getX1()`\nReturns the X coordinate of the start point in `double` precision.\nSpecified by:\n`getX1` in class `QuadCurve2D`\nReturns:\nthe X coordinate of the start point.\nSince:\n1.2\n• #### getY1\n\n`public double getY1()`\nReturns the Y coordinate of the start point in `double` precision.\nSpecified by:\n`getY1` in class `QuadCurve2D`\nReturns:\nthe Y coordinate of the start point.\nSince:\n1.2\n• #### getP1\n\n`public Point2D getP1()`\nReturns the start point.\nSpecified by:\n`getP1` in class `QuadCurve2D`\nReturns:\na `Point2D` that is the start point of this `QuadCurve2D`.\nSince:\n1.2\n• #### getCtrlX\n\n`public double getCtrlX()`\nReturns the X coordinate of the control point in `double` precision.\nSpecified by:\n`getCtrlX` in class `QuadCurve2D`\nReturns:\nthe X coordinate of the control point\nSince:\n1.2\n• #### getCtrlY\n\n`public double getCtrlY()`\nReturns the Y coordinate of the control point in `double` precision.\nSpecified by:\n`getCtrlY` in class `QuadCurve2D`\nReturns:\nthe Y coordinate of the control point.\nSince:\n1.2\n• #### getCtrlPt\n\n`public Point2D getCtrlPt()`\nReturns the control point.\nSpecified by:\n`getCtrlPt` in class `QuadCurve2D`\nReturns:\na `Point2D` that is the control point of this `QuadCurve2D`.\nSince:\n1.2\n• #### getX2\n\n`public double getX2()`\nReturns the X coordinate of the end point in `double` 
precision.\nSpecified by:\n`getX2` in class `QuadCurve2D`\nReturns:\nthe X coordinate of the end point.\nSince:\n1.2\n• #### getY2\n\n`public double getY2()`\nReturns the Y coordinate of the end point in `double` precision.\nSpecified by:\n`getY2` in class `QuadCurve2D`\nReturns:\nthe Y coordinate of the end point.\nSince:\n1.2\n• #### getP2\n\n`public Point2D getP2()`\nReturns the end point.\nSpecified by:\n`getP2` in class `QuadCurve2D`\nReturns:\na `Point2D` object that is the end point of this `QuadCurve2D`.\nSince:\n1.2\n• #### setCurve\n\n```public void setCurve(double x1,\ndouble y1,\ndouble ctrlx,\ndouble ctrly,\ndouble x2,\ndouble y2)```\nSets the location of the end points and control point of this curve to the specified `double` coordinates.\nSpecified by:\n`setCurve` in class `QuadCurve2D`\nParameters:\n`x1` - the X coordinate of the start point\n`y1` - the Y coordinate of the start point\n`ctrlx` - the X coordinate of the control point\n`ctrly` - the Y coordinate of the control point\n`x2` - the X coordinate of the end point\n`y2` - the Y coordinate of the end point\nSince:\n1.2\n• #### setCurve\n\n```public void setCurve(float x1,\nfloat y1,\nfloat ctrlx,\nfloat ctrly,\nfloat x2,\nfloat y2)```\nSets the location of the end points and control point of this curve to the specified `float` coordinates.\nParameters:\n`x1` - the X coordinate of the start point\n`y1` - the Y coordinate of the start point\n`ctrlx` - the X coordinate of the control point\n`ctrly` - the Y coordinate of the control point\n`x2` - the X coordinate of the end point\n`y2` - the Y coordinate of the end point\nSince:\n1.2\n• #### getBounds2D\n\n`public Rectangle2D getBounds2D()`\nReturns a high precision and more accurate bounding box of the `Shape` than the `getBounds` method. Note that there is no guarantee that the returned `Rectangle2D` is the smallest bounding box that encloses the `Shape`, only that the `Shape` lies entirely within the indicated `Rectangle2D`. 
The bounding box returned by this method is usually tighter than that returned by the `getBounds` method and never fails due to overflow problems since the return value can be an instance of the `Rectangle2D` that uses double precision values to store the dimensions.\n\nNote that the definition of insideness can lead to situations where points on the defining outline of the `shape` may not be considered contained in the returned `bounds` object, but only in cases where those points are also not considered contained in the original `shape`.\n\nIf a `point` is inside the `shape` according to the `contains(point)` method, then it must be inside the returned `Rectangle2D` bounds object according to the `contains(point)` method of the `bounds`. Specifically:\n\n`shape.contains(p)` requires `bounds.contains(p)`\n\nIf a `point` is not inside the `shape`, then it might still be contained in the `bounds` object:\n\n`bounds.contains(p)` does not imply `shape.contains(p)`\n\nSpecified by:\n`getBounds2D` in interface `Shape`\nReturns:\nan instance of `Rectangle2D` that is a high-precision bounding box of the `Shape`.\nSince:\n1.2\nSee Also:\n`Shape.getBounds()`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6251388,"math_prob":0.9061966,"size":8209,"snap":"2021-21-2021-25","text_gpt3_token_len":2134,"char_repetition_ratio":0.2253504,"word_repetition_ratio":0.46879876,"special_character_ratio":0.22926056,"punctuation_ratio":0.15284441,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9907888,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-17T20:28:52Z\",\"WARC-Record-ID\":\"<urn:uuid:9e58d588-ea7d-4685-a5e1-dcfb3d5aa439>\",\"Content-Length\":\"42389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8e37bd4-904a-438f-a1da-c61618fbfe2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:09f4e6b1-9ae5-414f-9954-29d4fb586b05>\",\"WARC-IP-Address\":\"137.254.56.61\",\"WARC-Target-URI\":\"https://cr.openjdk.java.net/~iris/se/10/latestSpec/api/java/awt/geom/QuadCurve2D.Float.html\",\"WARC-Payload-Digest\":\"sha1:OZFR4OHGGSC2T2FOGHL6D2A5KZJ4EFMW\",\"WARC-Block-Digest\":\"sha1:7TATBQVNQHJ4FE65IPXXPQT6B5CMFPFU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487633444.37_warc_CC-MAIN-20210617192319-20210617222319-00431.warc.gz\"}"}
http://www.learner.org/courses/learningmath/number/session10/part_b/indexk2.html
[ "", null, "Teacher resources and professional development across the curriculum\n\nTeacher professional development and classroom resources across the curriculum", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Session 10, Part B:\nReasoning About Number and Operations (40 minutes)\n\nIn This Part: Exploring Standards | Examining Children's Reasoning\n\nThe National Council of Teachers of Mathematics has identified number and operations as a strand in its Principles and Standards for School Mathematics. In grades pre-K through 12, instructional programs should enable all students to do the following:\n\n • Understand numbers, ways of representing numbers, relationships among numbers, and number systems • Understand the meaning of operations and how they relate to one another • Compute fluently and make reasonable estimates\n\nIn pre-K through grade 2 classrooms, students are expected to do the following:\n\n • Understand various meanings of addition and subtraction of whole numbers and the relationship between the two operations • Develop and use strategies for whole-number computations, with a focus on addition and subtraction • Develop fluency with basic number combinations for addition and subtraction • Use a variety of methods and tools to compute, including objects, mental computation, estimation, paper and pencil, and calculators • Count with understanding, and recognize \"how many\" are in sets of objects • Use multiple models to develop initial understandings of place value and the base ten number system • Connect number words and numerals to the quantities they represent using various physical models and representations\n\nThe NCTM Number and Operations Standards state that students should \"develop a solid understanding of the base-ten numeration system and place-value concepts by the end of grade 2... Using concrete materials can help students learn to group and ungroup by tens. 
For example, such materials can help students express '23' as 23 ones (units), 1 ten and 13 ones, or 2 tens and 3 ones. Of course, students should also note the ways in which using concrete materials to represent a number differs from using conventional notation. For example, when the numeral for the collection is written, the arrangement of digits matters -- the digit for the tens must be written to the left of the digit for the units. In contrast, when base-ten blocks or connecting multi-cubes are used, the value is not affected by the arrangement of the blocks\" (NCTM, 2000, p. 81).\n\nAs you watch another video segment from Ms. Weiss's class, think about how the students are developing this understanding of number and operations.", null, "", null, "", null, "Video Segment In this video segment, two groups of students use Digi-Blocks to solve subtraction problems. Note 3 If you are using a VCR, you can find this segment on the session video approximately 16 minutes and 22 seconds after the Annenberg Media logo.", null, "", null, "", null, "", null, "Problem B1", null, "a. How did the students use the Digi-Blocks to represent the problem? b. What processes did the students use to group the Digi-Blocks? c. What subtraction strategies did the students consider?", null, "Problem B2", null, "How did the Digi-Blocks help students relate their actions to the written algorithm?", null, "Problem B3", null, "What are some ways that you see the NCTM Standards being incorporated into Ms. Weiss's lesson?", null, "Problem B4", null, "Embedded in the children's explanations of solving the subtraction problems are early understandings of place value. How could you extend this conversation to formalize these notions?", null, "Join the discussion! Post your answer to Problem B4 on Channel Talk, then read and respond to answers posted by others.\n\n Principles and Standards for School Mathematics Copyright © 2000 by the National Council of Teachers of Mathematics, Inc. 
www.nctm.org. All rights reserved. This material may not be copied or redistributed electronically or in other formats without written permission from NCTM. standards.nctm.org Standards are listed with the permission of the National Council of Teachers of Mathematics (NCTM). NCTM does not endorse the content or validity of these alignments. Digi-Block® materials are used with permission of Digi-Block, Inc.", null, "", null, "", null, "Session 10, Grades K-2: Index | Notes | Solutions | Video" ]
[ null, "http://www.learner.org/images/header/annenberg-learner.jpg", null, "http://www.learner.org/images/header/mail_list_icon.png", null, "http://www.learner.org/images/header/search_icon.jpg", null, "http://www.learner.org/courses/learningmath/number/images/mathhome_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/number_title_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/glossary_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/map_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/s10_k2_materials.gif", null, "http://www.learner.org/courses/learningmath/number/images/notes_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/solutions_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/video_off.gif", null, "http://www.learner.org/courses/learningmath/number/images/n10b_k2.gif", null, "http://www.learner.org/courses/learningmath/number/images/videotexttop.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/videotextbottom.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/solution_button.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/solution_button.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, 
"http://www.learner.org/courses/learningmath/number/images/solution_button.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/solution_button.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/bottom_corner.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null, "http://www.learner.org/courses/learningmath/number/images/spacer.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91323614,"math_prob":0.85407454,"size":2296,"snap":"2019-13-2019-22","text_gpt3_token_len":456,"char_repetition_ratio":0.12609075,"word_repetition_ratio":0.005540166,"special_character_ratio":0.19817074,"punctuation_ratio":0.09319899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97830063,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T19:10:35Z\",\"WARC-Record-ID\":\"<urn:uuid:f5585b95-bd8d-4dbb-8656-2efaa1c41d9b>\",\"Content-Length\":\"57018\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:927bdfd0-613c-49b4-9c43-afd050a9a6ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4134a4c-895d-4d9f-baa5-b022e7ad4229>\",\"WARC-IP-Address\":\"104.25.141.19\",\"WARC-Target-URI\":\"http://www.learner.org/courses/learningmath/number/session10/part_b/indexk2.html\",\"WARC-Payload-Digest\":\"sha1:CTUYDCQFVUIZUUF45KFKQWWHS3ZYC5B5\",\"WARC-Block-Digest\":\"sha1:B37EAQBZKQKENVKNFFDFOHW6E72XVPZQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202688.89_warc_CC-MAIN-20190322180106-20190322202106-00379.warc.gz\"}"}
https://blog.quantinsti.com/options-trading-excel-model/
[ "# How to Use Black Scholes Option Pricing Model", null, "In this post, we will discuss modeling option pricing using the Black Scholes Option Pricing model and plotting the same for a combination of various options. If you are new to options trading, then you can check the options trading for dummies free course on Quantra. You can put any number of call and/or put options in the model and use a built-in macro (named ‘BS’) for calculating the BS model based option pricing for each option. The macro (named ‘PayOff’) is used for plotting the Profit/Loss for the overall combination of the option positions against the spot price.\n\nSheet1, named Payoff, has a table where we specify all option parameters. Column B specifies the expiry date for the options. Column C specifies the option type. Column D has the strike price of the underlying asset. Column E shows the premium amount in INR at which the option is bought. Column F tells us about the number of option contracts we have bought. Column G specifies the volatility, column H specifies the Black Scholes price of the option (calculated by the macro “BS”). Column I is the current spot price of the underlying asset, column J shows the time to expiry of the option (calculated using the formula). Column K specifies the Expected PnL of the option (calculated using the formula). It is calculated as the difference between the Black Scholes price and the premium paid, multiplied by the number of option contracts. Column L shows the actual premium in the market currently, meaning the current premium should you wish to buy the option.\n\nThe 13th row calculates the total investment. Since we have bought two call options at a premium of 120 and two put options at a premium of 152, our total investment is 120 × 2 + 152 × 2 = 544. The 14th row shows the Expected present value. Since the market has moved after the options are bought, the current expected price of the option multiplied by the number of option contracts gives the expected value. 
Hence the expected payoff is 170.18 × 2 + 124.59 × 2 ≈ 589.55 (the sheet shows 589.5475, computed from the unrounded premiums).\n\nThe present value in row 15 is calculated similarly by taking the product of the actual premium in the market currently and the number of option contracts. Hence the present value is 150 × 2 + 120 × 2 = 540.\n\n### Plotting the Payoff\n\nThe graph below shows the plot of expected payoff for the option portfolio. This is done by taking the expected payoff values from sheet4. More on this later.", null, "#### BS in Macros\n\nThe BS Price sheet shows the pricing of an option using the Black Scholes model. From the Black-Scholes option pricing model, we know the price of a call option on a non-dividend stock can be written as:\n\n$$C_t = S_t N(d_1) - Xe^{-r\\tau} N(d_2)$$\n\nand the price of a put option on a non-dividend stock can be written as:\n\n$$P_t = Xe^{-r\\tau} N (-d_2) - S_tN (-d_1)$$\n\nwhere\n\n$$d_1 = \\frac {{ln ( \\frac {S_t} {X}) + (r + \\frac {\\sigma_s^2} {2}) \\tau}} {{\\sigma_s} {\\sqrt{\\tau}}}$$\n\n$$d_2 = \\frac {{ln ( \\frac {S_t} {X}) + (r - \\frac {\\sigma_s^2} {2}) \\tau}} {{\\sigma_s} {\\sqrt{\\tau}}} = d_1 - \\sigma_s \\sqrt{\\tau}$$\n\n$$\\tau = T - t$$\n\n$$N(\\cdot)$$\n\nis the cumulative density function of the normal distribution.\n\n$$S$$\n\nCurrent price of the underlying\n\n$$X$$\n\nStrike price\n\n$$r$$\n\nRisk-free interest rate\n\n$$\\tau$$\n\nTime to expiry\n\n$$ln$$\n\nNatural log\n\nThe call and put values using the Black Scholes framework are calculated in the 13th and 14th rows for the parameters specified in rows 1 to 5.", null, "### Customizing BS\n\n“Back-end BS” sheet has the same set of values as the Payoff sheet in columns A to G. Column H onwards shows the spot price ranges in the 2nd row. You can change the starting point for the price range of Spot Price in Cell H2. The increment (presently of 10 points) can be changed in Cell I2 and then dragged across the range horizontally. The 3rd row shows the Black Scholes call option price for the specified parameters and varying spot price. 
The 4th row shows the Black Scholes put option price for the specified parameters and varying spot price. Please note that though the post shows the calculation for three options, you can build combinations of up to 10 options by just filling in the appropriate values in the table in Sheet1. For more than 10 options, you can edit the sheet and the macro.\n\nThe 13th row calculates the total payoff from the option position. This is calculated as the difference between the profits from the options and the total investment.", null, "In this case, the profit from the overall option position is the sum of H3 and H4. The total investment (calculated in the Payoff sheet, 13th row) of 544 has to be subtracted from the sum of H3 and H4 to obtain the final payoff. Similar calculations are done for all other columns henceforth.", null, "The Expected Payoff graph in Sheet1 is the plot of the total payoff calculated in Sheet3 against the underlying spot price.", null, "There are two macros. One, in the BS Price sheet, calculates the Black Scholes option price depending upon the values entered in the Payoff sheet. The other, in the Payoff sheet, plots the Expected Payoff graph. Please make sure that the Expiry Date in the Payoff sheet is set beyond the current date; otherwise the Black Scholes price will not return a numerical value, since the time to expiry would be negative.\n\nYou can enroll for this free online Python course on Quantra and understand basic terminologies and concepts that will help you trade in options.\n\n### Next Step\n\nIn our next post, we have covered the basics of the Bull Call Spread option strategy, including a bonus Python code and Excel model that show how to implement this strategy using a live example.\n\nDisclaimer: All investments and trading in the stock market involve risk. 
Any decisions to place trades in the financial markets, including trading in stock or options or other financial instruments is a personal decision that should only be made after thorough research, including a personal risk and financial assessment and the engagement of professional assistance to the extent you believe necessary. The trading strategies or related information mentioned in this article is for informational purposes only." ]
[ null, "https://d1rwhvwstyk9gu.cloudfront.net/2015/07/Black-Scholes-Options.jpg", null, "https://d1rwhvwstyk9gu.cloudfront.net/2015/07/Options-Trading-Screenshot-1.png", null, "https://d1rwhvwstyk9gu.cloudfront.net/2015/07/Options-Trading-Screenshot-2-1024x578.png", null, "https://d1rwhvwstyk9gu.cloudfront.net/2017/08/Options-Trading-Screenshot-3.jpg", null, "https://d1rwhvwstyk9gu.cloudfront.net/2017/08/Options-Trading-Screenshot-4.jpg", null, "https://d1rwhvwstyk9gu.cloudfront.net/2015/07/Options-Trading-Screenshot-5.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87130135,"math_prob":0.97328204,"size":5904,"snap":"2019-51-2020-05","text_gpt3_token_len":1351,"char_repetition_ratio":0.14864407,"word_repetition_ratio":0.071428575,"special_character_ratio":0.23357046,"punctuation_ratio":0.06484018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968326,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-11T14:59:48Z\",\"WARC-Record-ID\":\"<urn:uuid:6e3703e1-a593-4e3e-917c-8bd112d6f05b>\",\"Content-Length\":\"146900\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4169a6c4-25fb-41fc-bf01-727384a45c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:3eb2063e-88f0-4bcf-a457-034e18cef51d>\",\"WARC-IP-Address\":\"3.84.167.59\",\"WARC-Target-URI\":\"https://blog.quantinsti.com/options-trading-excel-model/\",\"WARC-Payload-Digest\":\"sha1:JUAAM36CFTJVM2OZIGNJWNSFINHFO3IN\",\"WARC-Block-Digest\":\"sha1:6GTIRPYFOE42EL3ZB3MUYSCIGPB47D6I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540531917.10_warc_CC-MAIN-20191211131640-20191211155640-00071.warc.gz\"}"}
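The call and put formulas quoted in the post above are straightforward to check in code. Below is a minimal Python sketch using only the standard library (`math.erf` supplies the normal CDF, so no SciPy is required); the function and parameter names are illustrative and are not taken from the spreadsheet or its macros:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """N(x): cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S, X, r, sigma, tau, kind="call"):
    """Black-Scholes price of a European option on a non-dividend stock.

    S: spot price, X: strike price, r: risk-free rate,
    sigma: volatility, tau: time to expiry in years.
    """
    d1 = (log(S / X) + (r + sigma ** 2 / 2.0) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    if kind == "call":
        return S * norm_cdf(d1) - X * exp(-r * tau) * norm_cdf(d2)
    return X * exp(-r * tau) * norm_cdf(-d2) - S * norm_cdf(-d1)
```

A quick sanity check is put-call parity, C − P = S − Xe^(−rτ): any correct implementation of the two formulas above must satisfy it for the same inputs.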
https://owndraw.com/3n31b/university-of-auckland-accommodation-a27fed
[ "The number will of course depend both on the formula of the substance and on the weight of the sample. It is an experimentally determined number. So the initial conditions would equal the final conditions. Avogadro’s number is defined as the number of elementary particles (molecules, atoms, compounds, etc.) per mole of a substance. This number is referred to as Avogadro’s number. A pound mole of hydrogen would weigh 1 pound, which would be 454 grams. Calculate the number of water molecules in 0.5 mol of water. Avogadro’s Number is very broad, and the generally agreed value is 6.022 × 10^23. Although this number is a constant, it is determined experimentally, so we use the approximate value 6.022 × 10^23. So you know how many atoms are in a mole. 1. …the constant of proportionality being Avogadro’s number. number of particles = Avogadro constant × amount (mol) Example. A \"Mole\" is similar to the use of \"dozen\" (12) or \"gross\" (144). Avogadro’s number: definition and examples. It is the collection of 6.02 × 10^23 particles. Amedeo Avogadro, born in Italy in 1776 and died in 1856, was an Italian scholar. This lesson plan works to simplify the topic through a hands-on activity, a game, and then creating a mini-hint booklet. 
It is represented by symbol ‘NA’. Avogadro’s law states that – “At the similar and constant physical conditions like Temperature and Pressure, the two different gases like hydrogen and nitrogen in the same volume contain an equal number of molecules.” V 1 n 2 = V 2 n 1. where p is the pressure of a gas, V is volume, T is temperature, and n is number of moles. Chemists use this relationship to easily convert between the measurable unit of a gram and the invisible unit of moles, of atoms or molecules. Because atoms (and molecules) are so tiny, we need a huge number to study them in manageable quantities. It is equal to 6.022×10 … How is Avogadro's Number Used? Number of water molecules = 6.022 × 10 23 × 0.5 Hence, the 6.02 x 1023 number of atoms, molecules or formula units is called Avogadro’s number that is equivalent to one ‘mole’ of respective substance. Liczba Avogadro jest jedną z najważniejszych stałych używanych w chemii. Therefore, the terms for the specific types of fundamental chemical particles, namely \"atoms,\" \"ions,\" and \"molecules,\" are the indicator words that are associated with applying this numerical value in a problem-solving context. Avogadro’s law, also known as Avogadro’s principle or Avogadro’s hypothesis, is a gas law which states that the total number of atoms/molecules of a gas (i.e. Avogadro's number is absolute and constant: there are 6.022×10 23 elementary particles in one mole. The Altmetric Attention Score is a quantitative measure of the attention that a research article has received online. A representative particle is the smallest unit in which a substance naturally exists. 
But if we consider a weight of substance that is the same as its formula (molecular) weight expressed in grams, we have only one number to know: Avogadro's number, 6.022141527 × 10 … In 1 gram of hydrogen there is an approximate 6.022 x 1023 hydrogen atoms, while in 12 grams of carbon-12 … There are plenty of awe-inspiring examples to help imagine this number’s massive scale. Avogadro's number would be larger by a factor of 454. Use tags for VBA and tags for inline. 21 Posts Related to Chemistry Worksheet 3 Avogadros Number Answers. This amount is also called a mole. Avogadro's number definition is - the number 6.022 × 1023 indicating the number of atoms or molecules in a mole of any substance —called also Avogadro number. What is Avogadro’s Number? Examples of Avogadro’s law … Avogadro’s number is a proportion that relates molar mass on an atomic scale to physical mass on a human scale. It is equal to 6.022140857×10 23 . For example, spreading 6.022 × 10 23 oranges over the entire surface of Earth would produce a layer 9 mi into space! avogadro number; avogadros number; 6.02 x 10^23; avogardo's constant; 6.02214076^24; avogadros constant; avagadros number; avogadro; Avogadro's constant; avogadro constant; what does avogadro's constant mean; avogadro's number Avogadro’s number is defined as the number of units in one mole of a substance. A gram mole of hydrogen weighs 1 gram and … Avogadro’s number, number of units in one mole of any substance (defined as its molecular weight in grams), equal to 6.02214076 × 10 23. Example Exercise 9.1 Atomic Mass and Avogadro’s Number. 23. atoms of Avogadro's number is defined based on the number of particles that should be present in a typical chemical measurement. Initially, it was called Avogadro’s number to refer to the number of molecules-grams of oxygen but in 1865, the scientist Johann Josef Loschmidf called the Avogadro’s number, Avogadro constant. 
The number $$6.02 \\times 10^{23}$$ is called Avogadro's number, the number of representative particles in a mole. Avogadro’s law also means the ideal gas constant is the same value for all gases, so: constant = p 1 V 1 /T 1 n 1 = P 2 V 2 /T 2 n 2. For example it is also pretty close to the number of atoms in 1 g of hydrogen. The units may be electrons, atoms, ions, or molecules, depending on the nature of the substance and the character of the reaction (if any).See alsoAvogadro’s law. So one mole of carbon weighs 12 g and one mole of hydrogen weighs 1 g. We know this is true because Avogadro's Law says that the quotient of the volume and the number of moles is constant for an ideal gas. These are a few examples of it: It is not much, but by applying these examples, we can further deduce other kinds of real life applications. Avogadro's Number Indicator Words. The atomic mass of each element is listed below the symbol of the element in the periodic table: Cu = 63.55 amu, Hg = 200.59 amu, S = 32.07 amu, and He = 4.00 amu. This is a perfect opportunity to use Avogadro's Law. Therefore, 6.02 × 10. Solved Examples. The units may be electrons, ions, atoms, or molecules, depending on the character of the reaction and the nature of the substance. Top Tag’s. Jest to liczba cząstek w jednym molu materiału, obliczona na podstawie liczby atomów w dokładnie 12 gramach izotopu węgla-12. The Avogadro's number is 6.022 x 10^23 and this number is what defines 1 mole. (b) is used to determine the number of atoms or molecules in a substance. April 23, 2018: Avogadro Part of Google Summer of Code 2018 March 24, 2017: Support Avogadro through Open Collective December 02, 2016: Avogadro 1.90.0 Released The value of is Avogadro’s Number … (c) equals the number of atoms in 1 gram of 12C. The number of units in one mole of any substance is called Avogadro’s number or Avogadro’s constant. Therefore, Avogadro’s number is a dimensionless value. Understand Avogadro’s Law Examples, Ballons. 
The scale of a number like that is incredibly ridiculous to envision. the amount of gaseous substance) is directly proportional to the volume occupied by the gas at constant temperature and pressure. Let's start with V1 divided by N1 is equal to V2 divided by N2. All of these numbers are used to group a certain amount of stuff (such as electrons, atoms, molecules,etc). The specific number of molecules in one gram-mole of a substance, defined as the molecular weight in grams, is 6.02214076 × 10 23, a quantity called Avogadro’s number, or the Avogadro constant. The mole system allows the scientists to accurately calculate the number of elementary entities (usually atoms or molecules) in a particular mass of a given substance. ** The enormity of Avogadro’s number is difficult to imagine. Avogadro's number, N A, is the fundamental physical constant that links the macroscopic physical world of objects that we can see and feel with the submicroscopic, invisible world of atoms.In theory, N A specifies the exact number of atoms in a palm-sized specimen of a physical element such as carbon or silicon.. Understanding Avogadro's Number is a confusing part of chemistry. Find more information about Crossref citation counts. Citations are the number of other articles citing this article, calculated by Crossref and updated daily. Avogadro's law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) or Avogadro-Ampère's hypothesis is an experimental gas law relating the volume of a gas to the amount of substance of gas present. For the majority of elements, the representative particle is the atom. Avogadro’s number is fundamental to understanding both the makeup of molecules and their interactions and combinations. Density Worksheet Chemistry With Answers. Avogadro's number: (a) equals 6.02 times 10^(23) molecules/mole. 
This relationship means that if we had Avogadro's number, or one mole, of carbon-12 atoms (which has an atomic weight of 12 amu by definition), that sample of carbon-12 would weigh exactly 12 grams." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8632541,"math_prob":0.97216153,"size":9591,"snap":"2021-43-2021-49","text_gpt3_token_len":2409,"char_repetition_ratio":0.18963179,"word_repetition_ratio":0.025577042,"special_character_ratio":0.24251902,"punctuation_ratio":0.11845584,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98720473,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T20:17:17Z\",\"WARC-Record-ID\":\"<urn:uuid:bb4d9533-64e1-4558-917d-2ab1ff3f2118>\",\"Content-Length\":\"21010\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3a13f4f-428f-4fbb-a15e-35a382a6d6f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:12ce61fd-12a0-4c7f-87ee-43b5fef9751d>\",\"WARC-IP-Address\":\"180.76.132.181\",\"WARC-Target-URI\":\"https://owndraw.com/3n31b/university-of-auckland-accommodation-a27fed\",\"WARC-Payload-Digest\":\"sha1:IA7V4EFWCLYDYWX4VGATUUTVUVPFPV4N\",\"WARC-Block-Digest\":\"sha1:62BOVWOLT3MGWCOPLWSWU4VRGPOMEWG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585348.66_warc_CC-MAIN-20211020183354-20211020213354-00417.warc.gz\"}"}
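The two relations used repeatedly in the page above — number of particles = Avogadro constant × amount (mol), and Avogadro's law V1/n1 = V2/n2 — amount to one-liners in code. A minimal Python sketch (the function names are illustrative):

```python
AVOGADRO = 6.022e23  # particles per mole, the approximate value used above

def particles(moles, n_a=AVOGADRO):
    """number of particles = Avogadro constant x amount (mol)."""
    return moles * n_a

def volume_at(v1, n1, n2):
    """Avogadro's law V1/n1 = V2/n2, solved for the volume V2 at a new amount n2."""
    return v1 * n2 / n1
```

With these, the worked example in the text — the number of water molecules in 0.5 mol of water — is `particles(0.5)`, i.e. 6.022 × 10^23 × 0.5 ≈ 3.011 × 10^23 molecules.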
https://codefinity.com/courses/v2/b22d1166-efda-45e8-979e-6c3ecfc566fc/5b86502d-1889-459f-bef9-8b1aec1ae438/30f4723e-1a47-4eda-91e4-7fd1cd10f553
[ "", null, "", null, "Course Content\n\nLinear Regression with Python", null, "", null, "Building The Linear Regression Using Statsmodels\n\nIn the previous chapter, we used a function from NumPy to calculate the parameters.\nNow we will use the class object instead of the function to represent the linear regression. This approach takes more lines of code to find the parameters, but it stores a lot of helpful information inside the object and makes the prediction more straightforward.\n\n## Building a Linear Regression model\n\nIn statsmodels, the `OLS` class can be used to create a linear regression model.", null, "We first need to initialize an `OLS` class object using `sm.OLS(y, X_tilde)`. Then train it using the `fit()` method.\n\nWhich is equivalent to:\n\nNote\n\nThe constructor of the `OLS` class expects a specific array `X_tilde` as an input, which we saw in the Normal Equation. So you need to convert your `X` array to `X_tilde`. This is achievable using the `sm.add_constant()` function.\n\n## Finding parameters\n\nWhen the model is trained, you can easily access the parameters using the `params` attribute.", null, "", null, "## Making the predictions\n\nNew instances can easily be predicted using `predict()` method, but you need to preprocess the input for them too:", null, "", null, "## Getting the summary\n\nAs you probably noticed, using the `OLS` class is not as easy as the `polyfit()` function. But using `OLS` has its benefits. While training, it calculates a lot of statistical information. You can access the information using the `summary()` method.", null, "", null, "That's a lot of statistics. We will discuss the table's most important parts in later sections." ]
[ null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2720%27%20height=%2720%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2724%27%20height=%2724%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://codefinity-content-media.s3.eu-west-1.amazonaws.com/b22d1166-efda-45e8-979e-6c3ecfc566fc/OLS_class.png", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2740%27%20height=%2740%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2740%27%20height=%2740%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg%20xmlns=%27http://www.w3.org/2000/svg%27%20version=%271.1%27%20width=%2740%27%20height=%2740%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75578815,"math_prob":0.91727376,"size":1548,"snap":"2023-14-2023-23","text_gpt3_token_len":332,"char_repetition_ratio":0.11722798,"word_repetition_ratio":0.0,"special_character_ratio":0.19444445,"punctuation_ratio":0.108391605,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959953,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T13:10:48Z\",\"WARC-Record-ID\":\"<urn:uuid:7086aac6-3693-4edc-92f8-6a244a6fabf8>\",\"Content-Length\":\"117366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3de61a19-a46b-4fe6-957d-95ff208e1a01>\",\"WARC-Concurrent-To\":\"<urn:uuid:99fe15ef-466a-4641-b8fd-3d2a5d9b0af8>\",\"WARC-IP-Address\":\"54.154.244.34\",\"WARC-Target-URI\":\"https://codefinity.com/courses/v2/b22d1166-efda-45e8-979e-6c3ecfc566fc/5b86502d-1889-459f-bef9-8b1aec1ae438/30f4723e-1a47-4eda-91e4-7fd1cd10f553\",\"WARC-Payload-Digest\":\"sha1:I2HSXNUBOKAZOEK4AYQWBSKH7RXYJNND\",\"WARC-Block-Digest\":\"sha1:HT2H7RNVAS6VYS5XJAD664NJUNIMEPF6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945282.33_warc_CC-MAIN-20230324113500-20230324143500-00720.warc.gz\"}"}
https://myonlinegrades.com/apjava/quiz/quiz8.php
[ "", null, "Quiz 8- Math   id: 1. True or false: You need to import a package for the Math class to work: true false 2. True or false: The Math methods are static because we dont need to create an instance of the math class to use it. true false 3 . In the following code fragment, k will be a random integer from: ```int k=(int)(Math.random()*6); ``` 0-6 1-6 1-5 0-5   4. Predict the following output ``` System.out.print(Math.abs(-5.4)); ``` 5.4 -5.4 5 -5   5. In the following code fragment, k will be a random integer from: ``` int k=(int)(Math.random()*6 + 10); ``` 0-6 0-10 1-10 10-16 10-15   6. What is returned by the following statement:   ``` System.out.print(Math.pow(3,2)); ```" ]
[ null, "https://myonlinegrades.com/apjava/images/title.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5413556,"math_prob":0.9585352,"size":551,"snap":"2019-51-2020-05","text_gpt3_token_len":175,"char_repetition_ratio":0.11334552,"word_repetition_ratio":0.0,"special_character_ratio":0.3430127,"punctuation_ratio":0.19863014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983316,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T00:14:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c642c0fe-b563-4af9-b13c-76922c7eb42f>\",\"Content-Length\":\"7848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db207943-fce5-408b-9a9f-142f71be72d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:e860372e-3df1-4f08-8f1e-d8e1422f72e6>\",\"WARC-IP-Address\":\"34.228.213.11\",\"WARC-Target-URI\":\"https://myonlinegrades.com/apjava/quiz/quiz8.php\",\"WARC-Payload-Digest\":\"sha1:BAXVWVP5OK4UW7XY54LE7AMEOJRM7B4E\",\"WARC-Block-Digest\":\"sha1:TIN7TTWTJ7P44ZU5NURJTUAEGXLFAZUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250614086.44_warc_CC-MAIN-20200123221108-20200124010108-00464.warc.gz\"}"}
https://proxies123.com/co-combinatorics-combinatorics-and-geometry-underlying-a-refined-pascal-matrix-newton-identities/
[ "# co.combinatorics – Combinatorics and geometry underlying a refined Pascal matrix/Newton identities\n\nThe partition polynomials of OEIS A263633 give the coefficients of the power series/o.g.f of the multiplicative inverse (reciprocal) of a power series/o.g.f. and so give the Newton identities for transforming between complete homogeneous symmetric polynomials/functions and elementary symmetric polynomials/functions. Certain Koszul duals are related to this.\n\nThe algebraic combinatorics of the complementary reciprocal of a Taylor series/e.g.f. is governed by the antipode/refined Euler characteristic classes of the permutahedra or, equivalently, by surjective mappings, so I have an indirect geometric combinatorial interpretation of ‘scaled’ versions of the Newton identities, but I’m looking for more direct interpretations.\n\nWhat combinatoric/geometric structures are enumerated by the integer coefficients of these partition polynomials for conversion of an o.g.f. into a reciprocal o.g.f.?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86306405,"math_prob":0.90834236,"size":896,"snap":"2021-04-2021-17","text_gpt3_token_len":180,"char_repetition_ratio":0.11883408,"word_repetition_ratio":0.0,"special_character_ratio":0.17410715,"punctuation_ratio":0.14012739,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9689884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-22T15:10:28Z\",\"WARC-Record-ID\":\"<urn:uuid:15929d65-6e49-4f76-b946-ce414966a4d6>\",\"Content-Length\":\"25031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98ee93ab-7950-4024-80bc-6b2fd2a26832>\",\"WARC-Concurrent-To\":\"<urn:uuid:f80bf501-a608-40e7-b51d-eb7837992348>\",\"WARC-IP-Address\":\"173.212.203.156\",\"WARC-Target-URI\":\"https://proxies123.com/co-combinatorics-combinatorics-and-geometry-underlying-a-refined-pascal-matrix-newton-identities/\",\"WARC-Payload-Digest\":\"sha1:2PMFLHXRGRKQGIXP2RZZGKWULXJTRDFR\",\"WARC-Block-Digest\":\"sha1:QFIQPNQAUI4UJ7FETZL7WVIPNZOMZIJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039610090.97_warc_CC-MAIN-20210422130245-20210422160245-00620.warc.gz\"}"}
https://josephwoolf.com/bejeweled-1-ai-part-2-enabling-ai-to-make-moves/
[ "In my previous post, I was able to get our code to get past the loading screen and get the board information.  However, a program that can only grab the board without acting on the information does no good.  In this post, we’ll be adding the basic mechanisms for our AI to act on the board information.  Please note that more intelligent behavior won’t be added in this post.\n\n### Analyze Potential Moves\n\nNow that our program can return us the board information, we need to figure out where our AI can make a move.\n\nWe represented our board as an 8×8 array.  Using that information, we can come with a diagram of the following valid moves:\n\nFrom the diagram, there are a maximum of six valid moves that you can make if the same pieces are adjacent to each other.  In the case that the same pieces are a space apart, only two moves can be made.  These moves are the basis for more advanced matches, like 4-piece and T-shape.\n\n#### Code Base\n\nNow that we have defined a valid move, let’s write up the code to detect valid moves.  I split up horizontal and vertical search into different methods for better readability.  
A valid move is represented by the text “({x},{y}){d}”.\n\nFor the case of horizontal search:\n\n```def horizontalSearch(self, board):\n    moves = []\n    for y in range(8):\n        occur = 0\n        curColor = \"\"\n        for x in range(8):\n            if board[y][x] != curColor:\n                curColor = board[y][x]\n                if x+2 < 8 and board[y][x+2] == curColor:\n                    if y-1 >= 0 and board[y-1][x+1] == curColor:\n                        moves.append(\"({},{}){}\".format(x+1, y, \"U\"))\n                    if y+1 < 8 and board[y+1][x+1] == curColor:\n                        moves.append(\"({},{}){}\".format(x+1, y, \"D\"))\n            else:\n                if x-3 >= 0 and board[y][x-3] == curColor:\n                    moves.append(\"({},{}){}\".format(x-2, y, \"L\"))\n                if x+2 < 8 and board[y][x+2] == curColor:\n                    moves.append(\"({},{}){}\".format(x+1, y, \"R\"))\n                if y-1 >= 0 and x-2 >= 0 and board[y-1][x-2] == curColor:\n                    moves.append(\"({},{}){}\".format(x-2, y, \"U\"))\n                if y-1 >= 0 and x+1 < 8 and board[y-1][x+1] == curColor:\n                    moves.append(\"({},{}){}\".format(x+1, y, \"U\"))\n                if y+1 < 8 and x-2 >= 0 and board[y+1][x-2] == curColor:\n                    moves.append(\"({},{}){}\".format(x-2, y, \"D\"))\n                if y+1 < 8 and x+1 < 8 and board[y+1][x+1] == curColor:\n                    moves.append(\"({},{}){}\".format(x+1, y, \"D\"))\n                curColor = \"\"\n    return moves```\n\nFor the case of vertical search:\n\n```def verticalSearch(self, board):\n    moves = []\n    for x in range(8):\n        occur = 0\n        curColor = \"\"\n        for y in range(8):\n            if board[y][x] != curColor:\n                curColor = board[y][x]\n                if y+2 < 8 and board[y+2][x] == curColor:\n                    if x-1 >= 0 and board[y+1][x-1] == curColor:\n                        moves.append(\"({},{}){}\".format(x, y+1, \"L\"))\n                    if x+1 < 8 and board[y+1][x+1] == curColor:\n                        moves.append(\"({},{}){}\".format(x, y+1, \"R\"))\n            else:\n                if y-3 >= 0 and board[y-3][x] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y-2, \"U\"))\n                if y+2 < 8 and board[y+2][x] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y+1, \"D\"))\n                if y-2 >= 0 and x-1 >= 0 and board[y-2][x-1] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y-2, \"L\"))\n                if y-2 >= 0 and x+1 < 8 and board[y-2][x+1] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y-2, \"R\"))\n                if y+1 < 8 and x-1 >= 0 and board[y+1][x-1] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y+1, \"L\"))\n                if y+1 < 8 and x+1 < 8 and board[y+1][x+1] == curColor:\n                    moves.append(\"({},{}){}\".format(x, y+1, \"R\"))\n                curColor = \"\"\n    return moves```\n\nNote that our code only does a search for 3-piece matches.  In the future, I’ll be adding methods to detect more advanced matches.\n\n### A Note on Image Processing\n\nIn the first post, there weren’t a lot of opportunities for our program to fail at detecting the game board.  Once I started working on implementing the basic AI mechanics, any poorly written code made it prone to execution issues.  As you go through the post, I’ll point out where our program can fail due to a modified state.\n\n### Moving the Prompt\n\nWhen starting a new game up, you get a large prompt that reads:\n\nSwap adjacent gems to create rows of 3 or more!\n\nYou also get smaller prompts telling you where to make the first swap.  However, you don’t have to swap where they’re indicating.  Instead, you can choose any location.\n\nThe issue is that while we can grab the correct pieces from the large prompt, we can’t swap pieces without moving the prompt.  Once the prompt is moved, it’ll mess up the rest of our program.  This is because we expect to find the square border that surrounds the game pieces.  Since the prompt overlaps the square border, the program cannot find the border and thus cannot get the pieces.\n\nTo move the prompt, I had to extend the getPlayingFieldInfo() method to add the mechanism to detect and move it.  After the prompt is moved, we need to take a screenshot again of the new game window.  
From there, we should be able to make a move without interference.\n\nThe following code snippet will move the prompt:\n\n```def _moveDialog(self, img):\n    cropped = img[10:200, 10:300]\n    filteredImg = cv2.inRange(cropped, np.array([56, 67, 154]), np.array([96, 134, 167]))\n    uniques, counts = np.unique(filteredImg, return_counts=True)\n    counts = dict(zip(uniques, counts))\n    if 255 in counts:\n        (winX, winY, winW, winH) = self.getWindowDimensions()\n        (areaX, areaY, w, h) = self._getPlayingFieldCoord(self.getWindowShot())\n        winX += areaX\n        winY += areaY\n        pyautogui.moveTo(winX + 300, winY + 90)\n        pyautogui.drag(0, w, duration=.5)\n        time.sleep(.25)\n\ndef getPlayingFieldInfo(self):\n    # Responsible for getting the information\n    (x, y, _, _) = self.getWindowDimensions()\n    pyautogui.moveTo(x, y)\n    croppedImage = None\n    if self.moves == 0:\n        img = self.getWindowShot()\n        if self.dialog == False:\n            (self.x, self.y, self.w, self.h) = self._getPlayingFieldCoord(img)\n            self._moveDialog(img[self.y:self.y+self.h, self.x:self.x+self.w])\n            self.dialog = True\n            img = self.getWindowShot()\n        croppedImage = cv2.cvtColor(img[self.y:self.y+self.h, self.x:self.x+self.w], cv2.COLOR_BGR2RGB)\n    else:\n        croppedImage = self.getPlayingField()\n...```\n\nThe _moveDialog() method detects the prompt and moves it to the bottom of the screen.  In addition, I added some class variables called x, y, w, h.  These will be needed to detect the game board when we make the first move.  Without capturing these values, the program won’t click at the right coordinates since the board wouldn’t be found.\n\n### Telling the Computer How to Move\n\nNow that we can get the list of moves, we need to tell the environment where to swap the pieces.  While working on this piece, the program was prone to moving the mouse to random locations.\n\nWhenever a match was made, a number would show up indicating the amount of points gained from the move.  
If this number overlapped our square border, it would prevent us from getting the border coordinates.  This would, in turn, prevent us from correctly locating where to make a move.  As a result, our program would move the mouse outside the window and click on other applications.\n\nTo handle the above behavior, the following code would allow us to make a move while minimizing the chances of lost focus:\n\n```def makeMove(self, x, y, direction):\n    (winX, winY, winW, winH) = self.getWindowDimensions()\n    img = None\n    areaX = 0\n    areaY = 0\n    w = 0\n    h = 0\n    while areaX == 0 or areaY == 0:\n        # Sometimes, a mis-fire occurs when trying to grab the field\n        # coordinates. As a result, we should take a shot as many\n        # times as needed\n        img = self.getWindowShot()\n        if self.moves == 0:\n            # Only needed once. Additional moves won't execute\n            (areaX, areaY, w, h) = (self.x, self.y, self.w, self.h)\n        else:\n            (areaX, areaY, w, h) = self._getPlayingFieldCoord(img)\n    self.moves += 1\n    winX += areaX\n    winY += areaY\n    moveX = winX + 12 + (52*(x)) + 26\n    moveY = winY + 12 + (52*(y)) + 26\n    pyautogui.moveTo(moveX, moveY)\n    if direction == \"U\":\n        pyautogui.drag(0, -50)\n    elif direction == \"D\":\n        pyautogui.drag(0, 50)\n    elif direction == \"R\":\n        pyautogui.drag(50, 0)\n    else:\n        pyautogui.drag(-50, 0)```\n\nNote that our class variables, x, y, w, and h, are present.  Since the prompt interferes with the ability to get the square border coordinates, we need to save the coordinate information for the first move.  We also have to make sure that we update our screen image until we can clearly get the square border coordinates.  Once that’s done, we can finally make a move.\n\n### Writing a Basic Agent\n\nNow that we’ve laid the groundwork, we need a script to allow us to launch the game and play a full game.\n\nWhile the board state is static when the player is making a move, once a move is made, a cascade can occur.  While a cascade occurs, the player cannot make any additional moves.  
Since the duration varies, there’s no reliable way to check whether the player can make a move.  To compensate for this somewhat, our board piece will return “N/A” if it can’t be identified.  We have a method in our rule-based AI class to check whether there are any “N/A” in the board.  If so, we get the board state again.\n\nWe also check whether the board state is the same as in the last check.  Once our board state is persistent, we can get the available moves and actually make a move.\n\nThe following Python script will suffice:\n\n```env = Jewel1Env()\nenv.launchGame()\nenv.handleTitleScreen()\ntime.sleep(3)\nagent = Jewel1RB()\nboard = \"\"\nimageNumber = 0\nwhile True:\n    canMakeMove = False\n    previousBoard = \"1\"\n    while not canMakeMove and previousBoard != board:\n        previousBoard = board\n        board = env.getPlayingFieldInfo()\n        print(board)\n        canMakeMove = agent.isBoardAvailable(board)\n    moves = agent.processBoard(board)\n    if len(moves) == 0:\n        continue\n    theMove = random.choice(moves)\n    #cv2.imwrite(\"moves/{}.png\".format(imageNumber), env.getPlayingField())\n    print(\"Chose move #{}: {}\".format(imageNumber, theMove))\n    imageNumber += 1\n    time.sleep(.25)\n    env.makeMove(int(theMove[1]), int(theMove[3]), theMove[-1])\n    time.sleep(1)```\n\nThe script isn’t perfect, of course.  There are times when our program makes an incorrect swap.  I want to say this is due to trying to make a move based on a stale board state." ]
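Since every valid move is encoded as the text “({x},{y}){d}”, the agent has to pull the coordinates back out before calling makeMove().  This small helper is my own sketch, not from the original post, and shows one robust way to do that parsing:

```python
import re

def parseMove(move):
    """Split a move string like '(2,3)U' into (x, y, direction)."""
    m = re.fullmatch(r"\((\d+),(\d+)\)([UDLR])", move)
    if m is None:
        raise ValueError("not a valid move string: " + move)
    return int(m.group(1)), int(m.group(2)), m.group(3)

print(parseMove("(2,3)U"))   # (2, 3, 'U')
print(parseMove("(7,0)L"))   # (7, 0, 'L')
```

Indexing characters directly (theMove[1], theMove[3]) works too, but only while coordinates stay single digits; a regex keeps the parsing explicit.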
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80419207,"math_prob":0.98968136,"size":9999,"snap":"2020-45-2020-50","text_gpt3_token_len":2763,"char_repetition_ratio":0.14757378,"word_repetition_ratio":0.06913891,"special_character_ratio":0.30183017,"punctuation_ratio":0.17832647,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9700251,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T11:37:54Z\",\"WARC-Record-ID\":\"<urn:uuid:123b56b6-0857-4645-890f-5bebf07c44f0>\",\"Content-Length\":\"55698\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52b22bcd-caff-4183-8f9b-8dcd8fa3e56c>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ed8de15-0ed2-43c7-ba07-f00e317ac44f>\",\"WARC-IP-Address\":\"23.235.209.125\",\"WARC-Target-URI\":\"https://josephwoolf.com/bejeweled-1-ai-part-2-enabling-ai-to-make-moves/\",\"WARC-Payload-Digest\":\"sha1:SEQMWLNVDPHD2GW5267YL2VQCETXRD5E\",\"WARC-Block-Digest\":\"sha1:QNTQCTP3WB3CXJLHS57CG7LIMD5LUI7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141727627.70_warc_CC-MAIN-20201203094119-20201203124119-00310.warc.gz\"}"}
https://medical-dictionary.thefreedictionary.com/solubility+coefficient
[ "# solubility\n\n(redirected from solubility coefficient)\nAlso found in: Dictionary, Thesaurus, Encyclopedia.\n\n## solubility\n\n[sol″u-bil´ĭ-te]\nthe quality of being soluble.\n\n## sol·u·bil·i·ty\n\n(sol'yū-bil'i-tē), Avoid the misspelling/mispronunciation soluability.\nThe property of being soluble.\n\n## sol·u·bil·i·ty\n\n(solyū-bili-tē)\nThe property of being soluble.\n\n## solubility\n\nthe amount of a substance that will dissolve in a given amount of another substance.\n\n## sol·u·bil·i·ty\n\n(solyū-bili-tē)\nThe property of being soluble.\nReferences in periodicals archive ?\nIn two-phase mixtures, such as 50% of octane and 50% of water, the ozone solubility coefficients in both phases were close to each other, and for the whole range of the inlet ozone concentration in gas were equal to 0.25 [+ or -] 0.11 for water ([[alpha].sub.W]) and to 0.20 [+ or -] 0.10 for octane ([[alpha].sub.OCT]).\nTable 1: Values of diameters and force constants ([epsilon]/k) of [C.sub.1]-[C.sub.4] hydrocarbon gases for the calculation of a solubility coefficient in PVTMS.\nThe apparent gas diffusion and solubility coefficients of the UV-cured membranes as a function of UV irradiation time are presented in Fig.\nThe solubility coefficients, S, were then calculated from\nSince permeation is a solution-diffusion process, the permeability, P, is the product of the diffusion and solubility coefficients and can be expressed by the following equation:\nSolubility Coefficient [K.sub.H] = 3.61 x [10.sup.-5] mole [multiplied by] [N.sup.-1] [multiplied by] [m.sup.-1]\nEquation 3 can be solved using the Newton-Raphson technique, and it shows that the equilibrium bubble size depends neither on the hydrodynamic nor the diffusional aspects of the bubble growth process, but rather on the initial conditions, temperature, final pressure, surface tension, gas molecular weight, and solubility coefficient. 
For example, a polymer/gas system with the following parameters: P_sat = p_g0 = 9.9 MPa; P_∞ = 0.1 MPa; K = 0.164 cm^3 (STP)/g.\nAs the simplest practical cases of m = 2 and m = 3, with stepwise distribution of both diffusion coefficients and solubility coefficients at the boundary between respective layers, the diffusion properties in the transient state are analyzed in detail.\nIn fact, while in certain cases researchers ascribed the low permeability values to surprisingly low solubility coefficients (14-18), in one case the low value of permeability was attributed to a low diffusion coefficient (19).\nIt uses an automated constant-volume sorption technique that provides the effective diffusion and solubility coefficients independently in a single test.\nOn the other hand, the diffusion coefficients of gases in membranes often change less than the solubility coefficients. Therefore, more condensable gases are more permeable through the PDMS membrane, which is a rubbery polymer." ]
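The solution-diffusion relation quoted in the excerpts — permeability as the product of the diffusion and solubility coefficients — is a one-line computation; the numbers below are hypothetical:

```python
def permeability(diffusion, solubility):
    """Solution-diffusion model: P = D * S (units must be chosen consistently)."""
    return diffusion * solubility

# Hypothetical membrane: D = 2.0e-6 cm^2/s, S = 0.05 cm^3(STP)/(cm^3 cmHg)
P = permeability(2.0e-6, 0.05)
print(P)  # about 1e-7 in the corresponding permeability units
```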
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8768183,"math_prob":0.9000208,"size":2402,"snap":"2020-10-2020-16","text_gpt3_token_len":584,"char_repetition_ratio":0.176814,"word_repetition_ratio":0.0,"special_character_ratio":0.23105745,"punctuation_ratio":0.15434782,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99398744,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T19:38:50Z\",\"WARC-Record-ID\":\"<urn:uuid:ae25fde7-2e73-4a2b-8d69-54d1d7b455ba>\",\"Content-Length\":\"48019\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:948cfac3-ad74-4863-bfe9-af1175504f74>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cd57ed5-6d1e-4cf7-9d9e-9e7900fbe5b8>\",\"WARC-IP-Address\":\"209.160.67.5\",\"WARC-Target-URI\":\"https://medical-dictionary.thefreedictionary.com/solubility+coefficient\",\"WARC-Payload-Digest\":\"sha1:PW4YQCZRVQ3G5ZRLW7SP6BWZYUW6ZZRS\",\"WARC-Block-Digest\":\"sha1:VOD3YIME6V4IEZHLN7NUZHD2QWTH6LJL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147628.27_warc_CC-MAIN-20200228170007-20200228200007-00269.warc.gz\"}"}
https://online.stat.psu.edu/stat462/node/158/
[ "# 7.7 - Polynomial Regression\n\nIn our earlier discussions on multiple linear regression, we have outlined ways to check assumptions of linearity by looking for curvature in various plots.\n\n• For instance, we look at the scatterplot of the residuals versus the fitted values.\n• We also look at a scatterplot of the residuals versus each predictor.\n\nSometimes, a plot of the residuals versus a predictor may suggest there is a nonlinear relationship. One way to try to account for such a relationship is through a polynomial regression model. Such a model for a single predictor, X, is:\n\n$\\begin{equation}\\label{poly} Y=\\beta _{0}+\\beta _{1}X +\\beta_{2}X^{2}+\\ldots+\\beta_{h}X^{h}+\\epsilon, \\end{equation}$\n\nwhere h is called the degree of the polynomial. For lower degrees, the relationship has a specific name (i.e., h = 2 is called quadratic, h = 3 is called cubic, h = 4 is called quartic, and so on). Although this model allows for a nonlinear relationship between Y and X, polynomial regression is still considered linear regression since it is linear in the regression coefficients, $\\beta_1, \\beta_2, ..., \\beta_h$!\n\nIn order to estimate the equation above, we would only need the response variable (Y) and the predictor variable (X). However, polynomial regression models may have other predictor variables in them as well, which could lead to interaction terms. So as you can see, the basic equation for a polynomial regression model above is a relatively simple model, but you can imagine how the model can grow depending on your situation!\n\nFor the most part, we implement the same analysis procedures as done in multiple linear regression. 
To see how this fits into the multiple linear regression framework, let us consider a very simple data set of size n = 50 that was simulated:", null, "The data was generated from the quadratic model\n\n$\\begin{equation} y_{i}=5+12x_{i}-3x_{i}^{2}+\\epsilon_{i}, \\end{equation}$\n\nwhere the $\\epsilon_{i}s$ are assumed to be normally distributed with mean 0 and variance 2. A scatterplot of the data along with the fitted simple linear regression line is given below (a). As you can see, a linear regression line is not a reasonable fit to the data.", null, "", null, "Residual plots of this linear regression analysis are also provided in the plot above. Notice in the residuals versus fits plot (b) how there is obvious curvature and it does not show uniform randomness as we have seen before. The histogram (c) appears heavily left-skewed and does not show the ideal bell-shape for normality. Furthermore, the normal probability plot (d) seems to deviate from a straight line and curves down at the extreme percentiles. These plots alone suggest that there is something wrong with the model being used and indicate that a higher-order model may be needed.\n\nThe matrices for the second-degree polynomial model are:\n\n$\\textbf{Y}=\\left( \\begin{array}{c} y_{1} \\\\ y_{2} \\\\ \\vdots \\\\ y_{50} \\\\ \\end{array} \\right)$, $\\textbf{X}=\\left( \\begin{array}{cccc} 1 & x_{1} & x_{1}^{2} \\\\ 1 & x_{2} & x_{2}^{2} \\\\ \\vdots & \\vdots & \\vdots \\\\ 1 & x_{50} & x_{50}^{2} \\\\ \\end{array} \\right)$, $\\beta=\\left( \\begin{array}{c} \\beta_{0} \\\\ \\beta_{1} \\\\ \\beta_{2} \\\\ \\end{array} \\right)$, $\\epsilon=\\left( \\begin{array}{c} \\epsilon_{1} \\\\ \\epsilon_{2} \\\\ \\vdots \\\\ \\epsilon_{50} \\\\ \\end{array} \\right)$\n\nwhere the entries in Y and X would consist of the raw data. 
So as you can see, we are in a setting where the analysis techniques used in multiple linear regression are applicable.\n\nSome general guidelines to keep in mind when estimating a polynomial regression model are:\n\n• The fitted model is more reliable when it is built on a larger sample size n.\n• Do not extrapolate beyond the limits of your observed values, particularly when the polynomial function has a pronounced curve such that an extrapolation produces meaningless results beyond the scope of the model.\n• Consider how large the values of the predictor(s) will be when incorporating higher-degree terms, as this may cause numerical overflow for the statistical software being used.\n• Do not go strictly by low p-values to incorporate a higher-degree term, but rather just use these to support your model only if the resulting residual plots look reasonable. This is an example of a situation where you need to determine \"practical significance\" versus \"statistical significance\".\n• In general, as is standard practice throughout regression modeling, your models should adhere to the hierarchy principle, which says that if your model includes $X^{h}$ and $X^{h}$ is shown to be a statistically significant predictor of Y, then your model should also include each $X^{j}$ for all $j<h$, whether or not the coefficients for these lower-order terms are significant. In other words, when fitting polynomial regression functions, fit a higher-order model and then explore whether a lower-order (simpler) model is adequate. 
For example, suppose we formulate the following cubic polynomial regression function:\n\n$y_i=\\beta_{0}+\\beta_{1}x_{i}+\\beta_{2}x_{i}^{2}+\\beta_{3}x_{i}^{3}+\\epsilon_i$\n\nThen, to see if the simpler first-order model (a \"straight line\") is adequate in describing the trend in the data, we could test the null hypothesis:\n\n$H_0: \\beta_{2}=\\beta_{3}=0$\n\nBut then … if a polynomial term of a given order is retained, then all related lower-order terms are also retained. That is, if a quadratic term ($x^{2}$) is deemed significant, then it is standard practice to use this regression function:\n\n$\\mu_Y=\\beta_{0}+\\beta_{1}x_{i}+\\beta_{2}x_{i}^{2}$\n\nand not this one:\n\n$\\mu_Y=\\beta_{0}+\\beta_{2}x_{i}^{2}$\n\nwhether or not the linear term ($x$) is significant. That is, we always fit the terms of a polynomial model in a hierarchical manner." ]
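The quadratic simulation described above is easy to reproduce. This sketch draws its own random sample (the original data and x-range are not given, so those are assumptions) and recovers the coefficients by least squares on the design matrix X with columns 1, x, x²:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
x = rng.uniform(-1, 3, size=n)               # assumed x-range for illustration
eps = rng.normal(0, np.sqrt(2), size=n)      # errors with variance 2, as in the text
y = 5 + 12 * x - 3 * x**2 + eps              # the stated quadratic model

# Design matrix [1, x, x^2]: the model is still *linear* in the betas,
# so ordinary multiple-regression least squares applies.
X = np.column_stack([np.ones(n), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)    # close to [5, 12, -3]
```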
[ null, "https://online.stat.psu.edu/stat462/sites/onlinecourses.science.psu.edu.stat462/files/07transform/17.1_table_01/index.png", null, "https://online.stat.psu.edu/stat462/sites/onlinecourses.science.psu.edu.stat462/files/07transform/17.1_plot_01/index.png", null, "https://online.stat.psu.edu/stat462/sites/onlinecourses.science.psu.edu.stat462/files/07transform/17.1_plot_02/index.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8809901,"math_prob":0.9982238,"size":5723,"snap":"2023-40-2023-50","text_gpt3_token_len":1415,"char_repetition_ratio":0.1295681,"word_repetition_ratio":0.0045402953,"special_character_ratio":0.26175082,"punctuation_ratio":0.08773585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989927,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T22:50:26Z\",\"WARC-Record-ID\":\"<urn:uuid:ff094c0d-42e0-47af-b672-6ccf537e15fa>\",\"Content-Length\":\"23654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db3e3bf6-17cd-46a0-91a4-f0d422b373a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b81c991-e754-48ef-80d2-c3ab0a9cda85>\",\"WARC-IP-Address\":\"128.118.15.226\",\"WARC-Target-URI\":\"https://online.stat.psu.edu/stat462/node/158/\",\"WARC-Payload-Digest\":\"sha1:EQ7VZVXTQ7O4UF3G4TSHDQQ6TS6V3PIV\",\"WARC-Block-Digest\":\"sha1:IZH6F66X3AY4X4ZADFAVOJZ4IWUOBSTE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102697.89_warc_CC-MAIN-20231210221943-20231211011943-00757.warc.gz\"}"}
https://tex.stackexchange.com/questions/461256/what-is-the-difference-between-storing-table-data-by-filecontents-and-pgfplotsta
# What is the difference between storing table data by filecontents and pgfplotstableread?

In this MWE, the table data loaded from poles.dat is successfully parsed as shown in the left plot.

However, storing the same data inside a macro using \pgfplotstableread doesn't result in the same expected output.

Why is there a difference between both ways? And how can the second approach of \pgfplotstableread be made to work like the first one of a *.dat file?

\documentclass[border=1cm]{standalone}

\usepackage{pgfplots,pgfplotstable}

\pgfplotsset{
poles/.style= { only marks, mark=x, mark size = 1ex, thick},
point meta = explicit symbolic,
visualization depends on={\thisrow{angle} \as \myangle},
visualization depends on={value \thisrow{label} \as \mylabel},
Label Style/.style args = {#1}{
nodes near coords,
every node near coord/.style = %
{
anchor=south, label={[#1]\myangle:{\mylabel}}
},
}
}

\usepackage{filecontents}
\begin{filecontents*}{poles.dat}
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
\end{filecontents*}

\pgfplotstableread{
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
}\mytable

\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {poles.dat};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {\mytable};
\end{axis}
\end{tikzpicture}
\end{document}

• I guess that the remarks on the bottom of p. 58 of the pgfplots manual may be of interest here. – user121799 Nov 22 '18 at 17:45
• @marmot, I see. So, is there any workaround to store table data inside a macro to use it later the same way I do with that physical *.dat file? – Diaa Nov 22 '18 at 19:52
• Yes, you can do that. I believe that this is one of the craziest things I ever did, but yes, it does work. Please let me know if you want me to spell this out. – user121799 Nov 22 '18 at 19:56

I found the following in the pgfplots source code.

% The normal implementation of \thisrow is not accessable here. And the
% worst is: error messages are impossible either because they are
% not executed... we resort to the associated math functions:
\def\thisrow##1{thisrow("##1")}% let us hope that math parsing is active!

Note that this solution always uses \mytable. The key was the use of \coordindex.

\documentclass[border=1cm]{standalone}

\usepackage{pgfplotstable,pgfplots}

\pgfplotsset{
poles/.style= { only marks, mark=x, mark size = 1ex, thick},
point meta = explicit symbolic,
visualization depends on={\thisrow{angle} \as \myangle},
%visualization depends on={value \thisrow{label} \as \mylabel},
Label Style/.style args = {#1}{
nodes near coords,
every node near coord/.style = %
{
anchor=south, label={[#1]\myangle:{\pgfplotstablegetelem{\coordindex}{label}\of{\mytable}\pgfplotsretval}}
},
}
}
\newcommand{\myotherlabel}{\pgfmathparse{\thisrow{label}}\pgfmathresult}

\usepackage{filecontents}
\begin{filecontents*}{poles.dat}
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
\end{filecontents*}

\pgfplotstableread{
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
}\mytable

\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {poles.dat};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {\mytable};
\end{axis}
\end{tikzpicture}
\end{document}

• I am sorry, but could you please show me how this workaround can be implemented in my MWE? Thanks – Diaa Nov 22 '18 at 19:50
• It took a while. Done now. – John Kormylo Nov 23 '18 at 16:09
• Short and perfect workaround. Is it possible to make the table macro name (i.e. \mytable) variable and dependent of that assigned by \pgfplotstableread? – Diaa Nov 23 '18 at 18:17
• Additionally, does your answer imply that the answer to my main question (in the title) is there is no difference between storing table data by filecontents and pgfplotstableread? It would be great if you make this answer more informative and comprehensive for the future readers by addressing the differences between both approaches. – Diaa Nov 23 '18 at 18:27
• Obviously pgfplots handles files differently than macros. The code I found only applies when using macros. A totally different set of definitions are used when reading files directly. Note that my solution will not work if \mytable was not created. – John Kormylo Nov 24 '18 at 13:23

Here is a crazy workaround: write the table to a file and read it again. (Of course, in the present example this does not make sense.) However, there are situations in which it can make sense, e.g. when you create the table through macros or when you really want to skip some rows, like here.

\documentclass[border=1cm]{standalone}

\usepackage{pgfplots,pgfplotstable}

\pgfplotsset{
poles/.style= { only marks, mark=x, mark size = 1ex, thick},
point meta = explicit symbolic,
visualization depends on={\thisrow{angle} \as \myangle},
visualization depends on={value \thisrow{label} \as \mylabel},
Label Style/.style args = {#1}{
nodes near coords,
every node near coord/.style = %
{
anchor=south, label={[#1]\myangle:{\mylabel}}
},
}
}

% from https://tex.stackexchange.com/a/445369/121799
\pgfplotstablegetelem{#2}{#3}\of{#1}%
\let#4\pgfplotsretval
}
% based on https://tex.stackexchange.com/a/307032/121799
% and https://tex.stackexchange.com/a/451326/121799
\newcommand{\GetRow}{
\pgfplotstablegetcolsof{\mytable}
\pgfmathtruncatemacro{\colnumber}{\pgfplotsretval-1}
\foreach \XX in {0,...,\colnumber}
{%
\ifnum\XX=0%
\xdef#2{{\tmp}}%
\else%
\xdef#2{#2,{\tmp}}%
\fi%
}
}

\usepackage{filecontents}
\begin{filecontents*}{poles.dat}
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
\end{filecontents*}

\pgfplotstableread{
Re Im label angle
-2 2 (-2,2) 270
-2 -2 (-2,-2) 90
-4 0 (-4,0) 60
}\mytable

\pgfplotstablegetrowsof{\mytable}%
\pgfmathtruncatemacro{\rownum}{\pgfplotsretval-1}%
\pgfplotstablegetcolsof{\mytable}%
\pgfmathtruncatemacro{\colnum}{\pgfplotsretval-1}%
\foreach \X in {0,...,\colnum}%
{\pgfplotstablegetcolumnnamebyindex{\X}\of{\mytable}\to\pgfplotsretval%
\ifnum\X=0%
\xdef\tmp{\pgfplotsretval}%
\else%
\xdef\tmp{\tmp,\pgfplotsretval}%
\fi}
\newwrite\myoutput% from https://tex.stackexchange.com/a/290058/121799
\immediate\openout\myoutput=\jobname-tmp.dat%
\foreach \X in {0,...,\rownum}% rows
{\GetRow{\X}{\myrow}%
\immediate\write\myoutput{\myrow}}%
\immediate\closeout\myoutput%

\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {poles.dat};
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}
\addplot[Label Style={blue,fill = gray!20},poles] table {\jobname-tmp.dat};
\end{axis}
\end{tikzpicture}
\end{document}
https://careercup.com/question?id=5454247129776128
## Facebook Interview Question for SDE1s

Country: United States

1 of 1 vote

Python solution :
Time complexity : O(n)

``````import sys

def smallestSubarray(nums, target):
    curr_sum, start, min_len = 0, 0, sys.maxint
    for idx, num in enumerate(nums):
        if curr_sum + num >= target:
            curr_sum += num
            while curr_sum >= target and start <= idx:
                min_len = min(min_len, idx - start + 1)
                curr_sum, start = curr_sum - nums[start], start + 1
        else:
            curr_sum += num
    return min_len if min_len != sys.maxint else -1

def main():
    # smallestSubarray sum
    nums, target = [1,2,3,4,5,6,7,8,9], 45
    print('Smallest subarray {} '.format(smallestSubarray(nums, target)))``````

0 of 0 votes

@sachin, the moment you sort, the notion of subarray is gone (unless you record the original index somewhere). Your code will work if the question asked for the minimum "subset" instead of subarray...

0 of 0 votes

@DyingLizard. [4,4,2,-6,4,10,2],16 produces 6, instead of 3.

0 of 2 votes

``````// k is the targetSum
int minSubArrayLen(int *a, int n, int k) {
    int minLen = 9999;
    int curSum = 0;
    int len = 0, i = 0;
    while (i < n) {
        if (a[i] > k) {
            curSum = 0;
            minLen = 1;
            break;
        }
        else if (a[i] < 0) {
            /*
             * curSum is going to decrease, so
             * it will never be >= k. Reset the counters and start
             */
            curSum = 0;
            len = 0;
        }
        else if (curSum + a[i] >= k) {
            /*
             * including this number makes subarray sum >= k
             * store the subarray len and update minLen
             */
            curSum = 0;
            minLen = std::min(minLen, len + 1);
            len = 0;
        } else {
            len++;
            curSum = curSum + a[i];
        }
        i++;
    }
    return minLen;
}``````

0 of 0 votes

@charan. A couple of test cases the code produces a wrong result for:
{3, 8, 8}, k=16 - result is 3, should be 2
{2, 7, 2, -3, 9, -7, -6, -5}, k=12 - result is 9999, should be 4

0 of 0 votes

Not sure that it is actually O(n), but it is a solution.

``````def subs(l, x, s=0, e=0, curr_sum=0):
    if s == len(l):
        return (0xffff, -1)
    if e == len(l):
        if curr_sum >= x:
            return min((e-s, s), subs(l, x, s+1, e, curr_sum - l[s]), key=lambda n: n)
        # In order to handle negative values
        return subs(l, x, s+1, e, curr_sum - l[s])

    if curr_sum >= x:
        return min((e-s, s), subs(l, x, s+1, e+1, curr_sum - l[s] + l[e]), subs(l, x, s+1, e, curr_sum - l[s]), key=lambda n: n)

    return subs(l, x, s, e+1, curr_sum + l[e])``````

As I said, I'm not sure it is actually O(n), is it?

0 of 0 votes

@charan
k may be negative.
Your code will fail already for a={-1}, k=-2.

for the above input, the returned result is 1, which is valid since we have a subarray {-1} >= -2.

0 of 0 votes

Thinking aloud here:

1. Go through the array list and create a hash map O(n) with key: length of sub-array and value: the array itself
2. and then look up the minimum key O(n); if we need the subarray we can return the value

total O(n)

How would you produce the sub-arrays before inserting them into the hash map?

0 of 0 votes

My solution using DP. I constructed a 2D table which caches sums of all subarrays, then just compares intermediate sums with the target.

For example, the 2D array for the given example of

``[ 5, 4, -8, 16 ]``

is below

``````[ 5, 9, 1, 17 ]
[ 0, 4, -4, 12 ]
[ 0, 0, -8, 8 ]
[ 0, 0, 0, 16 ]``````

The code:

``````public static int minLengthSubarrayWithSum(int[] nums, int target) {
    if (nums == null || nums.length == 0) {
        return -1;
    }

    final int n = nums.length;
    final int[][] cache = new int[n][n];
    int minLength = Integer.MAX_VALUE;

    // Sum continuously for first row
    cache[0][0] = nums[0];
    for (int i = 1; i < n; i++) {
        cache[0][i] = cache[0][i - 1] + nums[i];
    }

    // Sum the rest
    for (int r = 1; r < n; r++) {
        for (int c = r; c < n; c++) {
            cache[r][c] = cache[r][c - 1] + nums[c];
        }
    }

    // Find the min length
    for (int r = 0; r < n; r++) {
        for (int c = r; c < n; c++) {
            if (cache[r][c] >= target) {
                minLength = Math.min(minLength, c - r + 1);
            }
        }
    }

    return minLength;
}``````

I didn't spend any time optimizing this at all, but it's likely that some of the loops can be combined. This algorithm uses quadratic space and time.

0 of 0 votes

Here is my proposal. I've divided the code into three steps for readability. This can be achieved in one iteration as well.
Time and space complexity O(n).

``````int minSubArrLen(vector<int>& arr, int k){
    int n = arr.size();

    unsigned int res = -1; // max unsigned int

    vector<int> subSum(n);
    vector<int> subLen(n);

    subSum[0] = arr[0];
    subLen[0] = 1;

    int y = 0;

    // Kadane's algorithm to produce all max sum sub-arrays ending at each index.
    for(int i = 1; i < n; ++i){
        if(arr[i] >= k) // Will work also w/o it. Just an optimization.
            return 1;

        if(subSum[i-1] > 0){
            subSum[i] = subSum[i-1] + arr[i];
            subLen[i] = subLen[i-1] + 1;
        }
        else{
            subSum[i] = arr[i];
            subLen[i] = 1;
        }
    }

    // For every sub-array whose sum is more than k, try to trim it from the front:
    for(int i = 0; i < n; ++i){
        if(subSum[i] > k){
            if(y <= i - subLen[i]){
                while(subSum[i] - subSum[y] > k)
                    y++;
            }
            else{
                y = i;
                while(y >= 0 && subSum[i] - subSum[y] < k){
                    y--;
                }
            }
            subLen[i] = i - y;
        }
    }

    // Choose the minimum length sub array
    unsigned minLength = -1;
    for(int i = 0; i < n; ++i){
        if(subSum[i] >= k && subLen[i] < minLength)
            minLength = subLen[i];
    }

    return minLength;
}``````

CodeArtist:

Isn't trimming O(N) in the worst case? And since your algorithm runs trimming on each index, the worst case seems O(N^2).

I don't think an O(N) solution exists.

0 of 0 votes

``````void findSmallestContgArr(int[] arr, int len, int num, int& start, int& end) {
    auto j = 0, sum = 0, minLength = INT_MAX;
    for (int i = 0; i < len; i++) {
        while (sum < num && j < len) {
            sum += arr[j];
            j++;
        }
        if (sum >= num) { // we have a solution
            if (j - i < minLength) {
                minLength = j - i;
                start = i;
                end = j;
            }
        }
        sum -= arr[i];
    }
    if (minLength == INT_MAX) { // Didn't find a solution
        start = -1;
        end = -1;
    }
}``````

0 of 0 votes

@DeathEater
Check this case:
arr = {-2,2,-2,1,1,1,1} k=2.

0 of 0 votes

The smallest piece of code to solve it in O(n) using perl is as follows:-

``````#my @intArray = qw(5 4 -8 16);
#my $lookupInt = 10;

my @intArray = qw(4 4 2 -6 4 10 2);
my $lookupInt = 16;

print STDOUT "Input \@intArray=[", join(",", @intArray), "] and \$lookupInt=[", $lookupInt, "] output of \&getSamllSubArray ~[", join(",", @{getSamllSubArray(\@intArray, $lookupInt)}), "]\n";

sub getSamllSubArray {
    my ($arrayRef, $lookupValue) = @_;
    my $sum = 0;
    my @smallestArray = ();

    foreach (sort {$b <=> $a} @{$arrayRef}) {
        $sum += $_;
        push(@smallestArray, $_);
        if ($sum >= $lookupValue) {
            last;
        }
    }
    return [@smallestArray];
}``````

Test Output -
Input @intArray=[4,4,2,-6,4,10,2] and $lookupInt= output of &getSamllSubArray ~[10,4,4]
Input @intArray=[5,4,-8,16] and $lookupInt= output of &getSamllSubArray ~

0 of 0 votes

This code is running with O(n^2), but works as expected.

``````public static int miniSubArrayLen(int[] nums, int s) {
    int returnVal = -1;
    for (int i = 0; i < nums.length; i++) {
        int currentSum = nums[i];
        int subArrLen = 1;
        for (int j = i + 1; j < nums.length; j++) {
            if (s > currentSum) {
                currentSum += nums[j];
            }
            ++subArrLen;
            if (currentSum >= s && (returnVal == -1 || returnVal > subArrLen)) {
                returnVal = subArrLen;
                break;
            }
        }
    }
    return returnVal;
}``````

0 of 0 votes

``````def minimalSubArraySum(array: Array[Int], value: Int): Int = {
  var left = 0
  var right = 0
  var sum = 0
  var result = Integer.MAX_VALUE

  while (right < array.length) {
    if (sum >= value && result > (right - left)) result = right - left
    sum += array(right)
    right += 1
    if (sum > value) {
      sum -= array(left)
      left += 1
    }
  }
  while (left < right) {
    if (sum >= value && result > (right - left)) result = right - left
    sum -= array(left)
    left += 1
  }
  if (result == Integer.MAX_VALUE) -1
  else result
}``````

-1 of 1 vote

Python:

``````l = [1,4,2,5,-4,8,-2]
sum1 = 10

l.sort()
l.reverse()

for i in range(len(l)):
    if l[i] >= sum1:  # We are checking if the biggest element is bigger,
        print l[:i+1]
        break
    sum1 = sum1 - l[i]  # if not then we have to add that biggest element with the second big element. So, just subtract the biggest element and check if the remainder is bigger than the next big element.``````

-1 of 1 vote

``````public static int smallestSubarray(int arr[], int target) {
    int minLen = Integer.MAX_VALUE;
    int end = 0;
    int rollingSum = 0;
    Map<Integer, Integer> pos = new HashMap<Integer, Integer>();
    while (end < arr.length) {
        rollingSum = rollingSum + arr[end];
        if (pos.containsKey(rollingSum % 16) && rollingSum >= target) {
            minLen = Math.min(minLen, end - (pos.get(rollingSum % 16)));
        }
        pos.put(rollingSum % 16, end);
        end++;
    }
    return minLen;
}``````
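Several comments in this thread point out that simple sliding-window solutions break once negative numbers appear. For reference, here is a sketch (not taken from any answer above) of an O(n) approach that does handle negatives, keeping a monotone deque of prefix-sum indices; the test values come from the cases quoted in the comments.

```python
from collections import deque

def shortest_subarray_at_least(nums, k):
    # prefix[i] = sum of the first i elements, so sum(nums[i:j]) = prefix[j] - prefix[i]
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)

    best = len(nums) + 1
    dq = deque()  # indices into prefix, kept with increasing prefix values
    for j, pj in enumerate(prefix):
        # front of deque holds the smallest prefix; pop while a valid window ends at j
        while dq and pj - prefix[dq[0]] >= k:
            best = min(best, j - dq.popleft())
        # back of deque: drop indices that can never start a shorter valid window
        while dq and prefix[dq[-1]] >= pj:
            dq.pop()
        dq.append(j)
    return best if best <= len(nums) else -1

print(shortest_subarray_at_least([4, 4, 2, -6, 4, 10, 2], 16))       # 3, i.e. [4, 10, 2]
print(shortest_subarray_at_least([2, 7, 2, -3, 9, -7, -6, -5], 12))  # 4, i.e. [7, 2, -3, 9]
```

Each index enters and leaves the deque at most once, which is what makes the whole pass linear despite the nested-looking loops.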
https://booksiview.com/paper-3-174390.html
paper 3

Page 1

1) Which of the following SI units can be expressed in exactly two base SI units?

A coulomb B tesla C newton D hertz (E) A and C

2) A car is travelling at a velocity of 24 m s^-1 due west initially. At a later time, it is seen travelling at a velocity of 10 m s^-1 due south. Given that the direction North N points vertically upwards, which of the following vectors R represents the change in velocity of the car?

(the answer options were vector diagrams and did not survive extraction)

3) A motorist travelling at 10 m s^-1 can bring his car to rest in a braking distance of 10 m. In what distance could he bring the car to rest from a speed of 30 m s^-1 using the same braking force?

A 17 m B 30 m C 52 m D 90 m (E) 21 m

4) A satellite orbits the Earth 200 km above its surface. The satellite's acceleration towards the centre of the Earth is 9.2 m s^-2 and the radius of the Earth is 6400 km. The speed of the satellite is

A 246 km s B 7.79 km s^-1 C 7.67 km s^-1 D 1.36 km s^-1 (E) 0

5) The escape speed of a nitrogen molecule at the Earth's surface is 0.90×10^4 m s^-1. What is the escape speed at a height 0.30 R_E above the Earth's surface, where R_E is the radius of the Earth?

A 0.49×10^4 m s^-1 B 0.59×10^4 m s^-1 C 0.69×10^4 m s^-1 D 0.79×10^4 m s^-1 (E) 0.19×10^4 m s^-1

6) A constant power supply is used to melt 1 kg
of ice, to heat the water produced, and finally to turn all the water to steam.
Specific heat capacity of water = 4 × 10^3 J kg^-1 K^-1
Specific latent heat of fusion of ice = 3 × 10^5 J kg^-1
Specific latent heat of vaporization of water = 2 × 10^6 J kg^-1
Which graph best shows how the thermodynamic temperature T varies with time t for this sequence?

General Certificate of Education (Adv. Level) Examination, 2013
%%%%%% Trial %%%%%%
Grade 13 - 2nd Term Test 2013
Paper 1    Time: 2 hours    Part A
All rights reserved.

Page 2

(E) B and C

7) A sound wave of frequency 400 Hz is travelling in a gas at a speed of 320 m s^-1. What is the phase difference between two points 0.1 m apart in the direction of travel?

7) Fig. below shows the formation of the first order spectrum when parallel rays of monochromatic light fall perpendicularly on a non-uniform spacing diffraction grating PQR. For the part of the grating between P and Q, the angle of deviation θ is constant, whilst for that between Q and R, θ decreases.

Which diagram best shows how the grating interval d varies with distance x, the distance from P?

Page 6

17) The diagram below shows two bodies of masses 0.50 kg and 1.00 kg connected by a light rigid rod of length 4.00 m and placed on a smooth surface. A body P of mass 0.50 kg which moves at velocity 4.00 m s^-1 collides and sticks to the body of mass 0.50 kg, and the system of masses rotates about the centre of mass.

18) The specific heat capacity at constant volume of an ideal gas is 2.4 × 10^2 J K^-1 kg^-1.
The change in the internal energy of 5.0 × 10^-3 kg of the gas when the temperature of the gas is increased from 27 °C to 327 °C is

A 32 J C 180 J E 120 J
B 49 J D 360 J

19) Diagram (a) below shows a graph of displacement y against distance x for a progressive wave at a certain time. At time 0.4 s later, the profile of the wave is shown in diagram (b).

The frequency of the wave is

A 0.5 Hz C 5.0 Hz
B 2.5 Hz D 12.5 Hz
E 1 Hz

20) A screw gauge gives the following reading when used to measure the diameter of a wire.
Main scale reading: 0 mm. Circular scale reading: 52 divisions. Given that 1 mm on the main scale corresponds to 100 divisions of the circular scale, the diameter of the wire from the above data is

A 0.005 cm B 0.52 cm E 0.0125 cm
C 0.052 cm D 0.026 cm

21) Three perfect gases at absolute temperatures T1, T2 and T3 are mixed. The masses of molecules are m1, m2 and m3 and the numbers of molecules are n1, n2 and n3 respectively. Assuming no loss of energy, the final temperature of the mixture is:

(the answer options were formula images and did not survive extraction; option E read "none")

Page 7

22) A bullet fired into a fixed target loses half of its velocity after penetrating 3 cm. How much further will it penetrate before coming to rest, assuming that it faces constant resistance to motion?

A 3.0 cm B 2.0 cm E 0.5 cm
C 1.5 cm D 1.0 cm

23) The block of mass M moving on the frictionless horizontal surface collides with a spring of spring constant K and compresses it by length L. The maximum momentum of the block after collision is

(the answer options were formula images; two surviving options read "Zero" and "None")

24) In the circuit, the galvanometer G shows zero deflection. If the batteries A and B have negligible internal resistance, the value of the resistor R will be

A 200 Ω C 100 Ω E 10 Ω
B 500 Ω D 1000 Ω

25) When two tuning forks (fork 1 and fork 2) are sounded simultaneously, 4 beats per second are heard. Now, some tape is attached on the prong of fork 2. When the tuning forks are sounded again, 6 beats per second are heard.
If the frequency of fork 1 is 200 Hz, then what was the original frequency of fork 2?

A 200 Hz C 202 Hz E 198 Hz
B 196 Hz D 204 Hz

26) A parallel plate condenser with a dielectric of dielectric constant K between the plates has a capacity C and is charged to a potential V volts. The dielectric slab is slowly removed from between the plates and then reinserted. The net work done by the system in this process is

A ½(K−1)CV² B CV²(K−1)/K C (K−1)CV² D zero E none

Page 12

43) A source producing sound of wavelength 0.6 m is moving away from a stationary listener with speed V/6, where V is the speed of sound in air. The wavelength of sound heard by the listener is

(a) 0.5 m (b) 0.54 m (c) 0.66 m (d) 0.7 m (e) 0.8 m

44) An 8 kg metal block of dimensions 16 cm × 8 cm × 6 cm is lying on a table with its face of largest area touching the table. If g = 10 m s^-2, the minimum amount of work done in making it stand with its length vertical is

(a) 0.4 J (b) 6.4 J (c) 64 J (d) 4 J (e) 12.8 J

45) A beaker full of hot water is kept in a room and it cools from 80 °C to 75 °C in t1 minutes, from 75 °C to 70 °C in t2 minutes and from 70 °C to 65 °C in t3 minutes.

(a) t1 < t2 < t3 (b) t1 = t2 = t3 (c) t1 < t2 = t3 (d) t1 < t2 < t3 (e) none

46) A metal wire is clamped between two vertical walls. At 20 °C the unstrained length of the wire is exactly equal to the separation between the walls. If the temperature of the wire is decreased, the graph between elastic energy density (u) and temperature (T) of the wire is

(the answer options were graphs of u against T (in °C) and did not survive extraction)

48) The figure shows a meter-bridge circuit, with AB = 100 cm, X = 12 Ω and R = 18 Ω, and the jockey J in the position of balance. If R is now made 8 Ω, through what distance will J have to be moved to obtain balance?

(A) 10 cm (B) 20 cm (C) 30 cm (D) 40 cm (E) 15 cm

Page 13

(46) Heat is conducted across a composite block of two slabs of thickness d and 2d. Their thermal conductivities are 2k and k respectively.
All the heat entering the face AB leaves from the face CD. The temperature in °C of the junction EF of the two slabs is

(A) 20 (B) 50 (C) 60 (D) 80 (E) 12

(47) Figure shows equipotential surfaces for a two-charge system. At which of the labelled points will an electron have the highest potential energy?

(A) Point A (B) Point B (C) Point C (D) Point D (E) Point B & D
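As a side check (my own verification, not part of the paper), questions 3 and 4 can be confirmed numerically: with a constant braking force the stopping distance scales as v², and for a circular orbit the centripetal relation a = v²/r gives v = √(a·r).

```python
import math

# Q3: 10 m s^-1 stops in 10 m; braking distance at 30 m s^-1 with the same force
d1, v1, v2 = 10.0, 10.0, 30.0
d2 = d1 * (v2 / v1) ** 2
print(d2)  # 90.0 m -> option D

# Q4: satellite 200 km above the surface, a = 9.2 m s^-2, R_E = 6400 km
r = (6400 + 200) * 1e3     # orbital radius in metres
v = math.sqrt(9.2 * r)     # v = sqrt(a * r), in m s^-1
print(round(v / 1e3, 2))   # ~7.79 km s^-1 -> option B
```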
http://www.expertsmind.com/questions/yield-volatility-and-measurement-30111053.aspx
## Yield volatility and measurement

Measuring volatility is very important as it is a critical input in valuation models. In subsequent chapters we will see the importance of assumed volatility in valuing bonds with embedded options. Also, in measuring the interest rate risk of a position, a combination of duration with yield volatility is used.

Measuring Historical Yield Volatility

Standard deviation or variance is used to measure yield volatility. We can calculate variance from historical data with the help of the following formula:

Variance = [Σ from t=1 to T of (Xt − X̄)²] / (T − 1)   ... Eq. (1)

and

Standard deviation = √Variance

In the above formula, Xt is observation t of variable X, X̄ is the sample mean of variable X, and T is the number of observations in the sample.

Our focus is to calculate the change in the daily yield relative to the previous day's yield.

This can be computed as the natural logarithm of the ratio of the yields for two days, i.e.,

ln (yt/yt-1)

Where,

yt = Yield on day t.

yt-1 = Yield on day t - 1.

The relative change of daily yields computed under simple compounding and continuous compounding is almost the same, but continuous compounding is more popular among market participants.

Multiplying the natural logarithm of the ratio of the two yields by 100 scales it to the percentage change in daily yields:

Xt = 100 [ln (yt/yt-1)]

Where,

Xt = % change in yield.

yt = Yield on day t.

yt-1 = Yield on day t - 1.

Determining the Number of Observations

The sample size, i.e., the number of observations taken, affects the calculation of the daily standard deviation. It is difficult to define an ideal sample size as it always depends upon the situation in hand.
For example, a portfolio manager who is more concerned about long-term volatility might use 25 days for observation whereas a trader concerned about overnight positions might use only 10 most recent trading days.\n\nAnnualizing the Standard Deviation\n\nWe can find the annualized standard deviation with the help of the formula given below:\n\nDaily standard deviation x", null, "There is a different view regarding the number of days in the year that is to be used in the formula given above. Some market participants use 360 days whereas some use 365 days. There are some market participants who use only trading days i.e., 260 days based on five working days in a week for 52 weeks, while some other participants deduct 10 non-trading holidays too and use 250 days.\n\nInterpreting the Standard Deviation\n\nAssume that standard deviation for the 15 years zero coupon bond is 14%. If the prevailing yield is 8% then the annual standard deviation will be 112 basis points (14 x 8).\n\n#### Formulation of optimum credit policy, A firm requires a clear policy regard...\n\nA firm requires a clear policy regarding as to whether the credit should be authorized to a customer and if yes to what extent. Credit principles are set for making such decisions.\n\n#### What are the internal audits, What are the Internal audits Internal au...\n\nWhat are the Internal audits Internal audit is seen as independent from management who are devising and implementing internal controls and must be able to provide advice on in\n\n#### Explain the purchasing power parity, Explain the purchasing power parity, b...\n\nExplain the purchasing power parity, both of the absolute and relative versions. What causes the deviations from the purchasing power parity? Answer:  The absolute version of p\n\n#### Show the supposition of mm hypothesis, Q. Show the Supposition of MM Hypoth...\n\nQ. Show the Supposition of MM Hypothesis? Supposition of MM Hypothesis:- (i) There are ideal capital markets. 
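As a sketch, the whole procedure above can be put together in a few lines of Python. The daily yield series and the choice of a 250-day year are illustrative assumptions, not data from the text:

```python
import math

# Hypothetical series of daily yields (%) for a bond -- illustrative only.
daily_yields = [8.00, 8.04, 7.98, 8.10, 8.07, 8.02, 8.05, 8.11, 8.06, 8.01, 8.03]

# Percentage change in daily yields under continuous compounding:
# Xt = 100 * ln(yt / yt-1)
changes = [100 * math.log(y1 / y0) for y0, y1 in zip(daily_yields, daily_yields[1:])]

# Sample variance with (T - 1) in the denominator, as in Eq. (1)
T = len(changes)
mean = sum(changes) / T
variance = sum((x - mean) ** 2 for x in changes) / (T - 1)
daily_std = math.sqrt(variance)

# Annualize: daily standard deviation x sqrt(days per year);
# 250 trading days is one of the conventions discussed above.
annual_std = daily_std * math.sqrt(250)

print(f"daily std = {daily_std:.4f}%, annualized = {annual_std:.2f}%")

# Interpreting the annualized figure: a 14% yield volatility at a
# prevailing yield of 8% corresponds to 0.14 * 800 bp = 112 basis points.
example_bp = 0.14 * 8 * 100
print(f"example interpretation: {example_bp:.0f} basis points")
```

For a real portfolio one would read the yield series from market data and pick the day-count convention (250, 260, 360 or 365) to match the desk's practice; the standard library's `statistics.stdev` would give the same sample standard deviation as the explicit formula here.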