URL: string (length 15 to 1.68k)
text_list: sequence (length 1 to 199)
image_list: sequence (length 1 to 199)
metadata: string (length 1.19k to 3.08k)
https://www.numbersaplenty.com/8389137
[ "Cookie Consent by FreePrivacyPolicy.com\nSearch a number\nBaseRepresentation\nbin100000000000…\n…001000010001\n3120210012201210\n4200000020101\n54121423022\n6455450333\n7131210061\noct40001021\n916705653\n108389137\n11480a979\n1229869a9\n1319795b3\n1411853a1\n15b0aa0c\nhex800211\n\n8389137 has 8 divisors (see below), whose sum is σ = 11352736. Its totient is φ = 5509152.\n\nThe previous prime is 8389123. The next prime is 8389141. The reversal of 8389137 is 7319838.\n\n8389137 is digitally balanced in base 3, because in such base it contains all the possibile digits an equal number of times.\n\nIt is a sphenic number, since it is the product of 3 distinct primes.\n\nIt is not a de Polignac number, because 8389137 - 28 = 8388881 is a prime.\n\nIt is a Leyland number of the form 232 + 223.\n\nIt is a Duffinian number.\n\nIt is a junction number, because it is equal to n+sod(n) for n = 8389095 and 8389104.\n\nIt is not an unprimeable number, because it can be changed into a prime (8389187) by changing a digit.\n\nIt is a polite number, since it can be written in 7 ways as a sum of consecutive naturals, for example, 20668 + ... + 21069.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (1419092).\n\nAlmost surely, 28389137 is an apocalyptic number.\n\nIt is an amenable number.\n\n8389137 is a deficient number, since it is larger than the sum of its proper divisors (2963599).\n\n8389137 is a wasteful number, since it uses less digits than its factorization.\n\n8389137 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 41807.\n\nThe product of its digits is 36288, while the sum is 39.\n\nThe square root of 8389137 is about 2896.4006974174. The cubic root of 8389137 is about 203.1916056760.\n\nThe spelling of 8389137 in words is \"eight million, three hundred eighty-nine thousand, one hundred thirty-seven\".\n\nDivisors: 1 3 67 201 41737 125211 2796379 8389137" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8464125,"math_prob":0.98840475,"size":1927,"snap":"2021-21-2021-25","text_gpt3_token_len":587,"char_repetition_ratio":0.1700468,"word_repetition_ratio":0.0059701493,"special_character_ratio":0.4302024,"punctuation_ratio":0.1305483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99364614,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T02:56:44Z\",\"WARC-Record-ID\":\"<urn:uuid:cbbfdd75-dda9-4973-bb90-f3535ee7481f>\",\"Content-Length\":\"9226\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c09d63e3-6b9b-46a5-b6b9-c19b0df96bc5>\",\"WARC-Concurrent-To\":\"<urn:uuid:58a2887a-956f-46ed-94bc-75d89f8ac061>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/8389137\",\"WARC-Payload-Digest\":\"sha1:PFMBWCFJ3OJ5B5B5K3QM3MX2RVJNHOR2\",\"WARC-Block-Digest\":\"sha1:WZXAP7ASFH6VMNMNIPXVXVBDLRIXNHLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487634616.65_warc_CC-MAIN-20210618013013-20210618043013-00293.warc.gz\"}"}
https://ij-healthgeographics.biomedcentral.com/articles/10.1186/s12942-020-00228-y
[ "# Detecting multiple spatial disease clusters: information criterion and scan statistic approach\n\n## Abstract\n\n### Background\n\nDetecting the geographical tendency for the presence of a disease or incident is, particularly at an early stage, a key challenge for preventing severe consequences. Given recent rapid advancements in information technologies, it is required a comprehensive framework that enables simultaneous detection of multiple spatial clusters, whether disease cases are randomly scattered or clustered around specific epicenters on a larger scale. We develop a new methodology that detects multiple spatial disease clusters and evaluates its performance compared to existing other methods.\n\n### Methods\n\nA novel framework for spatial multiple-cluster detection is developed. The framework directly stands on the integrated bases of scan statistics and generalized linear models, adopting a new information criterion that selects the appropriate number of disease clusters. We evaluated the proposed approach using a real dataset, the hospital admission for chronic obstructive pulmonary disease (COPD) in England, and simulated data, whether the approach tends to select the correct number of clusters.\n\n### Results\n\nA case study and simulation studies conducted both confirmed that the proposed method performed better compared to conventional cluster detection procedures, in terms of higher sensitivity.\n\n### Conclusions\n\nWe proposed a new statistical framework that simultaneously detects and evaluates multiple disease clusters in a large study space, with high detection power compared to conventional approaches.\n\n## Introduction\n\nIn the middle of the 19th century, a deadly cholera outbreak affected the Soho area of London, UK. John Snow, a British physician, plotted the cases of cholera victims on a map and identified many victims within a short distance of a water pump on Broad Street. The disease map led him to a historic landmark, with the water from the pump identified as the source of cholera . However, what if other cholera victims had also clustered around another pump just 200 yards away? Would this still be considered as a single cluster or preferably another cluster with a different epicenter? Although the cause of disease or incident cannot be determined only by mapping the victims, disease maps are useful in initial investigations of disease causes. Whether the cases of diseases are scattered randomly or clustered around multiple specific centers is a long-standing question in epidemiological studies .\n\nTo date, detecting the tendency of a clustering incident, particularly at an early stage, is still a key challenge for practitioners in preventing severe epidemics and pandemics. Given recent rapid advancements in the utility of combined health and geographical information, the challenge has become more complex and has initiated a range of methodological developments. Based on the domain with which disease clusters are dealt, the types of disease clustering are threefold: being purely temporal, purely spatial, and spatio-temporal, for each of which different test techniques are proposed . In particular, spatial clusters indicate a spatial tendency for the presence of a disease or incident, the risk of which is relatively high to other surrounding regions.\n\nThere have been many statistical tests widely used for identifying meaningful spatial clusters. 
Amongst those techniques, a class called the general test searches for clusters without any preconceived assumptions on their locations. Whether the statistical significance information of each cluster is available, however, depends on the technique employed . The techniques that do not determine any statistical significance are called global clustering tests, techniques developed by Moran , Whitemore et al. , Oden , Tango , Rogerson and Bonetti and Pagano . In contrast, the other techniques that provide the statistical significance information, on which the present study focus, are called cluster detection tests (CDTs), including those proposed by Besag and Newell , Turnbull et al. , Kulldorff and Nagarwalla , Kulldorff , Tango .\n\nWithin CDTs, the circular spatial scan statistic has been used extensively along with SaTScan software ; examples include, as part of their cancer surveillance initiative, investigating the geographical variation of breast, lung, prostate, and colorectal cancer incidences in New York State . A distinctive feature of the methodology is to adopt a circular scanning window varying its size for defining potential clusters. Such a fixed shape of the scanning window could perform less effective when detecting clusters that lie in non-circular shape regions, like regions alongside a river . More recent developments focus on non-circular cluster forms, employing different spatial scan statistics; examples can be found in Patil , Assuncao et al. , and Tango and Takahashi . The flexibly shaped scan statistic implemented in FleXScan software adopts the scan approach with an exhaustive search of all cluster candidates within a given radius of any area. This approach balances out the unfeasible exhaustive search by restricting it within pre-specified neighborhoods of each area . Tango and Takahashi also proposed a flexible spatial scan statistic implemented with a restricted likelihood ratio. Their technique requires much less computational time compared to the original statistic and effectively detects clusters of any shape when the relative risk (RR) becomes large.\n\nEven though such extensive methodological developments have been made, there seems to have been little attention to the accurate statistical evaluation on the simultaneous detection of multiple clusters, in other words, identifying an appropriate number of cluster regions at the same time. A significant shortcoming of previous CDTs is that they cannot provide any statistical significance information for the identified multiple clusters. Such a limitation is simply because most of the methodologies focus on “single” cluster detection while investigating the extended study space within which more than one cluster is expected. Some CDTs can be adjusted for multiple cluster detection employing spatial scan statistics [14, 23,24,25], by iteratively running a conventional CDT single cluster detection algorithm—it leaves out sub-regions that are already identified as disease clusters in previous iterations until satisfactory results are obtained . While the detection procedure is recursively performed, the cluster of the first choice is often referred to as the “primary” cluster, while the remaining clusters are referred to as “secondary” clusters; the conventional procedure is therefore often named as the secondary-cluster procedure (SCP).\n\nThe utility of CDTs becomes challenging when evaluating the number of clusters that lie within the study region. 
Each iteration of cluster detection in SCPs identifies only one cluster; thus, any test statistics, including associated p-values of the iteration, are only valid for evaluating that specific cluster. As a consequence, the current conventional approaches fail to provide an accurate assessment for selected multiple clusters. Therefore, a comprehensive approach is needed. A recent study suggests that a combined approach of statistical modeling and model selection can offer a potential solution by illustrating a case study that detects purely temporal clusters with a time series model in Takahashi and Shimadzu . However, it is not always straightforward if the time series framework is directly applicable to a spatial context, which involves an extra dimension. It is unable to take advantage of the ordering structure in data—time series data are one-directional along with time, from the beginning to the end, but spatial data do not possess such a clear ordering structure. It is even unclear whether a similar approach can perform with a high detection power for cluster detection and, thus, extra care is required to develop a multiple-cluster detection framework in spatial contexts.\n\nHere, we propose a unified framework that enables simultaneous detection and evaluation of multiple spatial-clusters by combining generalized linear models (GLMs) and information criterion approaches. The framework encompasses the procedure proposed for detecting purely temporal clusters in Takahashi and Shimadzu as a special case. We present an illustrative example, the hospital admission for chronic obstructive pulmonary disease (COPD) in England, available from a textbook , for evaluating the performance of the proposed method. The results are compared with an SCP approach. The consistency property of the proposed procedure is also investigated in a simulation study.\n\n## Methods\n\nThe proposed method will be evaluated through real and simulation data. As an illustrative example, we applied the method to the spatial distribution of the hospital admission for COPD in England for 2010 and compared the detection performance with an SCP for the spatial tendency of disease risk. COPD is a group of lung conditions that cause breathing difficulties, including emphysema and chronic bronchitis, and is common in the middle to older aged adults who smoke. Although the leading cause of COPD is smoking, some cases are due to long-term exposure to harmful fumes or dust. Figure 1 shows the spatial distribution of the risk of hospital admission for COPD. There were $$m = 324$$ sub-regions (local authorities) in England amongst which the total number of cases reported was 22,293. The data was taken from the book “Spatio-Temporal Methods in Environmental Epidemiology” by Shaddick and Zidek (from the authors’ website: http://empslocal.ex.ac.uk/people/staff/gs454/). The color gradient corresponds to standardized admission rates adjusted by the underlying age-sex profile of the population within the sub-region; a darker color indicates a higher rate of COPD hospital admission.\n\nA simulation study is set up to investigate the consistent property, whether the proposed method tends to select the correct number of clusters when the actual number of clusters is known. The simulation data are motivated by the COPD data to keep some reality in the spatial distribution of disease. 
However, the focus is given on the evaluation of detecting low RR clusters ranging from 1 to 1.6.\n\nIn the simulation study, we assumed five clusters [A–E; Fig. 2] consisting of a different number of sub-regions, with each cluster showing a different RR according to the seven different scenarios (S1–S7) shown in Table 1. For instance, Scenario 1 (S1) indicated the null, i.e., there was no cluster, whereas Scenarios 2–5 (S2–S5) had five clusters (A–E) and Scenarios 6 and 7 (S6 and S7) assumed only single cluster (A) in the study area. For the remaining sub-regions (B–E), the RRs were set to 1.0. We generated 1000 datasets for each scenario and compared the estimated power calculated from the two cluster detection tests, the SCP and the proposed methods, at a significance level of 0.05.\n\n## Results\n\n### Methodological developments\n\nWe first describe the challenge in detecting multiple-clusters in a spatial extent, formulating it as a mixture Poisson GLM. Here, the formulation allows that the proposed procedure directly stands on the likelihood principle and encompasses the SCP as a special case, demonstrating the critical fact that selecting appropriate multiple clusters is an exact parallel to the covariate selection in regression modeling, i.e., model selection. We then propose a new criterion for choosing a model with the appropriate number of clusters in favor of the maximum marginal likelihood, in a similar manner in deriving the Bayesian information criterion (BIC).\n\n### Multiple-cluster model and its likelihood\n\nConsider a study space (or area) $$G$$ consisting of $$m$$ segments (or sub-regions), each of which corresponds to the smallest element in the space (e.g., counties and states). We write the number of cases within segment $$i$$ as $$Y_{i}$$, which is assumed to follow a Poisson distribution independently with an expected value $$\\mu_{i}$$—i.e., $$Y_{i} |\\mu_{i} \\sim {\\text{Poisson}}\\left( {\\mu_{i} } \\right)$$. And the observations (which is not random variable) of which $$Y_{i}$$ is denoted in lowercase as $$y_{i}$$, $$i = 1, 2, \\ldots , m$$. Additionally, let $${\\mathcal{W}}$$ denote the set of all potential scanning zones (sets of connected segments) of any size, the construction of which set $${\\mathcal{W}}$$ relies on an employed scanning method. Assuming that there are $$K$$ clusters: $$\\varvec{w} = \\left\\{ {w_{1} , w_{2} , \\ldots , w_{K} } \\right\\}$$, in space $$G$$, each mutually exclusive window $$w_{k}$$ contains a set of adjacent segments as a cluster; i.e., $$w_{k} \\cap w_{{k^{\\prime}}} = \\phi$$ for $$w_{k} \\ne w_{{k^{\\prime}}}$$. Note that $$K = 0$$ and $$K = 1$$ indicate no cluster and a single cluster in the study space, respectively.\n\nThe number of cases, $$y_{i}$$, is expected to be higher within hot-spot clusters compared to in other parts of the study space. The expected number of cases can be modeled as\n\n$$\\log \\mu_{i} = \\log \\left( {\\theta_{i} \\mu_{i}^{0} } \\right) = \\alpha + \\mathop \\sum \\limits_{k = 1}^{K} \\beta_{k} z_{ki} + \\log \\mu_{i}^{0}$$\n(1)\n\nfor $$K \\ge 1$$ and $$\\log \\mu_{i} = \\alpha^{0} + \\log \\mu_{i}^{0}$$ for $$K = 0$$. Here, the indicator variable $$z_{ki} = 1$$, if segment $$i$$ is a member of $$k{ - }$$th cluster ($$i \\in w_{k}$$) and $$z_{ki} = 0$$ otherwise. Note that all coefficients are positive, $$\\beta_{k} > 0$$. 
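In practical terms, model (1) is an ordinary Poisson regression with the cluster-membership indicators z_ki as covariates and log mu_i^0 as an offset, so it can be fitted with standard GLM software. The following is only an illustrative sketch (not the authors' implementation), written in Python with statsmodels; the arrays y, mu0 and the K x m indicator matrix z are assumed to be supplied by the user.

```python
import numpy as np
import statsmodels.api as sm

def fit_multiple_cluster_model(y, mu0, z):
    """Fit log(mu_i) = alpha + sum_k beta_k * z_ki + log(mu0_i) as a Poisson GLM.

    y   : (m,) observed case counts per sub-region
    mu0 : (m,) expected counts under the null (e.g. age-sex adjusted)
    z   : (K, m) 0/1 cluster-membership indicators, one row per candidate cluster
    """
    X = sm.add_constant(z.T)  # columns: intercept, z_1i, ..., z_Ki
    model = sm.GLM(y, X, family=sm.families.Poisson(), offset=np.log(mu0))
    res = model.fit()
    # res.params ~ (alpha, beta_1, ..., beta_K); res.llf is the maximised log-likelihood
    return res
```

The null (K = 0) model corresponds to the same call with the intercept column only.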
For segments that fall into the $$k$$-th hot-spot cluster, $$w_{k}$$, a parameter of model (1), becomes $$\\theta_{i} = \\theta_{{w_{k} }} = \\exp \\left( {\\alpha + \\beta_{k} } \\right)$$. In contrast, for those that fall outside of the clusters ($$\\bar{\\varvec{w}}$$), the parameter is $$\\theta_{i} = \\theta_{{\\bar{\\varvec{w}}}} = \\exp \\left( \\alpha \\right)$$. Here, there is some flexibility in the constant term $$\\mu_{i}^{0} : = \\mu_{i}^{0} \\left( {\\varvec{x}_{i} } \\right)$$ that is often modeled as a function of other covariates $$x_{i}$$, such as demographic or environmental factors; this yields the null model; i.e., the expected number of cases, when there is initially no cluster in the study space such that $$\\varvec{\\beta}= \\varvec{0}$$. The null model is therefore described as $$\\log \\mu_{i} = \\alpha + \\log \\mu_{i}^{0}$$.\n\nThe likelihood function of model (1) can be constructed as follows. Now, $$f_{i} \\left( {y_{i} |\\varvec{z},\\varvec{\\psi}} \\right) = f(y_{i} |\\mu_{i}^{0} , \\varvec{z},\\varvec{\\psi})$$ is the probability function of $$Y_{i} = y_{i}$$ given the two arguments: the locations of a hot-spot window, $$\\varvec{z}\\text{:} = \\varvec{z}\\left( \\varvec{w} \\right) = \\left( {z_{ki} } \\right)$$, which is a $$K \\times m$$ matrix, and the parameters $$\\varvec{\\psi}= \\left( {\\alpha , \\beta_{1} , \\beta_{2} , \\ldots , \\beta_{K} } \\right)$$. The conditional log-likelihood function can be expressed as\n\n$$l \\left( {\\varvec{\\psi}|\\varvec{z}} \\right): = \\log \\left[ {\\mathop \\prod \\limits_{i = 1}^{m} \\mathop \\prod \\limits_{k = 0}^{K} \\left\\{ {f\\left( {y_{i} |\\mu_{i}^{0} ,\\varvec{ z},\\varvec{\\psi}} \\right)} \\right\\}^{{z_{ki} }} } \\right],$$\n\nwhere $$z_{0i} = 1$$ if $$i \\notin \\textstyle{\\bigcup_{k = 1}^{K} {w_{k} }}$$, and otherwise as $$z_{0i} = 0$$. If we assume $$\\varvec{z}$$ to be randomly selected from a probability function $$h\\left( \\varvec{z} \\right)$$, the complete (full) log-likelihood function of $$\\varvec{\\psi}$$ becomes:\n\n\\begin{aligned} l\\left(\\varvec{\\psi}\\right) & = \\log L\\left(\\varvec{\\psi}\\right) = \\log \\left[ {\\mathop \\prod \\limits_{i = 1}^{m} \\mathop \\prod \\limits_{k = 0}^{K} \\left\\{ {f\\left( {y_{i} , z_{ki} |\\mu_{i}^{0} ,\\varvec{ \\psi }} \\right)} \\right\\}^{{z_{ki} }} } \\right]\\\\ & = l\\left( {\\varvec{\\psi}|\\varvec{z}} \\right) + \\log \\left\\{ {h\\left( \\varvec{z} \\right)} \\right\\} \\end{aligned}\n\nwhere $$L\\left(\\varvec{\\psi}\\right)$$ is the likelihood function of $$\\varvec{\\psi}$$.\n\n### Information criterion for selecting an appropriate $$\\varvec{K}$$\n\nMultiple-cluster model (1) suggests that the problem of detecting multiple clusters can be approached as a model selection problem to find an appropriate number of clusters, $$K\\left( { \\le K_{max} } \\right)$$. We propose a new information criterion that chooses $$K$$ in favor of the maximum marginal likelihood, $$ML\\left( {\\varvec{y},\\varvec{z}} \\right) = \\smallint \\exp \\{ \\log L(\\varvec{\\psi})\\} g\\left(\\varvec{\\psi}\\right)d\\varvec{\\psi}$$, where $$g\\left(\\varvec{\\psi}\\right)$$ is a prior probability function of parameter $$\\varvec{\\psi}$$. This can be achieved as follows. 
Applying Taylor expansion and Laplace approximations to the marginal likelihood function, it can be approximated as\n\n\\begin{aligned} - 2\\log ML\\left( {\\varvec{y},\\varvec{z}} \\right) & \\approx - 2\\mathop \\sum \\limits_{{{\\text{i}} = 1}}^{m} \\mathop \\sum \\limits_{k = 0}^{K} z_{ki} \\left\\{ {\\log f\\left( {y_{i} |\\mu_{i}^{0} ,\\varvec{ z},\\hat{\\varvec{\\psi }}} \\right)} \\right\\} - 2\\log \\left( {h\\left( \\varvec{z} \\right)} \\right) \\\\ & \\quad + q\\log m + \\log \\left| {J\\left( {\\hat{\\varvec{\\psi }}} \\right)} \\right| - q\\log \\left( {2\\pi } \\right) - 2\\log \\left( {g\\left( {\\hat{\\varvec{\\psi }}} \\right)} \\right) \\\\ \\end{aligned}\n\nwhere $$\\hat{\\varvec{\\psi }}$$ is the maximum likelihood estimator of $$\\varvec{\\psi}$$,\n\n$$J\\left( {\\hat{\\varvec{\\psi }}} \\right) = - \\frac{1}{m}\\frac{{\\partial^{2} l\\left( {\\varvec{\\psi}|\\varvec{z}} \\right)}}{{\\partial\\varvec{\\psi}\\partial \\varvec{\\psi^{\\prime}}}}{\\bigg |}_{{\\varvec{\\psi}= \\hat{\\varvec{\\psi }}}}$$\n\nand $$q = K + 1$$. The model evaluation criterion can then be obtained by eliminating terms with an order less than $$O\\left( 1 \\right)$$ with respect to the large sample size $$m$$; that is,\n\n$$C\\left( K \\right) = - 2l\\left( {\\hat{\\varvec{\\psi }} |\\varvec{z}} \\right) - 2\\log \\left( {h\\left( \\varvec{z} \\right)} \\right) + \\left( {K + 1} \\right)\\log m, \\quad \\left( {K \\ge 1} \\right).$$\n(2)\n\nTo select an appropriate number of clusters, $$K$$, we define a relative difference statistic based on criterion $$C\\left( K \\right)$$ as\n\n$$RDC\\left( K \\right) = \\left( {C_{0} - C\\left( K \\right)} \\right)/C_{0} ,$$\n\nwhere $$C_{0} = C\\left( 0 \\right)$$, the criterion under the null model. Appropriate multiple clusters are selected from the set of candidates $$\\tilde{\\varvec{w}} = \\left( {w_{1} , w_{2} , \\ldots , w_{K} } \\right)$$ with respect to $$\\mathop {\\hbox{max} }\\limits_{K} RDC\\left( K \\right)$$.\n\nFor the calculation of the proposed criterion (2), the probability function $$h\\left( \\varvec{z} \\right)$$ must be specified. We recommend $$h\\left( \\varvec{z} \\right) = \\left( {1/m} \\right)^{K}$$ as an approximation of the probability of selecting locations $$\\varvec{w}$$ given the fixed windows size, shape, and direction, when the window size is relatively very small, $$\\# \\{ i |i \\in \\varvec{w}\\} \\ll m$$, with respect to the whole data size $$m$$. Thus, a cluster selection criterion is now given as\n\n$$C\\left( K \\right) = - 2l\\left( {\\hat{\\varvec{\\psi }} |\\varvec{z}} \\right) + \\left( {3K + 1} \\right)\\log m, \\quad \\left( {K \\ge 1} \\right).$$\n\n### Statistical significance of overall clusters\n\nThe Monte Carlo hypothesis testing procedure evaluates the statistical significance of appropriate models in the same manner as the standard scan statistic. Under the null hypothesis, a large number of random datasets are generated; however, for each of these, $$\\mathop {\\hbox{max} }\\limits_{K} RDC\\left( K \\right)$$ is instead calculated as a test statistic (see details ).\n\n### Candidates of multiple clusters $$\\varvec{w}$$\n\nFor the multiple-cluster model (1), candidate clusters, $$\\varvec{z}$$, i.e., $$\\varvec{w}$$ among a large number of combinations of sets in $${\\mathbf{\\mathcal{W}}}$$, must be chosen in advance. 
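As an illustration of how the selection step could be carried out (again a sketch, not the authors' code), the criterion in its simplified form C(K) = -2 l(psi_hat | z) + (3K + 1) log m and the relative difference RDC(K) can be computed directly from the maximised log-likelihoods of the fitted models; reading C(0) off the general (K + 1) log m penalty is an assumption of this sketch.

```python
import numpy as np

def criterion_C(llf, K, m):
    """C(K) = -2 * maximised log-likelihood + (3K + 1) * log m, for K >= 1."""
    return -2.0 * llf + (3 * K + 1) * np.log(m)

def select_K(llf_by_K, llf_null, m):
    """Return the K maximising RDC(K) = (C0 - C(K)) / C0.

    llf_by_K : dict {K: maximised log-likelihood of the K-cluster model}, K >= 1
    llf_null : maximised log-likelihood of the null (no-cluster) model
    """
    C0 = -2.0 * llf_null + np.log(m)   # assumed reading of C(0) under the null model
    rdc = {K: (C0 - criterion_C(llf, K, m)) / C0 for K, llf in llf_by_K.items()}
    best_K = max(rdc, key=rdc.get)
    return best_K, rdc
```

The overall p-value would then come from repeating the same computation on data simulated under the null and comparing the observed max over K of RDC(K) against that reference distribution, as in the Monte Carlo procedure described above.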
Using an SCP method, namely the flexibly shaped scan statistic, we sequentially selected candidate clusters $$w_{1}^{*} , w_{2}^{*} , \\ldots , w_{{K_{max} }}^{*}$$ up to the predefined maximum number $$K_{max}$$. While the single cluster detection procedure is iteratively applied, the cluster of the first choice, $$w_{1}^{*}$$, is often called the “primary” cluster, with the remaining $$w_{2}^{*} , w_{3}^{*} , \\ldots , w_{{K_{max} }}^{*}$$ referred to as “secondary” clusters. Note that $$K_{max} = 1$$ corresponds to the detection of only the primary cluster. In practice, we predefine the maximum number of candidates (e.g., $$K_{max} = 10, 20, \\ldots$$) or a $$p$$-value threshold, $$p_{s}$$ (e.g., $$p_{s} < 0.5, 0.8, 1.0$$) derived as the “secondary cluster” by SCPs, as there are no overlaps among the candidate clusters. The $$p$$-value for each cluster selected by an SCP is often calculated by the Monte Carlo hypothesis testing procedure. The selection of candidates may differ depending on the scanning method used (e.g., circular, flexible, and so forth).\n\n### An illustrative example\n\nAs an illustrative example, we applied the method to the COPD data in England ($$m = 324$$ sub-regions) for 2010, shown in Fig. 1. A comparison of our proposed method and conventional SCP revealed a distinctive difference in the number of detected clusters. The proposed method tended to detect more clusters compared to the conventional SCP approach, as shown in Fig. 3 and Table 2. Note that some clusters are next to each other as if they are in the same single cluster, for example $$w_{1}^{*} , w_{2}^{*} ,w_{11}^{*} , w_{12}^{*}$$; however, they are not because their RRs differ. In the analysis, the candidate clusters $$\\varvec{w}$$ were chosen by the restricted flexible shaped scan statistic with the maximum number of the area as 20. The $$p$$-values were calculated by the Monte Carlo hypothesis testing procedure with 9999 replications for each cluster selected by the SCP.\n\nOur proposed method suggested a total of 15 clusters $$\\left( {w_{1}^{*} , w_{2}^{*} , \\ldots , w_{15}^{*} } \\right)$$ with the $$p$$-value of the multiple cluster model as $$p_{M} =$$ 0.0001 ($$C\\left( {15} \\right) =$$ 2926.92 and $$RDC\\left( {15} \\right) =$$ 0.2242, where $$C_{0} =$$ 3724.78). In contrast, the conventional SCP detected $$K =$$ 10 clusters $$\\left( {w_{1}^{*} , w_{2}^{*} , \\ldots , w_{10}^{*} } \\right)$$ at a significance level of $$p_{s} <$$ 0.05 (Table 2). Although clusters $$w_{11}^{*} , w_{12}^{*} , \\ldots , w_{15}^{*}$$ with $$p_{s} >$$ 0.05 were excluded by the conventional SCP approach, the proposed method suggested that they should be included, as the $$p$$-value of the multiple cluster model was $$p_{M} =$$ 0.0001.\n\n### Simulation study\n\nTable 3 shows the number of detected significant multiple clusters $$K$$ of the SCP and proposed procedure along with the total power among 1000 datasets for each scenario. Note that the RRs of S5 were set to resemble those of the first five clusters in the example data (Table 1). Table 4 shows the sensitivity (Sen) and positive predictive value (PPV) of regions detected as significant, as well as their averages and number of detections with Sen = 1 and PPV = 1 among the 1000 datasets.\n\nThe total powers for both procedures were very similar, except for S4. However, the SCP tended to detect a smaller number of clusters compared to the proposed method. The sensitivity of the SCP was lower than that of the proposed procedure. 
Notably, for weak clusters with low RRs, RR = 1.3 (S3), RR = 1.2 (S4), and mixed RRs (S5), the SCP failed to detect the five clusters with a higher power. Therefore, the sensitivity of the SCP and the probability of Sen = 1 for these scenarios were much lower than that of the proposed procedure.\n\nIn contrast, the proposed procedure tended to detect more clusters than the actual value. The PPVs of the proposed procedure were slightly lower than those of the SCP approach, but its sensitivity appeared to be higher. These simulation results suggest that the proposed procedure can detect regions within the assumed clusters with RR > 1.0 accurately with slightly extended regions. A similar performance was observed in scenarios S6 and S7 for which a single cluster was assumed.\n\n## Discussion\n\nSeveral studies have been conducted to detect multiple clusters using scan statistics other than SCPs. For example, Zhang et al. proposed an adjusted $$p$$-value for a sequential detection approach, recursively locating clusters based upon all previously detected clusters. Although this method performs better with a higher power than conventional SCPs, the relative sizes of the adjusted $$p$$-values for secondary clusters are irrelevant to the order in which the clusters are sequentially detected; thus, the $$k$$-th cluster may have a smaller $$p$$-value than the previously detected $$\\left( {k - 1} \\right)$$-th cluster. Additionally, the procedure can only evaluate the significance of individual clusters but not of multiple clusters as a whole.\n\nIn the spatial context, a multiple cluster detection procedure using spatial scan statistics was described in [24, 25]. However, this method cannot assess the significance of multiple clusters as a whole. A generalized linear mixed model with Moran’s $$I$$ statistic and stepwise procedure allows for multiple cluster evaluation, accounting for random spatial effects. The power of the approach is lower than that of the standard scan statistic . A recent study suggested a quasi-likelihood approach that deals with spatial correlation. However, quasi-likelihood suffers from the multiple testing problem in selecting multiple clusters, as the approach does not provide a full-likelihood. Our approach avoids this issue by utilizing the model selection framework with the proposed information criterion based on the full-likelihood principle.\n\nWe proposed an information criterion for selecting an appropriate number of clusters. The information criterion approach is based on the framework proposed by Takahashi and Shimadzu for detecting multiple temporal-clusters. The idea of model selection has been used in more general statistical modeling contexts; for instance, Akaike information criterion (AIC) and Bayesian information criterion (BIC) are used to estimate the number of multiple clusters [31, 32] and finite mixtures . However, in situations where large datasets are used, conventional information criteria, including $$- 2$$ log likelihood, AIC, and BIC, perform poorly and cannot accurately select an appropriate number of clusters. The proposed criterion is derived from the marginal likelihood of the multiple cluster model and accounts for the probability distribution of selected candidate clusters. 
Our examples and simulations clearly demonstrate that the proposed criteria perform well for identifying appropriate multiple clusters.\n\nFigure 4 shows the comparison of the proposed criterion $$C$$ with other conventional criteria: $$- 2\\log L$$, AIC, and BIC, at $$K$$ ($$K = 0, 1, \\ldots , 20$$). Although some inflection points were observed at around $$K =$$ 11, the proposed criterion $$C$$ attained a minimum value, i.e., the maximum value of $$RDC$$, at $$K =$$ 15. In contrast, other criteria monotonically decrease and do not reach minimum values for $$K \\le 20$$.\n\nA more conservative $$p$$-value is calculated by the secondary procedure as compared to the primary cluster procedure [23, 34]. Thus, the former identifies fewer significant secondary clusters relative to true clusters. This was observed in our simulation study, while the proposed procedure tends to detect more clusters, contrasting the reported result in the purely temporal setting , although this may largely depend on the scenario assumed.\n\nOur case study and simulation studies demonstrate that the proposed framework performs well, although some limitations remain. First, multiple cluster detection depends on the scanning method initially used, and we adopted the conventional secondary procedure to pre-select candidate clusters for a GLM. This implies that choosing the optimal scan statistic with high detection accuracy is essential. It requires further investigations on various detection test statistics as well as other scanning methods, including the union cluster situation. Second, the spatial dependence structure must be considered for better cluster detection. These methods will provide insight for future research.\n\n## Conclusion\n\nWe proposed a new statistical framework that combines the scan statistic and GLMs to simultaneously detect and evaluate multiple disease clusters in a large study space. The framework can determine whether the presence of a specific disease or incident is entirely random over geographical space. We also developed a new information criterion to select the appropriate number of clusters in the spatial context. Together with these approaches, the proposed framework enables the estimation and evaluation of multiple clusters with high detection power, as demonstrated in our simulation study. Further, a distinctive feature of our simultaneous detection framework is that it can calculate the $$p$$-value of detected multiple-clusters as a whole, as opposed to one at a time, as in conventional SCPs.\n\n## Availability of data and materials\n\nThe data for the risk of hospital admission for chronic obstructive pulmonary disease in the UK was taken from the book “Spatio-Temporal Methods in Environmental Epidemiology” by Shaddick and Zidek (from the authors’ website: http://empslocal.ex.ac.uk/people/staff/gs454/)\n\n## References\n\n1. Snow J. On the mode of communication of cholera. 2nd ed. London: John Churchill; 1855.\n\n2. Tango T. Statistical methods for disease clustering. Berlin: Springer; 2010.\n\n3. Waller LA. Discussion: statistical cluster detection, epidemiologic interpretation, and public health policy. Stat Public Policy. 2015;2:1–8.\n\n4. Besag J, Newell J. The detection of clusters in rare diseases. J R Stat Soc Series A. 1991;154:143–55.\n\n5. Kulldorff M. Statistical methods for spatial epidemiology: tests for randomness. In: Gatrell A, Loytonen M, editors. GIS and Health. New York: Taylor & Francis; 1998. p. 49–62.\n\n6. Moran PAP. 
Notes on continuous stochastic phenomena. Biometrika. 1950;37:17–23.\n\n7. Whitemore AS, Friend N, Brown BW, et al. A test to detect clusters of disease. Biometrika. 1987;74:631–5.\n\n8. Oden N. Adjusting Moran’s I for population density. Stat Med. 1995;14:17–26.\n\n9. Tango T. A class of tests for detecting ‘general’ and ‘focused’ clustering of rare diseases. Stat Med. 1995;14:2323–34.\n\n10. Rogerson PA. The detection of clusters using a spatial version of the Chi square goodness-of-fit statistic. Geogr Anal. 1999;31:130–47.\n\n11. Bonetti M, Pagano M. The interpoint distance distribution as a descriptor of point patterns, with an application to spatial disease clustering. Stat Med. 2005;24:753–73.\n\n12. Turnbull B, Iwano E, Burnett W, et al. Monitoring for clusters of disease: application to leukemia incidence in upstate New York. Am J Epidemiol. 1990;132:136–43.\n\n13. Kulldorff M, Nagarwalla N. Spatial disease clusters: detection and inference. Stat Med. 1995;14:799–810.\n\n14. Kulldorff M. A spatial scan statistic. Commun Stat Theory Methods. 1997;26:1481–96.\n\n15. Tango T. A test for spatial disease clustering adjusted for multiple testing. Stat Med. 2000;19:191–204.\n\n16. Kulldorff M. Information Management Services, Inc. SaTScan v9.6: Software for the spatial and space-time scan statistics. 2018. http://www.satscan.org/. Accessed 15 May 2020.\n\n17. Kulldorff M. Scan statistics for geographical disease surveillance: an overview. In: Lawson AB, Kleinman K, editors. Spatial and Syndromic Surveillance for Public Health. Wiley: New York; 2005. p. 115–31.\n\n18. Tango T, Takahashi K. A flexibly shaped spatial scan statistic for detecting clusters. Int J Health Geogr. 2005;4:11.\n\n19. Patil GP. Upper level set scan statistics for detecting arbitrarily shaped hot-spots. Environ Ecol Stat. 2004;11:183–97.\n\n20. Assuncao R, Costa M, Tavares A, et al. Fast detection of arbitrary shaped clusters. Stat Med. 2006;25:723–42.\n\n21. Takahashi K, Yokoyama T, Tango T. FleXScan v3.1: Software for the Flexible Scan Statistic. 2010. https://sites.google.com/site/flexscansoftware/. Accessed 15 May 2020.\n\n22. Tango T, Takahashi K. A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters. Stat Med. 2011;31:4207–18.\n\n23. Zhang Z, Assuncao R, Kulldorff M. Spatial scan statistics adjusted for multiple clusters. J Prob Stat. 2010; Article ID 642379.\n\n24. Li XZ, Wang JF, Yang WZ, et al. A spatial scan statistic for multiple clusters. Math Biosci. 2011;233:135–42.\n\n25. Wan Y, Pei T, Zhou C, et al. ACOMCD: a multiple cluster detection algorithm based on the spatial scan statistic and ant colony optimization. Comput Stat Data Anal. 2012;56:283–96.\n\n26. Takahashi K, Shimadzu H. Multiple-cluster detection test for purely temporal disease clustering: integration of scan statistics and generalized linear models. PLoS ONE. 2018;13(11):e0207821.\n\n27. Sharddick G, Zidek JV. Spatio-temporal methods in environmental epidemiology. New York: CRC Press; 2016.\n\n28. Konishi S, Kitagawa G. Information criteria and statistical modeling. New York: Springer; 2008.\n\n29. Zhang T, Lin G. Cluster detection based on spatial associations and iterated residuals in generalized linear mixed models. Biometrics. 2009;65:353–60.\n\n30. Lin PS, Kung YH, Clayton M. Spatial scan statistics for detection of multiple clusters with arbitrary shapes. Biometrics. 2016;72:1226–34.\n\n31. Molinari N, Bonaldi C, Daures JP. Multiple temporal cluster detection. Biometrics. 
2001;57:277–583.\n\n32. Xie M, Sun Q, Naus J. A latent model to detect multiple clusters of varying sizes. Biometrics. 2009;65:1011–20.\n\n33. McLachlan G, Peel D. Finite mixture models. New York: Wiley; 2000.\n\n34. Kulldorff M, Feuer EJ, Miller BA, Freedman LS. Breast cancer clusters in the Northeast United States: a geographic analysis. Am J Epidemiol. 1997;146:161–70.\n\n## Funding\n\nThe work was partially supported by JSPS KAKENHI Grant Numbers: JP17K00046 and JP19K21569.\n\n## Author information\n\nAuthors\n\n### Contributions\n\nKT undertook the data analysis. KT and HS equally contributed to the theoretical development and writing of the manuscript. Both authors read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to Kunihiko Takahashi.\n\n## Ethics declarations\n\nNot applicable.\n\nNot applicable.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.", null, "" ]
[ null, "https://ij-healthgeographics.biomedcentral.com/track/article/10.1186/s12942-020-00228-y", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8786572,"math_prob":0.9911524,"size":34077,"snap":"2023-40-2023-50","text_gpt3_token_len":7942,"char_repetition_ratio":0.15813108,"word_repetition_ratio":0.040159047,"special_character_ratio":0.24447575,"punctuation_ratio":0.13479675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959594,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T14:29:52Z\",\"WARC-Record-ID\":\"<urn:uuid:00666142-362b-4ada-9a54-846c1573a6f9>\",\"Content-Length\":\"355270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26544f95-e508-4382-9206-792193f9af3f>\",\"WARC-Concurrent-To\":\"<urn:uuid:0365736a-f546-482c-8421-dd85f7405f64>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://ij-healthgeographics.biomedcentral.com/articles/10.1186/s12942-020-00228-y\",\"WARC-Payload-Digest\":\"sha1:IWBPYXLJNGKON3RXIHZLK4TYJLMLWLPG\",\"WARC-Block-Digest\":\"sha1:6M5RGJG3JCU3EHCPO3PC3WAQGLEBHWB7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506420.84_warc_CC-MAIN-20230922134342-20230922164342-00706.warc.gz\"}"}
https://answers.opencv.org/question/14434/how-to-read-pixels-from-mnist-digit-database-and-create-the-iplimage/
[ "Ask Your Question\n\n# How to read pixels from MNIST digit database and create the iplimage", null, "hi, i am involved with handwritten OCR application. I use MNIST digit database for training process here. I use following code for read pixels from the database and re-create the image. programs doesnt give any error but it gives meaningless image(totally black images and unclear pixels patterns) as output. can someone explain the reason for that? plz help\n\nhere is my code\n\nint reverseInt(int i) {\nunsigned char c1, c2, c3, c4;\nc1 = i & 255;\nc2 = (i >> 8) & 255;\nc3 = (i >> 16) & 255;\nc4 = (i >> 24) & 255;\nreturn ((int)c1 << 24) + ((int)c2 << 16) + ((int)c3 << 8) + c4;\n}\n\nvoid create_image(CvSize size, int channels, unsigned char* data, int imagenumber) {\nstring imgname; ostringstream imgstrm;string fullpath;\nimgstrm << imagenumber;\nimgname=imgstrm.str();\nfullpath=\"D:\\\\\"+imgname+\".jpg\";\n\nIplImage *imghead=cvCreateImageHeader(size, IPL_DEPTH_16S, channels);\nimghead->imageData=(char *)data;\ncvSaveImage(fullpath.c_str(),imghead);\n}\n\nint main(){\nifstream file (\"D:\\\\train-images.idx3-ubyte\",ios::binary);\nif (file.is_open())\n{\nint magic_number=0; int number_of_images=0;int r; int c;\nint n_rows=0; int n_cols=0;CvSize size;unsigned char temp=0;\n\nfile.read((char*)&magic_number,sizeof(magic_number));\nmagic_number= reverseInt(magic_number);\n\nfile.read((char*)&number_of_images,sizeof(number_of_images));\nnumber_of_images= reverseInt(number_of_images);\n\nfile.read((char*)&n_rows,sizeof(n_rows));\nn_rows= reverseInt(n_rows);\nfile.read((char*)&n_cols,sizeof(n_cols));\nn_cols= reverseInt(n_cols);\n\nfor(int i=0;i<number_of_images;++i)\n{\nfor(r=0;r<n_rows;++r)\n{\nfor(c=0;c<n_cols;++c)\n{\nfile.read((char*)&temp,sizeof(temp));\n}\n}\nsize.height=r;size.width=c;\ncreate_image(size,3, &temp, i);\n}\n}\nreturn 0;\n}\n\n\nand this is one of result image", null, "edit retag close merge delete\n\n## 1 answer\n\nSort by » oldest newest most voted", null, "i have done mistake here. there must be a variable to keep image data. temp is only for keep information about single pixel. and also MNIST database has one channel images. i have define it as 3. here is the working code ans if someone has any thing about this, please comment here.\n\nvoid create_image(CvSize size, int channels, unsigned char data, int imagenumber) {\nstring imgname; ostringstream imgstrm;string fullpath;\nimgstrm << imagenumber;\nimgname=imgstrm.str();\nfullpath=\"D:\\\\MNIST\\\\\"+imgname+\".jpg\";\n\nIplImage *imghead=cvCreateImageHeader(size, IPL_DEPTH_8U, channels);\ncvSetData(imghead, data, size.width);\ncvSaveImage(fullpath.c_str(),imghead);\n}\n\n\nand in the main function. it must be as follows.\n\n unsigned char arr;\n\nfor(int i=0;i<1000;++i)\n{\nfor(r=0;r<n_rows;++r)\n{\nfor(c=0;c<n_cols;++c)\n{\nfile.read((char*)&temp,sizeof(temp));\narr[r][c]= temp;\n}\n}\nsize.height=r;size.width=c;\ncreate_image(size,1,arr, i);\n\n}\n\nmore\n\nOfficial site\n\nGitHub\n\nWiki\n\nDocumentation\n\n## Stats\n\nAsked: 2013-06-01 05:11:53 -0500\n\nSeen: 2,904 times\n\nLast updated: Jun 01 '13" ]
[ null, "https://answers.opencv.org/upfiles/avatars/Heshan%20Sandeepa/resized/32/jammer-closeup.jpg", null, "https://answers.opencv.org/upfiles/13700814047011695.jpg", null, "https://answers.opencv.org/upfiles/avatars/Heshan%20Sandeepa/resized/32/jammer-closeup.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52200466,"math_prob":0.9539954,"size":1796,"snap":"2021-21-2021-25","text_gpt3_token_len":525,"char_repetition_ratio":0.125,"word_repetition_ratio":0.0,"special_character_ratio":0.3134744,"punctuation_ratio":0.23579545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9759834,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-22T00:07:01Z\",\"WARC-Record-ID\":\"<urn:uuid:788cd3ff-1984-4c47-9638-9b91ab605b41>\",\"Content-Length\":\"55170\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa70cda9-f005-418a-ae9f-b26a74db85a9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7df2c802-f07e-4635-a064-aae3aa257961>\",\"WARC-IP-Address\":\"5.9.49.245\",\"WARC-Target-URI\":\"https://answers.opencv.org/question/14434/how-to-read-pixels-from-mnist-digit-database-and-create-the-iplimage/\",\"WARC-Payload-Digest\":\"sha1:TRDTJXEQ7U6B6SRW2IMU4HM7FKJGRZ3D\",\"WARC-Block-Digest\":\"sha1:DOGKZ35HSQXJS2IYQVG5Q6PV7XTDIPCR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488504838.98_warc_CC-MAIN-20210621212241-20210622002241-00152.warc.gz\"}"}
https://opendev.org/openstack/tripleo-heat-templates/commit/fc50cfd2e49ad26d9056fcfbb3c6dc5db611f7dd
[ "### Close if block in dual bonds\n\n```Fix the same issue found in:\nhttps://review.opendev.org/c/openstack/tripleo-ansible/+/781102\n\n `@ -46,6 +46,7 @@ DUAL_MIN_VIABLE_MTU_HEADER = (` ` \"{# largest MTU. #}\\n\"` ` \"{% else %}\\n\"` ` \"{{ mtu_ctlplane_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}\\n\" # noqa` ` \"{%- endif %}\\n\"` ` \"{%- endfor %}\\n\"` ` \"{% set min_viable_mtu_ctlplane = mtu_ctlplane_list | max %}\\n\"` ` \"{% set min_viable_mtu_dataplane = mtu_dataplane_list | max %}\\n\"`\n `@ -16,6 +16,7 @@` `{# largest MTU. #}` `{% else %}` `{{ mtu_ctlplane_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}` `{%- endif %}` `{%- endfor %}` `{% set min_viable_mtu_ctlplane = mtu_ctlplane_list | max %}` `{% set min_viable_mtu_dataplane = mtu_dataplane_list | max %}`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53522676,"math_prob":0.6743749,"size":261,"snap":"2022-40-2023-06","text_gpt3_token_len":91,"char_repetition_ratio":0.031128405,"word_repetition_ratio":0.0,"special_character_ratio":0.35632184,"punctuation_ratio":0.12195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96012014,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T21:28:07Z\",\"WARC-Record-ID\":\"<urn:uuid:41c66dd2-d957-47e0-bd0e-5aaec3b90702>\",\"Content-Length\":\"43781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08425c3f-1ef2-485a-a444-3ddbb64c293e>\",\"WARC-Concurrent-To\":\"<urn:uuid:0becdc28-60a7-4e61-b8f0-d1aa675a2e63>\",\"WARC-IP-Address\":\"38.108.68.66\",\"WARC-Target-URI\":\"https://opendev.org/openstack/tripleo-heat-templates/commit/fc50cfd2e49ad26d9056fcfbb3c6dc5db611f7dd\",\"WARC-Payload-Digest\":\"sha1:ZPKQRH44MHEHRAZ3WKYATTFLPLXEWPKM\",\"WARC-Block-Digest\":\"sha1:4PXNMMJ6LAW6Q27FQVUSHEX6IXACTEEW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499890.39_warc_CC-MAIN-20230131190543-20230131220543-00212.warc.gz\"}"}
http://zijian-lv.com/problem/4
[ "# #4. 括号匹配\n\n1、空串 $“”$ 为合法序列。\n\n2、如果串 $A、B$ 均为合法序列,则 $C=A+B$(即将 $B$ 拼在 $A$ 后面)也是一个合法序列。\n\n3、如果串 $A$ 为合法序列,则串 $C=“(”+A+“)”$(即在 $A$ 的外面套一层括号)也是一个合法序列。\n\n### 样例一\n\n#### input\n\n6\n)))(((\n\n\n#### output\n\n2\n\n\n#### explanation\n\n$) ) ) ( ( ( → ( ) ) ( ( ) → ( ) ( ) ( )$" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9031588,"math_prob":0.9999418,"size":668,"snap":"2021-43-2021-49","text_gpt3_token_len":474,"char_repetition_ratio":0.14307229,"word_repetition_ratio":0.0,"special_character_ratio":0.53592813,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977837,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T19:55:00Z\",\"WARC-Record-ID\":\"<urn:uuid:54fe2c53-bcb3-440c-9c93-524dda082108>\",\"Content-Length\":\"12533\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e49c3a5-28ae-4211-bdb7-92cc1dbf5bb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:8792d3a1-45c8-4b28-9755-0ced79302fb5>\",\"WARC-IP-Address\":\"39.107.24.117\",\"WARC-Target-URI\":\"http://zijian-lv.com/problem/4\",\"WARC-Payload-Digest\":\"sha1:WSTYYJSNWOWTVMH7OEL3UJ7KCH5GRBTN\",\"WARC-Block-Digest\":\"sha1:ACLU5EUSSVE4R4ET4QTEVBKKVVNUDSOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585768.3_warc_CC-MAIN-20211023193319-20211023223319-00595.warc.gz\"}"}
https://goprep.co/q35-two-different-dice-are-thrown-together-find-the-i-1nlbfm
[ "Q. 355.0( 2 Votes )\n\n# Two different dic\n\nAnswer :\n\nWe know that, Number of ways in which two dice can be thrown = 6 × 6\n\n= 36 ways\n\n(i) A number of ways we are getting the number greater than 3 on each die:\n\n(4, 4), (4, 5), (4, 6), (5, 4), (5, 5), (5, 6), (6, 4), (6, 5), (6, 6) = 9 ways\n\nProbability (Number > 3 on each die) =", null, "(ii) A number of ways to get a total of 6 or 7:\n\n(2, 4), (4, 2), (3, 3), (1, 6) (6, 1) (2, 5)\n\n(1, 5), (5, 1), (5, 2), (3, 4), (4, 3) = 11 ways", null, "Rate this question :\n\nHow useful is this solution?\nWe strive to provide quality solutions. Please rate us to serve you better.\nTry our Mini CourseMaster Important Topics in 7 DaysLearn from IITians, NITians, Doctors & Academic Experts\nDedicated counsellor for each student\n24X7 Doubt Resolution\nDaily Report Card\nDetailed Performance Evaluation", null, "view all courses", null, "" ]
[ null, "https://gradeup-question-images.grdp.co/liveData/PROJ12788/15180883512659.png", null, "https://gradeup-question-images.grdp.co/liveData/PROJ12788/1518088352030916.png", null, "https://grdp.co/cdn-cgi/image/height=128,quality=80,f=auto/https://gs-post-images.grdp.co/2020/8/group-7-3x-img1597928525711-15.png-rs-high-webp.png", null, "https://gs-post-images.grdp.co/2020/8/group-img1597139979159-33.png-rs-high-webp.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8238889,"math_prob":0.9915227,"size":821,"snap":"2021-04-2021-17","text_gpt3_token_len":298,"char_repetition_ratio":0.12729499,"word_repetition_ratio":0.0,"special_character_ratio":0.40073082,"punctuation_ratio":0.23039216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96557707,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T16:11:19Z\",\"WARC-Record-ID\":\"<urn:uuid:947e9821-3eb2-499b-95cb-7baf8415c1eb>\",\"Content-Length\":\"164876\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af11fb58-3ca4-4f0c-81b1-6024b2471a1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:10bed1a4-3157-473d-a747-9f1139f26f9c>\",\"WARC-IP-Address\":\"104.18.25.35\",\"WARC-Target-URI\":\"https://goprep.co/q35-two-different-dice-are-thrown-together-find-the-i-1nlbfm\",\"WARC-Payload-Digest\":\"sha1:5O3F2NAXYQG6TMXPYGHGLQ7VRZ4G3GQG\",\"WARC-Block-Digest\":\"sha1:IGBZ5A2ZAG3VDHCANG45GJYVL3HUIDMK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038887646.69_warc_CC-MAIN-20210419142428-20210419172428-00394.warc.gz\"}"}
https://www.scivision.dev/matplotlib-constrained-layout-tight-layout/
[ "## Use Matplotlib constrained_layout instead of tight_layout\n\nMany legacy tutorials and examples for Matplotlib use tight_layout(). A key problem with tight_layout() and subplots is that tight_layout destroys suptitle(). Matplotlib constrained_layout is still being improved and is recommended over tight_layout().\n\nTo make figures with subplots and suptitle work better, use `figure(constrained_layout=True)`:\n\n``````from matplotlib.pyplot import figure, show\n\nfg = figure(constrained_layout=True)\nax = fg.subplots(3, 1)\n\nfor i in range(3):\nax[i].plot(range(5+5*i))\n\nfg.suptitle('lots of lines')\n\nshow()\n``````\n\nThis plot is much superior to `fg.tight_layout()`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6334865,"math_prob":0.8919741,"size":635,"snap":"2019-51-2020-05","text_gpt3_token_len":142,"char_repetition_ratio":0.17908083,"word_repetition_ratio":0.0,"special_character_ratio":0.22204724,"punctuation_ratio":0.123809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T08:43:48Z\",\"WARC-Record-ID\":\"<urn:uuid:36da28ee-4865-4cc2-af67-9ab52ac3ceef>\",\"Content-Length\":\"6233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:634b8396-9265-422f-b531-03ec15755115>\",\"WARC-Concurrent-To\":\"<urn:uuid:14428055-3522-4c8e-bc0a-5d2d18e9a295>\",\"WARC-IP-Address\":\"104.248.63.248\",\"WARC-Target-URI\":\"https://www.scivision.dev/matplotlib-constrained-layout-tight-layout/\",\"WARC-Payload-Digest\":\"sha1:EBYQS73TTC3NRQTGPXACVUVPFVFYJGXN\",\"WARC-Block-Digest\":\"sha1:IF3PJYWUR6HM4WR6O44RUYJE4LNVRUHP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250616186.38_warc_CC-MAIN-20200124070934-20200124095934-00181.warc.gz\"}"}
https://onlineitpark.net/0-5-as-a-fraction/
[ "Sunday, October 1, 2023\nHomeFraction0.5 as a Fraction\n\n# 0.5 as a Fraction\n\n##### 0.5 as a fraction equals 5/10 or 1/2\n\nSteps to convert 0.5 into a fraction.\n\nWrite 0.5 as0.51\n\nMultiply both the numerator and denominator by 10 for each digit after the decimal point.\n\n0.51=0.5 x 101 x 10=510\n\nAs a side note the whole number-integral part is: empty\nThe decimal part is: .5 = 5/10\nFull simple fraction breakdown: 50/100\n= 5/10\n= 1/2\n\nyou may also check\n\nfractionData\n0.03 as a Fraction0.03 as a Fraction\n0.55 as a Fraction0.55 as a Fraction\n.1875 as a Fraction.1875 as a Fraction\n0.45 as a Fraction0.45 as a Fraction\nPrevious article.43 as a Fraction\nNext article.625 as a Fraction\nRELATED ARTICLES" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88923746,"math_prob":0.9968476,"size":541,"snap":"2023-40-2023-50","text_gpt3_token_len":192,"char_repetition_ratio":0.26070765,"word_repetition_ratio":0.18367347,"special_character_ratio":0.39186692,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99871767,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T12:49:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ca04a55a-e3cc-4d3d-9e3b-0e5cfcdc0afc>\",\"Content-Length\":\"261737\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1be3f7b2-a0ad-47d9-9a19-1d68a1c7b705>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2577e87-cd12-47ec-a1de-83cd9500aaa9>\",\"WARC-IP-Address\":\"162.214.155.129\",\"WARC-Target-URI\":\"https://onlineitpark.net/0-5-as-a-fraction/\",\"WARC-Payload-Digest\":\"sha1:HLHA6PK7YQPRED3SNGFLHOUERZCHAHGI\",\"WARC-Block-Digest\":\"sha1:6LLXDROHTIGSC7PFUS4DYGY2LAIFDTMD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00159.warc.gz\"}"}
https://luckytoilet.wordpress.com/tag/rref/
[ "# Solving systems of linear equations in Haskell\n\nHaskell isn’t normally used for things like this, but it’s quite possible to solve systems of linear equations with Haskell.\n\nThere are already several libraries for doing this, and other more advanced matrix manipulating. But here, I’m going to start simple.\n\nIn mathematics, systems of linear equations are usually represented by an augmented matrix. A system of n linear equations would be represented by an augmented matrix with n rows and n+1 columns.\n\nFor example, we have this system of equations:", null, "$\\begin{array}{rrrcl} x &+2y &-z &=& -4 \\\\ 2x &+3y &-z &=& -11 \\\\ -2x &&-3z &=& 22 \\end{array}$\n\nThis would be represented as an augmented matrix:", null, "$\\left[ \\begin{array}{ccc|c} 1 & 2 & -1 & -4 \\\\ 2 & 3 & -1 & -11 \\\\ -2 & 0 & -3 & 22 \\end{array} \\right]$\n\nIn Haskell we represent this as a list of lists, like this:\n\n[ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]\n\n\nHere I’ll store each entry not as an integer, but as a floating point. You could also use Rational in Data.Ratio but both are fine for now.\n\nThe advantage of using Rational over Float is that sometimes you will end up with fractions that don’t work very well with floating point numbers. However, I’ve found that printing a list of lists of Rational types makes it difficult to read, unless you implement a custom show function for it.\n\nSo this is how we define our matrix types in Haskell:\n\ntype Row = [Float]\ntype Matrix = [Row]\n\n\nThe approach to solving this problem is rather simple. First we reduce whatever matrix we have to REF, or Row Echelon Form and then get the actual roots with some back substitution.\n\nThe algorithm used to transform a matrix to its Row Echelon Form is known as the Gaussian Elimination. Here’s what a matrix should look like after Gaussian Elimination (a", null, "$*$ represents any value):", null, "$\\left[ \\begin{array}{ccc|c} 1 & * & * & * \\\\ 0 & 1 & * & * \\\\ 0 & 0 & 1 & * \\end{array} \\right]$\n\nOur matrix should look like this after Gaussian Elimination:", null, "$\\left[ \\begin{array}{ccc|c} 1 & 2 & -1 & -4 \\\\ 0 & 1 & -1 & 3 \\\\ 0 & 0 & 1 & -2 \\end{array} \\right]$\n\nThe REF form is not unique, so that is only one of the possible valid outputs for the Gaussian Elimination.\n\nWhy do we want to have the matrix in REF form? A matrix in this form can easily be solved using back substitution. Consider this matrix as a series of linear equations, as we did before:", null, "$\\begin{array}{rrrcl} x &+2y &-z &=& -4 \\\\ &+y &-z &=& 3 \\\\ &&z &=& -2 \\end{array}$\n\nNow it would be very clear how to solve for the three variables.\n\n## The Gaussian Elimination Algorithm\n\nHere is a diagram of how Gaussian Elimination works. On each iteration, the element circled in green is considered the pivot element, while the elements enclosed in the red square are the ones we intend to remove (zero) in each iteration.", null, "Removing the red elements in the matrix is actually quite simple. Consider how you would eliminate", null, "$x$ in equation", null, "$B$ here:", null, "$\\begin{array}{lrrcl}(A) & x & +2y & = & 4 \\\\(B) & 2x & +y & = & 5 \\end{array}$\n\nProbably you would multiply equation", null, "$A$ by 2, giving", null, "$2x + 4y = 8$, then subtract", null, "$B$ from it, giving", null, "$3y=3$, eliminating", null, "$x$.\n\nWe can also write that as", null, "$B = 2A - B$. Basically to eliminate a variable, just multiply a row so it matches up, and subtract. 
This is middle school algebra.\n\nTo make things easier for us, we divide the row we are on so that the pivot is always 1. We do this now because we need them to be 1 anyways, and this avoids an unnecessary division in the next step.\n\nWe could, of course, not have the pivot always be 1, but we would have to do the divisions later when substituting to get the solutions. More on this later.\n\nSo to eliminate the variable under the pivot, multiply the whole row by that number. I have a picture to clarify:", null, "We simply repeat this for all elements under the pivot.\n\n### An edge case\n\nThis is where it gets a little bit tricky. What if the pivot is 0?\n\nWe have no way of making it 1 by any kind of multiplication. Further, we cannot eliminate any elements below the pivot. What do we do now?\n\nSimple. We swap the current row with any other row so that the pivot is not zero. Any row will do, so we'll just pick the first one that fits.", null, "If there is not a single element below the pivot that is not zero, the matrix is either under-determined or singular; in either case it is unsolvable.\n\nHere is my Haskell code on what I just covered:\n\ngaussianReduce :: Matrix -> Matrix\ngaussianReduce matrix = fixlastrow $ foldl reduceRow matrix [0..length matrix-1] where\n\n  --swaps element at position a with element at position b.\n  swap xs a b\n    | a > b = swap xs b a\n    | a == b = xs\n    | a < b = let (p1,p2) = splitAt a xs\n                  (p3,p4) = splitAt (b-a-1) (tail p2)\n              in p1 ++ [xs!!b] ++ p3 ++ [xs!!a] ++ (tail p4)\n\n  reduceRow matrix1 r = let\n    --first non-zero element on or below (r,r).\n    firstnonzero = head $ filter (\x -> matrix1 !! x !! r /= 0) [r..length matrix1-1]\n\n    --matrix with row swapped (if needed)\n    matrix2 = swap matrix1 r firstnonzero\n\n    --row we're working with\n    row = matrix2 !! r\n\n    --make it have 1 as the leading coefficient\n    row1 = map (\x -> x / (row !! r)) row\n\n    --subtract nr from row1 while multiplying\n    subrow nr = let k = nr!!r in zipWith (\a b -> k*a - b) row1 nr\n\n    --apply subrow to all rows below\n    nextrows = map subrow $ drop (r+1) matrix2\n\n    --concat the lists and repeat\n    in take r matrix2 ++ [row1] ++ nextrows\n\n  fixlastrow matrix' = let\n    a = init matrix'; row = last matrix'; z = last row; nz = last (init row)\n    in a ++ [init (init row) ++ [1, z / nz]]\n\nEdit: There was a bug in the above code, found by Alan Zimmerman. I think it's been fixed.\n\nThis may be a bit difficult to read because there is no syntax highlighting and the code is cut off. I'll provide a link to the full source code at the end.\n\nAdmittedly Haskell may not have been the best language to implement this algorithm this particular way because there is so much state changing. Any language that allows mutable state would probably perform better than this code.\n\nNotice that at the end, the last row does not get divided. The fixlastrow function corrects this problem.\n\nLet's test this code:\n\n*Main> gaussianReduce [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]\n[[1.0,2.0,-1.0,-4.0],[0.0,1.0,-1.0,3.0],[-0.0,0.0,1.0,-2.0]]\n\nExcellent.\n\n## Finishing up\n\nThe next step of the algorithm is to solve the variables by back substitution. This is pretty easy, I think. My code keeps a list of already-found solutions. Folding from the right, each step it substitutes in the corresponding solution and multiplies & subtracts to get the next solution, adding that to the solution list.\n\n--Solve a matrix (must already be in REF form) by back substitution.
substitute :: Matrix -> Row\nsubstitute matrix = foldr next [last (last matrix)] (init matrix) where\n  next row found = let\n    subpart = init $ drop (length matrix - length found) row\n    solution = last row - sum (zipWith (*) found subpart)\n    in solution : found\n\nTo get a list of solutions from a matrix, we chain the substitute and gaussianReduce functions:\n\nsolve :: Matrix -> Row\nsolve = substitute . gaussianReduce\n\n*Main> solve [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]\n[-8.0,1.0,-2.0]\n\nThis means the solutions are", null, "$(x,y,z) = (-8,1,-2)$. That seems correct, so we're done!\n\nThe code is far from practical, though. Although it works, I haven't really tested its performance (probably not very good), and it doesn't handle all edge cases." ]
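A minimal cross-check of that result, sketched in Python with NumPy rather than Haskell (the array names are just for illustration): solving the same system with a stock linear solver should reproduce [-8.0, 1.0, -2.0].

import numpy as np

# The example system from the post:
#    x + 2y -  z =  -4
#   2x + 3y -  z = -11
#  -2x       - 3z =  22
A = np.array([[ 1.0, 2.0, -1.0],
              [ 2.0, 3.0, -1.0],
              [-2.0, 0.0, -3.0]])
b = np.array([-4.0, -11.0, 22.0])

x = np.linalg.solve(A, b)      # LU factorization with partial pivoting
print(x)                       # [-8.  1. -2.], matching the Haskell solver
assert np.allclose(A @ x, b)   # residual is numerically zero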
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://i0.wp.com/i.imgur.com/6piBv.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://i0.wp.com/i.imgur.com/5fyOv.png", null, "https://i0.wp.com/i.imgur.com/vmcXZ.png", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89846194,"math_prob":0.99683344,"size":6650,"snap":"2022-40-2023-06","text_gpt3_token_len":1654,"char_repetition_ratio":0.11450496,"word_repetition_ratio":0.004198153,"special_character_ratio":0.2645113,"punctuation_ratio":0.14599575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987178,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T23:38:31Z\",\"WARC-Record-ID\":\"<urn:uuid:d81635af-ac14-4b8c-b8f8-c83c728b346f>\",\"Content-Length\":\"70336\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec12eec6-f6df-4ee0-8c8e-8c4670bd0151>\",\"WARC-Concurrent-To\":\"<urn:uuid:79817681-fbaa-4452-bb0f-ed42cc160399>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://luckytoilet.wordpress.com/tag/rref/\",\"WARC-Payload-Digest\":\"sha1:5N3DXP4LHMN43OQGPY5AA33AFAFBVPSL\",\"WARC-Block-Digest\":\"sha1:WQVNCQVXZNY7Y4TTWU5G7FGPON4CY2PX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500294.64_warc_CC-MAIN-20230205224620-20230206014620-00205.warc.gz\"}"}
https://postgrespro.com/docs/postgrespro/14/functions-geometry
[ "## 9.11. Geometric Functions and Operators\n\nThe geometric types `point`, `box`, `lseg`, `line`, `path`, `polygon`, and `circle` have a large set of native support functions and operators, shown in Table 9.35, Table 9.36, and Table 9.37.\n\nTable 9.35. Geometric Operators\n\nOperator\n\nDescription\n\nExample(s)\n\n`geometric_type` `+` `point``geometric_type`\n\nAdds the coordinates of the second `point` to those of each point of the first argument, thus performing translation. Available for `point`, `box`, `path`, `circle`.\n\n`box '(1,1),(0,0)' + point '(2,0)'``(3,1),(2,0)`\n\n`path` `+` `path``path`\n\nConcatenates two open paths (returns NULL if either path is closed).\n\n`path '[(0,0),(1,1)]' + path '[(2,2),(3,3),(4,4)]'``[(0,0),(1,1),(2,2),(3,3),(4,4)]`\n\n`geometric_type` `-` `point``geometric_type`\n\nSubtracts the coordinates of the second `point` from those of each point of the first argument, thus performing translation. Available for `point`, `box`, `path`, `circle`.\n\n`box '(1,1),(0,0)' - point '(2,0)'``(-1,1),(-2,0)`\n\n`geometric_type` `*` `point``geometric_type`\n\nMultiplies each point of the first argument by the second `point` (treating a point as being a complex number represented by real and imaginary parts, and performing standard complex multiplication). If one interprets the second `point` as a vector, this is equivalent to scaling the object's size and distance from the origin by the length of the vector, and rotating it counterclockwise around the origin by the vector's angle from the `x` axis. Available for `point`, `box`,[a] `path`, `circle`.\n\n`path '((0,0),(1,0),(1,1))' * point '(3.0,0)'``((0,0),(3,0),(3,3))`\n\n`path '((0,0),(1,0),(1,1))' * point(cosd(45), sind(45))``((0,0),​(0.7071067811865475,0.7071067811865475),​(0,1.414213562373095))`\n\n`geometric_type` `/` `point``geometric_type`\n\nDivides each point of the first argument by the second `point` (treating a point as being a complex number represented by real and imaginary parts, and performing standard complex division). If one interprets the second `point` as a vector, this is equivalent to scaling the object's size and distance from the origin down by the length of the vector, and rotating it clockwise around the origin by the vector's angle from the `x` axis. Available for `point`, `box`,[a] `path`, `circle`.\n\n`path '((0,0),(1,0),(1,1))' / point '(2.0,0)'``((0,0),(0.5,0),(0.5,0.5))`\n\n`path '((0,0),(1,0),(1,1))' / point(cosd(45), sind(45))``((0,0),​(0.7071067811865476,-0.7071067811865476),​(1.4142135623730951,0))`\n\n`@-@` `geometric_type``double precision`\n\nComputes the total length. Available for `lseg`, `path`.\n\n`@-@ path '[(0,0),(1,0),(1,1)]'``2`\n\n`@@` `geometric_type``point`\n\nComputes the center point. Available for `box`, `lseg`, `polygon`, `circle`.\n\n`@@ box '(2,2),(0,0)'``(1,1)`\n\n`#` `geometric_type``integer`\n\nReturns the number of points. Available for `path`, `polygon`.\n\n`# path '((1,0),(0,1),(-1,0))'``3`\n\n`geometric_type` `#` `geometric_type``point`\n\nComputes the point of intersection, or NULL if there is none. Available for `lseg`, `line`.\n\n`lseg '[(0,0),(1,1)]' # lseg '[(1,0),(0,1)]'``(0.5,0.5)`\n\n`box` `#` `box``box`\n\nComputes the intersection of two boxes, or NULL if there is none.\n\n`box '(2,2),(-1,-1)' # box '(1,1),(-2,-2)'``(1,1),(-1,-1)`\n\n`geometric_type` `##` `geometric_type``point`\n\nComputes the closest point to the first object on the second object. 
Available for these pairs of types: (`point`, `box`), (`point`, `lseg`), (`point`, `line`), (`lseg`, `box`), (`lseg`, `lseg`), (`line`, `lseg`).\n\n`point '(0,0)' ## lseg '[(2,0),(0,2)]'``(1,1)`\n\n`geometric_type` `<->` `geometric_type``double precision`\n\nComputes the distance between the objects. Available for all geometric types except `polygon`, for all combinations of `point` with another geometric type, and for these additional pairs of types: (`box`, `lseg`), (`lseg`, `line`), (`polygon`, `circle`) (and the commutator cases).\n\n`circle '<(0,0),1>' <-> circle '<(5,0),1>'``3`\n\n`geometric_type` `@>` `geometric_type``boolean`\n\nDoes first object contain second? Available for these pairs of types: (`box`, `point`), (`box`, `box`), (`path`, `point`), (`polygon`, `point`), (`polygon`, `polygon`), (`circle`, `point`), (`circle`, `circle`).\n\n`circle '<(0,0),2>' @> point '(1,1)'``t`\n\n`geometric_type` `<@` `geometric_type``boolean`\n\nIs first object contained in or on second? Available for these pairs of types: (`point`, `box`), (`point`, `lseg`), (`point`, `line`), (`point`, `path`), (`point`, `polygon`), (`point`, `circle`), (`box`, `box`), (`lseg`, `box`), (`lseg`, `line`), (`polygon`, `polygon`), (`circle`, `circle`).\n\n`point '(1,1)' <@ circle '<(0,0),2>'``t`\n\n`geometric_type` `&&` `geometric_type``boolean`\n\nDo these objects overlap? (One point in common makes this true.) Available for `box`, `polygon`, `circle`.\n\n`box '(1,1),(0,0)' && box '(2,2),(0,0)'``t`\n\n`geometric_type` `<<` `geometric_type``boolean`\n\nIs first object strictly left of second? Available for `point`, `box`, `polygon`, `circle`.\n\n`circle '<(0,0),1>' << circle '<(5,0),1>'``t`\n\n`geometric_type` `>>` `geometric_type``boolean`\n\nIs first object strictly right of second? Available for `point`, `box`, `polygon`, `circle`.\n\n`circle '<(5,0),1>' >> circle '<(0,0),1>'``t`\n\n`geometric_type` `&<` `geometric_type``boolean`\n\nDoes first object not extend to the right of second? Available for `box`, `polygon`, `circle`.\n\n`box '(1,1),(0,0)' &< box '(2,2),(0,0)'``t`\n\n`geometric_type` `&>` `geometric_type``boolean`\n\nDoes first object not extend to the left of second? Available for `box`, `polygon`, `circle`.\n\n`box '(3,3),(0,0)' &> box '(2,2),(0,0)'``t`\n\n`geometric_type` `<<|` `geometric_type``boolean`\n\nIs first object strictly below second? Available for `point`, `box`, `polygon`, `circle`.\n\n`box '(3,3),(0,0)' <<| box '(5,5),(3,4)'``t`\n\n`geometric_type` `|>>` `geometric_type``boolean`\n\nIs first object strictly above second? Available for `point`, `box`, `polygon`, `circle`.\n\n`box '(5,5),(3,4)' |>> box '(3,3),(0,0)'``t`\n\n`geometric_type` `&<|` `geometric_type``boolean`\n\nDoes first object not extend above second? Available for `box`, `polygon`, `circle`.\n\n`box '(1,1),(0,0)' &<| box '(2,2),(0,0)'``t`\n\n`geometric_type` `|&>` `geometric_type``boolean`\n\nDoes first object not extend below second? Available for `box`, `polygon`, `circle`.\n\n`box '(3,3),(0,0)' |&> box '(2,2),(0,0)'``t`\n\n`box` `<^` `box``boolean`\n\nIs first object below second (allows edges to touch)?\n\n`box '((1,1),(0,0))' <^ box '((2,2),(1,1))'``t`\n\n`box` `>^` `box``boolean`\n\nIs first object above second (allows edges to touch)?\n\n`box '((2,2),(1,1))' >^ box '((1,1),(0,0))'``t`\n\n`geometric_type` `?#` `geometric_type``boolean`\n\nDo these objects intersect? 
Available for these pairs of types: (`box`, `box`), (`lseg`, `box`), (`lseg`, `lseg`), (`lseg`, `line`), (`line`, `box`), (`line`, `line`), (`path`, `path`).\n\n`lseg '[(-1,0),(1,0)]' ?# box '(2,2),(-2,-2)'``t`\n\n`?-` `line``boolean`\n\n`?-` `lseg``boolean`\n\nIs line horizontal?\n\n`?- lseg '[(-1,0),(1,0)]'``t`\n\n`point` `?-` `point``boolean`\n\nAre points horizontally aligned (that is, have same y coordinate)?\n\n`point '(1,0)' ?- point '(0,0)'``t`\n\n`?|` `line``boolean`\n\n`?|` `lseg``boolean`\n\nIs line vertical?\n\n`?| lseg '[(-1,0),(1,0)]'``f`\n\n`point` `?|` `point``boolean`\n\nAre points vertically aligned (that is, have same x coordinate)?\n\n`point '(0,1)' ?| point '(0,0)'``t`\n\n`line` `?-|` `line``boolean`\n\n`lseg` `?-|` `lseg``boolean`\n\nAre lines perpendicular?\n\n`lseg '[(0,0),(0,1)]' ?-| lseg '[(0,0),(1,0)]'``t`\n\n`line` `?||` `line``boolean`\n\n`lseg` `?||` `lseg``boolean`\n\nAre lines parallel?\n\n`lseg '[(-1,0),(1,0)]' ?|| lseg '[(-1,2),(1,2)]'``t`\n\n`geometric_type` `~=` `geometric_type``boolean`\n\nAre these objects the same? Available for `point`, `box`, `polygon`, `circle`.\n\n`polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'``t`\n\n[a] Rotating a box with these operators only moves its corner points: the box is still considered to have sides parallel to the axes. Hence the box's size is not preserved, as a true rotation would do.\n\n### Caution\n\nNote that the same as operator, `~=`, represents the usual notion of equality for the `point`, `box`, `polygon`, and `circle` types. Some of the geometric types also have an `=` operator, but `=` compares for equal areas only. The other scalar comparison operators (`<=` and so on), where available for these types, likewise compare areas.\n\n### Note\n\nBefore PostgreSQL 14, the point is strictly below/above comparison operators `point` `<<|` `point` and `point` `|>>` `point` were respectively called `<^` and `>^`. These names are still available, but are deprecated and will eventually be removed.\n\nTable 9.36. Geometric Functions\n\nFunction\n\nDescription\n\nExample(s)\n\n`area` ( `geometric_type` ) → `double precision`\n\nComputes area. Available for `box`, `path`, `circle`. A `path` input must be closed, else NULL is returned. Also, if the `path` is self-intersecting, the result may be meaningless.\n\n`area(box '(2,2),(0,0)')``4`\n\n`center` ( `geometric_type` ) → `point`\n\nComputes center point. Available for `box`, `circle`.\n\n`center(box '(1,2),(0,0)')``(0.5,1)`\n\n`diagonal` ( `box` ) → `lseg`\n\nExtracts box's diagonal as a line segment (same as `lseg(box)`).\n\n`diagonal(box '(1,2),(0,0)')``[(1,2),(0,0)]`\n\n`diameter` ( `circle` ) → `double precision`\n\nComputes diameter of circle.\n\n`diameter(circle '<(0,0),2>')``4`\n\n`height` ( `box` ) → `double precision`\n\nComputes vertical size of box.\n\n`height(box '(1,2),(0,0)')``2`\n\n`isclosed` ( `path` ) → `boolean`\n\nIs path closed?\n\n`isclosed(path '((0,0),(1,1),(2,0))')``t`\n\n`isopen` ( `path` ) → `boolean`\n\nIs path open?\n\n`isopen(path '[(0,0),(1,1),(2,0)]')``t`\n\n`length` ( `geometric_type` ) → `double precision`\n\nComputes the total length. Available for `lseg`, `path`.\n\n`length(path '((-1,0),(1,0))')``4`\n\n`npoints` ( `geometric_type` ) → `integer`\n\nReturns the number of points. 
Available for `path`, `polygon`.\n\n`npoints(path '[(0,0),(1,1),(2,0)]')``3`\n\n`pclose` ( `path` ) → `path`\n\nConverts path to closed form.\n\n`pclose(path '[(0,0),(1,1),(2,0)]')``((0,0),(1,1),(2,0))`\n\n`popen` ( `path` ) → `path`\n\nConverts path to open form.\n\n`popen(path '((0,0),(1,1),(2,0))')``[(0,0),(1,1),(2,0)]`\n\n`radius` ( `circle` ) → `double precision`\n\n`radius(circle '<(0,0),2>')``2`\n\n`slope` ( `point`, `point` ) → `double precision`\n\nComputes slope of a line drawn through the two points.\n\n`slope(point '(0,0)', point '(2,1)')``0.5`\n\n`width` ( `box` ) → `double precision`\n\nComputes horizontal size of box.\n\n`width(box '(1,2),(0,0)')``1`\n\nTable 9.37. Geometric Type Conversion Functions\n\nFunction\n\nDescription\n\nExample(s)\n\n`box` ( `circle` ) → `box`\n\nComputes box inscribed within the circle.\n\n`box(circle '<(0,0),2>')``(1.414213562373095,1.414213562373095),​(-1.414213562373095,-1.414213562373095)`\n\n`box` ( `point` ) → `box`\n\nConverts point to empty box.\n\n`box(point '(1,0)')``(1,0),(1,0)`\n\n`box` ( `point`, `point` ) → `box`\n\nConverts any two corner points to box.\n\n`box(point '(0,1)', point '(1,0)')``(1,1),(0,0)`\n\n`box` ( `polygon` ) → `box`\n\nComputes bounding box of polygon.\n\n`box(polygon '((0,0),(1,1),(2,0))')``(2,1),(0,0)`\n\n`bound_box` ( `box`, `box` ) → `box`\n\nComputes bounding box of two boxes.\n\n`bound_box(box '(1,1),(0,0)', box '(4,4),(3,3)')``(4,4),(0,0)`\n\n`circle` ( `box` ) → `circle`\n\nComputes smallest circle enclosing box.\n\n`circle(box '(1,1),(0,0)')``<(0.5,0.5),0.7071067811865476>`\n\n`circle` ( `point`, `double precision` ) → `circle`\n\nConstructs circle from center and radius.\n\n`circle(point '(0,0)', 2.0)``<(0,0),2>`\n\n`circle` ( `polygon` ) → `circle`\n\nConverts polygon to circle. 
The circle's center is the mean of the positions of the polygon's points, and the radius is the average distance of the polygon's points from that center.\n\n`circle(polygon '((0,0),(1,3),(2,0))')``<(1,1),1.6094757082487299>`\n\n`line` ( `point`, `point` ) → `line`\n\nConverts two points to the line through them.\n\n`line(point '(-1,0)', point '(1,0)')``{0,-1,0}`\n\n`lseg` ( `box` ) → `lseg`\n\nExtracts box's diagonal as a line segment.\n\n`lseg(box '(1,0),(-1,0)')``[(1,0),(-1,0)]`\n\n`lseg` ( `point`, `point` ) → `lseg`\n\nConstructs line segment from two endpoints.\n\n`lseg(point '(-1,0)', point '(1,0)')``[(-1,0),(1,0)]`\n\n`path` ( `polygon` ) → `path`\n\nConverts polygon to a closed path with the same list of points.\n\n`path(polygon '((0,0),(1,1),(2,0))')``((0,0),(1,1),(2,0))`\n\n`point` ( `double precision`, `double precision` ) → `point`\n\nConstructs point from its coordinates.\n\n`point(23.4, -44.5)``(23.4,-44.5)`\n\n`point` ( `box` ) → `point`\n\nComputes center of box.\n\n`point(box '(1,0),(-1,0)')``(0,0)`\n\n`point` ( `circle` ) → `point`\n\nComputes center of circle.\n\n`point(circle '<(0,0),2>')``(0,0)`\n\n`point` ( `lseg` ) → `point`\n\nComputes center of line segment.\n\n`point(lseg '[(-1,0),(1,0)]')``(0,0)`\n\n`point` ( `polygon` ) → `point`\n\nComputes center of polygon (the mean of the positions of the polygon's points).\n\n`point(polygon '((0,0),(1,1),(2,0))')``(1,0.3333333333333333)`\n\n`polygon` ( `box` ) → `polygon`\n\nConverts box to a 4-point polygon.\n\n`polygon(box '(1,1),(0,0)')``((0,0),(0,1),(1,1),(1,0))`\n\n`polygon` ( `circle` ) → `polygon`\n\nConverts circle to a 12-point polygon.\n\n`polygon(circle '<(0,0),2>')``((-2,0),​(-1.7320508075688774,0.9999999999999999),​(-1.0000000000000002,1.7320508075688772),​(-1.2246063538223773e-16,2),​(0.9999999999999996,1.7320508075688774),​(1.732050807568877,1.0000000000000007),​(2,2.4492127076447545e-16),​(1.7320508075688776,-0.9999999999999994),​(1.0000000000000009,-1.7320508075688767),​(3.673819061467132e-16,-2),​(-0.9999999999999987,-1.732050807568878),​(-1.7320508075688767,-1.0000000000000009))`\n\n`polygon` ( `integer`, `circle` ) → `polygon`\n\nConverts circle to an `n`-point polygon.\n\n`polygon(4, circle '<(3,0),1>')``((2,0),​(3,1),​(4,1.2246063538223773e-16),​(3,-1))`\n\n`polygon` ( `path` ) → `polygon`\n\nConverts closed path to a polygon with the same list of points.\n\n`polygon(path '((0,0),(1,1),(2,0))')``((0,0),(1,1),(2,0))`\n\nIt is possible to access the two component numbers of a `point` as though the point were an array with indexes 0 and 1. For example, if `t.p` is a `point` column then `SELECT p FROM t` retrieves the X coordinate and `UPDATE t SET p = ...` changes the Y coordinate. In the same way, a value of type `box` or `lseg` can be treated as an array of two `point` values." ]
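A minimal sketch, outside the PostgreSQL docs themselves, of what one entry in the operator table computes. The Python function name here is made up, and only the circle-to-circle case of the `<->` distance operator is mirrored (the gap between the two boundaries, clamped at zero when the circles touch or overlap):

import math

def circle_distance(center1, r1, center2, r2):
    # gap between the two boundaries: center separation minus both radii
    d = math.dist(center1, center2)
    return max(0.0, d - r1 - r2)

print(circle_distance((0, 0), 1, (5, 0), 1))   # 3.0, as in circle '<(0,0),1>' <-> circle '<(5,0),1>'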
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.840401,"math_prob":0.99850523,"size":3336,"snap":"2022-27-2022-33","text_gpt3_token_len":749,"char_repetition_ratio":0.1944778,"word_repetition_ratio":0.14620939,"special_character_ratio":0.21582733,"punctuation_ratio":0.13551402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99808055,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T10:42:37Z\",\"WARC-Record-ID\":\"<urn:uuid:016d7195-3380-4102-a1f3-4396fbfcb8b0>\",\"Content-Length\":\"101505\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:faf6edce-dbaa-4e15-b67b-97e031a8962a>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c1eb935-200d-4090-a5bf-574f595870ad>\",\"WARC-IP-Address\":\"93.174.134.210\",\"WARC-Target-URI\":\"https://postgrespro.com/docs/postgrespro/14/functions-geometry\",\"WARC-Payload-Digest\":\"sha1:3BA54AKQLCW6CFHWVLYTNR5LKAUIGERP\",\"WARC-Block-Digest\":\"sha1:6WTKIXIRVMG4WPEXWRYNMXT3NBHRBERZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103205617.12_warc_CC-MAIN-20220626101442-20220626131442-00793.warc.gz\"}"}
https://www.bhavinionline.com/2016/04/fun-maths-riddle-5-x-8-28-13-x-13/
[ "# Fun Maths Riddle: If 5 x 8 = 28 Then 13 x 13 = ?\n\nLook at the equations given in the riddle and find the value of the missing number in the last equation.\n\nIf\n\n5 x 8 = 28\n\n3 x 7 = 12\n\n8 x 6 = 35\n\nThen\n\n13 x 13 = ?", null, "Share it with your friends on Facebook and WhatsApp to test their intelligence." ]
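The page leaves the answer open, so the following is only one reading rather than an official solution: all three clues fit the rule of subtracting 1 from each factor before multiplying.

$$\begin{aligned} 5 \times 8 &\rightarrow 4 \times 7 = 28\\ 3 \times 7 &\rightarrow 2 \times 6 = 12\\ 8 \times 6 &\rightarrow 7 \times 5 = 35\\ 13 \times 13 &\rightarrow 12 \times 12 = 144 \end{aligned}$$

On that reading, 13 x 13 = 144.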
[ null, "http://3.6.37.168/wp-content/uploads/2016/04/find-missing-number-5-8-28.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8396592,"math_prob":0.9994772,"size":344,"snap":"2022-27-2022-33","text_gpt3_token_len":134,"char_repetition_ratio":0.12352941,"word_repetition_ratio":0.1521739,"special_character_ratio":0.4651163,"punctuation_ratio":0.046511628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96661556,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T20:19:02Z\",\"WARC-Record-ID\":\"<urn:uuid:6b49631d-65b2-4cee-a5e9-8e8da2a93025>\",\"Content-Length\":\"51309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b9e6e05-a0bb-4cf5-9a20-0540b31e44a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1054448-41ea-44fc-a7da-8437f3e964f4>\",\"WARC-IP-Address\":\"172.67.177.208\",\"WARC-Target-URI\":\"https://www.bhavinionline.com/2016/04/fun-maths-riddle-5-x-8-28-13-x-13/\",\"WARC-Payload-Digest\":\"sha1:W5CVJUKFDBVVRICOMLWMHZO3HHK5Q4YS\",\"WARC-Block-Digest\":\"sha1:EMHYWZGYDA2JGWL7LIN5MCDW6K5DWV45\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104204514.62_warc_CC-MAIN-20220702192528-20220702222528-00587.warc.gz\"}"}
http://www.e-booksdirectory.com/details.php?ebook=9538
[ "", null, "# Perturbation Theory of Dynamical Systems", null, "Perturbation Theory of Dynamical Systems\nby\n\nPublisher: arXiv\nNumber of pages: 111\n\nDescription:\nThese are lecture notes for a course given to undergraduate Mathematics and Physics students. It covers a few selected topics from perturbation theory at an introductory level. Only certain results are proved, and for some of the most important theorems, sketches of the proofs are provided.\n\n(4.8MB, PDF)\n\n## Similar books", null, "An Introduction to Dynamical Systems and Chaos\nby - LDEO\nThis tutorial will develop the basics ingredients necessary for modeling simple non-linear dynamical systems. The goal is to demonstrate you that you can develop significant insight into the behavior of non-linear systems with just a little math.\n(7159 views)", null, "Data Assimilation: A Mathematical Introduction\nby - arXiv.org\nThis book provides a systematic treatment of the mathematical underpinnings of work in data assimilation. Authors develop a framework in which a Bayesian formulation of the problem provides the bedrock for the derivation and analysis of algorithms.\n(2153 views)", null, "Elementary Symbolic Dynamics and Chaos in Dissipative Systems\nby - World Scientific\nThis is a monograph on chaos in dissipative systems written for those working in the physical sciences. Emphasis is on symbolic description of the dynamics and characteristics of the attractors, written from the view-point of practical applications.\n(6774 views)", null, "Encyclopedia of Dynamical Systems\nby - Scholarpedia\nThe encyclopedia covers differential equations, numerical analysis, bifurcations, topological dynamics, ergodic theory, hyperbolic dynamics, oscillators, pattern formation, chaos, statistical mechanics, control theory, and applications.\n(9616 views)" ]
[ null, "http://www.e-booksdirectory.com/img/ebd-logo.png", null, "http://www.e-booksdirectory.com/images/9538.jpg", null, "http://www.e-booksdirectory.com/images/6300.jpg", null, "http://www.e-booksdirectory.com/images/11663.jpg", null, "http://www.e-booksdirectory.com/images/6360.jpg", null, "http://www.e-booksdirectory.com/images/3039.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8137588,"math_prob":0.58307624,"size":3064,"snap":"2019-51-2020-05","text_gpt3_token_len":656,"char_repetition_ratio":0.109477125,"word_repetition_ratio":0.70159453,"special_character_ratio":0.19353786,"punctuation_ratio":0.14339623,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763808,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,10,null,5,null,null,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T19:08:29Z\",\"WARC-Record-ID\":\"<urn:uuid:761b20ef-4823-4a57-81f9-4a506e5850eb>\",\"Content-Length\":\"11390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb9a66e6-b00f-4680-ba84-8af9d6a5802a>\",\"WARC-Concurrent-To\":\"<urn:uuid:9bb02565-bf1c-4766-b451-114307a31c35>\",\"WARC-IP-Address\":\"67.55.74.44\",\"WARC-Target-URI\":\"http://www.e-booksdirectory.com/details.php?ebook=9538\",\"WARC-Payload-Digest\":\"sha1:WUNGXWPNY34YDHBXWQQNZXWETCR4T3AL\",\"WARC-Block-Digest\":\"sha1:LO6EKLU5SN6ADNZ7EWSEAOUUQQMWX6CW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251801423.98_warc_CC-MAIN-20200129164403-20200129193403-00476.warc.gz\"}"}
https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/meanf
[ "forecast (version 8.13)\n\n# meanf: Mean Forecast\n\n## Description\n\nReturns forecasts and prediction intervals for an iid model applied to y.\n\n## Usage\n\nmeanf(\ny,\nh = 10,\nlevel = c(80, 95),\nfan = FALSE,\nlambda = NULL,\nbiasadj = FALSE,\nbootstrap = FALSE,\nnpaths = 5000,\nx = y\n)\n\n## Arguments\n\ny\n\na numeric vector or time series of class ts\n\nh\n\nNumber of periods for forecasting\n\nlevel\n\nConfidence levels for prediction intervals.\n\nfan\n\nIf TRUE, level is set to seq(51,99,by=3). This is suitable for fan plots.\n\nlambda\n\nBox-Cox transformation parameter. If lambda="auto", then a transformation is automatically selected using BoxCox.lambda. The transformation is ignored if NULL. Otherwise, data are transformed before the model is estimated.\n\nbiasadj\n\nUse adjusted back-transformed mean for Box-Cox transformations. If transformed data is used to produce forecasts and fitted values, a regular back transformation will result in median forecasts. If biasadj is TRUE, an adjustment will be made to produce mean forecasts and fitted values.\n\nbootstrap\n\nIf TRUE, use a bootstrap method to compute prediction intervals. Otherwise, assume a normal distribution.\n\nnpaths\n\nNumber of bootstrapped sample paths to use if bootstrap==TRUE.\n\nx\n\nDeprecated. Included for backwards compatibility.\n\n## Value\n\nAn object of class "forecast".\n\nThe function summary is used to obtain and print a summary of the results, while the function plot produces a plot of the forecasts and prediction intervals.\n\nThe generic accessor functions fitted.values and residuals extract useful features of the value returned by meanf.\n\nAn object of class "forecast" is a list containing at least the following elements:\n\nmodel\n\nA list containing information about the fitted model\n\nmethod\n\nThe name of the forecasting method as a character string\n\nmean\n\nPoint forecasts as a time series\n\nlower\n\nLower limits for prediction intervals\n\nupper\n\nUpper limits for prediction intervals\n\nlevel\n\nThe confidence values associated with the prediction intervals\n\nx\n\nThe original time series (either object itself or the time series used to create the model stored as object).\n\nresiduals\n\nResiduals from the fitted model. That is x minus fitted values.\n\nfitted\n\nFitted values (one-step forecasts)\n\n## Details\n\nThe iid model is $$Y_t=\mu + Z_t$$ where $$Z_t$$ is a normal iid error. Forecasts are given by $$Y_n(h)=\mu$$ where $$\mu$$ is estimated by the sample mean.\n\n## See Also\n\nrwf\n\n## Examples\n\n# NOT RUN {" ]
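A minimal sketch of the same idea outside R (an illustration of the Details section, not a port of the package): under the iid model the h-step point forecast is the sample mean, and with bootstrap = FALSE the intervals come from a normal approximation. The 1 + 1/n factor below is the usual allowance for estimating the mean from n observations.

import numpy as np
from scipy import stats

def mean_forecast(y, h=10, level=80):
    y = np.asarray(y, dtype=float)
    n, mu, s = len(y), y.mean(), y.std(ddof=1)
    se = s * np.sqrt(1 + 1 / n)               # forecast-error standard deviation
    z = stats.norm.ppf(0.5 + level / 200)     # two-sided normal quantile
    point = np.full(h, mu)                    # every horizon gets the sample mean
    return point, point - z * se, point + z * se

point, lower, upper = mean_forecast([3.1, 2.9, 3.4, 3.0, 3.2], h=3, level=80)
print(point)           # [3.12 3.12 3.12]
print(lower, upper)    # 80% prediction interval around the mean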
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76347685,"math_prob":0.9689795,"size":2221,"snap":"2021-31-2021-39","text_gpt3_token_len":497,"char_repetition_ratio":0.13712224,"word_repetition_ratio":0.0,"special_character_ratio":0.21972084,"punctuation_ratio":0.12219451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99759567,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T17:05:07Z\",\"WARC-Record-ID\":\"<urn:uuid:4aff27d2-c81c-48b5-bddc-34ada81d105e>\",\"Content-Length\":\"38286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1ce6246-03d0-4e7c-86fd-23e3e3db7895>\",\"WARC-Concurrent-To\":\"<urn:uuid:eacb8910-2836-48c7-ab16-057e862fb93b>\",\"WARC-IP-Address\":\"54.239.152.53\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/forecast/versions/8.13/topics/meanf\",\"WARC-Payload-Digest\":\"sha1:23OEWANKP2COR2AIC43P2H5F6H3IRWUK\",\"WARC-Block-Digest\":\"sha1:ZKNSJVTVEPUUOPHMC5REQTJQOFLVBG4J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153739.28_warc_CC-MAIN-20210728154442-20210728184442-00645.warc.gz\"}"}
https://learnbright.org/lessons/math/interpret-division-of-a-whole-number-by-a-unit-fraction/
[ "# Interpret Division of a Whole Number by a Unit Fraction\n\n\\$1.95\n\nWith our Interpret Division of a Whole Number by a Unit Fraction lesson plan, students learn how to divide whole numbers by fractions. Students solve practice problems as a part of this lesson.\n\n## Description\n\nOur Interpret Division of a Whole Number by a Unit Fraction lesson plan teaches students how to solve division problems where they need to divide a whole number by a fraction. During this lesson, students are asked to solve practice problems to demonstrate their understanding of the lesson material. Students are also asked to solve word problems that include these kinds of division problems and include an illustration as a visual aid.\n\nAt the end of the lesson, students will be able to divide a non-zero whole number by a unit fraction using the standard algorithm and a visual fraction model to show the quotient.\n\nState Educational Standards: LB.Math.Content.5.NF.B.7.B" ]
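A worked instance of the skill the lesson targets (illustrative only, not taken from the lesson materials): dividing a whole number by a unit fraction amounts to multiplying by the fraction's denominator.

$$3 \div \frac{1}{4} = 3 \times 4 = 12$$

That is, there are 12 quarter-sized pieces in 3 wholes, which is what a visual fraction model of the quotient shows.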
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9276743,"math_prob":0.9227615,"size":686,"snap":"2023-40-2023-50","text_gpt3_token_len":129,"char_repetition_ratio":0.12903225,"word_repetition_ratio":0.0,"special_character_ratio":0.18367347,"punctuation_ratio":0.10606061,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986512,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T06:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:35b97476-c49c-4da9-85c1-adb3302e8b73>\",\"Content-Length\":\"304236\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab14bd00-66ec-455a-9696-645c4068766d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d304866e-dcff-4fce-9179-13127a350c1f>\",\"WARC-IP-Address\":\"35.215.74.1\",\"WARC-Target-URI\":\"https://learnbright.org/lessons/math/interpret-division-of-a-whole-number-by-a-unit-fraction/\",\"WARC-Payload-Digest\":\"sha1:R3ASJ3RBGUMQ73VV343HCZM2P6LZ3SVP\",\"WARC-Block-Digest\":\"sha1:K7IDFXC24SD7CBXIFMXKHPCIA3XE56SK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510259.52_warc_CC-MAIN-20230927035329-20230927065329-00698.warc.gz\"}"}
https://oneclass.com/textbook-notes/ca/utsc/chm/chma-10h3/7627-chapter-5-gases.en.html
[ "# CHMA10H3 Chapter Notes - Chapter 5: Lead\n\nDepartment\nChemistry\nCourse Code\nCHMA10H3\nProfessor\nAnn Verner\nChapter\n5", null, "Chapter 5 Gases\n5.1 Breathing: Putting Pressure to Work & 5.2 Pressure: The Result of Molecular Collisions\n- Pressure: force exerted per unit area; for a gas, the sum of all molecular collisions exerting a constant force on the surfaces that contain it\n- Gases in the Air: gas mixtures are always homogeneous and are compressible\n- Characteristics of Gases:\nGases assume the volume and shape of their containers\nGases are the most compressible state of matter\nGases will mix evenly and completely when confined in the same container\nGases have lower densities than liquids or solids\n- The pressure exerted by a solid: both cylinders have the same mass but different areas of contact\n- The pressure exerted by a liquid depends on the height and density of the column of liquid\n- The pressure exerted by a gas depends on the # of gas particles in a given volume, the volume of the container, and the average speed of the gas particles\n5.3 The Simple Gas Laws: Boyle's Law, Charles's Law and Avogadro's Law\n- A SAMPLE OF GAS HAS 4 BASIC PHYSICAL PROPERTIES:\nPRESSURE (P), VOLUME (V), TEMPERATURE (T) and AMOUNT OF MOLES (n)\n1) BOYLE'S LAW: P1V1 = P2V2 (T and n constant)\n2) CHARLES'S LAW: V1/T1 = V2/T2 (P and n constant)", null, "5.4 The Ideal Gas Law\n- The equation of state of a hypothetical ideal gas: the state of an amount of gas is determined by its pressure, volume, and temperature.\nThe modern form of the equation is PV = nRT\nR = 0.08206 L atm mol^-1 K^-1 (the gas constant)\n5.5 Applications of the Ideal Gas Law: Molar Volume, Density, and Molar Mass of a Gas\n- Molar Volume: volume occupied by 1 mole of substance, usually at Standard Temperature (T = 0 °C or 273 K) and Pressure (P = 1.00 atm) (STP), is 22.4 L\n- Density: d = m/V, where m = molar mass and V = molar volume\n- Molar Mass of a Gas:\n5.6 Mixtures of Gases and Partial Pressures" ]
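A minimal sketch tying the numbers above together (not part of the course notes, and assuming the value of R quoted above): rearranging the ideal gas law to V = nRT/P reproduces the 22.4 L molar volume at STP.

# rearranged ideal gas law: V = nRT / P
R = 0.08206      # L atm / (mol K), gas constant
T = 273.15       # K, standard temperature (0 degrees C)
P = 1.00         # atm, standard pressure
n = 1.0          # mol

V = n * R * T / P
print(f"Molar volume at STP = {V:.1f} L")   # about 22.4 L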
[ null, "https://new-preview-html.oneclass.com/1JRV4BM2KWx7j7bnp0oAmndkAG6LgqaP/bg1.png", null, "https://new-preview-html.oneclass.com/1JRV4BM2KWx7j7bnp0oAmndkAG6LgqaP/bg2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80636233,"math_prob":0.8550809,"size":2233,"snap":"2019-51-2020-05","text_gpt3_token_len":623,"char_repetition_ratio":0.10318528,"word_repetition_ratio":0.047493402,"special_character_ratio":0.25750113,"punctuation_ratio":0.11238532,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.973926,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T23:45:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6a3e14e2-1e21-4678-8f04-5d851aba283d>\",\"Content-Length\":\"814724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c1be977-4aa9-41a9-918e-8cc4fe9b6ca9>\",\"WARC-Concurrent-To\":\"<urn:uuid:25871481-9a1d-42b2-b2b3-066d1dd49f03>\",\"WARC-IP-Address\":\"104.96.220.10\",\"WARC-Target-URI\":\"https://oneclass.com/textbook-notes/ca/utsc/chm/chma-10h3/7627-chapter-5-gases.en.html\",\"WARC-Payload-Digest\":\"sha1:NEIODWZU7D57TDAS3M6QAMLEFCTU3W5O\",\"WARC-Block-Digest\":\"sha1:LBAY2NYPIUTDKRBK7DZ5SAYSQ56BMBLU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540515344.59_warc_CC-MAIN-20191208230118-20191209014118-00314.warc.gz\"}"}
https://www.geeksforgeeks.org/area-as-definite-integral/?ref=rp
[ "", null, "Open in App\nNot now\n\n# Area as Definite Integral\n\n• Last Updated : 02 Jun, 2021\n\nIntegrals are an integral part of calculus. They represent summation, for the functions which are not as straightforward as the standard functions, integrals help us to calculate the sum and their areas and give us the flexibility to work with any type of functions we want to work with. The areas for the standard functions are already known, it is not easy to keep and remember formulas for the area of every type of function. Integrals provide the generalizability to this and give an approach for calculating these things for any general function. Definite integrals are used to calculate the areas under the curves. Let’s study this concept in detail.\n\n### Definite Integrals\n\nDefinite integrals are defined as a sum with limits. These are integrals with limits defined as their boundaries between which they calculate the sum for the given function. These limits are called start and end values defined as [a, b] where a is called lower limit and b is called the higher limit of the sum. The definite integral is calculated by calculating the indefinite integral at a and then b and then subtracting both of them.\n\nGiven a function f(x) which is continuous between [a, b], this interval is divided into n sub-intervals of width", null, "and from each interval a point is chosen x*i. Then the value of the definite integral of the function f(x) from a to b is,", null, "a and b are collectively called interval of integration. This is calculated using the following expression,\n\nLet’s say F(x) =", null, "", null, "The graph below shows the definite integral of function f(x) working between the interval a and b.", null, "Definite integrals follow sum properties that allow us to simplify our calculations.\n\nProperties of Definite Integrals\n\nProperty 1: Limits of any definite integral can be interchanged, a minus sign is added while interchanging the limits.", null, "Property 2: If the upper limits and lower limits are equal, then the value of the integral is zero.", null, "Property 3: When a function is multiplied with a constant, its integral is also multiplied by that constant.", null, "Property 4: Definite integrals can be broken down across sums and differences.", null, "Property 5: The intervals of integrals can be broken down.", null, "### Definite Integrals as Area\n\nCalculation of the area bounded by the curve is one of the most important applications of the integrals. Definite integrals allow us to calculate the area bounded by any curve f(x) between a fixed point x = a and a variable point x.", null, "Now since definite integrals calculate the summation of very small rectangular strips as shown in the above figure, they can be used to calculate the area under the curve. In this case, the area under the curve will be given by,", null, "Where, F(x) =", null, "The area bounded by the curve above the x-axis\n\nConsider the function given below in the graph. The function lies completely above the x-axis. We are interested in calculating the area enclosed between this curve and the x-axis between the points x = a and x = b. This case if pretty simple, it just requires us to calculate the area under the curve", null, "So, when the curve lies completely above x-axis the area becomes,", null, "The area bounded by the curve not entirely above the x-axis\n\nIn the figure below, some part of the curve lies below the x-axis. 
The function is f(x) = -x2 + 1", null, "The region of interest is below the x-axis and evaluation of integral in this part leads us to a negative area. This is not possible as the area cannot be negative. If a function lies both above the x-axis and below the x-axis at some points. Then we need to take special care while calculating the area because if they are calculated together, positive and negative areas will cancel each other out, and we will not get the correct value of the area. So, in this case, the limits must be broken down such that both the integrals are separated out, and they should be added with their absolute values.\n\nLet’s say a function is given by f(x), function lies above the x-axis between [0,3] and below the x-axis between (3,∞). The goal is to calculate the area enclosed by the function between [0,5].\n\nSo, A =", null, "⇒A =", null, "Let’s look at some sample problems\n\n### Sample Problems\n\nQuestion 1: Calculate the area enclosed by the function f(x) and x-axis between x = 0 to x = 1.\n\nf(x) = 3", null, "", null, "", null, "", null, "⇒", null, "⇒ 27\n\nQuestion 2: Calculate the area enclosed by the function f(x) and x-axis between x = 0 to x = 3.\n\nf(x) = x2 + 2", null, "", null, "", null, "", null, "", null, "⇒9 + 6\n\n⇒ 15\n\nQuestion 3: Calculate the area enclosed by the function f(x) and x-axis between x = 0 to x = 3.\n\nf(x) = 3x", null, "", null, "", null, "", null, "⇒", null, "⇒", null, "Question 4: Calculate the area enclosed by the function f(x) and x-axis between x = 0 to x =", null, "f(x) = sin(x) + 2", null, "", null, "", null, "", null, "", null, "", null, "Question 5: Calculate the area enclosed by the function f(x) and x-axis between x = 0 to x = 1.\n\nf(x) = -ex", null, "Now this function lies completely below the x-axis, so only the magnitude of the area will be considered, the sign will be ignored.", null, "", null, "", null, "⇒1 – e\n\nOnly the magnitude of this area will be considered.\n\nQuestion 6: Calculate the area by the function f(x) between [0,3].", null, "Solution:\n\nIt is obvious from the definition that the function becomes negative after x >=1.\n\nSo, the integral must be broken down at x = 1.", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "My Personal Notes arrow_drop_up" ]
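A quick numerical check of one of the worked answers (this snippet is not from the article, and the helper name is made up): a midpoint-rule approximation reproduces Question 2's area of 15 for f(x) = x^2 + 2 on [0, 3].

def definite_integral(f, a, b, n=100_000):
    # midpoint-rule approximation of the definite integral of f on [a, b]
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

area = definite_integral(lambda x: x**2 + 2, 0, 3)
print(round(area, 4))   # 15.0, matching the answer above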
[ null, "https://media.geeksforgeeks.org/gfg-gg-logo.svg", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-88e72931a47ff327bb194069ff8e4d7f_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-1cd46d1f2f5cbc47bb2057e909517181_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-96e2f8caf1dca4f80d5469752793b80d_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c628543d3523ae8803fa1baa51504f33_l3.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20210529195123/figuree6.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-8b590fd08ca2f08ba2e57ae8827bc878_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-72edb19814649b499fb797412f987bf1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-2a8de5c75c1bb31ecf2711684fa4d8a3_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-9af5521976149dd2b91aa1dc7e0fcd9a_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-f73128bf2c540215d5831e774374528c_l3.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20210529195239/figure8.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-06c8530cd60245b2fa7fffeb6c4a36b0_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-96e2f8caf1dca4f80d5469752793b80d_l3.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20210529195314/figure9.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c628543d3523ae8803fa1baa51504f33_l3.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20210529195350/figuree7-660x332.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-dd2af2a97bd590123f8f19fa1845ca6d_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-753992743ba0796d4af5fd4829a03007_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-2a604f407ac887922aa90466e1398b6d_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-13ddc6d2dc6c9e5d9d5afc25462dffa6_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-d7c9e6abff6a7ebb27a5bcb3c6f36c07_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-0d34f82801ecbfb0f71b7366fd46160b_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-3326b79cd3a58f1b1c10b4fb44127cfd_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-1e7818739ed25745c56b07163ef8f5d1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-62376711b15b40f540fae67a1b48c18f_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-8ce99669544eb30de4c41acf482c3cd4_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-bbb1585a0f754609540f14a908a4f43e_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-456e419f7f646ef597448586c802bf28_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-1e7818739ed25745c56b07163ef8f5d1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-aef7a13639a4acd83325311c8fc3c9b2_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c97b582c04ba0a3678ae1111e798fa4d_l3.png", null, 
"https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c31699239853b119d710222f04ee6343_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-275eda3c0ff59a76c9e3f78f2eeeda84_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-0ded776b19341a1353b3066ffc2871a1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-096e65871816a0f1535f492726db0c84_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-69e5e24e97337e05981b7ad30a4fca2d_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c8f34d281e5d00342fd7043e2cb003b8_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-d7626cc31138c2bc14476a8c25613df1_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-8d7f2a1538126bb6967426dc3b889e2a_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-89d7b723f39bc2d79258836939265e52_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-094dcb14824f81ddf63ea9e8a47223f8_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-7fda88a8fd3cf0c502e45b5e31eb58fc_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-7fda88a8fd3cf0c502e45b5e31eb58fc_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-3161a59f5c60132a89377fb72723f1ce_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-67597c755095bcf3eb263056787af5f6_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-c635fd05e0402e82d037a18191c08fa4_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-f283d06c316178ebc6782397bf47587e_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-3e6baeb2b3a73ad5062702689807db00_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-ad219d660f828d0f65901c97100ae976_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-32c3d7e755dc188bb9b521b93ff5437e_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-fd915a64190b4a7bd8452bc469481e1a_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-f6c23e5f46c1abb89057c5ceaacf61c9_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-4e83d193b3a6e0521846d1e70b4b7410_l3.png", null, "https://www.geeksforgeeks.org/wp-content/ql-cache/quicklatex.com-4d8e61c597943051f67700ae7c9dbaf3_l3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91421413,"math_prob":0.99903154,"size":5197,"snap":"2022-40-2023-06","text_gpt3_token_len":1312,"char_repetition_ratio":0.19872905,"word_repetition_ratio":0.11320755,"special_character_ratio":0.23186454,"punctuation_ratio":0.092307694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998921,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110],"im_url_duplicate_count":[null,null,null,null,null,3,null,null,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,3,null,3,null,3,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,6,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T18:15:33Z\",\"WARC-Record-ID\":\"<urn:uuid:ccb84c01-83cb-44ed-82c6-6198ef2a1412>\",\"Content-Length\":\"346730\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:193284cf-9249-4265-8b09-ee4de51ada94>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ac6b071-e6e7-4617-be5c-513bb5bf3b81>\",\"WARC-IP-Address\":\"23.218.216.148\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/area-as-definite-integral/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:6LAFM6H3ECS6IZDW5VYO2TEUGICXZTKT\",\"WARC-Block-Digest\":\"sha1:ODHCTT4FPBDBMKCPTGAARMK2U5FIZ6KH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500837.65_warc_CC-MAIN-20230208155417-20230208185417-00171.warc.gz\"}"}
https://community.tableau.com/thread/130392
[ "5 Replies Latest reply on Aug 30, 2013 11:56 AM by Patricia Santillan\n\n# Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nI work for a bus company and the same route will have a different schedule for M-F, one for Sat and one for Sun.  In excel I can use a formula to first tell me the day number (1-7) and then I can use an if statement to say if the day of this date is a 7(Saturday), show me \"SAT, if a 1(Sunday), show me \"SUN\" else show me \"Weekday\".  How do I do this in Tableau?  I need to group ridership totals by this variable and I can't figure out how it can work.  Thanks in advance!\n\n• ###### 1. Re: Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nHi Patricia,\n\nCreating a calculated field similar to the following should help:\n\nif datepart('weekday',[DATE])=1 then 'Sunday'\n\nelseif datepart('weekday',[DATE])=7 then 'Saturday'\n\nelse 'Weekday' end\n\n-Tracy\n\n• ###### 2. Re: Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nThank you so much that was perfect!!\n\nHave an extra issue if you have time...\n\nWe treat holiday service as Sunday service since there are less riders.  Assuming I create variable which says whether a day is a holiday (t/f) how can I adjust this formula to say if this date is NOT a holiday (False)  then the formula above applies, if this date IS a holiday (True) then mark as a Sunday.  I haven't created the variable yet but lets assums the variable is just called \"Holiday\".  Thanks again for your consideration.\n\n• ###### 3. Re: Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nIf you are gathering right answers I can ask this as a separate question if you want.\n\n• ###### 4. Re: Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nSo it sounds like there will be another dimension that determines the holiday?\n\nA calculated field like the following should work:\n\nif [Holiday]=true then 'Sunday'\n\nelseif datepart('weekday',[DATE])=1 then 'Sunday'\n\nelseif datepart('weekday',[DATE])=7 then 'Saturday'\n\nelse 'Weekday' end\n\n-Tracy\n\n1 of 1 people found this helpful\n• ###### 5. Re: Create a variable where mon - Fri = Weekday, Sat and Sun separate\n\nThank you so much for your help!" ]
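The same bucketing logic sketched outside Tableau, in Python (illustrative only; the holiday set is a placeholder for whatever holiday field the workbook ends up using):

from datetime import date

HOLIDAYS = {date(2013, 7, 4)}            # placeholder holiday table

def service_day(d):
    if d in HOLIDAYS:
        return "Sunday"                  # holidays run the Sunday schedule
    if d.weekday() == 6:                 # Monday=0 ... Sunday=6 in Python
        return "Sunday"
    if d.weekday() == 5:
        return "Saturday"
    return "Weekday"

print(service_day(date(2013, 8, 31)))    # a Saturday, so "Saturday"
print(service_day(date(2013, 7, 4)))     # holiday, so "Sunday"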
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8564203,"math_prob":0.7473047,"size":1885,"snap":"2019-35-2019-39","text_gpt3_token_len":493,"char_repetition_ratio":0.11642743,"word_repetition_ratio":0.36190477,"special_character_ratio":0.2668435,"punctuation_ratio":0.12064343,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96058947,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-22T14:40:26Z\",\"WARC-Record-ID\":\"<urn:uuid:529bbffc-a3d1-40ad-a5b4-d04684c33070>\",\"Content-Length\":\"105952\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:648b7ee9-6a6b-4bf7-a22b-ce7f85eb0f15>\",\"WARC-Concurrent-To\":\"<urn:uuid:15fca98f-f814-486a-ac95-c48163811063>\",\"WARC-IP-Address\":\"204.93.79.205\",\"WARC-Target-URI\":\"https://community.tableau.com/thread/130392\",\"WARC-Payload-Digest\":\"sha1:CEZ65525K3GHG56DZSY6G6IJGZOMXBB7\",\"WARC-Block-Digest\":\"sha1:BT2IEYCRNR22C2O7IPKOGKD4WDXMJUSQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575515.93_warc_CC-MAIN-20190922135356-20190922161356-00500.warc.gz\"}"}
https://www.math-only-math.com/worksheet-on-understanding-matrix.html
[ "# Worksheet on Understanding Matrix\n\nPractice the questions given in the Worksheet on understanding matrix.\n\n1. For the matrix A = $$\\begin{bmatrix} 6 & 13\\\\ -5 & -7\\\\ 2 & 4 \\end{bmatrix}$$, answer the following.\n\n(i) What is the order of the matrix A?\n\n(ii) Find the (2, 1)th, (1, 2)th and (3, 2)th elements.\n\n(iii) Is it a rectangular matrix or a square matrix?\n\n2. (i) A matrix has 4 elements. Write the possible orders of the matrix.\n\n(ii) A matrix has 11 elements. Write the possible orders of the matrix.\n\n(iii) A matrix has 3 rows and 2 columns, what is the number of elements in the matrix?\n\n3. (i) If $$\\begin{bmatrix} 10 & x\\\\ -5 & 2 \\end{bmatrix}$$ = $$\\begin{bmatrix} 10 & 1\\\\ -5 & 2 \\end{bmatrix}$$, Find the value of x.\n\n(ii) If $$\\begin{bmatrix} 1 & a + b\\\\ -4 & 3\\\\ a - b & 2 \\end{bmatrix}$$ = $$\\begin{bmatrix} 1 & -2\\\\ -4 & 3\\\\ 1 & 2 \\end{bmatrix}$$, find a and b.\n\n(iii) If $$\\begin{bmatrix} 2x + y & 1\\\\ 3 & x – 3y \\end{bmatrix}$$ = $$\\begin{bmatrix} 1 & 1\\\\ 3 & 2 \\end{bmatrix}$$, Find the value of x and y.\n\nAnswers for the Worksheet on Understanding Matrix are given below.\n\n1. (i) The order of the matrix A is 3 × 2.\n\n(ii) (2, 1)th element = -5;\n\n(1, 2)th element = 13;\n\n(3, 2)th element = 4.\n\n(iii) The matrix A has 3 rows and 2 columns. The number of rows ≠ the number of columns. Therefore the matrix A is rectangular matrix.\n\n2. (i) The possible orders of the matrix are: 1 × 4; 4 × 1 and 2 × 2.\n\n(ii) The possible orders of the matrix are: 11 × 1 and 1 × 11.\n\n(iii) The number of elements in the matrix = 3 × 2 = 6.\n\n3. (i) The value of x = 1\n\n(ii)  The value of a = -½ and b = -$$\\frac{3}{2}$$\n\n(iii) The value of x = $$\\frac{5}{7}$$ and y = -$$\\frac{3}{7}$$." ]
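For 3(iii), the working behind the answer quoted above: equating corresponding entries of the two matrices gives the simultaneous equations

$$2x + y = 1 \quad \text{and} \quad x - 3y = 2.$$

Multiplying the first equation by 3 and adding the second eliminates y: $$7x = 5 \Rightarrow x = \frac{5}{7}, \qquad y = 1 - 2x = 1 - \frac{10}{7} = -\frac{3}{7},$$ agreeing with the values given.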
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7131985,"math_prob":1.0000061,"size":1957,"snap":"2020-34-2020-40","text_gpt3_token_len":680,"char_repetition_ratio":0.1781874,"word_repetition_ratio":0.07672634,"special_character_ratio":0.3852836,"punctuation_ratio":0.122969836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999905,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T04:51:55Z\",\"WARC-Record-ID\":\"<urn:uuid:4ff35f2c-1c0a-4bf6-be60-f3e9f74e5508>\",\"Content-Length\":\"30115\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f123eac-6975-41b8-a6b6-be9c3e8dea96>\",\"WARC-Concurrent-To\":\"<urn:uuid:d87adadd-17fe-4e67-8c04-2901c4f0ae96>\",\"WARC-IP-Address\":\"173.247.219.53\",\"WARC-Target-URI\":\"https://www.math-only-math.com/worksheet-on-understanding-matrix.html\",\"WARC-Payload-Digest\":\"sha1:EHJTVIW7B5D5FNWXT2LYLTK7NOWZU5ZY\",\"WARC-Block-Digest\":\"sha1:WVNSRL43IFF5VOLAVJ3N7ZPYEQ3XEOOZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400193391.9_warc_CC-MAIN-20200920031425-20200920061425-00113.warc.gz\"}"}
https://www.researchandmarkets.com/reports/2176725/mathematical_modeling_in_science_and_engineering
[ "", null, "+353-1-416-8900REST OF WORLD\n+44-20-3973-8888REST OF WORLD\n1-917-300-0470EAST COAST U.S\n1-800-526-8630U.S. (TOLL FREE)\n\nPRINTER FRIENDLY\n\n# Mathematical Modeling in Science and Engineering. An Axiomatic Approach\n\n• ID: 2176725\n• Book\n• March 2012\n• Region: Global\n• 264 Pages\n• John Wiley and Sons Ltd\n1 of 3\nA powerful, unified approach to mathematical and computational modeling in science and engineering\n\nMathematical and computational modeling makes it possible to predict the behavior of a broad range of systems across a broad range of disciplines. This text guides students and professionals through the axiomatic approach, a powerful method that will enable them to easily master the principle types of mathematical and computational models used in engineering and science. Readers will discover that this axiomatic approach not only enables them to systematically construct effective models, it also enables them to apply these models to any macroscopic physical system.\n\nMathematical Modeling in Science and Engineering focuses on models in which the processes to be modeled are expressed as systems of partial differential equations. It begins with an introductory discussion of the axiomatic formulation of basic models, setting the foundation for further topics such as:\n\n• Mechanics of classical and non–classical continuous systems\n\n• Solute transport by a free fluid\n\n• Flow of a fluid in a porous medium\n\n• Multiphase systems\n\n• Enhanced oil recovery\n\n• Fluid mechanics\n\nThroughout the text, diagrams are provided to help readers visualize and better understand complex mathematical concepts. A set of exercises at the end of each chapter enables readers to put their new modeling skills into practice. There is also a bibliography in each chapter to facilitate further investigation of individual topics.\n\nMathematical Modeling in Science and Engineering is ideal for both students and professionals across the many disciplines of science and engineering that depend on mathematical and computational modeling to predict and understand complex systems.\n\nNote: Product cover images may vary from those shown\n2 of 3\nPreface xiii\n\n1 AXIOMATIC FORMULATION OF THE BASIC MODELS 1\n\n1.1 Models 1\n\n1.2 Microscopic and macroscopic physics 2\n\n1.3 Kinematics of continuous systems 3\n\n1.3.1 Intensive properties 6\n\n1.3.2 Extensive properties 8\n\n1.4 Balance equations of extensive and intensive properties 9\n\n1.4.1 Global balance equations 9\n\n1.4.2 The local balance equations 10\n\n1.4.3 The role of balance conditions in the modeling of continuous systems 13\n\n1.4.4 Formulation of motion restrictions by means of balance equations 14\n\n1.5 Summary 16\n\n2 MECHANICS OF CLASSICAL CONTINUOUS SYSTEMS 23\n\n2.1 One–phase systems 23\n\n2.2 The basic mathematical model of one–phase systems 24\n\n2.3 The extensive/intensive properties of classical mechanics 25\n\n2.4 Mass conservation 26\n\n2.5 Linear momentum balance 27\n\n2.6 Angular momentum balance 29\n\n2.7 Energy concepts 32\n\n2.8 The balance of kinetic energy 33\n\n2.9 The balance of internal energy 34\n\n2.10 Heat equivalent of mechanical work 35\n\n2.11 Summary of basic equations for solid and fluid mechanics 35\n\n2.12 Some basic concepts of thermodynamics 36\n\n2.12.1 Heat transport 36\n\n2.13 Summary 38\n\n3 MECHANICS OF NON–CLASSICAL CONTINUOUS SYSTEMS 45\n\n3.1 Multiphase systems 45\n\n3.2 The basic mathematical model of multiphase systems 46\n\n3.3 Solute transport in a free fluid 47\n\n3.4 
Transport by fluids in porous media 49\n\n3.5 Flow of fluids through porous media 51\n\n3.6 Petroleum reservoirs: the black–oil model 52\n\n3.6.1 Assumptions of the black–oil model 53\n\n3.6.2 Notation 53\n\n3.6.3 Family of extensive properties 54\n\n3.6.4 Differential equations and jump conditions 55\n\n3.7 Summary 57\n\n4 SOLUTE TRANSPORT BY A FREE FLUID 63\n\n4.1 The general equation of solute transport by a free fluid 64\n\n4.2 Transport processes 65\n\n4.2.2 Diffusion processes 65\n\n4.3 Mass generation processes 66\n\n4.4 Differential equations of diffusive transport 67\n\n4.5 Well–posed problems for diffusive transport 69\n\n4.5.1 Time–dependent problems 70\n\n4.6 First–order irreversible processes 71\n\n4.7 Differential equations of non–diffusive transport 73\n\n4.8 Well–posed problems for non–diffusive transport 73\n\n4.8.1 Well–posed problems in one spatial dimension 74\n\n4.8.2 Well–posed problems in several spatial dimensions 79\n\n4.8.3 Well–posed problems for steady–state models 80\n\n4.9 Summary 80\n\n5 FLOW OF A FLUID IN A POROUS MEDIUM 85\n\n5.1 Basic assumptions of the flow model 85\n\n5.2 The basic model for the flow of a fluid through a porous medium 86\n\n5.3 Modeling the elasticity and compressibility 87\n\n5.3.1 Fluid compressibility 87\n\n5.3.2 Pore compressibility 88\n\n5.3.3 The storage coefficient 90\n\n5.4 Darcy′s law 90\n\n5.5 Piezometric level 92\n\n5.6 General equation governing flow through a porous medium 94\n\n5.6.1 Special forms of the governing differential equation 95\n\n5.7 Applications of the jump conditions 96\n\n5.8 Well–posed problems 96\n\n5.8.2 Time–dependent problems 99\n\n5.9 Models with a reduced number of spatial dimensions 99\n\n5.9.1 Theoretical derivation of a 2–D model for a confined aquifer 100\n\n5.9.2 Leaky aquitard method 102\n\n5.9.3 The integrodifferential equations approach 104\n\n5.9.4 Other 2–D aquifer models 108\n\n5.10 Summary 111\n\n6 SOLUTE TRANSPORT IN A POROUS MEDIUM 117\n\n6.1 Transport processes 118\n\n6.2 Non–conservative processes 118\n\n6.2.1 First–order irreversible processes 119\n\n6.3 Dispersion–diffusion 121\n\n6.4 The equations for transport of solutes in porous media 123\n\n6.5 Well–posed problems 125\n\n6.6 Summary 125\n\n7 MULTIPHASE SYSTEMS 129\n\n7.1 Basic model for the flow of multiple–species transport in a multiple–fluid– phase porous medium 129\n\n7.2 Modeling the transport of species i in phase a 130\n\n7.3 The saturated flow case 133\n\n7.4 The air–water system 137\n\n7.5 The immobile air unsaturated flow model 142\n\n7.6 Boundary conditions 143\n\n7.7 Summary 145\n\n8 ENHANCED OIL RECOVERY 149\n\n8.1 Background on oil production and reservoir modeling 149\n\n8.2 Processes to be modeled 151\n\n8.3 Unified formulation of EOR models 151\n\n8.4 The black–oil model 152\n\n8.5 The Compositional Model 156\n\n8.6 Summary 160\n\n9 LINEAR ELASTICITY 165\n\n9.1 Introduction 165\n\n9.2 Elastic Solids 166\n\n9.3 The Linear Elastic Solid 167\n\n9.4 More on the Displacement Field Decomposition 170\n\n9.5 Strain Analysis 171\n\n9.6 Stress Analysis 173\n\n9.7 Isotropic materials 175\n\n9.8 Stress–strain relations for isotropic materials 177\n\n9.9 The governing differential equations 179\n\n9.9.1 Elastodynamics 180\n\n9.9.2 Elastostatics 180\n\n9.10 Well–posed problems 181\n\n9.10.1 Elastostatics 181\n\n9.10.2 Elastodynamics 181\n\n9.11 Representation of solutions for isotropic elastic solids 182\n\n9.12 Summary 183\n\n10 FLUID MECHANICS 189\n\n10.1 Introduction 189\n\n10.2 Newtonian fluids: Stokes′ constitutive 
equations 190\n\n10.3 Navier–Stokes equations 192\n\n10.4 Complementary constitutive equations 193\n\n10.5 The concepts of incompressible and inviscid fluids 193\n\n10.6 Incompressible fluids 194\n\n10.7 Initial and boundary conditions 195\n\n10.8 Viscous incompressible fluids: steady states 196\n\n10.9 Linearized theory of incompressible fluids 196\n\n10.10 Ideal fluids 197\n\n10.11 Irrotational flows 198\n\n10.12 Extension of Bernoulli′s relations to compressible fluids 199\n\n10.13 Shallow–water theory 200\n\n10.14 Inviscid compressible fluids 202\n\n10.14.1 Small perturbations in a compressible fluid: the theory of sound 203\n\n10.14.2 Initiation of motion 204\n\n10.14.3 Discontinuous models and shock conditions 206\n\n10.15 Summary 208\n\nA: PARTIAL DIFFERENTIAL EQUATIONS 211\n\nA.1 Classification 211\n\nA.2 Canonical forms 213\n\nA.3 Well–posed problems 213\n\nA.3.1 Boundary–value problems: the elliptic case 214\n\nA.3.2 Initial–boundary–value problems 214\n\nB: SOME RESULTS FROM THE CALCULUS 217\n\nB.1 Notation 217\n\nB.2 Generalized Gauss Theorem 218\n\nC: PROOF OF THEOREM 221\n\nD: THE BOUNDARY LAYER INCOMPRESSIBILITY APPROXIMATION 225\n\nE: INDICIAL NOTATION 229\n\nE.1 General 229\n\nE.2 Matrix algebra 230\n\nE.3 Applications to differential calculus 232\n\nIndex 235\n\nNote: Product cover images may vary from those shown", null, "", null, "" ]
[ null, "https://d386vep05x5edh.cloudfront.net/images/logo-research-and-markets-print.png", null, "https://d386vep05x5edh.cloudfront.net/images/loading-2.gif", null, "https://d.adroll.com/p/YTRQWRCPDJCNFIYRA73XHU/", null, "https://d.adroll.com/ipixel/YTRQWRCPDJCNFIYRA73XHU/FGN6SDTHFRHT5M7ZWSEEZP", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7700996,"math_prob":0.8920205,"size":7537,"snap":"2020-24-2020-29","text_gpt3_token_len":2114,"char_repetition_ratio":0.12385504,"word_repetition_ratio":0.008733625,"special_character_ratio":0.28353456,"punctuation_ratio":0.13035144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9576022,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T15:01:41Z\",\"WARC-Record-ID\":\"<urn:uuid:d04dbc70-710f-41cd-9233-89dc1ba252c4>\",\"Content-Length\":\"150028\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96e6f8d6-d878-48a2-af9d-0f4be626c37f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6743ca7-e94a-4988-934f-a405e054dece>\",\"WARC-IP-Address\":\"199.232.66.49\",\"WARC-Target-URI\":\"https://www.researchandmarkets.com/reports/2176725/mathematical_modeling_in_science_and_engineering\",\"WARC-Payload-Digest\":\"sha1:BTCV2GHCI27NGYEBI7QS3QKBGCIKYQN3\",\"WARC-Block-Digest\":\"sha1:TMS66PICREONZTNIBHYWOOUSSZ6C5RTW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886178.40_warc_CC-MAIN-20200704135515-20200704165515-00402.warc.gz\"}"}
https://testbook.com/question-answer/a-pointer-is-connected-to-the-spindle-of-a-dynamom--60118e8bea000cf17afe5461
[ "# A pointer is connected to the spindle of a dynamometer type phase angle meter. The two light coils of the phase angle meter mounted on the spindle\n\nThis question was previously asked in\nDRDO EE 2008 Official Paper\nView all DRDO RAC Papers >\n1. carry equal amount of currents in phase with each other and develop torques opposing each other\n2. carry unequal amount of currents in phase with each other and develop torques opposing each other\n3. carry equal amount of currents at quadrature to each other and develop torques opposing each other\n4. carry the load current and develop the magnetic field required for the meter\n\nOption 1 : carry equal amount of currents in phase with each other and develop torques opposing each other\n\n## Detailed Solution\n\nConcept:\n\nElectrodynamic Power Factor Meter:\n\n• It is also known as Dynamometer phase angle meter and Dynamometer power factor meter.\n• It measures the power factor or cosine of the phase angle between voltage and current.\n• There are 2 stationary coils (SC) also called as current coil, that are connected in series to the load.\n• The current coil produces a magnetic field proportional to the current.\n• There are 2 moving coil (MC) also called as voltage or pressure coil, that are connected parallel to load.\n• One moving coil is connected with a high resistor while another with a high inductor. These 2 coils make separation of 90° electrical.\n• The pointer is connected to the moving coil.\n• During the phase angle or power factor measurement, the values of R and L are so adjusted that R = ωL, so that both coil carry equal current.\n• The coils are arranged in such a way that 2 equal and opposite torques are produced and the pointer shows desired result.\n• Therefore, there is no requirement of a controlling system.", null, "Free\nST 1: Logical reasoning\n5361\n20 Questions 20 Marks 20 Mins" ]
[ null, "https://storage.googleapis.com/tb-img/production/21/03/F5_Shweta%20G_19-3-2021_Swati_D7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.842125,"math_prob":0.9227452,"size":6936,"snap":"2021-43-2021-49","text_gpt3_token_len":1679,"char_repetition_ratio":0.12189844,"word_repetition_ratio":0.3084112,"special_character_ratio":0.22102076,"punctuation_ratio":0.08819018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98950183,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T12:28:28Z\",\"WARC-Record-ID\":\"<urn:uuid:3d8435f0-3363-48e2-95b0-0524a1f82764>\",\"Content-Length\":\"124910\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae4ab05d-48cd-4dad-ba58-4ba728e0f76d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a0476bc-7eec-425c-a5ba-6f1963d32b0e>\",\"WARC-IP-Address\":\"172.67.30.170\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/a-pointer-is-connected-to-the-spindle-of-a-dynamom--60118e8bea000cf17afe5461\",\"WARC-Payload-Digest\":\"sha1:7W6CH5M3MLEBPBPLJKDADFTFMS3JSRM4\",\"WARC-Block-Digest\":\"sha1:7TUILMIKDLGFLGMC5MBHQHOMR5EEISWE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363376.49_warc_CC-MAIN-20211207105847-20211207135847-00639.warc.gz\"}"}
https://deepai.org/publication/quantum-linear-system-solver-based-on-time-optimal-adiabatic-quantum-computing-and-quantum-approximate-optimization-algorithm
[ "", null, "# Quantum linear system solver based on time-optimal adiabatic quantum computing and quantum approximate optimization algorithm\n\nWe demonstrate that with an optimally tuned scheduling function, adiabatic quantum computing (AQC) can solve a quantum linear system problem (QLSP) with O(κ/ϵ) runtime, where κ is the condition number, and ϵ is the target accuracy. This achieves the optimal time complexity with respect to κ. The success of the time-optimal AQC implies that the quantum approximate optimization algorithm (QAOA) can also achieve the O(κ) complexity with respect to κ. Our method is applicable to general non-Hermitian matrices (possibly dense), but the efficiency can be improved when restricted to Hermitian matrices, and further to Hermitian positive definite matrices. Numerical results indicate that QAOA can yield the lowest runtime compared to the time-optimal AQC, vanilla AQC, and the recently proposed randomization method. The runtime of QAOA is observed numerically to be only O(κpoly(log(1/ϵ))).\n\n## Authors\n\n##### This week in AI\n\nGet the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.\n\n## References\n\n• Harrow et al. (2009) A. W. Harrow, A. Hassidim,  and S. Lloyd, Phys. Rev. Lett. 103, 150502 (2009).\n• Childs et al. (2017) A. M. Childs, R. Kothari,  and R. D. Somma, SIAM J. Comput. 46, 1920 (2017).\n• Chakraborty et al. (2018) S. Chakraborty, A. Gilyén,  and S. Jeffery, arXiv:1804.01973  (2018).\n• Gilyén et al. (2019) A. Gilyén, Y. Su, G. H. Low,  and N. Wiebe, in\n\nProceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing\n\n(2019) pp. 193–204.\n• Subaşı et al. (2019) Y. Subaşı, R. D. Somma,  and D. Orsucci, Phys. Rev. Lett. 122, 060504 (2019).\n• Wossnig et al. (2018) L. Wossnig, Z. Zhao,  and A. Prakash, Phys. Rev. Lett. 120, 050502 (2018).\n• Saad (2003) Y. Saad, Iterative methods for sparse linear systems, Vol. 82 (SIAM, 2003).\n• Liu (1992) J. Liu, SIAM Rev. 34, 82 (1992).\n• Low and Chuang (2017) G. H. Low and I. L. Chuang, Phys. Rev. Lett. 118, 010501 (2017).\n• Jansen et al. (2007) S. Jansen, M.-B. Ruskai,  and R. Seiler, J. Math. Phys. 48, 102111 (2007).\n• Albash and Lidar (2018) T. Albash and D. A. Lidar, Rev. Mod. Phys. 90, 015002 (2018).\n• Boixo et al. (2009) S. Boixo, E. Knill,  and R. D. Somma, Quantum Info. Comput. 9, 833 (2009).\n• Ambainis (2012) A. Ambainis, in STACS’12 (29th Symposium on Theoretical Aspects of Computer Science), Vol. 14 (2012) pp. 636–647.\n• Roland and Cerf (2002) J. Roland and N. J. Cerf, Phys. Rev. A 65, 042308 (2002).\n• Rezakhani et al. (2009) A. T. Rezakhani, W.-J. Kuo, A. Hamma, D. A. Lidar,  and P. Zanardi, Phys. Rev. Lett. 103, 080502 (2009).\n• Farhi et al. (2014) E. Farhi, J. Goldstone,  and S. Gutmann, arXiv:1411.4028  (2014).\n• Zhu and Rabitz (1998) W. Zhu and H. Rabitz, J. Chem. Phys. 109, 385 (1998).\n• Maday and Turinici (2003) Y. Maday and G. Turinici, J. Chem. Phys. 118, 8191 (2003).\n• Yang et al. (2017) Z.-C. Yang, A. Rahmani, A. Shabani, H. Neven,  and C. Chamon, Phys. Rev. X 7, 021027 (2017).\n• Bao et al. (2018) S. Bao, S. Kleer, R. Wang,  and A. Rahmani, Phys. Rev. A 97, 062343 (2018).\n• Bukov et al. (2018) M. Bukov, A. G. Day, D. Sels, P. Weinberg, A. Polkovnikov,  and P. Mehta, Phys. Rev. X 8, 031086 (2018).\n• Niu et al. (2019) M. Y. Niu, S. Boixo, V. N. Smelyanskiy,  and H. Neven, npj Quantum Info. 
5, 33 (2019).\n\n## I The gap of H(f(s)) for the Hermitian positive definite A\n\nThe Hamiltonian can be written in the block matrix form as\n\n H(f)=(0((1−f)I+fA)QbQb((1−f)I+fA)0). (S1)\n\nLet be an eigenvalue of , then\n\n 0 =det(λI−((1−f)I+fA)Qb−Qb((1−f)I+fA)λI) =det(λ2I−((1−f)I+fA)Q2b((1−f)I+fA))\n\nwhere the second equality holds because the bottom two blocks are commutable. Thus is an eigenvalue of , and equals the smallest non-zero eigenvalue of . Applying a proposition of matrices that and have the same non-zero eigenvalues, also equals the smallest non-zero eigenvalue of .\n\nNow we focus on the matrix . Note that is the unique eigenstate corresponding to the eigenvalue 0, all eigenstates corresponding to non-zero eigenvalues must be orthogonal with . Therefore\n\n Δ2(f) =inf⟨b|φ⟩=0,⟨φ|φ⟩=1⟨φ∣∣Qb((1−f)I+fA)2Qb∣∣φ⟩ =inf⟨b|φ⟩=0,⟨φ|φ⟩=1⟨φ∣∣((1−f)I+fA)2∣∣φ⟩ ≥inf⟨φ|φ⟩=1⟨φ∣∣((1−f)I+fA)2∣∣φ⟩ =(1−f+f/κ)2,\n\nand .\n\n## Ii Relations among Different Measurements of Accuracy\n\nThe quantum adiabatic theorem (Jansen et al., 2007, Theorem 3) states that for any ,\n\nWe will show that also serves as an error bound for the density distance and bounds the fidelity from below.\n\nNote that is the eigenstate for both and corresponding the 0 eigenvalue, we have , and thus . Together with the initial condition , the overlap of and remains to be 0 for the whole time period, i.e.  Since is a rank-2 projector, we have . Therefore the error used in the adiabatic theorem becomes\n\nSince is exactly the fidelity , the fidelity can be bounded from below by .\n\nFurthermore, by using , the distance between and can be bounded by the error of the fidelity as\n\n ∥|ψT(s)⟩⟨ψT(s)|−|˜x(s)⟩⟨˜x(s)|∥22 ≤ ∥|ψT(s)⟩⟨ψT(s)|−|˜x(s)⟩⟨˜x(s)|∥2F =\n\nwhich implies\n\n## Iii Proof of Theorem 1 and Theorem 2\n\nThe proof of Theorem 1 and Theorem 2 can be completed by carefully analyzing the -dependence of each term in given in Eq. (3). Note that in both cases , and we introduce a constant with for the proof of Theorem 1 and for the proof of Theorem 2 due to the different scaling parameter of . We first compute the derivatives of\n\nby chain rule as\n\n H(1)(s)=ddsH(f(s))=dH(f(s))dfdf(s)ds=(H1−H0)cpΔp∗(f(s)),\n\nand\n\n H(2)(s) =ddsH(1)(s)=dds((H1−H0)cpΔp∗(f(s))) =(H1−H0)cppΔp−1∗(f(s))dΔ∗(f(s))dfdf(s)ds =c′(−1+1/κ)(H1−H0)c2ppΔ2p−1∗(f(s)).\n\nThen the first two terms of can be rewritten as\n\n ∥H(1)(0)∥2TΔ2(0)+∥H(1)(s)∥2TΔ2(f(s))≤∥H(1)(0)∥2TΔ2∗(0)+∥H(1)(s)∥2TΔ2∗(f(s)) = ∥(H1−H0)cpΔp∗(f(0))∥2TΔ2∗(0)+∥(H1−H0)cpΔp∗(f(s))∥2TΔ2∗(f(s)) ≤ CT(cpΔp−2∗(0)+cpΔp−2∗(f(s))) ≤ CT(cpΔp−2∗(0)+cpΔp−2∗(1))\n\nHere stands for a general positive constant independent of . 
To compute the remaining two terms of , we use the following change of variable\n\n u=f(s′),du=dds′f(s′)ds′=cpΔp∗(f(s′))ds′,\n\nand the last two terms of become\n\n 1T∫s0∥H(2)∥2Δ2ds′≤1T∫s0∥H(2)∥2Δ2∗ds′ = 1T∫s0∥c′(−1+1/κ)(H1−H0)c2ppΔ2p−1∗(f(s′))∥2Δ2∗(f(s′))ds′ = 1T∫f(s)0∥c′(−1+1/κ)(H1−H0)c2ppΔ2p−1∗(u)∥2Δ2∗(u)ducpΔp∗(u) ≤ CT((1−1/κ)cp∫f(s)0Δp−3∗(u)du) ≤ CT((1−1/κ)cp∫10Δp−3∗(u)du),\n\nand similarly\n\n 1T∫s0∥H(1)∥22Δ3ds′≤1T∫s0∥H(1)∥22Δ3∗ds′ = 1T∫s0∥(H1−H0)cpΔp∗(f(s′))∥22Δ3∗(f(s′))ds′ = 1T∫f(s)0∥(H1−H0)cpΔp∗(u)∥22Δ3∗(u)ducpΔp∗(u) ≤ CT(cp∫f(s)0Δp−3∗(u)du) ≤ CT(cp∫10Δp−3∗(u)du).\n\nSummarize all terms above, an upper bound of is\n\n η(s) ≤CT{(cpΔp−2∗(0)+cpΔp−2∗(1))+((1−1/κ)cp∫10Δp−3∗(u)du)+(cp∫10Δp−3∗(u)du)} =CT{c′p−2(cp+cpκ2−p)+((1−1/κ)cp∫10Δp−3∗(u)du)+(cp∫10Δp−3∗(u)du)}.\n\nFinally, since for\n\n cp=∫10Δ−p∗(u)du=c′−pp−1κκ−1(κp−1−1),\n\nand\n\n ∫10Δp−3∗(u)du=c′p−32−pκκ−1(κ2−p−1),\n\nwe have\n\n η(s)≤ CT{κκ−1(κp−1−1)+κκ−1(κ−κ2−p) +κκ−1(κp−1−1)(κ2−p−1)+(κκ−1)2(κp−1−1)(κ2−p−1)}.\n\nThe leading term of the bound is when .\n\nNow we consider the limiting case when . Note that the bound for can still be written as\n\n η(s) ≤CT{(cpΔp−2∗(0)+cpΔp−2∗(1))+((1−1/κ)cp∫10Δp−3∗(u)du)+(cp∫10Δp−3∗(u)du)} =CT{c′p−2(cp+cpκ2−p)+(1−1/κ)cpc3−p+cpc3−p}.\n\nStraightforward computation shows that\n\n c1=∫10Δ−1∗(u)du=1c′κκ−1log(κ)\n\nand\n\n c2=∫10Δ−2∗(u)du=1c′2κκ−1(κ−1).\n\nHence when ,\n\n η(s)≤CT{c′p−2(cp+cpκ2−p)+(1−1/κ)c1c2+c1c2}≤Cκlog(κ)T.\n\nThis completes the proof of Theorem 1 and Theorem 2.\n\n## Iv Details of the numerical examples\n\nFor concreteness, for the Hermitian positive definite example, we choose . Here\n\nis an orthogonal matrix obtained by Gram-Schmidt orthogonalization (implemented via a QR factorization) of the discretized periodic Laplacian operator given by\n\n L=⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝1−0.5−0.5−0.51−0.5−0.51−0.5⋱⋱⋱−0.51−0.5−0.5−0.51⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠. (S2)\n\nis chosen to be a diagonal matrix with diagonals uniformly distributed in\n\n. More precisely, with . Such construction ensures to be a Hermitian positive definite matrix which satisfies and the condition number of is . We choose where is the set of the column vectors of . Here .\n\nFor the non-Hermitian positive definite example, we choose . Here and are the same as those in the Hermitian positive definite case, except that the dimension is reduced to . is an orthogonal matrix obtained by Gram-Schmidt orthogonalization of the matrix\n\n K=⎛⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜⎝2−0.5−0.5−0.52−0.5−0.52−0.5⋱⋱⋱−0.52−0.5−0.5−0.52⎞⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟⎠. (S3)\n\nSuch construction ensures to be non-Hermitian, satisfying and the condition number of is . We choose the same as that in the Hermitian positive definite example." ]
[ null, "https://deepai.org/static/images/logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7927402,"math_prob":0.99283916,"size":5321,"snap":"2021-43-2021-49","text_gpt3_token_len":1676,"char_repetition_ratio":0.12206131,"word_repetition_ratio":0.046025105,"special_character_ratio":0.33095282,"punctuation_ratio":0.24512987,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99607676,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T12:37:53Z\",\"WARC-Record-ID\":\"<urn:uuid:3968dac6-b557-4f91-9642-8873fc8ce64d>\",\"Content-Length\":\"643960\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ee057ab-4b44-4875-93a2-5da273193b68>\",\"WARC-Concurrent-To\":\"<urn:uuid:c89e03b6-2a95-4eec-b45d-c8c8530e764f>\",\"WARC-IP-Address\":\"54.203.32.116\",\"WARC-Target-URI\":\"https://deepai.org/publication/quantum-linear-system-solver-based-on-time-optimal-adiabatic-quantum-computing-and-quantum-approximate-optimization-algorithm\",\"WARC-Payload-Digest\":\"sha1:DVHQ4NNNIGQQCLX7LV53VM5REJLXTZRC\",\"WARC-Block-Digest\":\"sha1:7B7BQB45EQKB2KPWZYRH6IGGW7E3AOKT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588284.71_warc_CC-MAIN-20211028100619-20211028130619-00533.warc.gz\"}"}
https://www.himanshu-rai.com/artificial-neural-networks-part2-feed-forward-and-backpropagation
[ "# Artificial Insights\n\nDeep dive to unravel the mystery of Neural Networks, stepping outside the theoretical boundaries.", null, "A lot has been said and written about Neural Networks (NNs) in recent years — right from the concept of Perceptron to the complex Multilayer Architecture of Neurons. This article is an attempt to demystify the two fundamental algorithms, Feed-forward and Back-propagation, that enable the working of a Neural Network. These techniques have been explained in their simplest form using Microsoft Excel.\n\nThe example taken into consideration is really basic and far from the real-world example. The intention here is to keep it simple and intuitive, to understand the working logic, rather than focusing on the complex mathematics behind it.\n\nTo begin with, I have considered just one input vector V= [X1=1, X2= 0, X3=1, X4=0] with a single hidden layer consisting of 3 neurons and an output layer. The target output is 1.", null, "Neural Network with One Hidden Layer\n\nNetwork set up-\n\nInput and Output- As an example, let us say we expect the algorithm to give an output of ‘1’ for an indicated non-zero value for ‘X1’ & ‘X3’ (say, 1) and zero for ‘X2’ & ‘X4’. So, the input vector considered here is [1,0,1,0].", null, "Input and Output\n\n# Feed-forward:\n\n## Step 1: Initialize network parameters\n\nFirst step is to initialize weights and biases using rand() function in MS Excel.\n\n(P.S: The highlighted cells in all tables below represent derived values based on the suggested formulae)", null, "Weights and Biases — Input to Hidden Layer", null, "Weights and Biases — Hidden to Output Layer\n\n## Step 2: Calculate Net Input at hidden layer nodes\n\nNet Input is nothing but input multiplied by weight, then incremented by bias. Using matrix multiplication of input vector [1x4] and weights [4x3], the resultant matrix is of dimension [1X3]. To get this working in excel, use \\=SUMPRODUCT() to arrive at resultant matrix [1X3] as below-", null, "Input times Weight\n\nNow, add biases to these Input times weight,", null, "Net Input\n\n## Step 3: Pass the Net Inputs through the activation function (Sigmoid)\n\nLet’s pass the output from ‘Step 2’ [1.33,1.30,0.99] as input to the activation function at each neuron of hidden layer as [f(1.33),f(1.30),f(0.99)], which can easily be done by f(x) = 1/(1+exp (-x)) .", null, "Output at Hidden Layer\n\n## Step 4: Calculate Net Input at output node\n\nNow the outputs from ‘Step 3’ [0.79,0.79,0.73] will act as inputs to the output node. Let’s repeat ‘Step 2’ with input vector as [0.79,0.79,0.73], weight vector as [0.71,0.16,0.57] and output bias as[0.83].", null, "Net Input at Output Node\n\nwhich after simplification = 1.93\n\n## Step 5: Obtain final output of neural network\n\nLet’s pass the output received from ‘Step 4’ [1.93] to the activation function as f(1.93), which again can be calculated using f(x)=1/(1+exp(-x)), thus resulting in the final output of neural network.\n\n\\=1/(1+exp(-1.93))\n\nFeed-forward Network Output =0.87\n\n# Back-propagation:\n\nOnce the output from Feed-forward is obtained, the next step is to assess the output received from the network by comparing it with the target outcome.\n\nNow, one obvious thing that’s in control of the NN designer are the weights and biases (also called parameters of network). 
So, the challenge here is to find the optimal weights and biases that can minimise the sum of square error: E=1/2 ∑ (Network output-Target output)² received by the network, which in this case = [0.5*(-0.13)²] = 0.00798\n\nWe need to look at the error contributed by each of these weights and biases individually and then keep updating them accordingly to reduce the error. This process will be iterated till the convergence. The network will be called a trained network once an optimal is reached. Let’s start implementing this theory in Excel.\n\n## Step 1: Update weights [wH1, wH2, wH3]\n\nCalculate derivative of error function E with respect to weights [wH1, wH2, wH3] using chain rule (I’ll skip the derivation here), which after simplification is equal to the product of", null, "where, Derivative of sigmoid function f(x) = [f(x)\\(1-f(x)]*\n\nSo, [d(E)/d(wH1), d(E)/d(wH2), d(E)/d(wH3)] =", null, "Derivative of E with respect to WH1, WH2 and WH3 respectively\n\nThe new updated weights will be [Initial weights] — [{(learning rate)* [d(E)/d(wH1), d(E)/d(wH2), d(E)/d(wH3)]}], where, learning rate is assumed to be 0.5", null, "Updated weights after 1st Iteration\n\n## Step 2 — Update bias BO at Output node\n\nFor bias, calculate the derivative of error function E with respect to bias BO using chain rule, which after simplification is equal to the product of", null, "So, d(E)/d(BO)=", null, "Derivative of Error E with respect to output node bias\n\nNew updated Bias [BO]new = [Initial Bias] — [learning rate *{d(E)/d(BO)}]", null, "Update Bias at Output node after 1st Iteration\n\n## Step 3- Update weights [w11, w12, ……w43]\n\nTo update weights at “Input to Hidden” layer, let’s calculate derivative of error E with respect to weights [W11, W12….W43] which after simplification is equal to the product of", null, "So, [d(E)/d(w11), d(E)/d(w12),……d(E)/d(w43)] =", null, "Derivatives of error E with respect to weights w11, w12,….w43\n\nNew updated weights = [Initial weights] — [learning rate * (d(E)/dwij)]", null, "Updated Weight after 1st Iteration\n\nExcel formula to be used to update the weights\n\n## Step 4: Update the bias [BH1,BH2,BH3]\n\nSame way, calculate derivative of error E with respect to the biases at hidden node using chain rule, which after simplification is equal to the product of", null, "So, [d(E)/d(BH1), d(E)/d(BH2), d(E)/d(BH3)] =", null, "Derivative of error E with respect to Bias at Hidden nodes\n\nNew updated biases = [Initial bias] — [learning rate * (d(E)/d(BHi))]", null, "Updated Hidden node Bias after 1st Iteration\n\nThus, we have completed one loop of Feed-forward and Back-propagation, Repetition of the same steps i.e. running Feed-forward again with these updated parameters will take you one step closer to the target output and once again, Back-propagation will be used to update these parameters. This cyclic process of Feed-forward and Back-Propagation will continue till the error becomes almost constant and there is not much scope of further improvement in target output.\n\nAfter 100 iterations shows the behaviour of the network as below. With each iteration, Network Output progresses towards the target output (Blue Line) with reduction in error (Red Line).", null, "Note: This blog is reproduced from the Gaurav Gupta blog." ]
[ null, "https://miro.medium.com/v2/resize:fit:875/1*USJifaaBxcvQrAofsv8K2A.jpeg", null, "https://miro.medium.com/v2/resize:fit:791/1*aIw-2s9KjEzK3hKlKHh-bg.png", null, "https://miro.medium.com/v2/resize:fit:443/1*uAbDsF1RNWn5IqkoWGNsfQ.jpeg", null, "https://miro.medium.com/v2/resize:fit:875/1*HJHEgDhYdDEt4NHAo1PbEA.png", null, "https://miro.medium.com/v2/resize:fit:761/1*qgujnFZgOxBS_26bZ710gw.png", null, "https://miro.medium.com/v2/resize:fit:875/1*liZhRdHUva5k2ZHVMAYlqA.png", null, "https://miro.medium.com/v2/resize:fit:875/1*ZYjWmB2q9YtXMPQMAjy1rw.png", null, "https://miro.medium.com/v2/resize:fit:796/1*REIYe29i5n7rBom0cV2ltw.png", null, "https://miro.medium.com/v2/resize:fit:875/1*Qb3njGVe84E68Kg3kM97vg.png", null, "https://miro.medium.com/v2/resize:fit:875/1*8BsduWZ_uq2Guhp-ATI5VQ.png", null, "https://miro.medium.com/v2/resize:fit:875/1*0jFIIW0guR3dOUwFNFkKNQ.png", null, "https://miro.medium.com/v2/resize:fit:875/1*b5kfGAzdnbBxoYdk6uUfDw.png", null, "https://miro.medium.com/v2/resize:fit:875/1*vhVljKgpuxIeTHwB76FSDw.png", null, "https://miro.medium.com/v2/resize:fit:771/1*E-MrgpRGv7V4gCgTOv0DdA.png", null, "https://miro.medium.com/v2/resize:fit:875/1*D3o2nvbXYk2IuBeK3iM7TQ.png", null, "https://miro.medium.com/v2/resize:fit:875/1*bBWzFsj6SpOwfLhERHlaqw.png", null, "https://miro.medium.com/v2/resize:fit:875/1*PJ9BDc_wCYW2sty4tMkl7w.png", null, "https://miro.medium.com/v2/resize:fit:875/1*EzVaHNij0nST4aekoxv6OA.png", null, "https://miro.medium.com/v2/resize:fit:875/1*z3PJdBvn5NZni-gHW1UtRA.png", null, "https://miro.medium.com/v2/resize:fit:875/1*i2bAeBVdeKm-Mjhs4326Jg.png", null, "https://miro.medium.com/v2/resize:fit:875/1*zc-dydND0k-HgYCa94vL1g.png", null, "https://miro.medium.com/v2/resize:fit:875/1*DflkZbY_o4DdBiyB_szefA.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89168227,"math_prob":0.98112345,"size":6454,"snap":"2023-40-2023-50","text_gpt3_token_len":1671,"char_repetition_ratio":0.12976745,"word_repetition_ratio":0.05019305,"special_character_ratio":0.26123333,"punctuation_ratio":0.11459129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99772,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T17:13:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c46a2855-894e-4a2d-83f8-29ccee5d9551>\",\"Content-Length\":\"137667\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ef756c7-e39b-4353-b944-006f2bb5e448>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d1cdebb-844b-4658-89ad-d3a84f7dd7c8>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://www.himanshu-rai.com/artificial-neural-networks-part2-feed-forward-and-backpropagation\",\"WARC-Payload-Digest\":\"sha1:NTQGY2ME5TVK3QVZ4OKTJVTVSXZBTMFI\",\"WARC-Block-Digest\":\"sha1:NV323YK2DCGJGN3DJNBJE2HCNZRW4BYQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510697.51_warc_CC-MAIN-20230930145921-20230930175921-00118.warc.gz\"}"}
http://fastformulas.com/engelecac.html
[ "", null, "# Formulas included in the Electrical Engineering (AC Circuits) Spreadsheet\n\n Average Voltage Phase Angle Difference Rectangular Voltage Voltage Voltage Regulation Form Factor Crest Factor Impedance in Parallel Impedance in Series Impedance Triangle Reactance Rectangular Impedance Admittance Conductance Rectangular Admittance Rectangular Admittance Susceptance Power Power in a Resistive Circuit Power Stored in a Capacitor Power Stored in an Inductor Radiated Power Parallel RL Circuit Series RL Circuit Parallel RC Circuit Series RC Circuit Parallel RLC Circuit Series RLC Circuit Bandwidth Energy Stored Half-Power Point Quality Factor Quality Factor in RLC Parallel Circuit Quality Factor in RLC Series Circuit Resonance High-Pass Filter Circuit Low-Pass Filter Circuit Change in Reactive Power Complex Power Power Factor Power Cost Coefficient of Coupling Induced Voltage Magnetic Flux Mutual Reactance Ideal Transformer Effective Primary Impedance Ideal Transformer Secondary Current Ideal Transformer Turns Ratio Real Transformer Power Losses Real Transformer Efficiency Two-Port Transformer Transmission Line Characteristic Impedance Transmission Line Reflection Coefficient Transmission Line Standing Wave Ratio Transmission Line Velocity Factor Gain Impedance Model Admittance Model Hybrid Model\n\n## Here is a sample image of what a portion of the Electrical Engineering (AC Circuits) Spreadsheet looks like:", null, "" ]
[ null, "http://fastformulas.com/images/close.gif", null, "http://fastformulas.com/images/big_engelecac.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56973284,"math_prob":0.68814677,"size":1233,"snap":"2021-21-2021-25","text_gpt3_token_len":280,"char_repetition_ratio":0.16761595,"word_repetition_ratio":0.0,"special_character_ratio":0.13463098,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952821,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T15:31:55Z\",\"WARC-Record-ID\":\"<urn:uuid:7d565e91-751b-4da2-8b30-eedb5c0f4f3e>\",\"Content-Length\":\"2634\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5ce1ef6-9627-460e-ab03-161367e0ba87>\",\"WARC-Concurrent-To\":\"<urn:uuid:7730b3fc-62e8-4092-a07d-82ec84856157>\",\"WARC-IP-Address\":\"74.124.218.36\",\"WARC-Target-URI\":\"http://fastformulas.com/engelecac.html\",\"WARC-Payload-Digest\":\"sha1:ACGYYFUHNAD7Y2U2ROHXEVVPJMRM46KJ\",\"WARC-Block-Digest\":\"sha1:HLLA2ZTLAYS5DPBJVAOPMIDWIL74XVBL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991428.43_warc_CC-MAIN-20210514152803-20210514182803-00342.warc.gz\"}"}
https://www.journeyingtheglobe.com/fahrenheit-to-celsius/111.5-f-to-c/
[ "111.5 f to c | 111.5 Fahrenheit to Celsius | [+ Examples]\n\n# 111.5F to C - Convert 111.5° Fahrenheit to Celsius\n\n### The answer is: 44.17 degrees Celsius or 44.17° C\n\nLet's look into the conversion between Fahrenheit and Celsius scales in detail.\n\n### Calculate 111.5° Fahrenheit to Celsius (111.5F to °C)\n\nFahrenheit\nCelsius\n111.5 Degrees Fahrenheit = 44.17 Degrees Celsius\n\nTemperature Conversion - Degrees Fahrenheit into Degrees Celsius\n\nFahrenheit to celsius conversion formula is all about converting the temperature denoting in Fahrenheit to Celsius. As mentioned earlier, the temperature of boiling (hot) water in Celsius is 0 degrees and in Fahrenheit is 21 degrees, the formula to convert F to C is\n\n### °C = (°F − 32) x 5/9\n\nThe math is here is fairly simple, and can be easily understood by an example. Let's say we need to 111.5 Fahrenheit to Celsius\n\n## How To Convert 111.5 F to C?\n\nTo convert 111.5 degrees Fahrenheit to Celsius, all one needs is to put in the values in the converter equation-\n\n### °C = (°F − 32) x 5/9\n\nC = 44.17 degrees\n\nThus, after applying the formula to convert 111.5 Fahrenheit to Celsius, the answer is -\n\n111.5°F = 44.17°C\n\nor\n\n111.5 degrees Fahrenheit equals 44.17 degrees Celsius!\n\n### How much is 111.5 degrees Fahrenheit in Celsius?\n\n111.5F to C = 44.17 °C\n\n### How to Convert From Fahrenheit to Celsius and Celsius to Fahrenheit - Quick and Easy Method\n\nHow to Convert From Fahrenheit to C...\nHow to Convert From Fahrenheit to Celsius and Celsius to Fahrenheit\n\n### What is the formula to calculate Fahrenheit to Celsius?\n\nThe F to C formula is\n\n(F − 32) × 5/9 = C\n\nWhen we enter 111.5 for F in the formula, we get\n\n(111.5 − 32) × 5/9  = 44.17 C\n\nTo be able to solve the (111.5 − 32) × 5/9 equation, we first subtract 32 from 111.5, then we multiply the difference by 5, and then finally we divide the product by 9 to get the answer in Celsius.\n\n### What is the simplest way of converting Fahrenheit into Celsius?\n\nThe boiling temperature of water in Fahrenheit is 21 and 0 in Celsius. So, the simplest formula to calculate the difference is\n\nC = (F − 32) × 5/9\n\nFor converting Fahrenheit into Celsius, you can use this formula – Fahrenheit Temperature – 32/ 2 = Celsius Temperature.\n\nBut this is not the only formula that is used for the conversion as some people believe it doesn’t give out the exact number.\n\nOne another formula that is believed to be equally easy and quick is\n\n(°F - 32) x .5556\n\nWhile there are other temperature units like Kelvin, Réaumur, and Rankine as well, Degree Celsius and Degree Fahrenheit are the most commonly used.\n\nWhile Fahrenheit is primarily used in the US and its territories, Celsius has gained more popularity in the rest of the world. For those using these two different scales, the numbers that denote that temperature are quite different.\n\nFor example, water freezes at Zero Degree Celsius and boils at 100 degrees, the readings are 32-degree Fahrenheit as the freezing point of water and 212 degrees for boiling.\n\n## For Celsius Conversions\n\nFor Celsius conversion, all you need to do is start with the temperature in Celsius. 
Subtract 30 from the resultant figure, and finally, divide your answer by 2!\n\n## Common F and C Temperature Table\n\n### Key Inferences about Fahrenheit and Celsius\n\n• Celsius and Fahrenheit are commonly misspelled as Celcius and Farenheit.\n• The formula to find a Celsius temperature from Fahrenheit is:  °F = (°C × 9/5) + 32\n• The formula to find a Fahrenheit temperature from Celsius is:  °°C = (°F - 32) × 5/9\n• The two temperature scales are equal at -40°.\n\n## Oven temperature chart\n\nThe Fahrenheit temperature scale is named after the German physicist Daniel Gabriel Fahrenheit in 1724 and was originally used for temperature measurement through mercury thermometers that he invented himself.\n\nMeanwhile, the Celsius scale was originally called centigrade but later came to be named after Swedish astronomer Anders Celsius in 1742. But when the scale was first introduced, it was quite the reverse of what it is today. Anders labeled 0 Degree Celsius as the boiling point of water, while 100 denoted the freezing point.\n\nHowever, after Celsius passed away, Swedish taxonomist Carl Linnaeus flipped it to the opposite, the same as it is used today.\n\n### Our Take\n\nWhile this is the formula that is used for the conversion from Fahrenheit to Celsius, there are few diversions and it is not always a perfect conversion either making it slightly more difficult than what appears to be.\n\nAll said and done, one must understand that since both the scales are offset, meaning that neither of them is defined as starting from zero, there comes a slightly complicated angle to the above-mentioned formula.\n\nBesides, the two scales do not start with a zero, and they both add a different additional value for every unit of heat. This is why it is not every time possible to get an exact value of the conversion by applying the formula.\n\nReverse Conversion: Celsius to Fahrenheit\n\n Fahrenheit Celsius 111.51°F 44.17°C 111.52°F 44.18°C 111.53°F 44.18°C 111.54°F 44.19°C 111.55°F 44.19°C 111.56°F 44.2°C 111.57°F 44.21°C 111.58°F 44.21°C 111.59°F 44.22°C 111.6°F 44.22°C 111.61°F 44.23°C 111.62°F 44.23°C 111.63°F 44.24°C 111.64°F 44.24°C 111.65°F 44.25°C 111.66°F 44.26°C 111.67°F 44.26°C 111.68°F 44.27°C 111.69°F 44.27°C 111.7°F 44.28°C 111.71°F 44.28°C 111.72°F 44.29°C 111.73°F 44.29°C 111.74°F 44.3°C\n Fahrenheit Celsius 111.75°F 44.31°C 111.76°F 44.31°C 111.77°F 44.32°C 111.78°F 44.32°C 111.79°F 44.33°C 111.8°F 44.33°C 111.81°F 44.34°C 111.82°F 44.34°C 111.83°F 44.35°C 111.84°F 44.36°C 111.85°F 44.36°C 111.86°F 44.37°C 111.87°F 44.37°C 111.88°F 44.38°C 111.89°F 44.38°C 111.9°F 44.39°C 111.91°F 44.39°C 111.92°F 44.4°C 111.93°F 44.41°C 111.94°F 44.41°C 111.95°F 44.42°C 111.96°F 44.42°C 111.97°F 44.43°C 111.98°F 44.43°C 111.99°F 44.44°C\n Fahrenheit Celsius 112°F 44.44°C 112.01°F 44.45°C 112.02°F 44.46°C 112.03°F 44.46°C 112.04°F 44.47°C 112.05°F 44.47°C 112.06°F 44.48°C 112.07°F 44.48°C 112.08°F 44.49°C 112.09°F 44.49°C 112.1°F 44.5°C 112.11°F 44.51°C 112.12°F 44.51°C 112.13°F 44.52°C 112.14°F 44.52°C 112.15°F 44.53°C 112.16°F 44.53°C 112.17°F 44.54°C 112.18°F 44.54°C 112.19°F 44.55°C 112.2°F 44.56°C 112.21°F 44.56°C 112.22°F 44.57°C 112.23°F 44.57°C 112.24°F 44.58°C\n Fahrenheit Celsius 112.25°F 44.58°C 112.26°F 44.59°C 112.27°F 44.59°C 112.28°F 44.6°C 112.29°F 44.61°C 112.3°F 44.61°C 112.31°F 44.62°C 112.32°F 44.62°C 112.33°F 44.63°C 112.34°F 44.63°C 112.35°F 44.64°C 112.36°F 44.64°C 112.37°F 44.65°C 112.38°F 44.66°C 112.39°F 44.66°C 112.4°F 44.67°C 112.41°F 44.67°C 112.42°F 44.68°C 112.43°F 44.68°C 
112.44°F 44.69°C 112.45°F 44.69°C 112.46°F 44.7°C 112.47°F 44.71°C 112.48°F 44.71°C 112.49°F 44.72°C" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78971046,"math_prob":0.9578889,"size":7876,"snap":"2022-27-2022-33","text_gpt3_token_len":2758,"char_repetition_ratio":0.23882113,"word_repetition_ratio":0.01946472,"special_character_ratio":0.42166075,"punctuation_ratio":0.1576507,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95584375,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T21:47:00Z\",\"WARC-Record-ID\":\"<urn:uuid:e2a467ae-f550-4118-a362-52b6d92b1e7f>\",\"Content-Length\":\"325869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d89ab68-560b-4e64-83b3-7bbf965457cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6070edf-16b3-489b-a330-d33e067a0706>\",\"WARC-IP-Address\":\"172.67.69.139\",\"WARC-Target-URI\":\"https://www.journeyingtheglobe.com/fahrenheit-to-celsius/111.5-f-to-c/\",\"WARC-Payload-Digest\":\"sha1:H2JXELS32ZQFNWDGNHKOLAMP35XAUPQW\",\"WARC-Block-Digest\":\"sha1:3DMTSP627MOAQPHDMCQC6LK2KTK35LKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104678225.97_warc_CC-MAIN-20220706212428-20220707002428-00095.warc.gz\"}"}
https://istina.ips.ac.ru/publications/article/194020549/
[ "", null, "## A NUMERICAL METHOD FOR DETERMINING TWO SORBENT CHARACTERISTICS IN CASE OF DECREASING POROSITYстатья", null, "Информация о цитировании статьи получена из Scopus\nСтатья опубликована в журнале из списка Web of Science и/или Scopus\nДата последнего поиска статьи во внешних источниках: 26 сентября 2019 г.\n• Автор:\n• Журнал: Computational Mathematics and Modeling\n• Том: 30\n• Номер: 2\n• Год издания: 2019\n• Издательство: Consultants Bureau\n• Местоположение издательства: United States\n• Первая страница: 155\n• Последняя страница: 163\n• DOI: 10.1007/s10598-019-09443-0\n• Аннотация: For a mathematical model that incorporates internal-diffusion kinetics and sorbent swelling, we consider the inverse problem of determining the sorption isotherm and the porosity coefficient from two output dynamic curves. A gradient-type iterative method utilizing the conjugate problem technique is proposed and results of numerical experiments are reported. The results are used to investigate the features of the proposed method.\n• Добавил в систему: Туйкина Светлана Рафгатовна\n\n### Работа с статьей\n\n Tuikina S. R. A numerical method for determining two sorbent characteristics in case of decreasing porosity // Computational Mathematics and Modeling. — 2019. — Vol. 30, no. 2. — P. 155–163. For a mathematical model that incorporates internal-diffusion kinetics and sorbent swelling, we consider the inverse problem of determining the sorption isotherm and the porosity coefficient from two output dynamic curves. A gradient-type iterative method utilizing the conjugate problem technique is proposed and results of numerical experiments are reported. The results are used to investigate the features of the proposed method. [ DOI ]" ]
[ null, "https://mc.yandex.ru/watch/45923424", null, "https://istina.ips.ac.ru/static/publications/img/webofscience.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8283011,"math_prob":0.5730019,"size":640,"snap":"2020-24-2020-29","text_gpt3_token_len":136,"char_repetition_ratio":0.0927673,"word_repetition_ratio":0.0,"special_character_ratio":0.215625,"punctuation_ratio":0.13207547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9581115,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-15T12:41:08Z\",\"WARC-Record-ID\":\"<urn:uuid:1b017bc8-e1fb-4208-8831-25129b4e890f>\",\"Content-Length\":\"32747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11e881a8-f5cf-4cab-b016-cc98b52723bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:93160b04-4f50-4336-bc4c-e34b57b9e527>\",\"WARC-IP-Address\":\"188.44.51.9\",\"WARC-Target-URI\":\"https://istina.ips.ac.ru/publications/article/194020549/\",\"WARC-Payload-Digest\":\"sha1:SHNIUEBUAS5SCIPJNRNWEFUPIBDCNOEL\",\"WARC-Block-Digest\":\"sha1:OZHQ75CJBYMFFULQZEAISTIZJU7ZEOBH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657167808.91_warc_CC-MAIN-20200715101742-20200715131742-00050.warc.gz\"}"}
https://www.meritnation.com/cbse-class-9/math/rs-aggarwal-2017/coordinate-geometry/textbook-solutions/11_1_1175_5694_218_64178
[ "Rs Aggarwal 2017 Solutions for Class 9 Math Chapter 6 Coordinate Geometry are provided here with simple step-by-step explanations. These solutions for Coordinate Geometry are extremely popular among Class 9 students for Math Coordinate Geometry Solutions come handy for quickly completing your homework and preparing for exams. All questions and answers from the Rs Aggarwal 2017 Book of Class 9 Math Chapter 6 are provided here for you for free. You will also love the ad-free experience on Meritnation’s Rs Aggarwal 2017 Solutions. All Rs Aggarwal 2017 Solutions for class Class 9 Math are prepared by experts and are 100% accurate.\n\n#### Question 1:\n\nWrite down the coordinates of each of the points A, B, C, D, E shown below:", null, "Draw perpendicular AL, BM, CN, DP and EQ on the X-axis.", null, "(i) Distance of A from the Y-axis = OL = -6 units\nDistance of A from the X-axis = AL = 5 units\nHence, the coordinates of A are (-6,5).\n\n(ii) Distance of B from the Y-axis = OM = 5 units\nDistance of B from the X-axis = BM = 4 units\nHence, the coordinates of B are (5,4).\n\n(iii) Distance of C from the Y-axis = ON = -3 units\nDistance of C from the X-axis = CN = 2 units\nHence, the coordinates of C are (-3,2).\n\n(iv) Distance of D from the Y-axis = OP = 2 units\nDistance of D from the X-axis = DP = -2 units\nHence, the coordinates of D are (2,-2).\n\n(v) Distance of E from the Y-axis = OL = -1 units\nDistance of E from the X-axis = AL = -4 units\nHence, the coordinates of E are (-1,-4).\n\n#### Question 2:\n\nDraw the lines X' OX and YOY' as the coordinate axes on a paper and plot the following points on it.\n(i) P(7, 4)\n(ii) Q(−5, 3)\n(iii) R(−6, −3)\n(iv) S(3, −7)\n(v) A(6, 0)\n(vi) B(0, 9)\n(vii) O(0, 0)\n(viii) C(−3, −3)\n\nLet X'OX and YOY' be the coordinate axes.\nFix a convenient unit of length and form point O, mark equal distances on OX, OX', OY and OY'. 
Use the convention of signs.\n\n(i) Starting from O, take 7 units on the x-axis and then 4 units on the y-axis to obtain the point P(7,4).\n(ii) Starting from O, take -5 units on the x-axis and then 3 units on the y-axis to obtain the point Q(-5,3).\n(iii) Starting from O, take -6 units on the x-axis and then -3 units on the y-axis to obtain the point R(-6,-3).\n(iv) Starting from O, take 3 units on the x-axis and then -7 units on the y-axis to obtain the point S(3,-7).\n(v) Starting from O, take 6 units on the x-axis to obtain the point A(6,0).\n(vi) Starting from O, take 9 units on the y-axis to obtain the point B(0,9).\n(vii) Same as origin.\n(viii) ​Starting from O, take -3 units on the x-axis and then -3 units on the y-axis to obtain the point C(-3,-3).", null, "#### Question 3:\n\nOn which axis do the following points lie?\n(i) (7, 0)\n(ii) (0, −5)\n(iii) (0, 1)\n(iv) (−, 0)\n\n(i) In (7,0), ordinate = 0\n∴ (7,0) lies on the x-axis.\n\n(ii) In (0,-5), abscissa = 0\n∴ (0,-5) lies on the y-axis.\n\n(iii) In (0,1), abscissa = 0\n∴ (0,1) lies on the y-axis.\n\n(iv) In (-4,0), ordinate = 0\n∴ (-4,0) lies on the x-axis.\n\n#### Question 4:\n\nIn which quadrant do the given points lie?\n(i) (−6, 5)\n(ii) (−3, −2)\n(iii) (2, −9)\n\n(i)  Points of the type (-,+) lie in the second quadrant.\nHence, the point (-6,5) lies in quadrant II.\n\n(ii) Points of the type (-,-) lie in the third quadrant.\nHence, the point (-3,-2) lies in quadrant III.\n\n(iii) Points of the type (+,-) lie in the fourth quadrant.\nHence, the point (-6,5) lies in quadrant IV.\n\n#### Question 5:\n\nDraw the graph of the equation, y = x + 1.\n\nThe given equation is y = x + 1.\nPutting x = 0, we get y = 0 + 1 = 1\nPutting x = 1, we get y = 1 + 1 = 2\nThus, we have the following table:\n\n x 0 1 y 1 2\n\nOn a graph paper, draw the lines X'OX and YOY' as the x-axis and y-axis, respectively.\nNow, plot the points A(0,1) and B(1,2) on the graph paper.\nJoin AB and extend it on both directions.", null, "Thus, line AB is the required graph of the equation, y = x + 1.\n\n#### Question 6:\n\nDraw the graph of the equation, y = 3x + 2.\n\nThe given equation is y = 3x + 2.\nPutting x = 0, we get y = (3 × 0) + 2 = 2.\nPutting x = 1, we get y = (3 × 1) + 2 = 5.\nThus, we have the following table:\n x 0 1 y 2 5", null, "On a graph paper, draw the lines X'OX and YOY' as the x-axis and y-axis, respectively.\nNow, plot the points A(0,2) and B(1,5) on the graph paper.\nJoin AB and extend it on both sides.\nThus, line AB is the required graph of the equation, y = 3x + 2.\n\n#### Question 7:\n\nDraw the graph of the equation, y = 5x 3.\n\nThe given equation is y = 5x - 3.\nPutting x = 0, we get y = (5 × 0) - 3 = -3\nPutting x = 1, we get y = (5 × 1) - 3 = 2\nThus, we have the following table:\n x 0 1 y -3 2", null, "On a graph paper, draw the lines X'OX and YOY' as the x-axis and y-axis, respectively.\nNow, plot the points A(0,-3) and B(1,2) on the graph paper.\nJoin AB and extend it on both sides.\nThus, line AB is the required graph of the equation, y = 5x - 3.\n\n#### Question 8:\n\nDraw the graph of the equation, y = 3x.\n\nThe given equation is y = 3x.\nPutting x = 0, we get y = (3 × 0) = 0.\nPutting x = 1, we get y = (3 × 1) = 3\nThus, we have the following table:\n\n x 0 1 y 0 3", null, "On a graph paper, draw the lines X'OX and YOY' as the x-axis and y-axis, respectively.\nNow, plot the points A(0,0) and B(1,3) on the graph paper.\nJoin AB and extend it on both sides.\nThus, line AB is the required graph of the equation, y = 
3x.\n\n#### Question 9:\n\nDraw the graph of the equation, y = −x.\n\nThe given equation is y = -x.\nPutting x = 0, we get y = 0.\nPutting x = 1, we get y = (-1).\nThus, we have the following table:\n\n x 0 1 y 0 -1", null, "On a graph paper, draw the lines X'OX and YOY' as the x-axis and y-axis, respectively.\nNow, plot the points A(0,0) and B(1,-1) on the graph paper.\nJoin AB and extend it on both sides.\nThus, line AB is the required graph of the equation,  y = -x.\n\n#### Question 1:\n\nThe point P(−5, 3) lies in\n\nPoints of the type (-, +) lie in the second quadrant.\nHence, (-5,3) lies in quadrant II.\n\n#### Question 2:\n\nThe point Q(4, −6) lies in\n\nExplanation:\nThe points of the type (+, -) lie in the fourth quadrant.\nHence, (4,-6) lies in quadrant IV.\n\n#### Question 3:\n\nThe point A(0, −4) lies\n(c) on the x-axis\n(d) on the y-axis\n\n(d) on the y- axis\n\n​Explanation:\nAs the abscissa of the point A(0,-4) is 0, it lies on the y-axis.\n\n#### Question 4:\n\nThe point B(8, 0) lies\n(c) on the x-axis\n(d) on the y-axis\n\n(c) on the x-axis\n\n​Explanation:\nAs the ordinate of the point B(8,0) is 0, it lies on the x-axis.\n\n#### Question 5:\n\nThe point C(−6, 0) lies\n(c) on the x-axis\n(d) on the y-axis\n\n(c) on the x-axis\n\n​Explanation:\nAs the ordinate of the point C(-6,0) is 0, it lies on the x-axis.\n\n#### Question 6:\n\nThe point at which the two coordinate axes meet is called\n(a) the abscissa\n(b) the ordinate\n(c) the origin\n\n(c) the origin\n​Explanation: The point at which two axes meet is called as the origin.\n\n#### Question 7:\n\nIf x > 0 and y < 0, then the point (x, y) lies in\n\n​Explanation:\nThe points of the type (+,-) lie in fourth quadrant.\nHence, the point (x,y), where > 0 and y <0, lies in quadrant IV.\n\n#### Question 8:\n\nThe points (other than the origin) for which the abscissa is equal to the ordinate lie in\n\n​Explanation:\nIf abscissa = ordinate, there could be two possibilities.\nEither both are positive or both are negative. So, a point could be either (+,+), which lie in quadrant I or it could be of the type (-,-), which lie in quadrant III.\nHence, the points (other then the origin) for which the abscissae are equal to the ordinates lie in quadrant I and III.\n\n#### Question 9:\n\nThe points in which abscissa and ordinate have different signs will lie in\n\n​Explanation:\nIf the abscissa and ordinate have different signs, there could be two possibilities:\nEither the abscissa is positive and the ordinate is negative or the abscissa is positive and the ordinate is negative.\nSo, a point could be either (+,-), which lie in quadrant IV, or it could be of the type (-,+), which lie in quadrant II.\nHence, points whose abscissae and ordinates have different signs lie in quadrants IV and II.\n\n#### Question 10:\n\nThe perpendicular distance of the point A(7, 5) from y-axis is\n(a) 7 units\n(b) 5 units\n(c) 12 units\n(d) 2 units\n\n(a) 7 units\n\n​Explanation:\nThe abscissa is the distance of a point from the y-axis. 
For point A(7,5), the abscissa is 7.\nHence, the perpendicular distance of the point A from y-axis is 7 units.\n\n#### Question 11:\n\nA point both of whose coordinates are negative lies in\n\n​Explanation:\nPoints of the type (-,-) lie in the third quadrant.\n\n#### Question 12:\n\nAbscissa of a point is positive in\n\n​Explanation:\nIf abscissa of a point is positive, then the ordinate could be either positive or negative.\nIt means that the type of any point can be either (+,+) or (+, -).\nPoints of the type (+,+) lie in quadrant I, whereas points of the type (+,-) lie in quadrant IV.\n\n#### Question 13:\n\nThe coordinates of two points are A(3, 4) and B(−2, 5) then (abscissa of A) − (abscissa of B) = ?\n(a) 1\n(b) −1\n(c) 5\n(d) −5\n\n(c) 5\n\n​Explanation:\nAbscissa of A = 3\nAbscissa of B = -2\nHence, (abscissa of A) - (abscissa of B) = 3 - (-2) = 5\n\n#### Question 14:\n\nThe points A(2, −2), B(3, 3), C(4, −4) and D(5, −5) all lie in\n\n​Explanation:\nFor all the given points, the abscissa is positive and the ordinate is negative.\nSuch points of the type (+,-) lie in quadrant IV.\n\n#### Question 15:\n\nWhich of the points A(0, 6) B(−2, 0), C(0, −5), D(3, 0) and E(1, 2) does not lie on x-axis?\n(a) A and C\n(b) B and D\n(c) A, C and E\n(d) E only\n\n(c) A,C and E\n\n​Explanation:\nThe ordinate of the points lying on the x-axis = 0\nSo, the points B and D lie on the x-axis. The rest of the points do not lie on the x-axis, as their ordinates are not equal to 0.\nThus, the points A, C and E do not lie on the x-axis.\n\n#### Question 16:\n\nThe signs of abscissa and ordinate of a point in quadrant II are respectively\n(a) (+, )\n(b) (−, +)\n(c) (−, −)\n(d) (+, +)\n\n(b) (-, +)\n\nIn quadrant II, the sign of the abscissa is negative and the sign of the ordinate is positive.\n\n#### Question 17:\n\nWhich of the following points does not lie on the line y = 3x + 4?\n(a) (1, 7)\n(b) (2, 0)\n(c) (−1, 1)\n(d) (4, 12)\n\n(d) (4,12)\n\nExplanation:\n(a) Point (1,7) satisfy the equation y = 3x + 4.                      (∵y = 3 × 1 + 4 = 7)\n(b) Point (2,10) satisfy the equation y = 3x + 4.                    (∵y = 3 × 2 + 4 = 10)\n(c) Point (-1,1) satisfy the equation y = 3x + 4.                     (∵y = 3 × -1 + 4 = 1)\n(d) Point (4,12) does not satisfy the equation y = 3x + 4.    (∵ y = 3 × 4 + 4 = 16 ≠ 12)\nHence, the point (4,12) do not lie on the line y = 3x +4.\n\n#### Question 18:\n\nWhich of the following points lies on the line y = 2x + 3?\n(a) (2, 8)\n(b) (3, 9)\n(c) (4, 12)\n(d) (5, 15)\n\n(b) (3,9)\n\nExplanation:\nPoint (2,8) does not satisfy the equation y = 2x + 3.              (​∵ y = 2 × 2 + 8 = 12$\\ne$ 8)\nPoint (3,9) satisfy the equation y = 2x + 3.                             (​∵ y =2 × 3 + 3 = 9)\nPoint (4,12) does not satisfy the equation y = 2x + 3.    (∵ y = 2 × 4 + 3 = 11$\\ne$ 12)\nPoint (5,15) does not satisfy the equation y = 2x +3.    
(∵ y= 2 × 5 + 3 = 13$\\ne$15)\nHence, the point (3,9) lies on the line ​y = 2x +3.\n\n#### Question 19:\n\nIf a < 0 and b < 0, then the point P(a, b) lies in\n\nExplanation:\nPoints of the type (-,-) lie in the third quadrant.\nHence, the point P(a,b), where a < 0 and b < 0, lie in quadrant III.\n\n#### Question 20:\n\nThe perpendicular distance of the point P(4, 3) from the y-axis is\n(a) 3 units\n(b) 4 units\n(c) 5 units\n(d) 7 units\n\n(b) 4 units\nExplanation:\nThe perpendicular distance of the point P(4,3) from the y-axis is 4 units (the abscissa).\n\n#### Question 21:\n\nThe area of the OAB with O(0, 0), A(4, 0) and B(0, 6) is\n(a) 8 sq units\n(b) 12 sq units\n\n(c) 16 sq units\n(d) 24 sq units", null, "(b) 12 sq units\nExplanation:\nOn plotting the points on a graph paper, we get ∆OAB as a right angle triangle, where OA = base = 4 units and OB = 6 units\n∴ Area of ∆OAB = ½ × OA × OB = ½ × 4 × 6 = 12 sq units\n\n#### Question 22:\n\nThe area of the OPQ with O(0, 0), P(1, 0) and Q(0, 1) is\n(a) 1 sq unit\n(b)\n(c)\n(d) 2 sq units", null, "(b)  ½​ sq unit\nExplanation:\nOn plotting the points on a graph paper, we get ∆OPQ as a right angle triangle, where OP = base = 1 units and OQ = 1 units\n∴ Area of (∆OPQ) = ½ × OP × OQ = ½ × 1 × 1 = ½ sq unit\n\n#### Question 23:\n\nConsider the three statements given below:\nI. Any point on x-axis is of the form (a, 0).\nII. Any point on y-axis is of the form (0, b).\nIII. The point P(3, 3) lies on both the axes.\nWhich is true?\n(a) I and II\n(b) I and III\n(c) II and III\n(d) III only\n\n(a) I and II\n\nExplanation:\nOrdinates of points lying on the x-axis = 0\nAbscissae of points lying on the y-axis = 0\nIn point P(3,3), neither the abscissa nor the ordinate is 0. Hence, statements I and II are true.\n\n#### Question 24:\n\nAssertion: The point P(−3, 0) lies on x-axis.\nReason: Every point on x-axis is of the form (x, 0).\n(a) Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\n(b) Both Assertion and Reason are true but Reason is not a correct explanation of Assertion.\n(c) Assertion is true and Reason is false.\n(d) Assertion is false and Reason is true.\n\n(a)  Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\n\nExplanation:\nAssertion (A): The point P(-3,0) lies on the x-axis. This is true, as the ordinate of the point is 0.\nReason (R): Every point on the x- axis is of the form (x,0). This is also a true statement.\nHence, both the assertion and the reason are true and reason (R) is the correct explanation of assertion (A).\n\n#### Question 25:\n\nAssertion: The point O(0, 0) lies in quadrant I.\nReason: The point O(0, 0) lies on both the axes.\n(a) Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\n(b) Both Assertion and Reason are true but Reason is not a correct explanation of Assertion.\n(c) Assertion is true and Reason is false.\n(d) Assertion is false and Reason is true.\n\n(d) Assertion (A) is false and Reason (R) is true.\nExplanation:\nAssertion (A): The point O(0,0) lies in quadrant I. This is a false statement, as point O is the origin where two axes intersect each other.\nReason (R): The point O(0, 0) lies on both the axes. 
This is a true statement.\n​Hence, assertion (A) is false and reason (R) is true.\nSo, the correct answer is (d).\n\n#### Question 26:\n\nAssertion: The point P(−6, −4) lies in quadrant III.\nReason: The signs of points in quadrants I, II, III and IV are respectively (+, +), (−, +), (−, −) and (+, ).\n(a) Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\n(b) Both Assertion and Reason are true but Reason is not a correct explanation of Assertion.\n(c) Assertion is true and Reason is false.\n(d) Assertion is false and Reason is true.\n\n(a)  Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\nExplanation:\nAssertion (A): The point P(-6,-4) lies in quadrant III. This is a true statement, as points of the type (-,-) lie in quadrant III.\nReason (R): The signs of the points in quadrants I, II, III and IV are (+, +), (−,+), (−,−) and (+,), respectively. This is also a true statement.\nClearly, reason ( R) justifies assertion (A), as those points of the type (-,-) lie in quadrant III.\n​Hence,  (a).\n\n#### Question 27:\n\nAssertion: If ab, then (a, b) ≠ (b, a).\nReason: (4, −3) lies in quadrant IV.\n(a) Both Assertion and Reason are true and Reason is a correct explanation of Assertion.\n(b) Both Assertion and Reason are true but Reason is not a correct explanation of Assertion.\n(c) Assertion is true and Reason is false.\n(d) Assertion is false and Reason is true.\n\n(b)  Both Assertion and Reason are true but Reason is not a correct explanation of Assertion.\nExplanation:\nAssertion (A): If a ≠ b, then (ab) ≠ (b, a), which is a true statement.\nReason ( R ): (4, −3) lies in quadrant IV, as points of the type (+,-) lie in the fourth quadrant. So, the reason (R)  is also a true statement.\nBut, the reason does not justify the assertion.\n​Hence, the correct answer is (b).\n\n#### Question 28:\n\nWrite whether the following statements are true or false?\n(i) The point P(6, 0) lies in the quadrant I.\n(ii) The perpendicular distance of the point A(5, 4) for x-axis is 5 units.\n\n(i) False\nExplanation:\nThe ordinate of the point P(6,0) is 0. So, it lies on the x-axis.\n\n(ii) False\nExplanation:\nThe perpendicular distance of the point A( 5,4) from the x-axis will be 4 units, not 5 units.\n\n#### Question 29:\n\nState whether true or false:\n(i) The mirror image of the pint A(4, 5) in the x-axis is A'(−4, 5).\n(ii) The mirror image of the pint A(4, 5) in the y-axis is A'(−4, 5).\n\n(i) False\nExplanation:\nThe mirror image of the point A(4,5) on the x-axis is A'(4,-5), not A'(-4,5).\n\n(ii) True\nExplanation:\nThe mirror image of the point A(4,5) on the y-axis is A'(-4,5).\n\n#### Question 30:\n\nWrite whether the following statements are true or false:\n(a) The point (−5, 0) lies on x-axis.\n(b) The point (0, −3) lies in quadrant II.\n\n(i)True\nExplanation:\nThe point (−5,0) lies on the x-axis, as any point whose ordinate is 0 lies on the x-axis .Therefore, the given statement is correct.\n\n(ii) False\nExplanation:\nThe point (0,-3) lies on the y-axis. 
So, the given statement is false.\n\n#### Question 31:\n\nMatch the following columns:\n\n Column I Column II (a) Equation of x-axis is (p) (a, 0) (b) Equation of y-axis is (q) y = 0 (c) Any point on x-axis is of the form (r) (0, b) (d) Any point on y-axis is of the form (s) x = 0\n(a) ......,\n(b) ......,\n(c) ......,\n(d) ......,\n\n(a)-(q), (b)-(s), (c)-(p) and (d)-(r)\n\nExplanation:\n(a) As the points that lie on the x-axis have their ordinates equal to 0, the equation of the x-axis will be y = 0.\n(b) As the points that lie on the y-axis have their absiccae equal to 0, the equation of the y-axis will be x = 0.\n(c) Any point on the x-axis is of the form (a,0).\n(d) Any point on the y-axis is of the form (0,b).\n\n#### Question 32:\n\nMatch the following columns:\n\n Column I Column II (a) The point A(−3, 0) lies on (p) y-axis (b) The point B(−5, −1) lies in quadrant (q) IV (c) The point C(2, −3) lies in quadrant (r) III (d) The point D(0, −6) lies on (s) x-axis\n(a) ......,\n(b) ......,\n(c) ......,\n(d) ......,\n\n(a)-(s), (b)-(r), (c)-(q) and (d)-(p)\nExplanation:\nThe points of the type (a,0) lie on the x-axis.\nThe points of the type (-,-) lie in quadrant III.\nThe points of the type (+,-) lie in quadrant IV.\nThe points of the type (\n0,b) lie on the y-axis.\n\n#### Question 33:\n\nWithout plotting the given points on a graph paper indicate the quadrants in which they lie, it\n(a) ordinate = 6, abscissa = −3\n(b) ordinate = 6, abscissa = 4\n(c) abscissa = −5, ordinate = −7\n(d) ordinate = 3, abscissa = 5\n\n(a) Point (-3,6) lie in quadrant II.\n(b) Point (4,-6) lie in quadrant IV.\n(c) Point (-5,-7) lie in quadrant III.\n(d) Point (5,3) lie in quadrant I.​\n\n#### Question 34:\n\nPlot the point P(−6, 3) on a graph paper. Draw PL x-axis and PM ⊥ y-axis. Write the coordinates of L and M.", null, "The required point is shown in the graph given above.\n\nAlso, draw PL ​⊥ x-axis and PM ⊥ y-axis.\nThe coordinates of L and M are (-6,0) and(0,3), respectively.\n\n#### Question 35:\n\nPlot the points A(−5,2), B(3,−2), C(−4,−3) and D(6, 0) on a graph paper.", null, "The points A(-5,2), B(3,-2), C(-4,-3) and D(6,0) are plotted on the graph paper.\n\n#### Question 36:\n\nThe three vertices of ABC are A(1, 4), B(−2, 2) and C(3, 2). Plot these points on a graph paper and calculate the area of ∆ABC.", null, "Let A(1,4), B(-2,2) and C(3,2) be the vertices of ∆ABC.\nOn plotting the points on the graph paper and joining the points, we get ∆ABC as shown above.\nLet BC intersect y-axis at D.\nThen BC = BD + DC = (2 + 3) units = 5 units                                    ( Abscissa of B  = -2, which indicates that it is on the left side of y-axis. So, for calculating the length of BC, we will consider only the magnitude)\nDraw AM ⊥​ x -axis meeting BC at L.\nOrdinate of point L = ordinate of point C = 2\nSo, AL = AM - LM = (4 - 2) units = 2 units\n∴ Area of (∆ABC) = ½ × BC × AL = ½ × 5 × 2 = 5 sq units\n\n#### Question 37:\n\nThe three vertices of a rectangle ABCD are A(2, 2) B(−3, 2) and C(−3, 5). Plot these points on a graph paper and find the coordinates of D. 
Also, find the area of rectangle ABCD.", null, "Let A(2,2), B(-3,2) and C(-3,5) be the three vertices of rectangle ABCD.\nOn plotting the points on the graph paper and joining the points, we see that points B and C lie on quadrant II and point A lies on quadrant I.\nLet D be the fourth vertex of the rectangle.\nSo, abscissa of D = abscissa of A = 2\nAlso, ordinate of D = ordinate of C = 5\nSo, coordinates of point D = (2,5)\nLet the y-axis cut AB and CD at points L and M, respectively.\nNow, AB = (BL + LA) = (3 + 2) units = 5 units               (Abscissa of B = -3, which indicates that it is on the left side of y-axis. So, for calculating the length of AB, we will consider only the magnitude.)\nThus, BC = (5 - 2) units = 3 units\n∴ Area of rectangle ABCD = BC × AB = 3 × 5 = 15 sq units\n\n#### Question 38:\n\nThe three vertices of a square ABCD are A(3, 2) B(−2, 2) and D(3, −3). Plot these points on a graph paper and hence, find the coordinates of C. Also, find the area of square ABCD.", null, "Let A(3,2), B(-2,2) and D(3,-3) be the three vertices of square ABCD.\nOn plotting the points on the graph paper and joining the points, we see that A, B and D lie in different quadrants.\nLet C be the fourth vertex of the square.\n∴ Abscissa of C = abscissa of B = -2\nAlso, ordinate of C = ordinate of D = -3\nSo, coordinates of D = (-2,-3)\nLet the y-axis cut AB and CD at points L and M, respectively.\nNow, AB = (BL + LA) = (2 + 3) units = 5 units              (Abscissa of B = -2, which indicates that it is on the left side of y-axis. So, for calculating the length of AB, we will consider only the magnitude.)\n∴ Area of ABCD = AB × AB = 5 × 5 = 25 sq units\n\n#### Question 39:\n\nFrom the figure given below write each of the following:\n(i) The coordinates of point D\n(ii) The abscissa of the point A\n(iii) The point whose coordinates are (2, −3)\n(iv) The point whose coordinates are (−3, −4)\n(v) The ordinate of point E\n(vi) The coordinates of B\n(vii) The coordinates of F\n(viii) The coordinates of the origin", null, "(i) As the abscissa of point D is 0 and the ordinate is -5, the coordinates of point D are (0,-5).\n(ii) The abscissa of point A is -4.\n(iii) The coordinates of point E are (2,-3).\n\n(iv) The coordinates of point C are (-3,-4).\n(v) Ordinate of point E = -3\n(vi) The point B lies on the x-axis, i.e., abscissa = -2 and ordinate = 0.\nSo, the coordinates of B are (-2,0).\n(vii) Abscissa of point F = 5 and ordinate = -1\n​So, coordinates of point F are (5,-1).\n(viii) The coordinates of the origin are (0,0).\n\n#### Question 1:\n\nIf x < 0 and y > 0, then the point (x, y) lies in\n\nExplanation:\nThose points of the type (-,+) lie on the second quadrant. Hence, if  x < 0 and y > 0, then the point (xy) lies in quadrant II.\n\n#### Question 2:\n\nWhich point does not lie in any quadrant?\n(a) (3, −6)\n(b) (−3, 4)\n(c) (5, 7)\n(d) (0, 3)\n\n(d)  (0,3)\n\nExplanation:\nThe point (0,3) lies on the y-axis.\n\n#### Question 3:\n\nThe area of AOB having vertices A(0, 6), O(0, 0) and B(6, 0) is\n(a) 12 sq units\n(b) 36 sq units\n\n(c) 18 sq units\n(d) 24 sq units", null, "(c)  18​ sq units\n\nExplanation:\nOn plotting the points on the graph paper, we get the right angle ∆AOB, where OB = base = 6 units and height = OA = 6 units\n∴ Area of ∆AOB$\\frac{1}{2}$ × OA × OB = $\\frac{1}{2}$ × 6 × 6 = 18 sq units\n\n#### Question 4:\n\nI. Any point on x-axis is of the form (x, 0) for all x.\nII. Any point on y-axis is of the form (0, y) for all y.\nIII. 
Any point on both the axes is of the form (x, y) for all x and y.\nWhich of the following is true?\n(a) I and II\n(b) I and III\n(c) I only\n(d) III only\n\n(a) I and II\nThe correct statements are:\nI: Any point on the x-axis is of the form (x,0) for all x.\nII. Any point on the y-axis is of the form (0, y) for all y.\n\n#### Question 5:\n\nWhich of the following points does not lie on the line 3y = 2x − 5?\n(a) (7, 3)\n(b) (1, −1)\n(c) (−2, −3)\n(d) (−5, 5)\n\n(d) (-5,5)\n\nExplanation:\n(-5,5) does not satisfy the equation 3y = 2x - 5\n[RHS = 2 x (-5) - 5 = -15; LHS = 3 x 5 = 15 and 15 ≠ (-15)]\nSo, the point (-5,5) does not lie on the equation.\n\n#### Question 6:\n\nPlot each of the following points on a graph paper:\nA(3, −5), B(−5, −2), C(−6, 1) and D(4, 0).", null, "The points A(3,-5), B(-5,-2), C(-6,1) and D(4,0) are plotted on the graph paper.\n\n#### Question 7:\n\nIf 2y = 3 − 5x, find the value of y when x = −1.\n\nOn putting the value of x = -1 in the equation, 2y =  3 - 5x, we get:\n2y = 3 - 5 ×​ (-1)\ny = $\\frac{1}{2}$ ×​ [3 - 5 ×​ (-1)] = 4\n∴ y = 4 when x = -1\n\n#### Question 8:\n\nOn which axis does the point A(0, −4) lie?\n\nAbscissa of point A(0,-4) = 0\nHence, A lies on the y-axis.\n\n#### Question 9:\n\nIn which quadrant does the point B(−3, −5) lie?\n\nThe abscissa and ordinate of point B(-3,-5) are negative and those points of the type (-,-) lie in the third quadrant.\nHence, point B lies in quadrant III.\n\n#### Question 10:\n\nWhat is the perpendicular distance of the point P(−2, −3) from the y-axis?\n\nAbscissa of point P(-2,-3) = -2\n\nHowever, distance cannot be negative.\nHence, the perpendicular distance of point P(-2,-3) from the y-axis is 2 units.\n\n#### Question 11:\n\nAt what point do the coordinate axes meet?\n\nThe coordinate axes (x-axis and y-axis) meet at point O(0,0), known as the origin.\n\n#### Question 12:\n\nFor each of the following write true or false\n(i) The point (4, 0) lies in quadrant I.\n(ii) The ordinate of a point P is −3 and its abscissa is −4. The point is P(−3, −4).\n(iii) The points A(1, −1) and B(−1, 1) both lies in quadrant IV.\n(iv) A point lies on y-axis at a distance of 3 units from x-axis. Its coordinates are (3, 0).\n(v) The point C(0, −5) lies on y-axis.\n(vi) The point O(0, 0) lies on x-axis as well as y-axis.\n\n(i) False. It lies on the x-axis.\n(ii) False. The point is P(-4,-3).\n(iii) False. A(1,-1) lies in quadrant IV and B (-1,1) lies in quadrant II.\n(iv) False. The coordinates of the point are (0,3).\n(v) True.\n(vi) True.\n\n#### Question 13:\n\nTaking a suitable scale, plot the following points on a graph paper:\n\n x −4 −2 5 0 3 −5 y 6 −7 5 −1 −6 0", null, "The points A(-4, 6), B( -2,-7), C( 5,5), D (0,-1), E( 3, -6) and F(-5,0) are plotted on the graph paper.\n\n#### Question 14:\n\n(i) Write the points whose ordinate is 0.\n(ii) Write the points whose abscissa is 0.\n(iii) Write the points whose ordinate is −3.\n(iv) Write the points whose abscissa is 2.\n(v) Write the coordinates of all points in quadrant II.\n(vi) Write the coordinates of all those points for which abscissa and ordinate have the same value.", null, "(i) The points G(-3,0), H(-8,0), Q(4,0)and R(9,0) lie on the x-axis. Hence, their ordinates are equal to 0.\n\n(ii) The points L(0,-6), K(0,-2), D(0,3) and C(0,7) lie on the y-axis. 
Hence, their abscissae are equal to 0.\n\n(iii) The ordinates of points M(1,-3), J(-4,-3) and P(6,-3) are equal to -3.\n\n(iv) B(2,4) and N(2,-1)\n\n(v) The points E and F lie in quadrant II.\nCoordinates of E = (-4,4)\nCoordinates of F = (-6,2)\n(vi) A(3,3) and I(-2,-2)\n\n#### Question 15:\n\n(i) Write the mirror image of the point (2, 5) in the x-axis.\n(ii) Write the mirror image of the point (3, 6) in the y-axis.\n(iii) A point (a, b) lies in quadrant II. In which quadrant does (b, a) lie?\n\n(i) The mirror image of the point (2,5) in the x-axis is (2,-5).\n(ii) The mirror image of the point (3,6) in the y-axis is (-3,6).\n(iii) If a point (a,b) lies in quadrant II, then a must be a negative number and b must be a positive number. So, the point (b,a) is of the type (+,-) and lies in quadrant IV.\n\n#### Question 16:\n\nWithout plotting the points on a graph paper indicate the quadrant in which they lie:\n(i) ordinate = 4, abscissa = −3\n(ii) ordinate = −5, abscissa = 4\n(iii) abscissa = −1, ordinate = −2\n(iv) abscissa = −5, ordinate = 3\n(v) abscissa = 2, ordinate = 1\n(vi) abscissa = 7, ordinate = −4", null, "(i) Point (-3,4) lies in quadrant II.\n(ii) Point (4,-5) lies in quadrant IV.\n(iii) Point (-1,-2) lies in quadrant III.\n(iv) Point (-5,3) lies in quadrant II.\n(v) Point (2,1) lies in quadrant I.\n(vi) Point (7,-4) lies in quadrant IV.\n\n#### Question 17:\n\nWhich of the following points do not lie on x-axis?\n(i) A(0, 6)\n(ii) B(2, 0)\n(iii) C(0, −2)\n(iv) D(−6, 0)\n(v) E(2, 1)\n(vi) F(0, 4)\n\nThe points B(2,0) and D(-6,0) have their ordinates equal to 0. Hence, they lie on the x-axis.\nThe rest of the points, whose ordinates are not equal to zero (i.e., A, C, E and F), do not lie on the x-axis.\nHence, the points A, C, E and F do not lie on the x-axis.\n\n#### Question 18:\n\nThree vertices of a rectangle ABCD are A(3, 1), B(−3, 1) and C(−3, 3). Plot these points on a graph paper and find the coordinates of the fourth vertex D.", null, "Let A(3,1), B(-3,1) and C(-3,3) be the three vertices of rectangle ABCD.\nOn plotting the points on a graph paper and joining them, we see that A lies in quadrant I and B and C lie in quadrant II.\nLet D be the fourth vertex of the rectangle.\nThen, abscissa of D = abscissa of A = 3\nAlso, ordinate of D = ordinate of C = 3\n∴ Coordinates of the fourth vertex, D = (3,3)\n\n#### Question 19:\n\nWrite the coordinates of vertices of a rectangle OABC, where O is the origin, length OA = 5 units lying along x-axis, breadth AB = 3 units and B lying in the fourth quadrant.", null, "Given: OABC is a rectangle. O is the origin, OA = 5 units along the x-axis, AB = 3 units and B lies in quadrant IV.\nSolution: Coordinates of the origin, O = (0,0)\nPoint A lies on the x-axis. So, coordinates of point A = (5,0)\nPoint B lies in the fourth quadrant. So, the ordinate of point B is negative.\nAs the breadth AB = 3 units, coordinates of point B = (5,−3)\nPoint C lies on the same vertical line as point O.\nHence, abscissa of C = abscissa of O = 0\nIt means that point C lies on the y-axis.\nSimilarly, point C and point B lie at the same height, so their ordinates must be equal.\ni.e., ordinate of C = ordinate of B = −3\ni.e., coordinates of C = (0, −3)\nThus, the coordinates of the vertices of rectangle OABC are O(0,0), A(5,0), B(5,−3) and C(0,−3).\n\n#### Question 20:\n\nPlot the points A(2, 5), B(−2, 2) and C(4, 2) on a graph paper. Join AB, BC and AC.
Calculate the area of ∆ABC.", null, "Let A(2,5), B(-2,2) and C(4,2) be the three vertices of ∆ABC.\nOn plotting the points on a graph paper and joining them, we see that points A and C lie in quadrant I and point B lies in quadrant II.\nLet BC intersect the y-axis at point D.\n\nBC = (BD + DC) = (2 + 4) units = 6 units (Abscissa of B = −2, which indicates that it is on the left side of the y-axis. So, for calculating the length of BC, we will consider the magnitude only.)\nDraw AM ⊥ x-axis, meeting BC at L.\nOrdinate of point L = ordinate of point B = ordinate of point C = 2\nAL = AM − LM = (5 − 2) units = 3 units\n∴ Area of ∆ABC = ½ × BC × AL = ½ × 6 × 3 = 9 sq units" ]
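The answers above repeatedly perform two small calculations: reading off a point's quadrant from the signs of its coordinates, and finding a triangle's area from its vertices. As a quick cross-check of the worked answers (this is not part of the textbook solutions, and it uses the shoelace formula instead of the base × height argument used on the graphs):

```python
# Cross-check of the coordinate-geometry answers above (editorial sketch).

def quadrant(x, y):
    """Return the quadrant of (x, y), or the axis/origin it lies on."""
    if x == 0 and y == 0:
        return "origin"
    if y == 0:
        return "x-axis"
    if x == 0:
        return "y-axis"
    if x > 0:
        return "I" if y > 0 else "IV"
    return "II" if y > 0 else "III"

def triangle_area(p, q, r):
    """Area of triangle pqr by the shoelace formula."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(quadrant(-5, 3))                         # II (multiple-choice Question 1)
print(quadrant(4, -6))                         # IV (multiple-choice Question 2)
print(triangle_area((1, 4), (-2, 2), (3, 2)))  # 5.0 sq units (Question 36)
print(triangle_area((2, 5), (-2, 2), (4, 2)))  # 9.0 sq units (Question 20)
```

Both area values agree with the base × height results obtained graphically in the solutions above.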
[ null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null, "https://www.meritnation.com/img/site_content/ask-answer/loader.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8029857,"math_prob":0.99920183,"size":31782,"snap":"2021-31-2021-39","text_gpt3_token_len":10547,"char_repetition_ratio":0.23437598,"word_repetition_ratio":0.2729792,"special_character_ratio":0.3597319,"punctuation_ratio":0.16415046,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999447,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T05:10:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6d797e66-a55a-4ff2-8721-bf8c273d4d68>\",\"Content-Length\":\"169807\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38bebd30-ec8b-42d7-9ed0-2bd69a03a487>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a9fbeff-191a-4fde-9167-d63df7505a29>\",\"WARC-IP-Address\":\"13.32.208.109\",\"WARC-Target-URI\":\"https://www.meritnation.com/cbse-class-9/math/rs-aggarwal-2017/coordinate-geometry/textbook-solutions/11_1_1175_5694_218_64178\",\"WARC-Payload-Digest\":\"sha1:7XAUOLSB6ZPZ2LLE52UN7LYGMP3CW4QJ\",\"WARC-Block-Digest\":\"sha1:V6EE6N6BOFYJJTPKXDTXOOXXVFURORFS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057504.60_warc_CC-MAIN-20210924050055-20210924080055-00352.warc.gz\"}"}
https://forum.allaboutcircuits.com/threads/what-clock-frequency-for-a-certain-baud-rate.61514/
[ "# What clock frequency for a certain baud rate?\n\n#### atferrari\n\nJoined Jan 6, 2004\n3,489\nPIC 18F family EUSART\n\nMany years ago I was able to grasp this but I forgot all about it.\n\nFor a certain baud rate, how could I know the EUSART's frequency of the clock when transmitting in Synchronized mode?\n\nI understand that I should define if it is 8 or 9 bits/symbol, right? And add if there is parity bit?\n\nSomething like Tx clok freq = Baud rate * (8+1)??\n\nIf I look at lost on this, yes, I am.\n\nPlease, I know the formulas in the datasheet. What I am asking is the relationship between baud rate and Tx clock frequency because I intend use it in non standard way.\n\nGracias.\n\n#### AlexR\n\nJoined Jan 16, 2008\n732\nBaud rate is defined as the number of symbols sent or received per second. Unless you are doing some fancy coding the basic symbol is going to be the bit and the Baud rate will be the same as the bit rate. The number of bits per character will effect the number of character per second that you transmit but has no bearing on the actual Baud rate.\n\n#### atferrari\n\nJoined Jan 6, 2004\n3,489\nShould I understand then that 4960 bauds means a Tx clock of 4960 Hz?\n\nI am looking for that concept: relationship between clock frequency and baud rate.\n\n#### ErnieM\n\nJoined Apr 24, 2011\n8,007\nShould I understand then that 4960 bauds means a Tx clock of 4960 Hz?\n\nI am looking for that concept: relationship between clock frequency and baud rate.\nBaud rate is the symbol rate and is (1 start + 8 data + 1 parity + 1 stop = 11 bits) 1/11th of the bit rate for that case.\n\nRich (BB code):\nFosc / ( k * (n+1) ) where K = 64, 16, or 4 depending\non other settings (4 for synchronous)\n\nn = 16 bit value in (SPBRGH, SPBRG)\nThe data sheet for your particular device should detail all this out.\n\nFull disclosure: I've never understood why the parity (9th bit) does not change the baud rate or the value in (SPBRGH, SPBRG).\n\n#### AlexR\n\nJoined Jan 16, 2008\n732\nShould I understand then that 4960 bauds means a Tx clock of 4960 Hz?...............\nYes, in most cases the Tx clock frequency will be the same as the Baud rate.\nBaud rate is the symbol rate and is (1 start + 8 data + 1 parity + 1 stop = 11 bits) 1/11th of the bit rate for that case.\n.....................................\n\nFull disclosure: I've never understood why the parity (9th bit) does not change the baud rate or the value in (SPBRGH, SPBRG).\nYou are confusing symbol rate with character rate. The term symbol in Baud rate calculations refers to line transitions (in this case to ones and zeros on the line) not to characters so the Baud rate is not effected by the number of bits in each character. The reason the distinction is made between bit rate and symbol rate is that with some forms of modulation a single line transition can represent several data bits. In these circumstances the Baud rate will not be the same as the data rate.\n\n#### MrChips\n\nJoined Oct 2, 2009\n19,415\nI am not a PIC user. Usually, the UART clock is 16 times the baud rate.\n\n#### ErnieM\n\nJoined Apr 24, 2011\n8,007\nYou are confusing symbol rate with character rate. The term symbol in Baud rate calculations refers to line transitions (in this case to ones and zeros on the line) not to characters so the Baud rate is not effected by the number of bits in each character. The reason the distinction is made between bit rate and symbol rate is that with some forms of modulation a single line transition can represent several data bits. 
In these circumstances the Baud rate will not be the same as the data rate.\nYes I am confusing them, been decades since I tried to make sense out of that so the bit rate level. Wiki redirects baud rate to symbol rate and defines them as synonymous. Some cobwebs came up with an old confusion, so thanks for clearing that up for me.\n\nMostly I just set the baud rate to what I need and don't worry about the bit rate.\n\n#### davebee\n\nJoined Oct 22, 2008\n540\nFor transmitting the sending frequency can be the baud rate.\n\nBut for receiving, it helps to be able check the line at a greater rate, like 8 or 16 times the expected incoming baudrate.\n\nThat's because your goal is to sample the bit values around the center of their pulse. The more finely you can detect the time of the initial shift of the start bit, the more finely you can place the sampling instant near the center of each arriving bit.\n\nSo for a general purpose utility, either a discrete UART chip or software that both transmits and receives, the specified driving clock rate will likely be required to be 8 or 16 times the expected incoming baud." ]
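To make the SPBRG arithmetic quoted above concrete, here is a short Python sketch of Baud = Fosc / (k·(n+1)). The 8 MHz oscillator and the 9600 baud target are illustrative values chosen here, not figures taken from the thread; the point is the rounding of n and the resulting baud-rate error check.

```python
# Sketch of the PIC EUSART baud-rate arithmetic: baud = Fosc / (k * (n + 1)).
# Fosc and the target baud below are assumed example values.

def spbrg_for(fosc_hz, target_baud, k):
    """Nearest SPBRG value n, the actual baud it produces, and the % error."""
    n = round(fosc_hz / (k * target_baud)) - 1
    actual = fosc_hz / (k * (n + 1))
    error_pct = 100.0 * (actual - target_baud) / target_baud
    return n, actual, error_pct

fosc = 8_000_000            # assumed 8 MHz oscillator
for k in (64, 16, 4):       # the three divider options mentioned above
    n, actual, err = spbrg_for(fosc, 9600, k)
    print(f"k={k:2d}: SPBRG={n:4d}  actual baud={actual:8.1f}  error={err:+.2f}%")
```

For asynchronous reception the same target baud is typically oversampled 8 or 16 times, which is the faster sampling clock davebee describes for locating the centre of each incoming bit.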
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.928549,"math_prob":0.9596248,"size":1127,"snap":"2019-51-2020-05","text_gpt3_token_len":282,"char_repetition_ratio":0.092609085,"word_repetition_ratio":0.962963,"special_character_ratio":0.24223602,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95940775,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T00:06:42Z\",\"WARC-Record-ID\":\"<urn:uuid:fff9cef7-c295-4e1b-8a64-b263a1775f19>\",\"Content-Length\":\"122733\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb3a4319-9afd-4b33-b66e-7b71f9399f77>\",\"WARC-Concurrent-To\":\"<urn:uuid:d051b921-6171-4485-a182-5ac7723a3d2b>\",\"WARC-IP-Address\":\"104.20.234.39\",\"WARC-Target-URI\":\"https://forum.allaboutcircuits.com/threads/what-clock-frequency-for-a-certain-baud-rate.61514/\",\"WARC-Payload-Digest\":\"sha1:N3MFAMO5F4ODEOWLTPD3Q7WQ752HJ3CM\",\"WARC-Block-Digest\":\"sha1:VXDB2UUC6N5DQTYXEGQKGNDQCNE2UWY5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547536.49_warc_CC-MAIN-20191212232450-20191213020450-00006.warc.gz\"}"}
https://forum.ansys.com/forums/topic/how-could-i-model-and-solve-the-following-problem-with-ansys/
[ "## General Mechanical\n\n•", null, "dexterjamoe\nSubscriber\n•", null, "mekafime\nSubscriber\n\nHello,\n\nYou can model it as line bodies in DesignModeler and then assign a cross-section to each line.\n\n•", null, "peteroznewman\nSubscriber\n\nDexter,\n\nDo you know how to use a CAD system?  If so, draw lines along the centerlines of each beam in 3D space. Export that as an IGES file, then open that IGES file in SpaceClaim.\n\nIf you don't know how to use a CAD system, then you can create these lines in SpaceClaim. You will have to spend some time learning how to do that. Then create the cross sections that run along those lines in SpaceClaim.\n\nOpen the Model and apply Simply Supported constraints on the four end points (vertices) of the lines fixed to the wall.  Apply a force of 1000 N at the lifting end.  Solve.  Insert a Beam Tool into the results and find the Maximum Stress. Now you know the MPa/kN for this lift; call that number Skn.\n\nSince you know the Ultimate Strength, Su, for the material, the maximum load in kN this structure could lift is Lmax = Su/Skn.\n\nLswf = Lmax/2.25 gives the safe working load with a 2.25 Factor of Safety.\n\nReplace the 1000 N load with the Lswf that you just calculated, then Solve.  Now you know the tip deflection.\n\nThe above is an overview of the process. There are many details you have to know along the way.\n\nGood luck,\n\nPeter", null, "" ]
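Peter's load-rating recipe (stress per kN from a unit load, maximum load from the ultimate strength, then a 2.25 factor of safety) is simple enough to sketch in a few lines. Every number below is a hypothetical placeholder; the Beam Tool stress must come from your own solve and Su from your material data.

```python
# Sketch of the safe-working-load arithmetic described above.
# All numeric values are hypothetical placeholders, not simulation results.

applied_load_kN = 1.0        # the 1000 N trial load
max_stress_MPa = 85.0        # hypothetical Beam Tool maximum stress at that load
Su_MPa = 400.0               # hypothetical ultimate strength of the material
FoS = 2.25                   # required factor of safety

Skn = max_stress_MPa / applied_load_kN   # stress produced per kN of lifted load
Lmax = Su_MPa / Skn                      # load (kN) at which the ultimate strength is reached
Lswf = Lmax / FoS                        # safe working load

print(f"Skn  = {Skn:.1f} MPa/kN")
print(f"Lmax = {Lmax:.2f} kN")
print(f"Lswf = {Lswf:.2f} kN")  # re-run the model with this load to read the tip deflection
```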
[ null, "https://forum.ansys.com/wp-content/uploads/2022/01/structurs-icon-1.svg", null, "https://secure.gravatar.com/avatar/473af6623e74f4cd1dc93d635d311652", null, "https://secure.gravatar.com/avatar/7ecd8b71c9bc280f36b6124498b77c3c", null, "https://secure.gravatar.com/avatar/fe9c8e3ef802a8872cb9da68195531f3", null, "https://forum.ansys.com/wp-content/themes/ansysbbpress/assets/images/loading.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89861566,"math_prob":0.9039751,"size":3067,"snap":"2022-40-2023-06","text_gpt3_token_len":716,"char_repetition_ratio":0.0848841,"word_repetition_ratio":0.012867647,"special_character_ratio":0.22888817,"punctuation_ratio":0.09556314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9627356,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-08T19:30:06Z\",\"WARC-Record-ID\":\"<urn:uuid:e85fc2ab-ea42-4a5c-8bef-fe735d112b62>\",\"Content-Length\":\"882641\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17018fa2-7cbf-4a1f-9a9e-3fcf2f4287ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e8a0b17-7ca7-4cc2-9568-c467c9db25da>\",\"WARC-IP-Address\":\"23.222.79.154\",\"WARC-Target-URI\":\"https://forum.ansys.com/forums/topic/how-could-i-model-and-solve-the-following-problem-with-ansys/\",\"WARC-Payload-Digest\":\"sha1:KXAUWM6MDHEH52HPL7PHZWE54E2AVIGX\",\"WARC-Block-Digest\":\"sha1:OWDSGMSDPZXNRCIV5WHBM2ZZR6YECYD7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500904.44_warc_CC-MAIN-20230208191211-20230208221211-00615.warc.gz\"}"}
https://www.onlinemath4all.com/solving-linear-equations-using-substitution-method.html
[ "# SOLVING LINEAR EQUATIONS USING SUBSTITUTION METHOD\n\nSolving Linear Equations Using Substitution Method:\n\nIn this section, you will learn how to solve linear equations in two variables using the substitution method.\n\n## Substitution Method - Steps\n\nStep 1 :\n\nSolve one of the equations for one of its variables.\n\nStep 2 :\n\nSubstitute the expression from Step 1 into the other equation and solve for the other variable.\n\nStep 3 :\n\nSubstitute the value from Step 2 into either original equation and solve to find the value of the variable in Step 1.\n\n## Substitution Method - Examples\n\nExample 1 :\n\nSolve the following pair of linear equations by the substitution method.\n\n0.2x + 0.3y = 1.3 and 0.4x + 0.5y = 2.3\n\nSolution :\n\n0.2x + 0.3y = 1.3 ------(1)\n\n0.4x + 0.5y = 2.3 ------(2)\n\nMultiply both (1) and (2) by 10:\n\n2x + 3y = 13 ------(1)\n\n4x + 5y = 23 ------(2)\n\nStep 1 :\n\nFind the value of one variable in terms of the other variable, say y in terms of x:\n\n3y = 13 - 2x\n\ny = (13 - 2x)/3\n\nStep 2 :\n\nSubstituting this value of y into the second equation, we get\n\n4x + 5[(13 - 2x)/3] = 23\n\nMultiplying both sides by 3:\n\n12x + 5(13 - 2x) = 69\n\n12x + 65 - 10x = 69\n\n2x = 69 - 65\n\n2x = 4\n\nx = 2\n\nStep 3 :\n\nNow, we apply the value of x in the equation\n\ny = (13 - 2x)/3\n\ny = (13 - 2(2))/3\n\ny = (13 - 4)/3\n\ny = 9/3\n\ny = 3\n\nSo, the solution is (2, 3).\n\nExample 2 :\n\nSolve the following pair of linear equations by the substitution method.\n\n√2x + √3y = 0 and √3x - √8y = 0\n\nSolution :\n\nStep 1 :\n\nFind the value of one variable in terms of the other variable, say y in terms of x:\n\n√3y = -√2x\n\ny = -(√2/√3)x\n\nStep 2 :\n\nSubstituting this value of y into the second equation, we get\n\n√3x - √8[-(√2/√3)x] = 0\n\n√3x + (√16/√3)x = 0\n\n√3x + (4/√3)x = 0\n\n(3x + 4x)/√3 = 0\n\n7x/√3 = 0\n\n7x = 0\n\nx = 0\n\nStep 3 :\n\nNow, we apply the value of x in the equation\n\ny = -(√2/√3)x\n\ny = -(√2/√3)(0)\n\ny = 0\n\nSo, the solution is (0, 0).\n\nExample 3 :\n\nSolve the following pair of linear equations by the substitution method.\n\n(3x/2) - (5y/3) = -2 and (x/3) + (y/2) = 13/6\n\nSolution :\n\n(3x/2) - (5y/3) = -2 --------(1)\n\n(x/3) + (y/2) = 13/6 --------(2)\n\nTaking the L.C.M. of the denominators in both equations:\n\n(9x - 10y)/6 = -2\n\n9x - 10y = -12 ------(1)\n\n(x/3) + (y/2) = 13/6\n\n(2x + 3y)/6 = 13/6\n\n2x + 3y = 13 ------(2)\n\nStep 1 :\n\nFind the value of one variable in terms of the other variable, say y in terms of x:\n\n10y = 9x + 12\n\ny = (9x + 12)/10\n\nStep 2 :\n\nSubstituting this value of y into the second equation, we get\n\n2x + 3[(9x + 12)/10] = 13\n\n(20x + 27x + 36)/10 = 13\n\n47x + 36 = 130\n\n47x = 130 - 36\n\n47x = 94\n\nx = 94/47\n\nx = 2\n\nStep 3 :\n\nNow, we apply the value of x in the equation\n\ny = (9x + 12)/10\n\ny = (9(2) + 12)/10\n\ny = (18 + 12)/10\n\ny = 30/10\n\ny = 3\n\nSo, the solution is (2, 3).", null, "After having gone through the stuff given above, we hope that the students would have understood how to solve linear equations using the substitution method.
", null, "" ]
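The three steps above mechanise directly. Below is a minimal Python sketch for a general 2×2 system a1·x + b1·y = c1, a2·x + b2·y = c2, assuming b1 ≠ 0 so the first equation can be solved for y; the function name and the numeric checks against Examples 1 and 3 are additions made here, not part of the original page.

```python
# Substitution method for a 2x2 linear system (illustrative sketch).
#   a1*x + b1*y = c1
#   a2*x + b2*y = c2      (assumes b1 != 0 and a unique solution exists)

def solve_by_substitution(a1, b1, c1, a2, b2, c2):
    # Step 1: solve the first equation for y:  y = (c1 - a1*x) / b1
    # Step 2: substitute into the second equation and solve for x:
    #         a2*x + b2*(c1 - a1*x)/b1 = c2
    x = (c2 - b2 * c1 / b1) / (a2 - b2 * a1 / b1)
    # Step 3: back-substitute to find y
    y = (c1 - a1 * x) / b1
    return x, y

print(solve_by_substitution(0.2, 0.3, 1.3, 0.4, 0.5, 2.3))  # Example 1 -> approx (2, 3)
print(solve_by_substitution(9, -10, -12, 2, 3, 13))         # Example 3 -> approx (2, 3)
```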
[ null, "https://www.onlinemath4all.com/images/xonlinemath4all1.png.pagespeed.ic.JRmzCqIPA4.png", null, "https://www.onlinemath4all.com/images/xonlinemath4all.jpeg.pagespeed.ic.tFVnUP02HG.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7743168,"math_prob":0.99932075,"size":7649,"snap":"2019-51-2020-05","text_gpt3_token_len":2103,"char_repetition_ratio":0.16272074,"word_repetition_ratio":0.19054763,"special_character_ratio":0.26996994,"punctuation_ratio":0.05501859,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999416,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T13:24:00Z\",\"WARC-Record-ID\":\"<urn:uuid:fe1776b1-b2ce-4914-9e6c-e4e95c3d4a71>\",\"Content-Length\":\"81814\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7f75228-fcf5-4de9-aac7-2196d2620487>\",\"WARC-Concurrent-To\":\"<urn:uuid:440f7fcf-8d77-4ca6-8c85-67ba94398011>\",\"WARC-IP-Address\":\"173.247.218.242\",\"WARC-Target-URI\":\"https://www.onlinemath4all.com/solving-linear-equations-using-substitution-method.html\",\"WARC-Payload-Digest\":\"sha1:GQ2IKHH4URWFMVRUFZNQMBTAU37J2GYS\",\"WARC-Block-Digest\":\"sha1:PCQOTB5ZZKHP5Q6JIOH27VM2ETOZQLHT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540565544.86_warc_CC-MAIN-20191216121204-20191216145204-00321.warc.gz\"}"}
https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-019-1979-6
[ "Theory and Modern Applications\n\nStability and Hopf bifurcation for a stage-structured predator–prey model incorporating refuge for prey and additional food for predator\n\nAbstract\n\nIn this paper, we study a stage-structured predator–prey model incorporating refuge for prey and additional food for predator. By analyzing the corresponding characteristic equations, we investigate the local stability of equilibria and the existence of Hopf bifurcation at the positive equilibrium taking the time delay as a bifurcation parameter. Furthermore, we obtain the direction of the Hopf bifurcation and the stability of bifurcating periodic solutions applying the center manifold theorem and normal form theory. Numerical simulations are illustrated to verify our main results.\n\nIntroduction\n\nSince the first mathematical model for predator–prey was developed independently by Lotka and Volterra , the predator–prey models in ecology have received great attention [3,4,5,6,7]. Researchers studied the predator–prey models by analyzing their life history. In the natural world, species can be divided into two stages: immaturity and maturity. Therefore, the predator–prey models with stage structure are more reasonable than the ones without stage structure. With the prey species as immature individual organisms, we suppose that they are not attacked by predators, but as mature individuals, in order to reduce their rate of encounter with predators, prey refuges play an important role in affording the prey some degree of protection from predation. Kuang showed that a time delay could destroy the stability of the positive equilibrium and cause a Hopf bifurcation. The delayed predator–prey models with stage structure or refuge have been studied by many authors, see [8,9,10,11,12]. Especially, Wei and Fu investigated Hopf bifurcation and stability of a delayed predator–prey model with stage structure for prey incorporating prey refuge,\n\n$$\\textstyle\\begin{cases} \\dot{x}_{1}(t)=ax_{2}(t)-bx_{1}(t)-\\alpha x_{1}(t), \\\\ \\dot{x}_{2}(t)=\\alpha x_{1}(t)-cx_{2}(t)-d x^{2}_{2}(t)-\\frac{\\beta (1-m)x_{2}(t)y(t)}{a_{1}+b_{1}(1-m)x_{2}(t)+c_{1}y(t)}, \\\\ \\dot{y}(t)=\\frac{d\\beta(1-m)x_{2}(t-\\tau)y(t-\\tau )}{a_{1}+b_{1}(1-m)x_{2}(t-\\tau)+c_{1}y(t-\\tau)}-r y(t), \\end{cases}$$\n(1.1)\n\nwhere $$x_{1}(t)$$, $$x_{2}(t)$$ and $$y(t)$$ denote the densities of immature prey, mature prey and predator at time t, respectively. m is a refuge parameter with $$m\\in[0,1)$$, $$\\tau\\geq0$$ is the time delay due to the gestation of the predator.\n\nPrey refuge can protect the prey from the attack of predators in some degree. What will happen if the predators cannot eat the prey? Now, additional food is very important for the predators. In fact, additional food is an important component of most predators. Recently, the effects of the additional food to predator in prey–predator models were investigated [14,15,16,17,18,19]. Srinivasu et al. reported the dynamics of prey–predator system in the presence of additional food for predator and discussed the effect of quality and quantity of the additional food. Ghosh et al. 
considered a predator–prey model with logistic growth rate and prey refuge in presence of additional food for predator\n\n$$\\textstyle\\begin{cases} \\dot{N}(t)=r_{1}N(1-\\frac{N}{K})-\\frac {c_{1}(1-c')e_{1}NP}{a+h_{2}e_{2}A'+h_{1}e_{1}N}, \\\\ \\dot{P}(t)=\\frac {b_{1}[(1-c')e_{1}N+e_{2}A']P}{a+h_{2}e_{2}A'+h_{1}e_{1}N}-rP, \\end{cases}$$\n(1.2)\n\nwhere $$N(t)$$ and $$P(t)$$ represent the densities of the prey and predator at time t, respectively. The parameters $$c'$$ is a refuge parameter with $$c'\\in[0,1)$$.\n\nMotivated by the above work, we propose a delayed predator–prey model with stage structure for prey incorporating refuge and providing additional food to the predator,\n\n$$\\textstyle\\begin{cases} \\dot{x}_{1}(t)=ax_{2}(t)-bx_{1}(t)-\\alpha x_{1}(t), \\\\ \\dot{x}_{2}(t)=\\alpha x_{1}(t)-cx_{2}(t)-d x^{2}_{2}(t)-\\frac {k_{1}(1-m)e_{1}x_{2}(t)y(t)}{a_{1}+h_{2}e_{2}A'+h_{1}e_{1}x_{2}(t)}, \\\\ \\dot{y}(t)=\\frac{k_{2}[(1-m)e_{1}x_{2}(t-\\tau)+e_{2}A']y(t-\\tau )}{a_{1}+h_{2}e_{2}A'+h_{1}e_{1}x_{2}(t-\\tau)}-ry(t), \\end{cases}$$\n(1.3)\n\nwhere $$x_{1}(t)$$, $$x_{2}(t)$$ and $$y(t)$$ denote the densities of immature prey species, mature prey species and predator species at time t, respectively. a is the intrinsic growth rate of the immature prey species. b, c and r denote the death rates of immature prey, mature prey and predator, respectively. α is the transformation rate from immature prey to mature prey. d is intra species competition rate of mature prey. m is a refuge parameter with $$m\\in[0,1)$$, $$k_{1}(1-m)$$ is the capturing rate of the predator. $$k_{2}$$ is the conversion rate of nutrients into the production of predator species. $$\\tau\\geq0$$ is the time delay due to the gestation of the predator. $$h_{1}$$ and $$e_{1}$$, respectively represent the handling time of the predator per unit quantity of mature prey, ability of the predator to detect the mature prey. $$h_{2}$$ and $$e_{2}$$, respectively, represent the handling time of the predator per unit quantity of additional food, the ability of the predator to identify the additional food. $$A'$$ represents the biomass of the additional food. All the parameters are nonnegative constants.\n\nDefine $$k_{1}:=\\frac{k_{1}}{h_{1}}$$, $$k_{2}:=\\frac {k_{2}}{h_{1}}$$, $$a_{1}:=\\frac{a_{1}}{e_{1}h_{1}}$$, $$\\beta=\\frac {h_{2}}{h_{1}}$$, $$\\eta=\\frac{e_{2}}{e_{1}}$$. The model (1.3) can be written as\n\n$$\\textstyle\\begin{cases} \\dot{x}_{1}(t) =ax_{2}(t)-bx_{1}(t)-\\alpha x_{1}(t), \\\\ \\dot{x}_{2}(t) =\\alpha x_{1}(t)-cx_{2}(t)-d x^{2}_{2}(t)-\\frac {k_{1}(1-m)x_{2}(t)y(t)}{a_{1}+\\beta\\eta A'+x_{2}(t)}, \\\\ \\dot{y}(t) =\\frac{k_{2}[(1-m)x_{2}(t-\\tau)+\\eta A']y(t-\\tau )}{a_{1}+\\beta\\eta A'+x_{2}(t-\\tau)}-ry(t). \\end{cases}$$\n(1.4)\n\nBy denoting $$u_{1}(t)=\\frac{x_{1}(t)}{a_{1}}$$, $$u_{2}(t)=\\frac {x_{2}(t)}{a_{1}}$$, $$v(t)=\\frac{k_{1}y(t)}{a_{1}}$$, $$d_{1}=a_{1}d$$, $$\\xi =\\frac{\\eta A'}{a_{1}}$$, the model (1.4) reduces to the following form:\n\n$$\\textstyle\\begin{cases} \\dot{u}_{1}(t) =au_{2}(t)-bu_{1}(t)-\\alpha u_{1}(t), \\\\ \\dot{u}_{2}(t) =\\alpha u_{1}(t)-cu_{2}(t)-d_{1} u^{2}_{2}(t)-\\frac {(1-m)u_{2}(t)v(t)}{1+\\beta\\xi+u_{2}(t)}, \\\\ \\dot{v}(t) =\\frac{k_{2}[(1-m)u_{2}(t-\\tau)+\\xi]v(t-\\tau)}{1+\\beta\\xi +u_{2}(t-\\tau)}-rv(t), \\end{cases}$$\n(1.5)\n\nwhere the term β and ξ are the parameters which characterize the “quality” and “quantity” of additional food, respectively. 
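For readers who want to see trajectories of model (1.5), the sketch below integrates it with a fixed-step Euler scheme and a history buffer for the delayed terms u2(t − τ) and v(t − τ). The parameter values and the constant initial history are arbitrary illustrative choices, not values taken from this paper; for serious work a dedicated DDE solver with error control should be used instead.

```python
# Crude Euler integration of the delayed model (1.5) (illustrative sketch only).
import numpy as np

# Illustrative parameters (not from the paper)
a, b, c, alpha, d1 = 1.5, 0.1, 0.2, 0.6, 0.4
m, beta, xi, k2, r = 0.2, 0.5, 0.3, 0.8, 0.3
tau, dt, T = 2.0, 0.01, 100.0

lag = int(round(tau / dt))        # the delay measured in steps
steps = int(round(T / dt))        # steps to integrate after t = 0
N = lag + steps
u1 = np.empty(N + 1); u2 = np.empty(N + 1); v = np.empty(N + 1)
u1[:lag + 1] = 0.5; u2[:lag + 1] = 0.5; v[:lag + 1] = 0.5   # constant history on [-tau, 0]

for i in range(lag, N):
    u2d, vd = u2[i - lag], v[i - lag]                               # u2(t - tau), v(t - tau)
    pred = (1 - m) * u2[i] * v[i] / (1 + beta * xi + u2[i])         # predation loss of mature prey
    gain = k2 * ((1 - m) * u2d + xi) * vd / (1 + beta * xi + u2d)   # delayed predator growth
    u1[i + 1] = u1[i] + dt * (a * u2[i] - (b + alpha) * u1[i])
    u2[i + 1] = u2[i] + dt * (alpha * u1[i] - c * u2[i] - d1 * u2[i] ** 2 - pred)
    v[i + 1] = v[i] + dt * (gain - r * v[i])

print(u1[-1], u2[-1], v[-1])      # late-time state; shrink dt or use a DDE solver to confirm
```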
The initial conditions for model (1.5) take the form\n\n$$\\textstyle\\begin{cases} u_{1}(\\theta)=\\varphi_{1}(\\theta)\\geq0,\\qquad u_{2}(\\theta)=\\varphi_{2}(\\theta )\\geq0,\\qquad v(\\theta)=\\varphi_{3}(\\theta)\\geq0, \\\\ \\theta\\in[-\\tau,0),\\qquad \\varphi_{1}(0)>0, \\qquad \\varphi_{2}(0)>0,\\qquad \\varphi_{3}(0)>0, \\end{cases}$$\n(1.6)\n\nwhere $$(\\varphi_{1}(\\theta),\\varphi_{2}(\\theta),\\varphi_{3}(\\theta))\\in C \\{[-\\tau,0],R^{3}_{+}\\}$$, $$R^{3}_{+}=\\{(u_{1},u_{2},v):u_{1}\\geq 0,u_{2}\\geq0,v\\geq0\\}$$.\n\nFrom the fundamental theory of functional differential equations , the model (1.5) has a unique solution $$(u_{1}(t),u_{2}(t),v(t))$$ satisfying the initial conditions (1.6). It is easy to show that all solutions of (1.5) with initial conditions (1.6) are defined on $$[0,+\\infty)$$ and remain positive for all $$t\\geq0$$.\n\nThe main contributions of the present paper are: (1) A stage-structured predator–prey model incorporating refuge for prey and additional food for predator is formulated. (2) The existence and local stability of equilibria and the existence of Hopf bifurcation of the model are given. (3) The direction of the Hopf bifurcation and the stability of bifurcating periodic solutions are obtained by applying the center manifold theorem and the normal form theory. (4) Numerical simulations are illustrated to show our main results.\n\nIn this paper, we assume the following conditions hold.\n\n\\begin{aligned}& (\\mathrm{H}_{1}) \\quad a\\alpha-(b+\\alpha)c>0; \\qquad ( \\mathrm{H}_{2}) \\quad a\\alpha-(b+\\alpha )c< 0; \\\\& (\\mathrm{H}_{3})\\quad r+(r\\beta-k_{2})\\xi>0; \\qquad ( \\mathrm{H}_{4}) \\quad r+(r\\beta-k_{2})\\xi < 0; \\\\& (\\mathrm{H}_{5})\\quad k_{2}(1-m)-r>0; \\qquad ( \\mathrm{H}_{6})\\quad k_{2}(1-m)-r< 0; \\\\& (\\mathrm{H}_{7})\\quad 0< u^{\\ast}_{2}< \\frac{a\\alpha-(b+\\alpha)c}{d_{1}(b+\\alpha )}; \\qquad (\\mathrm{H}_{8})\\quad (b+\\alpha)c< a \\alpha< (b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A\\bigr); \\\\& (\\mathrm{H}_{9})\\quad (b+\\alpha)c< a\\alpha< (b+\\alpha) \\bigl(c+2d_{1}u^{\\ast }_{2}\\bigr); \\qquad ( \\mathrm{H}_{10}) \\quad (b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)< a\\alpha; \\end{aligned}\n\nwhere $$u^{\\ast}_{2}$$, A can be found in Sect. 2 and Sect. 3, respectively.\n\nNow we give the biological interpretation of the conditions ($$\\mathrm{H}_{1}$$)–($$\\mathrm{H}_{10}$$).\n\nFrom ($$\\mathrm{H}_{1}$$), we have $$a\\alpha>(b+\\alpha)c$$, which means that the prey species keeps a linear net growth without the predator species. The condition ($$\\mathrm{H}_{3}$$) can be rewritten in the form $$r>\\frac{k_{2}\\xi}{1+\\beta\\xi}$$. It means that additional food cannot ensure the survival of the predator species without the prey species. The condition ($$\\mathrm{H}_{5}$$) is explained that prey species can ensure the survival of predator species without additional food even if the prey species have refuge. It is clear that the conditions ($$\\mathrm{H}_{2}$$), ($$\\mathrm{H}_{4}$$), ($$\\mathrm{H}_{6}$$) have opposite interpretation with ($$\\mathrm{H}_{1}$$), ($$\\mathrm{H}_{3}$$), ($$\\mathrm{H}_{5}$$), respectively. The term $$\\frac{a\\alpha-(b+\\alpha)c}{d_{1}(b+\\alpha)}$$ in condition ($$\\mathrm{H}_{7}$$) is the ratio of net growth with their own retarded growth of prey species. This ratio is a critical value for $$u^{\\ast}_{2}$$. The condition ($$\\mathrm{H}_{8}$$) can be simplified as $$0< a\\alpha-(b+\\alpha)c<(b+\\alpha )(2d_{1}u^{\\ast}_{2}+A)$$. 
It implies that on the one hand the prey species keep linear net growth and on the other hand this growth is limited by some value. Obviously, if ($$\\mathrm{H}_{9}$$) holds, then ($$\\mathrm{H}_{8}$$) holds. The condition ($$\\mathrm{H}_{10}$$) shows that the linear net growth of the prey species is higher than the limit value.\n\nEquilibria of the model (1.5)\n\nIn order to obtain the equilibria of the model (1.5), we consider the prey nullcline and predator nullcline of this model, which are given by\n\n$$\\textstyle\\begin{cases} au_{2}-bu_{1}-\\alpha u_{1}=0, \\\\ \\alpha u_{1}-cu_{2}-d_{1} u^{2}_{2}-\\frac{(1-m)u_{2}v}{1+\\beta\\xi +u_{2}}=0, \\\\ \\frac{k_{2}[(1-m)u_{2}+\\xi]v}{1+\\beta\\xi+u_{2}}-rv=0. \\end{cases}$$\n\nObviously, the model (1.5) always has a trivial equilibrium $$E_{0}(0,0,0)$$.\n\nIf the condition ($$\\mathrm{H}_{1}$$) holds, then the model (1.5) has a predator-extinction equilibrium $$E_{1}(\\bar{u}_{1},\\bar{u}_{2},0)$$, where $$\\bar{u}_{1}=\\frac{a[a\\alpha-(b+\\alpha)c]}{(b+\\alpha)^{2}d_{1}}$$, $$\\bar{u}_{2}=\\frac{a\\alpha-(b+\\alpha)c}{(b+\\alpha)d_{1}}$$.\n\nIf the conditions ($$\\mathrm{H}_{1}$$), ($$\\mathrm{H}_{3}$$), ($$\\mathrm{H}_{5}$$) and ($$\\mathrm{H}_{7}$$) hold, which imply\n\n$$\\beta>\\frac{k_{2}}{r}-\\frac{1}{\\xi}, \\quad \\mbox{and}\\quad 0< m< \\min \\biggl\\{ 1-\\frac{r}{k_{2}},1-\\frac{r}{k_{2}}-\\frac{d_{1}[r+(r\\beta -k_{2})\\xi](b+\\alpha)}{\\alpha k_{2}(a-c)-bc}\\biggr\\} ,$$\n\nthen there exists a unique coexisting equilibrium $$E_{2}(u^{\\ast }_{1},u^{\\ast}_{2},v^{\\ast})$$ of the model (1.5), where\n\n\\begin{aligned} &u^{\\ast}_{1}=\\frac{a}{b+\\alpha}u^{\\ast}_{2}, \\qquad u^{\\ast}_{2}=\\frac {r+(r\\beta-k_{2})\\xi}{k_{2}(1-m)-r}, \\\\ &v^{\\ast}= \\frac{[a\\alpha-(b+\\alpha)c-d_{1}(b+\\alpha)u^{\\ast }_{2}](1+\\beta\\xi+u^{\\ast}_{2})}{(1-m)(b+\\alpha)}. 
\\end{aligned}\n\nLocal stability of the equilibria\n\nLet $$E(u_{1},u_{2},v)$$ be any arbitrary equilibrium, then Jacobian matrix at E is given by\n\n${J}_{\\left({u}_{1},{u}_{2},v\\right)}=\\left(\\begin{array}{ccc}-b-\\alpha & a& 0\\\\ \\alpha & -c-2{d}_{1}{u}_{2}-\\frac{\\left(1-m\\right)\\left(1+\\beta \\xi \\right)v}{{\\left(1+\\beta \\xi +{u}_{2}\\right)}^{2}}& -\\frac{\\left(1-m\\right){u}_{2}}{1+\\beta \\xi +{u}_{2}}\\\\ 0& \\frac{{k}_{2}\\left[\\left(1-m\\right)\\left(1+\\beta \\xi \\right)-\\xi \\right]v{e}^{-\\lambda \\tau }}{{\\left(1+\\beta \\xi +{u}_{2}\\right)}^{2}}& \\frac{{k}_{2}\\left[\\left(1-m\\right){u}_{2}+\\xi \\right]{e}^{-\\lambda \\tau }}{1+\\beta \\xi +{u}_{2}}-r\\end{array}\\right).$\n\n(a) Trivial equilibrium point: At the trivial equilibrium point $$E_{0}(0,0,0)$$, the Jacobian matrix is given by\n\n${J}_{\\left(0,0,0\\right)}=\\left(\\begin{array}{ccc}-b-\\alpha & a& 0\\\\ \\alpha & -c& 0\\\\ 0& 0& \\frac{{k}_{2}\\xi {e}^{-\\lambda \\tau }}{1+\\beta \\xi }-r\\end{array}\\right),$\n\nand the characteristic equation at $$E_{0}$$ becomes\n\n$$\\biggl(\\lambda+r-\\frac{k_{2}\\xi e^{-\\lambda\\tau}}{1+\\beta\\xi} \\biggr)\\bigl[ \\lambda^{2}+(b+\\alpha+c)\\lambda+c(b+\\alpha)-a\\alpha\\bigr]=0,$$\n(3.1)\n\nthen the equation\n\n$$\\lambda^{2}+(b+\\alpha+c)\\lambda+c(b+\\alpha)-a\\alpha=0$$\n\nhas two roots, and we have $$\\lambda_{1}+\\lambda_{2}=-(b+\\alpha+c)<0$$, $$\\lambda_{1}\\lambda_{2}=c(b+\\alpha)-a\\alpha$$.\n\nIf ($$\\mathrm{H}_{1}$$) holds, then $$\\lambda_{1}\\lambda_{2}<0$$, that is, $$E_{0}$$ is an unstable saddle; If ($$\\mathrm{H}_{2}$$) holds, then $$\\lambda _{1}\\lambda_{2}>0$$, that is, $$\\operatorname{Re}(\\lambda_{i})<0$$, $$i=1,2$$. Another root of (3.1) is determined by the equation\n\n$$\\lambda+r-\\frac{k_{2}\\xi e^{-\\lambda\\tau}}{1+\\beta\\xi}=0.$$\n(3.2)\n\nDenote\n\n$$f_{1}(\\lambda)=\\lambda+r-\\frac{k_{2}\\xi e^{-\\lambda\\tau}}{1+\\beta\\xi}.$$\n\nIf ($$\\mathrm{H}_{2}$$) and ($$\\mathrm{H}_{3}$$) hold, we claim that $$E_{0}$$ is locally asymptotically stable. Otherwise, there is a root λ satisfying $$\\operatorname{Re}(\\lambda)\\geq0$$, it follows from (3.2) that\n\n$$\\operatorname{Re}(\\lambda)=\\frac{k_{2}\\xi}{1+\\beta\\xi}e^{-\\tau\\operatorname{Re}\\lambda }\\cos(\\tau \\operatorname{Im}\\lambda)-r\\leq\\frac{k_{2}\\xi}{1+\\beta\\xi}-r< 0,$$\n\nwhich is contradiction. Hence the equilibrium $$E_{0}$$ is locally asymptotically stable.\n\nIf ($$\\mathrm{H}_{4}$$) holds, it is easy to show that, for real λ, $$f_{1}(0)=r-\\frac{k_{2}\\xi}{1+\\beta\\xi}<0$$, and\n\n$$\\lim_{\\lambda\\rightarrow+\\infty}f_{1}(\\lambda)=+\\infty.$$\n\nHence, $$f_{1}(\\lambda)=0$$ has a positive real root.\n\nFrom the above discussions, we can get the following theorem.\n\nTheorem 3.1\n\nFor the model (1.5):\n\n1. (i)\n\nIf ($$\\mathrm{H}_{1}$$) or ($$\\mathrm{H}_{4}$$) holds, then the trivial equilibrium $$E_{0}(0,0,0)$$ is unstable.\n\n2. 
(ii)\n\nIf ($$\\mathrm{H}_{2}$$) and ($$\\mathrm{H}_{3}$$) hold, then the trivial equilibrium $$E_{0}(0,0,0)$$ is locally asymptotically stable.\n\nRemark 3.1\n\nIt is easy to understand Theorem 3.1 from the biological meaning of ($$\\mathrm{H}_{1}$$)–($$\\mathrm{H}_{4}$$).\n\n(b) Predator-extinction equilibrium point: At equilibrium point $$E_{1}(\\bar{u}_{1},\\bar{u}_{2},0)$$, the Jacobian matrix is given by\n\n${J}_{\\left({\\overline{u}}_{1},{\\overline{u}}_{2},0\\right)}=\\left(\\begin{array}{ccc}-b-\\alpha & a& 0\\\\ \\alpha & -c-2{d}_{1}{\\overline{u}}_{2}& -\\frac{\\left(1-m\\right){\\overline{u}}_{2}}{1+\\beta \\xi +{\\overline{u}}_{2}}\\\\ 0& 0& \\frac{{k}_{2}\\left[\\left(1-m\\right){\\overline{u}}_{2}+\\xi \\right]{e}^{-\\lambda \\tau }}{1+\\beta \\xi +{\\overline{u}}_{2}}-r\\end{array}\\right),$\n\nand the characteristic equation at $$E_{1}$$ becomes\n\n\\begin{aligned}& \\biggl(\\lambda+r-\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi] e^{-\\lambda\\tau}}{1+\\beta \\xi+\\bar{u}_{2}}\\biggr)\\bigl[ \\lambda^{2}+(b+\\alpha+c+2d_{1}\\bar{u}_{2})\\lambda +(b+\\alpha) (c+2d_{1}\\bar{u}_{2})-a\\alpha\\bigr] \\\\& \\quad =0, \\end{aligned}\n(3.3)\n\nthen the equation\n\n$$\\lambda^{2}+(b+\\alpha+c+2d_{1}\\bar{u}_{2}) \\lambda+(b+\\alpha ) (c+2d_{1}\\bar{u}_{2})-a\\alpha=0$$\n\nhas two roots, and\n\n\\begin{aligned}& \\lambda_{1}+\\lambda_{2}=-(b+\\alpha+c+2d_{1}\\bar {u}_{2})< 0, \\\\& \\begin{aligned} \\lambda_{1}\\lambda_{2}&=(b+\\alpha) (c+2d_{1}\\bar{u}_{2})-a\\alpha \\\\ &=a\\alpha-c(b+\\alpha). \\end{aligned} \\end{aligned}\n\nIf ($$\\mathrm{H}_{1}$$) holds, then $$\\lambda_{1}\\lambda_{2}>0$$, that is $$\\operatorname{Re}\\lambda_{i}<0$$, $$i=1,2$$. Another root of (3.3) is determined by\n\n$$\\lambda+r-\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi]}{1+\\beta\\xi+\\bar{u}_{2}} e^{-\\lambda\\tau}=0.$$\n(3.4)\n\nDenote\n\n$$f_{2}(\\lambda)=\\lambda+r-\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi]}{1+\\beta\\xi +\\bar{u}_{2}} e^{-\\lambda\\tau}.$$\n\nIf ($$\\mathrm{H}_{4}$$) and ($$\\mathrm{H}_{5}$$) hold, it is easy to show that, for real λ,\n\n\\begin{aligned} f_{2}(0) =&r-\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi]}{1+\\beta\\xi+\\bar {u}_{2}} \\\\ =&\\frac{1}{1+\\beta\\xi+\\bar{u}_{2}} \\bigl[r(1+\\beta\\xi)-k_{2}\\xi + \\bigl[r-k_{2}(1-m)\\bigr]\\bar{u}_{2} \\bigr] \\\\ < &0, \\end{aligned}\n\nand $$\\lim_{\\lambda\\rightarrow+\\infty}f_{2}(\\lambda)=+\\infty$$. Hence, $$f_{2}(\\lambda)=0$$ has a positive real root.\n\nIf ($$\\mathrm{H}_{3}$$) and ($$\\mathrm{H}_{6}$$) hold, we have $$f_{2}(0)>0$$. We claim that $$E_{1}$$ is locally asymptotically stable. Otherwise, there is a root λ satisfying $$\\operatorname{Re}\\lambda\\geq0$$. It follows from (3.4) that\n\n\\begin{aligned} \\operatorname{Re}\\lambda =&\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi]}{1+\\beta\\xi+\\bar {u}_{2}} e^{-\\tau\\operatorname{Re}\\lambda}\\cos(\\tau \\operatorname{Im}\\lambda)-r \\\\ \\leq&\\frac{k_{2}[(1-m)\\bar{u}_{2}+\\xi]}{1+\\beta\\xi+\\bar{u}_{2}}-r \\\\ =&-f_{2}(0)< 0, \\end{aligned}\n\nwhich is a contradiction. Hence, when ($$\\mathrm{H}_{3}$$) and ($$\\mathrm{H}_{6}$$) hold, then $$\\operatorname{Re}\\lambda<0$$.\n\nBased on the above discussions, the following theorem can be obtained.\n\nTheorem 3.2\n\nSuppose that ($$\\mathrm{H}_{1}$$) holds. For the model (1.5), we have:\n\n1. (i)\n\nIf ($$\\mathrm{H}_{4}$$) and ($$\\mathrm{H}_{5}$$) hold, then the predator-extinction equilibrium $$E_{1}(\\bar{u}_{1},\\bar{u}_{2},0)$$ is unstable.\n\n2. 
(ii)\n\nIf ($$\\mathrm{H}_{3}$$) and ($$\\mathrm{H}_{6}$$) hold, then the predator-extinction equilibrium $$E_{1}(\\bar{u}_{1},\\bar{u}_{2},0)$$ is locally asymptotically stable.\n\nRemark 3.2\n\nIt is easy to understand Theorem 3.2 from the biological meaning of ($$\\mathrm{H}_{3}$$)–($$\\mathrm{H}_{6}$$).\n\n(c) Co-existing equilibrium point: At the coexisting equilibrium point $$E_{2}(u^{\\ast}_{1},u^{\\ast}_{2},v^{\\ast})$$, the Jacobian matrix is given by\n\n${J}_{\\left({u}_{1}^{\\ast },{u}_{2}^{\\ast },{v}^{\\ast }\\right)}=\\left(\\begin{array}{ccc}-b-\\alpha & a& 0\\\\ \\alpha & -c-2{d}_{1}{u}_{2}^{\\ast }-\\frac{\\left(1-m\\right)\\left(1+\\beta \\xi \\right){v}^{\\ast }}{{\\left(1+\\beta \\xi +{u}_{2}^{\\ast }\\right)}^{2}}& -\\frac{\\left(1-m\\right){u}_{2}^{\\ast }}{1+\\beta \\xi +{u}_{2}^{\\ast }}\\\\ 0& \\frac{{k}_{2}\\left[\\left(1-m\\right)\\left(1+\\beta \\xi \\right)-\\xi \\right]{v}^{\\ast }}{{\\left(1+\\beta \\xi +{u}_{2}^{\\ast }\\right)}^{2}}{e}^{-\\lambda \\tau }& \\frac{{k}_{2}\\left[\\left(1-m\\right){u}_{2}^{\\ast }+\\xi \\right]{e}^{-\\lambda \\tau }}{1+\\beta \\xi +{u}_{2}^{\\ast }}-r\\end{array}\\right),$\n\nand the characteristic equation at $$E_{2}$$ becomes\n\n\\begin{aligned}& \\lambda^{3} + \\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A+r-Be^{-\\lambda\\tau } \\bigr)\\lambda^{2} \\\\& \\quad{} + \\biggl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha \\\\& \\quad {}+\\bigl(r-Be^{-\\lambda\\tau }\\bigr) \\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr)+\\frac{k_{2}C(1-m)u^{\\ast }_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}e^{-\\lambda\\tau} \\biggr]\\lambda \\\\& \\quad{} + \\bigl(r-Be^{-\\lambda\\tau}\\bigr) \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast }_{2}+A \\bigr)-a\\alpha \\bigr]+\\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta \\xi+u^{\\ast}_{2}}e^{-\\lambda\\tau}=0, \\end{aligned}\n(3.5)\n\nwhere $$A=\\frac{(1-m)(1+\\beta\\xi)}{(1+\\beta\\xi+u^{\\ast}_{2})^{2}}v^{\\ast }>0$$, $$B=\\frac{k_{2}[(1-m)u^{\\ast}_{2}+\\xi]}{1+\\beta\\xi+u^{\\ast }_{2}}=r>0$$, and $$C=A-\\frac{\\xi v^{\\ast}}{(1+\\beta\\xi+u^{\\ast }_{2})^{2}}>0$$, when $$m<1-\\frac{\\xi}{1+\\beta\\xi}$$.\n\nOne can rewrite (3.5) so that it has the following form:\n\n\\begin{aligned}& \\lambda^{3} + \\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A+r \\bigr)\\lambda^{2} \\\\& \\quad {}+ \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha+r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast }_{2}+A \\bigr) \\bigr]\\lambda \\\\& \\quad {} + r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr] \\\\& \\quad {} + e^{-\\lambda\\tau}\\biggl[-r\\lambda^{2}+\\biggl[ \\frac{k_{2}C(1-m)u^{\\ast }_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}-r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast }_{2}+A \\bigr)\\biggr]\\lambda \\\\& \\quad {} + \\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}-r \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha \\bigr] \\biggr]=0. \\end{aligned}\n(3.6)\n\nLet\n\n\\begin{aligned}& P_{1} =b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A+r, \\\\& P_{2} =(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha+r\\bigl(b+\\alpha +c+2d_{1}u^{\\ast}_{2}+A \\bigr), \\\\& P_{3} =r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr], \\\\& P_{4} =\\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}-r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr), \\\\& P_{5} =\\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}-r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]. 
\\end{aligned}\n\nEquation (3.6) can be written as\n\n$$\\lambda^{3}+P_{1}\\lambda^{2}+P_{2} \\lambda+P_{3}+e^{-\\lambda\\tau }\\bigl(-r\\lambda^{2}+P_{4} \\lambda+P_{5}\\bigr)=0.$$\n(3.7)\n\nCase 3.1. $$\\tau=0$$.\n\nEquation (3.7) turns to\n\n$$\\lambda^{3}+(P_{1}-r)\\lambda^{2}+(P_{2}+P_{4}) \\lambda+(P_{3}+P_{5})=0,$$\n(3.8)\n\nthen\n\n\\begin{aligned}& P_{1}-r=b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A+r-r=b+ \\alpha+c+2d_{1}u^{\\ast }_{2}+A>0, \\\\& P_{3}+P_{5}=r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]+\\frac {k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}} \\\\& \\hphantom{P_{3}+P_{5}={}}{}-r\\bigl[(b+\\alpha ) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr] \\\\& \\hphantom{P_{3}+P_{5}}=\\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}>0, \\\\& (P_{1}-r) (P_{2}+P_{4})-(P_{3}+P_{5}) \\\\& \\quad = \\bigl(b+\\alpha+c+2d_{1}u^{\\ast }_{2}+A\\bigr) \\biggl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A\\bigr)-a \\alpha \\\\& \\qquad{} + r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr)+\\frac{k_{2}C(1-m)u^{\\ast }_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}-r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr)\\biggr] \\\\& \\qquad {}- \\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}} \\\\& \\quad = \\bigl(c+2d_{1}u^{\\ast}_{2}+A\\bigr) \\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi +u^{\\ast}_{2}} \\\\& \\qquad {} + \\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr)\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast }_{2}+A \\bigr)-a\\alpha\\bigr]. \\end{aligned}\n\nIf the condition ($$\\mathrm{H}_{8}$$) holds, then $$(P_{1}-r)(P_{2}+P_{4})-(P_{3}+P_{5})>0$$. By the Routh–Hurwitz criterion, we see that the coexisting equilibrium point $$E_{2}$$ is locally asymptotically stable.\n\nCase 3.2. $$\\tau> 0$$.\n\nLet $$\\lambda=i\\omega$$ ($$\\omega>0$$) be a root of (3.7), then\n\n$$\\bigl(-i\\omega^{3}+i\\omega P_{2}+P_{3}-P_{1} \\omega^{2}\\bigr)+(\\cos\\omega\\tau-i\\sin \\omega\\tau) \\bigl(iP_{4} \\omega+r\\omega^{2}+P_{5}\\bigr)=0.$$\n(3.9)\n\nSeparating real part and imaginary part of (3.9), we have\n\n\\begin{aligned}& P_{4}\\omega\\cos\\omega\\tau-\\bigl(r\\omega^{2}+P_{5} \\bigr)\\sin\\omega\\tau=\\omega ^{3}-P_{2}\\omega, \\\\& P_{4}\\omega\\sin\\omega\\tau+\\bigl(r\\omega^{2}+P_{5} \\bigr)\\cos\\omega\\tau=P_{1}\\omega ^{2}-P_{3}, \\end{aligned}\n\nthat is,\n\n$$\\textstyle\\begin{cases} \\cos\\omega\\tau=\\frac{(rP_{1}+P_{4})\\omega ^{4}-(rP_{3}+P_{2}P_{4}-P_{1}P_{5})\\omega^{2}-P_{3}P_{5}}{(P_{4}\\omega )^{2}+(P_{5}+r\\omega^{2})^{2}}, \\\\ \\sin\\omega\\tau=\\frac{-r\\omega^{5}+(P_{1}P_{5}-P_{5}+rP_{2})\\omega ^{2}+(P_{2}P_{5}-P_{3}P_{4})\\omega}{(P_{4}\\omega)^{2}+(P_{5}+r\\omega^{2})^{2}}. \\end{cases}$$\n(3.10)\n\nTaking the square on both sides of (3.10) implies that\n\n$$\\omega^{6}+\\bigl(P_{1}^{2}-2P_{2}-r^{2} \\bigr)\\omega ^{4}+\\bigl(P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5} \\bigr)\\omega^{2}+P_{3}^{2}-P_{5}^{2}=0.$$\n(3.11)\n\nSuppose $$\\nu=\\omega^{2}$$. 
Then (3.11) becomes\n\n$$\\nu^{3}+\\bigl(P_{1}^{2}-2P_{2}-r^{2} \\bigr)\\nu ^{2}+\\bigl(P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5} \\bigr)\\nu+P_{3}^{2}-P_{5}^{2}=0,$$\n(3.12)\n\nwhere\n\n\\begin{aligned}& P_{1}^{2}-2P_{2}-r^{2}=\\bigl(b+ \\alpha+c+2d_{1}u^{\\ast}_{2}+A+r\\bigr)^{2}-2 \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A\\bigr) \\\\& \\hphantom{P_{1}^{2}-2P_{2}-r^{2}={}}{}-a\\alpha+r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr)\\bigr]-r^{2} \\\\& \\hphantom{P_{1}^{2}-2P_{2}-r^{2}}=(b+\\alpha)^{2}+\\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)^{2}+2a\\alpha>0, \\\\& P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5} \\\\& \\quad = \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast }_{2}+A\\bigr)-a \\alpha+r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A\\bigr) \\bigr]^{2} \\\\& \\qquad {}-2r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A+r \\bigr)\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast }_{2}+A \\bigr)-a\\alpha\\bigr] \\\\& \\qquad {}-2r \\biggl[\\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}-r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A\\bigr)-a\\alpha\\bigr] \\biggr] \\\\& \\qquad {}- \\biggl[\\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}-r\\bigl(b+\\alpha+c+2d_{1}u^{\\ast}_{2}+A \\bigr) \\biggr]^{2}. \\end{aligned}\n\nDenoting $$m_{1}=b+\\alpha$$, $$m_{2}=c+2d_{1}u^{\\ast}_{2}+A$$, $$m_{3}=\\frac {k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}$$, we have\n\n\\begin{aligned}& P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5} \\\\& \\quad = \\bigl[m_{1}m_{2}-a\\alpha +r(m_{1}+m_{2}) \\bigr]^{2}-2r(m_{1}+m_{2}+r) (m_{1}m_{2}-a \\alpha) \\\\& \\qquad {}- 2r\\bigl[m_{3}(b+\\alpha)-r(m_{1}m_{2}-a \\alpha )\\bigr]-\\bigl[m_{3}-r(m_{1}+m_{2}) \\bigr]^{2} \\\\& \\quad = (m_{1}m_{2}-a\\alpha)^{2}+2rm_{3} \\bigl[m_{1}+m_{2}-(b+\\alpha)\\bigr]-m_{3}^{2} \\\\& \\quad = \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]^{2} \\\\& \\qquad {} + \\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}\\biggl[2r\\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-\\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta \\xi+u^{\\ast}_{2}}\\biggr] \\\\& \\quad = \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]^{2} \\\\& \\qquad {} + \\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}} \\biggl[2\\frac {k_{2}[(1-m)u^{\\ast}_{2}+\\xi]}{1+\\beta\\xi+u^{\\ast}_{2}} \\bigl(c+2d_{1}u^{\\ast }_{2}+A\\bigr)-\\frac{k_{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}} \\biggr] \\\\& \\quad = \\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]^{2} \\\\& \\qquad {} + \\frac{k_{2}^{2}C(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}\\times \\biggl[\\bigl(c+2d_{1}u^{\\ast}_{2} \\bigr)\\bigl[2(1-m)u^{\\ast}_{2}+\\xi\\bigr]+\\xi v^{\\ast}+ \\frac{\\xi (1-m)u^{\\ast}_{2}v^{\\ast}}{(1+\\beta\\xi+u^{\\ast}_{2})^{2}}\\biggr] \\\\& \\quad > 0, \\\\& P_{3}^{2}-P_{5}^{2} = (P_{3}+P_{5}) (P_{3}-P_{5}) \\\\& \\hphantom{P_{3}^{2}-P_{5}^{2}}= \\frac {k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}} \\biggl[r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr] \\\\& \\hphantom{P_{3}^{2}-P_{5}^{2} ={}}{} - \\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}+r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]\\biggr] \\\\& \\hphantom{P_{3}^{2}-P_{5}^{2}} = \\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}\\biggl[\\frac{k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}} \\\\& 
\\hphantom{\\hphantom{P_{3}^{2}-P_{5}^{2}}={}}{}+\\frac{2k_{2}(1-m)u^{\\ast}_{2}[(b+\\alpha)(c+2d_{1}u^{\\ast }_{2})-a\\alpha]}{1+\\beta\\xi+u^{\\ast}_{2}} \\\\& \\hphantom{P_{3}^{2}-P_{5}^{2} ={}}{} + \\frac{2k_{2}\\xi[(b+\\alpha)(c+2d_{1}u^{\\ast}_{2}+A)-a\\alpha ]}{1+\\beta\\xi+u^{\\ast}_{2}}+\\frac{k_{2}\\xi(1-m)(b+\\alpha)u^{\\ast }_{2}v^{\\ast}}{(1+\\beta\\xi+u^{\\ast}_{2})^{3}}\\biggr]. \\end{aligned}\n\nIf ($$\\mathrm{H}_{9}$$) holds, then we have $$a\\alpha<(b+\\alpha )(c+2d_{1}u^{\\ast}_{2}+A)$$. Obviously, if ($$\\mathrm{H}_{9}$$) holds, it implies that $$P_{3}^{2}-P_{5}^{2}>0$$, and $$(P_{1}-r)(P_{2}+P_{4})-(P_{3}+P_{5})>0$$, then (3.11) has no positive real roots. Therefore, by Theorem 3.4.1 in , all roots of (3.11) have negative real parts for all $$\\tau\\geq0$$, which implies that the positive equilibrium $$E_{2}(u^{\\ast}_{1},u^{\\ast }_{2},v^{\\ast})$$ is locally asymptotically stable for all $$\\tau\\geq0$$.\n\nIf ($$\\mathrm{H}_{10}$$) $$a\\alpha>(b+\\alpha)(c+2d_{1}u^{\\ast}_{2}+A)$$ holds, which implies that\n\n$$P_{3}-P_{5}=2r\\bigl[(b+\\alpha) \\bigl(c+2d_{1}u^{\\ast}_{2}+A \\bigr)-a\\alpha\\bigr]-\\frac {k_{2}C(1-m)(b+\\alpha)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast}_{2}}< 0,$$\n\nthen $$P_{3}^{2}-P_{5}^{2}<0$$. Hence, there exists a unique positive root $$\\omega_{0}$$ satisfying (3.11). From (3.10), we get\n\n$$\\textstyle\\begin{cases} \\cos\\omega_{0}\\tau=\\frac{(rP_{1}+P_{4})\\omega _{0}^{4}-(rP_{3}+P_{2}P_{4}-P_{1}P_{5})\\omega _{0}^{2}-P_{3}P_{5}}{(P_{4}\\omega_{0})^{2}+(P_{5}+r\\omega _{0}^{2})^{2}}, \\\\ \\sin\\omega_{0}\\tau=\\frac{-r\\omega _{0}^{5}+(P_{1}P_{5}-P_{5}+rP_{2})\\omega _{0}^{2}+(P_{2}P_{5}-P_{3}P_{4})\\omega_{0}}{(P_{4}\\omega _{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}}. \\end{cases}$$\n\nDenote\n\n$$\\tau_{n}=\\frac{1}{\\omega_{0}}\\arccos\\frac{(rP_{1}+P_{4})\\omega _{0}^{4}-(rP_{3}+P_{2}P_{4}-P_{1}P_{5})\\omega _{0}^{2}-P_{3}P_{5}}{(P_{4}\\omega_{0})^{2}+(P_{5}+r\\omega _{0}^{2})^{2}}+ \\frac{2n\\pi}{\\omega_{0}},\\quad n=0,1,2,\\ldots.$$\n\nTaking $$\\tau_{0}=\\min\\{\\tau_{n}: n=0,1,2,\\ldots\\}$$, we see that $$\\pm i\\omega_{0}$$ is a pair of purely imaginary roots of (3.7) with $$\\tau =\\tau_{n}$$. 
Differentiating the two sides of (3.7) with respect to τ, it follows that\n\n$$\\bigl(3\\lambda^{2}+2P_{1}\\lambda+P_{2}\\bigr) \\frac{d\\lambda}{d\\tau}+(-2r\\lambda +P_{4})e^{-\\lambda\\tau} \\frac{d\\lambda}{d\\tau}+\\bigl(-r\\lambda ^{2}+P_{4} \\lambda+P_{5}\\bigr) \\biggl(-\\tau e^{-\\lambda\\tau}\\frac{d\\lambda}{d\\tau }- \\lambda e^{-\\lambda\\tau}\\biggr)=0,$$\n\nthen\n\n\\begin{aligned}& \\biggl(\\frac{d\\lambda}{d\\tau} \\biggr)^{-1}=-\\frac{3\\lambda ^{2}+2P_{1}\\lambda+P_{2}}{\\lambda(\\lambda^{3}+P_{1}\\lambda ^{2}+P_{2}\\lambda+P_{3})}+ \\frac{-2r\\lambda+P_{4}}{\\lambda(-r\\lambda ^{2}+P_{4}\\lambda+P_{5})}-\\frac{\\tau}{\\lambda}, \\\\& \\biggl(\\frac{d\\lambda}{d\\tau} \\biggr)_{\\lambda=i\\omega_{0}}^{-1} \\\\& \\quad = \\frac {3\\omega_{0}^{2}-P_{2}-2iP_{1}\\omega_{0}}{i\\omega_{0}(-i\\omega _{0}^{3}+iP_{2}\\omega_{0}+P_{3}-P_{1}\\omega_{0}^{2})}+\\frac {P_{4}-2ir\\omega_{0}}{i\\omega_{0}(r\\omega_{0}^{2}+P_{5}+iP_{4}\\omega _{0})}-\\frac{\\tau}{i\\omega_{0}} \\\\& \\quad =\\frac{[(3\\omega_{0}^{2}-P_{2})(\\omega_{0}^{3}-P_{2}\\omega _{0})-2P_{1}\\omega_{0}(P_{3}-P_{1}\\omega_{0}^{2})]+i[-2P_{1}\\omega _{0}(\\omega_{0}^{3}-P_{2}\\omega_{0})+(3\\omega_{0}^{2}-P_{2})(P_{1}\\omega _{0}^{2}-P_{3})]}{\\omega_{0}[(\\omega_{0}^{3}-P_{2}\\omega _{0})^{2}+(P_{3}-P_{1}\\omega_{0}^{2})^{2}]} \\\\& \\qquad {}+\\frac{[-P_{4}^{2}\\omega_{0}-2r\\omega_{0}(P_{5}+r\\omega _{0}^{2})]-i[P_{4}(P_{5}+r\\omega_{0}^{2})-2rP_{4}\\omega_{0}^{2}]}{\\omega _{0}[(P_{4}\\omega_{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}]}-\\frac{\\tau }{i\\omega_{0}} \\\\& \\quad =\\frac{E+iF}{\\omega_{0}[(\\omega_{0}^{3}-P_{2}\\omega _{0})^{2}+(P_{3}-P_{1}\\omega_{0}^{2})^{2}]}+\\frac{E'-iF'}{\\omega _{0}[(P_{4}\\omega_{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}]}-\\frac{\\tau }{i\\omega_{0}}, \\end{aligned}\n\nwhere\n\n\\begin{aligned}& E=\\bigl(3\\omega_{0}^{2}-P_{2}\\bigr) \\bigl( \\omega_{0}^{3}-P_{2}\\omega_{0} \\bigr)-2P_{1}\\omega _{0}\\bigl(P_{3}-P_{1} \\omega_{0}^{2}\\bigr), \\\\& F=-2P_{1}\\omega_{0}\\bigl(\\omega_{0}^{3}-P_{2} \\omega_{0}\\bigr)+\\bigl(3\\omega _{0}^{2}-P_{2} \\bigr) \\bigl(P_{1}\\omega_{0}^{2}-P_{3} \\bigr), \\\\& E'=-P_{4}^{2}\\omega_{0}-2r \\omega_{0}\\bigl(P_{5}+r\\omega _{0}^{2} \\bigr), \\qquad F'=P_{4}P_{5}-rP_{4} \\omega_{0}^{2}. \\end{aligned}\n\nSince\n\n$$\\bigl(\\omega_{0}^{3}-P_{2}\\omega_{0} \\bigr)^{2}+\\bigl(P_{3}-P_{1}\\omega _{0}^{2}\\bigr)^{2}=(P_{4} \\omega_{0})^{2}+\\bigl(P_{5}+r\\omega_{0}^{2} \\bigr)^{2},$$\n\nwe have\n\n$$\\biggl(\\frac{d\\lambda}{d\\tau} \\biggr)_{\\lambda=i\\omega_{0}}^{-1}= \\frac {1}{\\omega_{0}} \\biggl[\\frac{E+E'+i(F-F')}{(P_{4}\\omega _{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}}-\\frac{\\tau}{i\\omega_{0}} \\biggr].$$\n\nBy simple computation, we derive that\n\n\\begin{aligned} \\operatorname{sgn} \\biggl\\{ \\frac{d\\operatorname{Re}\\lambda}{d\\tau} \\biggr\\} _{\\lambda =i\\omega_{0}} =& \\operatorname{sgn} \\biggl\\{ \\operatorname{Re} \\biggl(\\frac{d\\lambda}{d\\tau } \\biggr)^{-1} \\biggr\\} _{\\lambda=i\\omega_{0}} \\\\ =&\\operatorname{sgn} \\biggl\\{ \\frac{1}{\\omega_{0}}\\frac{E+E'}{(P_{4}\\omega _{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}} \\biggr\\} \\\\ =&\\operatorname{sgn} \\biggl\\{ \\frac{3\\omega_{0}^{4}+2\\omega _{0}^{2}(P_{1}^{2}-2P_{2}-r^{2})+(P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5})}{(P_{4}\\omega _{0})^{2}+(P_{5}+r\\omega_{0}^{2})^{2}} \\biggr\\} \\\\ >&0. \\end{aligned}\n\nTherefore, the transversal condition holds and a Hopf bifurcation occurs at $$\\omega=\\omega_{0}$$, $$\\tau=\\tau_{0}$$. 
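For completeness (this remark is ours, not part of the original argument), the positivity used in the last step can also be read off directly from the cubic in (3.12): writing

$$h(\nu)=\nu^{3}+\bigl(P_{1}^{2}-2P_{2}-r^{2}\bigr)\nu^{2}+\bigl(P_{2}^{2}-2P_{1}P_{3}-P_{4}^{2}-2rP_{5}\bigr)\nu+P_{3}^{2}-P_{5}^{2},$$

the numerator of the last sign expression is exactly $$h'(\omega_{0}^{2})$$, and since the coefficients of $$\nu^{2}$$ and $$\nu$$ were shown above to be positive, $$h'(\nu)>0$$ for every $$\nu\geq0$$.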
In conclusion, we have the following results.\n\nTheorem 3.3\n\nAssume that ($$\\mathrm{H}_{1}$$), ($$\\mathrm{H}_{3}$$), ($$\\mathrm{H}_{5}$$), ($$\\mathrm{H}_{7}$$) hold and $$m<1-\\frac{\\xi}{1+\\beta\\xi}$$. For the model (1.5), we have:\n\n1. (i)\n\nIf ($$\\mathrm{H}_{9}$$) holds, then the coexisting equilibrium $$E_{2}(u^{\\ast}_{1},u^{\\ast}_{2},v^{\\ast})$$ is locally asymptotically stable for all $$\\tau\\geq0$$.\n\n2. (ii)\n\nIf ($$\\mathrm{H}_{10}$$) holds, then there exists a positive number $$\\tau_{0}$$, such that $$E_{2}(u^{\\ast}_{1},u^{\\ast}_{2},v^{\\ast})$$ is locally asymptotically stable for $$0\\leq\\tau<\\tau_{0}$$ and unstable for $$\\tau>\\tau_{0}$$. Furthermore, the model (1.5) undergoes a Hopf bifurcation at $$E_{2}$$ when $$\\tau=\\tau_{0}$$.\n\nStability of bifurcated periodic solutions\n\nIn this section, we will establish the direction and stability of periodic solutions bifurcating from the positive equilibrium $$E_{2}$$, and we shall derive explicit formulae for determining the properties of the Hopf bifurcation at $$\\tau_{0}$$ by using the normal form theory and the center manifold theorem introduced by Hassard et al. .\n\nFor the model (1.5), expanding the nonlinear part by Taylor expansion, we rewrite (1.5) in the following form:\n\n$$\\textstyle\\begin{cases} \\dot{u}_{1}(t) =a_{11}u_{1}(t)+a_{12}u_{2}(t)+f_{1}, \\\\ \\dot{u}_{2}(t) =a_{21} u_{1}(t)+a_{22}u_{2}(t)+a_{23}v(t)+f_{2}, \\\\ \\dot{v}(t) =a_{31}v(t)+b_{31}u_{2}(t-\\tau)+b_{32}v(t-\\tau)+f_{3}, \\end{cases}$$\n(4.1)\n\nwhere\n\n\\begin{aligned}& a_{11}=-(b+\\alpha), \\qquad a_{12}=a, \\qquad a_{21}=\\alpha,\\qquad a_{22}=-c-2d_{1}u^{\\ast }_{2}- \\frac{v^{\\ast}(1-m)(1+\\beta\\xi)}{(1+\\beta\\xi+u^{\\ast}_{2})^{2}}, \\\\& a_{23}=-\\frac{(1-m)u^{\\ast}_{2}}{1+\\beta\\xi+u^{\\ast }_{2}}, \\qquad a_{31}=-r, \\qquad b_{31}=\\frac{k_{2}(1-m)v^{\\ast}(1+\\beta\\xi )-k_{2}\\xi v^{\\ast}}{(1+\\beta\\xi+u^{\\ast}_{2})^{2}}, \\\\& b_{32}=\\frac{k_{2}[(1-m)u^{\\ast}_{2}+\\xi]}{1+\\beta\\xi+u^{\\ast }_{2}}, \\qquad f_{1}=0, \\\\& f_{2}=a_{24}u_{2}^{2}(t)+a_{25}u_{2}(t)v(t)+a_{26}u_{2}^{3}(t)+a_{27}u_{2}^{2}(t)v(t), \\\\& f_{3}=a_{32}u_{2}^{2}(t- \\tau)+a_{33}u_{2}(t-\\tau)v(t-\\tau )+a_{34}u_{2}^{3}(t- \\tau), \\\\& a_{24}=-2d_{1}+\\frac{2v^{\\ast}(1-m)(1+\\beta \\xi)}{(1+\\beta\\xi+u^{\\ast}_{2})^{2}}, \\qquad a_{25}=-\\frac{(1-m)(1+\\beta\\xi)}{(1+\\beta\\xi+u^{\\ast }_{2})^{2}}, \\\\& a_{26}=- \\frac{4v^{\\ast}(1-m)(1+\\beta\\xi)}{(1+\\beta\\xi +u^{\\ast}_{2})^{3}},\\qquad a_{27}=-\\frac{4(1-m)(1+\\beta\\xi)}{(1+\\beta\\xi +u^{\\ast}_{2})^{2}}, \\\\& a_{32}=\\frac{2k_{2}[(1-m)(1+\\beta\\xi)-\\xi]v^{\\ast}}{(1+\\beta\\xi+u^{\\ast }_{2})^{3}}, \\qquad a_{33}= \\frac{k_{2}[(1-m)(1+\\beta\\xi)-\\xi]}{(1+\\beta\\xi +u^{\\ast}_{2})^{2}}, \\\\& a_{34}=-\\frac{6k_{2}[(1-m)(1+\\beta\\xi)-\\xi]v^{\\ast}}{(1+\\beta\\xi +u^{\\ast}_{2})^{4}}. \\end{aligned}\n\nThe linearized model (4.1) is\n\n$$\\textstyle\\begin{cases} \\dot{u}_{1}(t) =a_{11}u_{1}(t)+a_{12}u_{2}(t), \\\\ \\dot{u}_{2}(t) =a_{21} u_{1}(t)+a_{22}u_{2}(t)+a_{23}v(t), \\\\ \\dot{v}(t) =a_{31}v(t)+b_{31}u_{2}(t-\\tau)+b_{32}v(t-\\tau). 
\\end{cases}$$\n(4.2)\n\nLet $$\\tau=\\tau_{0}+\\mu$$, $$\\mu\\in\\mathbb{R}$$, $$t=s\\tau$$, $$u_{1}(s\\tau )=\\hat{u}_{1}(s)$$, $$u_{2}(s\\tau)=\\hat{u}_{2}(s)$$, $$v(s\\tau)=\\hat {v}(s)$$, denote $$u_{1}=\\hat{u}_{1}$$, $$u_{2}=\\hat{u}_{2}$$, $$v=\\hat{v}$$, then (4.1) is transformed into the model\n\n$$\\textstyle\\begin{cases} \\dot{u}_{1}(t) =(\\tau_{0}+\\mu)[a_{11}u_{1}(t)+a_{12}u_{2}(t)], \\\\ \\dot{u}_{2}(t) =(\\tau_{0}+\\mu)[a_{21} u_{1}(t)+a_{22}u_{2}(t)+a_{23}v(t)+f_{22}(t)], \\\\ \\dot{v}(t) =(\\tau_{0}+\\mu)[a_{31}v(t)+b_{31}u_{2}(t-1)+b_{32}v(t-1)+f_{33}(t)], \\end{cases}$$\n(4.3)\n\nwhere\n\n\\begin{aligned}& f_{22}(t)=a_{24}u_{2}^{2}(t)+a_{25}u_{2}(t)v(t)+a_{26}u_{2}^{3}(t)+a_{27}u_{2}^{2}(t)v(t), \\\\& f_{33}(t)=a_{32}u_{2}^{2}(t-1)+a_{33}u_{2}(t-1)v(t-1)+a_{34}u_{2}^{3}(t-1). \\end{aligned}\n\nDenote $$C^{k}[-1,0]=\\{\\varphi|\\varphi:[-1,0]\\rightarrow\\mathbb{R}^{3}\\}$$, each component of φ has a Kth-order continuous derivative. Let $$\\phi(\\theta)=(\\phi_{1}(\\theta),\\phi_{2}(\\theta),\\phi _{3}(\\theta))^{T}\\in C[-1,0]$$ be the initial data of model (1.5).\n\nDefine the operators\n\n$$L_{\\mu}\\phi=(\\tau_{0}+\\mu)\\bigl[A' \\phi(0)+B'\\phi(-1)\\bigr],\\qquad f(\\mu,\\phi)=(\\tau _{0}+ \\mu) (0,f_{22},f_{33}),$$\n\nwith\n\n$\\begin{array}{c}{A}^{\\prime }=\\left(\\begin{array}{ccc}{a}_{11}& {a}_{12}& 0\\\\ {a}_{21}& {a}_{22}& {a}_{23}\\\\ 0& 0& {a}_{31}\\end{array}\\right),\\hfill \\\\ {B}^{\\prime }=\\left(\\begin{array}{ccc}0& 0& 0\\\\ 0& 0& 0\\\\ 0& {b}_{31}& {b}_{32}\\end{array}\\right)\\hfill \\end{array}$\n\nand $$L_{\\mu}:C[-1,0]\\rightarrow\\mathbb{R}^{3}$$, $$f:R\\times C[-1,0]\\rightarrow\\mathbb{R}^{3}$$. Then (4.3) can be rewritten as $$u'_{t}=L_{\\mu}(u_{t})+f(\\mu,u_{t})$$.\n\nBy the Riesz representation theorem there exists a function $$\\eta(\\theta,\\mu)$$ of bounded variation for $$\\theta\\in [-1,0]$$ such that $$L_{\\mu}\\phi=\\int_{-1} ^{0}d\\eta(\\theta,\\mu)\\phi(\\theta)$$, for $$\\theta\\in[-1,0]$$. In fact, we can choose\n\n$$\\eta(\\theta,\\mu)=(\\tau_{0}+\\mu)A'\\delta(\\theta)+( \\tau_{0}+\\mu)B'\\delta (\\theta+1),$$\n\nwhere $$\\delta(\\theta)$$ is the Dirac function.\n\nFor $$\\phi\\in C^{1}[-1,0]$$, define\n\n$$(A_{\\mu}\\phi) (\\theta)= \\textstyle\\begin{cases} \\frac{d\\phi(\\theta)}{d\\theta},&\\theta\\in[-1,0), \\\\ \\int_{-1}^{0}d\\eta(\\theta,\\mu)\\phi(\\theta),&\\theta=0, \\end{cases}$$\n\nand\n\n$$(R_{\\mu}\\phi) (\\theta)= \\textstyle\\begin{cases} 0,&\\theta\\in[-1,0), \\\\ f(\\mu,\\theta),&\\theta=0. \\end{cases}$$\n\nThe model (4.3) is equivalent to $$u'_{t}=A_{\\mu}u_{t}+R_{\\mu }u_{t}$$, where $$u_{t}=u(t+\\theta)$$, $$\\theta\\in[-1,0]$$.\n\nFor $$\\varphi\\in C^{1}[-1,0]$$, define\n\n$$\\bigl(A^{\\ast}\\psi\\bigr) (s)= \\textstyle\\begin{cases} -\\frac{d\\psi(s)}{ds},&s\\in(0,1], \\\\ \\int_{-1}^{0}d\\eta^{T}(s,0)\\psi(-s),&s=0, \\end{cases}$$\n\nand the bilinear inner product\n\n$$\\bigl\\langle \\psi(s),\\phi(\\theta)\\bigr\\rangle =\\bar{\\psi}(0)\\phi(0)- \\int _{-1}^{0} \\int_{\\xi=0}^{\\theta}\\bar{\\psi}(\\xi-\\theta)\\, d\\eta(\\theta) \\phi (\\xi)\\, d\\xi,$$\n(4.4)\n\nwhere $$\\psi(\\theta)\\in C^{1}[-1,0]$$, $$\\eta(\\theta)=\\eta(\\theta,0)$$, and $$A_{0}$$ and $$A^{\\ast}$$ are adjoint operators. From the discussion in Sect. 3, we know that $$\\pm i\\omega_{0}\\tau_{0}$$ are the eigenvalues of $$A_{0}$$. 
Hence, they are also eigenvalues of $$A^{\\ast}$$.\n\nSuppose that $$q(\\theta)=(1,q_{1},q_{2})^{T}e^{i\\omega_{0}\\tau _{0}\\theta}$$ is the eigenvector of $$A_{0}$$, corresponding to $$i\\omega _{0}\\tau_{0}$$, then $$q(0)=(1,q_{1},q_{2})^{T}$$, and $$q(-1)=q(0)e^{-i\\omega_{0}\\tau _{0}}$$. By a direct calculation, we get\n\n${\\tau }_{0}\\left(\\begin{array}{ccc}{a}_{11}& {a}_{12}& 0\\\\ {a}_{21}& {a}_{22}& {a}_{23}\\\\ 0& {b}_{31}{e}^{-i{\\omega }_{0}{\\tau }_{0}}& {a}_{31}+{b}_{32}{e}^{-i{\\omega }_{0}{\\tau }_{0}}\\end{array}\\right)\\left(\\begin{array}{c}1\\\\ {q}_{1}\\\\ {q}_{2}\\end{array}\\right)=i{\\omega }_{0}{\\tau }_{0}\\left(\\begin{array}{c}1\\\\ {q}_{1}\\\\ {q}_{2}\\end{array}\\right),$\n\nthen\n\n$$q_{1}=\\frac{i\\omega_{0}-a_{11}}{a_{12}},\\qquad q_{2}=\\frac{(i\\omega _{0}-a_{22})(i\\omega_{0}-a_{11})-a_{21}a_{12}}{a_{12}a_{23}}.$$\n\nSimilarly, we can calculate the eigenvector $$q^{\\ast}(s)=D(1,q^{\\ast }_{1},q^{\\ast}_{2})e^{i\\omega_{0}\\tau_{0}s}$$ of $$A^{\\ast}$$ belong to the eigenvector $$-i\\omega_{0}\\tau_{0}$$, then we get\n\n${\\tau }_{0}D\\left(\\begin{array}{ccc}1& {q}_{1}^{\\ast }& {q}_{2}^{\\ast }\\end{array}\\right)\\left(\\begin{array}{ccc}{a}_{11}& {a}_{12}& 0\\\\ {a}_{21}& {a}_{22}& {a}_{23}\\\\ 0& {b}_{31}{e}^{i{\\omega }_{0}{\\tau }_{0}}& {a}_{31}+{b}_{32}{e}^{i{\\omega }_{0}{\\tau }_{0}}\\end{array}\\right)=-i{\\omega }_{0}{\\tau }_{0}D\\left(\\begin{array}{ccc}1& {q}_{1}^{\\ast }& {q}_{2}^{\\ast }\\end{array}\\right),$\n\nthen\n\n\\begin{aligned}& q^{\\ast}_{1}=\\frac{-a_{12}(a_{31}+b_{32}e^{i\\omega_{0}\\tau_{0}}+i\\omega _{0})}{(a_{22}+i\\omega_{0})(a_{31}+b_{32}e^{i\\omega_{0}\\tau_{0}}+i\\omega _{0})-a_{23}b_{31}e^{i\\omega_{0}\\tau_{0}}}, \\\\& q^{\\ast}_{2}=\\frac{a_{12}a_{23}}{(a_{22}+i\\omega _{0})(a_{31}+b_{32}e^{i\\omega_{0}\\tau_{0}}+i\\omega _{0})-a_{23}b_{31}e^{i\\omega_{0}\\tau_{0}}}. \\end{aligned}\n\nWe normalize q and $$q^{\\ast}$$ by the condition $$\\langle q^{\\ast }(s),q(\\theta)\\rangle=1$$. Clearly $$\\langle q^{\\ast}(s),q(\\theta)\\rangle =0$$. In order to ensure that $$\\langle q^{\\ast}(s),q(\\theta)\\rangle =1$$, we need to determine the value of D. 
By (4.4), we have\n\n\\begin{aligned} \\bigl\\langle q^{\\ast}(s),q(\\theta)\\bigr\\rangle =& \\bar{D}\\bigl(1, \\bar{q}^{\\ast }_{1},\\bar{q}^{\\ast}_{2}\\bigr) (1,q_{1},q_{2})^{T} \\\\ &{}- \\int_{-1}^{0} \\int_{\\xi =0}^{\\theta}\\bar{D}\\bigl(1,\\bar{q}^{\\ast}_{1}, \\bar{q}^{\\ast}_{2}\\bigr)e^{-i\\omega _{0}\\tau_{0}(\\xi-\\theta)}\\, d\\eta(\\theta) (1,q_{1},q_{2})^{T}e^{i\\omega _{0}\\tau_{0}\\xi}\\, d\\xi \\\\ =& \\bar{D}\\bigl(1+\\bar{q}^{\\ast}_{1}q_{1}+ \\bar{q}^{\\ast}_{2}q_{2}\\bigr)-\\bar {D} \\int_{-1}^{0}\\bigl(1,\\bar{q}^{\\ast}_{1}, \\bar{q}^{\\ast}_{2}\\bigr)\\theta e^{i\\omega_{0}\\tau_{0}\\theta}\\, d\\eta(\\theta) (1,q_{1},q_{2})^{T} \\\\ =& \\bar{D}\\bigl[1+\\bar{q}^{\\ast}_{1}q_{1}+ \\bar{q}^{\\ast}_{2}q_{2}+\\tau _{0} \\bar{q}^{\\ast}_{2}(b_{31}q_{1}+b_{32}q_{2})e^{-i\\omega_{0}\\tau_{0}} \\bigr], \\end{aligned}\n\ntherefore $$\\bar{D}=\\frac{1}{1+\\bar{q}^{\\ast}_{1}q_{1}+\\bar{q}^{\\ast }_{2}q_{2}+\\tau_{0}\\bar{q}^{\\ast }_{2}(b_{31}q_{1}+b_{32}q_{2})e^{-i\\omega_{0}\\tau_{0}}}$$.\n\nIn the remainder of this section, following the algorithms given in and using a similar computation process as in , we get the coefficients that will be used to determine several important qualities\n\n\\begin{aligned}& g_{20}=2\\tau_{0}\\bar{D}\\bigl(k_{11} \\bar{q}^{\\ast}_{1}+k_{21}\\bar{q}^{\\ast }_{2} \\bigr), \\qquad g_{11}=\\tau_{0}\\bar{D}\\bigl(k_{12} \\bar{q}^{\\ast}_{1}+k_{22}\\bar {q}^{\\ast}_{2} \\bigr), \\\\& g_{02}=2\\tau_{0}\\bar{D}\\bigl(k_{13} \\bar{q}^{\\ast}_{1}+k_{23}\\bar{q}^{\\ast }_{2} \\bigr), \\qquad g_{21}=2\\tau_{0}\\bar{D}\\bigl(k_{14} \\bar{q}^{\\ast}_{1}+k_{24}\\bar {q}^{\\ast}_{2} \\bigr), \\end{aligned}\n\nwhere\n\n\\begin{aligned}& k_{11} = a_{24}q^{2}_{1}+a_{25}q_{1}q_{2},\\qquad k_{12} = 2a_{24}q_{1}\\bar{q}_{1}+a_{25}(q_{1} \\bar{q}_{2}+q_{2}\\bar{q}_{1}), \\\\& k_{13} = a_{24}\\bar{q}_{1}^{2}+a_{25} \\bar{q}_{1}\\bar{q}_{2}, \\\\& \\begin{aligned} k_{14} &= a_{24} \\bigl[\\bar {q}_{1}w_{20}^{(2)}(0)+2q_{1}w_{11}^{(2)}(0) \\bigr]+3a_{26}q^{2}_{1}\\bar {q}_{1}+a_{27} \\bigl(q^{2}_{1}\\bar{q}_{2}+2q_{1}q_{2} \\bar{q}_{1}\\bigr) \\\\ &\\quad {} + a_{25} \\biggl[\\frac{1}{2}\\bar {q}_{2}w_{20}^{(2)}(0)+q_{2}w_{11}^{(2)}(0)+ \\frac{1}{2}\\bar {q}_{1}w_{20}^{(3)}(0)+q_{1}w_{11}^{(3)}(0) \\biggr], \\end{aligned} \\\\& k_{21} = a_{32}q^{2}_{1}+a_{33}q_{1}q_{2}e^{-2i\\omega_{0}\\tau_{0}}, \\qquad k_{22} = \\bigl[2a_{32}q_{1} \\bar{q}_{1}+a_{33}(q_{1}\\bar{q}_{2}+q_{2} \\bar {q}_{1})\\bigr]e^{-2i\\omega_{0}\\tau_{0}}, \\\\& k_{23} = \\bigl(a_{32}\\bar{q}_{1}^{2}+a_{33} \\bar{q}_{1}\\bar {q}_{2}\\bigr)e^{-2i\\omega_{0}\\tau_{0}}, \\\\& \\begin{aligned} k_{24} &= a_{32} \\bigl[\\bar {q}_{1}w_{20}^{(2)}(-1)+2q_{1}w_{11}^{(2)}(-1) \\bigr]e^{-i\\omega_{0}\\tau _{0}}+3a_{34}q^{2}_{1} \\bar{q}_{1}e^{-3i\\omega_{0}\\tau_{0}} \\\\ &\\quad {} + a_{33} \\biggl[\\frac{1}{2}\\bar {q}_{2}w_{20}^{(2)}(-1)+q_{2}w_{11}^{(2)}(-1)+ \\frac{1}{2}\\bar {q}_{1}w_{20}^{(3)}(-1)+q_{1}w_{11}^{(3)}(-1) \\biggr]e^{-i\\omega_{0}\\tau_{0}}, \\end{aligned} \\end{aligned}\n\nand\n\n\\begin{aligned}& \\begin{aligned} w_{20}(\\theta) & =\\frac{ig_{20}}{\\omega_{0}\\tau_{0}}q(0)e^{i\\omega _{0}\\tau_{0}\\theta}+ \\frac{i\\bar{g}_{20}}{3\\omega_{0}\\tau_{0}}\\bar {q}(0)e^{-i\\omega_{0}\\tau_{0}\\theta}+E_{1}e^{2i\\omega_{0}\\tau_{0}\\theta } \\\\ &= \\frac{ig_{20}}{\\omega_{0}\\tau_{0}}q(\\theta)+\\frac{i\\bar {g}_{20}}{3\\omega_{0}\\tau_{0}}\\bar{q}(\\theta)+E_{1}e^{2i\\omega_{0}\\tau _{0}\\theta}, \\end{aligned} \\\\& \\begin{aligned} w_{11}(\\theta) &= 
-\\frac{ig_{11}}{\\omega_{0}\\tau_{0}}q(0)e^{i\\omega _{0}\\tau_{0}\\theta}+ \\frac{i\\bar{g}_{11}}{\\omega_{0}\\tau_{0}}\\bar {q}(0)e^{-i\\omega_{0}\\tau_{0}\\theta}+E_{2} \\\\ &= -\\frac{ig_{11}}{\\omega_{0}\\tau_{0}}q(\\theta)+\\frac{i\\bar {g}_{11}}{\\omega_{0}\\tau_{0}}\\bar{q}( \\theta)+E_{2}. \\end{aligned} \\end{aligned}\n\nMoreover, $$E_{1}$$ and $$E_{2}$$ satisfy the following equations:\n\n$\\begin{array}{c}\\left(\\begin{array}{ccc}2i{\\omega }_{0}-{a}_{11}& -{a}_{12}& 0\\\\ -{a}_{21}& 2i{\\omega }_{0}-{a}_{22}& -{a}_{23}\\\\ 0& -{b}_{31}{e}^{-2i{\\omega }_{0}{\\tau }_{0}}& 2i{\\omega }_{0}-{a}_{31}-{b}_{32}{e}^{-2i{\\omega }_{0}{\\tau }_{0}}\\end{array}\\right){E}_{1}=2\\left(\\begin{array}{c}0\\\\ {k}_{11}\\\\ {k}_{21}\\end{array}\\right),\\hfill \\\\ \\left(\\begin{array}{ccc}-{a}_{11}& -{a}_{12}& 0\\\\ -{a}_{21}& -{a}_{22}& -{a}_{23}\\\\ 0& -{b}_{31}& -{a}_{31}-{b}_{32}\\end{array}\\right){E}_{2}=\\left(\\begin{array}{c}0\\\\ {k}_{12}\\\\ {k}_{22}\\end{array}\\right).\\hfill \\end{array}$\n\nFurthermore, $$g_{ij}$$ is expressed by the parameters and delay in (1.5). Thus, we can compute the following values:\n\n\\begin{aligned}& C_{1}(0) = \\frac{i}{2\\omega_{0}\\tau _{0}}\\biggl(g_{20}g_{11}-2|g_{11}|^{2}- \\frac{|g_{02}|^{2}}{3}\\biggr)+\\frac {g_{21}}{2}, \\\\& \\mu_{2} = -\\frac{\\operatorname{Re}{C_{1}(0)}}{\\operatorname{Re}{\\frac{d\\lambda(\\tau _{0})}{d\\tau} }}, \\\\& \\beta_{2} = 2\\operatorname{Re} {C_{1}(0)}, \\\\& T_{2} = -\\frac{\\operatorname{Im}{C_{1}(0)}+\\mu_{2}\\operatorname{Im}{\\frac{d\\lambda (\\tau_{0})}{d\\tau} }}{\\omega_{0}\\tau_{0}}, \\end{aligned}\n\nwhich determine the properties of bifurcation period solutions at $$\\tau =\\tau_{0}$$ on the center manifold. From the above discussions, we have the following result.\n\nTheorem 4.1\n\nFor model (1.5), the following results hold:\n\n1. (i)\n\nThe sign of $$\\mu_{2}$$ determines the directions of the Hopf bifurcation: if $$\\mu_{2}>0$$, then the Hopf bifurcation is supercritical and the bifurcating periodic solutions exist for $$\\tau >\\tau_{0}$$; if $$\\mu_{2}<0$$, then the Hopf bifurcation is subcritical and the bifurcating periodic solutions exist for $$\\tau<\\tau_{0}$$.\n\n2. (ii)\n\nThe sign of $$\\beta_{2}$$ determines the stability of the bifurcating periodic solutions: the bifurcating periodic solutions are stable if $$\\beta_{2}<0$$; the bifurcating periodic solutions are unstable if $$\\beta_{2}>0$$.\n\n3. (iii)\n\nThe sign of $$T_{2}$$ determines the period of the bifurcating periodic solutions: the period increases if $$T_{2}>0$$ and decreases $$T_{2}<0$$.\n\nNumerical simulations\n\nWe perform the numerical simulations of the model (1.5) to verify our theoretical results.\n\nTaking $$a = 3$$; $$b = \\frac{1}{4}$$; $$\\alpha= \\frac{1}{4}$$; $$c = \\frac{1}{8}$$; $$d_{1} = \\frac{1}{8}$$; $$m = \\frac{1}{2}$$; $$k_{2} = \\frac{3}{2}$$; $$\\xi= \\frac{1}{3}$$; $$\\beta= \\frac{1}{2}$$; $$r = \\frac{1}{4}$$; we see that the conditions ($$\\mathrm{H}_{1}$$) or ($$\\mathrm{H}_{4}$$) hold. Theorem 3.1(i) is verified numerically in Fig. 1(a).\n\nTaking $$a = 3$$; $$b = \\frac{1}{4}$$; $$\\alpha= \\frac{1}{4}$$; $$c = 2$$; $$d_{1} = \\frac{1}{8}$$; $$m = \\frac{1}{2}$$; $$k_{2} = \\frac{3}{2}$$; $$\\xi= \\frac{1}{3}$$; $$\\beta= \\frac{1}{2}$$; $$r = 2$$; it is clear that the conditions ($$\\mathrm{H}_{2}$$) and ($$\\mathrm{H}_{3}$$) hold. Theorem 3.1(ii) is verified numerically in Fig. 
1(b).\n\nIf we choose $$a = 4.3$$; $$b = \frac{1}{4}$$; $$\alpha= \frac{2}{9}$$; $$c = 2$$; $$d_{1} = \frac{1}{8}$$; $$m = \frac{1}{4}$$; $$k_{2} = 3$$; $$\xi= 5$$; $$\beta= \frac{1}{2}$$; $$r = 2$$; then the conditions ($$\mathrm{H}_{4}$$) and ($$\mathrm{H}_{5}$$) hold. Theorem 3.2(i) is verified numerically in Fig. 2(a).\n\nIf we choose $$a = \frac{7}{2}$$; $$b = \frac{1}{4}$$; $$\alpha= \frac{1}{2}$$; $$c = 2$$; $$d_{1} = \frac{1}{8}$$; $$m = \frac{1}{2}$$; $$k_{2} = \frac{3}{2}$$; $$\xi= \frac{1}{3}$$; $$\beta= \frac{1}{2}$$; $$r = 2$$; then the conditions ($$\mathrm{H}_{3}$$) and ($$\mathrm{H}_{6}$$) hold. Theorem 3.2(ii) is verified numerically in Fig. 2(b).\n\nTaking $$a = 1.1$$; $$b = \frac{1}{10}$$; $$\alpha= \frac{1}{2}$$; $$c = \frac{1}{10}$$; $$d_{1} = \frac{1}{12}$$; $$m = \frac{1}{4}$$; $$k_{2} = 0.5$$; $$\xi= 3.5$$; $$\beta= 5$$; $$r = \frac{1}{8}$$; we see that the conditions $$m < 1 - \frac{\xi}{1 + \beta\xi}$$, ($$\mathrm{H}_{8}$$) and $$(P_{1} - r)(P_{2} + P_{4}) - (P_{3} + P_{5}) > 0$$ hold. The numerical result of Case 3.1 can be seen in Fig. 3.\n\nWhen $$\tau\ge0$$, taking $$a = 1.1$$; $$b = 0.1$$; $$\alpha= \frac{1}{2}$$; $$c = \frac{1}{10}$$; $$d_{1} = \frac{1}{4}$$; $$m = \frac{1}{3}$$; $$k_{2} = 0.5$$; $$\xi= 3.5$$; $$\beta= 4.5$$; $$r = \frac{1}{8}$$; we see that the conditions $$\beta= 4.5 > \frac{k_{2}}{r} - \frac{1}{\xi} = 3.714285714285714$$, $$0 < m = \frac{1}{3} < \min \{1-\frac{r}{k_{2}}, 1-\frac{r}{k_{2}}-\frac {d_{1}[r+(r\beta-k_{2})\xi](b+\alpha)}{\alpha k_{2}(a-c)-bc} \} = 0.5351562500000000$$, $$m = \frac{1}{3} < 1 - \frac{\xi}{1 + \beta\xi} = 0.7910447761194030$$ and ($$\mathrm{H}_{9}$$) hold. The numerical result of Theorem 3.3(i) is presented in Fig. 4(a) for $$\tau= 0$$ and in Fig. 4(b) for $$\tau=10$$.\n\nTaking $$a = 6$$; $$b = 0.2$$; $$\alpha= \frac{5}{3}$$; $$c = \frac{1}{10}$$; $$d_{1} = \frac{1}{4}$$; $$m = \frac{1}{3}$$; $$k_{2} = 0.5$$; $$\xi= 3.5$$; $$\beta= 4.5$$; $$r = \frac{1}{8}$$; we see that ($$\mathrm{H}_{1}$$), ($$\mathrm{H}_{3}$$), ($$\mathrm{H}_{5}$$), ($$\mathrm{H}_{7}$$), ($$\mathrm{H}_{10}$$) hold and $$\tau _{0}=0.1509514710143546$$. The numerical result of Theorem 3.3(ii) is presented in Fig. 5 for $$\tau= 0.06$$ and in Fig. 6 for $$\tau=0.5$$.\n\nConclusions\n\nIn this paper, we study a delayed predator–prey model with stage structure for the prey, incorporating a prey refuge and the provision of additional food to the predator. By analyzing the corresponding characteristic equations, we investigate the local stability of the equilibria of the model. We discuss the existence of a Hopf bifurcation by choosing the time delay as a parameter. We find that the time delay can cause a stable equilibrium to become unstable, and can even induce a Hopf bifurcation, when the delay passes through certain critical values. Furthermore, by applying the normal form method and the center manifold theorem, we investigate the direction of the Hopf bifurcation and the stability of the bifurcated periodic solutions. We give numerical simulations to illustrate our main results.\n\nFrom Theorem 3.3, we see that, for the stability of a coexisting equilibrium point, the refuge has to be bounded by a value which depends on the quantity and the quality of the additional food. Results obtained in this paper provide a useful platform to understand the roles of refuge and additional food.
Therefore, refuge and additional food can be taken as population controllers to study the prey–predator models.\n\nOur results can be compared with the ones in Sahoo which considered the role of additional food in eco-epidemiological system with disease in the prey. So, we can extend our predator–prey model to an eco-epidemiological system based on [17, 23].\n\nReferences\n\n1. 1.\n\nLotka, A.: Analytical note on certain rhythmic relations in organic systems. Proc. Natl. Acad. Sci. USA 6, 410–415 (1920)\n\n2. 2.\n\nVolterra, V.: Variazioni e fluttuazioni del numero d’individui in specie animali conviventi. Mem. R. Accad. Naz. Lincei, Ser VI 2, 31–113 (1926)\n\n3. 3.\n\nFreedman, H.: Deterministic Mathematical Models in Population Ecology. Dekker, New York (1980)\n\n4. 4.\n\nKuang, Y.: Delay Differential Equation with Application in Population Dynamics. Academic Press, New York (1993)\n\n5. 5.\n\nBrauer, F., Castillo-Chavez, C.: Mathematical Models in Population Biology and Epidemiology. Springer, Berlin (2000)\n\n6. 6.\n\nHu, D., Cao, H.: Stability and bifurcation analysis in a predator–prey system with Michaelis–Menten type predator harvesting. Nonlinear Anal., Real World Appl. 33, 58–82 (2017)\n\n7. 7.\n\nYu, X., Wang, Q., Bai, Y.: Permanence and almost periodic solutions for N-species non-autonomous Lotka–Volterra competitive systems with delays and impulsive perturbations on time scales. Complexity 2018, Article ID 2658745 (2018)\n\n8. 8.\n\nSong, X., Hao, M., Meng, X.: A stage-structured predator–prey model with disturbing pulse and time delays. Appl. Math. Model. 33(1), 211–223 (2009)\n\n9. 9.\n\nLi, F., Li, H.: Hopf bifurcation of a predator–prey model with time delay and stage structure for the prey. Math. Comput. Model. 55(3–4), 672–679 (2012)\n\n10. 10.\n\nDevi, S.: Effects of prey refuge on a ratio-dependent predator–prey model with stage-structure of prey population. Appl. Math. Model. 37(6), 4337–4349 (2013)\n\n11. 11.\n\nJana, D., Agrawal, R., Upadhyay, R.: Dynamics of generalist predator in a stochastic environment: effect of delayed growth and prey refuge. Appl. Math. Comput. 268(1), 1072–1094 (2015)\n\n12. 12.\n\nDubey, B., Kumar, A., Maiti, A.: Global stability and Hopf-bifurcation of prey–predator system with two discrete delays including habitat complexity and prey refuge. Commun. Nonlinear Sci. Numer. Simul. 67, 528–554 (2019)\n\n13. 13.\n\nWei, F., Fu, Q.: Hopf bifurcation and stability for predator–prey systems with Beddington–DeAngelis type functional response and stage structure for prey incorporating refuge. Appl. Math. Model. 40, 126–134 (2016)\n\n14. 14.\n\nSrinivasu, P., Prasad, B., Venkatesulu, M.: Biological control through provision of additional food to predators: a theoretical study. Theor. Popul. Biol. 72, 111–120 (2007)\n\n15. 15.\n\nSahoo, B., Poria, S.: Effects of supplying alternative food in a predator–prey model with harvesting. Appl. Math. Comput. 234, 150–166 (2014)\n\n16. 16.\n\nSahoo, B., Poria, S.: The chaos and control of a food chain model supplying additional food to top-predator. Chaos Solitons Fractals 58, 52–64 (2014)\n\n17. 17.\n\nSahoo, B.: Role of additional food in eco-epidemiological system with disease in the prey. Appl. Math. Comput. 259, 61–79 (2015)\n\n18. 18.\n\nGhosh, J., Sahoo, B., Poria, S.: Prey–predator dynamics with prey refuge providing additional food to predator. Chaos Solitons Fractals 96, 110–119 (2017)\n\n19. 
19.\n\nSong, J., Hu, M., Bai, Y., Xia, Y.: Dynamic analysis of a non-autonomous ratio-dependent predator–prey model with additional food. J. Appl. Anal. Comput. 8(6), 1893–1909 (2018)\n\n20. 20.\n\nHale, J.: Theory of Functional Differential Equation. Springer, Heidelberg (1977)\n\n21. 21.\n\nHassard, B., Kazarinoff, N., Wan, Y.: Theory and Applications of Hopf Bifurcation. Cambridge University Press, Cambridge (1981)\n\n22. 22.\n\nWei, J., Ruan, S.: Stability and bifurcation in a neural net work model with two delays. Phys. D, Nonlinear Phenom. 130, 255–272 (1999)\n\n23. 23.\n\nBai, Y., Mu, X.: Global asymptotic stability of a generalized SIRS epidemic model with transfer from infectious to susceptible. J. Appl. Anal. Comput. 8(2), 402–412 (2018)\n\nAcknowledgements\n\nThe authors would like to express their gratitude to Prof. Yonghui Xia and Dr. Dongpo Hu for their help in doing numerical simulations. The authors are grateful to the anonymous reviewers for their valuable comments and suggestions.\n\nFunding\n\nThis work was supported by China Postdoctoral Science Foundation (No. 2014M551873), Postdoctoral Science Foundation of Shandong Province of China (No. 201401008) and Distinguished Middle-Aged and Young Scientist Encourage and Reward Foundation of Shandong Province of China (No. ZR2018BF018).\n\nAuthor information\n\nAuthors\n\nContributions\n\nBoth authors have equally contributed to obtaining new results in this paper and also read and approved the final manuscript.\n\nCorresponding author\n\nCorrespondence to Yuzhen Bai.\n\nEthics declarations\n\nCompeting interests\n\nThe authors declare that they have no competing interests.", null, "" ]
[ null, "https://advancesindifferenceequations.springeropen.com/track/article/10.1186/s13662-019-1979-6", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7983092,"math_prob":1.0000058,"size":28993,"snap":"2022-05-2022-21","text_gpt3_token_len":9271,"char_repetition_ratio":0.16057815,"word_repetition_ratio":0.124801695,"special_character_ratio":0.3663298,"punctuation_ratio":0.156214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000023,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T23:22:34Z\",\"WARC-Record-ID\":\"<urn:uuid:1a7cc6c6-d442-4ad9-87f7-b3b264748592>\",\"Content-Length\":\"363797\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ddf88491-c127-4a37-9de0-f54356515edb>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3261107-ac8a-42c9-ae64-846b3d4d1f15>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://advancesindifferenceequations.springeropen.com/articles/10.1186/s13662-019-1979-6\",\"WARC-Payload-Digest\":\"sha1:LKWDLD37W6GPD3JRHPDZIYUJK2OFOW72\",\"WARC-Block-Digest\":\"sha1:OJ5QE6DUO2RP52XHCPHXWFQOFGDQSJZQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304876.16_warc_CC-MAIN-20220125220353-20220126010353-00626.warc.gz\"}"}
https://howkgtolbs.com/convert/39.89-kg-to-lbs
[ "# 39.89 kg to lbs - 39.89 kilograms to pounds\n\nDo you want to know how much is 39.89 kg equal to lbs and how to convert 39.89 kg to lbs? Here you go. This whole article is dedicated to kilogram to pound conversion - both theoretical and practical. It is also needed/We also want to emphasize that all this article is dedicated to only one number of kilograms - this is one kilogram. So if you need to know more about 39.89 kg to pound conversion - read on.\n\nBefore we move on to the more practical part - it means 39.89 kg how much lbs conversion - we are going to tell you some theoretical information about these two units - kilograms and pounds. So let’s start.\n\nHow to convert 39.89 kg to lbs? 39.89 kilograms it is equal 87.9423963118 pounds, so 39.89 kg is equal 87.9423963118 lbs.\n\n## 39.89 kgs in pounds\n\nWe will start with the kilogram. The kilogram is a unit of mass. It is a base unit in a metric system, that is International System of Units (in abbreviated form SI).\n\nSometimes the kilogram could be written as kilogramme. The symbol of the kilogram is kg.\n\nFirst definition of a kilogram was formulated in 1795. The kilogram was described as the mass of one liter of water. First definition was not complicated but hard to use.\n\nLater, in 1889 the kilogram was described using the International Prototype of the Kilogram (in abbreviated form IPK). The IPK was made of 90% platinum and 10 % iridium. The IPK was used until 2019, when it was replaced by another definition.\n\nThe new definition of the kilogram is based on physical constants, especially Planck constant. Here is the official definition: “The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs.”\n\nOne kilogram is exactly 0.001 tonne. It can be also divided to 100 decagrams and 1000 grams.\n\n## 39.89 kilogram to pounds\n\nYou know something about kilogram, so now we can go to the pound. The pound is also a unit of mass. It is needed to emphasize that there are not only one kind of pound. What are we talking about? For example, there are also pound-force. In this article we are going to to centre only on pound-mass.\n\nThe pound is in use in the British and United States customary systems of measurements. Of course, this unit is in use also in other systems. The symbol of this unit is lb or “.\n\nThe international avoirdupois pound has no descriptive definition. It is exactly 0.45359237 kilograms. One avoirdupois pound could be divided into 16 avoirdupois ounces and 7000 grains.\n\nThe avoirdupois pound was enforced in the Weights and Measures Act 1963. The definition of this unit was given in first section of this act: “The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly.”\n\n### How many lbs is 39.89 kg?\n\n39.89 kilogram is equal to 87.9423963118 pounds. If You want convert kilograms to pounds, multiply the kilogram value by 2.2046226218.\n\n### 39.89 kg in lbs\n\nThe most theoretical part is already behind us. In this section we will tell you how much is 39.89 kg to lbs. 
Now you learned that 39.89 kg = x lbs. So it is time to know the answer. Let’s see:\n\n39.89 kilogram = 87.9423963118 pounds.\n\nThis is an accurate result of how much 39.89 kg to pound. It is possible to also round it off. After it your outcome is as following: 39.89 kg = 87.758 lbs.\n\nYou know 39.89 kg is how many lbs, so look how many kg 39.89 lbs: 39.89 pound = 0.45359237 kilograms.\n\nNaturally, in this case you can also round off this result. After rounding off your outcome will be as following: 39.89 lb = 0.45 kgs.\n\nWe also want to show you 39.89 kg to how many pounds and 39.89 pound how many kg outcomes in charts. Look:\n\nWe want to start with a table for how much is 39.89 kg equal to pound.\n\n### 39.89 Kilograms to Pounds conversion table\n\nKilograms (kg) Pounds (lb) Pounds (lbs) (rounded off to two decimal places)\n39.89 87.9423963118 87.7580\nNow look at a table for how many kilograms 39.89 pounds.\n\nPounds Kilograms Kilograms (rounded off to two decimal places\n39.89 0.45359237 0.45\n\nNow you learned how many 39.89 kg to lbs and how many kilograms 39.89 pound, so we can go to the 39.89 kg to lbs formula.\n\n### 39.89 kg to pounds\n\nTo convert 39.89 kg to us lbs you need a formula. We are going to show you a formula in two different versions. Let’s begin with the first one:\n\nAmount of kilograms * 2.20462262 = the 87.9423963118 outcome in pounds\n\nThe first formula give you the most correct result. Sometimes even the smallest difference can be significant. So if you want to get a correct result - this version of a formula will be the best solution to calculate how many pounds are equivalent to 39.89 kilogram.\n\nSo let’s go to the second formula, which also enables conversions to know how much 39.89 kilogram in pounds.\n\nThe another formula is as following, look:\n\nNumber of kilograms * 2.2 = the outcome in pounds\n\nAs you can see, the second version is simpler. It can be the best option if you want to make a conversion of 39.89 kilogram to pounds in quick way, for example, during shopping. Just remember that your outcome will be not so accurate.\n\nNow we are going to show you these two versions of a formula in practice. But before we are going to make a conversion of 39.89 kg to lbs we are going to show you another way to know 39.89 kg to how many lbs totally effortless.\n\n### 39.89 kg to lbs converter\n\nAn easier way to learn what is 39.89 kilogram equal to in pounds is to use 39.89 kg lbs calculator. What is a kg to lb converter?\n\nConverter is an application. Calculator is based on first version of a formula which we gave you in the previous part of this article. Due to 39.89 kg pound calculator you can effortless convert 39.89 kg to lbs. You only have to enter number of kilograms which you need to convert and click ‘convert’ button. The result will be shown in a flash.\n\nSo let’s try to calculate 39.89 kg into lbs with use of 39.89 kg vs pound converter. We entered 39.89 as a number of kilograms. It is the result: 39.89 kilogram = 87.9423963118 pounds.\n\nAs you can see, our 39.89 kg vs lbs calculator is user friendly.\n\nNow we are going to our main issue - how to convert 39.89 kilograms to pounds on your own.\n\n#### 39.89 kg to lbs conversion\n\nWe will begin 39.89 kilogram equals to how many pounds calculation with the first formula to get the most accurate outcome. A quick reminder of a formula:\n\nNumber of kilograms * 2.20462262 = 87.9423963118 the outcome in pounds\n\nSo what have you do to check how many pounds equal to 39.89 kilogram? 
Just multiply number of kilograms, this time 39.89, by 2.20462262. It is 87.9423963118. So 39.89 kilogram is 87.9423963118.\n\nIt is also possible to round off this result, for instance, to two decimal places. It is equal 2.20. So 39.89 kilogram = 87.7580 pounds.\n\nIt is time for an example from everyday life. Let’s convert 39.89 kg gold in pounds. So 39.89 kg equal to how many lbs? As in the previous example - multiply 39.89 by 2.20462262. It is 87.9423963118. So equivalent of 39.89 kilograms to pounds, if it comes to gold, is exactly 87.9423963118.\n\nIn this example it is also possible to round off the result. It is the outcome after rounding off, in this case to one decimal place - 39.89 kilogram 87.758 pounds.\n\nNow we are going to examples converted with a short version of a formula.\n\n#### How many 39.89 kg to lbs\n\nBefore we show you an example - a quick reminder of shorter formula:\n\nNumber of kilograms * 2.2 = 87.758 the outcome in pounds\n\nSo 39.89 kg equal to how much lbs? As in the previous example you need to multiply amount of kilogram, this time 39.89, by 2.2. Look: 39.89 * 2.2 = 87.758. So 39.89 kilogram is equal 2.2 pounds.\n\nLet’s do another calculation using this formula. Now calculate something from everyday life, for example, 39.89 kg to lbs weight of strawberries.\n\nSo calculate - 39.89 kilogram of strawberries * 2.2 = 87.758 pounds of strawberries. So 39.89 kg to pound mass is 87.758.\n\nIf you learned how much is 39.89 kilogram weight in pounds and can convert it with use of two different versions of a formula, we can move on. Now we are going to show you these outcomes in charts.\n\n#### Convert 39.89 kilogram to pounds\n\nWe know that outcomes shown in tables are so much clearer for most of you. We understand it, so we gathered all these results in charts for your convenience. Thanks to this you can easily compare 39.89 kg equivalent to lbs results.\n\nLet’s start with a 39.89 kg equals lbs table for the first formula:\n\nKilograms Pounds Pounds (after rounding off to two decimal places)\n39.89 87.9423963118 87.7580\n\nAnd now see 39.89 kg equal pound chart for the second formula:\n\nKilograms Pounds\n39.89 87.758\n\nAs you see, after rounding off, when it comes to how much 39.89 kilogram equals pounds, the results are the same. The bigger amount the more significant difference. Keep it in mind when you need to do bigger amount than 39.89 kilograms pounds conversion.\n\n#### How many kilograms 39.89 pound\n\nNow you learned how to convert 39.89 kilograms how much pounds but we will show you something more. Do you want to know what it is? What about 39.89 kilogram to pounds and ounces conversion?\n\nWe are going to show you how you can convert it step by step. Let’s start. How much is 39.89 kg in lbs and oz?\n\nFirst things first - you need to multiply amount of kilograms, in this case 39.89, by 2.20462262. So 39.89 * 2.20462262 = 87.9423963118. One kilogram is exactly 2.20462262 pounds.\n\nThe integer part is number of pounds. So in this case there are 2 pounds.\n\nTo convert how much 39.89 kilogram is equal to pounds and ounces you need to multiply fraction part by 16. So multiply 20462262 by 16. It gives 327396192 ounces.\n\nSo final outcome is exactly 2 pounds and 327396192 ounces. You can also round off ounces, for instance, to two places. 
Then final outcome is equal 2 pounds and 33 ounces.\n\nAs you can see, conversion 39.89 kilogram in pounds and ounces is not complicated.\n\nThe last calculation which we want to show you is conversion of 39.89 foot pounds to kilograms meters. Both foot pounds and kilograms meters are units of work.\n\nTo convert it it is needed another formula. Before we show you this formula, have a look:\n\n• 39.89 kilograms meters = 7.23301385 foot pounds,\n• 39.89 foot pounds = 0.13825495 kilograms meters.\n\nNow let’s see a formula:\n\nAmount.RandomElement()) of foot pounds * 0.13825495 = the outcome in kilograms meters\n\nSo to convert 39.89 foot pounds to kilograms meters you need to multiply 39.89 by 0.13825495. It is equal 0.13825495. So 39.89 foot pounds is exactly 0.13825495 kilogram meters.\n\nYou can also round off this result, for example, to two decimal places. Then 39.89 foot pounds is 0.14 kilogram meters.\n\nWe hope that this calculation was as easy as 39.89 kilogram into pounds calculations.\n\nThis article was a huge compendium about kilogram, pound and 39.89 kg to lbs in calculation. Thanks to this conversion you know 39.89 kilogram is equivalent to how many pounds.\n\nWe showed you not only how to make a calculation 39.89 kilogram to metric pounds but also two other conversions - to check how many 39.89 kg in pounds and ounces and how many 39.89 foot pounds to kilograms meters.\n\nWe showed you also other way to make 39.89 kilogram how many pounds calculations, that is with use of 39.89 kg en pound converter. This will be the best option for those of you who do not like calculating on your own at all or need to make @baseAmountStr kg how lbs conversions in quicker way.\n\nWe hope that now all of you are able to do 39.89 kilogram equal to how many pounds conversion - on your own or using our 39.89 kgs to pounds calculator.\n\nIt is time to make your move! Convert 39.89 kilogram mass to pounds in the way you like.\n\nDo you need to make other than 39.89 kilogram as pounds conversion? For example, for 10 kilograms? Check our other articles! We guarantee that conversions for other numbers of kilograms are so easy as for 39.89 kilogram equal many pounds.\n\n### How much is 39.89 kg in pounds\n\nAt the end, we are going to summarize the topic of this article, that is how much is 39.89 kg in pounds , we gathered answers to the most frequently asked questions. Here you can see the most important information about how much is 39.89 kg equal to lbs and how to convert 39.89 kg to lbs . You can see it down below.\n\nWhat is the kilogram to pound conversion? It is a mathematical operation based on multiplying 2 numbers. Let’s see 39.89 kg to pound conversion formula . Check it down below:\n\nThe number of kilograms * 2.20462262 = the result in pounds\n\nNow you can see the result of the conversion of 39.89 kilogram to pounds. The exact answer is 87.9423963118 lbs.\n\nIt is also possible to calculate how much 39.89 kilogram is equal to pounds with second, shortened type of the equation. Check it down below.\n\nThe number of kilograms * 2.2 = the result in pounds\n\nSo this time, 39.89 kg equal to how much lbs ? The answer is 87.9423963118 pounds.\n\nHow to convert 39.89 kg to lbs quicker and easier? It is possible to use the 39.89 kg to lbs converter , which will make whole mathematical operation for you and give you a correct result .\n\n#### Kilograms [kg]\n\nThe kilogram, or kilogramme, is the base unit of weight in the Metric system. 
It is the approximate weight of a cube of water 10 centimeters on a side.\n\n#### Pounds [lbs]\n\nA pound is a unit of weight commonly used in the United States and the British commonwealths. A pound is defined as exactly 0.45359237 kilograms.\nRead more related articles:\n 39.01 kg to lbs = 86.0023 39.02 kg to lbs = 86.0244 39.03 kg to lbs = 86.0464 39.04 kg to lbs = 86.0685 39.05 kg to lbs = 86.0905 39.06 kg to lbs = 86.1126 39.07 kg to lbs = 86.1346 39.08 kg to lbs = 86.1566 39.09 kg to lbs = 86.1787 39.1 kg to lbs = 86.2007 39.11 kg to lbs = 86.2228 39.12 kg to lbs = 86.2448 39.13 kg to lbs = 86.2669 39.14 kg to lbs = 86.2889 39.15 kg to lbs = 86.311 39.16 kg to lbs = 86.333 39.17 kg to lbs = 86.3551 39.18 kg to lbs = 86.3771 39.19 kg to lbs = 86.3992 39.2 kg to lbs = 86.4212 39.21 kg to lbs = 86.4433 39.22 kg to lbs = 86.4653 39.23 kg to lbs = 86.4874 39.24 kg to lbs = 86.5094 39.25 kg to lbs = 86.5314\n 39.26 kg to lbs = 86.5535 39.27 kg to lbs = 86.5755 39.28 kg to lbs = 86.5976 39.29 kg to lbs = 86.6196 39.3 kg to lbs = 86.6417 39.31 kg to lbs = 86.6637 39.32 kg to lbs = 86.6858 39.33 kg to lbs = 86.7078 39.34 kg to lbs = 86.7298 39.35 kg to lbs = 86.7519 39.36 kg to lbs = 86.7739 39.37 kg to lbs = 86.796 39.38 kg to lbs = 86.818 39.39 kg to lbs = 86.8401 39.4 kg to lbs = 86.8621 39.41 kg to lbs = 86.8842 39.42 kg to lbs = 86.9062 39.43 kg to lbs = 86.9283 39.44 kg to lbs = 86.9503 39.45 kg to lbs = 86.9724 39.46 kg to lbs = 86.9944 39.47 kg to lbs = 87.0165 39.48 kg to lbs = 87.0385 39.49 kg to lbs = 87.0606 39.5 kg to lbs = 87.0826\n 39.51 kg to lbs = 87.1046 39.52 kg to lbs = 87.1267 39.53 kg to lbs = 87.1487 39.54 kg to lbs = 87.1708 39.55 kg to lbs = 87.1928 39.56 kg to lbs = 87.2149 39.57 kg to lbs = 87.2369 39.58 kg to lbs = 87.259 39.59 kg to lbs = 87.281 39.6 kg to lbs = 87.3031 39.61 kg to lbs = 87.3251 39.62 kg to lbs = 87.3471 39.63 kg to lbs = 87.3692 39.64 kg to lbs = 87.3912 39.65 kg to lbs = 87.4133 39.66 kg to lbs = 87.4353 39.67 kg to lbs = 87.4574 39.68 kg to lbs = 87.4794 39.69 kg to lbs = 87.5015 39.7 kg to lbs = 87.5235 39.71 kg to lbs = 87.5456 39.72 kg to lbs = 87.5676 39.73 kg to lbs = 87.5897 39.74 kg to lbs = 87.6117 39.75 kg to lbs = 87.6338\n 39.76 kg to lbs = 87.6558 39.77 kg to lbs = 87.6778 39.78 kg to lbs = 87.6999 39.79 kg to lbs = 87.7219 39.8 kg to lbs = 87.744 39.81 kg to lbs = 87.766 39.82 kg to lbs = 87.7881 39.83 kg to lbs = 87.8101 39.84 kg to lbs = 87.8322 39.85 kg to lbs = 87.8542 39.86 kg to lbs = 87.8763 39.87 kg to lbs = 87.8983 39.88 kg to lbs = 87.9203 39.89 kg to lbs = 87.9424 39.9 kg to lbs = 87.9644 39.91 kg to lbs = 87.9865 39.92 kg to lbs = 88.0085 39.93 kg to lbs = 88.0306 39.94 kg to lbs = 88.0526 39.95 kg to lbs = 88.0747 39.96 kg to lbs = 88.0967 39.97 kg to lbs = 88.1188 39.98 kg to lbs = 88.1408 39.99 kg to lbs = 88.1629 40 kg to lbs = 88.1849" ]
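For readers who would rather script these conversions than read them off the tables above, here is a minimal Python sketch of the two formulas discussed in this article, plus the pounds-and-ounces split. The function names are illustrative and not part of any existing library.

```python
KG_TO_LB = 2.20462262   # full conversion factor used throughout this article
KG_TO_LB_SHORT = 2.2    # shortened factor for quick estimates


def kg_to_lb(kg, factor=KG_TO_LB):
    """Convert kilograms to pounds: number of kilograms * factor."""
    return kg * factor


def kg_to_lb_oz(kg):
    """Split a kilogram value into whole pounds plus ounces (1 lb = 16 oz)."""
    total_lb = kg_to_lb(kg)
    whole_lb = int(total_lb)
    ounces = (total_lb - whole_lb) * 16
    return whole_lb, ounces


print(kg_to_lb(39.89))                   # about 87.9423963118
print(kg_to_lb(39.89, KG_TO_LB_SHORT))   # about 87.758
print(kg_to_lb_oz(39.89))                # about (87, 15.08)
```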
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8883743,"math_prob":0.98943686,"size":15667,"snap":"2022-40-2023-06","text_gpt3_token_len":4797,"char_repetition_ratio":0.24701525,"word_repetition_ratio":0.053914834,"special_character_ratio":0.38048127,"punctuation_ratio":0.15657789,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988202,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T16:28:28Z\",\"WARC-Record-ID\":\"<urn:uuid:50d13ca3-0438-4d7c-a945-b44b24c8b887>\",\"Content-Length\":\"70333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2071ff1d-cdab-4079-99a8-acda9ef95fe6>\",\"WARC-Concurrent-To\":\"<urn:uuid:41a58f52-3ba1-4ff3-9d8f-1544a85ac6a8>\",\"WARC-IP-Address\":\"104.21.5.238\",\"WARC-Target-URI\":\"https://howkgtolbs.com/convert/39.89-kg-to-lbs\",\"WARC-Payload-Digest\":\"sha1:6X4YZ4TDWPMTJIPYSMAC34X35UBNIWZD\",\"WARC-Block-Digest\":\"sha1:KJV2YBBTBEY7NOSMLCCXOQKIRVDSFD3A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337338.11_warc_CC-MAIN-20221002150039-20221002180039-00518.warc.gz\"}"}
https://answers.everydaycalculation.com/subtract-fractions/56-4-minus-1-36
[ "Solutions by everydaycalculation.com\n\n## Subtract 1/36 from 56/4\n\n1st number: 14 0/4, 2nd number: 1/36\n\n56/4 - 1/36 is 503/36.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 4 and 36 is 36\n2. For the 1st fraction, since 4 × 9 = 36,\n56/4 = 56 × 9/4 × 9 = 504/36\n3. Likewise, for the 2nd fraction, since 36 × 1 = 36,\n1/36 = 1 × 1/36 × 1 = 1/36\n4. Subtract the two fractions:\n504/36 - 1/36 = 504 - 1/36 = 503/36\n5. In mixed form: 1335/36\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7114246,"math_prob":0.9987304,"size":346,"snap":"2020-10-2020-16","text_gpt3_token_len":145,"char_repetition_ratio":0.16374269,"word_repetition_ratio":0.0,"special_character_ratio":0.47976878,"punctuation_ratio":0.09411765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975413,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-02T19:44:36Z\",\"WARC-Record-ID\":\"<urn:uuid:e893628b-994f-4773-b108-eb04a1255737>\",\"Content-Length\":\"7862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e553fcfe-d9f0-4328-b482-0fc740a95103>\",\"WARC-Concurrent-To\":\"<urn:uuid:20e9dc86-9bd1-46e1-9351-d9063c71cdc4>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/56-4-minus-1-36\",\"WARC-Payload-Digest\":\"sha1:HAUTATTM3PQJRBOMMEC2A5UOTHJXET32\",\"WARC-Block-Digest\":\"sha1:ASP2RGEHPCS7VGPEM6QGI7RHMDSCW4A3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370507738.45_warc_CC-MAIN-20200402173940-20200402203940-00551.warc.gz\"}"}
https://patents.justia.com/patent/20120128203
[ "# MOTION ANALYZING APPARATUS\n\n- SEIKO EPSON CORPORATION\n\nA sensor unit is installed to a target object and detects a given physical amount. A data acquisition unit acquires output data of the sensor unit in a period including a first period for which a real value of a value of m time integrals of the physical amount is known and a second period that is a target for motion analysis. An error time function estimating unit performs m time integrals of the output data of the sensor unit and estimates a time function of an error of a value of the physical amount detected by the sensor unit with respect to the real value of the value of the physical amount detected by the sensor unit based on a difference between a value of m time integrals of the output data and the real value for the first period.\n\n## Latest SEIKO EPSON CORPORATION Patents:\n\nDescription\nBACKGROUND\n\n1. Technical Field\n\nThe present invention relates to a motion analyzing apparatus.\n\n2. Related Art\n\nIn various fields, apparatuses that analyze the motion of a person or an object are necessary. For example, by analyzing the swing trajectory of a tennis racket or a golf club, the form of baseball pitching or batting, and the like and clarifying points to be improved based on the analysis result, game power can be improved.\n\nCurrently, as practical motion analyzing apparatuses, apparatuses that analyze a motion by consecutively photographing a measurement object, to which a mark is attached, using an infrared camera or the like and calculating the motion trajectory of the mark using consecutive photographed images are generally used.\n\nJP-A-2004-24488 is an example of the related art.\n\nHowever, in such apparatuses, since an infrared camera used for photographing images is necessary, the size of the apparatuses is in consequence large, and, accordingly, there is a problem in that it is difficult to handle the apparatuses. For example, in a case where the images of a tennis practice are desired to be acquired through photographing at a plurality of angles, it is necessary to move the position of the infrared camera or change the direction of a player in accordance with the desired photographing angles.\n\nIn contrast to this, recently, an apparatus was proposed which analyzes the motion of a measurement object based on output data of a small inertial sensor by installing the inertial sensor in the measurement object. Such an apparatus does not need an infrared camera, and accordingly there is an advantage of easy handling. For example, the velocity v(t) and the position p(t) of the measurement object can be calculated by performing a time integration process as shown in the following Equations (1) and (2) for an acceleration value a(t) detected by an acceleration sensor.\n\n$ v  ( T ) = ?  a  ( t )   t + v 0 ? ( 1 )  p  ( T ) = ?  v  ( t )   t + p 0 = ?  ?  a  ( τ )   τ   t + v 0  T + p 0   ?  indicates text missing or illegible when filed ( 2 )$\n\nHowever, generally, an error other than a value to be observed is included in the output value of an inertial sensor. 
Accordingly, for example, the output data x(t) of the acceleration sensor can be represented as the following Equation (3) by using an acceleration value a(t) and an error ε(t).\n\nx(t)=a(t)+ε(t)  (3)\n\nAccordingly, in a case where the velocity v(t) and the position p(t) of a measurement object are calculated by performing a time integration process as represented in the following Equations (4) and (5) based on the output data x(t) of the acceleration sensor, the error ε(t) is integrated with respect to time as well. Therefore errors in the velocity v(t) and the position p(t) rapidly increase in accordance with the elapse of time t.\n\n0Tx(t)dt=v(T)+∫0Tε(t)dt+c1  (4)\n\n0T0τx(τ)dτdt=p(T)+∫0T0τε(τ)dτdt+c1T+c2  (5)\n\nIn other words, in a motion analyzing apparatus using an inertial sensor, the characteristics of the sensor are not sufficient in practice, and in a case where the posture, the velocity, the position, and the like are calculated by performing an integration process for the output data of the inertial sensor, an error included in the output of the sensor noticeably increases through the integration process, whereby there is problem in that a sufficient analysis (measurement) capability is not acquired.\n\nSUMMARY\n\nAn advantage of some aspects of the invention is that it provides a motion analyzing apparatus that can be easily handled and provide analysis information with sufficient accuracy.\n\n(1) An aspect of the invention is directed to a motion analyzing apparatus including: a sensor unit that is installed to a target object and detects a physical amount; a data acquisition unit that acquires output data of the sensor unit in a period including a first period for which a real value of a value of m time integrals (here, m is an integer equal to or greater than one) of the physical amount is known and a second period that is a target for motion analysis; an error time function estimating unit that performs m time integrals of the output data and estimates a time function of an error of a value of the physical amount detected by the sensor unit with respect to the real value based on a difference between a value of m time integrals of the output data and the real value for the first period; a data correcting unit that corrects a value of m time integrals of the output data for the second period based on an estimation result of the error time function estimating unit; and a motion analysis information generating unit that generates motion analysis information of the target object based on the value of the m time integrals for the second period that is corrected by the data correcting unit.\n\nThe target object to be analyzed may be a person or an object (for example, an exercise tool, a vehicle, or the like) other than a person.\n\nThe information used for analyzing the motion of a target object, for example, may be trajectory information of the target object or information of a change in the speed of the target object, or the like.\n\nThe m time integrals may be an m time integrals in a continuous time system or an m time integrals (m time differentials) in a discrete time system.\n\nAccording to the above-described motion analyzing apparatus, the detection error of the sensor unit is estimated as a time function, and the m time integrals of the physical amount of the detection target is corrected by using the estimated time function of the error, whereby analysis information having sufficient accuracy can be generated. 
In addition, a sensor is used instead of an infrared camera, the configuration can be simplified, and the handling thereof is easy.\n\n(2) In the above-described motion analyzing apparatus, the error time function estimating unit may estimate the time function of the error by approximating the time function of the error as a polynomial equation and calculating coefficients of the polynomial equation.\n\nIn such a case, the time function of the detected error can be estimated with sufficient accuracy through relatively simple calculation. In addition, the order of the polynomial may be determined based on the accuracy required for the motion analysis.\n\nIn addition, for example, the error time function estimating unit may calculate coefficients of the polynomial equation by solving over-determined simultaneous equations that are acquired by approximating the error of the m time integrals of the data acquired by the data acquisition unit for the first period with respect to the real value to the value of the m time integrals of the polynomial in the first period of the polynomial equation.\n\nAs above, by setting up the over-determined simultaneous equations by acquiring more data in the first period, the estimation accuracy of the time function of the detected error can be increased. In addition, for example, the over-determined simultaneous equations may be solved by using a least squares method.\n\n(3) The above-described motion analyzing apparatus may be configured such that a plurality of the first periods is set, and the error time function estimating unit estimates the time function of the error based on data for each of the plurality of the first periods that is acquired by the data acquiring unit.\n\nBy arranging a plurality of the first periods as above, the estimation accuracy of the time function of the detected error can be increased further.\n\n(4) The above-described motion analyzing apparatus may be configured such that at least one of the plurality of the first periods is a period before start of the second period, and at least one of the plurality of the first periods is a period after end of the second period.\n\nIn such a case, the estimation accuracy of the time function of the detected error for the second period as a target of the motion analysis can be further increased, and accordingly, the motion analysis information having higher accuracy can be generated.\n\n(5) In the above-described motion analyzing apparatus, the first period may be a period in which the target object is stopped.\n\nIn such a case, for example, the speed, the posture, and the position of the target object for the first period can be known.\n\n(6) In the above-described motion analyzing apparatus, the sensor unit may detect at least one of acceleration and angular velocity as the physical amount.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\nThe invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.\n\nFIG. 1 is a diagram showing the configuration of a motion analyzing apparatus according to this embodiment.\n\nFIG. 2 is a flowchart showing an example of a process of generating motion analysis information by using a processing unit.\n\nFIGS. 3A and 3B are diagrams showing examples of a data acquisition period, a first period, and a second period.\n\nFIG. 4 is a flowchart illustrating a process of estimating an error time function and a data correcting process.\n\nFIG. 
5 is a schematic diagram showing the configuration of a sensor unit in this experimental example.\n\nFIG. 6 is a diagram showing an example of installation of the sensor unit in this experimental example.\n\nFIG. 7 is a diagram illustrating the operation sequence of a test subject in this experimental example.\n\nFIG. 8 is a diagram illustrating the definition of a coordinate system in this experimental example.\n\nFIG. 9 is a flowchart showing the process performed by a processing unit in this experimental example.\n\nFIGS. 10A and 10B are diagrams showing trajectory data in this experimental example.\n\nFIGS. 11A and 11B are diagrams for comparing trajectory data according to a technique of this embodiment and trajectory data according to a general technique.\n\nDESCRIPTION OF EXEMPLARY EMBODIMENTS\n\nHereinafter, a preferred embodiment of the invention will be described in detail with reference to the accompanying drawings. The embodiment described here is not for purposes of inappropriately limiting the content of the invention that is defined in the claims. In addition, not all the configurations described below are determined as essential constituent elements of the invention.\n\nFIG. 1 is a diagram showing the configuration of a motion analyzing apparatus according to this embodiment.\n\nThe motion analyzing apparatus 1 according to this embodiment is configured so as to include one or a plurality of sensor units 10 and a host terminal 20 and analyzes the motion of a target object. The sensor unit 10 and the host terminal 20 are interconnected in a wired or wireless manner.\n\nThe sensor unit 10 is installed to a target object for motion analysis and performs a process of detecting a given physical amount. In this embodiment, the sensor unit 10 is configured so as to include one or a plurality of sensors 100, a data processing section 110, and a communication section 120.\n\nThe sensor 100 is a sensor that detects a given physical amount and outputs a signal (data) according to the magnitude of the detected physical amount (for example, acceleration, angular velocity, speed, angular acceleration, or the like). For example, the sensor 100 is an inertial sensor.\n\nThe data processing section 110 synchronizes output data of each sensor 100, forms a packet in which the data is combined with time information and the like, and outputs the packet to the communication section 120. In addition, the data processing section 110 may perform the process of correcting the bias of the sensor 100 and correcting the temperature. Alternatively, the function of bias correction and temperature correction may be introduced into the sensor 100.\n\nThe communication section 120 performs the process of transmitting the packet data received from the data processing section 110 to the host terminal 20.\n\nThe host terminal 20 is configured so as to include a processing unit (CPU) 200, a communication unit 210, an operation unit 220, a ROM 230, a RAM 240, a non-volatile memory 250, and a display unit 260.\n\nThe communication unit 210 performs the process of receiving data transmitted from the sensor unit 10 and transmitting the data to the processing unit 200.\n\nThe operation unit 220 performs the process of acquiring operation data from a user and transmitting the operation data to the processing unit 200. 
The operation unit 220, for example, is a touch panel-type display, buttons, keys, a microphone, or the like.\n\nThe ROM 230 stores programs used for performing various calculation processes and control processes of the processing unit 200, various programs and data for implementing application functions, and the like.\n\nThe RAM 240 is used as a work area of the processing unit 200 and is a storage unit that temporarily stores a program or data read out from the ROM 230, data input from the operation unit 220, calculation results of the processing unit 200 that are acquired through execution of various programs, and the like.\n\nThe non-volatile memory 250 is a recording unit that records data which needs to be stored for a long term out of data generated by the process of the processing unit 200.\n\nThe display unit 260 displays the processing result of the processing unit 200 as a text, a graph, or other images. The display unit 260, for example, is a CRT, an LCD, a touch panel-type display, an HMD (head mount display), or the like. In addition, the functions of the operation unit 220 and the display unit 260 may be realized by one touch panel-type display.\n\nThe processing unit 200 performs various calculation processes for data received from the sensor unit 10 through the communication unit 210 or various control processes (display control for the display unit 260 or the like) in accordance with programs stored in the ROM 240.\n\nParticularly, in this embodiment, the processing unit 200 serves as a data acquisition section 202, an error time function estimating section 204, a data correcting section 206, and a motion analysis information generating section 208 to be described later.\n\nThe data acquisition section 202 performs the process of acquiring output data of the sensor unit 10 in a period including a first period in which the real value of the value of m time integrals of the physical amount as a detection target of the sensor 100 is known and a second period as a motion analysis target. The acquired data, for example, is stored in the RAM 240.\n\nThe error time function estimating section 204 calculates m integrals of the output data of the sensor unit and performs the process of estimating a function (hereinafter, referred to as an “error time function) of an error with respect to the real value of the value of the physical amount detected by the sensor unit 10 in time based on a difference between the value of the m time integrals of the output data for the first period and the real value.\n\nThe data correcting section 206 performs the process of correcting the value of the m time integrals of the output data of the sensor unit 10 for the second period based on the estimation result of the error time function estimating section 204.\n\nThe motion analysis information generating section 208 performs the process of generating information used for analyzing the motion of a target object (hereinafter, referred to as “motion analysis information”) based on the value of the m time integrals for the second period that has been corrected by the data correcting section 206. The generated motion analysis information may be displayed as a text, a graph, a diagram, or the like on the display unit 260 or may be output to the outside of the host terminal 20.\n\nFIG. 
2 is a flowchart showing an example of the process of generating motion analysis information by using the processing unit 200.\n\nFirst, the processing unit 200 periodically acquires new data from the sensor unit 10 until a data acquisition period ends (No in Step S20) by using the data acquisition section 202 (Step S10).\n\nNext, when the data acquisition period ends (Yes in Step S20), the processing unit 200 calculates m time integrals of the data (Step S21) in the first period and estimates the error time function based on a difference between the m time integrals of the data acquired in Step S10 and the real value, by using the error time function estimating section 204 (Step S30).\n\nNext, the processing unit 200 corrects the value of the m time integrals of the data acquired in Step S10 for the second period based on the time function estimated in Step S30, by using the data correcting section 206 (Step S40).\n\nFinally, the processing unit 200 generates motion analysis information based on the value of the m integrals for the second period with respect to time, which has been corrected in Step S40, by using the motion analysis information generating section 208 (Step S50).\n\nFIGS. 3A and 3B are diagrams showing examples of the data acquisition period, the first period, and the second period.\n\nIn the example shown in FIG. 3A, a second period for which an analysis target object moves is arranged at time t2 to t3, and, before and after the second period, two first periods that are separated in time are arranged at t0 to t1 and t4 to t5. In addition, a data acquisition period is arranged at time t0 to t5, for this data acquisition period, the output data of the sensor unit 10 is sampled (acquired) at a predetermined interval by the host terminal 20. In each of the two first periods, since the real value of m time integrals of the physical amount as the detection target of the sensor unit 10 is known, a difference between the value of m time integrals of the output data of the sensor unit 10 and the real value can be known. An error time function for the output data of the sensor unit 10 can be estimated for the entire data acquisition period based on the information of the difference. In addition, any one of the first period (time t0 to t1) that is arranged first and the first period (time t4 to t5) arranged second may not be provided. However, in order to increase the accuracy of the estimation of the error time function, it is preferable that the first periods are arranged before and after the second period. In order to increase the accuracy of estimation of the error time function, it is effective to estimate the error time function by reflecting random variations of the error that is caused by the variations of the power source, temperature variations, and the like, accordingly, it is preferable that a plurality of the first periods that are separated in time are arranged. Particularly, by arranging the first periods before and after the second period, the accuracy of the estimated error increases for the second period, and accordingly, the accuracy of data correction for the second period can be improved.\n\nIn addition, in the example shown in FIG. 3B, two second periods in which the analysis target object moves are arranged at time t2 and t3 and time t4 and t5. 
The first period (time t3 to t4) is arranged before the second period (time t2 to t3) arranged first, the first period (time t3 to t4) arranged second is arranged between the two second periods, and the first period (time t6 to t7) arranged third is arranged after the second period arranged second. Then, the data acquisition period is arranged at time t0 to t7. For each one of the three first periods, the real value of m integrals of the physical amount as the detection target of the sensor unit 10 is known, a differences between the value of m time integrals of the output data of the sensor unit 10 and a real value can be known. The error time function for the output data of the sensor unit 10 can be estimated for the entire data acquisition period. In addition, in the example shown in FIG. 3B, since two second periods as targets of motion analysis are arranged, by arranging three first periods at positions that are separated in time with the two second periods interposed therebetween, the estimation accuracy of the error time function for the two second periods can be increased. In other words, by arranging the first periods before and after the second period as targets of motion analysis, even in a case where motions of the analysis targets are repeatedly performed over time, the correction accuracy of the data for each second period can be improved.\n\nEstimation of Error Time Function and Data Correction\n\nNext, an example of the technique for estimating the error time function and data correction will be described.\n\nFirst, in a case where the value of the physical amount as a calculation target of the processing unit 200 at time t is assumed to be Fm(t), and the sensor unit 10 measures the value f(t) of the m-th order derivative function, the following Equation (6) is satisfied.\n\n$ m  F m  ( t )  t m = f  ( t ) ( 6 )$\n\nHere, assuming that the output data x(t) of the sensor unit 10 includes an error ε(t), x(t) can be represented as the following Equation (7).\n\nx(t)=f(t)+ε(t)  (7)\n\nIt can be considered that the error time function ε(t) is approximated as an n-th order polynomial equation ε(t) as the following Equation (8).\n\n$ɛ  ( t ) ≈ g  ( t ) = a 0 + a 1  t + a 2  t 2 + … + a n  t n = ∑ k = 0 n  a k  t k ( 8 )$\n\nIn Xm(t) that is the result of m time integrals of the output data x(t) of the sensor unit 10, an error component Em(t) due to an initial state error ε(t) (integral constant) other than the physical amount Fm(t) as a calculation target is included. Accordingly, Xm(t) can be represented as the following Equation (9).\n\n$X m  ( t ) = F m  ( t ) + E m  ( t )  {  m  X m  ( t )  t m = x  ( t )  m  E m  ( t )  t m = ɛ  ( t ) ( 9 )$\n\nConsidering that the error component Em(t) can be approximated as a polynomial equation Gm(t) in consideration of the integral constant (initial state error) cK for the m time integrals of g(t), the following Equations (10) and (11) are satisfied.\n\n$ m  G m  ( t )  t m = g  ( t ) ( 10 ) E m  ( t ) ≈ G m  ( t ) = ∑ k = 0 n  k ! ( k + m ) !  a k  t k + m + ∑ k = 0 m - 1  c m - k k !  t k ( 11 )$\n\nAccordingly, in a case where the physical amount Fm(tr) at specific time tr is known, the relation represented in the following Equation (12) is satisfied.\n\n$X m  ( t r ) - F m  ( t r ) ≈ G m  ( t r ) = ∑ k = 0 n  k ! ( k + m ) !  a k  t r k + m + ∑ k = 0 m - 1  c m - k k ! 
 t r k ( 12 )$\n\nBy preparing this relation equations of Equation (12) corresponding to the number of each time at which the value of the physical amount as a calculation target is known, for coefficients aK and CK of Equation (11) as an approximated polynomial equation, the following Equation (13) as over-determined simultaneous equations as below can be set up.\n\n$[ X m  ( t r   1 ) - F m  ( t r   1 ) X m  ( t r   2 ) - F m  ( t r   2 ) X m  ( t r   3 ) - F m  ( t r   3 ) ⋮ ] ≈ U  [ a 0 a 1 ⋮ a n ] + V  [ c 1 c 2 ⋮ c m ]  { U = { u ij } , u ij = j ! ( m + j ) !  t ri m + j V = { v ij } , v ij = 1 ( m - j ) !  t ri m - j ( 13 )$\n\nFrom Equation (13) as the over-determined simultaneous equations, the coefficients aK and CK of Equation (11) as the approximated polynomial equations can be acquired, for example, by using a least-squares method.\n\n$M = [ U V ] ( 14 ) [ a 0 a 0 ⋮ a n c 1 c 2 ⋮ c m ] = ( M T  M ) - 1  M T  [ X m  ( t r   1 ) - F m  ( t r   1 ) X m  ( t r   2 ) - F m  ( t r   2 ) X m  ( t r   3 ) - F m  ( t r   3 ) ⋮ ] ( 15 )$\n\nSince the approximated polynomial equations g(t) and Gm(t) are determined by using the coefficients aK and CK, the physical amount Fm(t) and the value f(t) of the m-th order derivative function thereof can be estimated by using the following Equations (16) and (17).\n\nFm(t)≈Xm(t)−Gm(t)  (16)\n\nf(t)≈x(t)−g(t)  (17)\n\nThe flowchart of the error time function estimating process and the data correction process based on the above-described techniques are illustrated in FIG. 4.\n\nFirst, the m time integrals of the acquired data x(t) is performed so as to calculate Xm(t) (Step S32).\n\nNext, the error time function ε(t) is approximated as a polynomial equation g(t), and Equation (13) as the over-determined simultaneous equations is generated by using the value Xm(tr) of the m time integrals at each time tr in the first period and the real value Fm(tr) (Step S34).\n\nNext, the Equation (13) as the over-determined simultaneous equations generated in Step S34 is solved so as to calculated the coefficient values aK and cK of g(t) (Step S36).\n\nNext, Gm(t) is calculated from Equation (11) by using the coefficient values aK and cK calculated in Step S36 (Step S38).\n\nFinally, Fm(t) is calculated from Equation (16) by using Xm(t) calculated in Step S32 and Gm(t) calculated in Step S36 (Step S42).\n\nHere, the process of Steps S32 to S38 corresponds to the process of Step S30 illustrated in the flowchart of FIG. 2, and the process of Step S42 corresponds to the process of Step S40 illustrated in the flowchart of FIG. 2.\n\nAs described above, according to the motion analyzing apparatus of this embodiment, motion analysis information having sufficient accuracy can be generated by estimating the error time function of the output data of the sensor unit 10 and correcting the value of the m time integrals of the output data of the sensor unit 10. In addition, according to this embodiment, the sensor is used instead of the infrared camera, and accordingly, a motion analyzing apparatus that has a simple configuration and can be easily handled can be realized.\n\nIn addition, according to this embodiment, by approximating the error time function as a polynomial equation, the error time function can be estimated with sufficient accuracy, for example, through relatively simple calculation as Equation (15). 
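To make the procedure around Equations (11) to (16) concrete, the following Python/NumPy sketch illustrates the idea in a simplified form: the coefficients a_k and c_k are lumped into a single polynomial that is fitted by ordinary least squares to the error observed during the first periods, and the fitted polynomial is then subtracted over the whole acquisition period. This is an illustrative sketch under those simplifying assumptions, not the patented implementation; the function names and the use of numpy.polyfit are choices made only for the example.

```python
import numpy as np


def estimate_error_polynomial(t_known, Xm_known, Fm_known, degree):
    """Fit a polynomial G_m(t) to the integration error X_m(t) - F_m(t)
    at the times where the real value F_m is known (the 'first periods').
    The a_k and c_k coefficients of Equation (11) are lumped into one
    polynomial and the over-determined system is solved by least squares,
    in the spirit of Equation (15)."""
    error_samples = np.asarray(Xm_known) - np.asarray(Fm_known)
    return np.polyfit(np.asarray(t_known), error_samples, degree)


def correct_m_time_integral(t, Xm, coeffs):
    """Apply Equation (16): F_m(t) is approximated by X_m(t) - G_m(t)."""
    return np.asarray(Xm) - np.polyval(coeffs, np.asarray(t))


# Toy usage: a position track X_2(t) obtained by double integration of noisy
# acceleration data, with the object known to be at rest (position 0) during
# two first periods placed before and after the analyzed motion.
t = np.linspace(0.0, 10.0, 1001)
first = (t < 1.0) | (t > 9.0)                 # the two first periods
Xm = 0.05 * t**3 + 0.2 * t                    # drifting, uncorrected integral
coeffs = estimate_error_polynomial(t[first], Xm[first], 0.0, degree=4)
Fm = correct_m_time_integral(t, Xm, coeffs)   # corrected value over all of t
```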
In addition, by acquiring more data for the first period and setting up Equation (13) as the over-determined simultaneous equations, the estimation accuracy of the error time function can be raised.\n\nExperimental Example of Motion Analysis\n\nNext, an experimental example will be described to which the motion analyzing technique of this embodiment is applied. In this experimental example, the sensor unit 10 configured as shown in FIG. 5 is installed to a grip end of a tennis racket as an analysis target object as shown in FIG. 6, and the trajectories (an example of the motion analysis information) of the top 302 and the grip end 304 of the tennis racket when the test subject hits a tennis ball are represented.\n\nAs shown in FIG. 5, the sensor unit 10 used in this experimental example includes a six-axis motion sensor that is configured by three axis acceleration sensors 102x, 102y, and 102z (examples of inertial sensors) that detect the acceleration in the directions of the X axis, the Y axis, and the Z axis and three axis gyro sensors (angular velocity sensors) 104x, 104y, and 104z that detect the angular velocities in the directions of the X-axis, the Y-axis, and the Z-axis, as the sensor 100 shown in FIG. 1. The X-axis, the Y-axis, and the Z-axis are determined based on the right-hand system.\n\nThe data processing section 110 synchronizes the output data of the six-axis motion sensor and outputs the synchronized data to the communication section 120. In addition, the data processing section 110 performs the process of correcting a detected error due to a deviation of the installation angle of the six-axis motion sensor and the like.\n\nThe communication section 120 performs the process of transmitting the data received from the data processing section 110 to the host terminal 20.\n\nThis sensor unit 10, for example, as shown in FIG. 6, is installed to the grip end 304 of the tennis racket 300 such that the X axis is perpendicular to the face (hitting area). The installation direction of the sensor unit 10 is arbitrary. For example, as shown in FIG. 6, the sensor unit 10 is installed such that the x-axis direction is the direction of a perpendicular line extending from the inside of the sheet face toward the front side, the y-axis direction extends toward the right side in the horizontal direction, and the z-axis direction extends toward the upper side in the vertical direction.\n\nIn this experimental example, the test subject is allowed to perform a predetermined operation sequence. This operation sequence will be described with reference to FIG. 7. First, the tennis racket 300 is placed at a first position determined in advance and is stopped at least about one second (time t0 to t1). Next, the test subject moves to a second position with the tennis racket 300 held and prepares a swing (time t1 to t2). Next, the tennis ball is sent to the test subject, and the test subject hits the tennis ball with the tennis racket 300 (time t2 to t3). Next, after finishing the swing, the test subject moves to the first position with the tennis racket held and places the tennis racket at the first position (time t3 to t4). Finally, the tennis racket 300 is stopped for at least about one second (time t4 to t5). The period of time t0 to t5 corresponds to the data acquisition period, and, the output data of the sensor unit 10 is sampled, for example, at the sampling rate (0.5 kHz) of 500 samples per second. 
In addition, in the period of time t0 to t1 and the period t4 to t5, the positions of the sensor unit 10 are known and the period corresponds to the first period. Furthermore, the period of time t2 to t3 corresponds to the second period as a motion analysis target.\n\nIn addition, in this experimental example, as shown in FIG. 8, the position of the sensor unit 10 at a time when the top 302 of the tennis racket 300 is at a maximum speed (immediately before the face of the tennis racket 300 is hit by the tennis ball 400) is set as the origin point, the direction of the maximum speed of the top 302 is set to the X axis, and the Y axis and the Z axis are determined based on the right-hand system. Then, the trajectories of the top 302 and the grip end 304 of the tennis racket 300 in the XYZ coordinate system for the second period (the period of time t2 to t3) are displayed as graphs.\n\nFIG. 9 is a flowchart of the process after the processing unit 200 starts to acquire the output data of the sensor unit 10 until the trajectories of the top 302 and the grip end 304 of the tennis racket 300 for the second period in the XYZ coordinate system are displayed as graphs.\n\nFirst, until the data acquisition period ends (No in Step S120), new three-axis acceleration data and three-axis angular velocity data are periodically acquired from the sensor unit 10 (Step S110).\n\nNext, when the data acquisition period ends (Yes in Step S120), an error with respect to the real value (0) of the three-axis angular velocity data acquired in two first periods (the period of time t0 to t1 and the period of time t4 to t5) in Step S110 is calculated, and the time function of the output error (an error in the angular velocity) of the three axis gyro sensors is estimated (Step S130). 
For example, the time function of the angular velocity error may be estimated through approximation as a polynomial equation.\n\nNext, by using the time function estimated in Step S130, integration is performed with the error of the three axis angular velocity data acquired in Step S110 being eliminated, and the posture of the sensor unit 10 in the XYZ coordinate system is calculated (Step S140).\n\nNext, by using the posture of the sensor unit 10 in the XYZ coordinate system that is calculated in Step S140, coordinate conversion of the three axis acceleration data (an acceleration vector in the xyz coordinate system) acquired in Step S110 into the acceleration vector in the XYZ coordinate system is performed (Step S150).\n\nNext, the acceleration vector in the XYZ coordinate system that is acquired through the coordinate conversion of Step S150 is double-integrated, and the positions of the sensor unit 10 in the XYZ coordinate system for the data acquisition period (the period of time t0 to t5) are calculated (Step S160).\n\nNext, the error with respect to the real value (the first position) of the position of the sensor unit 10 in the XYZ coordinate system for the two first periods (the period of time t0 to t1 and the period of time t4 to t5) is calculated, and the time function of the acceleration error in each direction of the X-axis, the Y-axis, and the Z-axis of the acceleration vector in the XYZ coordinate system is estimated (Step S170).\n\nNext, by using the time function of the acceleration error that is estimated in Step S170, double integration is performed with the error of the acceleration vector in the XYZ coordinate system being eliminated, and the position (the position of the grip end 304 of the tennis racket 300) of the sensor unit 10 in the XYZ coordinate system is calculated (Step S180).\n\nNext, the distance and the direction from the sensor unit 10 to the top are measured in advance and are known, and the position of the top 302 of the tennis racket 300 in the XYZ coordinate system is calculated based on the position of the sensor unit 10 in the XYZ coordinate system that is calculated in Step S160 and the posture of the sensor unit 10 in the XYZ coordinate system that is calculated in Step S140 (Step S190).\n\nFinally, the coordinates of the positions of the top 302 and the grip end 304 of the tennis racket 300 in the XYZ coordinate system for the second period (the period of time t2 to t3) as a motion analysis target are extracted and are displayed as graphs (Step S200).\n\nFIGS. 10A and 10B are diagrams showing an example of the trajectories of the top 302 and the grip end 304 of the tennis racket 300 for the second period (the period of time t2 to t3). FIG. 10A illustrates the trajectories in the X-Y plane, and FIG. 108 illustrates the trajectories in the X-Z plane. In FIG. 10A, a curve denoted by L1 is the trajectory of the top 302, and a curve denoted by L2 is the trajectory of the grip end 304. In addition, in FIG. 10B, a curve denoted by L3 is the trajectory of the top 302, and a curve denoted by L4 is the trajectory of the grip end 304. The trajectories shown in FIGS. 10A and 10B are appropriate for the trajectory of an actual swing.\n\nFor a comparison, FIGS. 11A and 11B are diagrams acquired by displaying the trajectories in an overlapping manner in a case where a general technique of integrating without correction of the error of the three axis acceleration data in the trajectories shown in FIGS. 10A and 108. In FIG. 
11A, a trajectory graph G1 is a graph (a trajectory graph in the XY plane in a case where the technique of this embodiment is applied) of the trajectory shown in FIG. 10A, and a trajectory graph G2 is a graph of the trajectory in the XY plane in a case where a general technique is applied. In addition, in FIG. 118, a trajectory graph G3 is a graph (a trajectory graph in the XZ plane in a case where the technique of this embodiment is applied) of the trajectory shown in FIG. 10B, and a trajectory graph G4 is a graph of the trajectory in the XZ plane in a case where a general technique is applied. Based on FIGS. 11A and 113, in the trajectory graphs G2 and G4 in a case where a general technique is applied, there is a displacement of 4 m in the X-axis direction, and it is apparent that the trajectory does not match an actual swing trajectory. Based on this result, it can be understood that, by applying the technique of this embodiment, the accuracy of the swing trajectory is improved to a large extent.\n\nThe invention is not limited to this embodiment, and various modifications can be made therein within the scope of the concept of the invention.\n\nFor example, in this embodiment, a case has been described as an example in which position data that is acquired by performing double time integration of the acceleration data is corrected. However, as another example, speed data acquired by performing time integration of the acceleration data once may be corrected. In such a case, for example, in a case where the first period is set as a period in which the target object is stopped, the speed is zero for the first period, and the time function of the acceleration error can be estimated. By correcting the speed as above, for example, the swing speed of a tennis racket, a golf club, a bat, or the like can be measured with high accuracy. As another example, data of an angle (rotation angle) of one axis rotation that is acquired by performing time integration of the angular velocity output by the gyro sensor once may be corrected. In such a case, for example, in a case where the first period is a period in which the target object is stopped, the rotation angle for the first period is set to zero, and the time function of the acceleration error can be estimated. By correcting the rotation angle as above, for example, the rotation angle of the hit area immediately after a tennis racket, a golf club, or the like is hit by a ball (immediately after an impact) can be measured with high accuracy.\n\nThe invention includes a configuration (for example, a configuration that has the same function, the same method, and the same result or a configuration that has the same object and the same effects) that is substantially the same as the configuration described in the embodiment. In addition, the invention includes a configuration acquired by substituting a non-essential part of the configuration described in the embodiment. Furthermore, the invention includes a configuration that exhibits the same operations and effects as those of the configuration described in the embodiment or a configuration that can achieve the same object as that of the embodiment. In addition, the invention includes a configuration acquired by adding known techniques to the configuration described in the embodiment.\n\nThe entire disclosure of Japanese Patent Application No. 2010-259234, filed Nov. 19, 2010 is expressly incorporated by reference herein.\n\n## Claims\n\n1. 
A motion analyzing apparatus comprising:\n\na sensor unit that is installed to a target object and detects a physical amount;\na data acquisition unit that acquires output data of the sensor unit in a period including a first period for which a real value of a value of m time integrals (here, m is an integer equal to or greater than one) of the physical amount is known and a second period that is a target for motion analysis;\nan error time function estimating unit that performs m time integrals of the output data and estimates a time function of an error of a value of the physical amount detected by the sensor unit with respect to the real value based on a difference between a value of m time integrals of the output data and the real value for the first period;\na data correcting unit that corrects a value of m time integrals of the output data for the second period based on an estimation result of the error time function estimating unit; and\na motion analysis information generating unit that generates motion analysis information of the target object based on the value of the m time integrals for the second period that is corrected by the data correcting unit.\n\n2. The motion analyzing apparatus according to claim 1, wherein the error time function estimating unit estimates the time function of the error by approximating the time function of the error as a polynomial equation and calculating coefficients of the polynomial equation.\n\n3. The motion analyzing apparatus according to claim 1,\n\nwherein a plurality of the first periods is set, and\nwherein the error time function estimating unit estimates the time function of the error based on data for each of the plurality of the first periods that is acquired by the data acquiring unit.\n\n4. The motion analyzing apparatus according to claim 3,\n\nwherein at least one of the plurality of the first periods is a period before start of the second period, and\nwherein at least one of the plurality of the first periods is a period after end of the second period.\n\n5. The motion analyzing apparatus according to claim 1, wherein the first period is a period in which the target object is stopped.\n\n6. The motion analyzing apparatus according to claim 1, wherein the sensor unit detects at least one of acceleration and angular velocity as the physical amount.\n\nPatent History\nPublication number: 20120128203\nType: Application\nFiled: Oct 31, 2011\nPublication Date: May 24, 2012\nPatent Grant number: 8565483\nApplicant: SEIKO EPSON CORPORATION (Tokyo)\nInventor: Yasushi NAKAOKA (Shiojiri-shi)\nApplication Number: 13/285,083\nClassifications\nCurrent U.S. Class: Target Tracking Or Detecting (382/103)\nInternational Classification: G06K 9/00 (20060101);" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8895833,"math_prob":0.97829866,"size":29567,"snap":"2023-14-2023-23","text_gpt3_token_len":6109,"char_repetition_ratio":0.19710448,"word_repetition_ratio":0.31625655,"special_character_ratio":0.21283863,"punctuation_ratio":0.0733309,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99472976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T00:36:33Z\",\"WARC-Record-ID\":\"<urn:uuid:995fb26e-8590-4224-8b0d-d718e132d4b1>\",\"Content-Length\":\"119213\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17f7299c-2316-40a2-8bb3-b8bc18fd6c1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:077d288a-24e1-4953-9cc2-1ba41dfd8ad4>\",\"WARC-IP-Address\":\"44.209.20.103\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/20120128203\",\"WARC-Payload-Digest\":\"sha1:RJNIZHC36CEZLQVUDWVGRYIVHIUVRX5Z\",\"WARC-Block-Digest\":\"sha1:IZKRBYRN3IENHWXC4B3QLYLTH3YKK5X5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950363.89_warc_CC-MAIN-20230401221921-20230402011921-00621.warc.gz\"}"}
https://file.scirp.org/Html/5-7502601_64071.htm
[ " Novel Bianchi VII Space Times and Their Properties\n\nJournal of Modern Physics\nVol.07 No.05(2016), Article ID:64071,13 pages\n10.4236/jmp.2016.75046\n\nNovel Bianchi VII Space Times and Their Properties\n\nManavendra Mahato, Ajay Pratap Singh\n\nDiscipline of Physics, Indian Institute of Technology Indore, Indore, India", null, "", null, "", null, "", null, "Received 31 December 2015; accepted 26 February 2016; published 29 February 2016\n\nABSTRACT\n\nWe construct and investigate non conformal anisotropic Bianchi type VII0 solutions in 5 dimensions. The solutions are asymptotically flat with a singularity. We also construct anisotropic solutions of Einstein-Maxwell gravity using a procedure similar to Majumdar-Papapetrou solutions with various profiles of charged dust and explore ways to hide the singularity behind the horizon. We further embed it in one higher dimension to get an asymptotically anti de Sitter space and approximate two point correlator of operators with higher conformal dimensions by calculating geodesic lengths. We find a peculiar power law decay of the correlator as a function of separation.\n\nKeywords:\n\nBianchi Spacetimes, Anisotropic Spacetime, Gauge/Gravity Correspondence", null, "1. Introduction\n\nInception of general relativity has led to great insights into cosmology as well as many open problems. General relativity encodes the interaction of matter or radiation with space time in a charming manner. Its various solutions have been studied in the past and many of them point to interesting properties about general relativity and cosmology. Among them, spherically symmetric solutions have received much attention so far because of their enhanced symmetries, simplicity and relevance in cosmology. However, many anisotropic solutions do exist in nature and such situations in general offer a far richer set of intricacies. Anisotropic, but homogenous solutions of Einstein equations were classified by Bianchi many decades ago. But, finding such solutions has always been challenging and has led to some results previously. Recently, anisotropic space times have gathered a renewed interest from a different direction. Many solutions of general relativity are much sought for the study of field theory using AdS/CFT correspondence as well as attractor mechanism - . Recently, asymptotic AdS (anti-de Sitter), anisotropic solutions were constructed and studied to investigate properties of certain anisotropic condensed matter systems using AdS/CFT correspondence. For example, see - .\n\nIn this context, asymptotically AdS, Bianchi VII class of solutions are also important as they were related using AdS/CFT correspondence to spatially modulated superconductors where the Cooper pair is not a s-wave, but a p-wave. The numerical solutions were constructed and their properties were further studied - . This motivated us to the idea that similar anisotropic solutions with asymptotically flat or de Sitter property can also be constructed. We take up the investigation of such solutions and their properties in this manuscript.\n\nIn this manuscript, we begin with constructing a simple, static but anisotropic solution of pure general relativity which is also asymptotically flat. Such simple cases usually lead to singular solutions. We report certain simple, analytic Bianchi VII0 solutions containing singularity. 
To address the problem of singularity, we include additional Maxwell and matter fields in one higher dimension and attempt to construct Majumdar-Papapetrou like solutions with anisotropic property preserved - . We seek to construct solutions with a regular horizon hiding any naked singularity and we report one such case, though it is sourced by a fictitious matter density.\n\nWe embark on a study to explore the effect of anisotropy in such spacetimes, their properties and dual field theories. We construct a 5 dimensional asymptotically locally anti de Sitter solution by adding a radial direction whose boundary metric is our anisotropic, 4 dimensional solution. Various foliations of such space away from the singularity can have well defined field theory on the boundary. Similar setups also provide a good platform to study the singularity and have inspired many to demystify cosmological singularities such as big bang. For example, see - . We use AdS/CFT correspondence to study two point correlators of higher dimensional operators living on the boundary. For such operators, the correlator gets most significant contribution from the length of the geodesics and this approach was recently used to study cosmological anisotropic singularites. We partially solve geodesic equations to evaluate the correlator and discuss its properties. Here the boundary is an anisotropically curved space which can impart peculiar properties to correlators.\n\n2. Conventions and Notations\n\nThree dimensional homogenous spaces have been classified by Bianchi into different classes . Since homogenous spaces exhibit identical metric properties at all points in space, such spaces have translational symmetries. They contain 3 Killing vectors denoted here as", null, "(1)\n\nwhere", null, "Due to lack of isotropy in general, these generators of translation are not always compatible with each other. Their Lie algebra can be represented as", null, "(2)\n\nwhere", null, "are called structure constants which are antisymmetric in lower indices,", null, "(3)\n\nand they also satisfy Jacobi identity,", null, "(4)\n\nTaking into consideration the choices under scale transformation, local coordinate frame and signs, it was shown by Bianchi that there are 9 different possible classes. We hereby explicitly write the Killing vectors corresponding to Bianchi type VII,", null, "(5)\n\nHere, k is a constant. It is usually fixed to be 1 by using diffeomorphism i.e. by reparameterizing kx to be new variable x and similar changes. We keep it explicit here by considering it as a small positive number. It is sometimes helpful to check the analogous isotropic case by taking the limit", null, ". The commutation relations between the generators can be checked to be non vanishing. The dual one forms of these vectors are", null, "(6)\n\nIf the metric is to have Bianchi VII symmetry, then the one forms", null, "and", null, "are restricted to appear in the above combinations only. We will use these one forms in subsequent sections to construct the metric ansatz.\n\n3. Anisotropic Solution of Pure Gravity Action\n\nWe attempt to construct here a simple anisotropic solution of pure gravity. For anisotropy, 3 spatial directions along the 3 non commutative Killing vectors are necessary. We add one extra radial direction r and make an ansatz that the metric coefficients of the various one forms are functions of r only. The action S for this section is", null, "(7)\n\nHere, R is the Ricci scalar and g denotes determinant of the metric. 
The signature of our metric is chosen to be (+, +, +, +). We choose it to be all plus to keep the analysis simple. Our aim is to study unique properties due to anisotropy and an amenable choice of signs is desired. We will see later that time coordinate can be embedded in such spaces by introducing higher dimensions. We choose our ansatz for the metric to be", null, "(8)\n\nwhere", null, ",", null, "and are chosen to be functions of r only. The one forms used above are defined in Equation (6).\n\nThus, we are looking for static, anisotropic Bianchi solutions. We choose to work in a non-coordinate basis with the following vielbeins. The metric in such basis is diagonal. Our vielbeins are\n\nThe Einstein equations of motion can be conveniently written in terms of a variable G which denotes. They are\n\n(9)\n\n(10)\n\n(11)\n\n(12)\n\nincluding a constraint which arises from (rr) component of Einstein equation,\n\n(13)\n\nHere, superscript prime denotes derivative with respect to r. The first equation suggests that a good radial variable will be u defined as\n\n(14)\n\nSuch a choice simplifies the set of dynamical equations to\n\n(15)\n\nThe constraint equation now becomes\n\n(16)\n\nWe then proceed to express the function M as\n\nwhere a and b are constants. The next amenable equation to solve is that for N. In general, this equation is non linear in u and requires numerical methods. However, it simplifies for a special case of, i.e. variable M is a constant. We will see later that such choice still preserves many properties related to anisotropy. The equation for N then reduces to\n\n(17)\n\nwhere is a constant. Inverting this differential equation results in\n\n(18)\n\nwhich if integrated once, leads to\n\n(19)\n\nHere, is an integration constant. The above equation can be solved in terms of Jacobi amplitudes in general. However, it offers simple solution for two cases, 1) and 2). We next proceed to explore the case of in more detail. We can then write the above equation as1\n\n(20)\n\nIts general solution is\n\n(21)\n\nIt can also be written as\n\n(22)\n\nThe equation for Z can be written as\n\n(23)\n\nIt has the general solution\n\n(24)\n\nSince differs from Z by a constant, it also gets determined. The constraint relation, Equation (16) is also satisfied for the given choice of m. Furthermore, one can absorb the parameters and by redefining coordinate u. Similarly, can be accounted by redefining. By redefining the constant a, the metric can be written in a form\n\n(25)\n\nThis metric is Ricci flat, i.e. but contains a naked singularity at. Here, the Kretschmann tensor diverges as. We assert this metric to be the simplest Bianchi VII metric. We hope that the naked singularity can still be put in a physical context if we excite some other field whose energy density itself becomes infinity at, thus accounting for the strong curvature there. Or, it may be possible to add extra fields in such a way that this singularity can be hidden behind a horizon. Indeed, in reference , the numerically constructed 5 dimensional Bianchi VII black brane space has a horizon. We explore some other possibilities in the next section.\n\n4. Anisotropic Solution of Einstein Maxwell Action\n\nWe now attempt to construct anisotropic solutions of 5 dimensional Einstein Maxwell action along with matter density. Our action in this section is\n\n(26)\n\nHere, notation g denotes the determinant of the metric which we choose to have one negative signature along time direction. Ricci scalar is denoted by R. 
The electromagnetic potential and field strength are denoted by and, respectively. We also incorporate source for electromagnetic field denoted as as well as a matter density denoted by with a four velocity profile. The matter term in action is conspired to give the correct energy momentum tensor for a pressureless dust i.e.. The Maxwell equation is\n\n(27)\n\nThe metric fluctuations of the action leads to following Einstein equations.\n\nWe further attempt to find solutions of these set of equations using a method employed earlier to find generalizations of Majumdar-Papapetrou metrics . We note that the Majumdar-Papapetrou metrics are 4 dimensional extremal solutions of Einstein-Maxwell equations of the type\n\n(28)\n\nalong with an electromagnetic flux of the kind. The function V is required by Einstein equations to be a harmonic function of the 3 dimensional flat subspace. When searching for solutions in dimensions, we generalize the metric ansatz to be of type\n\n(29)\n\nThe metric depends on spatial coordinates only. It leads to the following Ricci tensor,\n\n(30)\n\nThe indices denote spatial coordinates only. Here, notation denotes Laplacian defined over the internal space with metric. The Ricci tensor component is found to be vanishing. When we write the corresponding energy momentum tensor and try to satisfy Einstein equations, we notice that there is no analog of any term like in the expression of. Such a term, if kept, will require us to solve complicated non-linear equations. One chooses a relation between parameters m and n, so as to make such term vanish i.e.. Returning to our interest of 5 dimensional metrics, we find that we should take and. Thus, our metric ansatz becomes\n\n(31)\n\nWe next evaluate the components of the Einstein tensor and they are found to be\n\n(32)\n\nWe choose our internal subspace to be the anisotropic Ricci flat space that we obtained in last section i.e. Equation (25). Thus, Ricci tensor and Ricci scalar vanishes. The Einstein tensor in our case thus reduces to\n\n(33)\n\nNext, we make an ansatz for the electromagnetic potential. We assume it to be along the time direction\n\n(34)\n\nWe also make an ansatz for the four velocity of the matter density. We assume matter to be at rest i.e.\n\n(35)\n\nSuch a choice also ensures that. The energy momentum tensor component from electromagnetic field is given in terms of field strength tensor as\n\n(36)\n\nThe matter energy density contribution to energy momentum tensor is\n\n(37)\n\nThe non trivial components of Einstein equations are\n\nWe notice that the term explicitly proportional to in Equation (38) is same as (tt) component Einstein equation as in (38). Canceling it, we get\n\n(38)\n\nIt can be solved easily if we take the function A proportional to function V. The equation fixes the relation to be. Then the rest of Einstein equations simplifies to\n\n(39)\n\nIn terms of, the above equation can be written as\n\n(40)\n\nWe next make an ansatz for the source of the electromagnetic field. We take only the time component of to be non trivial. Along with the above choice for electromagnetic potential, the Maxwell equation takes a form\n\n(41)\n\nThis equation will be consistent with the Equation (40), if we choose\n\n(42)\n\nThus we are left with a single equation viz. Equation (40), which is a non homogenous Laplacian equation. We now proceed to solve it for some suitable choices of matter density in the next section.\n\n5. Anisotropic Solutions with Chosen Source Profiles\n\n5.1. 
Polynomial Solutions\n\nIn the previous section, the Einstein equations were reduced to a single non-homogenous harmonic equation which is sourced by the density profile of the matter field. The equation now left to solve is\n\n(43)\n\nWe choose our spatial subspace to be same as the anisotropic 4 dimensional subspace which was obtained in the section (3). Therefore,\n\n(44)\n\nThe determinant of the metric is. We will henceforth denote simply by x. For simplicity, we assume the function to be a function of u and x only. Then Equation (43) becomes,\n\n(45)\n\nWe next define polar coordinates and. The equation appears in polar form as\n\n(46)\n\nwhere we have taken to be independent of, i.e. we restrict ourselves to the lowest harmonic. Now the equation can be made amenable to analytical results by suitably choosing the profiles for the matter density. We next choose\n\n(47)\n\nwhere, n is a positive integer greater than 2 and c is a constant. Such a form of energy density is physically reasonable as it vanishes smoothly to zero when one proceeds towards infinity. Then Equation (46) becomes,\n\n(48)\n\nOne can easily solve it to obtain\n\n(49)\n\nWe can restrict ourselves to polynomial form by choosing. This leads to\n\n(50)\n\nWe further proceed by making a particular choice of. Thus, we get\n\n(51)\n\nThen, the metric now appears as\n\n(52)\n\nBut, this solution shows two essential singularity where Kretschmann tensor diverges. They are and. Thus there are two naked singularities in this solution. We next consider a fictitious matter whose density profile is negative by replacing the constant c with. This sends the second singularity at to a negative value of r, thus out of the considered spacetime. Choosing constant, the metric now appears as\n\n(53)\n\nWe find that the (tt) component of the metric for small values of r is\n\n(54)\n\nThus, this solution has a horizon near, which also hides the essential singularity residing at. We expect this solution to be of extremal type as the same is true for all such previous generalizations of Majumdar-Papapetrou metrics.\n\n5.2. Sine Gordon Solution\n\nOne can get here an equation of Sine Gordon type by a different choice of matter density. First, we choose a different radial coordinate\n\nThen the Equation (46) becomes\n\n(55)\n\nNext, we choose matter density profile to be of form\n\n(56)\n\nThe above equation then reduces to a Sine Gordon equation,\n\n(57)\n\n(58)\n\nThe resultant metric looks like\n\n(59)\n\nThe range for coordinate r is restricted from to. This ensures that is finite everywhere except at one end, where one encounters a singularity.\n\n6. Asymptotically Anti de Sitter Space\n\nWe further explore the properties of the Ricci flat 4 dimensional metric obtained in section (3). We consider a massive (but not back-reacting) field living in this space. Recent advances in gauge/gravity correspondence allows us to approximate its two point correlations for strong self coupling of the field. The correspondence states that the generating functional of the correlations of a 4 dimensional strongly coupled field theory defined on the boundary of a 5 dimensional asymptotically anti de Sitter space is same as the partition function of the latter theory of gravity. The boundary values of fields in the gravitational theory are coupled to sources of appropriate fields in field theory - . 
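For orientation, the standard statements being used here (general background, not a reconstruction of the paper's missing equations) are the equality of the two generating functionals and, for boundary operators of large conformal dimension $\Delta \simeq m\,\ell_{\mathrm{AdS}}$, the geodesic approximation of the two-point function:

$$ \big\langle \mathcal{O}(x)\,\mathcal{O}(x') \big\rangle \;\sim\; e^{-\Delta\, L_{\mathrm{reg}}(x,x')} , $$

where $L_{\mathrm{reg}}$ is the regularized length of the bulk geodesic connecting the two boundary insertion points. This is the approximation invoked below when the correlator is traded for a geodesic length.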
The method to calculate correlator in our 4 dimensional space will be to embed it in an anti de Sitter (AdS) space with one higher dimension such that it behaves as its boundary. The two point correlator of the massive field can be related to geodesics traveling in higher dimensional AdS space. We expect that these correlators will capture the effect of singularity in the same way as cosmological singularities are demonstrated to show such behavior in similar setups - . The embedding for our case is\n\n(60)\n\nWe note that the above metric is an Einstein metric and its curvature invariants are\n\nThus, the metric shows a singularity at. Another interesting feature of this embedding is the limit. In this limit, the metric becomes\n\n(61)\n\nwith the following curvature properties.\n\n(62)\n\nThis is Euclidean anti de Sitter space and it can be verified that the metric in Equation (61) is the same as AdS in Poincare coordinates after a change of variables. [Take]. The metric also possesses a scaling symmetry,\n\n(63)\n\nSince the metric asymptotically becomes Euclidean anti de Sitter, it can be conjectured along the lines of AdS/CFT duality that it can be explored to learn about an Euclidean Yang Mills theory living in an anisotropic background in its strong coupling limit. Simplest quantity that can be calculated to probe the field theory is the two point correlation function for operators with high conformal dimensions. This correlation can be approximately given in terms of the length of the geodesics with endpoints on the boundary .\n\nWe next calculate the geodesics connecting two endpoints on the boundary at different u coordinates. In this section, will denote the affine parameter along the geodesic. The geodesic equations can be obtained by extremizing a Lagrangian\n\nwhere denotes the expression\n\nHere, denotes the rate of change of y along the affine parameter. The above Lagrangian is independent of y and z, so it results in two constants of motion as follows\n\n(64)\n\nIf we write this expression explicitly in terms of constant of motion X, we get\n\nSimilarly, one can obtain another constant of motion Y as\n\nSimplifying the above two equations leads to\n\n(65)\n\nIf we denote the expression\n\nthen the remaining geodesic equations can be written as\n\n(66)\n\nWith a little manipulation, we can find a conserved quantity along the geodesic as\n\n(67)\n\nThis constraint considerably simplifies the dynamical equation for variable t, which can now be solved to obtain\n\n(68)\n\nwhere D is another integration constant. We note that t becomes zero for both. Thus, as the affine parameter varies from to, the geodesic drops from boundary into bulk and again comes back to boundary. We also notice that the relation also sets the maximum depth the geodesic from the boundary can reach.\n\nThe maximum value of the scale invariant quantity along the geodesic is We next calculate the\n\nlength of the geodesic. To deal with divergences, we put a cutoff for the affine parameter. We choose it to vary between to, where is chosen to be very small. This cut off is related to the ultra violet cutoff along the radial direction. According to Equation (63), if the cutoff along t is given in terms of a scale invariant quantity as for a very small real number, then it is related to according to Equation (68) as\n\n. It can be inverted for very small to give. The geodesic reaches boundary at both ends as approaches 0. 
We then calculate the length of the geodesic hanging in the bulk as\n\n(69)\n\nUsing the identity Equation (67), we simplify it to\n\n(70)\n\nThe latter part can be identified as the divergent contribution due to asymptotic AdS geometry and can be dropped during regularization. We next calculate the length of the geodesic on the boundary by using the boundary metric.\n\nThus, can be approximated to be, i.e. twice the radial distance of the turning point from the boundary. We calculate\n\n(71)\n\nThus, the two point correlators for higher dimensional operators living on the boundary metric is expected to show a behavior \n\n(72)\n\nHere the mass of the field (m) is also approximated to be same as conformal dimension () for large values. The term above will be most dominating term of the correlator since higher mass will suppress the contribution of quantum fluctuations. The above result shows that the behaviour of the two point correlator is same as the case of a flat space for smaller values of the separation of the points i.e.. We can neglect the effect of the second factor in such cases. The result is expected for a conformal field theory on a conformally flat spacetime. As the separation increases i.e. approaches 1, the second factor starts contributing significantly and the correlation function vanishes as a power law. We thus feel the effect of background anisotropic space for large separations.\n\n7. Conclusion\n\nWe have constructed an anisotropic 4 dimensional asymptotically flat Riemannian metric which is also Ricci flat. We later incorporated it in a 5 dimensional space time along with matter density and electromagnetic flux using a method similar to Majumdar-Papapetrou way of constructing extremal solutions. With certain choices of matter density profiles, we were able to construct explicit solutions. The singularity problem seems to be addressed for a case of a fictitious choice of matter density for which we arrived at a solution with a horizon hiding the singularity. Using our method, interesting anisotropic solutions in higher dimensions can be constructed by further investigating different types of Lagrangians for their properties. Given the importance of anisotropic Bianchi VII space-times for their relations with condensed matter systems, we proceeded to study their properties. We embedded the metric in 5 dimensions analogous to the flat space embedding in anti de Sitter space manifested in the Poincare metric. Apart from the 3 directions along the 3 non commutative Killing vectors, we have 2 other spatial directions; one, the u-direction along which the metric coefficients vary. The other is the radial direction towards the bulk of the AdS space. The subspace for a fixed u and its neighbourhood can be considered locally AdS where the bulk metric encodes the properties of the field theory lying on its boundary, which is itself anisotropic. We examined the effect of anisotropy on the properties of the quantum field theory on such background by approximating the two point correlator of two operators with high conformal dimensions. For small separation between the operator positions, we saw a dependence similar to a conformal theory living on the flat space. The effect of curvature of the background is not reflected in this case. However for large separations, the two point correlator vanishes as a power law of the separation between the operator positions. 
We expect that the large distance behavior is unique to our case and encodes the effect of singularity into it.\n\nAcknowledgements\n\nThe authors acknowledge the financial support of DST grant number SR/FTP/PS-149/2011.\n\nCite this paper\n\nManavendraMahato,Ajay PratapSingh, (2016) Novel Bianchi VII Space Times and Their Properties. Journal of Modern Physics,07,445-457. doi: 10.4236/jmp.2016.75046\n\nReferences\n\n1. 1. Iizuka, N., Kachru, S., Kundu, N., Narayan, P., Sircar, N. and Trivedi, S.P. (2012) Journal of High Energy Physics, 1207, 193.\nhttp://dx.doi.org/10.1007/JHEP07(2012)193\n\n2. 2. Iizuka, N., Kachru, S., Kundu, N., Narayan, P., Sircar, N., Trivedi, S.P. and Wang, H. (2013) Journal of High Energy Physics, 1303, 126.\nhttp://dx.doi.org/10.1007/JHEP03(2013)126\n\n3. 3. Kachru, S., Kundu, N., Saha, A., Samanta, R. and Trivedi, S.P. (2014) Journal of High Energy Physics, 1403, 074.\n\n4. 4. Kachru, S., Liu, X. and Mulligan, M. (2008) Physical Review D, 78, Article ID: 106005.\nhttp://dx.doi.org/10.1103/PhysRevD.78.106005\n\n5. 5. Son, D.T. (2008) Physical Review D, 78, Article ID: 046003.\nhttp://dx.doi.org/10.1103/PhysRevD.78.046003\n\n6. 6. Balasubramanian, K. and McGreevy, J. (2008) Physical Review Letters, 101, Article ID: 061601.\nhttp://dx.doi.org/10.1103/PhysRevLett.101.061601\n\n7. 7. Danielsson, U.H. and Thorlacius, L. (2009) Journal of High Energy Physics, 0903, 070.\n\n8. 8. Hartnoll, S.A., Polchinski, J., Silverstein, E. and Tong, D. (2010) Journal of High Energy Physics, 1004, 120.\nhttp://dx.doi.org/10.1007/JHEP04(2010)120\n\n9. 9. Balasubramanian, K. and Narayan, K. (2010) Journal of High Energy Physics, 1008, 014.\n\n10. 10. Donos, A. and Gauntlett, J.P. (2010) Journal of High Energy Physics, 1012, 002.\n\n11. 11. Singh, H. (2010) Journal of High Energy Physics, 1012, 061.\n\n12. 12. Gregory, R., Parameswaran, S.L., Tasinato, G. and Zavala, I. (2010) Journal of High Energy Physics, 1012, 047.\n\n13. 13. Cassani, D. and Faedo, A.F. (2011) Journal of High Energy Physics, 1105, 013.\n\n14. 14. Lu, H., Pang, Y., Pope, C.N. and Vazquez-Poritz, J.F. (2012) Physical Review D, 86, Article ID: 044011.\nhttp://dx.doi.org/10.1103/PhysRevD.86.044011\n\n15. 15. Shu, F.W., Lin, K., Wang, A. and Wu, Q. (2014) Journal of High Energy Physics, 1404, 056.\n\n16. 16. Nakamura, S., Ooguri, H. and Park, C.S. (2010) Physical Review D, 81, Article ID: 044018.\nhttp://dx.doi.org/10.1103/PhysRevD.81.044018\n\n17. 17. Ooguri, H. and Park, C.S. (2010) Physical Review D, 82, Article ID: 126001.\nhttp://dx.doi.org/10.1103/PhysRevD.82.126001\n\n18. 18. Donos, A. and Gauntlett, J.P. (2012) Physical Review Letters, 108, Article ID: 211601.\nhttp://dx.doi.org/10.1103/PhysRevLett.108.211601\n\n19. 19. Donos, A. and Gauntlett, J.P. (2012) Physical Review D, 86, Article ID: 064010.\nhttp://dx.doi.org/10.1103/PhysRevD.86.064010\n\n20. 20. Myers, R.C. (1987) Physical Review D, 35, Article ID: 455.\nhttp://dx.doi.org/10.1103/PhysRevD.35.455\n\n21. 21. Gibbons, G.W. and Warnick, C.M. (2009) Physical Review D, 79, Article ID: 064031.\nhttp://dx.doi.org/10.1103/PhysRevD.79.064031\n\n22. 22. Varela, V. (2003) General Relativity and Gravitation, 35, Article ID: 1815.\nhttp://dx.doi.org/10.1023/A:1026014114308\n\n23. 23. Frolov, V.P. and Zelnikov, A. (2012) Physical Review D, 85, Article ID: 064032.\nhttp://dx.doi.org/10.1103/PhysRevD.85.064032\n\n24. 24. Hertog, T. and Horowitz, G.T. (2004) Journal of High Energy Physics, 0407, 073\nhttp://dx.doi.org/10.1088/1126-6708/2004/07/073\n\n25. 25. Hertog, T. 
and Horowitz, G.T. (2005) Journal of High Energy Physics, 0504, 005\n\n26. 26. Awad, A., Das, S.R., Nampuri, S., Narayan, K. and Trivedi, S.P. (2009) Physical Review D, 79, Article ID: 046004.\nhttp://dx.doi.org/10.1103/PhysRevD.79.046004\n\n27. 27. Fischetti, S., Kastor, D. and Traschen, J. (2014) Classical and Quantum Gravity, 31, Article ID: 235007\nhttp://dx.doi.org/10.1088/0264-9381/31/23/235007\n\n28. 28. Engelhardt, N., Hertog, T. and Horowitz, G.T. (2014) Physical Review Letters, 113, Article ID: 121602\nhttp://dx.doi.org/10.1103/PhysRevLett.113.121602\n\n29. 29. Engelhardt, N., Hertog, T. and Horowitz, G.T. (2015) Journal of High Energy Physics, 1507, 044.\nhttp://dx.doi.org/10.1007/JHEP07(2015)044\n\n30. 30. Banerjee, S., Bhowmick, S., Chatterjee, S. and Mukherji, S. (2015) Journal of High Energy Physics, 1506, 043.\nhttp://dx.doi.org/10.1007/JHEP06(2015)043\n\n31. 31. Landau, L.D. and Lifshitz, E.M. (1975) The Classical Theory of Fields. Vol. 2, 4th Edition, Butterworth-Heinemann, Oxford.\n\n32. 32. Maldacena, J.M. (1999) International Journal of Theoretical Physics, 38, 1113. [(1998) Advances in Theoretical and Mathematical Physics, 2, 231].\nhttp://dx.doi.org/10.1023/A:1026654312961\n\n33. 33. Witten, E. (1998) Advances in Theoretical and Mathematical Physics, 2, 253.\n\n34. 34. Gubser, S.S., Klebanov, I.R. and Polyakov, A.M. (1998) Physics Letters B, 428, 105.\nhttp://dx.doi.org/10.1016/S0370-2693(98)00377-3\n\n35. 35. Freedman, D.Z., Mathur, S.D., Matusis, A. and Rastelli, L. (1999) Nuclear Physics B, 546, 96.\nhttp://dx.doi.org/10.1016/S0550-3213(99)00053-X\n\n1For positive sign in Equation (20), one needs for sensible solutions. Since we will choose parameter u to vary from 0 to infinity, we neglect this case. For, one can get solutions in terms of tangent function instead of tanh, but it leads to finite metric only for a restricted range of u." ]
[ null, "http://html.scirp.org/file/5-7502601x1.png", null, "http://html.scirp.org/file/9-2500537x3.png", null, "http://html.scirp.org/file/9-2500537x2.png", null, "http://html.scirp.org/file/5-7502601x4.png", null, "http://html.scirp.org/file/5-7502601x5.png", null, "http://html.scirp.org/file/5-7502601x6.png", null, "http://html.scirp.org/file/5-7502601x7.png", null, "http://html.scirp.org/file/5-7502601x8.png", null, "http://html.scirp.org/file/5-7502601x9.png", null, "http://html.scirp.org/file/5-7502601x10.png", null, "http://html.scirp.org/file/5-7502601x11.png", null, "http://html.scirp.org/file/5-7502601x12.png", null, "http://html.scirp.org/file/5-7502601x13.png", null, "http://html.scirp.org/file/5-7502601x14.png", null, "http://html.scirp.org/file/5-7502601x15.png", null, "http://html.scirp.org/file/5-7502601x16.png", null, "http://html.scirp.org/file/5-7502601x17.png", null, "http://html.scirp.org/file/5-7502601x18.png", null, "http://html.scirp.org/file/5-7502601x19.png", null, "http://html.scirp.org/file/5-7502601x20.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8729922,"math_prob":0.94130194,"size":29013,"snap":"2020-10-2020-16","text_gpt3_token_len":7113,"char_repetition_ratio":0.13261402,"word_repetition_ratio":0.02873313,"special_character_ratio":0.2488195,"punctuation_ratio":0.1606802,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9879578,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,4,null,null,null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T03:38:02Z\",\"WARC-Record-ID\":\"<urn:uuid:75b3bafe-e798-47e9-bdf2-d07df65eb070>\",\"Content-Length\":\"73769\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48e1c141-67db-4a38-97d6-86aef596518a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7626d3b-e83a-4899-9594-7daacfbe8f6f>\",\"WARC-IP-Address\":\"104.149.186.66\",\"WARC-Target-URI\":\"https://file.scirp.org/Html/5-7502601_64071.htm\",\"WARC-Payload-Digest\":\"sha1:P22NP2JYHHYB5I4U7WZQASD2IIGTRYFR\",\"WARC-Block-Digest\":\"sha1:6FOHGV4PMJJYMIXZK2RHLDNIU5KSY7TZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146004.9_warc_CC-MAIN-20200225014941-20200225044941-00543.warc.gz\"}"}
https://solutionzip.com/downloads/3-python-functions/
[ "# 3 Python Functions\n\nExercise 1:\nCreate a function that accepts a single array as an argument. Given an array of integers, x, sort x and split the integers into three smaller arrays of equal length. If the length of x is not evenly divisible by three, increase the size of the smaller arrays by one starting from the first array. The function should return an array of arrays. Example: Input = [2,1,3,4,7,5,9,6,8,13,12,11,10,0,15,16,14] Output = [ [0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16] ]\n\nExercise 2:\nWrite a function that find the frequency occurrence of a letter in a sentence. The function should return an integer. (Do not use the str.count() default python function) Examples: find_frequency(“t”, “this is a test”) ? 3 find_frequency(“y”, “this is a test”) ? 0\n\nExercise 3:\nWrite a function that identifies if an integer is a power of 2. The function should return a boolean. Explain why your function will work for any integer inputs that it receives. Examples: is_power_two(6) ? false is_power_two(16) ? true\n\n×" ]
https://www.colorhexa.com/4b5066
[ "# #4b5066 Color Information\n\nIn a RGB color space, hex #4b5066 is composed of 29.4% red, 31.4% green and 40% blue. Whereas in a CMYK color space, it is composed of 26.5% cyan, 21.6% magenta, 0% yellow and 60% black. It has a hue angle of 228.9 degrees, a saturation of 15.3% and a lightness of 34.7%. #4b5066 color hex could be obtained by blending #96a0cc with #000000. Closest websafe color is: #336666.\n\n• R 29\n• G 31\n• B 40\nRGB color chart\n• C 26\n• M 22\n• Y 0\n• K 60\nCMYK color chart\n\n#4b5066 color description : Very dark grayish blue.\n\n# #4b5066 Color Conversion\n\nThe hexadecimal color #4b5066 has RGB values of R:75, G:80, B:102 and CMYK values of C:0.26, M:0.22, Y:0, K:0.6. Its decimal value is 4935782.\n\nHex triplet RGB Decimal 4b5066 `#4b5066` 75, 80, 102 `rgb(75,80,102)` 29.4, 31.4, 40 `rgb(29.4%,31.4%,40%)` 26, 22, 0, 60 228.9°, 15.3, 34.7 `hsl(228.9,15.3%,34.7%)` 228.9°, 26.5, 40 336666 `#336666`\nCIE-LAB 34.38, 3.49, -13.407 8.168, 8.192, 13.721 0.272, 0.272, 8.192 34.38, 13.853, 284.593 34.38, -3.63, -17.968 28.622, 0.851, -8.386 01001011, 01010000, 01100110\n\n# Color Schemes with #4b5066\n\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #66614b\n``#66614b` `rgb(102,97,75)``\nComplementary Color\n• #4b5e66\n``#4b5e66` `rgb(75,94,102)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #544b66\n``#544b66` `rgb(84,75,102)``\nAnalogous Color\n• #5e664b\n``#5e664b` `rgb(94,102,75)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #66544b\n``#66544b` `rgb(102,84,75)``\nSplit Complementary Color\n• #50664b\n``#50664b` `rgb(80,102,75)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #664b50\n``#664b50` `rgb(102,75,80)``\n• #4b6661\n``#4b6661` `rgb(75,102,97)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #664b50\n``#664b50` `rgb(102,75,80)``\n• #66614b\n``#66614b` `rgb(102,97,75)``\n• #2b2d3a\n``#2b2d3a` `rgb(43,45,58)``\n• #353949\n``#353949` `rgb(53,57,73)``\n• #404457\n``#404457` `rgb(64,68,87)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #565c75\n``#565c75` `rgb(86,92,117)``\n• #616783\n``#616783` `rgb(97,103,131)``\n• #6b7392\n``#6b7392` `rgb(107,115,146)``\nMonochromatic Color\n\n# Alternatives to #4b5066\n\nBelow, you can see some colors close to #4b5066. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #4b5766\n``#4b5766` `rgb(75,87,102)``\n• #4b5566\n``#4b5566` `rgb(75,85,102)``\n• #4b5266\n``#4b5266` `rgb(75,82,102)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #4b4e66\n``#4b4e66` `rgb(75,78,102)``\n• #4b4c66\n``#4b4c66` `rgb(75,76,102)``\n• #4d4b66\n``#4d4b66` `rgb(77,75,102)``\nSimilar Colors\n\n# #4b5066 Preview\n\nThis text has a font color of #4b5066.\n\n``<span style=\"color:#4b5066;\">Text here</span>``\n#4b5066 background color\n\nThis paragraph has a background color of #4b5066.\n\n``<p style=\"background-color:#4b5066;\">Content here</p>``\n#4b5066 border color\n\nThis element has a border color of #4b5066.\n\n``<div style=\"border:1px solid #4b5066;\">Content here</div>``\nCSS codes\n``.text {color:#4b5066;}``\n``.background {background-color:#4b5066;}``\n``.border {border:1px solid #4b5066;}``\n\n# Shades and Tints of #4b5066\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000000 is the darkest color, while #f4f5f7 is the lightest one.\n\n• #000000\n``#000000` `rgb(0,0,0)``\n• #09090c\n``#09090c` `rgb(9,9,12)``\n• #111217\n``#111217` `rgb(17,18,23)``\n• #191b22\n``#191b22` `rgb(25,27,34)``\n• #21242d\n``#21242d` `rgb(33,36,45)``\n• #2a2d39\n``#2a2d39` `rgb(42,45,57)``\n• #323544\n``#323544` `rgb(50,53,68)``\n• #3a3e4f\n``#3a3e4f` `rgb(58,62,79)``\n• #43475b\n``#43475b` `rgb(67,71,91)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #535971\n``#535971` `rgb(83,89,113)``\n• #5c627d\n``#5c627d` `rgb(92,98,125)``\n• #646b88\n``#646b88` `rgb(100,107,136)``\n• #6c7493\n``#6c7493` `rgb(108,116,147)``\n• #787e9b\n``#787e9b` `rgb(120,126,155)``\n• #8389a4\n``#8389a4` `rgb(131,137,164)``\n• #8e94ac\n``#8e94ac` `rgb(142,148,172)``\n• #9a9fb4\n``#9a9fb4` `rgb(154,159,180)``\n• #a5a9bd\n``#a5a9bd` `rgb(165,169,189)``\n• #b0b4c5\n``#b0b4c5` `rgb(176,180,197)``\n• #bbbfcd\n``#bbbfcd` `rgb(187,191,205)``\n``#c7cad6` `rgb(199,202,214)``\n• #d2d4de\n``#d2d4de` `rgb(210,212,222)``\n• #dddfe6\n``#dddfe6` `rgb(221,223,230)``\n• #e9eaef\n``#e9eaef` `rgb(233,234,239)``\n• #f4f5f7\n``#f4f5f7` `rgb(244,245,247)``\nTint Color Variation\n\n# Tones of #4b5066\n\nA tone is produced by adding gray to any pure hue. In this case, #52545f is the less saturated color, while #0021b1 is the most saturated one.\n\n• #52545f\n``#52545f` `rgb(82,84,95)``\n• #4b5066\n``#4b5066` `rgb(75,80,102)``\n• #444c6d\n``#444c6d` `rgb(68,76,109)``\n• #3d4774\n``#3d4774` `rgb(61,71,116)``\n• #37437a\n``#37437a` `rgb(55,67,122)``\n• #303f81\n``#303f81` `rgb(48,63,129)``\n• #293b88\n``#293b88` `rgb(41,59,136)``\n• #22368f\n``#22368f` `rgb(34,54,143)``\n• #1b3296\n``#1b3296` `rgb(27,50,150)``\n• #152e9c\n``#152e9c` `rgb(21,46,156)``\n• #0e29a3\n``#0e29a3` `rgb(14,41,163)``\n• #0725aa\n``#0725aa` `rgb(7,37,170)``\n• #0021b1\n``#0021b1` `rgb(0,33,177)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #4b5066 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
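As an illustration of the arithmetic behind the tables above (a minimal sketch, not code from the site; the helper name is mine), the hex triplet can be unpacked and checked against the listed RGB, percentage and HSL values:

```python
import colorsys

def hex_to_rgb(hex_code):
    """Split a #rrggbb string into an (r, g, b) tuple of 0-255 integers."""
    hex_code = hex_code.lstrip("#")
    return tuple(int(hex_code[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb("#4b5066")
print(r, g, b)                                        # 75 80 102
print([round(100 * c / 255, 1) for c in (r, g, b)])   # [29.4, 31.4, 40.0]

h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # 228.9 15.3 34.7
```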
https://math.stackexchange.com/questions/160306/weak-and-pointwise-convergence-in-a-l2-space
[ "# Weak and pointwise convergence in a $L^2$ space\n\nLet $I$ be a measured space (typically an interval of $\\Bbb R$ with the Lebesgue measure), and let $(f_n)_n$ a sequence of function of $L^2(I)$.\n\nAssume that the sequence $(f_n)$ converge pointwise and weakly. How to prove that the pointwise limit and the weak limit are the same ?\n\nHere's a functional analytic approach:\n\nA weakly convergent sequence in a Hilbert space $H$ is bounded, and by the Banach-Saks theorem has a subsequence whose Cesàro averages converge strongly in $H$ to the same limit. Almost sure convergence is preserved by taking subsequences and Cesàro averages. So, without loss of generality you may assume that your weakly convergent sequence is actually strongly convergent.\n\nBoth strong $L^2$ convergence and almost sure convergence imply convergence locally in measure, so you only need to show that such limits are unique, which is easy.\n\n• I like that, thanks ! – Lierre Jun 19 '12 at 13:21\n• I'm glad to help. – user940 Jun 19 '12 at 13:22\n\nRead the proof on page 266 of this book\n\n• Thanks ! Is the requirement that $E$ has finite measure necessary ? – Lierre Jun 19 '12 at 12:34\n• Good question. I don't think so, but Egorov's theorem does not work without this assumption. – Siminore Jun 19 '12 at 12:36\n• After some thoughts, no it's not necessary, at least if $E$ is $\\sigma$-finite, because then the result in the book can be applied to restriction to every subset of a covering of $E$ by set with finite measure. – Lierre Jun 19 '12 at 12:39\n• I suspect that everything works provided continuous functions with compact support are dense in $L^2$. – Siminore Jun 19 '12 at 12:59\n\nFollowing Byron Schmuland we can say:\n\n1) By the Banach-Saks theorem, a weakly convergent sequence, in a Banach space, has a subsequence whose Cesàro averages converge strongly to the same limit.\n\n2) In a $L^p$ space, strong convergence implyes pointwise convergence a.e. for a subsequence;\n\n3) Pointwise convergence a.e. is preserved by taking subsequences and Cesàro averages.\n\n4) For Pointwise convergence a.e. we have unicity of limits.\n\nSo we can conclude that, in $L^p$, weak convergence to $f$ and pointwise convergence to $g$, imply $f=g$.\n\nAssume that $f_n \\rightharpoonup g$ and $f_n \\rightarrow f$ a.e. Then \\begin{equation} | \\int_{I} (f-g)| \\le \\int_{I} |f_n - f| + \\int_{I} |f_n - g| \\end{equation} As $f_n \\rightharpoonup g$, e $1 \\chi \\{I\\} \\in L^{2}(I)$ we have \\begin{equation} \\int_{I} |f_n - g| \\rightarrow 0. \\end{equation} Now, notice that \\begin{equation} |f_n -f| \\le |f_n| + |f| < 2|f| + \\varepsilon. \\end{equation} for $n>>1$. Then by Dominated convergence Theorem \\begin{equation} \\int_{I} |f_n - f| \\rightarrow 0. \\end{equation} Hence $f=g$ a.e.\n\n• As far as I understand, there is something wrong. How can you state that $f_n \\to g$ strongly on $I$? – Siminore Jun 19 '12 at 12:56\n• You seem right. $\\lim_{n} \\int_{I} f_n \\chi \\{I\\} \\rightarrow \\int_{I} f \\chi \\{I\\}.$ Then, $\\lim_{n} \\int_{I} (f_n -g) \\le 0$. But not necessarily $\\lim_{n} \\int_{I} (g - f_n ) \\le 0$. – user29999 Jun 19 '12 at 13:08\n• I think that the flaw in your argument is that $f_n \\rightharpoonup g$, and $1 \\chi \\{I\\} \\in L^{2}(I)$ implies $\\int_I(f_n-g)\\to0$ but not $\\int_I|f_n-g|\\to0$, since the latter is not the pairing between $\\chi$ and $f_n-g$... – bartgol Jun 20 '12 at 15:11" ]
https://wiki.haskell.org/index.php?title=99_questions/Solutions/81&diff=prev&oldid=43814
[ "# Difference between revisions of \"99 questions/Solutions/81\"\n\n(**) Path from one node to another one\n\nWrite a function that, given two nodes a and b in a graph, returns all the acyclic paths from a to b.\n\n```import List (elem)\n\npaths :: Eq a => a -> a -> [(a,a)] -> [[a]]\npaths a b g = paths1 a b g []\n\npaths1 :: Eq a => a -> a -> [(a,a)] -> [a] -> [[a]]\npaths1 a b g current = paths2 a b g current [ y | (x,y) <- g, x == a ]\n\npaths2 :: Eq a => a -> a -> [(a,a)] -> [a] -> [a] -> [[a]]\npaths2 a b g current []\t| a == b = [current++[b]]\n| otherwise = []\npaths2 a b g current (x:xs) | a == b = [current++[b]]\n| elem a current = []\n| otherwise = (paths1 x b g (current++[a])) ++ (paths2 a b g current xs)\n```\n\nThis solution uses a representation of a (directed) graph as a list of arcs (a,b).\n\nHere is another implementation using List's monadic behavior\n\n```import Data.List (partition)\n\npathsImpl :: Eq a => [a] -> a -> a -> [(a, a)] -> [[a]]\npathsImpl trail src dest clauses\n| src == dest = [src:trail]\n| otherwise = do\nlet (nexts, rest) = partition ((==src) . fst) clauses\nnext <- nexts\npathsImpl (src:trail) (snd next) dest rest\n\npaths :: Eq a => a -> a -> [(a, a)] -> [[a]]\npaths src dest clauses = map reverse (pathsImpl [] src dest clauses)\n```\n\nHere is another recursive implementation\n\n```paths :: Eq a =>a -> a -> [(a,a)] -> [[a]]\npaths source sink edges\n| source == sink = [[sink]]\n| otherwise = [\n[source] ++ path | edge<-edges, (fst edge) == source,\npath<-(paths (snd edge) sink [e|e<-edges, e/=edge])\n];\n```" ]
https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/1-6-significant-figures/
[ "## 1.6 Significant Figures\n\n### Learning Objectives\n\nBy the end of this section, you will be able to:\n\n• Determine the correct number of significant figures for the result of a computation.\n• Describe the relationship between the concepts of accuracy, precision, uncertainty, and discrepancy.\n• Calculate the percent uncertainty of a measurement, given its value and its uncertainty.\n• Determine the uncertainty of the result of a computation involving quantities with given uncertainties.\n\n(Figure) shows two instruments used to measure the mass of an object. The digital scale has mostly replaced the double-pan balance in physics labs because it gives more accurate and precise measurements. But what exactly do we mean by accurate and precise? Aren’t they the same thing? In this section we examine in detail the process of making and reporting a measurement.", null, "Figure 1.11 (a) A double-pan mechanical balance is used to compare different masses. Usually an object with unknown mass is placed in one pan and objects of known mass are placed in the other pan. When the bar that connects the two pans is horizontal, then the masses in both pans are equal. The “known masses” are typically metal cylinders of standard mass such as 1 g, 10 g, and 100 g. (b) Many mechanical balances, such as double-pan balances, have been replaced by digital scales, which can typically measure the mass of an object more precisely. A mechanical balance may read only the mass of an object to the nearest tenth of a gram, but many digital scales can measure the mass of an object up to the nearest thousandth of a gram. (credit a: modification of work by Serge Melki; credit b: modification of work by Karel Jakubec)\n\n### Accuracy and Precision of a Measurement\n\nScience is based on observation and experiment—that is, on measurements. Accuracy is how close a measurement is to the accepted reference value for that measurement. For example, let’s say we want to measure the length of standard printer paper. The packaging in which we purchased the paper states that it is 11.0 in. long. We then measure the length of the paper three times and obtain the following measurements: 11.1 in., 11.2 in., and 10.9 in. These measurements are quite accurate because they are very close to the reference value of 11.0 in. In contrast, if we had obtained a measurement of 12 in., our measurement would not be very accurate. Notice that the concept of accuracy requires that an accepted reference value be given.\n\nThe precision of measurements refers to how close the agreement is between repeated independent measurements (which are repeated under the same conditions). Consider the example of the paper measurements. The precision of the measurements refers to the spread of the measured values. One way to analyze the precision of the measurements is to determine the range, or difference, between the lowest and the highest measured values. In this case, the lowest value was 10.9 in. and the highest value was 11.2 in. Thus, the measured values deviated from each other by, at most, 0.3 in. These measurements were relatively precise because they did not vary too much in value. However, if the measured values had been 10.9 in., 11.1 in., and 11.9 in., then the measurements would not be very precise because there would be significant variation from one measurement to another. 
Notice that the concept of precision depends only on the actual measurements acquired and does not depend on an accepted reference value.\n\nThe measurements in the paper example are both accurate and precise, but in some cases, measurements are accurate but not precise, or they are precise but not accurate. Let’s consider an example of a GPS attempting to locate the position of a restaurant in a city. Think of the restaurant location as existing at the center of a bull’s-eye target and think of each GPS attempt to locate the restaurant as a black dot. In (Figure)(a), we see the GPS measurements are spread out far apart from each other, but they are all relatively close to the actual location of the restaurant at the center of the target. This indicates a low-precision, high-accuracy measuring system. However, in (Figure)(b), the GPS measurements are concentrated quite closely to one another, but they are far away from the target location. This indicates a high-precision, low-accuracy measuring system.", null, "Figure 1.12 A GPS attempts to locate a restaurant at the center of the bull’s-eye. The black dots represent each attempt to pinpoint the location of the restaurant. (a) The dots are spread out quite far apart from one another, indicating low precision, but they are each rather close to the actual location of the restaurant, indicating high accuracy. (b) The dots are concentrated rather closely to one another, indicating high precision, but they are rather far away from the actual location of the restaurant, indicating low accuracy. (credit a and credit b: modification of works by Dark Evil)\n\n### Accuracy, Precision, Uncertainty, and Discrepancy\n\nThe precision of a measuring system is related to the uncertainty in the measurements whereas the accuracy is related to the discrepancy from the accepted reference value. Uncertainty is a quantitative measure of how much your measured values deviate from one another. There are many different methods of calculating uncertainty, each of which is appropriate to different situations. Some examples include taking the range (that is, the biggest less the smallest) or finding the standard deviation of the measurements. Discrepancy (or “measurement error”) is the difference between the measured value and a given standard or expected value. If the measurements are not very precise, then the uncertainty of the values is high. If the measurements are not very accurate, then the discrepancy of the values is high.\n\nRecall our example of measuring paper length; we obtained measurements of 11.1 in., 11.2 in., and 10.9 in., and the accepted value was 11.0 in. We might average the three measurements to say our best guess is 11.1 in.; in this case, our discrepancy is 11.1 – 11.0 = 0.1 in., which provides a quantitative measure of accuracy. We might calculate the uncertainty in our best guess by using the range of our measured values: 0.3 in. Then we would say the length of the paper is 11.1 in. plus or minus 0.3 in. The uncertainty in a measurement, A, is often denoted as δA (read “delta A”), so the measurement result would be recorded as A ± δA. Returning to our paper example, the measured length of the paper could be expressed as 11.1 ± 0.3 in. Since the discrepancy of 0.1 in. 
is less than the uncertainty of 0.3 in., we might say the measured value agrees with the accepted reference value to within experimental uncertainty.\n\nSome factors that contribute to uncertainty in a measurement include the following:\n\n• Limitations of the measuring device\n• The skill of the person taking the measurement\n• Irregularities in the object being measured\n• Any other factors that affect the outcome (highly dependent on the situation)\n\nIn our example, such factors contributing to the uncertainty could be the smallest division on the ruler is 1/16 in., the person using the ruler has bad eyesight, the ruler is worn down on one end, or one side of the paper is slightly longer than the other. At any rate, the uncertainty in a measurement must be calculated to quantify its precision. If a reference value is known, it makes sense to calculate the discrepancy as well to quantify its accuracy.\n\n#### Percent uncertainty\n\nAnother method of expressing uncertainty is as a percent of the measured value. If a measurement A is expressed with uncertainty δA, the percent uncertainty is defined as\n\n$\\text{Percent uncertainty}=\\frac{\\delta A}{A}\\,×\\,100%.$\n\n### Example\n\n#### Calculating Percent Uncertainty: A Bag of Apples\n\nA grocery store sells 5-lb bags of apples. Let’s say we purchase four bags during the course of a month and weigh the bags each time. We obtain the following measurements:\n\n• Week 1 weight: 4.8 lb\n• Week 2 weight: 5.3 lb\n• Week 3 weight: 4.9 lb\n• Week 4 weight: 5.4 lb\n\nWe then determine the average weight of the 5-lb bag of apples is 5.1 ± 0.2 lb. What is the percent uncertainty of the bag’s weight?\n\n#### Strategy\n\nFirst, observe that the average value of the bag’s weight, A, is 5.1 lb. The uncertainty in this value, $\\delta A,$ is 0.2 lb. We can use the following equation to determine the percent uncertainty of the weight:\n\n$\\text{Percent uncertainty}=\\frac{\\delta A}{A}\\,×\\,100%.$\n\n#### Solution\n\nSubstitute the values into the equation:\n\n$\\text{Percent uncertainty}=\\frac{\\delta A}{A}\\,×\\,100%=\\frac{0.2\\,\\text{lb}}{5.1\\,\\text{lb}}\\,×\\,100%=3.9%\\approx 4%.$\n\nSignificanceWe can conclude the average weight of a bag of apples from this store is 5.1 lb ± 4%. Notice the percent uncertainty is dimensionless because the units of weight in $\\delta A=0.2$ lb canceled those in A = 5.1 lb when we took the ratio.\n\nA high school track coach has just purchased a new stopwatch. The stopwatch manual states the stopwatch has an uncertainty of ±0.05 s. Runners on the track coach’s team regularly clock 100-m sprints of 11.49 s to 15.01 s. At the school’s last track meet, the first-place sprinter came in at 12.04 s and the second-place sprinter came in at 12.07 s. Will the coach’s new stopwatch be helpful in timing the sprint team? Why or why not?\n\n#### Uncertainties in calculations\n\nUncertainty exists in anything calculated from measured quantities. For example, the area of a floor calculated from measurements of its length and width has an uncertainty because the length and width have uncertainties. How big is the uncertainty in something you calculate by multiplication or division? If the measurements going into the calculation have small uncertainties (a few percent or less), then the method of adding percents can be used for multiplication or division. 
This method states the percent uncertainty in a quantity calculated by multiplication or division is the sum of the percent uncertainties in the items used to make the calculation. For example, if a floor has a length of 4.00 m and a width of 3.00 m, with uncertainties of 2% and 1%, respectively, then the area of the floor is 12.0 m2 and has an uncertainty of 3%. (Expressed as an area, this is 0.36 m2 $12.0{\\,\\text{m}}^{2}\\,×\\,0.03$, which we round to 0.4 m2 since the area of the floor is given to a tenth of a square meter.)\n\n### Precision of Measuring Tools and Significant Figures\n\nAn important factor in the precision of measurements involves the precision of the measuring tool. In general, a precise measuring tool is one that can measure values in very small increments. For example, a standard ruler can measure length to the nearest millimeter whereas a caliper can measure length to the nearest 0.01 mm. The caliper is a more precise measuring tool because it can measure extremely small differences in length. The more precise the measuring tool, the more precise the measurements.\n\nWhen we express measured values, we can only list as many digits as we measured initially with our measuring tool. For example, if we use a standard ruler to measure the length of a stick, we may measure it to be 36.7 cm. We can’t express this value as 36.71 cm because our measuring tool is not precise enough to measure a hundredth of a centimeter. It should be noted that the last digit in a measured value has been estimated in some way by the person performing the measurement. For example, the person measuring the length of a stick with a ruler notices the stick length seems to be somewhere in between 36.6 cm and 36.7 cm, and he or she must estimate the value of the last digit. Using the method of significant figures, the rule is that the last digit written down in a measurement is the first digit with some uncertainty. To determine the number of significant digits in a value, start with the first measured value at the left and count the number of digits through the last digit written on the right. For example, the measured value 36.7 cm has three digits, or three significant figures. Significant figures indicate the precision of the measuring tool used to measure a value.\n\n#### Zeros\n\nSpecial consideration is given to zeros when counting significant figures. The zeros in 0.053 are not significant because they are placeholders that locate the decimal point. There are two significant figures in 0.053. The zeros in 10.053 are not placeholders; they are significant. This number has five significant figures. The zeros in 1300 may or may not be significant, depending on the style of writing numbers. They could mean the number is known to the last digit or they could be placeholders. So 1300 could have two, three, or four significant figures. To avoid this ambiguity, we should write 1300 in scientific notation as $1.3\\,×\\,{10}^{3},$ $1.30\\,×\\,{10}^{3},$ or $1.300\\,×\\,{10}^{3},$ depending on whether it has two, three, or four significant figures. Zeros are significant except when they serve only as placeholders.\n\n#### Significant figures in calculations\n\nWhen combining measurements with different degrees of precision, the number of significant digits in the final answer can be no greater than the number of significant digits in the least-precise measured value. There are two different rules, one for multiplication and division and the other for addition and subtraction.\n\n1. 
For multiplication and division, the result should have the same number of significant figures as the quantity with the least number of significant figures entering into the calculation. For example, the area of a circle can be calculated from its radius using A = πr2. Let’s see how many significant figures the area has if the radius has only two—say, r = 1.2 m. Using a calculator with an eight-digit output, we would calculate\n$A=\\pi {r}^{2}=(3.1415927\\text{…})\\,×\\,{(1.2\\,\\text{m})}^{2}=4.5238934{\\,\\text{m}}^{2}.$\n\nBut because the radius has only two significant figures, it limits the calculated quantity to two significant figures, or\n\n$A=4.5{\\,\\text{m}}^{2},$\n\nalthough π is good to at least eight digits.\n\n2. For addition and subtraction, the answer can contain no more decimal places than the least-precise measurement. Suppose we buy 7.56 kg of potatoes in a grocery store as measured with a scale with precision 0.01 kg, then we drop off 6.052 kg of potatoes at your laboratory as measured by a scale with precision 0.001 kg. Then, we go home and add 13.7 kg of potatoes as measured by a bathroom scale with precision 0.1 kg. How many kilograms of potatoes do we now have and how many significant figures are appropriate in the answer? The mass is found by simple addition and subtraction:\n$\\begin{array}{cc} \\phantom{\\rule{1.2em}{0ex}}7.56\\,\\text{kg}\\hfill \\\\ -6.052\\,\\text{kg}\\hfill \\\\ \\\\ \\\\ \\,\\frac{\\,+13.7\\,\\text{kg}}{15.208\\,\\text{kg}}=15.2\\,\\text{kg}\\text{.}\\hfill \\end{array}$\n\nNext, we identify the least-precise measurement: 13.7 kg. This measurement is expressed to the 0.1 decimal place, so our final answer must also be expressed to the 0.1 decimal place. Thus, the answer is rounded to the tenths place, giving us 15.2 kg.\n\n#### Significant figures in this text\n\nIn this text, most numbers are assumed to have three significant figures. Furthermore, consistent numbers of significant figures are used in all worked examples. An answer given to three digits is based on input good to at least three digits, for example. If the input has fewer significant figures, the answer will also have fewer significant figures. Care is also taken that the number of significant figures is reasonable for the situation posed. In some topics, particularly in optics, more accurate numbers are needed and we use more than three significant figures. Finally, if a number is exact, such as the two in the formula for the circumference of a circle, C = 2πr, it does not affect the number of significant figures in a calculation. Likewise, conversion factors such as 100 cm/1 m are considered exact and do not affect the number of significant figures in a calculation.\n\n### Summary\n\n• Accuracy of a measured value refers to how close a measurement is to an accepted reference value. The discrepancy in a measurement is the amount by which the measurement result differs from this value.\n• Precision of measured values refers to how close the agreement is between repeated measurements. The uncertainty of a measurement is a quantification of this.\n• The precision of a measuring tool is related to the size of its measurement increments. 
The smaller the measurement increment, the more precise the tool.\n• Significant figures express the precision of a measuring tool.\n• When multiplying or dividing measured values, the final answer can contain only as many significant figures as the least-precise value.\n• When adding or subtracting measured values, the final answer cannot contain more decimal places than the least-precise value.\n\n### Key Equations\n\n Percent uncertainty $\\text{Percent uncertainty}=\\frac{\\delta A}{A}\\,×\\,100%$\n\n### Conceptual Questions\n\n(a) What is the relationship between the precision and the uncertainty of a measurement? (b) What is the relationship between the accuracy and the discrepancy of a measurement?\n\n### Problems\n\nConsider the equation 4000/400 = 10.0. Assuming the number of significant figures in the answer is correct, what can you say about the number of significant figures in 4000 and 400?\n\nSuppose your bathroom scale reads your mass as 65 kg with a 3% uncertainty. What is the uncertainty in your mass (in kilograms)?\n\n2 kg\n\nA good-quality measuring tape can be off by 0.50 cm over a distance of 20 m. What is its percent uncertainty?\n\nAn infant’s pulse rate is measured to be 130 ± 5 beats/min. What is the percent uncertainty in this measurement?\n\n4%\n\n(a) Suppose that a person has an average heart rate of 72.0 beats/min. How many beats does he or she have in 2.0 years? (b) In 2.00 years? (c) In 2.000 years?\n\nA can contains 375 mL of soda. How much is left after 308 mL is removed?\n\nState how many significant figures are proper in the results of the following calculations: (a) $(106.7)(98.2)/(46.210)(1.01);$ (b) ${(18.7)}^{2};$ (c) $(1.60\\,×\\,{10}^{-19})(3712)$\n\n(a) How many significant figures are in the numbers 99 and 100.? (b) If the uncertainty in each number is 1, what is the percent uncertainty in each? (c) Which is a more meaningful way to express the accuracy of these two numbers: significant figures or percent uncertainties?\n\n(a) If your speedometer has an uncertainty of 2.0 km/h at a speed of 90 km/h, what is the percent uncertainty? (b) If it has the same percent uncertainty when it reads 60 km/h, what is the range of speeds you could be going?\n\n(a) A person’s blood pressure is measured to be $120±2\\,\\text{mm Hg}.$ What is its percent uncertainty? (b) Assuming the same percent uncertainty, what is the uncertainty in a blood pressure measurement of 80 mm Hg?\n\na. 2%; b. 1 mm Hg\n\nA person measures his or her heart rate by counting the number of beats in 30 s. If 40 ± 1 beats are counted in 30.0 ± 0.5 s, what is the heart rate and its uncertainty in beats per minute?\n\nWhat is the area of a circle 3.102 cm in diameter?\n\nDetermine the number of significant figures in the following measurements: (a) 0.0009, (b) 15,450.0, (c) 6×103, (d) 87.990, and (e) 30.42.\n\nPerform the following calculations and express your answer using the correct number of significant digits. (a) A woman has two bags weighing 13.5 lb and one bag with a weight of 10.2 lb. What is the total weight of the bags? (b) The force F on an object is equal to its mass m multiplied by its acceleration a. If a wagon with mass 55 kg accelerates at a rate of 0.0255 m/s2, what is the force on the wagon? 
(The unit of force is called the newton and it is expressed with the symbol N.)\n\n### Glossary\n\naccuracy\nthe degree to which a measured value agrees with an accepted reference value for that measurement\ndiscrepancy\nthe difference between the measured value and a given standard or expected value" ]
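The two rounding rules worked through above are easy to sanity-check in code. The following is a minimal Python sketch, not part of the original lesson; the helper name `round_sig` is my own. It reproduces the two worked examples: the circle area kept to two significant figures and the potato total kept to one decimal place.

```python
# Hypothetical helper, not from the lesson: round a value to n significant figures.
import math

def round_sig(x, sig):
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

# Multiplication/division rule: keep the least number of significant figures (two, from r).
r = 1.2                        # two significant figures
area = math.pi * r ** 2        # calculator output: 4.5238934...
print(round_sig(area, 2))      # -> 4.5

# Addition/subtraction rule: keep the least number of decimal places (one, from 13.7 kg).
total = 7.56 - 6.052 + 13.7    # 15.208 kg before rounding
print(round(total, 1))         # -> 15.2
```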
[ null, "https://s3-us-west-2.amazonaws.com/courses-images/wp-content/uploads/sites/2952/2018/01/31183600/CNX_UPhysics_01_06_Balance.jpg", null, "https://s3-us-west-2.amazonaws.com/courses-images/wp-content/uploads/sites/2952/2018/01/31183604/CNX_UPhysics_01_06_Target.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93032885,"math_prob":0.98727083,"size":19053,"snap":"2022-27-2022-33","text_gpt3_token_len":4369,"char_repetition_ratio":0.19638826,"word_repetition_ratio":0.060233366,"special_character_ratio":0.23838766,"punctuation_ratio":0.12329486,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967803,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T08:08:52Z\",\"WARC-Record-ID\":\"<urn:uuid:45a1e734-c38b-4571-be14-57d70bc59903>\",\"Content-Length\":\"57822\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d0bdccc-bc79-46b3-8180-62ad63ac770b>\",\"WARC-Concurrent-To\":\"<urn:uuid:c310586b-4d20-4f37-9466-715617a0ef3a>\",\"WARC-IP-Address\":\"23.185.0.1\",\"WARC-Target-URI\":\"https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/1-6-significant-figures/\",\"WARC-Payload-Digest\":\"sha1:XJTBAPS2A6XZQXTT6HK7EPZEY4RJNCFW\",\"WARC-Block-Digest\":\"sha1:QBWCZETJ6IUII4GKXM2EFTBF6M2YPB4P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103989282.58_warc_CC-MAIN-20220702071223-20220702101223-00146.warc.gz\"}"}
https://dsotm-rsa.space/post/2018/06/05/card-sorting-updated/
[ "# Card Sorting: Removing Replicates\n\n## 2018/06/05\n\n### Summary\n\nAs an exercise in skeletal-programming I wanted to evaluate how many times a given set of assorted items needs to be randomly split into two piles, compared against each other and then reassembled removing a given proportion of duplicates at each iteration, until ultimately no duplicate items remain!\n\nSkeletal programming is assembling the bare-minimum programmatic structures needed to test an idea at the level of intuition\n\nI want to use this “wax-rinse-repeat” process in a small project to sort, and categorise a catalogue of tens of thousands of images taken over a decade. The basic idea is to split subsets of the library of images into two directories, apply image-hashing via Python to allocate IDs to each image. A collection of bash tools such as awk, sed are then used to list, collate and remove duplicates in a continuous cycle.\n\nThe results from this experimental “card-sorting” exercise suggests that as few as 6-10 splits-repeats are need to reach unity.\n\n### Implementation\n\nI use a model function, and a stepper function to power my simulation. Reproducibility and robustness of results is estimated through repetition using the replicate function.\n\n• model function: defines the code to build, split and recombine the items\n• stepper function: passes parameters to the model, including the initial size of data, split-sizes, repetitions (passes), replicates etc.\n\n### Model Function\n\n# model (compute) function\n# splits duplicates into 2 piles\n\nshares <-function(x,sp){\n\nfracA <-length(x)*sp\nfracB <-length(x)-fracA\n\n# generate two pools of data to compare\nsideA <-sample(x,fracA,replace=FALSE)\nsideB <-x[-which(sideA %in% x)]\n\n# check which tiems are in both pools\nleftB <-sideB[-which(sideB %in% sideA)]\n\nleftAB <-c(sideA,leftB)\nreturn(leftAB)\n\n}\n\n### Stepper Function\n\n# stepper function\n# passes all parameters to the model function\n# holds a loop function for iteration\n# writes diagnostics/results to file\n\nstepper <-function(nsteps=15,pool=500,dups=0.25,splits=0.70){\n\n# generate initial data-set\nsams <-c(seq(1,pool,1),c(sample(seq(1,pool,1),dups*pool)),c(\nseq(pool+floor(dups*pool),2*pool-1,1)))\n\n# prepare holder for the results\ntransactions <-matrix(ncol=2,nrow=nsteps)\n\n# initialise the loop\nfor (i in seq_len(nsteps)){\n\nsams <-shares(sams,sp=splits)\nres <-round(as.numeric(length(unique(sams))/length(sams)),2)\ntrn <-res\ntransactions[i,2] <-trn\ntransactions[i] <-i # update :: keep track of step in loop\n}\n\nreturn(transactions)\n\n}\n\nA single run can be returned by running the stepper function with default parameters.\n\ntr <-stepper()\nstr(tr)\n## num [1:15, 1:2] 1 2 3 4 5 6 7 8 9 10 ...\n\n### Robust Estimation of Event-Space\n\nUsing the replicate function one can return hundreds of simulations to gain an estimation of the range of possible outcomes and trajectories.\n\nIts worth noting that the output of the replicate function can be returned in a variety of forms - such as an array, matrix etc. 
Further processing and reshaping can be added to build an object ready for tidy plotting.\n\n# define number of replicate runs\nreps <- 100\n\n# returning an array (the default)\nopt.array <-replicate(reps,stepper())\n\n# returning a matrix :: using replicate in combination with do-call/rbind\nopt.matrix <-do.call(rbind, replicate(reps, stepper(), simplify=FALSE))\n\n# returning a list :: using lapply in combiantion with do.call/rbind\nopt.list <-do.call(rbind, lapply(1:reps, function(i) stepper()))\n\n# returning a dataframe :: convert an array to a data.frame.table without plyr\nopt.df <-as.data.frame.table(opt.array)\nhead(opt.df)\n## Var1 Var2 Var3 Freq\n## 1 A A A 1\n## 2 B A A 2\n## 3 C A A 3\n## 4 D A A 4\n## 5 E A A 5\n## 6 F A A 6\nopt.df %>% datatable(., rownames = TRUE, filter=\"none\",\noptions = list(pageLength = 10, scrollX=F)) %>%\nDT::formatStyle(columns = c(1:4), fontSize = '85%')\n\n### Tidying and Plotting\n\nI chose to rearrange the returned data.frame into a tidy form. The key-variable pair is: Var2 which contains the iterations of the splits, and Freq which contains and the percentage of unique items remaining.\n\nVar3 contains the alphanumeric count of the simulation runs (1-100). Similarly Var1 contains the step number of the replicate function.\n\n# rearrange the data into a tidy form\ndf.tidy <-spread(data = opt.df, key = Var2, value = Freq)\nhead(df.tidy)\n## Var1 Var3 A B\n## 1 A A 1 0.91\n## 2 A B 1 0.93\n## 3 A C 1 0.92\n## 4 A D 1 0.91\n## 5 A E 1 0.91\n## 6 A F 1 0.92\n# preview of tidy data\ndf.tidy %>% datatable(., rownames = TRUE, filter=\"none\",\noptions = list(pageLength = 10, scrollX=F)) %>%\nDT::formatStyle(columns = c(1:4), fontSize = '85%')\n# plot simulated runs\nggplot(aes(A,B),data=df.tidy) +\ngeom_line(aes(group=Var3),alpha=0.025) +\ngeom_smooth() +\nxlab(\"Number of Iterations Required\") +\nylab(\"Fraction of Unique Items\") +\nlabs(title=\"Removing Duplicates Through An Iterative Split-Compare-Regroup Approach\") +\ntheme_plain(base_size = 10)", null, "" ]
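For comparison only, the split-compare-regroup loop can also be sketched in a few lines of Python. This is not a port of the `shares()`/`stepper()` functions above: the split here is a clean partition, and the parameter names and defaults are my own, mirroring the post's pool of 500, 25% duplicates and a 70/30 split. It just shows the mechanics of repeatedly splitting, dropping cross-pile duplicates and regrouping.

```python
# Rough Python sketch of the split-compare-regroup idea; illustrative only.
import random

def one_pass(items, split=0.70):
    """Shuffle, split into two piles, drop pile-B items that also occur in pile A."""
    random.shuffle(items)
    cut = int(len(items) * split)
    side_a, side_b = items[:cut], items[cut:]
    seen = set(side_a)
    return side_a + [x for x in side_b if x not in seen]

def simulate(pool=500, dup_frac=0.25, passes=15):
    # pool distinct items plus a fraction of them duplicated once
    items = list(range(pool)) + random.sample(range(pool), int(dup_frac * pool))
    for i in range(1, passes + 1):
        items = one_pass(items)
        frac_unique = len(set(items)) / len(items)
        print(i, round(frac_unique, 2))
        if frac_unique == 1.0:     # no duplicates left
            break

simulate()
```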
[ null, "https://dsotm-rsa.space/post/2018-06-05-card-sorting-ii_files/figure-html/unnamed-chunk-3-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66957194,"math_prob":0.914479,"size":4845,"snap":"2020-24-2020-29","text_gpt3_token_len":1324,"char_repetition_ratio":0.10535014,"word_repetition_ratio":0.040983606,"special_character_ratio":0.274097,"punctuation_ratio":0.13403141,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98665583,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T00:08:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7bdd63b6-51e3-4e41-8075-334939b0171c>\",\"Content-Length\":\"109103\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d13d4838-51fa-4755-aae0-c8bab175becc>\",\"WARC-Concurrent-To\":\"<urn:uuid:98642eab-e8af-4d08-8995-b5715ae8d5b4>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://dsotm-rsa.space/post/2018/06/05/card-sorting-updated/\",\"WARC-Payload-Digest\":\"sha1:HMDLS7U5F7BENTA5CEFEMHEJHFROQ7BA\",\"WARC-Block-Digest\":\"sha1:VMC45SJL6L2LHBUMRYISTC2GEUL2CUSU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347401004.26_warc_CC-MAIN-20200528232803-20200529022803-00171.warc.gz\"}"}
https://spparks.github.io/doc/region.html
[ "SPPARKS WWW Site - SPPARKS Documentation - SPPARKS Commands\n\n### region command\n\nSyntax:\n\n```region ID style args keyword value ...\n```\n• ID = user-assigned name for the region\n• style = block or cylinder or sphere or union or intersect\n``` block args = xlo xhi ylo yhi zlo zhi\nxlo,xhi,ylo,yhi,zlo,zhi = bounds of block in all dimensions (distance units)\ncylinder args = dim c1 c2 radius lo hi\ndim = x or y or z = axis of cylinder\nc1,c2 = coords of cylinder axis in other 2 dimensions (distance units)\nlo,hi = bounds of cylinder in dim (distance units)\nsphere args = x y z radius\nx,y,z = center of sphere (distance units)\nunion args = N reg-ID1 reg-ID2 ...\nN = # of regions to follow, must be 2 or greater\nreg-ID1,reg-ID2, ... = IDs of regions to join together\nintersect args = N reg-ID1 reg-ID2 ...\nN = # of regions to follow, must be 2 or greater\nreg-ID1,reg-ID2, ... = IDs of regions to intersect\n```\n• zero or more keyword/value pairs may be appended\n• keyword = side\n``` side value = in or out\nin = the region is inside the specified geometry\nout = the region is outside the specified geometry\n```\n\nExamples:\n\n```region 1 block -3.0 5.0 INF 10.0 INF INF\nregion 2 sphere 0.0 0.0 0.0 5 side out\nregion void cylinder y 2 3 5 -5.0 EDGE\nregion outside union 4 side1 side2 side3 side4\n```\n\nDescription:\n\nThis command defines a geometric region of space. Various other commands use regions. For example, the region can be filled with sites via the create_sites command.\n\nThe distance units used to define the region are setup by the lattice command which must be used before any regions are defined. The lattice command defines a lattice spacing and regions are defined in terms of this length scale. For example, if the lattice spacing is 3.0 and the region sphere radius is 2.5, then the size of the sphere is 2.5*3.0 = 7.5.\n\nCommands which use regions typically test whether a lattice site is contained in the region or not. For this purpose, coordinates exactly on the region boundary are considered to be interior to the region. This means, for example, for a spherical region, a lattice site on the sphere surface would be part of the region if the sphere were defined with the side in keyword, but would not be part of the region if it were defined using the side out keyword. See more details on the side keyword below.\n\nThe lo/hi values for the block or cylinder styles can be specified as EDGE or INF. EDGE means they extend all the way to the global simulation box boundary. Note that this is the current box boundary; if the box changes size during a simulation, the region does not. INF means a large negative or positive number (1.0e20), so it should encompass the simulation box even if it changes size. If a region is defined before the simulation box has been created (via create_box or read_sites commands), then an EDGE or INF parameter cannot be used.\n\nIMPORTANT NOTE: Regions in SPPARKS are always 3d geometric objects, regardless of whether the dimension of the lattice is 1d or 2d or 3d. Thus when using regions in a 2d simulation, for exapmle, you should be careful to define the region so that its intersection with the 2d x-y plane of the simulation has the 2d geometric extent you want. 
Also note that for 2d simulations, SPPARKS expects lattice sites to lie in the z=0 plane, and similarly for 1d (y = z = 0), so the regions you define as input to the create_box command should reflect that.\n\nFor style cylinder, the c1,c2 params are coordinates in the 2 other dimensions besides the cylinder axis dimension. For dim = x, c1/c2 = y/z; for dim = y, c1/c2 = x/z; for dim = z, c1/c2 = x/y. Thus the third example above specifies a cylinder with its axis in the y-direction located at x = 2.0 and z = 3.0, with a radius of 5.0, and extending in the y-direction from -5.0 to the upper box boundary.\n\nThe union style creates a region consisting of the volume of all the listed regions combined. The intersect style creates a region consisting of the volume that is common to all the listed regions.\n\nThe side keyword determines whether the region is considered to be inside or outside of the specified geometry. Using this keyword in conjunction with union and intersect regions, complex geometries can be built up. For example, if the interior of two spheres were each defined as regions, and a union style with side = out was constructed listing the region-IDs of the 2 spheres, the resulting region would be all the volume in the simulation box that was outside both of the spheres.\n\nRestrictions: none\n\nRelated commands:\n\nDefault:\n\nThe option defaults are side = in." ]
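The membership rules described above (boundary points counting as interior, side out inverting the test, union and intersect combining sub-regions) can be illustrated with a small sketch. This is not SPPARKS code; it is plain Python with invented function names, written only to mirror the documented behaviour.

```python
# Illustrative only -- not SPPARKS source. Sketch of the documented membership rules.
def in_sphere(p, center, radius, side="in"):
    x, y, z = p
    cx, cy, cz = center
    inside = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2  # boundary counts as inside
    return inside if side == "in" else not inside

def in_block(p, xlo, xhi, ylo, yhi, zlo, zhi, side="in"):
    x, y, z = p
    inside = xlo <= x <= xhi and ylo <= y <= yhi and zlo <= z <= zhi
    return inside if side == "in" else not inside

def in_union(p, regions):          # regions: callables taking a point
    return any(r(p) for r in regions)

def in_intersect(p, regions):
    return all(r(p) for r in regions)

# A lattice site exactly on a sphere of radius 5 is part of the region for side "in",
# but not for side "out", as described in the text.
site = (5.0, 0.0, 0.0)
print(in_sphere(site, (0.0, 0.0, 0.0), 5.0, side="in"))    # True
print(in_sphere(site, (0.0, 0.0, 0.0), 5.0, side="out"))   # False
```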
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8557778,"math_prob":0.9740382,"size":4401,"snap":"2021-31-2021-39","text_gpt3_token_len":1062,"char_repetition_ratio":0.15078463,"word_repetition_ratio":0.06801008,"special_character_ratio":0.2444899,"punctuation_ratio":0.12269273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95319873,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T08:24:33Z\",\"WARC-Record-ID\":\"<urn:uuid:5925e32f-95a6-450c-a869-70bdc376a17f>\",\"Content-Length\":\"6418\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:765c19e2-8afd-4d6e-9781-a8aaf663efbc>\",\"WARC-Concurrent-To\":\"<urn:uuid:692c4c04-c709-4b2c-8d80-bd9123249fa1>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://spparks.github.io/doc/region.html\",\"WARC-Payload-Digest\":\"sha1:ELMI4ERDEJFFVMMH6UMH3XAPQ3YT3GZ6\",\"WARC-Block-Digest\":\"sha1:UZYQBSZ5MXCTRVY6F6SHX3HB4DXSVH2W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057199.49_warc_CC-MAIN-20210921070944-20210921100944-00302.warc.gz\"}"}
https://www.birchtreebarkandstone.com/coverage-charg/
[ "Order online now or give us a call at 616.514.5030!\n\n## Mulch Calculator\n\nTOTAL SQUARE FEET:\nTOTAL CUBIC YARDS NEEDED:\n\n#### How do I know how many yards to purchase?\n\nHow many yards do I need?  This is a very common question.  First, you need to figure out how many square feet you want to cover.  To do this, see below…\n\nFor Square areas: Multiply the Length (x) the Width.  This will give you the total square feet.\n\nFor example:  You have 2 areas to cover with mulch.  The first area is 9′ x 22′.  The second one is 6′ x 30′. For the first area, 9 x 22 equals 198 square feet.  The second equals 180 square feet.  So total for both areas is 378 square feet.\n\nFor Circular areas: Take the Radius (which is half of the diameter) and multiply by itself.  Then take that number and multiply by 3.14.  This will give you the total square feet.\n\nFor example: You have a 24 foot diameter (the distance across) circle to cover with mulch.  So the radius is 12.  12 x 12 equals 144.  Now, take that number and multiply by 3.14.  So, 144 x 3.14 equals 452 square feet.\n\nTake your total number of square footage and divide by the number in the chart of your chosen depth. This will give you the correct amount of mulch needed.\n\nFor example: Your circle is 452 square feet.  You want to cover it all with 3 inches of mulch.  Take the 452 and divide by 108 (the coverage for a 3″ depth).  You come up with approximately 4.2 cubic yards.\n\n* Note: If placing an order for delivery, make sure you get enough.  It would be best to round up to the nearest full yard.\n\nDetermine what depth you want for coverage and refer to the table below.\n\nCoverage Chart\n\n Depth 1 Cubic Yard Covers 0.5″ 648 Sq. Ft. 1″ 324 Sq. Ft. 2″ 162 Sq. Ft. 3″ 108 Sq. Ft. 4″ 81 Sq. Ft. 5″ 64.8 Sq. Ft. 6″ 54 Sq. Ft. 7″ 46.3 Sq. Ft. 8″ 40.5 Sq. Ft. 9″ 36 Sq. Ft. 10″ 32.4 Sq. Ft. 11″ 29.5 Sq. Ft. 12″ 27 Sq. Ft." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90870255,"math_prob":0.991607,"size":1732,"snap":"2020-45-2020-50","text_gpt3_token_len":512,"char_repetition_ratio":0.1417824,"word_repetition_ratio":0.06666667,"special_character_ratio":0.3256351,"punctuation_ratio":0.17701149,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958393,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T17:07:21Z\",\"WARC-Record-ID\":\"<urn:uuid:69fa13fc-371a-4cf7-a4b0-56ab1a8254b2>\",\"Content-Length\":\"47294\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8c359e4-cb54-4be0-bc95-98073114a154>\",\"WARC-Concurrent-To\":\"<urn:uuid:bddc71e7-55ff-4e49-af84-32660705b002>\",\"WARC-IP-Address\":\"155.130.134.33\",\"WARC-Target-URI\":\"https://www.birchtreebarkandstone.com/coverage-charg/\",\"WARC-Payload-Digest\":\"sha1:SNGWNIJ7BT6UADOUK2M3VUXXTIFRT7YG\",\"WARC-Block-Digest\":\"sha1:7S4E2CGUYWSMEJTJ5LVYGFKFK67RLEHT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107884322.44_warc_CC-MAIN-20201024164841-20201024194841-00500.warc.gz\"}"}
https://www.talkstats.com/threads/comparing-correlation-coefficients.73824/#post-215165
[ "Comparing correlation coefficients\n\nMarieK\n\nNew Member\nHello,\n\nI have a large sample of patients who have had a TIA, minor stroke or major stroke. I have been looking at correlations in each of these three groups. Is there a statistical test in SPSS (or elsewhere) that would allow me to compare the correlation coefficients between each of three groups in this sample?\n\nMarie\n\nspunky\n\nCan't make spagetti\nYou can do this through Structural Equation Modelling (SEM) by setting up the proper constraints and fitting a multiple-groups model.\n\nBut this assumes you know SEM and a software package like R's lavaan or Mplus to fit your SEM-model.\n\nKarabiner\n\nTS Contributor\nCould yo tell something moreabout the study? What are the research questions,\nhow was the study designed, how were the data collected, how large are the\nsample sizes, which variables were correlated and how were they measured?\n\nWith kind regards\n\nKarabiner\n\nMarieK\n\nNew Member\nI have a group of about 1300 patients who had had disease 1 (subgroup 1), disease 2 (subgroup 2), or disease 3 (subgroup 3).\n\nI have correlated certain biomarkers with their kidney function in the whole group (N=1300), and then in each of these subgroups (N = circa 400 in each subgroup)\n\ne.g. Fibrinogen & GFR\n\nI would like to work out if there is a statistical difference for these correlations across the 3 subgroups (R1 vs R2 vs R3).\n\nThe only online calculators that I've found for fisher's z transformation only seem to allow comparison between two correlation coefficients (as opposed to 3 which is what I need).\n\nIs there another link where I could do this or is there any spss syntax that would allow me to do this?\n\nMarieK\n\nNew Member\nI have a group of about 1300 patients who had had disease 1 (subgroup 1), disease 2 (subgroup 2), or disease 3 (subgroup 3).\n\nI have correlated certain biomarkers with their kidney function in the whole group (N=1300), and then in each of these subgroups (N = circa 400 in each subgroup)\n\ne.g. Fibrinogen & GFR\n\nI would like to work out if there is a statistical difference for these correlations across the 3 subgroups (R1 vs R2 vs R3).\n\nThe only online calculators that I've found for fisher's z transformation only seem to allow comparison between two correlation coefficients (as opposed to 3 which is what I need).\n\nIs there another link where I could do this or is there any spss syntax that would allow me to do this?\n\nMarieK\n\nNew Member\nCould yo tell something moreabout the study? What are the research questions,\nhow was the study designed, how were the data collected, how large are the\nsample sizes, which variables were correlated and how were they measured?\n\nWith kind regards\n\nKarabiner\nI have a group of about 1300 patients who had had disease 1 (subgroup 1), disease 2 (subgroup 2), or disease 3 (subgroup 3).\n\nI have correlated certain biomarkers with their kidney function in the whole group (N=1300), and then in each of these subgroups (N = circa 400 in each subgroup)\n\ne.g. 
Fibrinogen & GFR\n\nI would like to work out if there is a statistical difference for these correlations across the 3 subgroups (R1 vs R2 vs R3).\n\nThe only online calculators that I've found for fisher's z transformation only seem to allow comparison between two correlation coefficients (as opposed to 3 which is what I need).\n\nIs there another link where I could do this or is there any spss syntax that would allow me to do this?\n\nkatxt\n\nActive Member\nYou could compare the correlations in pairs using a more stringent significance level (say Bonferroni p<0.05/3 = 0.017).\nOr, you could use a permutation test. Use the variance of the three correlations as a measure of how close the three correlations are, and permute the Fibrinogen & GFR pairs across the entire group. See where your correlation variance fits in the generated list. The p value will be the upper tail proportion.\nEither way, if you are planning to test other pairs of markers, you will need to make the significance level even lower to avoid false positives.\n\nKarabiner\n\nTS Contributor\nYou can compare stabilty of relationship across three groups using linear regression,\nbut you would have to define a \"dependent\" and an \"independent\" variable for each of\nthese analyses.\n\nThe model would look like:\nbiomarker2 = constant + b1* biomarker1 + b2 * group + b3*(biomarker1*group) + e\n\nThe interaction between group and \"independent\" biomarker tells you whether\nthe relationship between biomarkers differs between groups.\n\nWith kind regards\n\nKarabiner\n\nMarieK\n\nNew Member\nCould yo tell something moreabout the study? What are the research questions,\nhow was the study designed, how were the data collected, how large are the\nsample sizes, which variables were correlated and how were they measured?\n\nWith kind regards\n\nKarabiner\nI have a group of about 1300 patients who had had disease 1 (subgroup 1), disease 2 (subgroup 2), or disease 3 (subgroup 3).\n\nI have correlated certain biomarkers with their kidney function in the whole group (N=1300), and then in each of these subgroups (N = circa 400 in each subgroup)\n\ne.g. Fibrinogen & GFR\n\nI would like to work out if there is a statistical difference for these correlations across the 3 subgroups (R1 vs R2 vs R3).\n\nThe only online calculators that I've found for fisher's z transformation only seem to allow comparison between two correlation coefficients (as opposed to 3 which is what I need).\n\nIs there another link where I could do this or is there any spss syntax that would allow me to do this?\n\nYou can compare stabilty of relationship across three groups using linear regression,\nbut you would have to define a \"dependent\" and an \"independent\" variable for each of\nthese analyses.\n\nThe model would look like:\nbiomarker2 = constant + b1* biomarker1 + b2 * group + b3*(biomarker1*group) + e\n\nThe interaction between group and \"independent\" biomarker tells you whether\nthe relationship between biomarkers differs between groups.\n\nWith kind regards\n\nKarabiner\nCould I do just do R to Z fisher transformation and then do anova of the Z values? If I do this, should I use bonferroni correction?\n\nThanks!\n\nGretaGarbo\n\nHuman\nCould I do just do R to Z fisher transformation and then do anova of the Z values? If I do this, should I use bonferroni correction?\nYou can just do the calculation for one correlation at a time and use the Fisher transformation to find out the standard error for that correlation. Then you take the next correlation. 
You can do that with a pocket calculator.\n\nIf you want to compare two korrelations, say corr1 and corr2, or take the difference between them, like\n\ncorr1 - corr2\n\nThe \"uncertainty\" in that difference can be:\n\n(corr1 - corr2) +/- 1.96*Sqrt( std(corr1)^2 + std(corr2)^2)\n\n(That would make the strong assumption that the two are not dependent, but maybe this is good enough.)\n\nA possibility is to do bootstrap simulations, but maybe that is too difficult.\n\nOr maybe you can formulate the problem as an anova problem (like the other ones suggested), not dealing with the correlation coefficient at all." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.98065543,"math_prob":0.8603165,"size":676,"snap":"2022-05-2022-21","text_gpt3_token_len":143,"char_repetition_ratio":0.10714286,"word_repetition_ratio":0.95726496,"special_character_ratio":0.19970414,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9829616,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-21T10:44:06Z\",\"WARC-Record-ID\":\"<urn:uuid:8b26c574-851f-4ec9-90a7-3ad244393c11>\",\"Content-Length\":\"78606\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38aaab98-8045-48bb-ab21-b351a23fa6ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2dbb5a1-790b-4135-be7d-365050d3974c>\",\"WARC-IP-Address\":\"199.167.200.62\",\"WARC-Target-URI\":\"https://www.talkstats.com/threads/comparing-correlation-coefficients.73824/#post-215165\",\"WARC-Payload-Digest\":\"sha1:5XFGQ2FXEZPR6DQMZ5GXBM3VSYHYBVRY\",\"WARC-Block-Digest\":\"sha1:F5GTB7QB7VLCQWEA24KNXZY4SEIXKDO3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303356.40_warc_CC-MAIN-20220121101528-20220121131528-00009.warc.gz\"}"}
https://m.everything2.com/title/M%25C3%25B6ssbauer+Spectroscopy
[ "display | more...\nMössbauer Spectroscopy is a spectroscopic technique that investigates the absorption and emission of gamma radiation in a material, based on the occurrence of the Mössbauer Effect.\n\nA typical experimental setup for Mössbauer Spectroscopy consists of a source for γ-photons., an absorber that is the material under investigation, and a detector. The energy of the photon beam is adjusted by using the Doppler Effect. The source is moved towards and away from the absorber.\n\n```\n<- -> _ ___\n___ | | | |\n|___|- - - - - | | - - - -|- |\n|_| |___|\n\nSource Absorber Detector\n\n```\n\nThere are two conditions that need to be satisfied in order for this technique to work. First, the experimental conditions need to satisfy the occurrence of the Mössbauer Effect, that is the recoil energy of the photon emissions and absorptions must be significantly smaller than the energy of the lattice vibrations. The intensity of the Mössbauer Effect is determined by the recoilfree fraction or f-factor, which can be considered as a kind of efficiency. The second condition that must be satisfied is that one needs nuclei in the excited state as a source for the γ-photons. These nuclei are made using a nuclear accelerator and consist of a specific atomic isotope that decays to the excited state of the nucleus under investigation at a specific half life time. A necessary condition for an observable Mössbauer Effect is thus that one has a source which decays to the excited state of the nucleus under investigation with a sufficiently long life-time such that the experiments are practical.\n\nAn example of a spectroscopic measurement would be the analysis of iron oxide (the absorber) with a 57Co source. The 57Co isotope decays to 57Fe with a half life of 270 days, while emitting photons with almost the correct energy. The energy levels are not entirely matching, since the iron in the iron oxide lattice is coupled trough hyperfine interactions; the nuclear levels in the absorber have slightly different energies than in the emitter. Therefore, the energy of the photons from the source is varied using the Doppler Effect. If the emitter is moved towards the absorber at a velocity v, the energy of the photon (E(v) becomes:\n\nE(v) = E0(1+v/c)\n\nWhere E0 is the energy difference between the excited state and ground state of the nucleus. and c is the velocity of light. The Doppler velocities are usually in the range of -10 to 10 mm/s.\n\nA typical Mössbauer spectrum will show the γ-ray intensity as a function of sample velocity. This mode, Mössbauer Absorption Spectroscopy (MAS) is the common mode of operation. It is also possible to fix the source, and move the absorber. This technique is called Mössbauer Emission Spectroscopy (MES).\n\nThe advantage of Mössbauer Spectroscopy is that it uses γ- radiation of high penetrating power; this allows the technique to be used in situ. Applications of Mössbauer Spectroscopy are:\n\nLog in or register to write something here or to contact authors." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9068138,"math_prob":0.8128301,"size":2897,"snap":"2021-43-2021-49","text_gpt3_token_len":691,"char_repetition_ratio":0.13584514,"word_repetition_ratio":0.028985508,"special_character_ratio":0.20952709,"punctuation_ratio":0.07514451,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95075387,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T08:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:59dcbab9-0bde-4f91-838b-f7c69e869c78>\",\"Content-Length\":\"21234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25947253-25af-4461-a421-f8ffa16f6d7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9f2d5cc-827f-48e7-a157-50c1b5369728>\",\"WARC-IP-Address\":\"44.239.200.20\",\"WARC-Target-URI\":\"https://m.everything2.com/title/M%25C3%25B6ssbauer+Spectroscopy\",\"WARC-Payload-Digest\":\"sha1:F6JTPTVVNRSKPQMU2FIMFZJJDQHRLMSU\",\"WARC-Block-Digest\":\"sha1:HQZO4XQYUH622A7HL2UEEUJGK5XCQ2QW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585653.49_warc_CC-MAIN-20211023064718-20211023094718-00694.warc.gz\"}"}
https://msbiskills.com/2015/10/08/sql-puzzle-sum-of-digits-puzzle/
[ "SQL Puzzle | Sum of Digits Puzzle\n\nPuzzle Statement\n\nIn this puzzle you have to find sum digits from an input number/string\n\nSample Input\n\nSample Input could be for example – 100992 or ‘1WW992’\n\nExpected output\n\nSum Of Digits in this case is 21\n\nRules/Restrictions\n\nThe solution should be should use “SELECT” statement or “CTE”.\nScript\n\nSOLUTION # 1 | Using Numbers Table\n\n ```-- DECLARE @intValue AS VARCHAR(10) = 100992 SELECT SUM(CAST(SUBSTRING(@intValue,number,1) AS TINYINT)) SUMOFDIGITS FROM ( SELECT DISTINCT number FROM MASTER..SPT_VALUES WHERE number > 0 AND number <= DATALENGTH(@intValue) ) x -- ```\n\nSOLUTION # 2 | Using Numbers Table (Validation – If characters are present in Input String)\n\n ```-- DECLARE @intValue AS VARCHAR(10) = '1WW992' SELECT SUM( CASE WHEN SUBSTRING(@intValue,number,1) LIKE '[0-9]' THEN CAST(SUBSTRING(@intValue,number,1) AS TINYINT) ELSE 0 END ) SUMOFDIGITS FROM ( SELECT DISTINCT number FROM MASTER..SPT_VALUES WHERE number > 0 AND number <= DATALENGTH(@intValue) ) x -- ```\n\nAdd a comment if you have any other solution in mind. We all need to learn. Enjoy !!!\n\nKeep Learning\n\nPawan Khowal\n\nHttp://MSBISkills.com" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.566516,"math_prob":0.96934575,"size":6759,"snap":"2023-14-2023-23","text_gpt3_token_len":1628,"char_repetition_ratio":0.26558104,"word_repetition_ratio":0.1,"special_character_ratio":0.22606894,"punctuation_ratio":0.17047308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938304,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T05:00:15Z\",\"WARC-Record-ID\":\"<urn:uuid:7e7a6ebf-54f1-4946-b3d4-977df0957eb8>\",\"Content-Length\":\"149536\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:124e72e8-eb88-456c-a55c-0bbe3b54c649>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d03e55d-6ec7-41cf-bf00-1c1009457619>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://msbiskills.com/2015/10/08/sql-puzzle-sum-of-digits-puzzle/\",\"WARC-Payload-Digest\":\"sha1:MYKLHIDXP4LETU5BPIPZQWLMXJ55RKCJ\",\"WARC-Block-Digest\":\"sha1:VGHGFZG5QJGY5C7QWI7ISMXWY3F3BFIP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649105.40_warc_CC-MAIN-20230603032950-20230603062950-00469.warc.gz\"}"}
https://heartburnquickremedies.com/beginning-math-worksheets-number/
[ "# Beginning Math Worksheets Number", null, "## Beginningath worksheets number travel time distance word problems v1 reading line fractions free.", null, "Beginning math worksheets number calculate activities and graph worksheet grade scaled.", null, "Beginning math worksheets number multiplication year mathematics forten pdf answers worksheet.", null, "Beginning math activities free worksheets for preschoolers numberonds grade line method.", null, "## Beginning math activities number bonds to reading worksheets skills line tenrames.", null, "Beginning matheets number bonds grade ten frames online free for sixth graders math worksheets sheet.", null, "Beginning math worksheets number bondsen frames printable line method free for preschoolers.", null, "Beginning reading worksheets math skills numbernds ten frames printable.", null, "Sheet beginning math worksheets number basic addition facts free printable worksheetfun additionbox2.", null, "## Beginning mathheets number line free for preschoolers printable.", null, "Beginning math worksheets number bonds tenrames grade to.", null, "November handwriting worksheet printable worksheets andnning math number sheet.", null, "The ordinal stories activity sheet helps assist number and beginning math worksheets free adding grade printable fraction sheets statistics answers.", null, "Beginning math worksheets number sheet 2nd grade doubles learning printable the two digit subtraction.", null, "## Sheet beginning math worksheetser bonds ten frames free for adults preschoolers.", null, "Beginningathksheetsksheet letter freeultiplication drill spelling for grade sounds coloring beginning math number line printable.", null, "Beginning math worksheets number free printable kindergarten match it up line method.", null, "Beginning math worksheets number for adults line method bonds ten frames free.", null, "Beginning math worksheets number skills bonds ten frames online to line.", null, "## Outstanding beginning math worksheetsrcises photo ideas activities 5th grade nilekayakclub worksheets number sheet free for.", null, "Sheet kindergarten math patterns worksheet printable pattern worksheets free beginning number bonds to.", null, "Sheet beginning math for adults worksheets number line fractions bondso activities.", null, "Sheet beginning mathorksheets number bonds ten frames printable line method activities free for preschoolers.", null, "Beginning math worksheets number skills activities reading kindergarten bonds.", null, "Beginning math worksheets number kindergarten sheet free printable match it up 1bw line." ]
[ null, "https://heartburnquickremedies.com/wp-content/uploads/beginningath-worksheets-number-travel-time-distance-word-problems-v1-reading-line-fractions-free.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-calculate-activities-and-graph-worksheet-grade-scaled.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-multiplication-year-mathematics-forten-pdf-answers-worksheet.gif", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-activities-free-worksheets-for-preschoolers-numberonds-grade-line-method.png", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-activities-number-bonds-to-reading-worksheets-skills-line-tenrames.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-matheets-number-bonds-grade-ten-frames-online-free-for-sixth-graders-math-worksheets-sheet.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-bondsen-frames-printable-line-method-free-for-preschoolers.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-reading-worksheets-math-skills-numbernds-ten-frames-printable.png", null, "https://heartburnquickremedies.com/wp-content/uploads/sheet-beginning-math-worksheets-number-basic-addition-facts-free-printable-worksheetfun-additionbox2.png", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-mathheets-number-line-free-for-preschoolers-printable.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-bonds-tenrames-grade-to.png", null, "https://heartburnquickremedies.com/wp-content/uploads/november-handwriting-worksheet-printable-worksheets-andnning-math-number-sheet.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/the-ordinal-stories-activity-sheet-helps-assist-number-and-beginning-math-worksheets-free-adding-grade-printable-fraction-sheets-statistics-answers.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-sheet-2nd-grade-doubles-learning-printable-the-two-digit-subtraction.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/sheet-beginning-math-worksheetser-bonds-ten-frames-free-for-adults-preschoolers.png", null, "https://heartburnquickremedies.com/wp-content/uploads/beginningathksheetsksheet-letter-freeultiplication-drill-spelling-for-grade-sounds-coloring-beginning-math-number-line-printable.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-free-printable-kindergarten-match-it-up-line-method.gif", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-for-adults-line-method-bonds-ten-frames-free.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-skills-bonds-ten-frames-online-to-line.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/outstanding-beginning-math-worksheetsrcises-photo-ideas-activities-5th-grade-nilekayakclub-worksheets-number-sheet-free-for.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/sheet-kindergarten-math-patterns-worksheet-printable-pattern-worksheets-free-beginning-number-bonds-to.png", null, "https://heartburnquickremedies.com/wp-content/uploads/sheet-beginning-math-for-adults-worksheets-number-line-fractions-bondso-activities.png", null, 
"https://heartburnquickremedies.com/wp-content/uploads/sheet-beginning-mathorksheets-number-bonds-ten-frames-printable-line-method-activities-free-for-preschoolers.jpg", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-skills-activities-reading-kindergarten-bonds.gif", null, "https://heartburnquickremedies.com/wp-content/uploads/beginning-math-worksheets-number-kindergarten-sheet-free-printable-match-it-up-1bw-line.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.628134,"math_prob":0.9237742,"size":2788,"snap":"2021-04-2021-17","text_gpt3_token_len":457,"char_repetition_ratio":0.29238507,"word_repetition_ratio":0.22841226,"special_character_ratio":0.14490674,"punctuation_ratio":0.077319585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99587667,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T19:32:10Z\",\"WARC-Record-ID\":\"<urn:uuid:5e60b96a-ce94-479d-be77-e26a5ea3d13c>\",\"Content-Length\":\"54386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8df6c38a-8926-46d7-88f1-4ce6fa143925>\",\"WARC-Concurrent-To\":\"<urn:uuid:713d6c10-cc3a-46e1-a062-7caacf9d688c>\",\"WARC-IP-Address\":\"104.21.63.76\",\"WARC-Target-URI\":\"https://heartburnquickremedies.com/beginning-math-worksheets-number/\",\"WARC-Payload-Digest\":\"sha1:IZIQ4AB6QFKGSWAMM2MCV6FC4YY7CC6T\",\"WARC-Block-Digest\":\"sha1:CK4D6QPG32ENOQA6ZGTPY7GKNRRV32LC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703515235.25_warc_CC-MAIN-20210118185230-20210118215230-00605.warc.gz\"}"}
https://numbermatics.com/n/2777496/
[ "# 2777496\n\n## 2,777,496 is an even composite number composed of four prime numbers multiplied together.\n\nWhat does the number 2777496 look like?\n\nThis visualization shows the relationship between its 4 prime factors (large circles) and 32 divisors.\n\n2777496 is an even composite number. It is composed of four distinct prime numbers multiplied together. It has a total of thirty-two divisors.\n\n## Prime factorization of 2777496:\n\n### 23 × 3 × 19 × 6091\n\n(2 × 2 × 2 × 3 × 19 × 6091)\n\nSee below for interesting mathematical facts about the number 2777496 from the Numbermatics database.\n\n### Names of 2777496\n\n• Cardinal: 2777496 can be written as Two million, seven hundred seventy-seven thousand, four hundred ninety-six.\n\n### Scientific notation\n\n• Scientific notation: 2.777496 × 106\n\n### Factors of 2777496\n\n• Number of distinct prime factors ω(n): 4\n• Total number of prime factors Ω(n): 6\n• Sum of prime factors: 6115\n\n### Divisors of 2777496\n\n• Number of divisors d(n): 32\n• Complete list of divisors:\n• Sum of all divisors σ(n): 7310400\n• Sum of proper divisors (its aliquot sum) s(n): 4532904\n• 2777496 is an abundant number, because the sum of its proper divisors (4532904) is greater than itself. Its abundance is 1755408\n\n### Bases of 2777496\n\n• Binary: 10101001100001100110002\n• Base-36: 1NJ4O\n\n### Squares and roots of 2777496\n\n• 2777496 squared (27774962) is 7714484030016\n• 2777496 cubed (27774963) is 21426948535433319936\n• The square root of 2777496 is 1666.5821311895\n• The cube root of 2777496 is 140.5673575113\n\n### Scales and comparisons\n\nHow big is 2777496?\n• 2,777,496 seconds is equal to 4 weeks, 4 days, 3 hours, 31 minutes, 36 seconds.\n• To count from 1 to 2,777,496 would take you about six weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 2777496 cubic inches would be around 11.7 feet tall.\n\n### Recreational maths with 2777496\n\n• 2777496 backwards is 6947772\n• The number of decimal digits it has is: 7\n• The sum of 2777496's digits is 42\n• More coming soon!\n\nMLA style:\n\"Number 2777496 - Facts about the integer\". Numbermatics.com. 2023. Web. 7 June 2023.\n\nAPA style:\nNumbermatics. (2023). Number 2777496 - Facts about the integer. Retrieved 7 June 2023, from https://numbermatics.com/n/2777496/\n\nChicago style:\nNumbermatics. 2023. \"Number 2777496 - Facts about the integer\". https://numbermatics.com/n/2777496/\n\nThe information we have on file for 2777496 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!\n\nKeywords: Divisors of 2777496, math, Factors of 2777496, curriculum, school, college, exams, university, Prime factorization of 2777496, STEM, science, technology, engineering, physics, economics, calculator, two million, seven hundred seventy-seven thousand, four hundred ninety-six.\n\nOh no. 
" ]
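A few of the listed facts are easy to verify from the prime factorization alone. The sketch below is my own, not from the site; it rebuilds n, σ(n), the aliquot sum and the abundance from 2³ × 3 × 19 × 6091.

```python
# Verify n, sigma(n) and the abundance from the prime factorization (illustrative).
from functools import reduce

factors = {2: 3, 3: 1, 19: 1, 6091: 1}                 # 2^3 * 3 * 19 * 6091

n = reduce(lambda acc, pk: acc * pk[0] ** pk[1], factors.items(), 1)
sigma = 1
for p, k in factors.items():
    sigma *= (p ** (k + 1) - 1) // (p - 1)             # sum of divisors, one prime power at a time

print(n)                # 2777496
print(sigma)            # 7310400
print(sigma - n)        # 4532904, the aliquot sum
print(sigma - 2 * n)    # 1755408, the listed abundance
```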
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8274383,"math_prob":0.8558257,"size":2868,"snap":"2023-14-2023-23","text_gpt3_token_len":788,"char_repetition_ratio":0.13163407,"word_repetition_ratio":0.046875,"special_character_ratio":0.3406555,"punctuation_ratio":0.16970803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9820308,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T22:18:22Z\",\"WARC-Record-ID\":\"<urn:uuid:7223e59d-f055-45e5-a160-0b2956f4d4b6>\",\"Content-Length\":\"19370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f2a1be3-0f95-4753-b04f-975b310edeab>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4af31cb-574a-44c3-8d0a-656bfedd65b2>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/2777496/\",\"WARC-Payload-Digest\":\"sha1:JTF5SWNJC3QRKONSBFHMFWZ7534OVHH2\",\"WARC-Block-Digest\":\"sha1:QLDJXBKAGVNCR4NMMBZWKTPTDYEGUICQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654016.91_warc_CC-MAIN-20230607211505-20230608001505-00686.warc.gz\"}"}
https://www.metabunk.org/threads/how-does-this-domino-tower-collapse-relate-to-9-11-collapses.7502/page-2
[ "# How does this Domino Tower Collapse relate to 9/11 Collapses\n\nStatus\nNot open for further replies.\n\n#### Mick West\n\nStaff member\nSo we have ma - the upwards force\n\nNo we don't. There's no \"a\". Acceleration comes from net force, so if the net force is zero there's no acceleration.\n\nYou need to describe things in terms of forces, not accelerations.\n\nAnd there's no homogenous \"stiffness of the structure\". Most of the stiffness is provided by the columns, which were essentially bypassed in the collapse (at the top tllted), they were not \"in the way\", and hence largely irrelevant to the speed of the collapse front.\n\nThis domino model on the other hand has the main support structure in the way all the way down, and is not really falling in any macro sense (dominos are largely just toppling, and then falling off).\n\n#### aka\n\n##### Member\nThere's no \"a\".\nIf there is no a, there is only g, and it falls.\nAcceleration comes from net force, so if the net force is zero there's no acceleration.\nNo net acceleration. The towers are being accelerated all the time. So they must be stable. But they will and must displace, even if only the tiniest amount. So of course there is a, lots of a even. At least as long as it stands.\n\nAnd there's no homogenous \"stiffness of the structure\".\nI know that and you know I know that. That it is not a constant spring is not the point. The point is that the load-displacement curve F(u) more or less already dictates the downwards acceleration (and hence, by necessity the \"stiffness\") and there is Oysteins Dirac impulse function that gives a clue as to how F(u) must look like. In any case, on average, it must be smaller than mg for it to fall.\nThis domino model on the other hand has the main support structure in the way all the way down, and is not really falling in any macro sense (dominos are largely just toppling, and then falling off).\nOn that I agree, the lateral ejections of the real thing were much more impressive.\n\n#### econ41\n\n##### Senior Member\nI am not aware of when, where and through what mechanism Bazant arrived at g/3, but that's what I got: By modelling pure momentum transfer of floors pancaking inelastically. No strength, force or energy of anything considered (connections can be thought of as holding the floors just barely, but requiring practically zero force/energy to disconnect once tapped on ever so lightly).\nAnd that is a valid representation of the real WTC events at least to first order significant quantification.\nThat seems as far away from anything Bazant has modeled in his WTC papers as I can think of.\nIt is - diametrically opposite. Bazant's modelling - all of it AFAIK - is based on the abstract \"Limit Case\" model from B&Z 2002 which simply didn't happen . The same \"take the abstract literally\" error which led T Szamboti astray followed by or in company with far too many debunkers.\n\nOr I missed that part (quite likely, given that I never truly studied any of them after B&Z", null, ")\nWhat Bazant modelled in his later papers - principally B&V 2007 with \"crush down crush up\" {\"cd/cu\"] - was definitively NOT applicable to WTC real event. Cd/cu has at least THREE fatal errors in that context.\n\nI've been making and explaining that assertion for about 5 years in the face of entrenched \"Bazantophile\" denialsim.\n\nI'll return you to the normal program.", null, "Last edited:\n\n#### econ41\n\n##### Senior Member\nExcellent point. 
Clearly the mechanism is entirely different than the tower collapses, which might lead to premature dismissal of the instructive value it offers.\nAgreed, agreed strongly, agreed with my usual proviso - respectively.\n\n\"instructive value\" for what? Understanding/explaining WTC collapses OR generic abstract principles? No problem with either but...\n\n...you know the rest of the advice.", null, "#### Mick West\n\nStaff member\nIf there is no a, there is only g, and it falls.\n\nGravity is a force. The use of \"g\" as an acceleration is just a convenience, as things normally fall with that acceleration. But the force of gravity is G*m1*m2/(r^2) - i.e. it's a force.\n\nThat force, absent opposing forces, will produce an acceleration downwards.\n\nWhen something rests on the ground then it is being opposed by an upwards force. It's not accelerating upwards.\n\nThere is no a, there isn't really a g either. If nothing is moving there is no acceleration. It's nonsensical to say something is accelerating in two directions at the same time.\n\n#### OneWhiteEye\n\n##### Senior Member\nYes,,,, IF the tower is tall enough.\nAnalogue would be skydiver in free fall. Eventually air friction force equals gravitational force and net acceleration reduces to zero. It is of little consequence if the jumper is just leaping off of a six foot step ladder. A shorter structure than the towers may well not see acceleration drop to zero.\nGood point. One which got me thinking... I have to think more before speaking.\n\n#### OneWhiteEye\n\n##### Senior Member\nHere is how I understand \"Mechanics of Progressive Collapse\" and I hope OWE will be so kind to correct me where I'm wrong.\nThis was an excellent rundown. There is a problem in there, but very nice all the same.\n\n...the effective acceleration only depends on how much mass is shed...\nAt this point, you have accounted for the external force due to structural resistance but not momentum change from inelastic accretion. Mass shedding is one side of the coin, mass accumulation is another. It turns out that it is this component which largely determines the eventual acceleration limit. It is of course possible to specify a form of F(u) which overrides that behavior but it wouldn't be realistic. This acceleration limit we discuss also applies to a uniform mass distribution without shedding, so in some respects you're ahead of the discussion.\n\nAnd since the area under F(u), let us call it E[elasticplasticpot], is smaller than E[gravpot], this is in essence a mathematical way of saying \"for every meter height, there is on average less Newtons upwards force in the way than necessary to decelerate the kilograms of the tower against the gravity of the planet.\"\nYes.\n\nSo far, Oysteins model (if I understand correctly) derives U, an optimum of the energy going into deformation/pulverization/etc/friction (in short: heat) to achieve optimum v, from perfectly inelastic collisions.\nYes. Slab model. Bam-bam-bam down the line.\n\nSo far, a Dirac impulse is assumed here, which is of course unrealistic (which does not matter because the Twin Tower floors did not hover mid-air) but extremely useful for analysis ...\nAlso because it allows a neat trick. Do you remember the crazy back-and-forth about momentum conservation we had a few years back? Is it or is it not conserved?\n\nYes, on the global level, it is.\nNo, for the structure as a whole, yet isolated system.\nYes, for imaginary Dirac-like impulse.\n\nIt's the last which permits a simple analysis to do so much. 
It essentially allows decoupling the structural and momentum-change components of retarding force.\n\n#### OneWhiteEye\n\n##### Senior Member\n\"instructive value\" for what? Understanding/explaining WTC collapses OR generic abstract principles?\nAbstract principles.\n\n#### OneWhiteEye\n\n##### Senior Member\ndoes not really make sense for a body at rest (the towers).\nI don't believe aka is talking about at rest. The load displacement relation only applies to something getting deformed.\n\n#### OneWhiteEye\n\n##### Senior Member\nNo net acceleration. The towers are being accelerated all the time.\nNo. \"Net\" only applies to forces. They may sum to zero which results in no acceleration.\n\n#### aka\n\n##### Member\nso in some respects you're ahead of the discussion.\nI agree. I guess I just wanted to have one \"free\" parameter in there for fine-tuning of motion history, because the others are dictated somewhat by the conditions given.\nDo you remember the crazy back-and-forth about momentum conservation we had a few years back? Is it or is it not conserved?\nHow could I ever forget it! Yes, it is conserved in a closed system. Always.\nIt's the last which permits a simple analysis to do so much. It essentially allows decoupling the structural and momentum-change components of retarding force.\nIt is the approach of finding the ideal, bestest possible collapse time! If the collisions are not perfectly inelastic, the mass does not accrete optimally, and a sole slab will fly ahead of the loosely accreted mass and impact the next one with less momentum, which will therefor then fall slower and so forth. If, on the other hand, more energy than needed is dissipated during the momentum exchange, less kinetic energy than possible is available and the rate of fall also decreases. Perfect inelasticity and therefor ideal mass accretion is ONE condition utterly favorable towards optimal rate of fall.\n\nAssuming a Dirac function is only the cherry on top to keep things simple here - of course we know that it is impossible to have infinitely high force for infinitely short displacement and infinite transition steepness, so that the area under the curve just happens to be equal to the energy that needs to be converted into heat. The Dirac function is not even a function mathematically, it is just a convenient tool for physicists.\n\nIn the real world, the force would be finite and stretched out over a displacement - be it micrometers or centimeters. This, in turn, would take time, and this in turn would result in less-than-optimal drop height, which in turn would mean the accreted mass cannot impact the next slab with optimum velocity - resulting in a smaller effective downwards acceleration overall.\n\nGravity is a force. The use of \"g\" as an acceleration is just a convenience, as things normally fall with that acceleration. But the force of gravity is G*m1*m2/(r^2) - i.e. it's a force.\nYes, gravity - the acceleration of a mass - is a force. g is the acceleration of that force.\nThat force, absent opposing forces, will produce an acceleration downwards.\nCorrect (for all things near the surface of the planet).\nWhen something rests on the ground then it is being opposed by an upwards force. It's not accelerating upwards\nThis is also correct.\nThere is no a, there isn't really a g either. If nothing is moving there is no acceleration. It's nonsensical to say something is accelerating in two directions at the same time.\nThis is not correct. 
There was a mathematician once who calculated that when two elephants push against each other with equal force, the resulting net force equals zero. He figured it must be safe to stand between them.\n\nThere is both an a and a g, and if ma and mg are equal, and just pointing into opposite directions, there is no displacement - the body is at rest in static equilibrium, that is all. It is still bathing in the vast force field resulting from the planet's mass.\n\nLast edited:\n\n#### Mick West\n\nStaff member\nThis is not correct. There was a mathematician once who calculated that when two elephants push against each other with equal force, the resulting net force equals zero. He figured it must be safe to stand between them.\n\nThere is both an a and a g, and if ma and mg are equal, and just pointing into opposite directions, there is no displacement - the body is at rest in static equilibrium, that is all. It is still bathing in the vast force field resulting from the planet's mass.\n\nThe a you use here is just a number you get when you divide the force opposing gravity by the mass. As the building is not actually accelerating then it's a meaningless number.\n\nAcceleration is the result of net forces. It does not have components. Force has components.\n\nWhere is a? Somehow \"stored\" in the subatomic bond of the structure? If you remove the top nine bricks from a stack of ten then will the bottom brick accelerate upwards at 9g?\n\n#### jaydeehess\n\n##### Senior Member.\nNo. \"Net\" only applies to forces. They may sum to zero which results in no acceleration.\n^^ This^^\nAcceleration is a consequence of net force.\n\n#### jaydeehess\n\n##### Senior Member.\nDeceleration is generally understood as a reduction in speed (where \"speed\" is the scalar value of velocity in the direction of movement). I.e. slowing down. So:\nRespectfully, that is simply another way of saying exactly what I said. Acceleration in the opposite direction to positive convention. Thus a reduction instead of an increase, in velocity.\n\nHowever many times I see people using 'decelleration' to imply a lessening, but still with the same vector direction, of acceleration.\n\nI do not in fact recall my physics prof in university ever using this term/ It was acceleration, a number with attendant direction. In the case of an object that only moves along a single dimension a +/- will suffice, in other situations such as a pendulum swinging, it moves in two dimensions, an electron in a CRT moves in all three directions. At that level the term 'decelleration' moves from ambiguous to downright meaningless.\nIMHO\n\nLast edited:\n\n#### jaydeehess\n\n##### Senior Member.\nThe strawman is simply a result of misapplication of physics by others. I guess you also missed it when Oystein ( I now note my new tablet'so annoying autocorrect) said the same thing.\n\nI am not \"explaining\" the WTC collapses with the TNB collapse. I am pointing out that a design peculiarity that allows for collapse need not be a deliberate feature the engineer built in.\nI understand. And I refute it by pointing out, by means of the domino tower model, that there is a measurable, energetic difference between \"shit happening\" and \"a plan coming together\", between failure due to stupidity and neglect and a perfect chain reaction, between dumb accident and blatant purpose, between ordo ab chao and intelligent design. 
I'm sorry, what exactly are you "refuting"?

The domino collapse illustrates the way a specific design collapses, a design that has lower 'floors' supporting the mass of the upper structure. That is a feature diametrically opposite of how the towers, and indeed most high rise structures, are built.

Mick's model demonstrates something much more akin to the design of the towers and indeed the columns do not even enter into the mode of collapse. It's the destruction of floor to column connections, thus the loss of lateral support offered by the floors to the columns, that causes collapse. Progression is solely due to loss of floors.

Your beef seems to be that the columns are too flimsy. Like I believe I said above, make it 3d and use 4X4 columns and the structure will still fall apart if the floors progressively are no longer attached to the columns.

The only way this could not result would be if the individual columns are, as put above, "meta-stable". That is to say that individual columns could stand upright on their own. In the TTs they fairly obviously could not.

Last edited:

#### Mick West

Staff member
Respectfully, that is simply another way of saying exactly what I said. Acceleration in the opposite direction to positive convention.

I wasn't disagreeing with you, I was putting what you said in another way to explain it to @aka.

However now I do actually disagree slightly with "Deceleration is ... a change in the sign of acceleration." The sign comes from the velocity, not acceleration. For example if a train is traveling at constant speed and then brakes, then it's decelerating, but the prior acceleration was zero. It's really just slowing down, it's a convenience word, not some kind of anti-acceleration.

Niggling things, but I think ultimately a lot of trutherism, especially the appeals to Newton, comes from a crippled epistemology in the realm of physics.

#### aka

##### Member
As the building is not actually accelerating then it's a meaningless number.
It is accelerating the mass opposite the acceleration of gravity, resulting in mechanical equilibrium. If it would not do that, then only gravity would act on the mass and displace it downwards.

I know the debate about whether "slowing down the acceleration" is "deceleration" has been had already in the debates with Tony Szamboti. While it is purely of semantical nature, I have learned it the way it has been presented by those who disagreed with him: if g = 9.81m/s², and a thing falls only at 5m/s², then it is decelerated, or accelerated in the opposite direction, for all intents and purposes, by 4.81m/s². I think technically, it has something to do with velocity and acceleration being vector quantities or something in that spirit, but don't quote me on that.
Where is a? Somehow "stored" in the subatomic bond of the structure?
Allow me to repost the diagram I made for the same discussion we had before in another thread:", null, "So yes, a is kx/m.
If you remove the top nine bricks from a stack of ten then will the bottom brick accelerate upwards at 9g?
Of course not. But it will release the elastic energy that has previously been stored in it to return to mechanical equilibrium. Let me think about that.
In a sense, yes.\n\nAnd if you try the same with a linear spring, yes, it will accelerate upwards if you suddenly unload it.\n\nLast edited:\n\n#### Mick West\n\nStaff member\nWhile it is purely of semantical nature, I have learned it the way it has been presented by those who disagreed with him: if g = 9.81m/s², and a thing falls only at 5m/s², then it is decelerated, or accelerated in the opposite direction, for all intents and purposes, by 4.81m/s². I think technically, it has something to do with velocity and acceleration being vector quantities or something in that spirit, but don't quote me on that.\n\nBeating a dead horse here. You can't sum acceleration vectors, you sum force vectors. Acceleration is the result of the net force, divided by the mass.\n\nSaying a falling object is being accelerated upwards is just nonsense. If something is falling with an acceleration less than g than that means there's a retarding force perpendicular to the force of gravity.\n\nForce is the important thing here. Acceleration is a measure of the rate of change in velocity. I think your point is being lost because of your strange insistence on this upwards acceleration of an object whose downward speed is increasing. Just make your point using force, and net acceleration.\n\n#### aka\n\n##### Member\nYou can't sum acceleration vectors, you sum force vectors.\nhttps://en.wikipedia.org/wiki/Acceleration\nAcceleration is the result of the net force, divided by the mass.\nForce is the product of mass and acceleration.\nSaying a falling object is being accelerated upwards is just nonsense.\nThis is what I said:\n\n\"I have learned it the way it has been presented by those who disagreed with [Tony Sz]: if g = 9.81m/s², and a thing falls only at 5m/s², then it is decelerated, or accelerated in the opposite direction, for all intents and purposes, by 4.81m/s².\"\n\nIf something is falling with an acceleration less than g than that means there's a retarding force perpendicular to the force of gravity.\nA \"retarding force\"! A force, that would be the product of mass and acceleration! So if something is falling with an acceleration less than g that means there is an acceleration in the opposite direction of g!\nI am quoting Wikipedia's article on acceleration because the disagreement with the most fundamental concepts is starting to make me doubt my sanity and google things I thought are elementary.\nI think your point is being lost because of your strange insistence on this upwards acceleration of an object whose downward speed is increasing.\nI think my point is quite clear. Gravity does not magically get switched off once the building stands. It is being accelerated all the time, which is why its mass has a weight, which is why there needs to be a normal force to keep it up all the time, decelerating it, so that the effective acceleration remains zero and the displacement within given parameters.\nJust make your point using force\nI will make my point using force, and Wikipedias diagram in the mechanical equilibrium article:", null, "mg is the downwards force, resulting from the mass and the acceleration due to gravity. N is the normal force, acting upwards. It results from there being something in the way of the green object that has a \"retarding force\". The green object does not move. It is in mechanical equilbrium. It does not fall, it does not move, it is not displaced, although there are still 6371 km to go to the center of gravity of the planet and thus, it has huge gravitational potential energy. 
Because the yellow stuff is too stiff and is in the way, the green object does not move, because mg = N (technically, mg = -N, but we can't do vector signs here).\n\nETA\n\nJust make your point using force, and net acceleration.\n\"Net\" only applies to forces.", null, "Last edited:\n\n#### Mick West\n\nStaff member\nI will make my point using force, and Wikipedias diagram in the mechanical equilibrium article:", null, "mg is the downwards force, resulting from the mass and the acceleration due to gravity. N is the normal force, acting upwards. It results from there being something in the way of the green object that has a \"retarding force\". The green object does not move. It is in mechanical equilbrium. It does not fall, it does not move, it is not displaced, although there are still 6371 km to go to the center of gravity of the planet and thus, it has huge gravitational potential energy E=mgh. Because the yellow stuff is too stiff and is in the way, the green object does not move.\n\nThink of it another way. Does acceleration cause force, or does force cause acceleration?\n\nI think it's clear that force causes things to accelerate. Acceleration is just a measure of the rate of change of velocity. It's not a thing in itself. it does not do anything.\n\nPerhaps this is confusing because F = ma suggests that Force is the result of mass multiplied by acceleration. This suggests the mass and the acceleration are creating a force.\n\nBut really one or more forces acts on a mass, giving a net force, and this creates an acceleration.\n\nLook at the convolution in the sentence I bolded from your post\n\n\"mg is the downwards force, resulting from the mass and the acceleration due to gravity\"\n\nThat's essentially circular. You are saying the downwards force is caused by the acceleration caused by the downwards force. And yet nothing is moving.\n\ng is not gravity. It's the acceleration resulting from the force of gravity. It's a handy approximate constant for the resultant acceleration in the absence of any other force. And since the force of gravity is also a product of the mass, we can handily use g to calculate the force Fgravity​ = mg\n\nIf we've got another force acting on a body, like Fspring​ = kx (stiffness*displacement) then you can subtract the forces, assuming they act in opposing directions (although it's more correct to use -kx as the force, and add them)\n\nFnet​ = Fgravity​ - Fspring​\n\nThen you can calculate the acceleration as (Fgravity​ - Fspring​)/m\n\nNow you might argue that you could expand this equation into two accelerations Fgravity​/m and Fspring​)/m (i.e. your g and a) and add them together, and get the same result. But you are introducing a meaningless term a, which would be equal to the acceleration from the spring if there was no gravity. But there isn't.\n\nIt makes no sense to say there's an upwards acceleration. However there IS an upwards force.\n\n#### aka\n\n##### Member\nThink of it another way. Does acceleration cause force, or does force cause acceleration?\nI will not think of it any other way, because the way I think it is correct, as I have shown. A force accelerates a mass and a mass exerts a force when it is accelerated. actio=reactio. This is fundamental and elementary. 
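Aside: a minimal numerical sketch of the bookkeeping the two sides keep circling - forces add as vectors, and the single resulting acceleration is the net force divided by the mass. The linear spring standing in for the support is an assumption for illustration only, not anyone's actual structural model.

```python
g = 9.81      # m/s^2, downward taken as positive
m = 1000.0    # kg, arbitrary mass
k = 5.0e6     # N/m, assumed stiffness of a linear "spring" support

def acceleration(x):
    """Downward acceleration of the mass when the support is compressed by x metres."""
    f_gravity = m * g        # downward force from gravity
    f_support = k * x        # upward force from the support
    f_net = f_gravity - f_support
    return f_net / m         # the one and only acceleration of the mass

x_eq = m * g / k              # compression at which the two forces balance
print(acceleration(0.0))      # 9.81  -> no support at all: free fall
print(acceleration(x_eq))     # 0.0   -> static equilibrium: zero net force, zero acceleration
print(acceleration(x_eq / 2)) # 4.905 -> support carries only half the weight: falls at g/2
```

Whether one calls the kx/m term an "upward acceleration" or simply an upward force that is divided by m at the end is the semantic point being argued; the arithmetic is identical either way.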
There can be not much use in an ontological debate about the prima causa of the universe's inner workings except to distract from the point being made: If a mass does not accelerate, it does not mean no force is acting upon it, it only means that the vector sum of all forces acting upon it is zero.\n\nPerhaps this is confusing because F = ma suggests that Force is the result of mass multiplied by acceleration. This suggests the mass and the acceleration are creating a force.\n\nBut really one or more forces acts on a mass, giving a net force, and this creates an acceleration.\nIt is not confusing at all. Yes, the mass and the acceleration are creating a force. If the mass were not accelerated, it would exert no force. And when the net force is zero, the acceleration of the mass equals zero. Thus, a = 0 means not that no force is acting at all.\nLook at the convolution in the sentence I bolded from your post\n\n\"mg is the downwards force, resulting from the mass and the acceleration due to gravity\"\n\nThat's essentially circular. You are saying the downwards force is caused by the acceleration caused by the downwards force.\nThis reminds me of the time they asked Feynman to explain how magnets work.\n\nObjects attract each other. That phenomenon is called gravity. It is a force. It depends on their mass and their distance.\n\nSo, I am saying the downwards force results from the mass being accelerated due to gravity. You said yourself gravity is a force. You gave the formula.\n\nF=G*m1*m2/(r^2)\n\nm1 is the mass of the planet, m2 is the mass of the green object, r1 is the distance of the surface to the middle of the planet. If you make a unit check, you arrive at Newtons. This is a force. Hence g is G*m1/r² and it for most m1 >> m2 and most r ~ 6000km relatively constant. The unit will be meters per second squared. It is the rate at which objects fall. And it says what a given mass weighs. mg is the downwards force, resulting from the weight of the object, it is its gravity. It is the force with which the planet and the object attract each other, and the force that must at least be applied in the opposite direction to lift it - to increase their distance - the mass must be accelerated!\nAnd yet nothing is moving.\nThat is what I am saying. Because there is an equal and opposite force acting against gravity.\ng is not gravity. It's the acceleration resulting from the force of gravity.\nSo we agree on that at least.\nThen you can calculate the acceleration as (Fgravity - Fspring)/m\nCorrect. And since we know roughly how fast, at which rates and velocities, the object we are analyzing falls and when, all we are asking for now is the relation between its F[gravity] and its F[spring]. We know a, we know g, we have x and h so we can approximate m and we get k.\n\nNow you might argue that you could expand this equation into two accelerations Fgravity/m and Fspring)/m (i.e. your g and a) and add them together, and get the same result. But you are introducing a meaningless term a, which would be equal to the acceleration from the spring if there was no gravity. But there isn't.\na is not meaningless, it is our given, our measurement, our observation. We already have it, roughly, every step of the way, even if there is a little margin of error.\nIt makes no sense to say there's an upwards acceleration. However there IS an upwards force.\nShall we simply call it \"retardation\" then? 
There must be a name for when a thing accelerates less than it would in freefall, even if not at all, when it is in mechanical equilibrium.\n\nLast edited:\n\n#### Mick West\n\nStaff member\nShall we simply call it \"retardation\" then? There must be a name for when a thing accelerates less than it would in freefall, even if not at all, when it is in mechanical equilibrium.\n\nYes, it's when there's some upwards force.\n\n#### jaydeehess\n\n##### Senior Member.\nI wasn't disagreeing with you, I was putting what you said in another way to explain it to @aka.\n\nHowever now I do actually disagree slightly with \"Deceleration is ... a change in the sign of acceleration.\" The sign comes from the velocity, not acceleration. For example if a train is traveling at constant speed and then brakes, then it's decelerating, but the prior acceleration was zero. It's really just slowing down, it's a convenience word, not some kind of anti-acceleration.\n\nNiggling things, but I think ultimately a lot of trutherism, especially the appeals to Newton, come from a crippled epistemology in the realm of physics.\nThe convention + is the same for velocity and acceleration. Deceleration is acceleration with - sign.\n\nMy main point is that the term is ambiguous given it is used interchangeably for decreased acceleration in the positive direction, and acceleration with - sign in the acceleration vector.\n\n#### aka\n\n##### Member\nShall we simply call it \"retardation\" then? There must be a name for when a thing accelerates less than it would in freefall, even if not at all, when it is in mechanical equilibrium.\nYes, it's when there's some upwards force.\nI am glad we could finally, not despite, but through what might at first seem like a needlessly pedantic and nonsensical debate, to a mutual understanding of each other's... \"epistemologies\", however... \"crippled\". If you insist that what is an \"acceleration\" be named \"retardation\" just because it is the rate of change in velocity - or the force per mass - in a different direction, I will gladly comply, if it allows us to get back on topic.\n\nWe know that the \"retardation\" of the structure must equal the gravitational acceleration so it stands up. If additional forces act on the structure - a Tae Bo class, a subtropical hurricane, a library full of heavy books - the structure must still be able to \"retard\" the accelerations resulting from those forces so the structure remains in mechanical equilibrium.\n\nExpressed in terms of forces, the forces keeping the structure up must equal the gravitation resulting from its mass. If additional forces act on the structure, it must still be able to exert forces in the opposite direction - \"push back\" - so the structure remains in mechanical equilibrium.\n\nExpressed in terms of energy, the elastic potential energy must do the virtual work of keeping the displacements due to additional inputs of mechanical energy within a given margin so that the structure does not convert its gravitational potential energy into kinetic energy.\n\nWe also know, by observation, that when the structure falls, the \"retardation\" is smaller than half the gravitational acceleration on average. In terms of forces, the forces acting on the structure during the fall - the friction force - are smaller than half the weight of the structure on average. 
In terms of energy, all that keeps the gravitational potential energy from being completely converted into kinetic energy is the energy of friction.

This leads us to a fool-proof way of describing the system objectively, mathematically and physically.

We have the Bazantian computational model, we have Oystein's computational model, and we have the domino tower and the Twin Towers. I am convinced that we can mold these approaches into a grand unified theory of tower self-disassembly, simply by taking Oystein's computational model and, instead of letting the masses hover mid-air, resting them on "springs" with known load-displacement curves (à la Bazant) so the structure stands up. Instead of a Dirac function, we only have to "smear" the function a little so its area equals the energy of friction, with still high enough a peak so that small displacements can be balanced to remain in mechanical equilibrium.

If we now allow the "mass shedding" parameter to follow an arbitrary function, this computational model will be able to describe both the domino tower and the Twin Towers, even the "NMSR does the Heiwa Challenge" "weights on toothpicks on a broom stick" model and psikeyhackr's "Momentum Interference Test" model, and additionally describe the possibility of arrest as is the case in the crushing experiments "Collapse onto cumulative supports" and Coles' models with the concrete slabs and paper loops and pizza box columns - and the real-world "experiments" (botched demolitions), and even vérinages - simply by adjusting the load-displacement curve relative to mg.

Any objections?

Last edited:

#### Mick West

Staff member
We know that the "retardation" of the structure must equal the gravitational acceleration so it stands up. If additional forces act on the structure - a Tae Bo class, a subtropical hurricane, a library full of heavy books - the structure must still be able to "retard" the accelerations resulting from those forces so the structure remains in mechanical equilibrium.

What accelerations? If it's not moving then there is no acceleration.

We can't have a discussion if you call things what they are not.

#### aka

##### Member
If it's not moving then there is no acceleration.
I already showed that that is not true. If it is not moving (EDIT: or more accurately, moving at constant velocity), the sum of all accelerations (or "retardations", or forces per mass) is zero, that is all; it does not mean there is no acceleration at all. Accelerations are vector quantities and add up according to the parallelogram law. "Crippled epistemology" or not, this is how classical mechanics works.

To throw a famous experiment back at "debunkerism": hold a bowling ball straight away from your body by an outstretched arm. Keep it still. It is not moving. According to your epistemology, there is no acceleration, because it is not moving. So its weight is zero, because F=m*0=0, and you can hold it up for all eternity.

That is clearly wrong. The bowling ball is still being accelerated by gravitation, giving it weight F=mg, which you will have to acknowledge when your muscles begin to hurt.

We can't have a discussion when you insist on denying a fundamental concept of classical mechanics.

Last edited:

#### Mick West

Staff member
Sorry it seems you are immune to understanding this (you can vector sum forces, and the acceleration is the resultant net force, divided by the mass).
So I'm done trying to explain it.

#### jonnyH

##### Senior Member.
To throw a famous experiment back at "debunkerism": hold a bowling ball straight away from your body by an outstretched arm. Keep it still. It is not moving. According to your epistemology, there is no acceleration, because it is not moving. So its weight is zero, because F=m*0=0, and you can hold it up for all eternity.

That is clearly wrong. The bowling ball is still being accelerated by gravitation, giving it weight F=mg, which you will have to acknowledge when your muscles begin to hurt.
No, in that scenario F is the sum of the force of gravity and the equal and opposite force applied by your arm, so;

F=mg+FArm=0

acceleration is a measure of the rate of change of velocity, so an object that is motionless like the bowling ball has no acceleration. When your arm gives up, i.e. when FArm=0, the bowling ball will accelerate towards the floor at g.

#### aka

##### Member
Sorry it seems you are immune to understanding this
Still no need for insults.
you can vector sum forces
...because velocities, and hence accelerations, and hence momenta and forces are vector quantities that add according to the parallelogram law.
and the acceleration is the resultant net force, divided by the mass
This statement and the statement that the net force is the resultant acceleration multiplied with the mass are equivalent. F=ma, hence a=F/m. a=F/m, hence F=ma.
So I'm done trying to explain it.
Then please address the point I've been trying to make instead of trying to prove I am unable to understand, although I backed my assertions.
No, in that scenario F is the sum of the force of gravity and the equal and opposite force applied by your arm, so;

F=mg+FArm=0
F is the net force here. It is only zero because mg and F[Arm], the two forces acting on the mass, have equal magnitude and point into opposite directions. You still have g=9.81m/s², and you have F[Arm], so a = F[Arm]/m[Bowlingball] = -9.81m/s², so that F[net]=m(g+a)=m(9.81m/s²-9.81m/s²)=m*0=0.

I am glad you agree that F[Arm] is necessary to keep the bowling ball up. A huge step forward.
acceleration is a measure of the rate of change of velocity
Correct.
so an object that is motionless like the bowling ball has no acceleration
...or, to be more precise, the vector sum of all accelerations (=forces per mass) is zero.
When your arm gives up, i.e. when FArm=0, the bowling ball will accelerate towards the floor at g.
Precisely. Keeping the bowling ball where it is, keeping its gravitational potential energy, requires work to be done.

#### Keith Beachy

##### Senior Member
... famous experiment back at "debunkerism": hold a bowling ball straight away from your body by an outstretched arm. Keep it still. It is not moving. According to your epistemology, there is no acceleration, because it is not moving. So its weight is zero, because F=m*0=0, and you can hold it up for all eternity.
... We can't have a discussion when you insist on denying a fundamental concept of classical mechanics.
If the bowling ball is 1kg, it weighs 9.81 newtons (on earth), more on Saturn, less on Mars. There is no acceleration of the ball standing still, but the ball has mass, and the ball is on earth, thus to say it has no weight is not understanding physics. F=ma, the ball is not moving, a=0, but it has weight. The problem is not a "debunkerism" one.

Truth is not an insult

On 9/11 the arm failed in fire..

""requires work to be done."" nope, not in physics...
the work would be moving the ball from the floor to standing up... you don't get credit for work in physics for standing still holding a bowling ball... it could be torture\n\nLast edited:\n\n#### Trailspotter\n\n##### Senior Member.\nhold a bowling ball straight away from your body by an outstretched arm. Keep it still. It is not moving. According to your epistemology, there is no acceleration, because it is not moving. So its weight is zero, because F=m*0=0, and you can hold it up for all eternity.\n\nThat is clearly wrong. The bowling ball is still being accelerated by gravitation, giving it weight F=mg, which you will have to acknowledge when your muscles begin to hurt.\n\nThat is clearly wrong. Weight of the ball is NOT the force that acts on the ball, it is the force which the ball exerts on the rest, in this case a human arm. According to the Newton's third law, the arm exerts equal and opposite force on the ball. The sum of this force and the force of gravity acting on the ball is zero.\n\nKeeping the bowling ball where it is, keeping its gravitational potential energy, requires work to be done.\nThat is also wrong. The work has already been done to put the ball where it is. There is no additional mechanical work required to keep it where. Your muscles work to keep the skeleton in the fixed position, which is far from the human body resting state. A body in rigour mortis state or a wired skeleton will serve as the ball's rest perfectly; no muscle work required.\n\n#### Mick West\n\nStaff member\nSorry, this is really wasting everyone's time. I'm going to ban @aka for one month, or until he demonstrates an understanding of the fact that acceleration is the result of the net force.\n\nI realize that this might seem a bit harsh, but there's really no point just endless rehashing it here. We can all pick up again in a month if @aka so wishes.\n\n##### Member\nHow very unexpected.\n\n#### Mick West\n\nStaff member\nHow very unexpected.\n\nYes, it's unfortunate because it creates the misconception to some that that I'm cracking down on what he's saying because don't like the implications.\n\nBut really @aka is simply laboring under a very simple misconception, that there is an \"upwards acceleration\" keeping a building from falling down. Really there's an upwards force, matching the force due to gravity, which results in zero net force, and hence zero acceleration.\n\nBut I've already said the same thing several times in this thread, and he keeps insisting that something can \"be accelerated\" by gravity and by normal forces, etc, when its velocity is not changing.\n\nSimply a semantic point? Sure, you could say there's a \"imaginary acceleration\", for the purposes of doing the math. The problem here is one of being able to communicate clearly. \"Acceleration\" means that there's a change in velocity. So if we are talking about the \"acceleration\" of a structure, then the casual reader who is familiar with physics will assume you mean that the structure is moving.\n\n@aka can make any point he has to make here by discussing the sum of forces and then net acceleration, rather than a sum of \"accelerations\". The math works out the same. It's simply an issue of communications.\n\n@aka also keeps mentioning how velocity and acceleration are vector quantities and so can be added to create a net vector. Now I do actually understand vector arithmetic in this context. 
In fact I had a job (video game programming)for 20 years, of which a significant percentage (video game physics) involved vector arithmetic with position, velocity, force, and acceleration vector. Sometimes I'd spend weeks doing essentially nothing but vector arithmetic. It's foundational to video game physics:", null, "So I understand the point he is trying to make.\n\nBut adding together velocity (or acceleration) vectors only makes sense if the vectors are in different frames of reference. For example, you are on a train moving at 50mph, you walk backwards on the train at 3 mph, you can add the velocity (one dimensional here) and get a net velocity (relative to the ground) of 47 mph. You can do this because you are measuring the velocities in different frames of reference. One is relative to the ground, and the other relative to the train.\n\nBut in a building, not only is nothing moving, but if things start moving then the velocity and acceleration we are interested in are all in the same frame of reference (i.e. relative to the ground).\n\nIn a single frame of reference, an object can have multiple forces acting on it. But it only has one velocity, and one acceleration.\n\n#### Mick West\n\nStaff member\n...because velocities, and hence accelerations, and hence momenta and forces are vector quantities that add according to the parellelogram law.\n\nAnd a very minor point for clarity: the parallelogram law is irrelevant here. You add vectors visually head to tail, or in practice just add their individual x,y,z components.", null, "The parallelogram law of vector addition is really just an illustration of the commutative nature of vector addition (a+b = b+a). It's a trivial bit of geometry that's helpful in understanding vector addition, but it's not how you add vectors.", null, "##### Member\nI really think you should allow @aka to respond if you're going to continue to discuss his comments to this extent after banning him.\n\nStatus\nNot open for further replies.\n\nRelated Articles" ]
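A closing aside on the vector-arithmetic point above: component-wise (head-to-tail) addition of force vectors, followed by a single division by the mass, is all the bookkeeping there is. The numbers below are arbitrary.

```python
def add(*vectors):
    """Head-to-tail (component-wise) sum of any number of 3-D vectors."""
    return tuple(sum(components) for components in zip(*vectors))

m = 10.0                               # kg, arbitrary
f_gravity = (0.0, 0.0, -9.81 * m)      # N, downward
f_normal  = (0.0, 0.0, +9.81 * m)      # N, upward support reaction
f_wind    = (12.0, 0.0, 0.0)           # N, arbitrary lateral load

f_net = add(f_gravity, f_normal, f_wind)
a = tuple(component / m for component in f_net)
print(f_net)  # (12.0, 0.0, 0.0) -> the vertical forces cancel
print(a)      # (1.2, 0.0, 0.0)  -> one net force, one acceleration, purely lateral
```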
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://www.metabunk.org/attachments/poten-png.18778/", null, "https://www.metabunk.org/data/MetaMirrorCache/b66ff0462e8d5dd1283c8ada44266dc2.png", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://www.metabunk.org/data/MetaMirrorCache/b66ff0462e8d5dd1283c8ada44266dc2.png", null, "https://www.metabunk.org/attachments/upload_2016-4-24_6-30-50-png.18836/", null, "https://www.metabunk.org/attachments/upload_2016-4-24_6-52-49-png.18838/", null, "https://www.metabunk.org/attachments/upload_2016-4-24_6-50-24-png.18837/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91212547,"math_prob":0.88663566,"size":3502,"snap":"2021-43-2021-49","text_gpt3_token_len":744,"char_repetition_ratio":0.12864494,"word_repetition_ratio":0.049382716,"special_character_ratio":0.19531696,"punctuation_ratio":0.08988764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9733713,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,2,null,null,null,2,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T05:30:05Z\",\"WARC-Record-ID\":\"<urn:uuid:16ef9d3f-5624-40a3-9a5a-72a6de7b903b>\",\"Content-Length\":\"401835\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95c787cf-86bb-415f-bc64-2ce170460ec3>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e122f2f-471f-4a82-ab9f-6e3bfac0b300>\",\"WARC-IP-Address\":\"104.26.11.98\",\"WARC-Target-URI\":\"https://www.metabunk.org/threads/how-does-this-domino-tower-collapse-relate-to-9-11-collapses.7502/page-2\",\"WARC-Payload-Digest\":\"sha1:Z7LNFOW6P4YVESQZVQK7YMNBSMO72MZL\",\"WARC-Block-Digest\":\"sha1:43D3Z32QLMTZMRCMJHDULJAVT4ADNQMZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585196.73_warc_CC-MAIN-20211018031901-20211018061901-00498.warc.gz\"}"}
http://www.tiberlab.com/en/tutorials/90.html
[ "TIGHT-BINDING SIMULATION OF A GaN QUANTUM DOT\n\nIn this Tutorial we will see how to perform the simulation of a GaN/AlN Quantum Dot (QD) structure by employing the Empirical Tight Binding (ETB) Module.\n\nWe first define an atomistic structure corresponding to the quantum cluster where we want to apply ETB calculations, than we apply continuous simulations on the whole device structure, in order to calculate strain map and equilibrium solution for potentials in device.\n\nThen quantum calculations are performed with empirical_tb Module to get the electron and hole states in the QD.\n\nThe device structure is defined in the geometry .geo file and is the following:\na spherical GaN QD with radius 1 nm inside a 5X5 nm AlN cubic region\n\nIn order to execute correctly this example you should have the following files in the same working directory:\nquantum_dot_GaN_TB.geo:input file for GMSH\nquantum_dot_GaN_TB.msh: mesh file produced by the GMSH script quantum_dot_GaN_TB.geo\n\nIn the following, some features of the input file will be described. For further details you can refer to the program reference manual.\n\n### DEVICE STRUCTURE\n\nIn the Device section, the QD heterostructure is described: the GaN quantum dot region (ball), the AlN qbox, an intrinsic buffer AlN region and two n and p-doped AlN regions.\nThe crystal directions are defined for this wurtzite structure:\n\n```x-growth-direction = (-1,2,-1,0)\ny-growth-direction = (1,0,-1,0)\nz-growth-direction = (0,0,0,-1)\n```\n\nThe regions are defined in the usual way:\n\n```Region ball\n{\nmaterial = GaN\nDoping\n{\nNd = 1e15\ntype = donor\nEd = 0.025\n}\n}\n................\n```\n\nA Cluster named atomistic is declared, including the ball QD region and the qbox AlN barrier region\n\n```Cluster atomistic\n{\nregions = (ball, qbox)\n}\n```\n\n### ATOMISTIC STRUCTURE\n\nAn atomistic representation of the above defined atomistic cluster is generated by means of the Atomistic block\n\n```Atomistic tb1\n{\nreference_region = nside\nregions = atomistic\npassivation = yes\nprint = ( xyz, gen, tgn)\ntranslation = (0.0, 0.8983559, -4.39363)\n}```\n\nThe reference region is chosen to provide the lattice parameters with which the crystalline structure is built. In this case the lattice is that of AlN, the material composing the nside region.\n\n```reference_region = nside\n```\n\nIn the QD region, the GaN atoms are then substituted in the lattice basis.\nWe will see in the following how Elasticity Module is then used to apply strain induced deformation to the mesh which is then projected to the atomic structure of GaN qdot.\n\nFinally, passivation is performed at the boundaries of the heterostructure\n\n```passivation = yes\n```\n\nA print instruction gives in output the atomic structure for a visualization.\n\n### SIMULATION MODULES\n\n#### 1. Elasticity\n\nElasticity is used here to calculate lattice mismatch induced strain tensor in the whole structure. Moreover, in this case a non-linear calculation is performed, to feed back obtained displacements to the mesh elements. The mesh is thus deformed and a new strain calculation is performed with the new mesh.\n\nThrough these keywords:\n\n```mesh_deformation = true\nshape_iterations = 1\n```\n\nthe strain is computed iteratively until the convergence on the structure deformation is reached.\n\nNOTE: This step is required to obtain a correct atomistic representation of  a heterostructure.\n\n#### 2. 
Drift-diffusion

As for drift-diffusion, as usual we define a simulation

```name = dd
```

belonging to the model driftdiffusion and associated to the whole device (default choice).
Polarization is included through:

```polarization (piezo, pyro) {}
```

The Boundary Regions for drift-diffusion are the two contact regions, defined by the two boundary surfaces anode and cathode.

#### 3. Empirical Tight-Binding

For the atomistic quantum calculations, we define two empirical_tb simulations, named tb1 and tb2. In the first we will calculate the ground states; in the second one these calculated states will be loaded from the output file, to complete the calculation of TB states.

```Module empirical_tb{
regions = atomistic
name = tb1
atomistic_structure = tb1
potential_simulation = dd
plot = (tbstates, MeshStatesNodes )
Solver
{
num_valence_eigenvalues = 1
num_conduction_eigenvalues = 1
long_tolerance = 1e-4
}
}
```

In both cases the associated region is the Cluster atomistic and the associated potential simulation is the drift-diffusion dd simulation. Note that with this link the potential profile, including built-in and polarization fields, is correctly included into the TB Hamiltonian, through a correction on the on-site elements.

Another critical point is that Harrison scaling of ETB parameters is here applied
(it's a default whenever a strain simulation is performed on the system, as in this case).
Scaling is required in the presence of material deformation, which causes atom displacement from
the equilibrium position.

In tb1, only one state is calculated for holes and electrons:

```num_valence_eigenvalues = 1
num_conduction_eigenvalues = 1
```

In tb2:

```Module empirical_tb
{
regions = atomistic
name = tb2
atomistic_structure = tb1
potential_simulation = dd
plot = (tbstates, MeshStatesNodes)
Solver
{
num_valence_eigenvalues = 4 # 2
num_conduction_eigenvalues = 4 # 2
long_tolerance = 1e-4
}
}
```

calculated states are loaded from the output file by defining in the Solver block:

```load_path = output_tb1
```

and the calculation of 4 states is completed, starting from those already available.

```num_valence_eigenvalues = 4 # 2
num_conduction_eigenvalues = 4 # 2
```

This option may be useful to continue a long calculation of many eigenstates after it has been stopped by the user for any reason.

#### 4. TB Optics

With the Module opticstb it is possible to perform the calculation of the optical matrix from the TB Hamiltonian.

```Module opticstb
{
name = opt
regions = atomistic
initial_state_model = tb2
final_state_model = tb2
compute_strengths = true
plot = (matrix_elements, optical_spectrum_k_0)
output_format = grace
Emin = 4.25
Emax = 4.75
dE = 0.001
}
```

Note that, differently from the analogous Optics Module for EFA, here the initial state and the final state source models must be the same.
In this case, we choose the tb2 simulation, which computes four eigenstates.
The other keywords are similar to those of the EFA case.
Note that we choose

```compute_strengths = true
```

to calculate the optical strength output.

Run simulations

We may now run tiberCAD to calculate strain with Elasticity (str) and driftdiffusion (dd) for an equilibrium solution.
Then, for tight-binding, we first run tb1 for the calculation of the ground state

```solve = (str, dd, tb1)
```

then we run tb2 with

```load_states = true
```

to load the existing ground state solution and calculate further states, and then finally the optical properties (opt)

```solve = (str, dd, tb2, opt)
```

```tibercad quantum_dot_GaN_TB.tib
```

### Output

After the execution, the output directory contains, as usual, the results for the simulations performed. As for the tight-binding simulation, the file tb2.dat contains a table with the calculated eigenvalues for electrons and holes, together with their occupation index.
The .cube files have also been generated, which contain the information on the electron and hole wavefunctions (square modulus). The format of these data files is supported by the visualization software jmol, an open-source Java viewer for chemical structures in 3D http://jmol.sourceforge.net.
By using jmol, we can load the atomic structure generated by the tiberCAD Atomistic Generator, contained in the file tb1.xyz. Here Ga atoms are shown in red.", null, "Then we can visualize the isosurface of e.g. the electron ground state, with this command:

```isosurface mo1 color green cutoff 0.00005 "mo_cb_01.cube"
```

Analogously, we can visualize the hole state.

The figure below shows the confinement of the two states inside the spherical GaN quantum dot: in green the conduction state and in yellow the valence state.", null, "" ]
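A side note on the Harrison scaling mentioned in the tight-binding setup above: the generic textbook rule scales two-centre hopping integrals with bond length roughly as (d0/d)^2, which is why strain-displaced atoms change the Hamiltonian at all. The sketch below only illustrates that generic rule; the exponents and parameters actually used by tiberCAD are not reproduced here, and V0 and d0 are placeholders.

```python
def scaled_hopping(V0, d0, d, eta=2.0):
    """Harrison-type distance scaling of a hopping integral: V(d) = V0 * (d0/d)**eta."""
    return V0 * (d0 / d) ** eta

V0 = -1.60   # eV, placeholder unstrained hopping parameter
d0 = 1.90    # angstrom, placeholder ideal bond length
print(scaled_hopping(V0, d0, 1.95))   # a stretched bond gives a slightly weaker hopping
```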
[ null, "http://www.tiberlab.com/images/stories/products/device_sim/tibercad/sample_apps/tb-qdot/tb2.jpg", null, "http://www.tiberlab.com/images/stories/products/device_sim/tibercad/sample_apps/tb-qdot/states_1c_green.el.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84840304,"math_prob":0.9811593,"size":5180,"snap":"2019-51-2020-05","text_gpt3_token_len":1138,"char_repetition_ratio":0.13350077,"word_repetition_ratio":0.0,"special_character_ratio":0.1934363,"punctuation_ratio":0.09766926,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935867,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T00:10:32Z\",\"WARC-Record-ID\":\"<urn:uuid:c712b2ff-43f4-42dd-8f5e-2d3b02daa4f1>\",\"Content-Length\":\"25759\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14d39d99-28ea-4c76-b62b-dd45b1c14c76>\",\"WARC-Concurrent-To\":\"<urn:uuid:34ab9389-de42-4b59-a61b-c335cf5b98af>\",\"WARC-IP-Address\":\"160.80.80.254\",\"WARC-Target-URI\":\"http://www.tiberlab.com/en/tutorials/90.html\",\"WARC-Payload-Digest\":\"sha1:2A4XJRAOH6FS6VNIUGEJGL5S2PFKY3AU\",\"WARC-Block-Digest\":\"sha1:DABEPSMKCCFJWHQBYBG64K73BYRZHEQF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251783342.96_warc_CC-MAIN-20200128215526-20200129005526-00435.warc.gz\"}"}
https://www.alvesmartins.be/Jul/02_consume-mm-gravel-quantity-in-cubic-meter.html
[ " consume mm gravel quantity in cubic meter\n\n# consume mm gravel quantity in cubic meter\n\n•", null, "### how many cubic meter of 12mm pea gravel is there in one ...\n\nJul 18, 2010· Best Answer: Need to know are you talking about a metric ton or a US ton. Then what does each pea of gravel weigh, and what is the average percentage of each size of gravel. Or are you wanting to know how many pieces of gravel there is in 1 cubic meter? Again need to know the percentage of each of the sizes of gravel in the mixture.\n\nGet Price\n•", null, "### LACalc - a calculator for Landscape Architects, Landscape ...\n\nIf the units are omitted feet (or meters) are assumed. Use the Type box to choose the description that best matches the characteristics of your gravel. Select 'Rule of Thumb' to use a generic conversion factor of 1.5 tons per cubic yard (1.78 tonnes per cubic meter).\n\nGet Price\n•", null, "### Cubic Meter Calculator (in,ft,yd,mm,cm,m to cubic meter)\n\nConvert from measurements in cm to cubic meters, when we measure the dimension of a carton with a ruler, the unit is centimeters, and we need calculate the cubic meters. 42 cm = 42 ÷ 100 m = 0.42 m 37 cm = 37 ÷ 100 m = 0.37 m 28 cm = 28 ÷ 100 m = 0.28 m 0.42 × 0.37 × 0.28 = 0.043512 m³\n\nGet Price\n•", null, "### How Much Dirt, Fill, Sand or Gravel Do I Need? - Mr Dirt ...\n\nOct 08, 2014· Dirt, sand, and gravel are typically sold by the cubic yard. This is 3' x 3' x 3', or 27 cubic feet in volume. A cubic yard of material will cover approximately a 10' x 10' area to a depth of 2\". Using this information, you can calculate how much material you need for your project.\n\nGet Price\n•", null, "### 1 MINE SAND/ARENA DE MINA 2 GRAVEL/GRAVA DE MINA .\n\nTotal Amount Picture * As example no specific brand is being requested: 1; MINE SAND/ARENA DE MINA; 3 CZ - cubic meter; 2 GRAVEL/GRAVA DE MINA T.M.A. 19 MM Ø (3/4), M3; 3 CZ - cubic meter; 3 CEMENT/CEMENTO (GRIS) PORTLAND TIPO II ... CZ - cubic meter 13. CONCRETE PUMPING/BOMBEO DE CONCRETO 184. CZ - cubic meter ...\n\nGet Price\n•", null, "### Material Volume Calculator | Gorham Sand and Gravel ...\n\nTo use the Volume Calculator, simply enter the width, length, and thickness of your project, click on whether you are measuring the thickness in feet or inches, then click on the Calculate button. The calculator will estimate the number of cubic yards of material that will be required.\n\nGet Price\n•", null, "### how many cubic meter of 12mm pea gravel is there in one ...\n\nJul 18, 2010· How many cubic meter of 12mm pea gravel is there in one ton? Suppose I need 1 cubic meter of pea gravel which is 8 mm to 12 mm in size, what is the equivalent quantity in tons. Follow . 1 answer 1. Report Abuse. Are you sure you want to delete this answer? Yes No.\n\nGet Price\n•", null, "### m³ - Cubic Meter. Conversion Chart / Capacity and Volume ...\n\nThis is a conversion chart for cubic meter (Metric). To switch the unit simply find the one you want on the page and click it. You can also go to the universal conversion page. 2: Enter the value you want to convert (cubic meter). Then click the Convert Me button. Your value gets instantly converted to all other units on the page. 
3\n\nGet Price\n•", null, "### Cubic Meter Conversion - Online Unit Converter\n\nTo perform conversions between cubic meter and other Capacity and Volume units please try our Capacity and Volume Unit Converter Convert cubic meter to: cubic kilometer, cubic decimeter, cubic centimeter, cubic millimeter, liter, exaliter, ...\n\nGet Price\n•", null, "### weight of stone per cubic meter\n\nGravel, Pea weighs 1.79 gram per cubic centimeter or 1 788 kilogram per cubic meter, i.e. its density is equal to 1 788 kg/m³. In Imperial or US customary measurement system, the Gravel, Pea density is equal to 111.62 pound per cubic foot [lb/ft³], or 1.03 ounce per cubic inch [oz/inch³] .\n\nGet Price\n•", null, "### Sand Calculator, Calculate How Cubic Yards of Sand do I Need\n\nOur Sand calculator will help you estimate how many Cubic Yards of Sand you need for your desired coverage area. The sand calculator offers 4 \"Box\" area fields and 2 \"Circular\" area fields for you to calculate multiple areas simultaneously (back yard, front yard, driveway, Dressing .90 tons (1,800 lb.) per cubic yard\n\nGet Price\n•", null, "### Calculate Fill Sand | cubic yards / Tons\n\nType in inches and feet of your project and calculate the estimated amount of Sand / Screenings in cubic yards, cubic feet and Tons, that your need for your project. The Density of Fill Sand: 2,410 lb/yd³ or 1.21 t/yd³ or 0.8 yd³/t\n\nGet Price\n•", null, "### DIY Conversion Tables and Conversion Information | DIY Doctor\n\n1.44 tonnes per cubic metre: Ballast = 1.76 tonnes per cubic metre: gravel (MOT Type 1 scalpings) = 1.92 tonnes per cubic metre: shingle = 1.62 tonnes per cubic metre: cement = 50 & 25 kg bags: stiff clay = 1.6 tonnes per cubic metre: loam = 1.28 tonnes per cubic metre: peat (wet) = 0.96 tonnes per cubic metre: peat (dry) = 0.8 tonnes per cubic metre: lump chalk = 1.2 tonnes per cubic metre\n\nGet Price\n•", null, "### what is the weight of one cubic meter M20 concrete\n\nAnswer / manoj kumar bhola. The unit weight of onecubic meter M20 concrete RCC =25000 N/M3 =2500 KG/M3 =25 KN/M3 But may when this concrete made with standard gravel or crushed natural stone aggregate,weight may depend upon the type of aggregate used.Below M20 .\n\nGet Price\n•", null, "### Topsoil Calculator - Work Out How Much Topsoil You Need\n\nCalculate topsoil below to find the ideal quantity for your needs. Topsoil is most often sold by the metric tonne (1000kg), but the easiest way to calculate the quantity of topsoil required is by volume (cubic metres or litres). Try using our Topsoil Calculator below to work out how much topsoil you need.\n\nGet Price\n•", null, "### Concrete 1 cubic meter volume to Metric tonnes converter\n\nSpecific unit weight of concrete - amount properties converter for conversion factor exchange from 1 cubic meter m3 equals = 2.41 Metric tonnes t exactly for the masonry material type. To convert concrete measuring units can be useful when building with concrete and where handling of concrete is required.\n\nGet Price" ]
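Pulling the arithmetic from the snippets above into one place: converting centimetre dimensions to a volume in cubic metres, and applying the quoted rule-of-thumb bulk density of about 1.78 tonnes per cubic metre for gravel. The function names are ours, and the density is only the generic figure cited above, not a property of any specific material.

```python
def box_volume_m3(length_cm, width_cm, height_cm):
    """Volume of a rectangular box in cubic metres, given its sides in centimetres."""
    return (length_cm / 100.0) * (width_cm / 100.0) * (height_cm / 100.0)

def gravel_tonnes(volume_m3, density_t_per_m3=1.78):
    """Rule-of-thumb mass of loose gravel for a given volume."""
    return volume_m3 * density_t_per_m3

print(box_volume_m3(42, 37, 28))   # 0.043512 m^3, matching the worked example above
print(gravel_tonnes(2.0))          # ~3.56 t for 2 cubic metres of gravel (rule of thumb)
```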
[ null, "https://www.alvesmartins.be/image/74.jpg", null, "https://www.alvesmartins.be/image/69.jpg", null, "https://www.alvesmartins.be/image/261.jpg", null, "https://www.alvesmartins.be/image/361.jpg", null, "https://www.alvesmartins.be/image/236.jpg", null, "https://www.alvesmartins.be/image/306.jpg", null, "https://www.alvesmartins.be/image/105.jpg", null, "https://www.alvesmartins.be/image/117.jpg", null, "https://www.alvesmartins.be/image/62.jpg", null, "https://www.alvesmartins.be/image/349.jpg", null, "https://www.alvesmartins.be/image/289.jpg", null, "https://www.alvesmartins.be/image/356.jpg", null, "https://www.alvesmartins.be/image/335.jpg", null, "https://www.alvesmartins.be/image/399.jpg", null, "https://www.alvesmartins.be/image/285.jpg", null, "https://www.alvesmartins.be/image/101.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95677274,"math_prob":0.9895092,"size":408,"snap":"2019-35-2019-39","text_gpt3_token_len":98,"char_repetition_ratio":0.1509901,"word_repetition_ratio":0.0,"special_character_ratio":0.25245097,"punctuation_ratio":0.10752688,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98429555,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-16T00:11:02Z\",\"WARC-Record-ID\":\"<urn:uuid:165de1ca-83c2-4c2a-9c43-4b5d83c21f33>\",\"Content-Length\":\"19011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe90e8fb-cc73-4025-b6ea-c0e35f37ea35>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e7fb383-4e0e-4ce7-b154-5f2f1e263c88>\",\"WARC-IP-Address\":\"104.28.0.220\",\"WARC-Target-URI\":\"https://www.alvesmartins.be/Jul/02_consume-mm-gravel-quantity-in-cubic-meter.html\",\"WARC-Payload-Digest\":\"sha1:E3KFQEUEBNUE6LWIL4AVEI3J54PRIHM5\",\"WARC-Block-Digest\":\"sha1:M7FABMPXRRSQDUUYVOUOBS3GNUL35Z5B\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514572439.21_warc_CC-MAIN-20190915235555-20190916021555-00398.warc.gz\"}"}
https://www.hackmath.net/en/calculator/fraction?input=3+1%2F2+-+2+2%2F3
[ "# Fraction calculator\n\nThe calculator performs basic and advanced operations with fractions, expressions with fractions combined with integers, decimals, and mixed numbers. It also shows detailed step-by-step information about the fraction calculation procedure. Solve problems with two, three, or more fractions and numbers in one expression.\n\n## Result:\n\n### 31/2 - 22/3 = 5/6 ≅ 0.8333333\n\nSpelled result in words is five sixths.\n\n### How do you solve fractions step by step?\n\n1. Conversion a mixed number 3 1/2 to a improper fraction: 3 1/2 = 3 1/2 = 3 · 2 + 1/2 = 6 + 1/2 = 7/2\n\nTo find a new numerator:\na) Multiply the whole number 3 by the denominator 2. Whole number 3 equally 3 * 2/2 = 6/2\nb) Add the answer from previous step 6 to the numerator 1. New numerator is 6 + 1 = 7\nc) Write a previous answer (new numerator 7) over the denominator 2.\n\nThree and one half is seven halfs\n2. Conversion a mixed number 2 2/3 to a improper fraction: 2 2/3 = 2 2/3 = 2 · 3 + 2/3 = 6 + 2/3 = 8/3\n\nTo find a new numerator:\na) Multiply the whole number 2 by the denominator 3. Whole number 2 equally 2 * 3/3 = 6/3\nb) Add the answer from previous step 6 to the numerator 2. New numerator is 6 + 2 = 8\nc) Write a previous answer (new numerator 8) over the denominator 3.\n\nTwo and two thirds is eight thirds\n3. Subtract: 7/2 - 8/3 = 7 · 3/2 · 3 - 8 · 2/3 · 2 = 21/6 - 16/6 = 21 - 16/6 = 5/6\nFor adding, subtracting, and comparing fractions, it is suitable to adjust both fractions to a common (equal, identical) denominator. The common denominator you can calculate as the least common multiple of both denominators - LCM(2, 3) = 6. In practice, it is enough to find the common denominator (not necessarily the lowest) by multiplying the denominators: 2 × 3 = 6. In the following intermediate step, the fraction result cannot be further simplified by canceling.\nIn other words - seven halfs minus eight thirds = five sixths.\n\n#### Rules for expressions with fractions:\n\nFractions - use the slash “/” between the numerator and denominator, i.e., for five-hundredths, enter 5/100. If you are using mixed numbers, be sure to leave a single space between the whole and fraction part.\nThe slash separates the numerator (number above a fraction line) and denominator (number below).\n\nMixed numerals (mixed fractions or mixed numbers) write as non-zero integer separated by one space and fraction i.e., 1 2/3 (having the same sign). An example of a negative mixed fraction: -5 1/2.\nBecause slash is both signs for fraction line and division, we recommended use colon (:) as the operator of division fractions i.e., 1/2 : 3.\n\nDecimals (decimal numbers) enter with a decimal point . and they are automatically converted to fractions - i.e. 1.45.\n\nThe colon : and slash / is the symbol of division. Can be used to divide mixed numbers 1 2/3 : 4 3/8 or can be used for write complex fractions i.e. 
1/2 : 1/3.\nAn asterisk * or × is the symbol for multiplication.\nPlus + is addition, minus sign - is subtraction and ()[] is mathematical parentheses.\nThe exponentiation/power symbol is ^ - for example: (7/8-4/5)^2 = (7/8-4/5)2\n\n#### Examples:\n\nsubtracting fractions: 2/3 - 1/2\nmultiplying fractions: 7/8 * 3/9\ndividing Fractions: 1/2 : 3/4\nexponentiation of fraction: 3/5^3\nfractional exponents: 16 ^ 1/2\nadding fractions and mixed numbers: 8/5 + 6 2/7\ndividing integer and fraction: 5 ÷ 1/2\ncomplex fractions: 5/8 : 2 2/3\ndecimal to fraction: 0.625\nFraction to Decimal: 1/4\nFraction to Percent: 1/8 %\ncomparing fractions: 1/4 2/3\nmultiplying a fraction by a whole number: 6 * 3/4\nsquare root of a fraction: sqrt(1/16)\nreducing or simplifying the fraction (simplification) - dividing the numerator and denominator of a fraction by the same non-zero number - equivalent fraction: 4/22\nexpression with brackets: 1/3 * (1/2 - 3 3/8)\ncompound fraction: 3/4 of 5/7\nfractions multiple: 2/3 of 3/5\ndivide to find the quotient: 3/5 ÷ 2/3\n\nThe calculator follows well-known rules for order of operations. The most common mnemonics for remembering this order of operations are:\nPEMDAS - Parentheses, Exponents, Multiplication, Division, Addition, Subtraction.\nBEDMAS - Brackets, Exponents, Division, Multiplication, Addition, Subtraction\nBODMAS - Brackets, Of or Order, Division, Multiplication, Addition, Subtraction.\nGEMDAS - Grouping Symbols - brackets (){}, Exponents, Multiplication, Division, Addition, Subtraction.\nBe careful, always do multiplication and division before addition and subtraction. Some operators (+ and -) and (* and /) has the same priority and then must evaluate from left to right.\n\n## Fractions in word problems:\n\n• From a 2", null, "From a rope that is 11 m long, two pieces of lengths 13/5 m and 33/10 m are cut off. What is the length of the remaining rope?\n• Bucket of clay", null, "Tina and Bill share a 12-ounce bucket of clay. By the end of the week, Tina has used 1/6 of the bucket, and Bill has used 2/3 of the bucket of clay. How many ounces are left in the bucket?\n• Fractions and mixed numerals", null, "(a) Convert the following mixed numbers to improper fractions. i. 3 5/8 ii. 7 7/6 (b) Convert the following improper fraction to a mixed number. i. 13/4 ii. 78/5 (c) Simplify these fractions to their lowest terms. i. 36/42 ii. 27/45 2. evaluate the follow\n• There 17", null, "There is 3/4 of a cake on a plate in Maria's kitchen.  Silvia sees the cake and eats 1/5 of the cake.  Then Franca takes 1/3 of what was there and shares half of her portion with Antonella.  What fraction of the cake is left?", null, "About 6/9 of the sixth- grade pupils will be going to the parents' seminar. If 1/6 of the participants are girls, what part of the portion of sixth graders are boys?\n• Savings", null, "Eva borrowed 1/3 of her savings to her brother, 1/2 of savings spent in the store and 7 euros left. How much did she save?\n• Half of halves", null, "Half of the square we cut off, then half of the rest, etc. Five cuts we made in this way. What part of the content of the original square is the content of the cut part?\n• Benhur", null, "Benhur boiled 1 1/4 liters of water in a kettle. After 10 1/2 minutes he measured the water again. It had 3/4 liters left in the kettle. What is the amount of water that evaporates every minutes?\n• Sundar", null, "Sundar has 50 chocolates. He gave 2/5 of these chocolates to Ram and he ate 1/5 of them. 
How many chocolates are left with Sundar?\n• Equation with mixed 2", null, "A number, X, is subtracted from 8 1/4. The result is 12 3/5. What is the value of X?\n• Product and sum", null, "What is the product of two fourths  and the sum of three halves and four?\n• Translate 2", null, "Translate the given phrases to mathematical phrases. Thrice the sum of three fifths and two thirds less one half is what number?\n• Hotel 4", null, "A 360 room hotel has 1/3 of its rooms occupied at present. How many rooms are empty?" ]
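The worked subtraction above can be reproduced directly with Python's standard fractions module, which handles the mixed-number conversion and the common-denominator step exactly:

```python
from fractions import Fraction

# 3 1/2 - 2 2/3, following the steps above (mixed number -> improper fraction -> subtract)
a = Fraction(3) + Fraction(1, 2)   # 7/2
b = Fraction(2) + Fraction(2, 3)   # 8/3

print(a - b)          # 5/6
print(float(a - b))   # 0.8333333333333334
```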
[ null, "https://www.hackmath.net/thumb/51/t_56351.jpg", null, "https://www.hackmath.net/thumb/93/t_50493.jpg", null, "https://www.hackmath.net/thumb/83/t_49683.jpg", null, "https://www.hackmath.net/thumb/71/t_56571.jpg", null, "https://www.hackmath.net/thumb/13/t_56113.jpg", null, "https://www.hackmath.net/thumb/4/t_5704.jpg", null, "https://www.hackmath.net/thumb/71/t_23471.jpg", null, "https://www.hackmath.net/thumb/53/t_55953.jpg", null, "https://www.hackmath.net/thumb/11/t_55711.jpg", null, "https://www.hackmath.net/thumb/71/t_52471.jpg", null, "https://www.hackmath.net/thumb/43/t_56143.jpg", null, "https://www.hackmath.net/thumb/21/t_54821.jpg", null, "https://www.hackmath.net/thumb/33/t_51933.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8815533,"math_prob":0.9845823,"size":6797,"snap":"2021-43-2021-49","text_gpt3_token_len":1966,"char_repetition_ratio":0.15427646,"word_repetition_ratio":0.028962187,"special_character_ratio":0.2963072,"punctuation_ratio":0.12508784,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977041,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T18:15:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c40c43d7-715a-4460-855d-67a9aa8065de>\",\"Content-Length\":\"38027\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b009e9a-f2ca-47f9-a9bb-eac30f53c431>\",\"WARC-Concurrent-To\":\"<urn:uuid:87c69a2e-3ca4-4ef4-8801-46a61a5c2b68>\",\"WARC-IP-Address\":\"172.67.143.236\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/calculator/fraction?input=3+1%2F2+-+2+2%2F3\",\"WARC-Payload-Digest\":\"sha1:ZDQWOHL4FRI2K4F2RF6FKTNMKKNRNYFJ\",\"WARC-Block-Digest\":\"sha1:SIF5PEVJZCO7YAIEX6EPX5TMNA4IOP24\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00179.warc.gz\"}"}
https://campusgrades.com/discussion-about-polynomialsexplanation-of-the-termspolynomial-what-is-a-polynomial-etc/
[ "discussion about polynomialsexplanation of the terms”polynomial”, what is a polynomial etc.\n\n# discussion about polynomialsexplanation of the terms”polynomial”, what is a polynomial etc.\n\n• LITERATURE\n\ntwentieth century literature Twentieth Century literature is a reassertion of Romantic Movement…\n\nAre you referring to 20th century world literature or literature from a specific country? If one applies your statement to American Literature, I would disagree with it because Romanticism is…\n\n• MATH\n\ndiscussion about polynomialsexplanation of the terms”polynomial”, what is a polynomial etc.\n\nThe explanation of the term polynomial: – “poly” means many; – “nomial” means name (in this case, terms). A polynomial is made up from constants and variables. For instance: 3x^4*y^2 + 5z is a…\n\n• HISTORY\n\nWhat were the areas/regions of Europe that remained faithful to the Catholic Church?\n\nThe answer to this question depends to some degree on how long after the Protestant Reformation you are asking about. For example, England remained faithful to the Catholic Church at first, but…\n\n• MATH\n\nanother issuei want to subtract (2a-3) /(a^2-4))-2/(a+2) is good to do (2a-3-2)/(a^2-4-a-2)…\n\nYou should know that the algebraic addition of two fractions is possible if they have common denominators. If they do not have a common denominator, then you need to find it. In this problem,…\n\n• SCIENCE\n\nfluoride for cavitiesfluoride is really helpful against cavities?\n\nI believe it is. Scientific studies have repeatedly shown that it is. I know a lot of people talk about how it actually causes cavities or how it is dangerous but this stuff is not supported by…\n\n• MATH\n\nmethod to find termsneed to find terms a,b,c first? i know a=x+1 ,b=x^2-x+1 ,c=x^3-1 and i don’t…\n\nYou need to evaluate , hence, you need to substitute for a and for b to perform such that: You need to substitute for c such that: Hence, evaluating the expression yields .\n\nWhat is the theme of Lady Windermere’s fan?\n\nEnotes users are only allowed one question at a time, so I edited your question to the gist of it, and will answer the most important one which is about the theme of Lady Windermere’s fan. The play…\n\n• REFERENCE\n\nselecting aluminium fabricatori have to make a paper about selecting aluminium fabricators. what…\n\nI think that this will depend on what you need fabricated. The first thing I would do is ask to see similar work to what I need fabricated. I would also ask for people that they have done previous…\n\n• JOHN DONNE\n\nGive the main thought of John Donnes poem “The Flea” and comment on it.\n\nThe speaker of the poem is attempting to persuade a woman to have sex with him. She responds by squashing a flea. The speaker then goes on to create an analogy of the, the woman and the flea to…\n\n• REFERENCE\n\nfree online educationwhy is important\n\nFree online educational resources are very important, especially for people whose schools or families do not have the funds to buy educational materials. For example, there are many sites with the…\n\n• MATH\n\nwhat to solve firsti have 2 fractions (x+1)/(x^2+1)+(2x-2)/(x^2+1) i don’t understand why to…\n\nDepending on the situation, you may use this way to write a fraction, whose denominator is a polynomial. If you need to solve an indefinite integral, splitting the fraction may help you to solve…\n\n• MATH\n\nadditioni’m not so good in addition of fractions pls. 
show steps 6/5(x+3)+12/10(x-3)+(-17)/30(x-3)\n\nNotice that the fractions have different denominators, hence, you need to arrive to a common denominator multiplying the different denominators such that: Notice that the third fraction has…\n\n• TO SIR, WITH LOVE\n\nIn To Sir, with Love, how does Mr Braithwaite’s stay in the school bring joy, courage and…\n\nI think that Braithwaite’s stay in the school is successful because it is driven by a “student first” philosophy. After some initial setbacks, Braithwaite’s philosophy is one where he places the…\n\n• MATH\n\nEvaluate the value of the expression x1^4+x2^4+x3^4+x4^4. x1,x2,x3,x4 are the roots of the…\n\nTo find the x1^4+x2^4+x3^4+x4^4 , where xi’s are the roots of x^4-x^2+1= 0. by the relation of roots and coefficients, x1+x2+x3+x4 0/1 = 0 We multiply the equation by x^2+1 and get :…\n\n• MATH\n\nThe solution of the system x^3+y^3+x+y=32 and x^3-y^3+x-y=28 is also the solution of the…\n\nIt would be easier to get the answer here by starting with the choices given. Let us substitute the choices in the two equations: x^3+y^3+x+y=32 and x^3-y^3+x-y=28 x^3 + x = 20 We have y^3 + y = 32…\n\n• MATH\n\nEvaluate the indefinite integral of y=square root of 16-x^2.\n\nInt f(x)dx = Int sqrt(16 – x^2)dx (y = f(x)) We’ll factorize by 16: Int sqrt[16(1 – x^2/16)]dx = 4Int sqrt[1 – (x/4)^2]dx We’ll substitute x/4 = t. We’ll differentiate both sides: dx/4 = dt dx =…\n\n• MATH\n\nIf the sum of n terms of a string is 5n^2+6n find the form of the general term.\n\nWe have the sum of n terms of a string given as 5n^2+6n Now 5n^2+6n = 5n^2 + 5n + n => 5n(n+1) + n => 10n(n+1)/2 +n Now 10*n(+1)/2 is the sum of n terms of the form n multiplied by 10 and n…\n\n• IMAGISM\n\nWhat are the main themes of Imagist poets?\n\nThe sensual and immediate experience of an image, a direct confrontation with the world, unmediated by ideological or ritualistic influences, Imagist poetry was a move away from sentimental…\n\n• MATH\n\nSimplify the expression [(x^2-6x+5)/(x+3)]*[(x^2-9)/(x-5).\n\nLet E = (x^2-6x+5) / (x+3) * (x^2-9)/(x-5) First we will multiply numerators and denominators: ==> E = (x^2-6x+5)*(x^2-9)/(x+3)(x-5) Now we will factor the numerator: ==> E =…\n\n• MATH\n\nHow to factor the equation x^2+13=0.\n\nx^2 + 13 = 0 We will factor using the difference between the squares. We know that: a^2 -b^2 = (a-b)(a+b) let us rewrite: x^2 – (-13) = 0 Now we will factor: ==> (x^2-(-13) = 0 ==>…\n\n• ROMEO AND JULIET\n\nWhat is an example of situational irony in Shakespeare’s Romeo and Juliet?\n\nSituational Irony – not so much the opposite of what you expect to happen. Situational Irony is more specifically defined as when a perverse, tragic, surprisingly amusing or odd outcome occurs….\n\n• WUTHERING HEIGHTS\n\nWhat are the elements of Romanticism in Wuthering Heights?im writing a essay on wuthering heights…\n\nFirst, you need to make sure that you fully understand the characteristics of romantic literature in 19th Century England. Basically, the Romantics believed that man, as an individual, is superior,…\n\n• MATH\n\nCalculate the area of the surface limited by the graph of y=x^2+3, the tangent to the graph of y…\n\nWe need to find the area between the curve y = x^2 + 3 and the tangent line at x= 2. ==> y(2) = 2^2 + 3 = 7 Let us find the tangent. 
==> y’ = 2x ==> y’ = 2*2= 4 Then the equation of the…\n\n• MATH\n\nGiven the functions f=x*arctanx and g=ln(1+x^2) prove that f>=g if x is in the interval [0;1]\n\nWe’ll subtract g(x) both sides of the inequality that has to be demonstrated. f(x) – g(x) >= 0 We’ll substitute f(x) and g(x) by their expressions: x*arctanx – ln(1+x^2) >= 0 We’ll assign a…\n\nHow does Gabriel compare himself with Michael Furey?”The Dead” by James Joyce\n\nYou will find your answer in the paragraph beginning with “Gabriel, leaning on his elbow.” After Greta explains to Gabriel why she is crying and why the song she heard at the party moved her so,…\n\n• HAROLD PINTER\n\nExplain the role of Harold Pinter as a representative of modern drama.\n\nI think you’d have to call him a Modernist and a Postmodernist. As a Modernist, Pinter’s work is indebted to a naturalist/Realist tradition in that his dialogues are often so close to every day…\n\n• MARXIST LITERARY CRITICISM\n\nWhat are the differences between superstructure and infrastructure in Marxism?\n\nMarx said that it is not consciousness that determines life but it is life that determines consciousness. Think of it that way: Our physical functions and interactions will determine the ways we…\n\n• MY BROTHER SAM IS DEAD\n\nWho is the antagonist in my brother sam is dead?who is the protagonist in my brother sam is dead?\n\nThe antagonist is the struggle to understand the war and form an opinion about it.\n\n• MATH\n\nWhat are the 2 numbers with GM= 4 and HM=16/5?\n\nLet the two numbers be A and B. Now it is given that their GM is 4. This gives us sqrt (A*B) = 4 => AB = 4^2 = 16 We also know that their HM is 16/5, so 2*AB / (A+B) = 16/5 => 2*AB / (A+B) =…\n\n• LITERATURE\n\nHow do teachers check for plagiarism? First of all, I do NOT plagiarise. But, I’m curious on…\n\nGenerally you would be looking at a writing assignment. Good writing teachers can recognize the vocabulary, sentence structure, and depth of the writing itself because they have seen other work by…\n\nExplain why the statement about profit-maximizing competitive firm is incorrect. A firm’s…\n\nIn general, the statement is correct. Most of a firm’s supply curve is the same as its marginal costs curve. This is because a firm should always produce its product at the quantity where the…\n\n• LITERATURE\n\nWhat are the most important aspects of the mythological and archetypal approaches? How are they…\n\nBorrowing from the work of Northrop Frye, particularly “The Archetypes of Literature,” the archetypal and mythological approach to literature is to sort of “stand back” when you look at…\n\nHow does a nation’s science policy impact businesses?\n\nThe term “science policy” typically refers to the degree to which scientific endeavors are funded and/or regulated by the government of a country. This can have a significant impact on businesses…\n\n• SOCIAL SCIENCES\n\nShould drug use be decriminalized?Should drug use be decriminalized?\n\nThe use of really hard drugs should not be decriminalized. This is because there are so many kinds of drugs that can do serious harm to human beings. People should not be encouraged in any way to…\n\n• SHAKESPEARE’S SONNETS\n\nCategorize Shakespeare’s sonnets and describe what they reveal about Shakespeare’s character.I…\n\nFrom your question, it sounds like you are writing a paper on Shakespearean Sonnets and what they reveal about Shakespeare’s character. I can see why you are having difficulty. 
There are over 100…\n\n• THINGS FALL APART\n\nI need an outline for my term paper about national identity in Things Fall Apart.\n\nHi, hamada: I don’t exactly know your specific topic or thesis, but here’s what I would highlight in terms of major points. I. In Things Fall Apart, national identity is clearly passed down through…\n\n• LAW AND POLITICS\n\nEthics is probably the most difficult concept to define.’ Justify this statement\n\nLooking at the history of ethics alone will likely justify this claim. For Aristotle ethics had to do with using reason to understand “the good”. For Bentham, it had to do with how much pleasure…\n\nWhat is a ‘Lock-in period’, ‘Bipartite Lease’ and “Suit for Quantum Meruit”?\n\nWhen money is borrowed from an institution it usually has a minimum period for which the person borrowing the funds has to pay an interest. This is called the lock-in period. If the borrower…\n\n• HISTORY\n\n“The Cuban Missile Crisis was inevitable because of how the USA treated the country after the…\n\nI agree with this statement to a large degree, but not completely. I do agree that Fidel Castro and the Cuban government were very unhappy with how the US had treated them after their revolution….\n\n• LITERATURE\n\nWhy is the story of Arthur so important to the history of British Literature?\n\nThis is a very interesting question. I think that it might strike at an element featured in all of literature and help to bring out a discussion as to why specific examples of literature is deemed…\n\n• NICK HORNBY\n\nIn the novel called, Slam by Nick Hornby, what perspective does Nick Hornby present on one of the…\n\nYou can explore the themaic importance of “Role models” (Failure of / importance of)\n\nWhat forms of economic risk does a Multinational Corporation typically face?\n\nThere are two major types of economic risk that are especially important of a multi-national corporation. These are types of risk that are caused by internal factors on the one hand and external…\n\n• MATH\n\nFind the area of the triangle formed by the lines: 1) 11x-7y=81 2) 3x-5y=-15 3) x+4y=12\n\nWe have the three lines which form the sides of the triangle as 11x-7y=81…(1) 3x-5y=-15…(2) x+4y=12…(3) We have to find their points of intersection. 11*(2) – 3*(1) => 33x – 55y – 33x +…\n\n• MATH\n\nWhat is the extreme value of f(x)=x^4+8x^2-48x+19 for 1<x<2? Is this a mininum or maximum?\n\nTo find the extreme value of a function f(x) we need to find the first derivative of the function and equate it to zero. Then this is used to solve for x. Here, f(x)=x^4 + 8x^2 – 48x + 19 f'(x) =…\n\n• LAW AND POLITICS\n\nShould the supreme court hold that the evidence and inferences drawn were sufficient to support…\n\nThe Court held in the Jackson case that there was sufficient evidence to uphold the petitioner’s conviction. The Court found from the evidence that the Petitioner (who claimed self defense or…\n\n• I STAND HERE IRONING\n\nAnalyze the tonal shifts in the story. Use evidence from the short story to prove.Tillie Olsen’s…\n\nTone, which is an integral part of a narrative, is produced by the writer’s diction as well as stylistic choices concerning syntax, line or sentence length, imagery, and other figurative language….\n\n• POLITICAL SCIENCE\n\nDoes the nation create the state, or does the state create the nation?Please provide reasons for…\n\nNeither of these is really true because there are many states that incorporate more than one nation and there are many nations that are spread over more than one state. 
If the nation created the…\n\n• THE SECOND COMING\n\nDiscuss Yeats’ use of symbolism in “The Second Coming.”\n\nI think that one can find many examples of Yeats employing symbolism throughout his work. Since the question is asking for how it is used, I would focus on the symbols present in my favorite Yeats…\n\n• GUIDE TO LITERARY TERMS\n\nWhat is an epic?i want short notes on it means everything related to an epic\n\nSimply put, an epic is a long poem about gods or heroes. The poem is written in the narrative style, which means that it tells a story. An epic poem focuses on the actions and deeds of the hero…." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9280086,"math_prob":0.9482536,"size":15014,"snap":"2023-14-2023-23","text_gpt3_token_len":3891,"char_repetition_ratio":0.1344437,"word_repetition_ratio":0.025789069,"special_character_ratio":0.2516318,"punctuation_ratio":0.06837319,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.96132815,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T01:29:47Z\",\"WARC-Record-ID\":\"<urn:uuid:25f8d529-a3b0-453f-ac50-4ad85e2e16f8>\",\"Content-Length\":\"47750\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9077d445-196e-40f0-b37d-91cc31f88420>\",\"WARC-Concurrent-To\":\"<urn:uuid:34a5f8a0-0e3f-46a0-836f-656148f5d4d2>\",\"WARC-IP-Address\":\"192.64.118.90\",\"WARC-Target-URI\":\"https://campusgrades.com/discussion-about-polynomialsexplanation-of-the-termspolynomial-what-is-a-polynomial-etc/\",\"WARC-Payload-Digest\":\"sha1:YXL6AANFQFY5UQOHLULN42FGMYYAU324\",\"WARC-Block-Digest\":\"sha1:EKI7TDO6LXACV32MSUFCT3EOBHEJC56R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949694.55_warc_CC-MAIN-20230401001704-20230401031704-00137.warc.gz\"}"}
https://es.mathworks.com/matlabcentral/cody/problems/2015-length-of-the-hypotenuse/solutions/543744
[ "Cody\n\n# Problem 2015. Length of the hypotenuse\n\nSolution 543744\n\nSubmitted on 10 Dec 2014 by AGAM GUPTA_786\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\n%% a = 1; b = 2; c_correct = sqrt(5); tolerance = 1e-12 ; assert(abs(hypotenuse(a,b)-c_correct)<tolerance);\n\n2   Pass\n%% a = 3; b = 4; c_correct = 5; tolerance = 1e-12 ; assert(abs(hypotenuse(a,b)-c_correct)<tolerance);\n\n3   Pass\n%% a = 5; b = 12; c_correct = 13; tolerance = 1e-12 ; assert(abs(hypotenuse(a,b)-c_correct)<tolerance);" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6005138,"math_prob":0.98315364,"size":663,"snap":"2020-24-2020-29","text_gpt3_token_len":218,"char_repetition_ratio":0.15022762,"word_repetition_ratio":0.057142857,"special_character_ratio":0.3484163,"punctuation_ratio":0.14728682,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9876124,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T05:37:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ab2ee9a3-c4ab-4c49-b594-d29cfc163ee9>\",\"Content-Length\":\"73658\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:32af2310-f95f-44f5-8b71-872e738c193f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b373564-8b5e-42bb-8840-16af72957633>\",\"WARC-IP-Address\":\"184.24.72.83\",\"WARC-Target-URI\":\"https://es.mathworks.com/matlabcentral/cody/problems/2015-length-of-the-hypotenuse/solutions/543744\",\"WARC-Payload-Digest\":\"sha1:HBU74LRBS73S7VU3XPZOMUCIQE6DR627\",\"WARC-Block-Digest\":\"sha1:YQKS23DT3CN4PRBYN56F3I4JIXJ6Y7SK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655921988.66_warc_CC-MAIN-20200711032932-20200711062932-00485.warc.gz\"}"}
https://ajlopez.wordpress.com/category/qunit/
[ "# Social Games Programming (Part 6) Testing Game and Service with TDD and QUnit\n\nIn my previous post, I presented the new version of Windows Azure Toolkit for Social Games. It has simple games to demonstrate the use of Javascript, HTML 5 canvas, game moves processing, Azure worker roles and web roles. Let’s explore in this post the making of client game logic, in Javascript, using TDD and QUnit.\n\nThere are online tests at:\n\nhttp://watgames4.cloudapp.net/Test", null, "Let’s run the Tic Tac Toe Game Logic tests:\n\nhttp://watgames4.cloudapp.net/Samples/ClientTest/TicTacToeGameTest", null, "TDD with Javascript and QUnit\n\nThe above page is testing the Tic Tac Toe logic. Remember, each game is implemented in parts, the logic is one of them:", null, "The client code resides in TicTacToeGame.js inside the SocialGames.Web project. Their first lines:\n\n```TTTColor = { Empty: 0, Cross: 1, Circle: 2 };\nfunction TicTacToeGame() {\nthis.board = [\n[TTTColor.Empty, TTTColor.Empty, TTTColor.Empty],\n[TTTColor.Empty, TTTColor.Empty, TTTColor.Empty],\n[TTTColor.Empty, TTTColor.Empty, TTTColor.Empty]\n];\n}\nTicTacToeGame.prototype.move = function (x, y, color) {\nthis.board[x][y] = color;\n};\nTicTacToeGame.prototype.isEmpty = function (x, y) {\nreturn this.board[x][y] == TTTColor.Empty;\n};\n....```\n\nThe client test page (TicTacToeGameTest.cshtml) was built at the same time, using a TDD (Test-Driven Development) approach. Look at first tests:\n\n```test(\"Create Empty Board\", function () {\nvar game = new TicTacToeGame();\nfor (var x = 0; x < 3; x++)\nfor (var y = 0; y < 3; y++)\nok(game.isEmpty(x, y));\nequal(game.isTie(), false);\nequal(game.hasWinner(), false);\n});\ntest(\"Valid Moves on Empty Board\", function () {\nvar game = new TicTacToeGame();\nfor (var x = 0; x < 3; x++)\nfor (var y = 0; y < 3; y++) {\nok(game.isValid(x, y, TTTColor.Cross));\nok(game.isValid(x, y, TTTColor.Circle));\n}\n});\ntest(\"No Winner in Empty Board\", function () {\nvar game = new TicTacToeGame();\nequal(game.getWinner(), TTTColor.Empty);\n});\ntest(\"Get Winner in First Row\", function () {\nvar game = new TicTacToeGame();\ngame.move(0, 0, TTTColor.Cross);\ngame.move(1, 0, TTTColor.Cross);\ngame.move(2, 0, TTTColor.Cross);\nequal(game.getWinner(), TTTColor.Cross);\nequal(game.isTie(), false);\nequal(game.hasWinner(), true);\n});```\n\nThe idea is to take baby steps, one test at a time, designing the game logic “API”, its expected behavior. In this way, you expend less time debugging in a dynamic language like Javascript, and you gain a test suite that can save your day in case of major refactoring. Look at the Four In A Row logic and client tests: you will find a similar approach.\n\nOk, not all can be easily tested, or build using TDD. Some of the game-agnostic services are using Ajax and Blob Storage, and to test them you must consider asynchronous Ajax calls. 
For the asynchronous, game-agnostic services, you can check:

http://watgames4.cloudapp.net/Test/ServerInterfaceTest

(You must be logged in using your Facebook or Windows Live ID, an example of the use of Federated Security and Access Control Service (ACS).)

This time, the system under test is the game-agnostic Server Interface. There are some tricks in the test code (ServerInterfaceTest.cshtml), an excerpt:

```
test("Call User/Verify", function () {
var success = function (result) { ok(true); start(); };
var error = ajaxGetError;
stop(10000);
expect(1);
si.sendAjaxGet(apiURL + "user/verify", success);
});
```

expect is a QUnit function that prepares the framework to receive 1 ok(true) sometime during the test run. That confirmation is included in the callback function success, which will be called after the successful processing of the asynchronous call .sendAjaxGet. Async life is not easy 😉

More code analysis is coming, and some adaptation to use Node.js as the game server.

Keep tuned!

Angel "Java" Lopez

http://www.ajlopez.com

# AjLisp in Javascript (Part 1) Atoms, Lists and TDD

I'm rewriting my AjLisp interpreter using Javascript. I think that a Lisp interpreter is a good project to learn a language: simple, bounded but not trivial. I would never have begun this project without TDD (Test-Driven Development): Javascript is so dynamic and the tools I'm using (the browser, plain text editors) are so limited that this project would have been hard without the help of TDD. The practice of TDD gives me fun, doing baby steps, and it helps me to explore good design.

The code (with parser, many primitive forms, some macro processing, work in progress) is at GitHub:

https://github.com/ajlopez/AjLispJs

The code is in one file: https://github.com/ajlopez/AjLispJs/blob/master/src/ajlisp.js

I'm using QUnit for tests: [screenshot]

The implementation uses the namespace pattern:

```
AjLisp = function() {
// ...
}();
```

The namespace is the result of evaluating a function. This function returns an object with the public members of the namespace:

```
return {
// Classes
List: List,
Environment: Environment,
Atom: Atom,
Closure: Closure,
Lexer: Lexer,
TokenType: TokenType,
Parser: Parser,

// Functions
makeList: makeList,
isAtom: isAtom,
isList: isList,
isNil: isNil,
asString: asString,
evaluate: evaluate,

// Top Environment
environment: environment
}
```

As usual, a Lisp interpreter should support lists and atoms. Partial code:

```
function List(first, rest) {
function getFirst() {
return first;
}

function getRest() {
return rest;
}

this.first = getFirst;
this.rest = getRest;
}
List.prototype.isAtom = function() { return false; }
List.prototype.isList = function() { return true; }
List.prototype.evaluate = function(environment)
{
var form = this.first().evaluate(environment);
return form.apply(this, environment);
}
// ...
```

Note that the first and rest parts of a list are encapsulated in the constructor closure. They are immutable, and can be accessed via the functions aList.first(), aList.rest(). I should evaluate the impact of all those closures at list construction, but they are relatively light.

Atom implementation is simple:

```
function Atom(name) {
this.evaluate = function(environment) {
return environment.getValue(name);
};

this.name = function() { return name; };
}
Atom.prototype.isAtom = function() { return true; }
Atom.prototype.isList = function() { return false; }
Atom.prototype.asString = function() { return this.name(); }
Atom.prototype.equals = function(atom)
{
if (isNil(atom) || !isAtom(atom))
return false;

return this.name() == atom.name();
}
```

The atom evaluation is based on an environment (an association of names to values) and the atom name. Numbers and strings are direct Javascript objects and they don't need to be implemented as atoms. In a "classical" Lisp implementation, all elements are SExpressions (symbolic expressions) capable of being evaluated. Now, I have AjLisp.evaluate, which accepts any Javascript object and detects if it can be evaluated in an environment:

```
function evaluate(x, environment)
{
if (x === null || x === undefined)
return x;

if (x.evaluate != undefined && typeof(x.evaluate) == "function")
return x.evaluate(environment);

return x;
}
```

Atom tests:

```
test("Atom", function() {
var environment = new AjLisp.Environment();
environment.setValue("one", 1);
var one = new AjLisp.Atom("one");
equal(one.evaluate(environment), 1);
ok(one.isAtom());
equal(one.isList(), false);
ok(AjLisp.isAtom(one));
equal(AjLisp.isList(one), false);
equal(one.asString(), "one");
equal(one.equals(one), true);
var one2 = new AjLisp.Atom("one");
equal(one.equals(one2), true);
});
```

Test exercising list behavior:

```
test("List", function() {
var list = new AjLisp.List(1,2);
equals(list.first(), 1);
equals(list.rest(), 2);
equal(list.isAtom(),false);
equal(list.isList(),true);
equal(AjLisp.isAtom(list), false);
equal(AjLisp.isList(list), true);
equal(list.asString(), "(1.2)");
equal(list.equals(list), true);
var list2 = new AjLisp.List(1,2);
equal(list.equals(list2), true);
equal(list2.equals(list), true);
var list3 = AjLisp.makeList(1,2,3);
equal(list.equals(list3), false);
equal(list3.equals(list), false);
list = AjLisp.makeList(null, null);
ok(list.first() === null);
ok(list.rest().first() === null);
});
```

Pending topics: environment implementation, list evaluation, forms and special forms, parser, lambda, mlambda, flambda. Lots of fun! 😉

Keep tuned!

Angel "Java" Lopez

http://www.ajlopez.com

# TDD with Javascript and QUnit

This weekend, I started to write a Lisp interpreter using Javascript. The code is at

https://github.com/ajlopez/AjLispJs

But the key point: I'm using TDD (Test-Driven Development). I couldn't start such a project using traditional development: I'm still not proficient in Javascript. Using TDD is the way to start mastering Javascript idioms and patterns. Meanwhile, I'm writing a Javascript interpreter in C#, see:

https://github.com/ajlopez/AjScript

Last week, I started to use the QUnit TDD framework in a customer project. You can download it from:

https://github.com/jquery/qunit

After expanding the content (I downloaded the .zip file), you can launch the index.html file in the test directory. The result: [screenshot]

How to start a test? I copied qunit.js and qunit.css to a new directory, and I added a jQuery source code file to it.
Then, I created a new index.html with this content:

```
<html>
<meta charset="UTF-8" />
<title>QUnit First Test</title>
<link rel="stylesheet" href="qunit.css" type="text/css" media="screen">
<script type="text/javascript" src="jquery-1.6.2.min.js"></script>
<script type="text/javascript" src="qunit.js"></script>
<body>
<h2 id="qunit-banner"></h2>
<div id="qunit-testrunner-toolbar"></div>
<h2 id="qunit-userAgent"></h2>
<ol id="qunit-tests"></ol>
<div id="qunit-fixture">test markup</div>
</body>
</html>
```

The page references jQuery and QUnit. The initial markup is important: QUnit will fill it with the test results. The result: [screenshot]

Before the closing </body> tag I added a script fragment with the initial test (the simplest one):

```
<script type="text/javascript">
test("First Test", function() {
same(3-1,2);
});
</script>
```

The test is green: [screenshot]

Note the use of the $ jQuery function to register the code to execute at the end of the document load.

You could add a test for a classic Calculator:

```
test("Calculator", function() {
var calculator = new Calculator();
// The rest of this test is truncated in the extracted post; presumably it
// asserted something like: equal(calculator.add(1, 2), 3);
});
```

Now, the second test is red: [screenshot]

I wrote the new calculator.js file, with the minimal code to pass the test:

```
function Calculator() {
this.add = function(x, y) { return x+y; }
}
```

I added the reference in index.html:

`<script type="text/javascript" src="calculator.js"></script>`

All is OK! You can use your preferred editor. No IDE is needed.

And you can learn Javascript (classes, prototypes, namespaces, scopes, closures) writing code using TDD.

Script Junkie | jQuery Test-Driven Development http://msdn.microsoft.com/en-us/scriptjunkie/ff452703.aspx

QUnit – jQuery JavaScript Library
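The post stops after the add method. As an illustration of how the next red-green cycle might look (this continuation is not from the original post), one could first write a failing test and then make the minimal change to calculator.js:

```javascript
// Illustrative next TDD step: a failing test for a new method...
test("Calculator subtract", function() {
    var calculator = new Calculator();
    equal(calculator.subtract(5, 2), 3);
});

// ...and the minimal change to calculator.js that turns it green:
function Calculator() {
    this.add = function (x, y) { return x + y; };
    this.subtract = function (x, y) { return x - y; };
}
```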
[ null, "https://i0.wp.com/www.ajlopez.com/images/articles2/watgames05.png", null, "https://i1.wp.com/www.ajlopez.com/images/articles2/watgames06.png", null, "https://i2.wp.com/www.ajlopez.com/images/articles2/watgames07.png", null, "https://i0.wp.com/www.ajlopez.com/images/articles2/watgames08.png", null, "https://i0.wp.com/www.ajlopez.com/images/articles2/watgames09.png", null, "https://i1.wp.com/www.ajlopez.com/images/articles2/ajlispjs01.png", null, "https://i1.wp.com/www.ajlopez.com/images/articles2/qunit01.png", null, "https://i2.wp.com/www.ajlopez.com/images/articles2/qunit02.png", null, "https://i2.wp.com/www.ajlopez.com/images/articles2/qunit03.png", null, "https://i0.wp.com/www.ajlopez.com/images/articles2/qunit04.png", null, "https://i2.wp.com/www.ajlopez.com/images/articles2/qunit05.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6905571,"math_prob":0.7039742,"size":3708,"snap":"2019-51-2020-05","text_gpt3_token_len":957,"char_repetition_ratio":0.13849892,"word_repetition_ratio":0.10909091,"special_character_ratio":0.26321468,"punctuation_ratio":0.23620026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95117444,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,4,null,3,null,3,null,4,null,4,null,3,null,3,null,3,null,3,null,4,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-29T14:17:31Z\",\"WARC-Record-ID\":\"<urn:uuid:a7192b42-14a5-49ca-a1fe-fd7cfaac05c8>\",\"Content-Length\":\"95800\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e94c2642-d63f-4c36-ba24-d9a6a0114c90>\",\"WARC-Concurrent-To\":\"<urn:uuid:47a922b4-154d-4122-b624-1dc36473df50>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://ajlopez.wordpress.com/category/qunit/\",\"WARC-Payload-Digest\":\"sha1:6SU6O547CBUQU3PT6ACZHKFGXMIBR4CQ\",\"WARC-Block-Digest\":\"sha1:VHCO6O3UOTCO5A7YF4QQG4B5EOMGG5D5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251799918.97_warc_CC-MAIN-20200129133601-20200129163601-00435.warc.gz\"}"}
https://forum.azimuthproject.org/discussion/comment/17819/
[ "#### Howdy, Stranger!\n\nIt looks like you're new here. If you want to get involved, click one of these buttons!\n\nOptions\n\n# Lecture 19 - Chapter 2: Chemistry and Scheduling\n\nedited February 2020\n\nBefore we dive into the math of resource theories, let me say a bit more about what they're good for! I said they're good for answering questions like these:\n\n1. Given what I have, is it possible to get what I want?\n2. Given what I have, how much will it cost to get what I want?\n3. Given what I have, how long will it take to get what I want?\n4. Given what I have, what is the set of ways to get what I want?\n\nand also others. But let's see examples!\n\nChemistry. Suppose we have a bunch of chemicals, and reactions involving these chemicals. Then we can ask which collections of molecules can turn into which other collections. We can also ask what's the set of ways in which this can happen.\n\nPuzzle 55. For example, suppose you have these chemical reactions: $$\\text{C} + \\text{O}_2 \\longrightarrow \\text{CO}_2$$ $$\\text{CO}_2 + \\text{NaOH} \\longrightarrow \\text{NaHCO}_3$$ $$\\text{NaHCO}_3 + \\text{HCl} \\longrightarrow \\text{H}_2 \\text{O} + \\text{NaCl} + \\text{CO}_2$$ Can you use these to turn $$\\text{C} + \\text{O}_2 + \\text{NaOH} + \\text{HCl}$$ into $$\\text{CO}_2 + \\text{H}_2\\text{O} + \\text{NaCl} \\text{?}$$ If so, how many ways can you do it?\n\nThe \"can you do it?\" question here is called the reachability problem, and you can read about it here:\n\nIf you like hard problems, the reachability problem is for you! The puzzle above is not hard, but in general the reachability problem is very hard. It was proved in 1981 that there's an algorithm to solve it, no matter what chemicals and reactions you choose. In other words, it's \"decidable\". However, it's known that no algorithm can solve it in polynomial time. The only known upper bound on the runtime of an algorithm that solves the reachability problem is insanely bad... in fact, so bad that I'll make you read our book to learn how bad! So, there's a very interesting set of puzzles here for people skilled in computational complexity theory.\n\nOne way to draw a bunch of chemicals and reactions involving them is a \"Petri net\". Here's the Petri net for the reactions I just listed:", null, "My book with Jacob is really about Petri nets. They're actually used in computer science more than chemistry! That's because they're a good way to draw a bunch of processes that take certain inputs and produce a bunch of outputs. Whenever we have this sort of situation, the reachability problem becomes important.\n\nScheduling. Suppose you have a bunch of jobs to do, that take various amounts of time. And suppose you can only start some jobs when others are done. How long will it take to do all these jobs?\n\nThis a hugely important problem, for example in big companies. One approach to solving it uses \"PERT charts\", where \"PERT\" - businesspeople love acronyms but I hate them - stands for \"program evaluation and review technique\". Here's an example:", null, "The circles are different states, while the edges are different tasks. Each state is labelled with an arbitrary name: 10, 20, 30, 40 and 50. The tasks also have names: A, B, C, D, E, and F. More importantly, each task is labelled by the amount of time that task requires!\n\nYour goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. 
However, you can only reach a state after you have done all the tasks leading up to that state. For example, you can't reach state 50 unless you have already done all of tasks C, E, and F.

Puzzle 56. What is the minimum amount of time it takes to get from state 10 to state 50?

Puzzle 57. Which tasks could take longer, without changing the answer to Puzzle 56? How much longer could each task take, without changing the answer? This amount of time is called the slack for that task.

For an introduction to PERT charts and their uses, see:

You can read about algorithms to solve puzzles like those above: companies use these algorithms to schedule tasks! And so do governments. It's used to simplify the scheduling of large and complex projects - first in 1957 by the U.S. Navy, but later all over industry. For example, in 1968 it was used to help plan the Olympics.

As a category theorist, I am immediately attracted to diagrams, so I loved Petri nets and PERT charts as soon as I saw them. The puzzle for me was then to figure out the category theory hiding behind these diagrams!

I think I understand some things now. For example, it's possible to reinterpret PERT charts as "timed" Petri nets: that is, Petri nets where each reaction takes a specific amount of time.

But I haven't figured out everything. It's all part of a big subject: resource theories!

To read other lectures go here.

Comment 1 (edited May 2018):

I think there's a typo in the third reaction formula -- $$\text{NaHCO}_2$$ should be $$\text{NaHCO}_3$$, or else the Petri net doesn't correspond with the set of reactions.

Comment 2 (edited May 2018):

Thanks! You just caught a typo in my book - unfortunately after it was published. It's amazing how a finite-sized book can have an infinite number of typos.

Speaking of mistakes: in my lecture above I originally wrote:

Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

This was wrong; this is not how people usually use PERT charts. The usual rules say that you can only reach a state after you have accomplished all the tasks leading up to that state - i.e., traversed all the edges pointing into that state.

Based on my erroneous statement of the rules, Sophie Libkind got a different answer to Puzzles 56 and 57 - different than the answer by Jared Summers based on the usual rules.

This led to an interesting discussion which will only make sense if you know my original mistake! I'm going to correct my lecture now, but please compare what I had written:

Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

to what's there now:

Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously.
However, you can only reach a state after you have done all the tasks leading up to that state. For example, you can't reach state 50 unless you have already done all of tasks C, E, and F.

The change makes a big difference.

Comment 3:

### Puzzle 55

Yes, 1 way (which the Petri net illustrates).

### Puzzle 56

7 months.

### Puzzle 57

Only task E can take longer, up to 1 month.

Side note: I've had to look at too many PERT charts and am presently in PERT hell thanks to an ill-timed remark in a meeting on my part. Never suggest you know more about management than management (even when it's true). They may try to recruit you.

Comment 4:

Is there a difference between reachability and satisfiability?

Comment 5:

Jared Summers #3

Never suggest you know more about management than management (even when it's true). They may try to recruit you.

I expect there's a story behind that comment.
Not necessarily any of my business, just sayin'...

Comment 6 (edited May 2018):

Keith E. Peterson asked:

Is there a difference between reachability and satisfiability?

Reachability is concerned with the general problem of the existence of a path between point A and point B. (Propositional) satisfiability is concerned with determining if an assignment of values to variables exists that makes a given formula true.

You can probably connect the two by considering point A to be an empty assignment, and point B to be any of the class of assignments that simplify the formula to "true". Then reachability would ask if we can augment our starting assignment to one in the class of satisfying assignments. But I'm not sure if this contributes much to a discussion on satisfiability.

Comment 7 (edited May 2018):

Some remarks on the chemical Petri net and catalysis:

The feedback loop to the $$\text{CO}_2$$ makes this system interesting. It means that once we obtain 1 equivalent of $$\text{CO}_2$$, we no longer need to supply any, since it is always regenerated with every reaction cycle.

$$\text{CO}_2 + \text{NaOH} \longrightarrow \text{NaHCO}_3$$
$$\text{NaHCO}_3 + \text{HCl} \longrightarrow \text{H}_2 \text{O} + \text{NaCl} + \text{CO}_2$$

So essentially, if one wanted to produce the products water $$\text{H}_2\text{O}$$ and table salt $$\text{NaCl}$$ from the reactants sodium hydroxide $$\text{NaOH}$$ and hydrochloric acid $$\text{HCl}$$ via this Petri net, one could do so with only a single molecule of $$\text{CO}_2$$, while only being limited by the amount of the reactants one has on hand. The reaction stops once one runs out of either.

One could thus simplify the above equations to:

$$\text{HCl} + \text{NaOH} \xrightarrow{\text{'cat.' } \text{CO}_2} \text{NaCl} + \text{H}_2\text{O}$$

where 'cat.' stands for 'catalyzed by'. I put it in quotations because in order for something to actually be catalysis it would have to increase the reaction rate by lowering the activation energy (this can happen by a number of mechanisms).
With the above reaction (acid/base, thermodynamically favored), I very much doubt that in reality having the $$\\text{CO}_2$$ around would speed things up, but according to our Petri net model, it's required for the reaction to proceed.\n\nThe step we left out in the above simplified equation, i.e the middle step in:\n\n$$\\text{CO}_2 + \\text{NaOH} + \\text{HCl}\\longrightarrow \\text{NaHCO}_3 \\ + \\text{HCl} \\longrightarrow \\text{CO}_2 \\ + \\text{H}_2 \\text{O} + \\text{NaCl}$$ would be referred to as the mechanism of catalysis by chemists. When one finds that adding a specific substance drastically increases the rate of a reaction, the search is on to find out how it does so. This often involves a lower energy intermediate that can only be formed in the presence of the catalyst.\n\nAs the book points out, the oven in the lemon pie example can be thought of as a 'catalyst' as well. Just to give a non-chemical example (although baking is in some sense also chemistry).\n\nComment Source:Some remarks on the chemical Petri net and catalysis: The feedback loop to the \\$$\\text{CO}_2\\$$ makes this system interesting. It means that once we obtain 1 equivalent of \\$$\\text{CO}_2\\$$, we no longer need to supply any since it is always regenerated with every reaction cycle. $\\text{CO}_2 + \\text{NaOH} \\longrightarrow \\text{NaHCO}_3$ $\\text{NaHCO}_3 + \\text{HCl} \\longrightarrow \\text{H}_2 \\text{O} + \\text{NaCl} + \\text{CO}_2$ So essentially, if one wanted to produce the products water \\$$\\text{H}_2\\text{O}\\$$ and table salt \\$$\\text{NaCl}\\$$ from the reactants sodium hydroxide \\$$\\text{NaOH}\\$$ and hydrochloric acid \\$$\\text{HCl}\\$$ via this petri net, one could do so with only a single molecule of \\$$\\text{CO}_2\\$$, while only being limited by the amount of the reactants one has on hand. The reaction stops once one runs out of one. One could thus simplify the above equations to: $\\text{HCl} + \\text{NaOH} \\xrightarrow{\\text{'cat.'} \\text{CO}_2} \\text{NaCl} + \\text{H}_2\\text{O}$ Where 'cat.' stands for 'catalyzed by'. I put it in quotations because in order for something to actually be catalysis it would have to increase the reaction rate by lowering the activation energy (this can happen by a number of mechanisms). With the above reaction (acid/base, thermodynamically favored), I very much doubt that in reality having the \\$$\\text{CO}_2\\$$ around would speed things up, but according to our Petri net model, it's required for the reaction to proceed. The step we left out in the above simplified equation, i.e the middle step in: $$\\text{CO}_2 + \\text{NaOH} + \\text{HCl}\\longrightarrow \\text{NaHCO}_3 \\ + \\text{HCl} \\longrightarrow \\text{CO}_2 \\ + \\text{H}_2 \\text{O} + \\text{NaCl}$$ would be referred to as the mechanism of catalysis by chemists. When one finds that adding a specific substance drastically increases the rate of a reaction, the search is on to find out how it does so. This often involves a lower energy intermediate that can only be formed in the presence of the catalyst. As the book points out, the oven in the lemon pie example can be thought of as a 'catalyst' as well. 
Just to give a non-chemical example (although baking is in some sense also chemistry).

Comment 8:

@Jonathan Castello

I suspect a better way to view reachability and satisfiability as being equivalent would be to view a proof (more specifically modus ponens) as being a path between propositions.

I believe homotopy type theory makes this more explicit.

Comment 9 (edited May 2018):

Provability and satisfiability are distinct, though. Satisfiability is concerned with a single formula, not a relationship between two formulae. For instance, I know that $$\bot \vdash A \land \neg A$$, but $$A \land \neg A$$ is not satisfiable.

Worse, satisfiability is weaker than validity, so it wouldn't even be fair to characterize satisfiability by $$\top \vdash P$$.

Comment 10:

I ask because John Baez notes that with reachability it is "known that no algorithm can solve it in polynomial time."

So I wondered if reachability is equivalent to satisfiability, since satisfiability is well known to be NP-complete.

Comment 11 (edited May 2018):

Hm. I spent a little more time digging here. The specific reachability problem in question concerns Petri nets; this is good, because reachability is definitely solvable in polynomial time for finite directed graphs (see the Floyd-Warshall algorithm: https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm). The issue seems to be that Petri nets induce an infinite directed graph of a particular character, whose vertices are sets of resources (the set of tokens placed on the Petri net at a given time, I believe) and whose edges are given by the action of the Petri net itself. There is a considerable amount of regularity to this graph -- as one would expect of something with a finite description! -- but not enough to give anything resembling a reasonable algorithm. (Phrased this way, it's somewhat surprising it's decidable at all.)

On the other hand, Petri net reachability is decidable. This isn't true of most proof systems of any strength. John's book explains why this is interesting:

On the bright side, it means that Petri nets might be fairly powerful when viewed as computers themselves! After all, for a universal Turing machine, the analogue of the reachability problem is undecidable. So if the reachability problem for Petri nets were decidable, they couldn't serve as universal computers. But if it were decidable but hard, Petri nets might be fairly powerful—though still not universal—computers.

As for satisfiability, we're given a formula of finite size $$n$$ and asked to determine if it has any satisfying assignment.
There can only be at most $$2^n$$ assignments, so we can brute-force all of them to give a singly-exponential algorithm. This is a significantly better bound than anything referenced in Section 25.1 of John's book, so I am inclined to believe that (a) Petri net reachability is significantly more powerful than propositional satisfiability, and that (b) if an equivalence were established, that would be worth a publication.\n\n(EDIT: Still mulling this one over. It isn't obvious whether Petri net reachability is in NP at all. I can imagine that there exist reachability problems where the shortest witnesses are exponential in the size of the Petri net. From problem 51 (Section 25.2) in John's book, this would probably involve giving a presentation of the desired arrow in terms of the Petri net morphisms and tensor product. Given such a presentation, we can surely check that it does indeed produce the arrow we desire; but if no polynomial-sized presentation is guaranteed to exist, we can't rightly say the problem is in NP.)\n\nComment Source:Hm. I spent a little more time digging here. The specific reachability problem in question concerns Petri nets; this is good, because reachability is definitely solvable in polynomial time [for finite directed graphs](https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm). The issue seems to be that Petri nets induce an infinite directed graph of a particular character, whose vertices are sets of resources (the set of tokens placed on the Petri net at a given time, I believe) and whose edges are given by the action of the Petri net itself. There is a considerable amount of regularity to this graph -- as one would expect of something with a finite description! -- but not enough to give anything resembling a reasonable algorithm. (Phrased this way, it's somewhat surprising it's decidable at all.) On the other hand, Petri net reachability _is_ decidable. This isn't true of most proof systems of any strength. John's book explains why this is interesting: > On the bright side, it means that Petri nets might be fairly powerful when viewed as computers themselves! After all, for a universal Turing machine, the analogue of the reachability problem is undecidable. So if the reachability problem for Petri nets were decidable, they couldn’t serve as universal computers. But if it were decidable but hard, Petri nets might be fairly powerful—though still not universal—computers. As for satisfiability, we're given a formula of finite size \\$$n\\$$ and asked to determine if it has any satisfying assignment. There can only be at most \\$$2^n\\$$ assignments, so we can brute-force all of them to give a singly-exponential algorithm. This is a significantly better bound than anything referenced in Section 25.1 of John's book, so I am inclined to believe that (a) Petri net reachability is significantly more powerful than propositional satisfiability, and that (b) if an equivalence were established, that would be worth a publication. (EDIT: Still mulling this one over. It isn't obvious whether Petri net reachability is in NP at all. I can imagine that there exist reachability problems where the shortest witnesses are exponential in the size of the Petri net. From problem 51 (Section 25.2) in John's book, this would probably involve giving a presentation of the desired arrow in terms of the Petri net morphisms and tensor product. 
12.

I got a slightly different answer for Puzzle 57 and was wondering what you all think!

Puzzle 57. My interpretation of the problem was that you don't need to do every task, you only need to visit each preceding state. By skirting around task E, you can do this in 7 months no matter how long task E takes. Thus, task E can take an arbitrary amount of time without changing the answer to Puzzle 56.

13.

Nice comment, Marius Furter! The $$CO_2$$ should be a catalyst in the way you describe, because without it, the reaction would stop altogether.

14.

No, Sophie Libkind, John asked to get all the way to state '50', and he also made clear that to get to a state you have to do all the tasks leading there. What you described is more like getting done all the preparations for the tasks leading to '50'. For that, however, the time should be 4 months.

It's so nice to have a course where we can quickly eradicate misunderstandings like these!

15.

@Sophie_Libkind, on comment 12:

I find it a bit unclear what John means by "you have reached all its predecessors". By that he probably doesn't mean "have been at all predecessor states" but rather "have performed all predecessor tasks", because (at least for a production process) one usually doesn't stop in the predecessor states; that is, you usually don't leave things in an unfinished state, i.e. you usually need to do all tasks, unless some steps are redundancy steps. In the former interpretation 6 months would be enough, since then state 50 would have been reached through 10 → 30 → 50, all predecessor states (which take at most 4 months) would have been reached as well, and there would be no slack time; but in the latter interpretation (the one I think is meant) one needs 7 months and E would have a slack time of 1 month. So it depends how much redundancy you want to / are allowed to have in your system.
16.

@Sophie Libkind and hi @nad. If you add 1 to A then A = 4 + D = 1 + F = 3 = 8, which busts 7; so E + 1 is unique, as A = 3 + E = 3 + 1 = 7 as per the previous puzzle.

17.

Keith wrote:

> So I wondered if reachability is equivalent to satisfiability since satisfiability is well known to be NP-complete.

I will wildly guess that the "satisfiability" you're talking about is SAT, the Boolean satisfiability problem. This asks whether a Boolean expression like $$p \vee (q \wedge r) \vee (\neg p \wedge r)$$ is true for some assignment of T and F to all the variables. Reachability, on the other hand, is a question involving Petri nets.

So, they are quite different things. To prove the equivalence you're wondering about, you'd need to figure out some way to translate reachability questions about Petri nets into satisfiability questions about Boolean expressions.

But I'm pretty sure this is impossible - at least, not in exponential time - because NP problems like SAT can be solved in exponential time, and nobody knows if reachability can be solved in exponential time. We have an exponential lower bound on the runtime of a certain algorithm for reachability, but the best known upper bound is astronomical.

Reachability can, however, be proved equivalent to lots of other questions about Petri nets. You can see everything I know about this here:

* John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics, Section 25.1: The Reachability Problem, https://arxiv.org/abs/1209.3632

It's really fun stuff.
18.

Marius - your remarks on catalysis are very interesting and important! One of the beauties of resource theory is that it lets us make the concept of "catalyst" very general and mathematical.

In Lecture 20, I talk about some reactions in manufacturing, like these:

$$\textrm{[processing chip]} + \textrm{[memory chip]} + 4 \textrm{[minute]} \to \textrm{[laptop]}$$

$$\textrm{[processing chip]} + 2 \textrm{[memory chip]} + 3 \textrm{[minute]} \to \textrm{[desktop]}$$

$$\textrm{[laptop]} \to 750\textrm{[profit]}$$

$$\textrm{[desktop]} \to 1000 \textrm{[profit]}$$

These are often studied using linear programming. Linear programming ignores catalysis because it doesn't distinguish between a reaction, say

$$X + Y \to Z,$$

and a similar reaction that involves a catalyst:

$$X + Y + C \to Z + C.$$

The manufacturing reactions I listed don't involve catalysis, but if they did, linear programming would be somewhat inadequate to capture all the details! (At least the simple sort of linear programming I know about. Maybe there's a fancier version that handles catalysis.)
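As a concrete illustration of how such reactions get turned into a linear program, here is a minimal sketch using `scipy.optimize.linprog`. The stock levels (100 processing chips, 150 memory chips, 480 minutes) are numbers made up just for the example; only the coefficients come from the reactions above.

```python
from scipy.optimize import linprog

# Decision variables: x[0] = laptops built, x[1] = desktops built.
# Maximize 750*x[0] + 1000*x[1], i.e. minimize the negative profit.
c = [-750, -1000]

# Resource constraints (A_ub @ x <= b_ub), with made-up stock levels:
A_ub = [
    [1, 1],   # processing chips: 1 per laptop, 1 per desktop
    [1, 2],   # memory chips: 1 per laptop, 2 per desktop
    [4, 3],   # minutes: 4 per laptop, 3 per desktop
]
b_ub = [100, 150, 480]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal production plan and the profit it yields
```

Note that this treats production levels as continuous, and, as the comment above says, it has no way to express that a resource is merely catalytic: a catalyst would appear with the same coefficient on both sides of a reaction and simply drop out of the constraints.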
19.

Sophie wrote:

> I got a slightly different answer for Puzzle 57 and was wondering what you all think!
>
> Puzzle 57. My interpretation of the problem was that you don't need to do every task, you only need to visit each preceding state. By skirting around task E, you can do this in 7 months no matter how long task E takes. Thus, task E can take an arbitrary amount of time without changing the answer to Puzzle 56.

Very good point! This is a perfectly self-consistent viewpoint. But now that I think about it, I'm pretty sure that people who use PERT charts usually say you must accomplish every task (= arrow) pointing to a given state (= node) before you can go ahead and do further tasks pointing out of that state.

So, my description of the problem was inaccurate, at least for PERT charts as ordinarily used. I've fixed my lecture, and explained the change in comment 2.

One annoying thing about the Wikipedia article on PERT charts is that they don't start by clearly stating the rules of the game.

20.

Possibly the travelling salesman problem (https://en.wikipedia.org/wiki/Travelling_salesman_problem) would be a classical "target" for resource theories? Am I correct?

21.

Nad wrote:

> I find it also a bit unclear what John means with "you have reached all its predecessors". By that he probably doesn't mean "had been at all predecessor states" but more "performed all predecessor tasks"...

That's not really what I meant, but it's what I should have meant. One reason is that Jared Summers, an official expert on PERT charts, answered my puzzles in the way that's consistent with this correction.

22.

Pierre Prado wrote:

> Possibly the travelling salesman problem would be a classical "target" for resource theories? Am I correct?

The traveling salesman problem is reducible to Boolean satisfiability, if my memory is correct.
23.

Jonathan Castello - nice comment! Everything you said sounds right to me.

> It isn't obvious whether Petri net reachability is in NP at all.

I'm no good at computational complexity, so I could be missing something obvious, but I'd hope that if this problem were known to be in NP I'd have bumped into this fact during my literature search.

In 1976, Roger J. Lipton showed that any algorithm for solving the Petri net reachability problem requires at least an exponential amount of memory space:

* Roger J. Lipton, The reachability problem requires exponential space, Technical Report 62, Yale University, 1976.

and thus also at least an exponential amount of time. As far as I know, this doesn't rule out the possibility that this problem is in $$\text{NP}$$. But maybe I'm missing some theorems.

24.

Ew. I seem to be missing the point. I decide to believe that, if my interjections become too awful, someone will point that out to me loud and clear. So I won't stop just because I see room for improvement.

That being said, man, what an interesting pack of discussions! Thanks everybody!

25.

I wonder if we could get Prof. Erik Demaine in here. He has a pretty good handle on what it takes to reduce a problem to a known strongly NP-hard problem.

For instance, he's shown that playing "offline" Tetris optimally - or more precisely, deciding, given a known Tetris board configuration and some known future Tetris pieces, whether we can get to a board configuration that is not a game over - is NP-complete, by reducing the game to 3-PARTITION (which I believe is reducible to SAT, but don't quote me on that).

In terms of Tetris, my question above is: if we view board configurations and pieces as resources, can we even produce any valid new resource (that being any specific board configuration) following the rules of the game without getting a game over?
26.

John Baez wrote in #23:

> As far as I know, this doesn't rule out the possibility that this problem is in NP.

We know that $$\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DSPACE}(f(n))$$. This is because, intuitively, in order to use up all of that space you needed to take the time to write it out. But the containment is strict because oftentimes you can just mutate your input. For example qsort runs in $$\mathsf{DTIME}(\mathcal{O}(n \, \mathsf{ln}(n)))$$ but only takes $$\mathsf{DSPACE}(\mathcal{O}(n))$$ (this is including its input string).

We also know that $$\mathsf{PSPACE} \subsetneq \mathsf{EXPSPACE}$$ from the space hierarchy separation theorem.

From the above two observations, we have $$\mathsf{P} \subsetneq \mathsf{EXPSPACE}$$.

Hence, a polynomial time reduction of Petri net reachability to SAT would suffice to prove $$\mathsf{EXPSPACE} \subseteq \mathsf{NP}$$ and thus $$\mathsf{P} \neq \mathsf{NP}$$. By the simple argument above I feel I deserve at least half the Millennium prize money if someone in this thread comes up with such a polynomial time reduction ;-)

27.

Matthew, can you elaborate on why reducing Petri net reachability to SAT would imply $$\text{EXPSPACE} \subseteq \text{NP}$$? Is Petri net reachability known to be EXPSPACE-complete? I don't think you're necessarily wrong, but the critical step is eluding me.

Approaching this similarly: We have an exponential lower bound on space for Petri net reachability. As you said, this necessarily imposes an exponential lower bound on time, since you can only write one cell per unit time (per tape). Suppose a reduction to SAT existed. If SAT had a subexponential algorithm, then we could defeat the exponential lower bound; so SAT, and by extension every NP-complete problem, must not be solvable in subexponential time. Therefore, $$\text{P} \ne \text{NP}$$.
28.

Yeah, like Jonathan I don't see how proving one particular problem that's known to take at least exponential space is in $$\text{NP}$$ would imply $$\textrm{EXPSPACE} \subseteq \textrm{NP}$$.

I've never heard anything about Petri net reachability being $$\textrm{EXPSPACE}$$-complete. If it's true I wanna know!

29.

Robert wrote:

> No, Sophie Libkind, John asked to get all the way to state '50', and he also made clear that to get to a state you have to do all the tasks leading there.

No, I didn't make that clear at all. I should have made it clear, but I said something very different:

> However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

Sophie correctly picked up on this issue: I was demanding only that all previous states be reached, not that all previous tasks be completed.

I will now fix this mistake of mine.
30.

Let me make a weaker claim instead of a full equivalence.

Is (Boolean) satisfiability a type of reachability?

Or in constructivist language, can we build the various Boolean operations and model SAT problems internally in Petri nets or other equivalent models of resource theories?

The answer seems "yes" because Petri nets are in a computationally stronger class.

31.

Keith - you'll notice I've carefully sidestepped the question of whether the Boolean satisfiability problem can be reduced to the Petri net reachability problem. That's a really interesting question, but I don't know the answer.

> The answer seems "yes" because Petri nets are in a computationally stronger class.

That's an interesting guess, but not a proof.

Above, I only discussed whether the Petri net reachability problem can be reduced to the Boolean satisfiability problem. I don't know the answer to that either, but at least I can say some mildly entertaining things about it!

32.

Is there something on the reachability problem but with the assumption that at all steps you must have $$\leq d_i$$ of object $$X_i$$? This is breaking the symmetric monoidal structure by taking a very small subset of the objects, but I wanted a finite-dimensional linear operator on $$\otimes_i \mathbb{C}^{d_i+1}$$ that sends each computational basis state to the result of applying any one of the reactions (including identity) in uniform superposition (provided they still satisfy all the constraints). Then apply this some large number of times and evaluate the matrix element between the starting and ending states to see if it is 0. I see some with $$d_i = 1$$, but not anything higher.
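Since the token counts are capped at $$d_i$$, the state space becomes finite, and this kind of bounded reachability can be checked directly. Here is a minimal sketch (my own toy code, with a made-up pair of reactions) that does the search by breadth-first search; asking whether the corresponding matrix element of a high power of the one-step transition operator is nonzero would give the same answer.

```python
from collections import deque

def bounded_reachable(start, target, reactions, caps):
    """Reachability with at most caps[i] tokens of species i at every step.

    States are tuples of token counts; `reactions` is a list of
    (inputs, outputs) vectors saying how many tokens each reaction
    consumes and produces.
    """
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for inputs, outputs in reactions:
            if all(s >= i for s, i in zip(state, inputs)):
                nxt = tuple(s - i + o for s, i, o in zip(state, inputs, outputs))
                if all(n <= c for n, c in zip(nxt, caps)) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Made-up example with species (X, Y, Z): the reactions X+Y -> Z and Z -> 2X.
reactions = [((1, 1, 0), (0, 0, 1)), ((0, 0, 1), (2, 0, 0))]
print(bounded_reachable((1, 1, 0), (2, 0, 0), reactions, caps=(2, 1, 1)))  # True
```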
33.

Hi, Ammar! My first comment is: the bad news is that we're not on a Wordpress blog, so you need to use \( ... \) around your math instead of $latex ... $. But the good news is that you can edit your comments by clicking on the little gear at upper right.

I'll say something more interesting later, I promise! But right now I need to attend the department tea.

34.

We've discussed catalysts a bit, but is there anything akin to an inhibitor in this setting? In other words, something where an arrow $$X \to Z$$ exists, but an arrow $$X \otimes Y \to Z \otimes Y$$ does not?

35.

Jonathan - great questions!

The framework we're discussing right now can't explain "inhibition". In biochemistry, "inhibitors" often work by binding to one of the reactants. But this only happens in a context where our reactions have "rates" associated to them. That is, we might have a reaction $$X \otimes Y \to A$$ that occurs with such a high rate that $$X \otimes Y \to Z \otimes Y$$ rarely gets a chance to happen, because the rate of $$X \to Z$$ is lower. More precisely: if there are enough $$Y$$s around, most of the $$X$$s bind to the $$Y$$s and form $$A$$s before they get a chance to become $$Z$$s. In the framework discussed near the start of Chapter 2 - namely, symmetric monoidal posets - reactions don't have rates attached to them.

For the same reason, this simple framework can't explain how catalysts in chemistry increase the rates of reactions. It can only explain situations where a reaction is impossible without a catalyst, but becomes possible with it.

I've been thinking a lot about open reaction networks with rates (https://johncarlosbaez.wordpress.com/2017/07/30/a-compositional-framework-for-reaction-networks/), so you can click the link to read more about those if you're curious. That's a framework that can handle inhibition!
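To see the rate-based picture numerically, here is a minimal sketch (entirely a toy of my own, not something from the lecture or the book) that integrates mass-action kinetics for the two competing reactions $$X + Y \to A$$ (fast) and $$X \to Z$$ (slow): when $$Y$$ is plentiful, very little $$Z$$ ever gets made.

```python
def simulate(x, y, a, z, k_bind=5.0, k_decay=0.1, dt=0.001, steps=20000):
    """Euler integration of mass-action kinetics for X+Y -> A and X -> Z."""
    for _ in range(steps):
        r_bind = k_bind * x * y      # rate of the binding reaction X+Y -> A
        r_decay = k_decay * x        # rate of the slow reaction X -> Z
        x -= (r_bind + r_decay) * dt
        y -= r_bind * dt
        a += r_bind * dt
        z += r_decay * dt
    return a, z

print(simulate(x=1.0, y=2.0, a=0.0, z=0.0))  # with lots of Y: almost all the X ends up as A
print(simulate(x=1.0, y=0.0, a=0.0, z=0.0))  # with no Y around: the X just turns into Z
```

The toy rate constants are arbitrary; the only point is that the inhibitor Y diverts X away from Z, which is exactly the kind of effect a rate-free symmetric monoidal poset cannot express.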
36.

John Baez wrote in #28:

> Yeah, like Jonathan I don't see how proving one particular problem that's known to take at least exponential space is in $$\textsf{NP}$$ would imply $$\textsf{EXPSPACE} \subseteq \textsf{NP}$$.
>
> I've never heard anything about Petri net reachability being $$\textsf{EXPSPACE}$$-complete. If it's true I wanna know!

My pithy argument was too pithy.

Fortunately, I have found some references.

Esparza et al., Decidability Issues for Petri Nets (1994), remark that Petri net reachability is $$\textsf{EXPSPACE}$$-complete for symmetric Petri nets. They write: "loosely speaking, a Petri net is symmetric if for every transition $$t$$ there is a reverse transition $$t^\prime$$ whose occurrence 'undoes' the effect of the occurrence of $$t$$". They quote the proof in Cardoza, Lipton and Meyer, Exponential Space Complete Problems for Petri Nets and Commutative Semigroups (1976). Cardoza et al. call symmetric Petri nets reversible.

The wider class of reachability problems for arbitrary Petri nets must then contain $$\textsf{EXPSPACE}$$.

Moreover, $$\textsf{NP} \subsetneq \textsf{EXPSPACE}$$ - this is because it is a folk theorem that $$\textsf{NP} \subseteq \textsf{PSPACE}$$ and $$\textsf{PSPACE} \subsetneq \textsf{EXPSPACE}$$ by the hierarchy theorem.
37.

John wrote in comment 19:

> "This is a perfectly self-consistent viewpoint."

It seems you forgot something here. That is, Puzzle 57 reads:

> Puzzle 57. Which tasks could take longer, without changing the answer to Puzzle 56?

and

> Puzzle 56. What is the minimum amount of time it takes to get from state 10 to state 50?

But seven months is not the minimum time it takes to get from state 10 to state 50. Please see again comment 15.

John wrote:

> That's not really what I meant, but it's what I should have meant. One reason is that Jared Summers, an official expert on PERT charts, answered my puzzles in the way that's consistent with this correction.

He gave the same answer as me for the second variant (see comment 15). This seems indeed to be the official reading of PERT charts - at least I understood it in the same way. (I recently also needed to look a bit at those organization tools for my job, and I understand that he is not very happy with having to deal with PERT charts and the like; too bad that he couldn't avoid being recruited.)

But as I said in comment 15, there also exist processes where your initial interpretation makes sense. An example can be found in Italian haute couture production. It's given as one example of how certain social conditions arise in this book: https://en.wikipedia.org/wiki/Gomorrah_(book). I bought the book as a present and got to look only briefly into it, but if I remember correctly it describes how the production of some haute couture items is put up for a kind of auction: those who sew fastest and cheapest (while keeping haute couture quality) will be paid for the sewn product, the others not. At least it seems they are allowed to sell their late or partly unfinished product on some secondary market, but I don't know how this will go together in future with the new IP tendencies in fashion.
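Since the two readings keep coming up, here is a minimal sketch of the standard PERT computation, under the rule John settled on (every task entering a state must finish before tasks leaving it can start). The chart encoded below is a made-up example, not the one from the lecture; the point is only the recursion: the earliest time for a state is the maximum, over incoming tasks, of the predecessor's earliest time plus the task's duration.

```python
def earliest_times(tasks, start):
    """tasks: list of (from_state, to_state, duration). Returns the earliest
    completion time of every state, assuming ALL incoming tasks must finish."""
    states = {start} | {u for u, _, _ in tasks} | {v for _, v, _ in tasks}
    earliest = {s: 0 for s in states}
    # Repeated relaxation; enough passes for any acyclic chart of this size.
    for _ in range(len(states)):
        for u, v, d in tasks:
            earliest[v] = max(earliest[v], earliest[u] + d)
    return earliest

# Made-up chart: two routes from state 10 to state 50.
tasks = [(10, 20, 2), (10, 30, 3), (20, 40, 1), (30, 40, 2), (40, 50, 2)]
print(earliest_times(tasks, start=10)[50])  # the longest route 10->30->40->50 gives 7
```

Under Sophie's reading one would instead take a minimum over incoming tasks, which is exactly why the two interpretations can give different answers.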
38.

I found this in my simplex code notes. I don't know what Lemke-Howson is, so I doubt I read or understood it.

Y. Disser and M. Skutella (2013), In defense of the Simplex Algorithm's worst-case behavior, http://arxiv.org/abs/1311.5935

These papers show that the Lemke-Howson algorithm can actually solve PSPACE-complete problems (!), and the Simplex algorithm can solve NP-hard problems, respectively (with the solution in the case of the Simplex algorithm encoded in the computation trace, obviously not by the LP solution, which can be found in polynomial time). These results highlight just how powerful these algorithms are, and they reinforce the importance of understanding what types of / restrictions on inputs are important.
39.

I chose to ignore interior-point and ellipsoid methods as per:

> We prove that the classic policy-iteration method (Howard 1960), including the Simplex method (Dantzig 1947) with the most-negative-reduced-cost pivoting rule, is a strongly polynomial-time algorithm for solving the Markov decision problem (MDP) with a fixed discount rate. Furthermore, the computational complexity of the policy-iteration method (including the Simplex method) is superior to that of the only known strongly polynomial-time interior-point algorithm ( 2005) for solving this problem. The result is surprising since the Simplex method with the same pivoting rule was shown to be exponential for solving a general linear programming (LP) problem, the Simplex (or simple policy-iteration) method with the smallest-index pivoting rule was shown to be exponential for solving an MDP regardless of discount rates, and the policy-iteration method was recently shown to be exponential for solving an undiscounted MDP. We also extend the result to solving MDPs with sub-stochastic and transient state transition probability matrices.

* Yunyu Ye, The Simplex and Policy-Iteration Methods are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate (2010), http://web.stanford.edu/~yyye/SimplexMDP3.pdf

40.

To be clear, I don't dislike PERT charts, and the story isn't that interesting. My office is trying to become more Agile and less Waterfall, but management only got as far as adding daily standups and were confused as to why that wasn't enough. I ended up explaining theory of constraints, lean, and agile to them and got drafted to help. There are a lot of teams and projects here (1k or so employees, almost all engineers or computer scientists, in our department alone). So the "hell" is being in management/planning more than doing (and with no authority to direct change, I'm more like an internal consultant).

PERT charts and Gantt charts (less useful in some ways, but a helpful visualization) are great from a scheduling and planning perspective. Interestingly, since this is largely a software shop, they ended up with an internal team that developed our planning tools over the years. They've never included PERT charts in their tools, which might be something I'll point out to them. They do have Gantt charts (which can be generated from the same data), but these often don't show the same causal links between states/activities (the diagram gets noisy if you add all the connections in), so the connection is implicit (some activity being to the right of another activity in the chart means it may or may not depend on the activity to the left). What PERT charts we use are mostly handmade in another program and not in the main reporting tool that upper management sees, so projects have it but don't report it. Which is silly.
41.

Jonathan Castello #27 wrote:

> Matthew, can you elaborate on why reducing Petri net reachability to SAT would imply $$\text{EXPSPACE} \subseteq \text{NP}$$? Is Petri net reachability known to be EXPSPACE-complete? I don't think you're necessarily wrong, but the critical step is eluding me.

As I mentioned above, Cardoza, Lipton and Meyer (1976) establish that reachability for symmetric Petri nets is $$\textsf{EXPSPACE}$$-complete.

I didn't know this when I wrote my argument yesterday; I had to look it up.

If we let $$\textsf{PETRI-REACH}$$ be the class of problems reducible to Petri net reachability, then $$\textsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}$$.

> Approaching this similarly: We have an exponential lower bound on space for Petri net reachability. As you said, this necessarily imposes an exponential lower bound on time, since you can only write one cell per unit time (per tape). Suppose a reduction to SAT existed. If SAT had a subexponential algorithm, then we could defeat the exponential lower bound; so SAT, and by extension every NP-complete problem, must not be solvable in subexponential time. Therefore, $$\text{P} \ne \text{NP}$$.

This is good, but we can do better, I believe.

Not only does $$\textsf{PETRI-REACH} \subseteq \textsf{NP} \implies \textsf{NP} \neq \textsf{P}$$, but in fact we have the stronger result:

$$\textsf{NP} \subsetneq \textsf{PETRI-REACH}$$

Proof.

It's well known that $$\textsf{NP} \subseteq \textsf{PSPACE}$$ (see, for instance, Arora and Barak (2007), §4.2, pg. 78).

We also know that $$\mathsf{PSPACE} \subsetneq \mathsf{EXPSPACE}$$ from the space hierarchy separation theorem.

Finally, we have $$\mathsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}$$ by Cardoza et al. (1976).

Hence $$\textsf{NP} \subsetneq \textsf{PETRI-REACH}$$. $$\Box$$
42.

@John Baez:

I had a thought. I know you can use the Lawvere fixed point theorem to prove the halting problem. Generalized to Oracle machines, it establishes separation theorems for the Arithmetical hierarchy.

I am wondering: can the separation theorems for the time and space hierarchies in ordinary complexity theory also be seen as applications of the Lawvere fixed point theorem?

On a related note, it's well known that the fixed point theorem can be used to prove Gödel's first incompleteness theorem. However, I found a super cute proof that applies it to prove the second incompleteness theorem rather succinctly. Not sure if it's something to post here...

43.

Matthew wrote:

> I am wondering: can the separation theorems for the time and space hierarchies in ordinary complexity theory also be seen as applications of the Lawvere fixed point theorem?

I don't have the combination of time and space (= brainpower) to answer this. For example, I have no clue as to how people usually prove those separation theorems. However, if someone were able to prove these theorems using the Lawvere fixed point theorem, that would be a nice (small) step toward applying category theory to computational complexity.

There's a famous divide between the computer scientists who like category theory and those who like computational complexity. Any step toward bridging this divide would be great.

> I found a super cute proof that applies it to prove the second incompleteness theorem rather succinctly. Not sure if it's something to post here...

Cool! Is this really new? If so, it might be better to post it on the $$n$$-Category Café and/or the Azimuth blog. That way, more people would see it. If you post it here, I can repost it there, as a "guest post".
44.

FYI:

1. Rössler, O.E.: Adequate locomotion strategies for an abstract organism in an abstract environment: a relational approach to brain function. In: Physics and Mathematics of the Nervous System (M. Conrad, W. Guttinger and M. DalCin, eds.), Lecture Notes in Biomathematics, vol. 4, pp. 342–369. Springer, New York (1974)

And the abstract of the chapter "The Brain Equation":

The brain equation is a solution to the "second survival problem." The latter is called "positional adaptation." It, unlike Darwin's first ("metabolic adaptation"), is history-independent. As such it is mathematically well posed. The equation applies to all life forms in the cosmos that live in a structured environment in which survival depends on position in space in a short-term fashion. An eusocial version does not exist. The equation solves, in conjunction with the necessarily attached VR machine, the famous NP-complete "decision-type travelling salesman problem" for finite times. The resulting autonomous optimizer with cognition is susceptible to a "function change" in the sense of Bob Rosen which so far is known empirically only from the human brain.

Cite this chapter as: Rössler O.E. (2014) The Brain Equation. In: Sanayei A., Zelinka I., Rössler O. (eds) ISCS 2013: Interdisciplinary Symposium on Complex Systems. Emergence, Complexity and Computation, vol 8. Springer, Berlin, Heidelberg
45.

> I don't have the combination of time and space (= brainpower) to answer this. For example, I have no clue as to how people usually prove those separation theorems. However, if someone were able to prove these theorems using the Lawvere fixed point theorem, that would be a nice (small) step toward applying category theory to computational complexity.
>
> There's a famous divide between the computer scientists who like category theory and those who like computational complexity. Any step toward bridging this divide would be great.

Separation for time and space complexity hierarchies is proved by a kind of diagonalization (see Arora and Barak (2007), §3, pgs. 65-74).

Similar diagonalization arguments also crop up in descriptive set theory for establishing higher separation theorems. For a nice proof of this, I like Jech (2003), where in chapter 11 he shows $$\mathbf{\Sigma}^0_\alpha \neq \mathbf{\Pi}^0_\alpha$$ (Corollary 11.3).

In the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC with a suitable epimorphism. However, I am much more confident that descriptive set theory is following the usual recipe.

> Cool! Is this really new? If so, it might be better to post it on the n-Category Café and/or the Azimuth blog. That way, more people would see it. If you post it here, I can repost it there, as a "guest post".

Nah, it's in Boolos' The Logic of Provability (1995). I can't find the citation, but I think he got it from Martin Löb's writings in the 1950s. Unlike the first incompleteness theorem, the quick proof of the second incompleteness theorem applies the fixed point with some indirection à la Curry's paradox.

I'll try to post it in a For Fun thread.
In the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC with a suitable epimorphism. However, I am much more confident that descriptive set theory is following the usual recipe. > Cool! Is this really new? If so, it might be better to post it on the n-Category Café and/or the Azimuth blog. That way, more people would see it. If you post it here, I can repost it there, as a \"guest post\". Nah it's in Boolos' *The Logic of Provability* (1995). I can't find the citation but I think he got it from Martin Löb's writings in the 1950s. Unlike the first incompleteness theorem, the quick proof of the second incompleteness theorem applies the fixed point with some indirection á la [Curry's paradox](https://plato.stanford.edu/entries/curry-paradox/). I'll try to post it in a *For Fun* thread.\n• Options\n46.\nedited May 2018\n\nI can't imagine that Boolos used Lawvere's fixed point theorem to prove Gödel's second incompleteness theorem! He may have used a fixed point argument that you can instantly recognize as a special case of Lawvere's fixed point theorem. But making that explicit would be a great $$n$$-Café post... as long as you admit that it's not utterly brand new. People there like seeing things done with categories!\n\nIn the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC with a suitable epimorphism.\n\nIt would be very nice to get CCC's for different complexity classes of functions.\n\nComment Source:I can't imagine that Boolos used Lawvere's fixed point theorem to prove G&ouml;del's second incompleteness theorem! He may have used a fixed point argument that you can instantly recognize as a special case of Lawvere's fixed point theorem. But making that explicit would be a great \\$$n\\$$-Caf&eacute; post... as long as you admit that it's not utterly brand new. People there like seeing things done with categories! > In the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC with a suitable epimorphism. It would be very nice to get CCC's for different complexity classes of functions. \n• Options\n47.\nedited May 2018\n\nSpeaking of applying category theory to computational complexity, I've been thinking of NP-complete problems as terminal objects in the category of problems in NP with arrows $$a \\to b$$ when $$a$$ can be reduced to $$b$$. I'm sure that's a trivial observation, but it was surprisingly handy when I was helping a friend understand how NP-completeness is special and why NP-hardness is something we'd ever think to consider.\n\nComment Source:Speaking of applying category theory to computational complexity, I've been thinking of NP-complete problems as terminal objects in the category of problems in NP with arrows \\$$a \\to b\\$$ when \\$$a\\$$ can be reduced to \\$$b\\$$. I'm sure that's a trivial observation, but it was surprisingly handy when I was helping a friend understand how NP-completeness is special and why NP-hardness is something we'd ever think to consider.\n• Options\n48.\n\nI've long wanted to apply category theory to the ideas of optimization, to preserving semantics while changing performance.\n\nComment Source:I've long wanted to apply category theory to the ideas of optimization, to preserving semantics while changing performance. \n• Options\n49.\n\nChristopher that's 2-category. You've got types, the programs and the action of the optimizing compiler that moves between programs. 
If you restrict yourself to fixed input and output types to go down a category number, you'll lose power for not much gain in ease.\n\nComment Source:Christopher that's 2-category. You've got types, the programs and the action of the optimizing compiler that moves between programs. If you restrict yourself to fixed input and output types to go down a category number, you'll lose power for not much gain in ease.\n• Options\n50.\nedited May 2018\n\n@Jonathan Castello:\n\nIt isn't obvious whether Petri net reachability is in NP at all.\n\nWell, every decision problem in NP has an exponential-time algorithm. And as John's book explains on p.251, the complexity of reachability is at least doubly exponential. Doesn't this mean that reachability cannot be in NP? [Edit: I misread the statement in the book, which is about Presburger arithmetic rather than Petri net reachability. So this doesn't really work.]\n\nNow that we're talking about computational complexity, it may be fun to note that also computational complexity itself is a resource theory! Here, the resources are all the possible decision problems, while we write $$x \\leq y$$ if the decision problem y can be reduced to x in, say, polynomial time.\n\nIt may sound a bit funny that I'm calling the decision problems 'resources', and it's better to think of the resources themselves as being the oracles for those decision problems. But technically I find it harder to define what an oracle is, so that's why I've written down the poset of decision problems.\n\nFor given decision problems $$x$$ and $$y$$, we can take $$x + y$$ to mean the decision problem that asks you to solve either an instance of $$x$$ or an instance of $$y$$. So we really have a resource theory as well! It's one that behaves very differently than the one for lemon pie. I won't go into the details now, since I don't know what will be discussed in the upcoming lectures.\n\nComment Source:@Jonathan Castello: > It isn't obvious whether Petri net reachability is in NP at all. Well, every decision problem in NP [has an exponential-time algorithm](https://cs.stackexchange.com/questions/41555/does-every-problem-in-np-have-an-exponential-time-algorithm). And as John's book explains on p.251, the complexity of reachability is at least doubly exponential. Doesn't this mean that reachability cannot be in NP? *[Edit: I misread the statement in the book, which is about Presburger arithmetic rather than Petri net reachability. So this doesn't really work.]* Now that we're talking about computational complexity, it may be fun to note that also computational complexity itself is a resource theory! Here, the resources are all the possible decision problems, while we write \\$$x \\leq y\\$$ if the decision problem y can be reduced to x in, say, polynomial time. It may sound a bit funny that I'm calling the decision problems 'resources', and it's better to think of the resources themselves as being the *oracles* for those decision problems. But technically I find it harder to define what an oracle is, so that's why I've written down the poset of decision problems. For given decision problems \\$$x\\$$ and \\$$y\\$$, we can take \\$$x + y\\$$ to mean the decision problem that asks you to solve either an instance of \\$$x\\$$ or an instance of \\$$y\\$$. So we really have a resource theory as well! It's one that behaves very differently than the one for lemon pie. I won't go into the details now, since I don't know what will be discussed in the upcoming lectures." ]
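Since several of the comments above circle around the same diagonal construction (Lawvere's fixed point theorem, the hierarchy theorems, Cantor), here is one tiny, concrete reading of it, written as my own Python sketch rather than anything posted in the thread; the carrier set and the sample "enumeration" are arbitrary choices made only so that the snippet runs.

```python
# Cantor-style corollary of Lawvere's fixed point theorem: for any
# f : A -> (A -> bool), the diagonal predicate g(a) = not f(a)(a)
# differs from f(a) at the input a, so f cannot be surjective.

def diagonal(f):
    """Given f : A -> (A -> bool), build a predicate g missed by every f(a)."""
    return lambda a: not f(a)(a)

A = [0, 1, 2]                       # a toy carrier set
f = lambda a: (lambda x: x <= a)    # an arbitrary attempted enumeration of predicates
g = diagonal(f)

for a in A:
    assert any(g(x) != f(a)(x) for x in A), "g must disagree with every f(a)"
print([g(x) for x in A])            # the predicate this particular f misses
```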
[ null, "http://math.ucr.edu/home/baez/mathematical/7_sketches/chemistryNetBasicA.png", null, "http://math.ucr.edu/home/baez/mathematical/7_sketches/PERT_chart.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9389685,"math_prob":0.87127733,"size":78179,"snap":"2021-04-2021-17","text_gpt3_token_len":18634,"char_repetition_ratio":0.12755996,"word_repetition_ratio":0.7486091,"special_character_ratio":0.23341307,"punctuation_ratio":0.10585908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9827252,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T05:39:47Z\",\"WARC-Record-ID\":\"<urn:uuid:21c070a9-a575-4b2f-82ab-b8ac793ac539>\",\"Content-Length\":\"204603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4a11b2b-f709-4c97-b969-15d33ccb3d44>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4e5e232-64b4-40ce-9dd0-262b8c906828>\",\"WARC-IP-Address\":\"75.98.32.15\",\"WARC-Target-URI\":\"https://forum.azimuthproject.org/discussion/comment/17819/\",\"WARC-Payload-Digest\":\"sha1:2QAE5P7MSSII3TNNEKW3OXRCL2JCF3ER\",\"WARC-Block-Digest\":\"sha1:K6AVL3GSQ3K5RRDMLDKBMHRXUGMHWHFC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703517966.39_warc_CC-MAIN-20210119042046-20210119072046-00221.warc.gz\"}"}
https://chai-pe-charcha.com/math-quiz-1/
[ "Online Math Quiz Sets of 10 Questions each. It will take around 5 minutes of your time to complete a set of questions. Solving this math test quiz will help you to know how much you know about basic math functions. You can share these questions with your friends over and have some fun.\n\n15\nCreated on By", null, "admin\n\nMath Quiz 1\n\nMath Quiz\n\n1 / 10\n\nFind the sum of 111 + 222 + 333\n\n2 / 10\n\nSimplify: 240 ÷ (9 + 3 x 7) - 5\n\n3 / 10\n\nSolve: 35 + 7 ÷ 7\n\n4 / 10\n\nSimplify: 150 ÷ (6 + 3 x 8) - 5\n\n5 / 10\n\nSimplify : 3 + 6 x (5 + 4) ÷ 3 - 7\n\n6 / 10\n\nSolve: 365 – (128 ÷ 4)\n\n7 / 10\n\nSubtract 264 from 1094\n\n8 / 10\n\nFind the product of 78 × 3\n\n9 / 10\n\nSelect the Answer for 110 ÷ 10\n\n10 / 10\n\nSimplify: 26 + 32 - 12" ]
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgAQMAAABJtOi3AAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAAtJREFUCB1jGOQAAACgAAGXmq1SAAAAAElFTkSuQmCC", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9386324,"math_prob":0.9928279,"size":344,"snap":"2021-31-2021-39","text_gpt3_token_len":73,"char_repetition_ratio":0.13235295,"word_repetition_ratio":0.0,"special_character_ratio":0.2122093,"punctuation_ratio":0.072463766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96926856,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T22:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:c8937f17-5fad-4f8f-a17d-393909d0010b>\",\"Content-Length\":\"177836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:534b76d1-67a7-4476-ada7-9493bf5ebb37>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8cd32e0-805a-4f04-9c26-91f65d398113>\",\"WARC-IP-Address\":\"172.67.156.229\",\"WARC-Target-URI\":\"https://chai-pe-charcha.com/math-quiz-1/\",\"WARC-Payload-Digest\":\"sha1:NF6SDENUJD3BZ4FAZG65EWUB5AUHVAGY\",\"WARC-Block-Digest\":\"sha1:DNZZ36QH2QJPQVTN2TIWS5PNSJ3LENER\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056578.5_warc_CC-MAIN-20210918214805-20210919004805-00159.warc.gz\"}"}
http://book.caltech.edu/bookforum/showpost.php?s=a0f4012d8728fd04ac229b261ac6a4f0&p=8531&postcount=17
[ "Thread: Confused on question 6. View Single Post\n#17\n kumarpiyush", null, "Junior Member Join Date: Jan 2013 Posts: 7", null, "Re: Confused on question 6.\n\nQuote:\n Originally Posted by yaser", null, "Possible target function is a notion introduced in this problem in order to make a point about learning. In general, there is one target function, albeit unknown. Here we spell out \"unkown\" by considering all the possibilities the target function can assume. We can afford to do that here because there is only a finite number of possibilities. Hypotheses are the products of learning that try to approximate the target function. In this problem, we prescribe different learning scenarios that result in different hypotheses, then attempt to grade these hypotheses. We grade them according to how well each of them approximates the target function. The twist is that we consider all possible target functions and grade the hypothesis according to how well it approximates each of these possible targets.\nI understood it now :-)" ]
[ null, "http://book.caltech.edu/bookforum/images/statusicon/user_offline.gif", null, "http://book.caltech.edu/bookforum/images/icons/icon1.gif", null, "http://book.caltech.edu/bookforum/images/buttons/viewpost.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9013683,"math_prob":0.8354297,"size":967,"snap":"2021-04-2021-17","text_gpt3_token_len":208,"char_repetition_ratio":0.1443406,"word_repetition_ratio":0.0,"special_character_ratio":0.21923475,"punctuation_ratio":0.11299435,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98205376,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T15:02:07Z\",\"WARC-Record-ID\":\"<urn:uuid:cc57ab0b-2db7-48e9-be19-a9899a982225>\",\"Content-Length\":\"14062\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5db2229-072e-40e3-b32a-21b88b859b10>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b37222b-abe5-43c6-8195-ba4c83bad1bb>\",\"WARC-IP-Address\":\"131.215.134.70\",\"WARC-Target-URI\":\"http://book.caltech.edu/bookforum/showpost.php?s=a0f4012d8728fd04ac229b261ac6a4f0&p=8531&postcount=17\",\"WARC-Payload-Digest\":\"sha1:QXFJHVS3HVCVOTT6XF7ZRVR5BERJ2PRL\",\"WARC-Block-Digest\":\"sha1:RM2XLJSJWTKGENMYOLDSZCALQNQ2K2AK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703514796.13_warc_CC-MAIN-20210118123320-20210118153320-00230.warc.gz\"}"}
https://economics.stackexchange.com/questions/tagged/self-study
[ "# Questions tagged [self-study]\n\nQuestions about self-studying economics, including curriculum design, study strategies, resources, etc.\n\n226 questions\nFilter by\nSorted by\nTagged with\n122 views\n+100\n\n### Writing down Bellman equation\n\nAssume an infinite horizon representative agent economy with the following consumer preferences $u(c_t)$ The production technology of this economy uses capital and land, which is fixed amount in ...\n14 views\n\n### Real or monetary values? In the context of Classic model\n\nI am studying the classing model. And I have a basic question what are the units of the following variables( monetary or real?) Inflation rate ($\\pi$), wage (w), Labor supply N, consumption C, ...\n96 views\n\n### Endogenous growth model with externalities\n\nI have a following model of endogenous growth where each firm has the following technology; $$y_t=AK_t^{1-\\alpha} k_t^{\\alpha} n_t^{1-\\alpha}$$ The production function above defines an externality. I ..." ]
http://excel.bigresource.com/Multiple-criteria-met-in-a-sumproduct-formula--ar1nZmVC.html
[ "Multiple Criteria Met In A Sumproduct Formula.\n\nOct 1, 2009\n\nI have 2 columns of data being populated by vlookups\n\nColumn H is both numbers and text. Column I is Text and blanks. I need to be able to find only numeric values in column H greater than 0 and compare those occurrences with the corresponding cells in column I and if column I has a text entry (not a blank space) than to count that and at the end give me a total number of times these 2 criteria are met. As an example.\n\nIf column H has a text entry then don't count it.\nIf column H has a number less than zero then don't count it.\nIf column H has a number greater than zero but column I is blank then don't count it.\nIf column H has a number greater than 0 and column I has a text entry then count it.\n\nI've tried using many variations of a sumproduct formula and none of them work.\n\nThis formula counts all instances where column I has a text entry without checking column H for a number greater than 0.\n\n=SUMPRODUCT(--(H2:H110>0),--(I2:I110<>\" \"))\n\nOr it's possible that the formula is counting the text entries in column H as a number greater than 0 but I'v tried excluding text using this..\n\n=SUMPRODUCT(--(H2:H110>0&<>\"*\"),--(I2:I110<>\" \"))\n\nbut this causes an error in the formula somehow that I can't figure out. I even tried this\n\n=SUMPRODUCT(--(H2:H110>0&\"*\"),--(I2:I110<>\" \"))\n\nand I get a formula that counts only the times text appears in column H and column I together which is not what I want either.\n\nI'm self-taught on Excel so I know there's a lot I'm not understanding about creating formulas like this but I need to have this working by Friday and I just want it to work.\n\nSumproduct With Multiple Criteria?\n\nJan 21, 2014\n\nI am looking around any way wherein I can sumproduct the values as given in attached sheet, basically I wanted to know the total MRP value of Sale and Stock\n\nHow To Use SUMPRODUCT With Multiple Criteria\n\nJul 17, 2009\n\nI am stuck - I have a large amount of data for a group of physicians I work for. I am trying to set up a monthly trend report to be able to run quickly after I plug in the data. I want to use some sort of lookup to look up two things - 1) the physician's specialty and 2) the month.\n\nCan anyone look at the attached example and tell me how to do this? I have started a SUMPRODUCT formula, but am stuck on how to tell it to find only that month's data.\n\nHow To Use Sumproduct With Multiple Criteria\n\nMar 10, 2013\n\nID, Name, Point, Session\n1111, Viking, 5, 1\n2222, John, 6, 1\n1111, Viking, 10, 2\n\nWhat's the formula to get the Point cell value with criteria ID = \"1111\" and \"Session = 2\" ? In this case, it will return me the value of Point = 10\n\nSumproduct With Multiple Criteria?\n\nFeb 22, 2012\n\nI was wondering if I could use a range of cells as my criteria as opposed to inserting quotation marks with each criteria. For example in the syntax below can I do something like this? Report!\\$C\\$3:\\$C\\$5000=B45:B51?\n\n=SUMPRODUCT((Report!\\$A\\$3:\\$A\\$5000=\"XXX\")*(Report!\\$C\\$3:\\$C\\$5000=???)*(Report!\\$E\\$3:\\$E\\$5000))\n\nMultiple Criteria - SUMPRODUCT\n\nJan 30, 2009\n\nI'm trying to create a budget worksheet that pulls actual data from another sheet within the file for comparison (Budget vs. Actual). 
There are two criteria: 1) the actual transaction falls into the same category of transaction as the budget line item (e.g., mortgage payment) and 2) the date of the actual transaction matches the month in the budget (e.g., a January or March transaction isn't pulled into the actual data for February budget information). From there, I'd like it to sum any charges or reduce by any deposits for those given criteria.\n\nI've tried numerous things from DSUM, to SUMIF with IF, to SUMPRODUCT.\n\nSumproduct With Multiple Criteria\n\nApr 28, 2009\n\nI received an answer to my original question and now have a new question but I wanted to reference my original for the history. I posted my new question at the end of my original thread.\n\n[url]\n\nMultiple Criteria Countif Or Sumproduct\n\nSep 16, 2009\n\nI haven't been this deep into excel before. The deeper I look, the more potential I recognize, the more amazed I get. That being said, I have come to a tough count issue. Let me attempt to explain as precisely as possible.\n\nMy current worksheet is large but I am only particularly concerned with two columns of information (Regions) and (Days). The logic I am attempting is something along the lines of Count If Region = East, or West, and Days is greater than 0, less than 60.\n\nI am open to any and all suggestions on how to tackle this situation. I have been able to achieve similar counts by using pivot tables but the dynamic nature of these two columns presents some difficulties that my �new user� mind has been unable to work through.\n\nLookup With Multiple Criteria...sumproduct\n\nJan 26, 2010\n\nAttached is my sample workbook. There would normally be 600+ employees with multiple rows per employee. I would like Cell O3 in the Premium Calculation Worksheet to look at the Premium Contribution Report, and if Row A contains the employee number (A3) AND row C contains \"H&D\" I would like it to sum row E.\n\nI included the sumproduct formula I tried to put together but I'm getting an error, so I'm not sure what I've done wrong. The reason I have it referencing \"O2\" instead of just inputting \"H&D\" is that O2 could be any number of plans - I have multiple rows with different plans and I need it to pull in all the data.\n\nSumproduct With Multiple Criteria In Same Column?\n\nSep 24, 2012\n\nI have two sets of criteria I want to incorporate into one formula. In the first column, if the criteria is matched, it will check the criteria in the next column. The criteria in the second column is something that resembles the 'or' function. So if criteria equals x,y, or z, sum the results from the data range c3:c98. I tried writing it like this.\n\n=sumproduct(--(a3:a98=a),--(b3:b98=x(or(b3:b98=y,b3:b98=z))),c3:c98)\n\nExcel 2010 :: Sumproduct With Multiple Criteria?\n\nAug 20, 2014\n\nI am using excel 2010.\n\nI have a spreadsheet with the following:\n\nColumn E is a product. 
If that product is ordered, any character is entered in that cell\nColumn F has a due date\nColumn I has the received date\n\nWhat I want is to count the number of cells that have any character in column E AND the received date is later than the due date\n\nThese two formulas are working fine alone but I cannot get them to work together.\n\n=SUMPRODUCT(--(F:F<I:I))\n=SUMPRODUCT(--ISTEXT(E2:E1000))\n\nI have tried all kinds of tweeks to the following to no avail:\n\n=SUMPRODUCT(--(F:F<I:I),--(ISTEXT(E2:E1000)))\n\nSumproduct For Multiple Criteria Across Columns And Rows\n\nJan 10, 2014\n\nI've not used SUMPRODUCT previously and can't understand how to get results for the attached.\n\nI've tried SUMIFS but it doesn't work because I'm looking down columns and across rows, I'm assuming.\n\nI've attached a summary of what I'm trying to achieve. I want to sum all costs with an R,P,I,G, etc. in column C for December '13 (E3) in the top table.\n\nThe second table is actually in a different sheet but is the source of the data I need added.\n\nSumproduct P&L.xlsx‎\n\nSumproduct With Multiple Criteria Using Non Numerical Values?\n\nFeb 5, 2014\n\nI am attempting to count from a spreadsheet the reference number of a customer (numbers and text) based on two criteria.\n\n1, If column G= Requested\n2, Column I = Meeting\n\nCount Row E\n\nI thought a sumproduct was best and have started using it for the first time, I thought this should work but I keep getting a #NUM! error.\n\nI have tried with numbers and it works but the non numeric aspect is difficult.\n\n[Code].....\n\nSumproduct :: Sum Data Based On Multiple Criteria..\n\nNov 8, 2007\n\nI am trying to sum data based on multiple criteria..\n\nThe english version of the formula is Sum all refunds for Store during week\n\nOriginal Data Format: ....\n\nSUMIF Or SUMPRODUCT For Multiple Criteria And A Negative?\n\nNov 12, 2011\n\nI have an array that contains order numbers, tracking numbers and shipment costs. I want to get the total value of the shipment cost per order. the problem is, there are some duplicate shipments (ie same tracking number) and I don't want to include those. I can't delete the duplicate entries from the database for reasons I won't go into here.\n\nso I tired to use a formula like =SUMIFS(C:C,A:A,A1,B:B,B1)\nA B C\n11462046 CJ225083125US 10.51\n11462051 CJ225082247US 17.04\n11462046 CJ225083125US 10.51\n11462046 CJ225083564US 22.40\n\nthe formula doesn't work (won't even let me enter it) but if it did, it should give a result of 32.91. it would add C1 and C4 (but not C3 because even though A3=A1, B# also equals B1 and that is what I don't want to add.\n\nI think maybe a sumproduct formula is what is needed but the negative criteria is throwing me for a loop.\n\nSumproduct With Date Range And Multiple Criteria?\n\nAug 19, 2012\n\nThe part in green will count the number of entries for the name Johnson & Freedman LLC perfectly fine. However when i add the last part in red i receive a #Value! error.\n\nCol. W is formatted as General and has a data validation for the user only to choose Pass or Fail.\n\nNot sure why it isn't working.\n\nCode:\n=SUMPRODUCT(--( 'SCRA'!B26:B29>=Sheet3!C2),--('SCRA'!B26:B29\n\nSumproduct Multiple Criteria Ignore Error\n\nJun 6, 2014\n\nI have the following two columns in A1:B4 (customer # followed by percentage)\n\n1 0.5\n2 0.9\n3 0.8\n4 #DIV/0!\n\nIn column D i have a list of the customer #s. 
In column E i try to identify if the customer in column D have a percentage >=.8.\n\nI am using the below formula, but getting a #DIV/0! error due to the error in cell B4, which i am not allowed to change using an iferror formula.\n\n=SUMPRODUCT(--(A1:A4=D2),--(B1:B4>=0.8))\n\nIs there a way to get around this using sumproduct or any other method to determine if the customer in D has a percentage >= 80%?\n\nSumproduct Formula Based On Criteria\n\nDec 26, 2013\n\nI have a sumproduct formula based on some criteria, but I don't know how add another criteria wherein I need to exclude in the count if the date in column F is 1/1/2009\n\nAttached excel file for reference. LE26dec.xlsx\n\n2003: COUNTIF/SUMPRODUCT, Multiple Criteria W/Wildcard\n\nNov 24, 2008\n\nI'm trying to write this but it returns a 0 when I know there are 3 records that match this criteria: =SUMPRODUCT(('Invoice-Detail'!J2:J50=\"NewJob_Post.NET\")*('Invoice-Detail'!H2:H50=\"KY_*\")). I think the problem is in the wildcard character. I don't know if I should be using COUNTIF or SUMPRODUCT or something else?\n\nSUMPRODUCT With Multiple Criteria: Count The Number Of Documents\n\nNov 3, 2009\n\nI have attached a spreadsheet with a small indicative data set to assist in understanding. I am trying to count the number of documents each individual has assigned to them that are not yet 'completed' (ie REGISTERED, IN WORK, REVIEWED). The problem I am trying to overcome is that the document state can be 1 of several values indicated in the same column.\n\nI have tried using this SUMPRODUCT formula:\n=SUMPRODUCT(((\\$E\\$2:\\$E\\$11=\"REGISTERED\")+(\\$E\\$2:\\$E\\$11=\"IN WORK\")+(\\$E\\$2:\\$E\\$11=\"REVIEWED\")*(\\$B\\$2:\\$B\\$11=\"Jones\")))\nbut it is generating incorrect values!\n\nSpecifically:\n- Jones shoulld return 1\n- Franks should return 3\n- Smith shoudl return 0\n\nSumproduct Formula Needs To Halve A Value Depending On Criteria\n\nAug 10, 2009\n\nThe current spreadsheets add up each persons totals by matching the name in each tab with the name of the person who won the job located in service orders tab. BUT.....If two salesman pair up on up on a job then the formula doesn't recognize the joint name. eg Scott/Ash in row 21 & 22 (Service orders). I need the totals to half the job and add it to the salesmans total accordingly.\n\nMacro To Compare 2 Lists Based On Multiple Criteria Using SUMPRODUCT?\n\nFeb 27, 2014\n\nI want to compare 2 lists in separate sheets based on multiple criteria and delete the duplicates\n\nSheet 1 - new list in column A:E\nSheet 2 - old list in columns B:F\n\nSo here is what I need: the macro should generate single IDs made of Sheet 1 Ai,Bi,Ci,Di,Ei cells for each row i to the end of the list + generate single IDs made of Sheet 2 Bi,Ci,Di,Fi\n\nIf . Evaluate (Sumproduct (IDs made of Ai,Bi,Ci,Di,Ei from sheet1) & Sumproduct IDs made of Bi,Ci,Di,Ei,Fi from sheet2) >1 then delete the entire row in Sheet 2.\n\nThis will leave me with only updated items (rows) in Sheet 2\n\nExcel 2010 :: Sumproduct With Multiple Criteria And Ignoring Text Values\n\nJun 19, 2014\n\nUsing Excel 2010, I am trying to do a Sumproduct formula with two criteria, one of which needs to ignore text values.\n\nHere is the set up:\n\nColumn AColumn BColumn C\n(Side)(Qty)(Price)\nSell119,428null\nSell20,05412.25\n...\n\nI'm trying to find the sumproduct of Qty * Price if the side equals \"Buy\" (or \"Sell\") but ignoring the \"null\" value in column C. 
The formula I have is =SUMPRODUCT(--(\\$A\\$2:\\$A\\$20=\"Buy\")*IF(ISNUMBER(\\$C\\$2:\\$C\\$20),--(\\$B\\$2:\\$B\\$20*\\$C\\$2:\\$C\\$20)))\n\nThe result in the cell is 0, but if I open the Insert Function dialog box, I see the correct value being returned.\n\nMultiple Criteria And SUMPRODUCT (count The Number Of Rows That Have Values Greater Than 10/01/2008 In Either Of Two Fields)\n\nJan 23, 2009\n\nI am trying to count the number of rows that have values greater than 10/01/2008 in either of two fields. I tried following formula but instead of giving total number of rows, it returns a random date.\n\nSUMPRODUCT Formula - Multiple Conditions?\n\nDec 6, 2009\n\nCan a sumproduct formula accomodate multiple criteria?\n\nThe following is a sumproduct formula, for just one condition.\n\nSUMPRODUCT(--(A1:A100=\"Red Sox\"),--(B1:B100\"\"))\n\nSumming Multiple Cells Populated By Sumproduct Formula\n\nNov 9, 2012\n\nI have this formula populating a huge table of data for number of inspections performed, the first reference is a name of an individual, the second reference is a name of the company, and the third reference is the week ending date.\n\n=SUMPRODUCT(((Sheet1!\\$C\\$3:\\$C\\$1000=\\$A2)*(Sheet1!\\$D\\$3:\\$D\\$1000=D\\$1)*(Sheet1!\\$B\\$3:\\$B\\$1000=\\$A\\$1)))\n\nthere are 5 of these sheets for 5 different categories. I can get these spreadsheets to populate but i then need to be able to sum from each spreadsheet all of the times an individual inspected a certain company, so one cell in each of the 5 tables.\n\nEach time I do this it returns a 0. If i sum from one table it will return a number but if I sum from multiple tables I get 0\n\n=SUMPRODUCT(((Sheet1!\\$C\\$3:\\$C\\$1000=\\$A2)*(Sheet1!\\$D\\$3:\\$D\\$1000=D\\$1)*(Sheet1!\\$B\\$3:\\$B\\$1000=\\$A\\$1)))\n\nExcel 2003 :: Formula For Counting Values Across A Range Using Multiple Criteria Across Multiple Sheets\n\nFeb 9, 2014\n\nI have saved this on a 2010 workbook as I am at home but this will be used on a 2003 workbook.\n\nI have several projects on one spreadsheet which multiple users will be working and I am trying to create a summary sheet of the work carried out.\n\nEach user is expected to carry out a task on each row of the data held in each worksheet (research, call, update etc) and each task (Option 1-5) is assigned a value. Each user is expected to meet a certain level of points per day to calculate productivity.\n\nI am looking for a sumproduct along the lines of the summary sheet attached but mine just takes one sheet into consideration and I need one for all sheets.\n\nMultiple IF And AND Criteria Within Formula\n\nJan 16, 2014\n\nI need to create a formula that takes into consideration 9 possible scenarios using IF and AND. I have 3 performance measures (exceeds requirements, meets requirements, improvement needed) and 3 potential measures (low, medium, and high). I have a spreadsheet where each individual is rated for both. Each combination correlates to a numeric rating on a 9 box grid. I need to include every option in my formula so the correct rating is determined for each individual. 
I've tried several versions on my own, and can't get past one set of conditions.

Performance          | Potential | Rating
Exceeds Expectations | Low       | 4
Exceeds Expectations | Medium    | 7
Exceeds Expectations | High      | 9
Meets Expectations   | Low       | 2
Meets Expectations   | Medium    | 5
Meets Expectations   | High      | 8
Improvement Needed   | Low       | 1
Improvement Needed   | Medium    | 3
Improvement Needed   | High      | 6
Unacceptable         | Low       | 1
Unacceptable         | Medium    | 3
Unacceptable         | High      | 6

Sum Formula With Multiple Criteria

Aug 17, 2007

I'm looking for a formula which will filter the area expiring in any given year per property. An example would be (from the attachment) - what percentage of area of Property 2 is expiring in 2010? Answer is 50.94%.

Vlookup Formula With Multiple Criteria

May 12, 2014

I am trying to modify an existing nested vlookup formula to include one more condition. I attached the excel data file. There are two tabs:

Tab #1 - Performance
Column B (Email Send Date): can be a repetitive date, something like
Row 2. 4/25/2014
Row 3. 4/25/2014
Row 4. 4/25/2014
Row 5. 4/25/2014
Row 6. 5/2/2014
Row 7. 5/2/2014
Row 8. 5/2/2014
Row 9. 5/9/2014
Row 10. 5/9/2014
Row 11. 5/9/2014

Column F (Product ID): can be same product for different Email Send Date. For instance, Row 2 & Row 9 have the same product ID - 128 and Row 5 & Row 10 have same product ID - 131.

Row 2. 128
Row 3. 129
Row 4. 130
Row 5. 131
Row 6. 567
Row 7. 897
Row 8. 987
Row 9. 128
Row 10. 131
Row 11. 234

Column R: Units Sold - need to retrieve the units sold value from Column D - Units Sold in UnitsSoldOnlineVlookup tab.

The formula needs to lookup the Units Sold from a table in a different tab, named UnitsSoldOnlineVlookup. This table contains the following columns:

Tab #2 - UnitsSoldOnlineVlookup
Column A - Email Send Date
Column B - Product Description
Column C - Product ID
Column D - Units Sold

Before, Product IDs were different for each Email Send Date and I successfully used this formula:

=IF(ISERROR(VLOOKUP(F2,UnitsSoldOnlineVlookup!\$C\$2:\$D\$31000,2,FALSE)),0,
VLOOKUP(F2,UnitsSoldOnlineVlookup!\$C\$2:\$D\$31000,2,FALSE))

Now I need to embed one more condition to this formula - lookup Units Sold for the Product ID as well as the email date:

lookup Units Sold for a Product ID for a corresponding Email Send Date in UnitsSoldOnlineVlookup table and return Units Sold into the corresponding cell in the Performance tab.

I thought to use the MATCH function in addition to the IF and ISERROR functions but it doesn't work - I know it is wrong.

=IF(MATCH(B2,UnitsSoldOnlineVlookup!\$A\$2:\$D\$31000,0),
ISERROR(VLOOKUP(F2,UnitsSoldOnlineVlookup!\$C\$2:\$D\$31000,2,FALSE)),0,
VLOOKUP(F2,UnitsSoldOnlineVlookup!\$C\$2:\$D\$31000,2,FALSE))" ]
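As a cross-check of the logic in the very first question on this page (count rows where column H holds a number greater than zero and column I holds non-blank text), here is a short pandas sketch with made-up data; it is not an Excel answer, just a quick way to verify what the count should be. On the Excel side, one common pattern for the same intent is to add an explicit numeric test, e.g. =SUMPRODUCT(--ISNUMBER(H2:H110),--(H2:H110>0),--(I2:I110<>"")), though I have not tested that against the poster's sheet.

```python
# Count rows where H is a number > 0 AND I is non-blank text (sample data made up).
import pandas as pd

df = pd.DataFrame({
    "H": ["text", -2, 5, 7, 0.5],        # mixed text and numbers, as described
    "I": ["ok",  "ok", "", "ok", "done"],
})

h_num = pd.to_numeric(df["H"], errors="coerce")      # text entries become NaN
i_has_text = df["I"].fillna("").str.strip() != ""    # blanks count as False
count = int((h_num.gt(0) & i_has_text).sum())
print(count)   # 2  -> the rows with H = 7 and H = 0.5
```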
http://sciencesoft.at/equation/index?lang=en&ref=flutterdev.at
[ "This demo deals with the determination of the coefficients of a chemical equation. These equations describe the interaction between chemical compounds/elements qualitatively, as well as quantitatively. Reactants are written on the left hand side, and products on the right hand side.\n\nExamples\n\n2 Na + Cl2 = 2 NaCl\nC2H5OH + 3 O2 = 2 CO2 + 3 H2O\n\nThe number and type of atoms have to be identical on both sides of the equation. The so-called stoichiometric coefficients, which are highlighted in blue, in the above examples, ensure that the same number of atoms of an element is present on both sides of the equation. The determination of these coefficients is also called balancing of a chemical equation. Such equations are mainly used in inorganic chemistry, whereas organic chemistry resorts to structural formulas, when defining an equation. In the case of simple equations, the coefficients can be determined by trial and error, but when more complex equations are involved, this approach often turns out to be impractical.\n\nExample for a complex equation\n\n3 As2S3 + 28 HNO3 + 4 H2O = 6 H3AsO4 + 9 H2SO4 + 28 NO\n\n### Solve stoichiometric equation\n\n Input Equation Result\nExamples\n (NH4)MoO4 + Na3PO4 + NH4NO3 = (NH4)3[P(Mo3O10)4] + H2O + NaNO3 + NH3 As2S3 + HNO3+ H2O = H3AsO4+H2SO4+ NO (NH4)2PtCl6 = N2+ NH3 + HCl + Pt\n\n### Handling\n\n1. Input: In this field you can enter a chemical equation. Factors can be entered simply as numbers - e.g. H20. The correct presentation of the input is simultaneously achieved in the Equation field, which displays the factors of the elements as subscripts. The current version does not support ionic equations.\nA valid equation must meet the following criteria:\n1. Have the same elements on both sides of the equation.\n2. Elements are be represented by their international symbols (IUPAC-nomenclature):\nfrom H Hydrogen to U Uranium\n3. Valid operators: + = ( ) [ ] { } Comment: The number of opening and closing parentheses must be identical!\n2. Equation: This field presents the equation without coefficients.\n3. Result: This field presents the equation with the coefficients determined or the corresponding error message.\n\n### Mathematical background\n\nThis servlet applies the Gaussian elimination method to determine the unknown coefficients of a chemical equation.\n\nThe following example may serve as an explanation: The introduction of chlorine into hot potash lye will lead to the following reaction products: potassium chlorate, potassium chloride and water.\n\nx1 KOH + x2 Cl2 = x3 KClO3 + x4 KCl + x5 H2O\n\nThe above leads to the following connection for the number of K,O, H and Cl-atoms:\n\n x1 =  x3 + x4  x1 = 3x3 + x5  x1 = 2x5 2x2 =  x3 + x4", null, "x1 -  x3 - x4 = 0  x1 - 3x3 = x5  x1 = 2x5 2x2 -  x3 - x4 = 0", null, "1 0 -1 -1 0 1 0 -3  0 1 1 0  0  0 2 0 2 -1 -1 0\n\nOne step of the Gaussian elimination method is to transfer this matrix into the so-called reduced row echelon form by means of the following transformations:\n\nThis equation can be transformed as follows:\n1. if you switch two rows of the matrix.\n2. if you multiply one row by a non-zero number.\n3. if you multiply one row by a factor and add the multiple to another row of the matrix.\n\nThe operations i.-iii. 
result in the following reduced row echelon form of the matrix\n\n1 0  0  0  2\n0 2 -1 -1  0\n0 0 -3  0 -1\n0 0  0  3  5\n\nThe last step is to determine the coefficients of the equation by means of the so-called back substitution; here only integral coefficients are relevant. Thus, the solution is as follows:\n\n x1 = 6   x2 = 3   x3 = 1   x4 = 5   x5 = 3 implies 6 KOH + 3 Cl2 = KClO3 + 5 KCl + 3 H2O" ]
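For readers who would rather not run the elimination by hand, the same computation can be reproduced with a few lines of SymPy. This is only a sketch: the element-count matrix for the worked example (KOH + Cl2 = KClO3 + KCl + H2O) is typed in manually rather than parsed from the formula string, and products are entered with negative signs.

```python
# Balance x1 KOH + x2 Cl2 = x3 KClO3 + x4 KCl + x5 H2O via the null space.
from math import lcm
from sympy import Matrix

A = Matrix([
    # KOH  Cl2  KClO3  KCl  H2O
    [  1,   0,   -1,   -1,   0],   # K
    [  1,   0,   -3,    0,  -1],   # O
    [  1,   0,    0,    0,  -2],   # H
    [  0,   2,   -1,   -1,   0],   # Cl
])

null_vector = A.nullspace()[0]                         # one-dimensional here
scale = lcm(*[int(entry.q) for entry in null_vector])  # clear the denominators
coefficients = [int(entry * scale) for entry in null_vector]
print(coefficients)   # [6, 3, 1, 5, 3]  ->  6 KOH + 3 Cl2 = KClO3 + 5 KCl + 3 H2O
```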
[ null, "http://sciencesoft.at/images/arrow.png", null, "http://sciencesoft.at/images/arrow.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88595045,"math_prob":0.9987052,"size":3397,"snap":"2023-40-2023-50","text_gpt3_token_len":897,"char_repetition_ratio":0.1423519,"word_repetition_ratio":0.060606062,"special_character_ratio":0.26229027,"punctuation_ratio":0.08634223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99943256,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T14:34:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7536f910-d70d-4ba4-944f-0a966c738b48>\",\"Content-Length\":\"22261\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:472cb4c4-3657-4de7-9505-259adcd9cd6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:31260732-5e50-48c4-806e-7e1959d9f7a1>\",\"WARC-IP-Address\":\"65.109.96.61\",\"WARC-Target-URI\":\"http://sciencesoft.at/equation/index?lang=en&ref=flutterdev.at\",\"WARC-Payload-Digest\":\"sha1:6QFEJUWKPQKX4B4RHEP7CFSZZOYDGUAP\",\"WARC-Block-Digest\":\"sha1:UK34K424RLF2DI4IVFINDEEPQTDDHPL7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510214.81_warc_CC-MAIN-20230926143354-20230926173354-00647.warc.gz\"}"}
https://www.bartleby.com/solution-answer/chapter-7-problem-21p-structural-analysis-6th-edition/9781337630931/720-and-721-use-the-virtual-work-method-to-determine-the-slope-and-deflection-at-point-b-of-the/41bb60be-839a-11e9-8385-02ee952b546e
[ "", null, "", null, "", null, "# 7.20 and 7.21 Use the virtual work method to determine the slope and deflection at point B of the beam shown. FIG. P7.21, P7.23, P7.58\n\n#### Solutions\n\nChapter\nSection\nChapter 7, Problem 21P\nTextbook Problem\n227 views\n\n## 7.20 and 7.21 Use the virtual work method to determine the slope and deflection at point B of the beam shown.", null, "FIG. P7.21, P7.23, P7.58\n\nTo determine\n\nFind the slope and deflection at point B of the beam using virtual work method.\n\n### Explanation of Solution\n\nGiven information:\n\nThe beam is given in the Figure.\n\nThe value of E is 70 GPa and I is 164(106)mm4.\n\nCalculation:\n\nConsider the real system.\n\nSketch the real system of the beam as shown in Figure 1.\n\nRefer Figure 1.\n\nConsider the section at x distance from the end B.\n\nCalculate the moment M as follows:\n\nM=50\n\nConsider the virtual system.\n\nRemove all the real loads and apply a unit couple at the point on the beam where the slope is desired.\n\nSketch the virtual system of the beam with unit couple at point B as shown in Figure 2.\n\nLet an equation expressing the variation of bending moment due to virtual couple be Mv1.\n\nRefer Figure 2.\n\nMv1=1\n\nFind the slope at B using the virtual work expression:\n\n1(θB)=0LMv1MEIdx (1)\n\nHere, L is the length of the beam, E is the Young’s modulus, and I is the moment of inertia.\n\nSubstitute 1 for Mv1, 4 for L, and 50 for M in Equation (1).\n\nθB=1EI04(1)(50)dx=1EI04(50)dx=1EI[50x]04=200kNm2EI\n\nSubstitute 70 GPa for E and 164(106)mm4 for I.\n\nTherefore, the slope at point B of the beam is 0\n\n### Still sussing out bartleby?\n\nCheck out a sample textbook solution.\n\nSee a sample solution\n\n#### The Solution to Your Study Problems\n\nBartleby provides explanations to thousands of textbook problems written by our experts, many with advanced degrees!\n\nGet Started\n\nFind more solutions based on key concepts\nWhen learning to play some sports, such as tennis, golf, or baseball, often you are told to follow through with...\n\nEngineering Fundamentals: An Introduction to Engineering (MindTap Course List)\n\nDifferentiate among user names, passwords, passphrases, and pass codes.\n\nEnhanced Discovering Computers 2017 (Shelly Cashman Series) (MindTap Course List)\n\nWhat should be done to a magnetic chuck when it becomes unevenly worn?\n\nPrecision Machining Technology (MindTap Course List)\n\nUsing only one equilibrium equation, compute the force in rope AD of Prob. 5.33.\n\nInternational Edition---engineering Mechanics: Statics, 4th Edition\n\nIf your motherboard supports ECC DDR3 memory, can you substitute non-ECC DDR3 memory?\n\nA+ Guide to Hardware (Standalone Book) (MindTap Course List)\n\nWhat changes can be made to successfully make a weld in a poorly fitted joint?\n\nWelding: Principles and Applications (MindTap Course List)", null, "" ]
[ null, "https://www.bartleby.com/static/search-icon-white.svg", null, "https://www.bartleby.com/static/close-grey.svg", null, "https://www.bartleby.com/static/solution-list.svg", null, "https://content.bartleby.com/tbms-images/9781337630931/Chapter-7/images/30931-7-21p-question-digital_image_001.png", null, "https://www.bartleby.com/static/logo-full-footer.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7534719,"math_prob":0.87351245,"size":2442,"snap":"2020-10-2020-16","text_gpt3_token_len":547,"char_repetition_ratio":0.17719442,"word_repetition_ratio":0.10655738,"special_character_ratio":0.1969697,"punctuation_ratio":0.09638554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9893912,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-08T13:34:46Z\",\"WARC-Record-ID\":\"<urn:uuid:1712dbf1-70d4-4611-ade5-4c9c3cf2f2bd>\",\"Content-Length\":\"415003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:028aba2a-a5ba-47df-a03e-53e506b93fbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:bff67a90-41d3-42d2-b31d-bc0ac24a3592>\",\"WARC-IP-Address\":\"99.84.191.30\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-7-problem-21p-structural-analysis-6th-edition/9781337630931/720-and-721-use-the-virtual-work-method-to-determine-the-slope-and-deflection-at-point-b-of-the/41bb60be-839a-11e9-8385-02ee952b546e\",\"WARC-Payload-Digest\":\"sha1:IVDNRKL4N4SJQKBDAJS2OQRP5WAC5PJD\",\"WARC-Block-Digest\":\"sha1:KHHKBTX7OHNRVRK4YVJ6KP3J2DKWTY2A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371813538.73_warc_CC-MAIN-20200408104113-20200408134613-00047.warc.gz\"}"}
https://meangreenmath.com/2017/03/16/my-favorite-one-liners-part-44/
[ "# My Favorite One-Liners: Part 44\n\nIn this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.\n\nToday’s quip is something that I’ll use to emphasize that the meaning of the word “or” is a little different in mathematics than in ordinary speech. For example, in mathematics, we could solve a quadratic equation for", null, "$x$:", null, "$x^2 + 2x - 8 = 0$", null, "$(x+4)(x-2) = 0$", null, "$x + 4 = 0 \\qquad \\hbox{OR} \\qquad x - 2 = 0$", null, "$x = -4 \\qquad \\hbox{OR} \\qquad x = 2$\n\nIn this example, the word “or” means “one or the other or maybe both.” It could be that both statements are true, as in the next example:", null, "$x^2 + 2x +1 = 0$", null, "$(x+1)(x+1) = 0$", null, "$x + 1 = 0 \\qquad \\hbox{OR} \\qquad x + 1= 0$", null, "$x = -1 \\qquad \\hbox{OR} \\qquad x = -1$\n\nHowever, in plain speech, the word “or” typically means “one or the other, but not both.” Here the quip I’ll use to illustrate this:\n\nAt the end of “The Bachelor,” the guy has to choose one girl or the other. He can’t choose both." ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9349486,"math_prob":0.9998103,"size":735,"snap":"2019-43-2019-47","text_gpt3_token_len":172,"char_repetition_ratio":0.11764706,"word_repetition_ratio":0.0,"special_character_ratio":0.23265307,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99876094,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-16T00:00:14Z\",\"WARC-Record-ID\":\"<urn:uuid:aec5cbda-d7be-4e6b-a109-e717ad78c7b4>\",\"Content-Length\":\"82620\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f9f295b-b219-4064-812b-bd2afdf4d890>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6e7b83e-66d3-42cc-a333-36747485604c>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://meangreenmath.com/2017/03/16/my-favorite-one-liners-part-44/\",\"WARC-Payload-Digest\":\"sha1:2KRWUN2JVIDCH4SORAHPPZUHQXY3GLDK\",\"WARC-Block-Digest\":\"sha1:B565LPJUIX4T4BX4FUOE62PDFRZGB43X\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668716.22_warc_CC-MAIN-20191115222436-20191116010436-00355.warc.gz\"}"}
https://www.feynmanlectures.caltech.edu/II_11.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "◄ ▲ ► A A A", null, "", null, "", null, "SUMMARY RECORDING\nMATHJAX\n\nhttps://www.feynmanlectures.caltech.edu/I_01.html\n\nIf it does not open, or only shows you this message again, then please let us know:\n\n• which browser you are using (including version #)\n• which operating system you are using (including version #)\n\nThis type of problem is rare, and there's a good chance it can be fixed if we have some clues about the cause. So, if you can, after enabling javascript, clearing the cache and disabling extensions, please open your browser's javascript console, load the page above, and if this generates any messages (particularly errors or warnings) on the console, then please make a copy (text or screenshot) of those messages and send them with the above-listed information to the email address given below.\n\nBy sending us information you will be helping not only yourself, but others who may be having similar problems accessing the online edition of The Feynman Lectures on Physics. Your time and consideration are greatly appreciated.\n\nBest regards,\nMike Gottlieb\[email protected]\nEditor, The Feynman Lectures on Physics New Millennium Edition\n\n## 11", null, "Inside Dielectrics", null, "Review: Chapter 31, Vol. I, The Origin of the Refractive Index Chapter 40, Vol. I, The Principles of Statistical Mechanics\n\n### 11–1Molecular dipoles\n\nIn this chapter we are going to discuss why it is that materials are dielectric. We said in the last chapter that we could understand the properties of electrical systems with dielectrics once we appreciated that when an electric field is applied to a dielectric it induces a dipole moment in the atoms. Specifically, if the electric field $E$ induces an average dipole moment per unit volume $P$, then $\\kappa$, the dielectric constant, is given by \\begin{equation} \\label{Eq:II:11:1} \\kappa-1=\\frac{P}{\\epsO E}. \\end{equation}\n\nWe have already discussed how this equation is applied; now we have to discuss the mechanism by which polarization arises when there is an electric field inside a material. We begin with the simplest possible example—the polarization of gases. But even gases already have complications: there are two types. The molecules of some gases, like oxygen, which has a symmetric pair of atoms in each molecule, have no inherent dipole moment. But the molecules of others, like water vapor (which has a nonsymmetric arrangement of hydrogen and oxygen atoms) carry a permanent electric dipole moment. As we pointed out in Chapter 6, there is in the water vapor molecule an average plus charge on the hydrogen atoms and a negative charge on the oxygen. Since the center of gravity of the negative charge and the center of gravity of the positive charge do not coincide, the total charge distribution of the molecule has a dipole moment. Such a molecule is called a polar molecule. In oxygen, because of the symmetry of the molecule, the centers of gravity of the positive and negative charges are the same, so it is a nonpolar molecule. It does, however, become a dipole when placed in an electric field. The forms of the two types of molecules are sketched in Fig. 11–1.\n\nFig. 11–1.(a) An oxygen molecule with zero dipole moment. (b) The water molecule has a permanent dipole moment $\\Figp_0$.\n\n### 11–2Electronic polarization\n\nWe will first discuss the polarization of nonpolar molecules. We can start with the simplest case of a monatomic gas (for instance, helium). 
When an atom of such a gas is in an electric field, the electrons are pulled one way by the field while the nucleus is pulled the other way, as shown in Fig. 10–4. Although the atoms are very stiff with respect to the electrical forces we can apply experimentally, there is a slight net displacement of the centers of charge, and a dipole moment is induced. For small fields, the amount of displacement, and so also the dipole moment, is proportional to the electric field. The displacement of the electron distribution which produces this kind of induced dipole moment is called electronic polarization.\n\nWe have already discussed the influence of an electric field on an atom in Chapter 31 of Vol. I, when we were dealing with the theory of the index of refraction. If you think about it for a moment, you will see that what we must do now is exactly the same as we did then. But now we need worry only about fields that do not vary with time, while the index of refraction depended on time-varying fields.\n\nIn Chapter 31 of Vol. I we supposed that when an atom is placed in an oscillating electric field the center of charge of the electrons obeys the equation \\begin{equation} \\label{Eq:II:11:2} m\\,\\frac{d^2x}{dt^2}+m\\omega_0^2x=q_eE. \\end{equation} The first term is the electron mass times its acceleration and the second is a restoring force, while the right-hand side is the force from the outside electric field. If the electric field varies with the frequency $\\omega$, Eq. (11.2) has the solution \\begin{equation} \\label{Eq:II:11:3} x=\\frac{q_eE}{m(\\omega_0^2-\\omega^2)}, \\end{equation} which has a resonance at $\\omega=\\omega_0$. When we previously found this solution, we interpreted it as saying that $\\omega_0$ was the frequency at which light (in the optical region or in the ultraviolet, depending on the atom) was absorbed. For our purposes, however, we are interested only in the case of constant fields, i.e., for $\\omega=0$, so we can disregard the acceleration term in (11.2), and we find that the displacement is \\begin{equation} \\label{Eq:II:11:4} x=\\frac{q_eE}{m\\omega_0^2}. \\end{equation}\n\nFrom this we see that the dipole moment $p$ of a single atom is \\begin{equation} \\label{Eq:II:11:5} p=q_ex=\\frac{q_e^2E}{m\\omega_0^2}. \\end{equation} In this theory the dipole moment $p$ is indeed proportional to the electric field.\n\nPeople usually write \\begin{equation} \\label{Eq:II:11:6} \\FLPp=\\alpha\\epsO\\FLPE. \\end{equation} (Again the $\\epsO$ is put in for historical reasons.) The constant $\\alpha$ is called the polarizability of the atom, and has the dimensions $L^3$. It is a measure of how easy it is to induce a moment in an atom with an electric field. Comparing (11.5) and (11.6), our simple theory says that \\begin{equation} \\label{Eq:II:11:7} \\alpha=\\frac{q_e^2}{\\epsO m\\omega_0^2}=\\frac{4\\pi e^2}{m\\omega_0^2}. \\end{equation}\n\nIf there are $N$ atoms in a unit volume, the polarization $P$—the dipole moment per unit volume—is given by \\begin{equation} \\label{Eq:II:11:8} \\FLPP=N\\FLPp=N\\alpha\\epsO\\FLPE. \\end{equation}\n\nPutting (11.1) and (11.8) together, we get \\begin{equation} \\label{Eq:II:11:9} \\kappa-1=\\frac{P}{\\epsO E}=N\\alpha \\end{equation} or, using (11.7), \\begin{equation} \\label{Eq:II:11:10} \\kappa-1=\\frac{4\\pi Ne^2}{m\\omega_0^2}. \\end{equation}\n\nFrom Eq. 
(11.10) we would predict that the dielectric constant $\\kappa$ of different gases should depend on the density of the gas and on the frequency $\\omega_0$ of its optical absorption.

Our formula is, of course, only a very rough approximation, because in Eq. (11.2) we have taken a model which ignores the complications of quantum mechanics. For example, we have assumed that an atom has only one resonant frequency, when it really has many. To calculate properly the polarizability $\\alpha$ of atoms we must use the complete quantum-mechanical theory, but the classical ideas above give us a reasonable estimate.

Let's see if we can get the right order of magnitude for the dielectric constant of some substance. Suppose we try hydrogen. We have once estimated (Chapter 38, Vol. I) that the energy needed to ionize the hydrogen atom should be approximately \\begin{equation} \\label{Eq:II:11:11} E\\approx\\frac{1}{2}\\,\\frac{me^4}{\\hbar^2}. \\end{equation} For an estimate of the natural frequency $\\omega_0$, we can set this energy equal to $\\hbar\\omega_0$—the energy of an atomic oscillator whose natural frequency is $\\omega_0$. We get \\begin{equation*} \\omega_0\\approx\\frac{1}{2}\\,\\frac{me^4}{\\hbar^3}. \\end{equation*} If we now use this value of $\\omega_0$ in Eq. (11.7), we find for the electronic polarizability \\begin{equation} \\label{Eq:II:11:12} \\alpha\\approx16\\pi\\biggl[\\frac{\\hbar^2}{me^2}\\biggr]^3. \\end{equation} The quantity $(\\hbar^2/me^2)$ is the radius of the ground-state orbit of a Bohr atom (see Chapter 38, Vol. I) and equals $0.528$ angstroms. In a gas at standard pressure and temperature ($1$ atmosphere, $0^\\circ$C) there are $2.69\\times10^{19}$ atoms/cm$^3$, so Eq. (11.9) gives us \\begin{equation} \\label{Eq:II:11:13} \\kappa=1+(2.69\\times10^{19})16\\pi(0.528\\times10^{-8})^3=1.00020. \\end{equation}

The dielectric constant for hydrogen gas is measured to be \\begin{equation*} \\kappa_{\\text{exp}}=1.00026. \\end{equation*} We see that our theory is about right. We should not expect any better, because the measurements were, of course, made with normal hydrogen gas, which has diatomic molecules, not single atoms. We should not be surprised if the polarization of the atoms in a molecule is not quite the same as that of the separate atoms. The molecular effect, however, is not really that large. An exact quantum-mechanical calculation of $\\alpha$ for hydrogen atoms gives a result about $12\\%$ higher than (11.12) (the $16\\pi$ is changed to $18\\pi$), and therefore predicts a dielectric constant somewhat closer to the observed one. In any case, it is clear that our model of a dielectric is fairly good.

Another check on our theory is to try Eq. (11.7) on atoms which have a higher frequency of excitation. For instance, it takes about $24.6$ electron volts to pull the electron off helium, compared with the $13.6$ electron volts required to ionize hydrogen. We would, therefore, expect that the absorption frequency $\\omega_0$ for helium would be about twice as big as for hydrogen and that $\\alpha$ would be one-quarter as large. So, from (11.13) we expect that \\begin{equation*} \\kappa_{\\text{helium}}\\approx1.000050. \\end{equation*} Experimentally, \\begin{equation*} \\kappa_{\\text{helium}}=1.000068, \\end{equation*} so you see that our rough estimates are coming out on the right track.
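Purely as a numerical cross-check (not part of the lecture text), the arithmetic behind Eq. (11.13) and the helium estimate can be repeated in a couple of lines:

```python
# Repeat the order-of-magnitude estimate of kappa for hydrogen and helium.
import math

a0 = 0.528e-8       # Bohr radius in cm, as quoted in the text
N  = 2.69e19        # atoms per cm^3 at 1 atmosphere and 0 C
alpha = 16 * math.pi * a0**3     # Eq. (11.12), electronic polarizability in cm^3

print(1 + N * alpha)        # ~1.00020, Eq. (11.13) for hydrogen
print(1 + N * alpha / 4)    # ~1.00005, helium (omega_0 roughly doubled, so alpha/4)
```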
So we have understood the dielectric constant of nonpolar gas, but only qualitatively, because we have not yet used a correct atomic theory of the motions of the atomic electrons.\n\n### 11–3Polar molecules; orientation polarization\n\nFig. 11–2.(a) In a gas of polar molecules, the individual moments are oriented at random; the average moment in a small volume is zero. (b) When there is an electric field, there is some average alignment of the molecules.\n\nNext we will consider a molecule which carries a permanent dipole moment $p_0$—such as a water molecule. With no electric field, the individual dipoles point in random directions, so the net moment per unit volume is zero. But when an electric field is applied, two things happen: First, there is an extra dipole moment induced because of the forces on the electrons; this part gives just the same kind of electronic polarizability we found for a nonpolar molecule. For very accurate work, this effect should, of course, be included, but we will neglect it for the moment. (It can always be added in at the end.) Second, the electric field tends to line up the individual dipoles to produce a net moment per unit volume. If all the dipoles in a gas were to line up, there would be a very large polarization, but that does not happen. At ordinary temperatures and electric fields the collisions of the molecules in their thermal motion keep them from lining up very much. But there is some net alignment, and so some polarization (see Fig. 11–2). The polarization that does occur can be computed by the methods of statistical mechanics we described in Chapter 40 of Vol. I.\n\nTo use this method we need to know the energy of a dipole in an electric field. Consider a dipole of moment $\\FLPp_0$ in an electric field, as shown in Fig. 11–3. The energy of the positive charge is $q\\phi(1)$, and the energy of the negative charge is $-q\\phi(2)$. Thus the energy of the dipole is \\begin{equation} U=q\\phi(1)-q\\phi(2)=q\\FLPd\\cdot\\FLPgrad{\\phi},\\notag \\end{equation} or \\begin{equation} \\label{Eq:II:11:14} U=-\\FLPp_0\\cdot\\FLPE=-p_0E\\cos\\theta, \\end{equation} where $\\theta$ is the angle between $\\FLPp_0$ and $\\FLPE$. As we would expect, the energy is lower when the dipoles are lined up with the field.\n\nWe now find out how much lining up occurs by using the methods of statistical mechanics. We found in Chapter 40 of Vol. I that in a state of thermal equilibrium, the relative number of molecules with the potential energy $U$ is proportional to \\begin{equation} \\label{Eq:II:11:15} e^{-U/kT}, \\end{equation} where $U(x,y,z)$ is the potential energy as a function of position. The same arguments would say that using Eq. (11.14) for the potential energy as a function of angle, the number of molecules at $\\theta$ per unit solid angle is proportional to $e^{-U/kT}$.\n\nLetting $n(\\theta)$ be the number of molecules per unit solid angle at $\\theta$, we have \\begin{equation} \\label{Eq:II:11:16} n(\\theta)=n_0e^{+p_0E\\cos\\theta/kT}. \\end{equation} For normal temperatures and fields, the exponent is small, so we can approximate by expanding the exponential: \\begin{equation} \\label{Eq:II:11:17} n(\\theta)=n_0\\biggl(1+\\frac{p_0E\\cos\\theta}{kT}\\biggr). \\end{equation}\n\nWe can find $n_0$ if we integrate (11.17) over all angles; the result should be just $N$, the total number of molecules per unit volume. The average value of $\\cos\\theta$ over all angles is zero, so the integral is just $n_0$ times the total solid angle $4\\pi$. 
We get \\begin{equation} \\label{Eq:II:11:18} n_0=\\frac{N}{4\\pi}. \\end{equation}\n\nWe see from (11.17) that there will be more molecules oriented along the field ($\\cos\\theta=1$) than against the field ($\\cos\\theta=-1$). So in any small volume containing many molecules there will be a net dipole moment per unit volume—that is, a polarization $P$. To calculate $P$, we want the vector sum of all the molecular moments in a unit volume. Since we know that the result is going to be in the direction of $\\FLPE$, we will just sum the components in that direction (the components at right angles to $\\FLPE$ will sum to zero): \\begin{equation*} P=\\underset{\\substack{\\text{unit}\\\\\\text{volume}}}{\\sum} p_0\\cos\\theta_i. \\end{equation*}\n\nWe can evaluate the sum by integrating over the angular distribution. The solid angle at $\\theta$ is $2\\pi\\sin\\theta\\,d\\theta$, so \\begin{equation} \\label{Eq:II:11:19} P=\\int_0^\\pi n(\\theta)p_0\\cos\\theta\\,2\\pi\\sin\\theta\\,d\\theta. \\end{equation} Substituting for $n(\\theta)$ from (11.17), we have \\begin{equation*} P=-\\frac{N}{2}\\int_1^{-1} \\biggl(1+\\frac{p_0E}{kT}\\cos\\theta\\biggr) p_0\\cos\\theta\\,d(\\cos\\theta), \\end{equation*} which is easily integrated to give \\begin{equation} \\label{Eq:II:11:20} P=\\frac{Np_0^2E}{3kT}. \\end{equation} The polarization is proportional to the field $E$, so there will be normal dielectric behavior. Also, as we expect, the polarization depends inversely on the temperature, because at higher temperatures there is more disalignment by collisions. This $1/T$ dependence is called Curie’s law. The permanent moment $p_0$ appears squared for the following reason: In a given electric field, the aligning force depends upon $p_0$, and the mean moment that is produced by the lining up is again proportional to $p_0$. The average induced moment is proportional to $p_0^2$.\n\nWe should now try to see how well Eq. (11.20) agrees with experiment. Let’s look at the case of steam. Since we don’t know what $p_0$ is, we cannot compute $P$ directly, but Eq. (11.20) does predict that $\\kappa-1$ should vary inversely as the temperature, and this we should check.\n\nFrom (11.20) we get \\begin{equation} \\label{Eq:II:11:21} \\kappa-1=\\frac{P}{\\epsO E}=\\frac{Np_0^2}{3\\epsO kT}, \\end{equation} so $\\kappa-1$ should vary in direct proportion to the density $N$, and inversely as the absolute temperature. The dielectric constant has been measured at several different pressures and temperatures, chosen such that the number of molecules in a unit volume remained fixed.1 [Notice that if the measurements had all been taken at constant pressure, the number of molecules per unit volume would decrease linearly with increasing temperature and $\\kappa-1$ would vary as $T^{-2}$ instead of as $T^{-1}$.] In Fig. 11–4 we plot the experimental observations for $\\kappa-1$ as a function of $1/T$. The dependence predicted by (11.21) is followed quite well.\n\nThere is another characteristic of the dielectric constant of polar molecules—its variation with the frequency of the applied field. Due to the moment of inertia of the molecules, it takes a certain amount of time for the heavy molecules to turn toward the direction of the field. So if we apply frequencies in the high microwave region or above, the polar contribution to the dielectric constant begins to fall away because the molecules cannot follow. 
In contrast to this, the electronic polarizability still remains the same up to optical frequencies, because of the smaller inertia in the electrons.\n\n### 11–4Electric fields in cavities of a dielectric\n\nWe now turn to an interesting but complicated question—the problem of the dielectric constant in dense materials. Suppose that we take liquid helium or liquid argon or some other nonpolar material. We still expect electronic polarization. But in a dense material, $\\FLPP$ can be large, so the field on an individual atom will be influenced by the polarization of the atoms in its close neighborhood. The question is, what electric field acts on the individual atom?\n\nImagine that the liquid is put between the plates of a condenser. If the plates are charged they will produce an electric field in the liquid. But there are also charges in the individual atoms, and the total field $\\FLPE$ is the sum of both of these effects. This true electric field varies very, very rapidly from point to point in the liquid. It is very high inside the atoms—particularly right next to the nucleus—and relatively small between the atoms. The potential difference between the plates is the line integral of this total field. If we ignore all the fine-grained variations, we can think of an average electric field $E$, which is just $V/d$. (This is the field we were using in the last chapter.) We should think of this field as the average over a space containing many atoms.\n\nNow you might think that an “average” atom in an “average” location would feel this average field. But it is not that simple, as we can show by considering what happens if we imagine different-shaped holes in a dielectric. For instance, suppose that we cut a slot in a polarized dielectric, with the slot oriented parallel to the field, as shown in part (a) of Fig. 11–5. Since we know that $\\FLPcurl{\\FLPE}=\\FLPzero$, the line integral of $\\FLPE$ around the curve, $\\Gamma$, which goes as shown in (b) of the figure, should be zero. The field inside the slot must give a contribution which just cancels the part from the field outside. Therefore the field $E_0$ actually found in the center of a long thin slot is equal to $E$, the average electric field found in the dielectric.\n\nNow consider another slot whose large sides are perpendicular to $E$, as shown in part (c) of Fig. 11–5. In this case, the field $E_0$ in the slot is not the same as $E$ because polarization charges appear on the surfaces. If we apply Gauss’ law to a surface $S$ drawn as in (d) of the figure, we find that the field $E_0$ in the slot is given by \\begin{equation} \\label{Eq:II:11:22} E_0=E+\\frac{P}{\\epsO}, \\end{equation} where $E$ is again the electric field in the dielectric. (The Gaussian surface contains the surface polarization charge $\\sigma_{\\text{pol}}=P$.) We mentioned in Chapter 10 that $\\epsO E+P$ is often called $D$, so $\\epsO E_0=D_0$ is equal to $D$ in the dielectric.\n\nEarlier in the history of physics, when it was supposed to be very important to define every quantity by direct experiment, people were delighted to discover that they could define what they meant by $E$ and $D$ in a dielectric without having to crawl around between the atoms. The average field $\\FLPE$ is numerically equal to the field $\\FLPE_0$ that would be measured in a slot cut parallel to the field. And the field $\\FLPD$ could be measured by finding $E_0$ in a slot cut normal to the field. 
But nobody ever measures them that way anyway, so it was just one of those philosophical things.\n\nFor most liquids which are not too complicated in structure, we could expect that an atom finds itself, on the average, surrounded by the other atoms in what would be a good approximation to a spherical hole. And so we should ask: “What would be the field in a spherical hole?” We can find out by noticing that if we imagine carving out a spherical hole in a uniformly polarized material, we are just removing a sphere of polarized material. (We must imagine that the polarization is “frozen in” before we cut out the hole.) By superposition, however, the fields inside the dielectric, before the sphere was removed, is the sum of the fields from all charges outside the spherical volume plus the fields from the charges within the polarized sphere. That is, if we call $E$ the field in the uniform dielectric, we can write \\begin{equation} \\label{Eq:II:11:23} E=E_{\\text{hole}}+E_{\\text{plug}}, \\end{equation} where $E_{\\text{hole}}$ is the field in the hole and $E_{\\text{plug}}$ is the field inside a sphere which is uniformly polarized (see Fig. 11–6). The fields due to a uniformly polarized sphere are shown in Fig. 11–7. The electric field inside the sphere is uniform, and its value is \\begin{equation} \\label{Eq:II:11:24} E_{\\text{plug}}=-\\frac{P}{3\\epsO}. \\end{equation} Using (11.23), we get \\begin{equation} \\label{Eq:II:11:25} E_{\\text{hole}}=E+\\frac{P}{3\\epsO}. \\end{equation} The field in a spherical cavity is greater than the average field by the amount $P/3\\epsO$. (The spherical hole gives a field $1/3$ of the way between a slot parallel to the field and a slot perpendicular to the field.)\n\n### 11–5The dielectric constant of liquids; the Clausius-Mossotti equation\n\nIn a liquid we expect that the field which will polarize an individual atom is more like $E_{\\text{hole}}$ than just $E$. If we use the $E_{\\text{hole}}$ of (11.25) for the polarizing field in Eq. (11.6), then Eq. (11.8) becomes \\begin{equation} \\label{Eq:II:11:26} P=N\\alpha\\epsO\\biggl(E+\\frac{P}{3\\epsO}\\biggr), \\end{equation} or \\begin{equation} \\label{Eq:II:11:27} P=\\frac{N\\alpha}{1-(N\\alpha/3)}\\,\\epsO E. \\end{equation} Remembering that $\\kappa-1$ is just $P/\\epsO E$, we have \\begin{equation} \\label{Eq:II:11:28} \\kappa-1=\\frac{N\\alpha}{1-(N\\alpha/3)}, \\end{equation} which gives us the dielectric constant of a liquid in terms of $\\alpha$, the atomic polarizability. This is called the Clausius-Mossotti equation.\n\nWhenever $N\\alpha$ is very small, as it is for a gas (because the density $N$ is small), then the term $N\\alpha/3$ can be neglected compared with $1$, and we get our old result, Eq. (11.9), that \\begin{equation} \\label{Eq:II:11:29} \\kappa-1=N\\alpha. \\end{equation}\n\nLet’s compare Eq. (11.28) with some experimental results. It is first necessary to look at gases for which, using the measurement of $\\kappa$, we can find $\\alpha$ from Eq. (11.29). For instance, for carbon disulfide at zero degrees centigrade the dielectric constant is $1.0029$, so $N\\alpha$ is $0.0029$. Now the density of the gas is easily worked out and the density of the liquid can be found in handbooks. At $20^\\circ$C, the density of liquid CS$_2$ is $381$ times higher than the density of the gas at $0^\\circ$C. 
This means that $N$ is $381$ times higher in the liquid than it is in the gas, so that—if we make the approximation that the basic atomic polarizability of the carbon disulfide doesn’t change when it is condensed into a liquid—$N\\alpha$ in the liquid is equal to $381$ times $0.0029$, or $1.11$. Notice that the $N\\alpha/3$ term amounts to almost $0.4$, so it is quite significant. With these numbers we predict a dielectric constant of $2.76$, which agrees reasonably well with the observed value of $2.64$.\n\nIn Table 11–1 we give some experimental data on various materials (taken from the Handbook of Chemistry and Physics), together with the dielectric constants calculated from Eq. (11.28) in the way just described. The agreement between observation and theory is even better for argon and oxygen than for CS$_2$—and not so good for carbon tetrachloride. On the whole, the results show that Eq. (11.28) works very well.\n\nTable 11–1Computation of the dielectric constants of liquids from the dielectric constant of the gas.\n Gas Liquid Substance κ(exp) Nα Density Density Ratio1 Nα κ (predict) κ (exp) CS2 $1.0029\\phantom{00}$ $0.0029\\phantom{00}$ $0.00339$ $1.293$ $381$ $1.11\\phantom{0}$ $2.76\\phantom{0}$ $2.64\\phantom{0}$ O2 $1.000523$ $0.000523$ $0.00143$ $1.19\\phantom{0}$ $832$ $0.435$ $1.509$ $1.507$ CCl4 $1.0030\\phantom{00}$ $0.0030\\phantom{00}$ $0.00489$ $1.59\\phantom{0}$ $325$ $0.977$ $2.45\\phantom{0}$ $2.24\\phantom{0}$ Ar $1.000545$ $0.000545$ $0.00178$ $1.44\\phantom{0}$ $810$ $0.441$ $1.517$ $1.54\\phantom{0}$ 1Ratio = density of liquid/density of gas.\n\nOur derivation of Eq. (11.28) is valid only for electronic polarization in liquids. It is not right for a polar molecule like H$_2$O. If we go through the same calculations for water, we get $13.2$ for $N\\alpha$, which means that the dielectric constant for the liquid is negative, while the observed value of $\\kappa$ is $80$. The problem has to do with the correct treatment of the permanent dipoles, and Onsager has pointed out the right way to go. We do not have the time to treat the case now, but if you are interested it is discussed in Kittel’s book, Introduction to Solid State Physics.\n\n### 11–6Solid dielectrics\n\nNow we turn to the solids. The first interesting fact about solids is that there can be a permanent polarization built in—which exists even without applying an electric field. An example occurs with a material like wax, which contains long molecules having a permanent dipole moment. If you melt some wax and put a strong electric field on it when it is a liquid, so that the dipole moments get partly lined up, they will stay that way when the liquid freezes. The solid material will have a permanent polarization which remains when the field is removed. Such a solid is called an electret.\n\nAn electret has permanent polarization charges on its surface. It is the electrical analog of a magnet. It is not as useful, though, because free charges from the air are attracted to its surfaces, eventually cancelling the polarization charges. The electret is “discharged” and there are no visible external fields.\n\nA permanent internal polarization $P$ is also found occurring naturally in some crystalline substances. In such crystals, each unit cell of the lattice has an identical permanent dipole moment, as drawn in Fig. 11–8. All the dipoles point in the same direction, even with no applied electric field. 
Many complicated crystals have, in fact, such a polarization; we do not normally notice it because the external fields are discharged, just as for the electrets.\n\nIf these internal dipole moments of a crystal are changed, however, external fields appear because there is not time for stray charges to gather and cancel the polarization charges. If the dielectric is in a condenser, free charges will be induced on the electrodes. For example, the moments can change when a dielectric is heated, because of thermal expansion. The effect is called pyroelectricity. Similarly, if we change the stresses in a crystal—for instance, if we bend it—again the moment may change a little bit, and a small electrical effect, called piezoelectricity, can be detected.\n\nFor crystals that do not have a permanent moment, one can work out a theory of the dielectric constant that involves the electronic polarizability of the atoms. It goes much the same as for liquids. Some crystals also have rotatable dipoles inside, and the rotation of these dipoles will also contribute to $\\kappa$. In ionic crystals such as NaCl there is also ionic polarizability. The crystal consists of a checkerboard of positive and negative ions, and in an electric field the positive ions are pulled one way and the negatives the other; there is a net relative motion of the plus and minus charges, and so a volume polarization. We could estimate the magnitude of the ionic polarizability from our knowledge of the stiffness of salt crystals, but we will not go into that subject here.\n\n### 11–7Ferroelectricity; BaTiO$_{\\boldsymbol{3}}$\n\nWe want to describe now one special class of crystals which have, just by accident almost, a built-in permanent moment. The situation is so marginal that if we increase the temperature a little bit they lose the permanent moment completely. On the other hand, if they are nearly cubic crystals, so that their moments can be turned in different directions, we can detect a large change in the moment when an applied electric field is changed. All the moments flip over and we get a large effect. Substances which have this kind of permanent moment are called ferroelectric, after the corresponding ferromagnetic effects which were first discovered in iron.\n\nWe would like to explain how ferroelectricity works by describing a particular example of a ferroelectric material. There are several ways in which the ferroelectric property can originate; but we will take up only one mysterious case—that of barium titanate, BaTiO$_3$. This material has a crystal lattice whose basic cell is sketched in Fig. 11–9. It turns out that above a certain temperature, specifically $118^\\circ$C, barium titanate is an ordinary dielectric with an enormous dielectric constant. Below this temperature, however, it suddenly takes on a permanent moment.\n\nIn working out the polarization of solid material, we must first find what are the local fields in each unit cell. We must include the fields from the polarization itself, just as we did for the case of a liquid. But a crystal is not a homogeneous liquid, so we cannot use for the local field what we would get in a spherical hole. If you work it out for a crystal, you find that the factor $1/3$ in Eq. (11.24) becomes slightly different, but not far from $1/3$. (For a simple cubic crystal, it is just $1/3$.) We will, therefore, assume for our preliminary discussion that the factor is $1/3$ for BaTiO$_3$.\n\nNow when we wrote Eq. 
(11.28) you may have wondered what would happen if $N\\alpha$ became greater than $3$. It appears as though $\\kappa$ would become negative. But that surely cannot be right. Let’s see what should happen if we were gradually to increase $\\alpha$ in a particular crystal. As $\\alpha$ gets larger, the polarization gets bigger, making a bigger local field. But a bigger local field will polarize each atom more, raising the local fields still more. If the “give” of the atoms is enough, the process keeps going; there is a kind of feedback that causes the polarization to increase without limit—assuming that the polarization of each atom increases in proportion to the field. The “runaway” condition occurs when $N\\alpha=3$. The polarization does not become infinite, of course, because the proportionality between the induced moment and the electric field breaks down at high fields, so that our formulas are no longer correct. What happens is that the lattice gets “locked in” with a high, self-generated, internal polarization.\n\nIn the case of BaTiO$_3$, there is, in addition to an electronic polarization, also a rather large ionic polarization, presumed to be due to titanium ions which can move a little within the cubic lattice. The lattice resists large motions, so after the titanium has gone a little way, it jams up and stops. But the crystal cell is then left with a permanent dipole moment.\n\nIn most crystals, this is really the situation for all temperatures that can be reached. The very interesting thing about barium titanate is that there is such a delicate condition that if $N\\alpha$ is decreased just a little bit it comes unstuck. Since $N$ decreases with increasing temperature—because of thermal expansion—we can vary $N\\alpha$ by varying the temperature. Below the critical temperature it is just barely stuck, so it is easy—by applying an external field—to shift the polarization and have it lock in a different direction.\n\nLet’s see if we can analyze what happens in more detail. We call $T_c$ the critical temperature at which $N\\alpha$ is exactly $3$. As the temperature increases, $N$ goes down a little bit because of the expansion of the lattice. Since the expansion is small, we can say that near the critical temperature \\begin{equation} \\label{Eq:II:11:30} N\\alpha=3-\\beta(T-T_c), \\end{equation} where $\\beta$ is a small constant, of the same order of magnitude as the thermal expansion coefficient, or about $10^{-5}$ to $10^{-6}$ per degree C. Now if we substitute this relation into Eq. (11.28), we get that \\begin{equation*} \\kappa-1=\\frac{3-\\beta(T-T_c)}{\\beta(T-T_c)/3}. \\end{equation*} Since we have assumed that $\\beta(T-T_c)$ is small compared with one, we can approximate this formula by \\begin{equation} \\label{Eq:II:11:31} \\kappa-1=\\frac{9}{\\beta(T-T_c)}. \\end{equation}\n\nThis relation is right, of course, only for $T>T_c$. We see that just above the critical temperature $\\kappa$ is enormous. Because $N\\alpha$ is so close to $3$, there is a tremendous magnification effect, and the dielectric constant can easily be as high as $50{,}000$ to $100{,}000$. It is also very sensitive to temperature. 
For increases in temperature, the dielectric constant goes down inversely as the temperature, but, unlike the case of a dipolar gas, for which $\\kappa-1$ goes inversely as the absolute temperature, for ferroelectrics it varies inversely as the difference between the absolute temperature and the critical temperature (this law is called the Curie-Weiss law).\n\nWhen we lower the temperature to the critical temperature, what happens? If we imagine a lattice of unit cells like that in Fig. 11–9, we see that it is possible to pick out chains of ions along vertical lines. One of them consists of alternating oxygen and titanium ions. There are other lines made up of either barium or oxygen ions, but the spacing along these lines is greater. We make a simple model to imitate this situation by imagining, as shown in Fig. 11–10(a), a series of chains of ions. Along what we call the main chain, the separation of the ions is $a$, which is half the lattice constant; the lateral distance between identical chains is $2a$. There are less-dense chains in between which we will ignore for the moment. To make the analysis a little easier, we will also suppose that all the ions on the main chain are identical. (It is not a serious simplification because all the important effects will still appear. This is one of the tricks of theoretical physics. One does a different problem because it is easier to figure out the first time—then when one understands how the thing works, it is time to put in all the complications.)\n\nNow let’s try to find out what would happen with our model. We suppose that the dipole moment of each atom is $p$ and we wish to calculate the field at one of the atoms of the chain. We must find the sum of the fields from all the other atoms. We will first calculate the field from the dipoles in only one vertical chain; we will talk about the other chains later. The field at the distance $r$ from a dipole in a direction along its axis is given by \\begin{equation} \\label{Eq:II:11:32} E=\\frac{1}{4\\pi\\epsO}\\,\\frac{2p}{r^3}. \\end{equation} At any given atom, the dipoles at equal distances above and below it give fields in the same direction, so for the whole chain we get \\begin{equation} \\label{Eq:II:11:33} E_{\\text{chain}}=\\frac{p}{4\\pi\\epsO}\\,\\frac{2}{a^3}\\cdot \\biggl(2+\\frac{2}{8}+\\frac{2}{27}+\\frac{2}{64}+\\dotsb\\biggr)= \\frac{p}{\\epsO}\\,\\frac{0.383}{a^3}. \\end{equation} It is not too hard to show that if our model were like a completely cubic crystal—that is, if the next identical lines were only the distance $a$ away—the number $0.383$ would be changed to $1/3$. In other words, if the next lines were at the distance $a$ they would contribute only $-0.050$ unit to our sum. However, the next main chain we are considering is at the distance $2a$ and, as you remember from Chapter 7, the field from a periodic structure dies off exponentially with distance. Therefore these lines contribute much less than $-0.050$ and we can just ignore all the other chains.\n\nIt is necessary now to find out what polarizability $\\alpha$ is needed to make the runaway process work. Suppose that the induced moment $p$ of each atom of the chain is proportional to the field on it, as in Eq. (11.6).
We get the polarizing field on the atom from $E_{\\text{chain}}$ using Eq. (11.32). So we have the two equations \\begin{equation*} p=\\alpha\\epsO E_{\\text{chain}} \\end{equation*} and \\begin{equation*} E_{\\text{chain}}=\\frac{0.383}{a^3}\\,\\frac{p}{\\epsO}. \\end{equation*} There are two solutions: $E_{\\text{chain}}$ and $p$ both zero, or \\begin{equation*} \\alpha=\\frac{a^3}{0.383}, \\end{equation*} with $E_{\\text{chain}}$ and $p$ both finite. Thus if $\\alpha$ is as large as $a^3/0.383$, a permanent polarization sustained by its own field will set in. This critical equality must be reached for barium titanate at just the temperature $T_c$. (Notice that if $\\alpha$ were larger than the critical value for small fields, it would decrease at larger fields and at equilibrium the same equality we have found would hold.)\n\nFor BaTiO$_3$, the spacing $a$ is $2\\times10^{-8}$ cm, so we must expect that $\\alpha=21.8\\times10^{-24}$ cm$^3$. We can compare this with the known polarizabilities of the individual atoms. For oxygen, $\\alpha=30.2\\times10^{-24}$ cm$^3$; we’re on the right track! But for titanium, $\\alpha=2.4\\times10^{-24}$ cm$^3$; rather small. To use our model we should probably take the average. (We could work out the chain again for alternating atoms, but the result would be about the same.) So $\\alpha(\\text{average})=16.3\\times10^{-24}$ cm$^3$, which is not high enough to give a permanent polarization.\n\nBut wait a moment! We have so far only added up the electronic polarizabilities. There is also some ionic polarization due to the motion of the titanium ion. All we need is an ionic polarizability of $9.2\\times10^{-24}$ cm$^3$. (A more precise computation using alternating atoms shows that actually $11.9\\times10^{-24}$ cm$^3$ is needed.) To understand the properties of BaTiO$_3$, we have to assume that such an ionic polarizability exists.\n\nWhy the titanium ion in barium titanate should have that much ionic polarizability is not known. Furthermore, why, at a lower temperature, it polarizes along the cube diagonal and the face diagonal equally well is not clear. If we figure out the actual size of the spheres in Fig. 11–9, and ask whether the titanium is a little bit loose in the box formed by its neighboring oxygen atoms—which is what you would hope, so that it could be easily shifted—you find quite the contrary. It fits very tightly. The barium atoms are slightly loose, but if you let them be the ones that move, it doesn’t work out. So you see that the subject is really not one-hundred percent clear; there are still mysteries we would like to understand.\n\nReturning to our simple model of Fig. 11–10(a), we see that the field from one chain would tend to polarize the neighboring chain in the opposite direction, which means that although each chain would be locked, there would be no net permanent moment per unit volume! (Although there would be no external electric effects, there are still certain thermodynamic effects one could observe.) Such systems exist, and are called antiferroelectric. So what we have explained is really an antiferroelectric. Barium titanate, however, is really like the arrangement in Fig. 11–10(b). The oxygen-titanium chains are all polarized in the same direction because there are intermediate chains of atoms in between. Although the atoms in these chains are not very polarizable, or very dense, they will be somewhat polarized, in the direction antiparallel to the oxygen-titanium chains. 
The small fields produced at the next oxygen-titanium chain will get it started parallel to the first. So BaTiO$_3$ is really ferroelectric, and it is because of the atoms in between. You may be wondering: “But what about the direct effect between the two O-Ti chains?” Remember, though, the direct effect dies off exponentially with the separation; the effect of the chain of strong dipoles at $2a$ can be less than the effect of a chain of weak ones at the distance $a$.\n\nThis completes our rather detailed report on our present understanding of the dielectric constants of gases, of liquids, and of solids.\n\n1. Sänger, Steiger, and Gächter, Helvetica Physica Acta 5, 200 (1932)." ]
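As a numerical footnote to the chapter (a sketch, not part of the lecture text): the Clausius-Mossotti prediction of Table 11–1 and the 0.383 lattice sum of Eq. (11.33) can both be reproduced directly from the numbers quoted in the text.

```python
import math

# Clausius-Mossotti, Eq. (11.28): kappa - 1 = N*alpha / (1 - N*alpha/3).
# Reproduce two rows of Table 11-1 from the gas value of N*alpha and the
# liquid/gas density ratio (small differences come from rounding in the table).
def kappa_liquid(n_alpha_gas, density_ratio):
    na = n_alpha_gas * density_ratio
    return 1 + na / (1 - na / 3)

print(f"CS2: kappa = {kappa_liquid(0.0029, 381):.2f}")    # table: 2.76 predicted, 2.64 observed
print(f"O2:  kappa = {kappa_liquid(0.000523, 832):.3f}")  # table: 1.509 predicted, 1.507 observed

# The lattice sum behind the 0.383 of Eq. (11.33):
# (1/4pi) * 2 * (2 + 2/8 + 2/27 + ...) = zeta(3)/pi.
s = sum(2 / k**3 for k in range(1, 200001))
print(f"chain constant = {2 * s / (4 * math.pi):.3f}")    # 0.383
```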
[ null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/II_11.html", null, "https://www.feynmanlectures.caltech.edu/img/camera.svg", null, "https://www.feynmanlectures.caltech.edu/img/FLP_II/f11-00/f11-00.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89270127,"math_prob":0.99812317,"size":39694,"snap":"2022-27-2022-33","text_gpt3_token_len":10460,"char_repetition_ratio":0.16253464,"word_repetition_ratio":0.007523716,"special_character_ratio":0.26014006,"punctuation_ratio":0.11528384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994093,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T17:40:32Z\",\"WARC-Record-ID\":\"<urn:uuid:77ce829c-f125-428d-83c7-96b377ad15d0>\",\"Content-Length\":\"86692\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f7eb3a13-57b8-4bb2-a91d-7727237cd00c>\",\"WARC-Concurrent-To\":\"<urn:uuid:5cc43dcc-2160-4f4a-9ce9-5d87a22d97c8>\",\"WARC-IP-Address\":\"52.43.17.37\",\"WARC-Target-URI\":\"https://www.feynmanlectures.caltech.edu/II_11.html\",\"WARC-Payload-Digest\":\"sha1:BLX35GR4FQIZXCKNJWFNR647NMYIFW3D\",\"WARC-Block-Digest\":\"sha1:5T3JTLNDXAT5LFF6BGZ5X5GCECOX5WEC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571483.70_warc_CC-MAIN-20220811164257-20220811194257-00631.warc.gz\"}"}
https://goprep.co/ex-6-q128-in-abc-de-bc-fig-6-39-find-the-values-of-x-y-and-z-i-1nkmfk
[ "# In ∆ABC, DE || BC (Fig. 6.39). Find the values of x, y and z.", null, "Given: DE || BC; B = 30° ; C = 40°\n\nFormula Used/Theory:-\n\nAngle sum property\n\nSum of all angles of triangle is 180°\n\nif 2 lines are parallel then their corresponding angles will be equal\n\nAs DE || BC\n\nAnd AB is transverse\n\nx = 30° Corresponding angles\n\nAs DE || BC\n\nAnd AC is transverse\n\ny = 40° Corresponding angles\n\nBy angle sum property in Δ ADE\n\nx + y + z = 180°\n\n30° + 40° + z = 180°\n\nz = 180° -70°\n\nz = 110°\n\nResult:- The value of x;y;z is 30°;40°;110° respectively\n\nRate this question :\n\nHow useful is this solution?\nWe strive to provide quality solutions. Please rate us to serve you better." ]
[ null, "https://gradeup-question-images.grdp.co/liveData/PROJ18800/1531202631647613.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66605085,"math_prob":0.99851096,"size":1414,"snap":"2021-04-2021-17","text_gpt3_token_len":467,"char_repetition_ratio":0.17730497,"word_repetition_ratio":0.021201413,"special_character_ratio":0.32885432,"punctuation_ratio":0.1511254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99852985,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T12:24:55Z\",\"WARC-Record-ID\":\"<urn:uuid:14b825a3-fcf2-418c-8e27-70bf40ea8102>\",\"Content-Length\":\"332390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:435354bf-802c-4f05-ba03-7e636aa2192f>\",\"WARC-Concurrent-To\":\"<urn:uuid:610431c3-42e8-40ec-ab6a-992afa0dad13>\",\"WARC-IP-Address\":\"104.18.25.35\",\"WARC-Target-URI\":\"https://goprep.co/ex-6-q128-in-abc-de-bc-fig-6-39-find-the-values-of-x-y-and-z-i-1nkmfk\",\"WARC-Payload-Digest\":\"sha1:ZF73WV4MTXHRGGGLOJM3TCLR2RDJQ3H6\",\"WARC-Block-Digest\":\"sha1:S3E255NYYRJCDRJP252RUQ5QOM2BTKBV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703529331.99_warc_CC-MAIN-20210122113332-20210122143332-00753.warc.gz\"}"}
https://kidsworksheetfun.com/2022/05/16/
[ "## Easy English For Beginners Worksheets\n\nA collection of english esl worksheets for home learning online practice distance learning and english classes to teach about beginner beginner. Are you a beginner...\n\nAddition Table Worksheet Pdf. This worksheet is a supplementary fourth grade resource to help teachers, parents and children at home and in school. 11 + 1 = 12 11 + 2 = 13 11 + 3 = 14 11 + 4 = 15 11 + 5 = 16 11 + 6 = 17 11 + […]\n\n## Spelling Worksheets For 2nd Grade\n\nMath Worksheet Addition Grade 1. Learning the basic steps involved in adding numbers in columns is important. Math worksheets for first graders that your students will want to complete. Maths Worksheets For Grade 1 Addition Maths Worksheets For Grade 1 from infinitesearchforthespicecabinet.blogspot.com Mathematics for grade 3 (addition and subtraction of fractions) by jalvin: If you […]\n\nChristmas Addition Worksheets 2Nd Grade. Christmas wordsearches, puzzles, gift calendars. If you are searching about second grade math worksheets free printable k5 learning you've visit to the right page. 2nd Grade Color By Number Christmas Worksheets Name Tracing Generator from www.nametracinggenerator.com Christmas addition math worksheets 2nd grade coloring source: Ensure solid practice with our 2nd […]\n\n## One Variable Equations Worksheet Pdf\n\nOne Variable Equations Worksheet Pdf. The equations we study in classes vi, vii and viii are linear equations in one variable. Send your suggestions or comments. One Step Equations Worksheet Answer Key Master of Documents from tutore.org Solve the single variables for these equations, answers are on the second page of the pdf worksheet. Equations […]\n\nPerfect as home learning tasks help children put what they ve learned into practice. 4x4 magic square puzzles. Trigonometry Identity Magic Square Activity Trigonometry Student..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83011264,"math_prob":0.8722147,"size":2068,"snap":"2022-27-2022-33","text_gpt3_token_len":459,"char_repetition_ratio":0.19040698,"word_repetition_ratio":0.03773585,"special_character_ratio":0.21034816,"punctuation_ratio":0.09593023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97590125,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T19:48:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b146e120-109c-4a1d-a0a5-eab5a602c3d5>\",\"Content-Length\":\"92583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21f549d4-7d8e-4b1d-9017-d236dd31e2a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:541f9420-a65f-40ca-97f2-7d84965c51f5>\",\"WARC-IP-Address\":\"104.21.92.83\",\"WARC-Target-URI\":\"https://kidsworksheetfun.com/2022/05/16/\",\"WARC-Payload-Digest\":\"sha1:WSAQPI6PVWKTQ37YYV6PDXRKASETHXUP\",\"WARC-Block-Digest\":\"sha1:OTHUDJDUKEP4BMW3IXX35EW4PE7ER3JJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104204514.62_warc_CC-MAIN-20220702192528-20220702222528-00323.warc.gz\"}"}
https://mathoverflow.net/questions/289443/closed-form-expression-for-differential-of-matrix-function
[ "# Closed-form expression for differential of matrix function\n\nLet $X$ be a real $n\\times n$ positive semidefinite matrix of rank $m\\le n$ and let $Y\\in\\mathbb{R}^{m\\times n}$ be the unique matrix satisfying (i) $X=Y^\\top Y$, and (ii) $Y\\, [I\\, |\\, 0]^\\top = L$ with $L\\in\\mathbb{R}^{m\\times m}$ being upper triangular with positive diagonal entries. (Notice that when $n=m$, $Y$ coincides with the (unique) Cholesky factor of $X$.) Let $A\\in\\mathbb{R}^{m\\times m}$ be a positive definite matrix and $B\\in\\mathbb{R}^{n\\times m}$ be such that $YB$ is nonsingular. Consider the following matrix-valued function $$f(X)=(YB)^{-1}A (YB)^{-\\top}.$$\n\nI'm looking for a closed-form expression of the matrix differential $\\mathrm{d}\\, f(X)$.\n\nMy attempt. A simple observation is that, in case $A=\\alpha I$, $\\alpha\\in \\mathbb{R}$, the sought differential reduces to $$\\mathrm{d}\\, f(X) = -\\alpha(B^\\top XB)^{-1} {B^{\\top} \\mathrm{d}\\, X\\, B} (B^\\top XB)^{-1}.$$\n\nFor the case of general positive definite $A$, I describe below my attempt that deals with vectorized ($\\mathrm{vec}$ operation) differentials. First, using chain rule, \\begin{equation}\\tag{1}\\label{eq:1} \\mathrm{vec}(\\mathrm{d}\\, f(X)) = \\frac{\\mathrm{vec}(\\mathrm{d}\\, f(X))}{\\mathrm{vec}(\\mathrm{d}\\, Y)}\\frac{\\mathrm{vec}(\\mathrm{d}\\, Y)}{\\mathrm{vec}(\\mathrm{d}\\, X)} \\mathrm{vec}(\\mathrm{d}\\, X). \\end{equation} Concerning the first term, we have $$\\frac{\\mathrm{vec}(\\mathrm{d}\\, f(X))}{\\mathrm{vec}(\\mathrm{d}\\, Y)}=-\\left((YB)^{-1}A(YB)^{-\\top}B^\\top \\otimes (YB)^{-1}\\right)-\\left((YB)^{-1}\\otimes (YB)^{-1}A(YB)^{-\\top}B^\\top \\right)K^{n,m},$$ where $K^{n,m}$ is an $nm\\times nm$ commutation matrix and $\\otimes$ denotes Kronecker product. For the second term, using the same argument of the accepted answer to this MO question, we have $$\\frac{\\mathrm{vec}(\\mathrm{d}\\, Y)}{\\mathrm{vec}(\\mathrm{d}\\, X)} = \\left((Y^\\top\\otimes I)K^{n,m}+(I\\otimes Y^\\top)\\right)^{-L},$$ where $(\\cdot)^{-L}$ denotes left-inversion. Plugging the above-derived expressions into \\eqref{eq:1}, yields a (vectorized) formula for the sought differential.\n\nAt this point, provided that my calculations are correct, I wonder whether Eq. \\eqref{eq:1} can be further simplified and written in matrix form (if possible). I conjecture (or, more frankly, I hope) that the final expression is similar to the scalar ($A=\\alpha I$) case, i.e., something of the form $$\\mathrm{d}\\, f(X) = -(YB)^{-1}A (YB)^{-\\top} {B^{\\top} \\mathrm{d}\\, X\\, B} (YB)^{-1}A (YB)^{-\\top}.$$\n\nEdit. I believe that the hard part of my question regards the computation of the differential of $Y$ w.r.t. $X$. Of course, this can be accomplished via vectorization, as described above, and subsequent \"matricization\". Nevertheless, I wonder whether a more \"genuine\" matrix expression for $\\mathrm{d}Y$ exists.\n\nAny comment/suggestion is very appreciated.\n\n• Since $Y$ is not well-defined, what exactly do you mean by its differential? Dec 28, 2017 at 18:35\n• @IgorRivin: According to the accepted answer to this MO question mathoverflow.net/questions/150427/…, I think that the (classical notion of) differential of $Y$ w.r.t. $X$ is well-defined under certain assumption concerning invertibility of $(I\\otimes Y^\\top)$. See also my edit. Dec 30, 2017 at 17:35\n• Since multiplying $Y$ on the left by an orthogonal matrix gives you another $Y,$ I still don't know what you mean... 
Dec 30, 2017 at 17:54\n• @IgorRivin: I edited the OP in order to deal with a well-defined $Y$. Dec 31, 2017 at 9:04\n\nIt suffices to consider the case $n=2$, $m=1$. Namely, write $Y=[Y_1|Y_2]$ etc, then $L=Y_1$ is upper triangular with positives on the diagonal, and \\begin{align*} X &= Y^\\top Y = \\begin{pmatrix} Y_1^\\top Y_1 & Y_1^\\top Y_2 \\\\ Y_2^\\top Y_1 & Y_2^\\top Y_2\\end{pmatrix}\\,, \\\\ dX &= \\begin{pmatrix} (dY_1)^\\top. Y_1 + Y_1^\\top.dY_1 & (dY_1)^\\top. Y_2 + Y_1^\\top.dY_2 \\\\ (dY_2)^\\top. Y_1 + Y_2^\\top. dY_1 & (dY_2)^\\top. Y_2 + Y_2^\\top.dY_2 \\end{pmatrix} \\\\ YB &= Y_1B_1 + Y_2B_2 \\end{align*} Moreover, \\begin{align*} df(X) =& -(YB)^{-1}.dY.B.(YB)^{-\\top} - (YB)^{-1}.A.(YB)^{-\\top}.(dY.B)^\\top.(YB)^{-\\top} \\\\=& -(YB)^{-1}.dY.B.f(X) - f(X)((YB)^{-1}.dY.B)^\\top \\\\ dY.B =& dY_1.B_1 + dY_2.B_2 \\end{align*}\nNow you can compute $dY_1$ from $dX_{1,1}$ as in the invertible situation. Let us drop subindices and go to the invertible situation. $C\\mapsto C^{-\\top}$ is the Cartan involution on the reductive Lie group $GL^+(m)$. Consider the Iwasawa decomposition $GL^+(m) = SO(m).A.N$, $A$ the diagonal matrices with positive entries, and $N$ the upper unipotent matrices (which equals here the Gram-Schmidt orthonormalisation procedure with the coefficients arranged in $A.N$). First note that $Y:S_+(m)\\to AN$ is a smooth map into a Lie group, so $dY$, better $TY: TS_+(m)\\to T(AN)$, and the right logarithmic derivative $\\delta Y:= TY.Y^{-1}:TS_+(m)\\to \\mathfrak{an}$ is a Lie algebra valued 1-form, $\\delta Y\\in \\Omega^1(S_+(m);\\mathfrak{an})$. You can get back $Y$ from the 1-form $\\delta Y$ by Cartan development. Namely, $\\delta Y$ describes a flat principal connection on the trivial principal $AN$-bundle $S_+(m)\\times AN \\to S_+(m)$, and any horizontal leaf of it is a right translate of the mapping $Y$. Moreover $Z\\mapsto Z^\\top$ restricts to the mapping $\\mathfrak{an}\\to \\mathfrak{an}^*$ corresponding to the inner product $\\operatorname{Trace}(U^\\top.V)$ on $\\mathfrak{gl}(m)$. We have \\begin{gather*} (dY)^\\top.Y + Y^{\\top}.dY = dX = dX^{\\top} \\\\ \\delta Y + (\\delta Y)^\\top = dY.Y^{-1} + Y^{-\\top}.(dY)^\\top = Y^{-\\top}.dX.Y^{-1} = (dX.Y^{-1})^{\\top}.Y^{-1} \\end{gather*} Now, $\\delta Y + (\\delta Y)^\\top$ allows in a simple way to compute $\\delta Y$ (take the upper triangular part and 1/2 of the diagonal entries).\n• Thanks for the answer! Could you please elaborate a little more how $\\mathrm{d} Y_1$ can be computed from $\\mathrm{d} X_{1,1}$ (without using vectorization, if possible)? Jan 1, 2018 at 9:40" ]
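One way to sanity-check any candidate closed form, including the conjecture in the question, is a finite-difference experiment. The sketch below is an illustration only, restricted to the full-rank case m = n, where Y is the upper-triangular factor with X = Yᵀ Y (the transposed Cholesky factor); it probes the conjectured formula numerically and proves nothing on its own.

```python
import numpy as np

def upper_factor(X):
    # Full-rank case m = n: Y upper triangular with positive diagonal and X = Y^T Y.
    # numpy's cholesky gives X = L L^T with L lower triangular, so Y = L^T.
    return np.linalg.cholesky(X).T

def f(X, A, B):
    YB = upper_factor(X) @ B
    W = np.linalg.inv(YB)
    return W @ A @ W.T

def df_numeric(X, A, B, dX, h=1e-6):
    # Central finite difference of f at X in the symmetric direction dX.
    return (f(X + h * dX, A, B) - f(X - h * dX, A, B)) / (2 * h)

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)); X = M @ M.T          # positive definite
S = rng.standard_normal((n, n)); A = S @ S.T          # positive definite
B = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)); dX = D + D.T         # symmetric perturbation

F = f(X, A, B)
conjectured = -F @ B.T @ dX @ B @ F                   # the guess made in the question
print(np.linalg.norm(df_numeric(X, A, B, dX) - conjectured))
```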
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6733078,"math_prob":0.9999542,"size":2801,"snap":"2023-14-2023-23","text_gpt3_token_len":938,"char_repetition_ratio":0.1766178,"word_repetition_ratio":0.0,"special_character_ratio":0.33166727,"punctuation_ratio":0.13274336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T15:36:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5fb90e41-980e-4e43-bb4f-97d92e7e0965>\",\"Content-Length\":\"102651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f00ccba-8d69-4039-871c-84b024a18fde>\",\"WARC-Concurrent-To\":\"<urn:uuid:f92ebfd7-767d-49de-827a-858501cc16d4>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/289443/closed-form-expression-for-differential-of-matrix-function\",\"WARC-Payload-Digest\":\"sha1:DTXVQUWHUZIF4TUBNYVG4OHMJ2BI4R6K\",\"WARC-Block-Digest\":\"sha1:ZSRVHUGMYYQVVECMGBJMTN5WM5YB3KIR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653930.47_warc_CC-MAIN-20230607143116-20230607173116-00758.warc.gz\"}"}
https://codegolf.stackexchange.com/questions/250283/rearrange-to-a-palindrome
[ "# Rearrange to a palindrome\n\nGiven a string, shuffle it so that it becomes a palindrome.\n\nFor example, adadbcc can be arranged into dacbcad, or dcabacd, acdbdca and more. Any of these (or all) is acceptable, and duplicates are allowed if outputting all. Something like abc cannot be shuffled into a palindrome, and you can assume it won't be inputted.\n\n(if it helps) input will only contain lowercase letters.\n\n## Testcases\n\nThese show one possible solution.\n\nnanas -> nasan\ncoconutnut -> conuttunoc\napotato -> atopota\nmanplancanalpanamaaaa -> amanaplanacanalpanama\nnananana -> nanaanan\nanaan -> anana\n\n• Related: Unsort an array\n– tsh\nJul 25 at 3:20\n• Suggested test case(s): Any string with an even length, e.g., nanana. Jul 25 at 4:44\n• Suggested test case: nnaaa (a case where the odd-count element has a count greater than 1). Jul 25 at 5:18\n\n# Brachylog, 3 bytes\n\np.↔\n\n\nTry it online!\n\n### Explanation\n\np. The output is a permutation of the input\n.↔ The output reversed is itself\n\n\n# J, 21 bytes\n\n(,~,/@/:2|1#.e.)@/:~\n\n\nTry it online!\n\nA non brute force approach which runs in n*log(n) time.\n\n• /:~ Sort. This ensures that like elements are grouped together.\n• /:2|1#.e. Then sort by number of occurrences, modded by 2. This puts any items with an odd number of elements at the end of the array, while keeping like elements together.\n• ,~,/@ Reduce that from the right by alternately appending and prepending elements. The upshot is that we start with the middle element, and then build outward by adding pairs of elements to opposite sides.\n• For those porting this to other langs, reduction by \"add the next char and then reverse\" or \"reverse and then add next char\" works equally well. Jul 26 at 1:40\n\n# Python 3, 72 bytes\n\n*r,=s={''}\nfor c in input():s^={c};r+={c}-s\nprint(*r,*s,*r[::-1],sep=\"\")\n\n\nTry it online!\n\nSaved 2 bytes by pxeger and xnor. It could be 66 bytes as xnor pointed out if output as list of characters.\n\nIt is $$\\O(n)\\$$.\n\n• Boring 1 byte save in Python 3.9+: ato.pxeger.com/… Jul 25 at 10:02\n• – xnor\nJul 25 at 23:51\n• If you're OK printing a list of characters, something like this is shorter\n– xnor\nJul 25 at 23:52\n\n# 05AB1E, 4 bytes\n\nœʒÂQ\n\n\nTry it online!\n\n# Explanation\n\nœ all permutations of the (implicit) input\nʒ only keep those such that\n push x, reversed(x)\nQ they are equal\n\n\n# Python, 67 bytes\n\nlambda x:(s:=sorted(x,key=lambda e:(x.count(e)%2,e)))[1::2]+s[::-2]\n\nAttempt This Online!\n\nPort of Jonah's excellent J solution.\n\n# C (gcc), 124108 107 bytes\n\n16 bytes saved thanks to c--! 1 byte saved thanks to ceilingcat!\n\nf(c,n,i,j,k)char*c;{for(i=j=0;i<n;i++)j=*c-c[i]?j:i;k=c[i=j?n--,n--:i/2];\nc[i]=c[j];c[j]=k;n>2&&f(c+!!j,n);}\n\n\nTry it online! Linebreak added for clarity. Function f which takes as input a pointer to the start of a char array and its length as n. Modifies the input array in place, yielding a single result.\n\nAnnoying that I can't save a few bytes by using c[i]^=c[j]^=c[i]^=c[j] instead of a standard switch, but this expression fails when i == j, and accounting for that doesn't end up saving any bytes.\n\n## Commented explanation\n\nSlightly outdated, but the same general concept is the same. In the current version, we infer the count by observing that k is 1 if and only if j is 0.\n\nf(c,n,i,j,k,t) char*c; {\n// count the number of instances of the first character, *c\nfor(i = k = 0; i < n; i++)\n// if we found *c in the string\n*c == c[i]\n? 
k++, j = i // then note it in our tally, and note its index as j\n: 0; // else do nothing\n\n// i is now the original length n\n// j is now the index of the last occurrence of *c\n\n// we will check if there is more than one occurrence of *c\n--k\n// this is truthy iff k > 1. in this case, we set up further recursion\n? n -= 2, // deduct the two solved characters from the solve length\ni-- // we want to swap with the end of the string (i=n-1)\n// else, if k == 1, then we need to put this character in the middle\n// to properly palindromize it\n: (j = 0, // we want to swap the lone character (at j=0)\ni /= 2); // with the center character (at i=n/2).\n\n// swap characters at positions j and i\n// when k>1, swaps the last occurrence of *c with the end of the string\n// when k==1, swaps the first character with the middle of the string\nt = c[i];\nc[i] = c[j];\nc[j] = t;\n\n// if n < 2, the string is solved\n// otherwise, we will recurse as follows:\n// - when k was initially >1, k is now k-1, and !!k evaluates to 1,\n// letting us recurse starting with c+1.\n// in this case, n is now n-2, letting us recur on the string without\n// the bookending characters\n// - when k was initially 1, k is now 0, and !!k evaluates to 0.\n// this means we recurse with c, and examine the character we\n// just swapped there. n is also unchanged in this branch.\n// furthermore, this swap only ever happens once because\n// we check n > 2 before attempting to recurse.\nn > 2 && f(c + !!k, n);\n}\n\n• @c-- Thank you! I'll apply the first version, but I'll need to look at the second version a bit more in depth. Jul 25 at 21:13\n• that's probably for the best\n– c--\nJul 26 at 3:01\n• @c-- I think it looks good, I'll post an update with an explanation justifying it Jul 26 at 5:00\n• the initialization of j to 0 isn't a problem, because the loop iterates through the range [0..n], what I was talking about is the fact that i goes one character past the end of the string, which is not always \\0 because after calling f() recursively at least once, the character after the string may match *c, which would set j = n, swapping it with c[n-1], what I'm trying to say is that it's broken for a string such as aaaabb or aaabb, sorry for the inconvenience\n– c--\nJul 26 at 15:09\n• @c-- Good catch, I'll revert with that in mind Jul 27 at 17:12\n\n# JavaScript (Node.js), 71 bytes\n\na=>[...p=a.filter(c=>!(a[c]^=1)),...a.find(c=>a[c])||[],...p.reverse()]\n\n\nTry it online!\n\nInput / Output as array of characters.\n\n# JavaScript (Node.js), 78 bytes\n\nf=(s,c='',r=s.replace(/^(.)(.*)\\1|./,'$2'),t=RegExp.$1)=>s?t+f(r,t?c:s)+t:c\n\n\nTry it online!\n\nInput / Output as strings.\n\n# Python, 120 bytes\n\nlambda a,j=\"\".join:[C:=Counter(a),x:=j(C[c]//2*c for c in C)]+j(C[c]%2*c for c in C)+x[::-1]\nfrom collections import*\n\nAttempt This Online!\n\nThere's probably a much shorter way to do this in $$\\ O(n!n) \\$$ time or something silly like that, but this is linear I think.\n\n# Python, 94 bytes\n\nlambda a,j=\"\".join:(x:=j(a.count(c)//2*c for c in{*a}))+j(a.count(c)%2*c for c in{*a})+x[::-1]\n\nAttempt This Online!\n\nA little shorter, but runs in $$\\ O(n^2) \\$$ time.\n\n# lin, 19 bytes\n\nperm\".+ rev =\"?\n\n\nTry it here!\n\nFor testing purposes (use -i flag if running locally):\n\n\"nanas\" ; _\nperm\".+ rev =\"?\n\n\n## Explanation\n\nPrettified code:\n\nperm (.+ rev = ) ?\n\n• perm permutations\n• (...) ? 
find first...\n• .+ rev = palindrome\n\n# Curry (PAKCS), 41 bytes\n\nf a@([]?[_])=a\nf(a:b++a:c)=a:f(b++c)++[a]\n\n\nTry it online!\n\nThis may returns multiple results, with duplicates, but not necessarily all of them. If this is not allowed, you can add the flag :set +first to print only the first result: Try it online!.\n\n# Vyxal, 4 bytes\n\nṖ'Ṙ=\n\n\nTry it Online! Outputs all possibilities with duplicates. Add ;U to remove them. Takes permutations and only keeps those that are equal after reversal.\n\n# Wolfram Language (Mathematica), 33 bytes\n\nSelect[PalindromeQ]@*Permutations\n\n\nTry it online!\n\n-1 byte thanks to att\n\n• Select[PalindromeQ]@*Permutations\n– att\nJul 26 at 2:12\n\n# Alice, 19 bytes\n\n/@P?w$.R/ \\IO>K.-!\\ Try it online! /IP>w..!R-$K?O@ Full program\n/ Switch to ordinal mode\nIP Read the input and generate all the possible permutations\n>w $K For each permutation .! Store a copy of the permutation on the tape . R Reverse the permutation - Subtract the reversed permutation from the permutation (leaving \"\" if it is a palindrome, exiting the loop) ?O@ Print the tape # Ruby, 41 bytes ->a{a.permutation.select{_1==_1.reverse}} Attempt This Online! Inputs and outputs array of chars. Output contains all answers with duplicates, but test suite prints only the first one, so that it doesn't flood the output. # JavaScript (ES6), 63 62 bytes Expects and returns a string. f=s=>s==(S=s.replace(/(.)(.*)\\1/,(_,A,B)=>(s=A,B)))?s:s+f(S)+s Try it online! ### Commented f = // f is a recursive function s => // taking the input string s s == ( // test whether s is unchanged S = s.replace( // when turned into the reduced string S // obtained by looking in s for: /(.)(.*)\\1/, // a character A, followed by some string B // (which may be empty), followed by A (_, A, B) => // if found, (s = A, B) // copy A into s and replace the match with B ) // (i.e. both instances of A are removed) ) ? // if S is equal to s: s // we're left with either an empty string or a // single character; either way, this ends up // in the middle of the output : // else: s + f(S) + s // append s, followed by the result of a // recursive call with S, followed by s again • This is also 63 bytes: f=(s,S=s.replace(/(.)(.*)\\1|$/,'$2'),q=RegExp.$1)=>q?q+f(S)+q:s\n– tsh\nJul 26 at 1:51\n\n$_=join\"\",sort@F;s/(.)\\1/!push@r,$1/ge;say@r,$_,reverse@r Try it online! # Haskell, 52 bytes import Data.List find(\\x->x==reverse x).permutations Try it online! Pretty new to Haskell so I'm very open to suggestions on how this could be improved, because I have a feeling it can be a lot - especially concerning that lambda, but I couldn't find how to make it pointfree. • Pointfree: \\x->x==reverse x -> (==)<*>reverse. Aug 3 at 2:38 # APL (Dyalog Classic), 19 bytes {⌊/⍵=⌽⍵:⍵⋄∇⍵[?⍨≢⍵]} Try it online! Usage: palindrome←{⌊/⍵=⌽⍵:⍵⋄∇⍵[?⍨≢⍵]} palindrome 'baba' baab palindrome 'daabbcc' cbadabc New contributor atpx8 is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct. # Factor + math.combinatorics, 41 38 bytes [ [ dup reverse = ] find-permutation ] Try it online! # Retina, 35 bytes O. |\"\"L^((.)\\2)*.?$\n$^$\n(.).\n$1 Try it online! Link includes test cases. Explanation: Based on @Jonah's J solution. O. Sort the characters into order. |\"\"L^((.)\\2)*.? If there's an odd character, split the string after that point and exchange (^) and join (|\"\") the two halves. (There's also an empty string in the results but obviously it doesn't have any effect.) 
$\n$^$\n\n\nAppend the reverse of the string.\n\n(.).\n\\$1\n\n\nDrop every other character.\n\n# Charcoal, 24 bytes\n\n≔Φθ﹪№…θκι²ηηΦΦθ﹪№θ鲬κ⮌η\n\n\nTry it online! Link is to verbose version of code. Explanation:\n\n≔Φθ﹪№…θκι²ηη\n\n\nUsing my code from Generate an arbitrary half of a string, extract half (rounded down) of the characters from the string, but also save it in a variable.\n\nΦΦθ﹪№θ鲬κ\n\n\nIf there was a character that appeared an odd number of times then output it. (Now if only Maximum(\"\") returned the empty string...)\n\n⮌η\n\n\nOutput the reverse of the half string.\n\n# x86-64 machine code, 31 bytes\n\n31 C9 FF CA AC 0F BB C1 73 FA 88 04 17 AA 83 EA 02 77 F1 75 09 0F BC C1 75 01 AC 0C 60 AA C3\n\n\nTry it online!\n\nFollowing the standard calling convention for Unix-like systems (from the System V AMD64 ABI), this takes the address of the input, as an array of single-byte characters, in RSI and its length in RDX, and takes in RDI an address at which to place the result, as a non-overlapping array of single-byte characters of the same length.\n\nPart of this is similar to my answer to \"Generate an arbitrary half of a string\".\n\nIn assembly:\n\nf:\nxor ecx, ecx # Set ECX to 0.\ndec edx # Subtract 1 from the length in EDX.\nrepeat:\nlodsb # Load a byte from the string into AL, advancing the pointer.\nbtc ecx, eax # Invert the bit in ECX indexed by the low 5 bits of that byte.\n# Set CF to the previous value of that bit.\njnc repeat # Jump back if CF is 0.\nmov [rdi+rdx], al # Place the byte at a position that starts from the end of\n# the output string and will move backwards.\nstosb # Add it to the start of the output, advancing the pointer.\nsub edx, 2 # Subtract 2 from EDX. (= # of unfilled places - 1)\nja repeat # Jump back if it is still positive.\njnz end # Jump if it is -1 (all places filled: happens for even length).\nbsf eax, ecx # Set EAX to the index of the 1 bit in ECX.\n# (If the input is valid, there is at most one 1 bit here.)\njnz skip # Jump if there was a 1 bit in ECX.\nlodsb # (Otherwise, the final instance of the odd-count character\n# is at the end of the string and has not yet been read.)\n# Load a byte from the string into AL, advancing the pointer.\nskip:\nor al, 0x60 # Set bits 5 and 6 in AL, making the correct lowercase letter.\nstosb # Add it to the output (in the centre), advancing the pointer.\nend:\nret # Return.\n`" ]
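All of the golfed answers above implement the same underlying idea: count the characters, lay down one half of every pair, put the single odd-count character (if any) in the middle, and mirror the first half. For readers who want the algorithm without the golfing, here is a minimal ungolfed sketch in Python; the function name `palindromize` and the sample inputs are mine, not taken from any answer above, and it assumes the input can in fact be rearranged into a palindrome:

```python
from collections import Counter

def palindromize(s):
    """Rearrange s into a palindrome (assumes at most one character has an odd count)."""
    counts = Counter(s)
    half = "".join(ch * (n // 2) for ch, n in counts.items())   # one of each pair
    middle = "".join(ch * (n % 2) for ch, n in counts.items())  # the lone odd character, if any
    return half + middle + half[::-1]

print(palindromize("nanas"))    # nasan
print(palindromize("daabbcc"))  # abcdcba
```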
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8496202,"math_prob":0.9013904,"size":2489,"snap":"2022-27-2022-33","text_gpt3_token_len":753,"char_repetition_ratio":0.115492955,"word_repetition_ratio":0.024742268,"special_character_ratio":0.33306548,"punctuation_ratio":0.1521739,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852456,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T10:51:14Z\",\"WARC-Record-ID\":\"<urn:uuid:46ff661e-ba0d-4ca3-9fe3-e3b587dca2da>\",\"Content-Length\":\"452200\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a207dbe-8079-439f-9b55-82529312b149>\",\"WARC-Concurrent-To\":\"<urn:uuid:d652d5ac-ba03-4524-930c-2778960f1766>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/250283/rearrange-to-a-palindrome\",\"WARC-Payload-Digest\":\"sha1:LRDYR37SRXEWXGHMV55QELPC6HCTQLTR\",\"WARC-Block-Digest\":\"sha1:MYFUQEVXI3TU73ELOQG4IGGD2H4P72LK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572163.61_warc_CC-MAIN-20220815085006-20220815115006-00054.warc.gz\"}"}
https://www.colorhexa.com/42c761
[ "# #42c761 Color Information\n\nIn a RGB color space, hex #42c761 is composed of 25.9% red, 78% green and 38% blue. Whereas in a CMYK color space, it is composed of 66.8% cyan, 0% magenta, 51.3% yellow and 22% black. It has a hue angle of 134 degrees, a saturation of 54.3% and a lightness of 52%. #42c761 color hex could be obtained by blending #84ffc2 with #008f00. Closest websafe color is: #33cc66.\n\n• R 26\n• G 78\n• B 38\nRGB color chart\n• C 67\n• M 0\n• Y 51\n• K 22\nCMYK color chart\n\n#42c761 color description : Moderate lime green.\n\n# #42c761 Color Conversion\n\nThe hexadecimal color #42c761 has RGB values of R:66, G:199, B:97 and CMYK values of C:0.67, M:0, Y:0.51, K:0.22. Its decimal value is 4376417.\n\nHex triplet RGB Decimal 42c761 `#42c761` 66, 199, 97 `rgb(66,199,97)` 25.9, 78, 38 `rgb(25.9%,78%,38%)` 67, 0, 51, 22 134°, 54.3, 52 `hsl(134,54.3%,52%)` 134°, 66.8, 78 33cc66 `#33cc66`\nCIE-LAB 71.464, -57.384, 40.479 24.826, 42.866, 18.274 0.289, 0.499, 42.866 71.464, 70.224, 144.8 71.464, -56.131, 60.881 65.472, -46.891, 29.282 01000010, 11000111, 01100001\n\n# Color Schemes with #42c761\n\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #c742a8\n``#c742a8` `rgb(199,66,168)``\nComplementary Color\n• #66c742\n``#66c742` `rgb(102,199,66)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #42c7a4\n``#42c7a4` `rgb(66,199,164)``\nAnalogous Color\n• #c74266\n``#c74266` `rgb(199,66,102)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #a442c7\n``#a442c7` `rgb(164,66,199)``\nSplit Complementary Color\n• #c76142\n``#c76142` `rgb(199,97,66)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #6142c7\n``#6142c7` `rgb(97,66,199)``\n• #a8c742\n``#a8c742` `rgb(168,199,66)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #6142c7\n``#6142c7` `rgb(97,66,199)``\n• #c742a8\n``#c742a8` `rgb(199,66,168)``\n• #2b9143\n``#2b9143` `rgb(43,145,67)``\n• #31a54c\n``#31a54c` `rgb(49,165,76)``\n• #37b955\n``#37b955` `rgb(55,185,85)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #56cd71\n``#56cd71` `rgb(86,205,113)``\n• #69d382\n``#69d382` `rgb(105,211,130)``\n• #7dd892\n``#7dd892` `rgb(125,216,146)``\nMonochromatic Color\n\n# Alternatives to #42c761\n\nBelow, you can see some colors close to #42c761. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #44c742\n``#44c742` `rgb(68,199,66)``\n• #42c74b\n``#42c74b` `rgb(66,199,75)``\n• #42c756\n``#42c756` `rgb(66,199,86)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #42c76c\n``#42c76c` `rgb(66,199,108)``\n• #42c777\n``#42c777` `rgb(66,199,119)``\n• #42c782\n``#42c782` `rgb(66,199,130)``\nSimilar Colors\n\n# #42c761 Preview\n\nThis text has a font color of #42c761.\n\n``<span style=\"color:#42c761;\">Text here</span>``\n#42c761 background color\n\nThis paragraph has a background color of #42c761.\n\n``<p style=\"background-color:#42c761;\">Content here</p>``\n#42c761 border color\n\nThis element has a border color of #42c761.\n\n``<div style=\"border:1px solid #42c761;\">Content here</div>``\nCSS codes\n``.text {color:#42c761;}``\n``.background {background-color:#42c761;}``\n``.border {border:1px solid #42c761;}``\n\n# Shades and Tints of #42c761\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020804 is the darkest color, while #f8fdf9 is the lightest one.\n\n• #020804\n``#020804` `rgb(2,8,4)``\n• #07170b\n``#07170b` `rgb(7,23,11)``\n• #0b2611\n``#0b2611` `rgb(11,38,17)``\n• #103518\n``#103518` `rgb(16,53,24)``\n• #14441f\n``#14441f` `rgb(20,68,31)``\n• #195326\n``#195326` `rgb(25,83,38)``\n• #1d632d\n``#1d632d` `rgb(29,99,45)``\n• #227234\n``#227234` `rgb(34,114,52)``\n• #26813b\n``#26813b` `rgb(38,129,59)``\n• #2b9042\n``#2b9042` `rgb(43,144,66)``\n• #2f9f49\n``#2f9f49` `rgb(47,159,73)``\n• #34ae50\n``#34ae50` `rgb(52,174,80)``\n• #38bd57\n``#38bd57` `rgb(56,189,87)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #51cb6e\n``#51cb6e` `rgb(81,203,110)``\n• #60d07a\n``#60d07a` `rgb(96,208,122)``\n• #6fd487\n``#6fd487` `rgb(111,212,135)``\n• #7fd994\n``#7fd994` `rgb(127,217,148)``\n• #8edda0\n``#8edda0` `rgb(142,221,160)``\n``#9de2ad` `rgb(157,226,173)``\n• #ace6ba\n``#ace6ba` `rgb(172,230,186)``\n• #bbebc6\n``#bbebc6` `rgb(187,235,198)``\n• #caefd3\n``#caefd3` `rgb(202,239,211)``\n• #d9f4df\n``#d9f4df` `rgb(217,244,223)``\n• #e8f8ec\n``#e8f8ec` `rgb(232,248,236)``\n• #f8fdf9\n``#f8fdf9` `rgb(248,253,249)``\nTint Color Variation\n\n# Tones of #42c761\n\nA tone is produced by adding gray to any pure hue. In this case, #848584 is the less saturated color, while #13f648 is the most saturated one.\n\n• #848584\n``#848584` `rgb(132,133,132)``\n• #7b8e7f\n``#7b8e7f` `rgb(123,142,127)``\n• #71987a\n``#71987a` `rgb(113,152,122)``\n• #68a175\n``#68a175` `rgb(104,161,117)``\n• #5eab70\n``#5eab70` `rgb(94,171,112)``\n• #55b46b\n``#55b46b` `rgb(85,180,107)``\n• #4bbe66\n``#4bbe66` `rgb(75,190,102)``\n• #42c761\n``#42c761` `rgb(66,199,97)``\n• #39d05c\n``#39d05c` `rgb(57,208,92)``\n• #2fda57\n``#2fda57` `rgb(47,218,87)``\n• #26e352\n``#26e352` `rgb(38,227,82)``\n• #1ced4d\n``#1ced4d` `rgb(28,237,77)``\n• #13f648\n``#13f648` `rgb(19,246,72)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #42c761 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
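The RGB, CMYK and HSL figures quoted on this page can be reproduced in a few lines. The sketch below is an illustration, not part of the page: it uses Python's standard `colorsys` module for HSL and the usual naive RGB-to-CMYK formula, so the percentages may differ from the site by rounding:

```python
import colorsys

hex_color = "42c761"
r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))  # 66, 199, 97

# RGB -> CMYK (naive formula; assumes the colour is not pure black, i.e. k < 1)
rp, gp, bp = r / 255, g / 255, b / 255
k = 1 - max(rp, gp, bp)
c, m, y = ((1 - v - k) / (1 - k) for v in (rp, gp, bp))
print(f"CMYK ~ {c:.3f} {m:.3f} {y:.3f} {k:.3f}")       # ~ 0.668 0.000 0.513 0.220

# RGB -> HSL (colorsys returns hue, lightness, saturation, each in 0..1)
h, l, s = colorsys.rgb_to_hls(rp, gp, bp)
print(f"HSL ~ {h * 360:.0f} deg, {s * 100:.1f}%, {l * 100:.1f}%")  # ~ 134 deg, 54.3%, 52.0%
```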
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53100723,"math_prob":0.58049405,"size":3679,"snap":"2020-10-2020-16","text_gpt3_token_len":1600,"char_repetition_ratio":0.1232653,"word_repetition_ratio":0.011090573,"special_character_ratio":0.56183743,"punctuation_ratio":0.22883295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9837456,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-31T23:47:59Z\",\"WARC-Record-ID\":\"<urn:uuid:f54c0953-1330-4c22-820b-33f01046ba74>\",\"Content-Length\":\"36271\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb74d730-f326-4ea4-ba7e-3d3068ad56f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a464e8c-43b8-42cd-8212-6596c20b3b86>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/42c761\",\"WARC-Payload-Digest\":\"sha1:7O2RDZMRR2XQIDD6BZN3DAFT4KQIKAU2\",\"WARC-Block-Digest\":\"sha1:EW7UD444B2KABES37APVYVTEYK2PEGGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370504930.16_warc_CC-MAIN-20200331212647-20200401002647-00263.warc.gz\"}"}
https://www.mathsisfun.com/exponent.html
[ "# Exponents\n\nThe exponent of a number says how many times to use the number in a multiplication.", null, "In 82 the \"2\" says to use 8 twice in a multiplication,\nso 82 = 8 × 8 = 64\n\nIn words: 82 could be called \"8 to the power 2\" or \"8 to the second power\", or simply \"8 squared\"\n\nExponents are also called Powers or Indices.\n\nSome more examples:\n\n### Example: 53 = 5 × 5 × 5 = 125\n\n• In words: 53 could be called \"5 to the third power\", \"5 to the power 3\" or simply \"5 cubed\"\n\n### Example: 24 = 2 × 2 × 2 × 2 = 16\n\n• In words: 24 could be called \"2 to the fourth power\" or \"2 to the power 4\" or simply \"2 to the 4th\"\n\nExponents make it easier to write and use many multiplications\n\nExample: 96 is easier to write and read than 9 × 9 × 9 × 9 × 9 × 9\n\nYou can multiply any number by itself as many times as you want using exponents.\n\nTry here:\n\n## In General\n\nSo in general:\n\n an tells you to multiply a by itself, so there are n of those a's:", null, "## Another Way of Writing It\n\nSometimes people use the ^ symbol (above the 6 on your keyboard), as it is easy to type.\n\nExample: 2^4 is the same as 24\n\n• 2^4 = 2 × 2 × 2 × 2 = 16\n\n## Negative Exponents\n\nNegative? What could be the opposite of multiplying? Dividing!\n\nSo we divide by the number each time, which is the same as multiplying by 1number\n\nExample: 8-1 = 18 = 0.125\n\nWe can continue on like this:\n\nExample: 5-3 = 15 × 15 × 15 = 0.008\n\nBut it is often easier to do it this way:\n\n5-3 could also be calculated like:\n\n15 × 5 × 5 = 153 = 1125 = 0.008\n\n## Negative? Flip the Positive!", null, "That last example showed an easier way to handle negative exponents: Calculate the positive exponent (an) Then take the Reciprocal (i.e. 1/an)\n\nMore Examples:\n\nNegative Exponent   Reciprocal of\nPositive Exponent\nAnswer\n4-2 = 1 / 42 = 1/16 = 0.0625\n10-3 = 1 / 103 = 1/1,000 = 0.001\n(-2)-3 = 1 / (-2)3 = 1/(-8) = -0.125\n\n## What if the Exponent is 1, or 0?\n\n 1 If the exponent is 1, then you just have the number itself (example 91 = 9) 0 If the exponent is 0, then you get 1 (example 90 = 1) But what about 00 ? It could be either 1 or 0, and so people say it is \"indeterminate\".\n\n## It All Makes Sense\n\nIf you look at that table, you will see that positive, zero or negative exponents are really part of the same (fairly simple) pattern:\n\nExample: Powers of 5\n.. etc..", null, "52 5 × 5 25\n51 5 5\n50 1 1\n5-1 15 0.2\n5-2 15 × 15 0.04\n.. etc..\n\n## Be Careful About Grouping\n\nTo avoid confusion, use parentheses () in cases like this:\n\n With () : (-2)2 = (-2) × (-2) = 4 Without () : -22 = -(22) = - (2 × 2) = -4\n\n With () : (ab)2 = ab × ab Without () : ab2 = a × (b)2 = a × b × b" ]
[ null, "https://www.mathsisfun.com/algebra/images/exponent-8-2.svg", null, "https://www.mathsisfun.com/algebra/images/exponent-definition.gif", null, "https://www.mathsisfun.com/algebra/images/negative-exponent.gif", null, "https://www.mathsisfun.com/algebra/images/larger-smaller-5.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.862731,"math_prob":0.9980943,"size":2491,"snap":"2021-04-2021-17","text_gpt3_token_len":853,"char_repetition_ratio":0.12384399,"word_repetition_ratio":0.03888889,"special_character_ratio":0.40907267,"punctuation_ratio":0.12072072,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994851,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,10,null,10,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T12:44:53Z\",\"WARC-Record-ID\":\"<urn:uuid:ae2886ce-9797-43dd-89d1-191a0ee664ee>\",\"Content-Length\":\"12238\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:288b1ea1-df9c-4318-a00c-e4865d5ed113>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d04749c-d797-45a7-bb15-e339d2800b3c>\",\"WARC-IP-Address\":\"104.22.10.27\",\"WARC-Target-URI\":\"https://www.mathsisfun.com/exponent.html\",\"WARC-Payload-Digest\":\"sha1:LLQLJIBJCBEMYXUBLFJRAUOBRV43K2VY\",\"WARC-Block-Digest\":\"sha1:4WEDQNGM64AF3VECRIMTAY3PQGDHFPXA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038879374.66_warc_CC-MAIN-20210419111510-20210419141510-00607.warc.gz\"}"}
https://www.esc-history.com/details.php?key=5332
[ "  Eurovision Song Contest : Denmark 2008 Simon Mathew All Night Long\n\n# All Night Long | Denmark 2008", null, "", null, "Artist : Simon Mathew Title : All Night Long Place : 3 Points : 112 Language : English Text : Jacob LaunbjergSvend GudiksenNis Bøgvad Music : Jacob LaunbjergSvend GudiksenNis Bøgvad Startposition : 13\n\n", null, "Televoting", null, "Latvia 8", null, "Lithuania 8", null, "Portugal 8", null, "Switzerland 5", null, "Cyprus 5", null, "Malta 4", null, "Ukraine 4", null, "Albania 4", null, "Belarus 4", null, "France 4", null, "Georgia 3", null, "FYR Macedonia 3", null, "Croatia 3", null, "Bulgaria 2", null, "Sweden 12", null, "Iceland 12", null, "Hungary 12", null, "Czech Republic 10", null, "United Kingdom 1\n\n \n\n=1957= =1958= =1959= =1960= =1961= =1962= =1963= =1964= =1965= =1966= =1978= =1979= =1980= =1981= =1982= =1983= =1984= =1985= =1986= =1987= =1988= =1989= =1990= =1991= =1992= =1993= =1995= =1997= =1999= =2000= =2001= =2002= =2005= =2006= =2008= =2009= =2010= =2011= =2012= =2013= =2014= =2017= =2018= =2019=\nTotal Entries : 44\n\n\n=1996= =2004= =2005= =2007= =2008= =2009= =2010= =2011= =2012= =2013= =2015= =2016= =2017= =2018= =2019= =2020= =2021= =2022= =2023=\nTotal Entries : 19" ]
[ null, "https://www.esc-history.com/images/flags/DK.jpg", null, "https://www.esc-history.com/img_artists/5332.jpg", null, "https://www.esc-history.com/images/flags/DK.jpg", null, "https://www.esc-history.com/images/flags/LV.jpg", null, "https://www.esc-history.com/images/flags/LT.jpg", null, "https://www.esc-history.com/images/flags/PT.jpg", null, "https://www.esc-history.com/images/flags/CH.jpg", null, "https://www.esc-history.com/images/flags/CY.jpg", null, "https://www.esc-history.com/images/flags/MT.jpg", null, "https://www.esc-history.com/images/flags/UA.jpg", null, "https://www.esc-history.com/images/flags/AL.jpg", null, "https://www.esc-history.com/images/flags/BY.jpg", null, "https://www.esc-history.com/images/flags/FR.jpg", null, "https://www.esc-history.com/images/flags/GE.jpg", null, "https://www.esc-history.com/images/flags/MK.jpg", null, "https://www.esc-history.com/images/flags/HR.jpg", null, "https://www.esc-history.com/images/flags/BG.jpg", null, "https://www.esc-history.com/images/flags/SE.jpg", null, "https://www.esc-history.com/images/flags/IS.jpg", null, "https://www.esc-history.com/images/flags/HU.jpg", null, "https://www.esc-history.com/images/flags/CZ.jpg", null, "https://www.esc-history.com/images/flags/GB.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5384328,"math_prob":0.99607486,"size":910,"snap":"2023-14-2023-23","text_gpt3_token_len":398,"char_repetition_ratio":0.19205298,"word_repetition_ratio":0.06849315,"special_character_ratio":0.62857145,"punctuation_ratio":0.06666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982227,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,10,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T02:59:41Z\",\"WARC-Record-ID\":\"<urn:uuid:0cb9817a-b75e-4ae3-9f4d-096eeac7f088>\",\"Content-Length\":\"15341\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f32a3b76-3800-4b10-9443-04ea432b178c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cd89386-181f-41f9-9f22-c4ba5f8c466b>\",\"WARC-IP-Address\":\"213.246.62.213\",\"WARC-Target-URI\":\"https://www.esc-history.com/details.php?key=5332\",\"WARC-Payload-Digest\":\"sha1:J66W3762P7SCBTZ3GLLOAAYFECZBIY5B\",\"WARC-Block-Digest\":\"sha1:CCKXYIXKLFZGZHARYD3FVPCJ2UTXINNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949533.16_warc_CC-MAIN-20230331020535-20230331050535-00405.warc.gz\"}"}
https://answers.everydaycalculation.com/divide-fractions/5-98-divided-by-6-45
[ "Solutions by everydaycalculation.com\n\n## Divide 5/98 with 6/45\n\n5/98 ÷ 6/45 is 75/196.\n\n#### Steps for dividing fractions\n\n1. Find the reciprocal of the divisor\nReciprocal of 6/45: 45/6\n2. Now, multiply it with the dividend\nSo, 5/98 ÷ 6/45 = 5/98 × 45/6\n3. = 5 × 45/98 × 6 = 225/588\n4. After reducing the fraction, the answer is 75/196\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7397144,"math_prob":0.95228136,"size":317,"snap":"2022-40-2023-06","text_gpt3_token_len":139,"char_repetition_ratio":0.19488817,"word_repetition_ratio":0.0,"special_character_ratio":0.48264983,"punctuation_ratio":0.065789476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686116,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T07:39:53Z\",\"WARC-Record-ID\":\"<urn:uuid:c8402c76-7dab-43f8-a8f0-9491dc413a24>\",\"Content-Length\":\"7517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1de271a5-2bcb-4be7-9415-a5ccb03ec5fe>\",\"WARC-Concurrent-To\":\"<urn:uuid:54ac0070-bc16-463b-9b10-83c9db23493d>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/divide-fractions/5-98-divided-by-6-45\",\"WARC-Payload-Digest\":\"sha1:YVI3GBX56ICKR2HGHG7WLP35LFZIEXCI\",\"WARC-Block-Digest\":\"sha1:JHTUTCKYIDRSKJP33Q7Z4YD66VJDGXRC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499524.28_warc_CC-MAIN-20230128054815-20230128084815-00059.warc.gz\"}"}
https://dev.to/thormeier/the-mandelbrot-set-demystified-building-a-visualizer-1nga
[ "## DEV Community 👩‍💻👨‍💻 is a community of 919,000 amazing developers\n\nWe're a place where coders share, stay up-to-date and grow their careers.", null, "Pascal Thormeier\n\nPosted on • Updated on\n\n# Let's build a Mandelbrot set visualizer\n\nWriting about the Levenshtein edit distance was a lot of fun. I got to test out my whiteboard desk and share my knowledge. So I asked which algorithm I should tackle next.\n\nAs suggested by Raphi on Twitter, in this post, I'll explain roughly what the Mandelbrot set is and how to build a Mandelbrot set visualizer in JavaScript with canvas.\n\n# The Mandelbrot what?\n\nThe Mandelbrot set. As defined/discovered by Benoît Mandelbrot in 1980. It's a fractal, roughly meaning that it's an infinitely complex structure that is self-similar. It looks like this when visualized:", null, "(Created by Prateek Rungta, found on Flickr, released under CC BY 2.0)\n\n# How is the Mandelbrot set defined?\n\nThe Mandelbrot set is the set of complex numbers $c$ for which this iteration does not diverge:\n\n$z_0 = 0 \\newline z_{n+1} = z^{2}_{n} + c$\n\nFor those unfamiliar with calculus or complex numbers, I'll take a quick detour of what \"diverging\" and \"complex numbers\" mean:\n\n## Converging and diverging functions\n\nCalculus is all about change. When we talk about if a function (or a series or an infinite sum) approaches a certain value and gets almost to it, but never quite reaches it, we talk about a converging function.\n\nWhen a function diverges, it either blows off to infinity or negative infinity. The two graphs in the picture show both - A converging function and a diverging one:", null, "(A third kind of function would be alternating ones. Those oscillate between values but don't stay there.)\n\nSo what does that mean for the definition of the Mandelbrot set? It means that the value for $z_{n+1}$ does not blow up to infinity or negative infinity.\n\n## Complex numbers\n\nAll numbers (0, 1, -13, Pi, e, you name it) can be arranged in a number line:", null, "Any number is somewhere on this line. The number line is one-dimensional. Complex numbers introduce a second dimension. This new dimension is called the \"imaginary part\" of the complex number, whereas the usual number line is called the \"real part\" of this number. A complex number thus looks like this:\n\n$a+bi$\n\n$a$ is the real part, $bi$ the imaginary part with the imaginary unit $i$ . Examples for complex numbers would be $12+6i$ or $-3-87i$ . The number line thus evolves into a number plane and would look like this (with the example of $2+1i$ ):", null, "Complex numbers come with a set of special calculation rules. We need to know how addition and multiplication work. Before we dive a little too deep into the why, we just look up the rules and roll with them:\n\n$Multiplication: (a+bi)*(c+di)=(ac-bd)+(ad+bc)i \\newline Addition: (a+bi)+(c+di)=(a+c)+(b+d)i$\n\nAnother side note: All numbers are by default complex numbers. If they're right on the number line, they're represented with an imaginary part of 0. For example $5$ is actually $5+0i$\n\nSo complex numbers can be displayed on an X/Y plane. 
For each number $X + Yi$ we can say if it belongs to the Mandelbrot set or not.\n\nThe signature pattern emerges when we give those points on the complex number plane that belong to the Mandelbrot set a different color.\n\nWith this knowledge we can get going!\n\n# Let's implement this\n\nWe start with a representation of complex numbers.\n\nclass Complex {\nconstructor(real, imaginary) {\nthis.real = real\nthis.imaginary = imaginary\n}\n\nplus(other) {\nreturn new Complex(\nthis.real + other.real,\nthis.imaginary + other.imaginary\n)\n}\n\ntimes(other) {\nreturn new Complex(\n(this.real * other.real - this.imaginary * other.imaginary),\n(this.real * other.imaginary + other.real * this.imaginary)\n)\n}\n}\n\n\nThe rules for multiplication and addition are now already in there. These complex number objects can now be used like this:\n\nconst x = new Complex(1, 2) // (1 + 2i)\nconst y = new Complex(3, -3) // (3 - 3i)\n\nconsole.log(x.plus(y), x.times(y))\n\n\nAwesome. Now let's implement the function that checks if a given complex number converges with the given iteration:\n\n/**\n* Calculates n+1\n*/\nconst iterate = (n, c) => n.times(n).plus(c)\n\n/**\n* Checks if a complex number c diverges according to the Mandelbrot definition.\n*/\nconst doesDiverge = (c, maxIter) => {\nlet n = new Complex(0, 0)\nfor (let i = 0; i < maxIter; i++) {\nn = iterate(n, c)\n}\n\n// If the iteration diverges, these values will be NaN quite fast. Around 50 iterations is usually needed.\nreturn isNaN(n.real) || isNaN(n.imaginary)\n}\n\n\nWe can now ask this function to tell us if a complex number $c$ is within the Mandelbrot set:\n\n!doesDiverge(new Complex(1, 1), 100) // false\n!doesDiverge(new Complex(0, 0), 100) // true\n\n\n# Building the visualization\n\nSo far so good, we're almost there. Now we can visualize the Mandelbrot set. We'll add a click zoom option as well. For this, we'll use a canvas and some more elements:\n\n<!-- Used to control the zoom level etc. -->\n<div class=\"controls\">\n<div>\nZoom size:\n<input type=\"range\" min=\"2\" max=\"50\" value=\"10\" id=\"zoomsize\">\n</div>\n\n<input type=\"button\" id=\"reset\" value=\"Reset\">\n</div>\n\n<!-- A little box that shows what part of the Mandelbrot set will be shown on click -->\n<div class=\"selector\"></div>\n\n<!-- The canvas we'll render the Mandelbrot set on -->\n<canvas class=\"canvas\" />\n\n\nAnd style these a little bit:\n\nhtml, body {\nmargin: 0;\nheight: 100%;\n}\n.controls {\nposition: fixed;\nbackground-color: #f0f0f0;\nz-index: 1000;\n}\n.selector {\nborder: 2px solid #000;\nopacity: .2;\nposition: fixed;\nz-index: 999;\ntransform: translate(-50%, -50%);\npointer-events: none;\n}\n.canvas {\nwidth: 100%;\nheight: 100vh;\n}\n\n\nSo far so good. Let's head to the JS part. Since it's relatively independent, we'll start with the selector box:\n\n// Size of the zoom compared to current screen size\n// i.e. 
1/10th of the screen's width and height.\nlet zoomsize = 10\n\n/**\n* Makes the selector follow the mouse\n*/\ndocument.addEventListener('mousemove', event => {\nconst selector = document.querySelector('.selector')\nselector.style.top = ${event.clientY}px selector.style.left = ${event.clientX}px\nselector.style.width = ${window.innerWidth / zoomsize}px selector.style.height = ${window.innerHeight / zoomsize}px\n})\n\n/**\n* Zoom size adjustment.\n*/\n'change',\nevent => {\nzoomsize = parseInt(event.target.value)\n}\n)\n\n\nNow the user has a clear indication which part of the Mandelbrot set they're going to see when they click.\n\nThe plan is now as follows: We define which part of the complex plane is visible (coordinates) and map this to actual pixels. For this we need an initial state and a reset button:\n\n// X coordinate\nconst realInitial = {\nfrom: -2,\nto: 2,\n}\n\n// Y coordinate, keep the aspect ratio\nconst imagInitial = {\nfrom: realInitial.from / window.innerWidth * window.innerHeight,\nto: realInitial.to / window.innerWidth * window.innerHeight,\n}\n\n// Ranging from negative to positive - which part of the plane is visible right now?\nlet real = realInitial\nlet imag = imagInitial\n\ndocument.querySelector('#reset').addEventListener('click', () => {\nreal = realInitial\nimag = imagInitial\n\n// TODO: Trigger redraw.\n})\n\n\nNice. Now we create a function that actually renders the Mandelbrot set pixel by pixel. I won't got into detail about the coordinate system juggling, but the main idea is to determine how much a number on X and Y coordinate changes by each pixel. For example: When there's a 50 by 100 pixel grid that represents a 5 by 10 number grid, each pixel is $0.1$ .\n\n/**\n* Draws the Mandelbrot set.\n*/\nconst drawMandelbrotSet = (realFrom, realTo, imagFrom, imagTo) => {\nconst canvas = document.querySelector('canvas')\nconst ctx = canvas.getContext('2d')\n\nconst winWidth = window.innerWidth\nconst winHeight = window.innerHeight\n\n// Reset the canvas\ncanvas.width = winWidth\ncanvas.height = winHeight\nctx.clearRect(0, 0, winWidth, winHeight)\n\n// Determine how big a change in number a single pixel is\nconst stepSizeReal = (realTo - realFrom) / winWidth\nconst stepSizeImaginary = (imagTo - imagFrom) / winHeight\n\n// Loop through every pixel of the complex plane that is currently visible\nfor (let x = realFrom; x <= realTo; x += stepSizeReal) {\nfor (let y = imagFrom; y <= imagTo; y += stepSizeImaginary) {\n// Determine if this coordinate is part of the Mandelbrot set.\nconst c = new Complex(x, y)\nconst isInMandelbrotSet = !doesDiverge(c, 50)\n\nconst r = isInMandelbrotSet ? 67 : 104\nconst g = isInMandelbrotSet ? 65 : 211\nconst b = isInMandelbrotSet ? 
144 : 145\n\n// Cast the coordinates on the complex plane back to actual pixel coordinates\nconst screenX = (x - realFrom) / (realTo - realFrom) * winWidth\nconst screenY = (y - imagFrom) / (imagTo - imagFrom) * winHeight\n\n// Draw a single pixel\nctx.fillStyle = rgb(${r},${g}, \\${b})\nctx.fillRect(screenX, screenY, 1, 1)\n}\n}\n}\n\n\nNow this should already render the Mandelbrot set as we know it:\n\ndrawMandelbrotSet(real.from, real.to, imag.from, imag.to)\n\n\nLast but not least, a click on the canvas should now set the real and imag according to the selected section:\n\n/**\n* Perform a zoom\n*/\ndocument.querySelector('canvas').addEventListener('click', event => {\nconst winWidth = window.innerWidth\nconst winHeight = window.innerHeight\n\nconst selectedWidth = winWidth / zoomsize\nconst selectedHeight = winHeight / zoomsize\n\nconst startX = (event.clientX - (selectedWidth / 2)) / winWidth\nconst endX = (event.clientX + (selectedWidth / 2)) / winWidth\nconst startY = (event.clientY - (selectedHeight / 2)) / winHeight\nconst endY = (event.clientY + (selectedHeight / 2)) / winHeight\n\nreal = {\nfrom: ((real.to - real.from) * startX) + real.from,\nto: ((real.to - real.from) * endX) + real.from,\n}\n\nimag = {\nfrom: ((imag.to - imag.from) * startY) + imag.from,\nto: ((imag.to - imag.from) * endY) + imag.from,\n}\n\ndrawMandelbrotSet(real.from, real.to, imag.from, imag.to)\n})\n\n\nThe finished result looks like this (Click \"Rerun\" if it looks off or is blank - happens because iframes, I guess):\n\nHave fun exploring this infinitely complex structure!\n\n# Some screenshots\n\nHere's a few screenshots of the visualisation:", null, "", null, "", null, "", null, "Can you guess where the last one is located? Leave your guess in the comments!\n\nI write tech articles in my free time. If you enjoyed reading this post, consider buying me a coffee!", null, "" ]
[ null, "https://res.cloudinary.com/practicaldev/image/fetch/s--E4H1DsCr--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/i/4fpsw7o422qzhcvwwqcw.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--bbQnDmDR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oqwb9krh7z0y7hrqr3z0.jpg", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--ZzLL9XWe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/qgvg6v5o3ltimva2dcnd.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--vvNckcPY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/quz46jibvp51jdhyaguq.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--FY25iFey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/s404uo4nk4gp7h3uvy57.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--0b7ZuBXi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/se0dn549e3tmsf2a01u8.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--SBnMIzKh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/860paru6gx8nq5messjo.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--vEw13_wD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/os2qkdzjm5afgjvcq9fs.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--3j2azE_y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2823firnnu3okciuckvy.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--kc4mYYLu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x55hp6jopwyy161d8e2u.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75628144,"math_prob":0.9740474,"size":9916,"snap":"2022-40-2023-06","text_gpt3_token_len":2531,"char_repetition_ratio":0.135795,"word_repetition_ratio":0.011214953,"special_character_ratio":0.26532876,"punctuation_ratio":0.16648649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98699856,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T12:40:56Z\",\"WARC-Record-ID\":\"<urn:uuid:55e10e44-64b2-47f5-bf1a-ab721036fd3f>\",\"Content-Length\":\"168704\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0927a70e-1580-4bf1-b6d8-7e8efbee2b33>\",\"WARC-Concurrent-To\":\"<urn:uuid:31e5c53b-f503-41fb-a9cb-5d157049000f>\",\"WARC-IP-Address\":\"151.101.66.217\",\"WARC-Target-URI\":\"https://dev.to/thormeier/the-mandelbrot-set-demystified-building-a-visualizer-1nga\",\"WARC-Payload-Digest\":\"sha1:65Y3WSCAZVEP3RJDCNNT7Y7AV4HSLW2D\",\"WARC-Block-Digest\":\"sha1:ZPMO5LA7XNRAKME7MS74I75HKXECWHUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337415.12_warc_CC-MAIN-20221003101805-20221003131805-00148.warc.gz\"}"}
https://testbook.com/objective-questions/mcq-on-realization-of-logic-gates--5eea6a0d39140f30f369e25c
[ "# The Boolean function Y = AB +CD is to be realized using only 2-input NAND gates.The minimum number of gates required is\n\n1. 2\n\n2. 3\n\n3. 4\n\n4. 5\n\nOption 2 :\n\n3\n\n## Realization of Logic Gates MCQ Question 1 Detailed Solution", null, "$$\\begin{array}{l} y = AB + CD\\\\ = \\overline {\\overline {AB + CD} } \\\\ = \\overline {\\overline {AB} .\\overline {CD} } \\end{array}$$\n\nOnly 3 NAND gates are required\n\n# Consider the following gate network:", null, "Which one of the following gates is redundant?\n\n1. Gate No. 1\n2. Gate No. 2\n3. Gate No. 3\n4. Gate No. 4\n\nOption 2 : Gate No. 2\n\n## Realization of Logic Gates MCQ Question 2 Detailed Solution", null, "f = x̅ y z + w̅  x + w̅\n\n= w̅  (1 + x) + x̅ y z\n\n= w̅ + x̅ y z\n\nAs the output of Gate 2 is not coming in the final simplified solution.\n\nGate 2 Is redundant.\n\n# What is the minimum number of two-input NAND gates used to perform the function of two input OR gate?\n\n1. One\n2. Two\n3. Three\n4. Four\n\nOption 3 : Three\n\n## Realization of Logic Gates MCQ Question 3 Detailed Solution\n\nThe output Y of OR gate with inputs A and B is given as:\n\nY = A + B\n\nThis can be written as:\n\n$$Y = \\overline{\\overline {A + B}}$$\n\nApplying De Morgan's law, we get:\n\n$$Y = \\overline {\\overline {A }. \\overline {B}}$$", null, "Hence implementing an OR gate using the NAND gate requires three of them.\n\n# Consider the logic circuit with input signal TEST shown in the figure. All gates in the figure shown have identical non-zero delay. The signal TEST which was at logic LOW is switched to logic HIGH and maintained at logic HIGH. The output", null, "1. stays HIGH throughout\n2. stays LOW throughout\n3. pulses from LOW to HIGH to LOW\n4. pulses from HIGH to LOW to HIGH\n\nOption 4 : pulses from HIGH to LOW to HIGH\n\n## Realization of Logic Gates MCQ Question 4 Detailed Solution\n\nIn the given logic circuit, every gate has an identical non-zero delay.\n\nLet the delay is t0 msec.", null, "Initially test signal was at logic LOW.\n\nx = 0, y = 1\n\n$$Output\\;\\left( f \\right) = \\overline {x.y} = 1$$\n\nInitially output was HIGH\n\nLet assume test signal is switched to logic high at t = 0 m sec\n\nAt, t = 0 m sec, x = 1\n\nAs there are three NOT gates, the delay of signal to reach y input is 3t0 msec.\n\nAt, t = 3t msec, y = 0 remains as before\n\n$$f = \\overline {xy} = 1$$\n\nNAND gate also has delay of t0 msec.\n\nAt, t = 4t0 msec, f becomes high.\n\nAt, t = 0 msec, x = 1, y = 1\n\nAt, t = 100 msec,\n\n$$f = \\overline {xy} = 0$$\n\n x y F Initial 0 1 1 t = 0 1 1 1 t = t0 1 1 0 t = 3t0 1 0 0 t = 4t0 1 0 1\n\nOutput pulses from HIGH → LOW → HIGH\n\nAt, 0 < t < t0, f = HIGH\n\nAt, t0 < t < 4t0, f = LOW\n\nAt, t = 4t0, f = HIGH\n\n# What is the minimum number of two-input NAND gates used to perform the function of two input OR gate?\n\n1. One\n2. Two\n3. Three\n4. Four\n5. Five\n\nOption 3 : Three\n\n## Realization of Logic Gates MCQ Question 5 Detailed Solution\n\nThe output Y of OR gate with inputs A and B is given as:\n\nY = A + B\n\nThis can be written as:\n\n$$Y = \\overline{\\overline {A + B}}$$\n\nApplying De Morgan's law, we get:\n\n$$Y = \\overline {\\overline {A }. \\overline {B}}$$", null, "Hence implementing an OR gate using the NAND gate requires three of them.\n\n# Which of the following are universal gates?1. AND2. NAND3. OR4. NOR5. NOT\n\n1. 1, 2, 3, 4 and 5\n2. 1, 3 and 4 only\n3. 2, 3 and 5 only\n4. 
2 and 4 only\n\nOption 4 : 2 and 4 only\n\n## Realization of Logic Gates MCQ Question 6 Detailed Solution\n\nConcept:\n\n• A Universal Gate is a gate by which every other gate can be realized.\n• AND, OR, NOT, etc. are basic gates.\n• NAND, NOR, etc. are the universal gate.\n• That means we can implement any logic function using NAND and NOR gates without the need of AND, OR, or NOT gates.\n\nExample:\n\nNOT, AND and OR gate realization using NAND gate is as shown:", null, "# The output equivalent circuit of following circuit is", null, "1. INVERTER\n2. AND\n3. OR\n4. NOR\n\nOption 3 : OR\n\n## Realization of Logic Gates MCQ Question 7 Detailed Solution\n\nConcept:\n\nDe Morgans law:\n\nThe complement of the union of two sets is the intersection of their complements.\n\n(x + y)’ = x’. y’\n\nThe complement of the intersection of two sets is the union of their complements.\n\n(x.y)’ = x’ + y’\n\nAnalysis:", null, "$$\\overline {\\bar A\\bar B} = \\overline {\\bar A} + \\overline {\\bar B} = A + B$$\n\n∴ OR gate\n\n# The logic function implemented by the following circuit can be represented as:", null, "1. Y = [(AB) + (B + C) + (B + D)]’\n2. Y = [(AB) + (A + C) + (B + D)]’\n3. Y = [(AB)’ + (A + C)’ + (AB + D)’]\n4. Y = [(AB) + (A + C) + (AD)]’\n\nOption 4 : Y = [(AB) + (A + C) + (AD)]’\n\n## Realization of Logic Gates MCQ Question 8 Detailed Solution\n\nConcept:", null, "", null, "Analysis:\n\nThe output function will be:\n\n$$Y=\\overline{[AB+(A+C)+A(B+D)]}$$\n\n$$Y=\\overline{[AB+(A+C)+AB+AD]}$$\n\n$$Y=\\overline{[AB+(A+C)+AD]}$$\n\n# In an all NOR gate realization of a combinational circuit all EVEN and ODD level gates behave like\n\n1. OR and AND\n2. AND and OR\n3. OR and NOT\n4. NOR and AND\n\nOption 2 : AND and OR\n\n## Realization of Logic Gates MCQ Question 9 Detailed Solution\n\nIn an all NOR gate realization of a combinational circuit,\n\n• All Even level gates behave like AND gate\n• All Odd level gates behave like OR gate\n\nIn an all NAND gate realization of a combinational circuit,\n\n• All Even level gates behave like OR gate\n• All Odd level gates behave like AND gate\n\n# Identify the gate realized by the MOSFET circuit shown below", null, "1. AND gate\n2. NAND gate\n3. OR gate\n4. NOR gate\n\nOption 2 : NAND gate\n\n## Realization of Logic Gates MCQ Question 10 Detailed Solution\n\nConcept:\n\nCMOS logic circuit is an extension of a CMOS inverter. It consists of two network transistors, a pull-down network (PDN) constructed of an n-MOS and Pull-up Network (PUN) constructed of P-MOS.", null, "PDN: Since nMOS conducts when the signal gate is high, PDN is activated when the inputs are high.\n\nPUN: It comprises PMOS and conducts when the input signal gate is low.\n\nThe PDN and PUN are connected in parallel to form OR logic function and they are connected in series to form AND logic as shown:", null, "", null, "", null, "", null, "Application:\n\nFor the circuit given:", null, "The output expression will be:\n\n$$Y_{out}=\\overline{A.B}$$\n\n# In the circuit shown in the figure, if C = 0, the expression for Y is", null, "1. Y = A B̅ + A̅ B\n2. Y = A + B\n3. Y = A̅ + B̅\n4. 
Y = A B\n\nOption 1 : Y = A B̅ + A̅ B\n\n## Realization of Logic Gates MCQ Question 11 Detailed Solution\n\nThe given circuit is redrawn as:", null, "The output will be:\n\n$$Y = \\overline {\\left( {\\overline {A + B} } \\right) + AB}$$\n\n$$Y = \\overline {\\bar A\\bar B + AB}$$\n\n$$Y = \\overline {A \\odot B}$$\n\nY = A B\n\nY = A̅ B + A B̅", null, "Additional Information\n\nAll Boolean algebra laws are shown below\n\n Name AND Form OR Form Identity law 1.A = A 0 + A = A Null Law 0.A = 0 1 + A = 1 Idempotent Law A. A = A A + A = A Inverse Law AA’ = 0 A + A’ = 1 Commutative Law AB = BA A + B = B + A Associative Law (AB)C (A + B) + C = A + (B + C) Distributive Law A + BC = (A + B) (A + C) A (B + C) = AB + AC Absorption Law A (A + B) = A A + AB = A De Morgan’s Law (AB)’ = A’ + B’ (A + B)’ = A’B’\n\n# Boolean expression for the output of the logic circuit shown in the figure is", null, "1. $$Y = AB + AB + C$$\n2. $$Y = \\overline A \\overline B + AB + \\overline C$$\n3. $$Y\\, = \\,A\\overline B \\, + \\,\\overline A B\\, + C$$\n4. $$Y\\, = \\,AB\\, + \\,\\overline A B\\, + C$$\n\nOption 2 : $$Y = \\overline A \\overline B + AB + \\overline C$$\n\n## Realization of Logic Gates MCQ Question 12 Detailed Solution\n\nCalculation:\n\nThe output of the EXOR Gate is given as:\n\nX = A B̅ + A̅ B\n\nNow, the output of the NAND Gate is:\n\n$$Y = \\overline{XC}$$\n\n$$Y = \\overline{(A\\bar B+ \\bar A B )C}$$\n\n$$Y = \\overline { A \\bar B + \\bar A B}+\\bar C$$\n\n$$Y = AB + \\bar A \\bar B+\\bar C$$\n\n# The Boolean function Y = AB + CD is to be realized using only 2 input NAND gates. The minimum number of gates required is\n\n1. 2\n2. 3\n3. 4\n4. 5\n\nOption 2 : 3\n\n## Realization of Logic Gates MCQ Question 13 Detailed Solution\n\nConcept:\n\nDe Morgan’s law states that:\n\n$$\\overline {\\left( {{A_1}\\;.\\;{A_2} \\ldots \\;{A_n}} \\right)} \\; = \\left( {\\overline {{A_1}} \\; + \\;\\overline {{A_2}} \\; + \\; \\ldots \\; + \\;\\overline {{A_n}} } \\right)\\;$$\n\n$$\\overline {\\left( {{A_1}\\; + \\;{A_2}\\; + \\; \\ldots \\; + \\;{A_n}} \\right)} \\; = \\left( {\\overline {{A_1}} \\;.\\;\\overline {{A_2}} \\;.\\;..\\;\\overline {{A_n}} } \\right)$$\n\nAnalysis:", null, "$$\\begin{array}{l} y = AB + CD\\\\ = \\overline {\\overline {AB + CD} } \\\\ = \\overline {\\overline {AB} .\\overline {CD} } \\end{array}$$\n\nOnly 3 NAND gates are required\n\n# Assuming that only logic inputs X and Y are available and their complements X̅ and Y̅ are not available, the minimum number of two-input NAND gates required to implement X ⊕ Y would be\n\n1. 2\n2. 3\n3. 4\n4. 5\n\nOption 3 : 4\n\n## Realization of Logic Gates MCQ Question 14 Detailed Solution\n\nThe number of 2-input NAND gates required to implement a 2-input XOR gate is 4.", null, "Similarly, the number of 2-input NOR gates required to implement a 2-input XNOR gate is 4.", null, "Logic Gates Min. number of NOR Gate Min. number of NAND Gate NOT 1 1 AND 3 2 OR 2 3 EX-OR 5 4 EXNOR 4 5 NAND 4 1 NOR 1 4 Half-Adder 5 5 Half-Subtractor 5 5 Full-Adder 9 9 Full-Subtractor 9 9\n\n# For a 3-input logic circuit shown below, the output Z can be expressed as", null, "1. Q + R̅\n2. PQ̅ + R\n3. Q̅ + R\n4. 
P + Q̅ + R\n\nOption 3 : Q̅ + R\n\n## Realization of Logic Gates MCQ Question 15 Detailed Solution", null, "$$Z = \\overline {{\\overline{P ̅ Q}} .Q.\\overline {QR} }$$\n\n$$= P ̅ Q + ̅ Q + QR$$\n\n= Q̅ (1 + P) + QR\n\n= Q̅ + QR\n\n= Q̅(1 + R) + QR\n\n= Q̅ + Q̅R + QR\n\n= Q̅ + R (Q + Q̅)\n\n= Q̅ + R\n\n# The logic evaluated by the circuit at the output is", null, "1. XY̅ + YX̅\n2. $$\\left( {\\overline {X + Y} } \\right)XY$$\n3. X̅Y̅ + XY\n4. X̅Y + XY̅ + X + Y\n\nOption 1 : XY̅ + YX̅\n\n## Realization of Logic Gates MCQ Question 16 Detailed Solution", null, "$$f = X\\bar Y + \\bar XY$$\n\n# The A/O gates in which an additional variable or a combination of variables can be included in the logic operation are called\n\n1. AOI Gates\n2. Expandable Gates\n3. Variable Gates\n4. Scalable Gates\n\nOption 1 : AOI Gates\n\n## Realization of Logic Gates MCQ Question 17 Detailed Solution\n\nThe A/O gates in which an addition variable or a combination of variables can be included in the logic operations are called AOI gates.Common forms of complex logic gates are and-or-invert (AOI) and or-and-invert(OAI) gates, both of which implement sum-of-products/product-of-sums expressions.\n\nThe AOI (AND-OR-INVERT)  gate  enables the sum-of-products realization of a Boolean function in one logic stage.AND-OR-Invert (AOI) logic and AOI gates are two-level compound (or complex) logic functions constructed from the combination of one or more AND gates followed by a NOR gate.", null, "The complement of AOI Logic is OR-AND-Invert (OAI) logic where the OR gates precede a NAND gate.The OAI gate  enables the product-of-sums realization of a Boolean function in one logic stage.", null, "#", null, "The circuit shown above is to be used to implement the function $$Z = f\\left( {A,B} \\right) = \\overline A + B$$. The values of I and J are:\n\n1. I = 0 and J = B\n2. I = 1 and J = B\n3. I = B and J = 1\n4. I = B and J = 0\n\nOption 2 : I = 1 and J = B\n\n## Realization of Logic Gates MCQ Question 18 Detailed Solution\n\nZ = (A + I) (A̅ + J)\n\nNow,\n\nZ = (AJ + I A̅ + J)\n\nLet, I = 1\n\nZ = J (1 + A) + A̅\n\nJ = B\n\n# In NOR-NOR configuration, the minimum number of NOR gates needed to implement the switching function $$X + X\\overline Y + X\\overline Y Z$$ is:\n\n1. 5\n2. 3\n3. 2\n4. 0\n\nOption 4 : 0\n\n## Realization of Logic Gates MCQ Question 19 Detailed Solution\n\nConcept:\n\n1 + x = 1\n\n1 . x = x\n\n0 + x = x\n\n0 . x = 0\n\n{Where x = don’t carry; i.e. x = 0 / 1}\n\nCalculation:\n\nLet,\n\ny = x + xy̅ + xy̅z\n\n= x [1 + y̅ + y̅z]\n\n= x\n\n∴ Zero NOR gates required.\n\n# Directions: The item consists of two statements, one labelled as the ‘Assertion (A)’ and the other as ‘Reason (R)’.You are to examine these two statements carefully and select the answers to the item using the codes given below:Assertion (A): The TTL NAND gate in tristate output configuration can be used for a bus arrangement with more than one gate output connected to a common line.Reason (R): The tristate configuration has a control input, which can detach a logic level (0/1) from coming onto the bus line.\n\n1. Both A and R individually true and R is the correct explanation of A\n2. Both A and R are individually true but R is not the correct explanation of A\n3. A is true but R is false\n4. 
A is false but R is true\n\nOption 1 : Both A and R individually true and R is the correct explanation of A\n\n## Realization of Logic Gates MCQ Question 20 Detailed Solution\n\nAssertion:\n\nThe Transistor-Transistor Logic (TTL) family has heavy dependence on transistors to provide basic operation.\n\nThe figure shows an internal schematic of the basic two inputs TTL NAND gate.", null, "The basic NAND gate structure consists of three major sections:\n\n• The input transistor Q1 forms the multi-emitter input stage.\n• Transistor Q2 is a phase splitter.\n•  In the output stage, transistor Q3 sits above Q4, forms the totem pole arrangement.\n\nTristate logic gates have three possible output states, i.e. the logic ‘1’ state, the logic ‘0’ state, and a high-impedance state.\n\n• The high-impedance state is controlled by an external ENABLE input.\n• One of the main advantages of these gates is that their inputs and outputs can be connected in parallel to a common bus line.\n\nReason:\n\nThe figure shows the circuit configuration for the Tristate TTL inverter.", null, "The standard TTL NAND circuit is modified so as to act as an inverter with tristate logic. When enable input is active, the output changes the state and can be ‘0’ or ‘1’ depending upon input conditions.\n\nConclusionBoth the assertion and reason are true and the reason is the correct explanation of assertion." ]
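Several of the solutions above lean on De Morgan's law and on NAND being a universal gate. Both are easy to verify exhaustively with a small truth-table script; this sketch is an illustration and is not part of the original solutions:

```python
from itertools import product

def nand(x, y):
    return 1 - (x & y)

# Questions 1/13: Y = AB + CD realized with three 2-input NAND gates
for a, b, c, d in product((0, 1), repeat=4):
    assert ((a & b) | (c & d)) == nand(nand(a, b), nand(c, d))

# Questions 3/5: OR from three NAND gates, A + B = NAND(NAND(A,A), NAND(B,B))
for a, b in product((0, 1), repeat=2):
    assert (a | b) == nand(nand(a, a), nand(b, b))

print("Both NAND realizations match on every input combination.")
```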
[ null, "https://storage.googleapis.com/tb-img/production/21/07/F1_Ravi_Madhuri_31.07.2021_D1.png", null, "https://storage.googleapis.com/tb-img/production/20/12/F2_Shraddha_Neha_12.12.2020_D2.png", null, "https://storage.googleapis.com/tb-img/production/21/01/F1_Neha.B_30-12-20_Savita_D%201.png", null, "https://storage.googleapis.com/tb-img/production/20/09/F1_S.B_14.9.20_Pallavi_D7.png", null, "https://storage.googleapis.com/tb-img/production/19/06/GATE%20IN_2015_Official_Sunny_Madhu_Uday_Solution_images_Q9.PNG", null, "https://storage.googleapis.com/tb-img/production/19/06/GATE%20IN_2015_Official_Sunny_Madhu_Uday_Solution_images_Q9a.PNG", null, "https://storage.googleapis.com/tb-img/production/20/09/F1_S.B_14.9.20_Pallavi_D7.png", null, "https://storage.googleapis.com/tb-img/production/20/03/F1_S.B_Madhu_18.03.20_D5.png", null, "https://storage.googleapis.com/tb-img/production/20/12/F1_Shubham.B_02-111-20_Savita_D3.png", null, "https://storage.googleapis.com/tb-img/production/21/01/F2_Neha_2.1.21_Pallavi_D%201.png", null, "https://storage.googleapis.com/tb-img/production/20/09/F1_Shubham.B_07-09-2020_Savita_D%202.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D5.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D6.png", null, "https://storage.googleapis.com/tb-img/production/20/09/F1_S.B_Deepak_02.03.2020_D%2021.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D4.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D5.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D6.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D7.png", null, "https://storage.googleapis.com/tb-img/production/20/04/F1_S.B_Madhu_23.04.20_D8.png", null, "https://storage.googleapis.com/tb-img/production/20/09/F1_S.B_Deepak_02.03.2020_D%2021.png", null, "https://storage.googleapis.com/tb-img/production/20/08/F1_S.B_Madhu_25.07.20_D%206.png", null, "https://storage.googleapis.com/tb-img/production/20/12/F1_S.B_Madhu_25.07.20_D%2072.png", null, "https://cdn.testbook.com/resources/lms_creative_elements/additional-information-image.png", null, "https://storage.googleapis.com/tb-img/production/21/02/F1_Shubham.B_21-01-21_Savita_D13.png", null, "https://storage.googleapis.com/tb-img/production/21/03/F1_Shubham%20B_27.3.21_Pallavi_D7.png", null, "https://storage.googleapis.com/tb-img/production/16/06/Gate%20EC_2016%20paper%203_Images-Q18.PNG", null, "https://storage.googleapis.com/tb-img/production/19/08/26%20June_1.png", null, "https://storage.googleapis.com/tb-img/production/19/08/F1_U.B._N.J_3-08-2019_D%203.png", null, "https://storage.googleapis.com/tb-img/production/19/08/F1_U.B._N.J._26.08.2019_D%202.png", null, "https://storage.googleapis.com/tb-img/production/19/06/GATE%20IN_2015_Official_Sunny_Madhu_Uday_Solution_images_Q13.PNG", null, "https://storage.googleapis.com/tb-img/production/19/06/GATE%20IN_2015_Official_Sunny_Madhu_Uday_Solution_images_Q13a.PNG", null, "https://storage.googleapis.com/tb-img/production/21/04/F1_Neha%20B_29.4.21_Pallavi_D%201.png", null, "https://storage.googleapis.com/tb-img/production/21/04/F1_Neha%20B_29.4.21_Pallavi_D%202.png", null, "https://storage.googleapis.com/tb-img/production/21/02/F1_Neha.B_11-02-21_Savita_D7.png", null, "https://storage.googleapis.com/tb-img/production/21/04/F1_Shraddha_Neha_03.04.2021_D1.png", null, 
"https://storage.googleapis.com/tb-img/production/21/04/F1_Shraddha_Neha_03.04.2021_D2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8084988,"math_prob":0.99884623,"size":8932,"snap":"2021-43-2021-49","text_gpt3_token_len":2911,"char_repetition_ratio":0.15647401,"word_repetition_ratio":0.14991763,"special_character_ratio":0.33497536,"punctuation_ratio":0.10270569,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994166,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72],"im_url_duplicate_count":[null,1,null,1,null,1,null,3,null,2,null,2,null,3,null,8,null,2,null,2,null,1,null,4,null,4,null,2,null,3,null,4,null,4,null,3,null,3,null,2,null,1,null,1,null,null,null,1,null,1,null,2,null,null,null,1,null,1,null,2,null,2,null,1,null,1,null,1,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T05:42:40Z\",\"WARC-Record-ID\":\"<urn:uuid:a083c19b-15d9-4cb4-bf10-6c043d80a5ca>\",\"Content-Length\":\"375921\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:717e986c-0986-4053-a79b-b3b9b97b2b0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a96d044-252e-4bff-b153-c8ee2e8c46bc>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/objective-questions/mcq-on-realization-of-logic-gates--5eea6a0d39140f30f369e25c\",\"WARC-Payload-Digest\":\"sha1:HX6QQH3OORD4SNXGQRFOOHONIKXXOETV\",\"WARC-Block-Digest\":\"sha1:GK4F44PMZU7ZVEPORVFSJAKFJDNU5YO6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588102.27_warc_CC-MAIN-20211027053727-20211027083727-00481.warc.gz\"}"}
https://www.originlab.com/doc/Tutorials/Fitting-Special-NAG
[ "# 4.2.2.11 Fitting with NAG Special Function\n\n## Summary\n\nOrigin allows user to define an Origin C fitting function using NAG special functions. You can call NAG routine to evaluate the special function.\n\nMinimum Origin Version Required: Origin 8.0 SR6\n\n## What you will learn\n\nThis tutorial will show you how to:\n\n• Create fitting function using Fitting Function Organizer\n• Create fitting function using NAG special function\n\n## Example and Steps\n\nWe will fit the following model:", null, "$inorm= A* exp(-td/2.0/(t-t0)) * ( I0(td/2.0/(t-t0))+I1(td/2.0/(t-t0)) ) \\,$\n\nHere", null, "$A$,", null, "$td$ and", null, "$t0$ are the model parameters we want to obtain from the data fitting.", null, "$I0$ and", null, "$I1$ are the first kind of Modified Bessel function of order 0 and order 1, respectively. For current example, we use the sample data in the end of this tutorial. The fitting procedure can be outlined into the following steps:\n\nPress F9 to open the Fitting Function Organizer and then create a new Category named FittingWithNAGSpecialFunc. Define a new fitting function FittingWithBessel in the new category as follow:\n\n Function Name: FittingWithBessel Function Type: User-Defined Independent Variables: t Dependent Variables: inorm Parameter Names: A,t0,td Function Form: Origin C Function:\n\nClick the button (icon) beside the Function box to open the code builder and define and compile and save the fitting function as follows:\n\n#include <origin.h>\n\n// For example, if you want to fit with functions from the NAG library,\n#include <OC_nag8.h>\n\n// Add code here for other Origin C functions that you want to define in this file,\n// and access in your fitting function.\n\n// You can access C functions defined in other files, if those files are loaded and compiled\n// in your workspace, and the functions have been prototyped in a header file that you have\n// included above.\n\n// You can access NLSF object methods and properties directly in your function code.\n\n// For instance, if your parameter name is P1, you cannot use p1 in your function code.\n// When using fractions, remember that integer division such as 1/2 is equal to 0, and not 0.5\n// Use 0.5 or 1/2.0 to get the correct value.\n\n// section of the Origin Help file.\n\n//----------------------------------------------------------\n//\nvoid _nlsfFittingWithBessel(\n// Fit Parameter(s):\ndouble A, double t0, double td,\n// Independent Variable(s):\ndouble t,\n// Dependent Variable(s):\ndouble& inorm)\n{\n// Beginning of editable part\n//inorm= A* exp(-td/2.0/(t-t0)) * ( s18aec(td/2.0/(t-t0),NAGERR_DEFAULT)+s18afc(td/2.0/(t-t0),NAGERR_DEFAULT) );\n\nstatic NagError fail1;\nstatic NagError fail2;\ndouble dtemp = td/2.0/(t-t0);\ninorm= A* exp(-dtemp) * ( s18aec(dtemp,&fail1)+s18afc(dtemp,&fail2) );\nif(fail1.code !=NE_NOERROR)\nprintf(\"%s\\n\",fail1.message);\nif(fail2.code !=NE_NOERROR)\nprintf(\"%s\\n\",fail2.message);\n\n// End of editable part\n}\n\n### Simulate the Function\n\nAfter the function body is defined, you can click the Compile button in Code Builder to check syntax errors. And then click Return to Dialog button to go back Fitting Function Organizer dialog box. Now click the Save button to generate the .FDF file (Function definition file).\n\nOnce you have a .FDF file, you can click the Simulate button to simulate a curve, this will be very helpful to evaluate the initial values. 
In the simcurve dialog, enter some proper parameter values and X range, and see what the curve looks like in the Preview panel.

### Set the Initial Values for the Parameters

As it is a user-defined fitting function, you have to supply the initial guess values for the parameters before performing your fitting task for the data. You may do it by setting them manually in the Parameter tab in the Nonlinear Curve Fit dialog. For the sample data shown below, you can just set the initial values for the parameters A = 1, td = 1, t0 = 1. After the parameters are initialized, you can then do the fitting to obtain the fitting result, as shown to the right of the sample data.

## Sample Data

Copy the below sample data and use the Import Wizard to import the data from Clipboard, then do the fitting using the given initial values for the parameters: A = 1, td = 1, t0 = 1.

Sample Data Results
X Y", null, "2 0.7868954118
2.080808081 0.8133022141
2.161616162 0.8178216765
2.242424242 0.8427866729
2.323232323 0.8315815363
2.404040404 0.8484657180
2.565656566 0.8618233553
2.646464646 0.8745962570
2.727272727 0.8921620316
2.808080808 0.8687399759" ]
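As a side note, the same model is easy to evaluate outside Origin for a quick sanity check of the initial guesses. The short Python sketch below is only an illustration (it is not part of the Origin tutorial and uses SciPy's Bessel functions rather than the NAG library); the function name and the small data slice are chosen here for convenience.

```python
import numpy as np
from scipy.special import i0, i1  # modified Bessel functions of the first kind

def inorm(t, A, t0, td):
    # Same model as above: A * exp(-u) * (I0(u) + I1(u)), with u = td / (2 (t - t0))
    u = td / 2.0 / (t - t0)
    return A * np.exp(-u) * (i0(u) + i1(u))

t = np.array([2.0, 2.080808081, 2.161616162, 2.242424242])
y = np.array([0.7868954118, 0.8133022141, 0.8178216765, 0.8427866729])

# Evaluate at the suggested initial guesses A = 1, t0 = 1, td = 1 and compare
print(inorm(t, A=1.0, t0=1.0, td=1.0))
print(y)
```

With A = 1, t0 = 1, td = 1 the model already lands close to the measured values, which is consistent with the tutorial suggesting them as starting values.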
[ null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-ce5b4588b18ccc8d106cebea09971c3a.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-7fc56270e7a70fa81a5935b72eacbe29.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-626726e60bd1215f36719a308a25b798.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-809d4580aaed41565abc38d58f77f840.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-f83ba497477f1b653cb3af0bcd5bd7f9.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/math-a18c217c4f2a811afcaaf5052945e31b.png", null, "https://d2mvzyuse3lwjc.cloudfront.net/doc/en/Tutorial/images/Fitting_with_NAG_Special_Function/FittingWithBessel.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.684197,"math_prob":0.8676643,"size":4617,"snap":"2022-05-2022-21","text_gpt3_token_len":1224,"char_repetition_ratio":0.15152828,"word_repetition_ratio":0.021390375,"special_character_ratio":0.302794,"punctuation_ratio":0.14496036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9658381,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,9,null,9,null,9,null,9,null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T02:40:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a34ab9bf-d4e6-4584-b2f7-dbbc370d8b67>\",\"Content-Length\":\"155380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a76859b-9dd6-4a3f-a2d8-ab6019e6845f>\",\"WARC-Concurrent-To\":\"<urn:uuid:15a25317-262e-47ed-924f-86eb6517ea5d>\",\"WARC-IP-Address\":\"208.118.247.127\",\"WARC-Target-URI\":\"https://www.originlab.com/doc/Tutorials/Fitting-Special-NAG\",\"WARC-Payload-Digest\":\"sha1:WP2P2X3DGJYTJS7CJLR6TDSHXZHJLM2X\",\"WARC-Block-Digest\":\"sha1:Y2OU6T3E6ZCXABCWEVUWI4EC7G4HZWKK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662562410.53_warc_CC-MAIN-20220524014636-20220524044636-00549.warc.gz\"}"}
https://www.algebra-cheat.com/algebra-cheat-sheet/proportions/exponents-and-powers-7th-grade.html
[ "Try the Free Math Solver or Scroll down to Tutorials!\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\n### Our users:\n\nI was so proud when my son decided to take algebra honors, but I was disheartened when I realized that I could not help him with his homework. I had not taken algebra since high school, and simply did not remember how to complete some of the projects. Algebrator allowed us to go through each step together. Thank you for making a program that allows me to help my son!\nJoe Johnson, OH\n\nAbsolutely genius! Thanks!\nTommy Hobroken, WY\n\nI have been using Algebrator and it has helped a great deal. I find it very helpful as a check against my problems. I solve them manually and then use it to check my work.\nDon Woodward, ND\n\nThis software has really made my life easy as far as doing algebra homework is concerned.\nCharles B.,WI\n\nYou guys are GREAT!! It has been 20 years since I have even thought about Algebra, now with my daughter I want to be able to help her. The step-by-step approach is wonderful!!!\nM.D., Missouri\n\n### Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?\n\n#### Search phrases used on 2011-04-29:\n\n• ratio formula\n• solution of hungerford HIGHER algebra\n• introduction to algebra for 5th grade pdf\n• tricks tips elimination substitution graph\n• how to calculate two equations by TI-89\n• learn algebra online\n• cubes worksheets-area\n• How to use a graphic calculator TI-83 Plus parabolas\n• algebra expand simplify practice problems\n• simple formulas for GRE\n• cubed factor rules\n• KS3 printable literacy worksheets\n• algebraic difference of sums\n• proportions worksheet high school\n• solving simultaneous quadratic equations in MATLAB\n• USE OUR ONLINE GRAPHING CALCULATOR\n• cubed polynomial factoring\n• \"maple\" \"multivariable limits\n• QUADRATIC FORMULA STEP BY STEP\n• learning algebra online\n• algebra helper\n• Rational expression solver\n• dividing radical expressions with variables\n• algebra with pizzazz answer key\n• adding and subtracting like terms worksheet\n• Solving third root polynomials\n• algebra tiles worksheet\n• simplifing chemistry\n• ti-83 normpdf input\n• leaner equation\n• matlab & calculas\n• ti 89 pdf\n• help with factoring parabolas\n• rudin chapter 7\n• elementary algebra examples\n• differential calculator partial\n• square difference\n• adding and subtracting negative integers variables\n• equations\n• checking for square calculator\n• Scale Factor in Algebra\n• algebra 1 mcdougal littell worksheets\n• homework cheats\n• adding and subtracting integers problems\n• dividing integers\n• kumon test\n• Surds explanations AS level\n• parabola shift\n• simplify square roots calculator\n• graphing worksheet for Vertex and intercept forms of quadratic equations\n• how to use ode45 to solve second order\n• free aptitude model question paper\n• algebra trivia\n• example simultaneous second 
order differential equation\n• cube root chart\n• EXAMPLE OF WORKING OUT FOR DIVISION KS2\n• division math trivia\n• scale factor activities" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8663199,"math_prob":0.821327,"size":3762,"snap":"2021-43-2021-49","text_gpt3_token_len":872,"char_repetition_ratio":0.120808944,"word_repetition_ratio":0.0,"special_character_ratio":0.20839979,"punctuation_ratio":0.05479452,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961923,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T09:07:11Z\",\"WARC-Record-ID\":\"<urn:uuid:d9fcf86f-f2e4-4e5c-b3ae-b564e9e3b8ee>\",\"Content-Length\":\"92173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4a92ee7-244f-4663-b88c-12b65b7c8c95>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ea21c8a-63d2-40aa-9460-8ac6b540420a>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"https://www.algebra-cheat.com/algebra-cheat-sheet/proportions/exponents-and-powers-7th-grade.html\",\"WARC-Payload-Digest\":\"sha1:ZAU2IE5HVNP7TBCY65UE6RPYTTB35OSM\",\"WARC-Block-Digest\":\"sha1:LTHC4WTNMUNOUKYLUGAJIOP25NSSNAJA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363465.47_warc_CC-MAIN-20211208083545-20211208113545-00394.warc.gz\"}"}
https://samacheerkalvi.guru/samacheer-kalvi-12th-business-maths-solutions-chapter-2-ex-2-5/
[ "# Samacheer Kalvi 12th Business Maths Solutions Chapter 2 Integral Calculus I Ex 2.5\n\nStudents can download 12th Business Maths Chapter 2 Integral Calculus I Ex 2.5 Questions and Answers, Samacheer Kalvi 12th Business Maths Book Solutions Guide Pdf helps you to revise the complete Tamilnadu State Board New Syllabus and score more marks in your examinations.\n\n## Tamilnadu Samacheer Kalvi 12th Business Maths Solutions Chapter 2 Integral Calculus I Ex 2.5\n\nIntegrate the following with respect to x.\n\nQuestion 1.\nx e-x\nSolution:", null, "Question 2.\nx3 e3x\nSolution:", null, "Question 3.\nlog x\nSolution:", null, "Question 4.\nx log x\nSolution:", null, "Question 5.\nxn log x\nSolution:", null, "Question 6.\n$$\\boldsymbol{x}^{\\boldsymbol{5}} \\boldsymbol{e}^{\\boldsymbol{x}^{2}}$$\nSolution:", null, "", null, "", null, "", null, "" ]
[ null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q1.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q2.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q3.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q4.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q5.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q6.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q6.1.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q6.2.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2020/06/Samacheer-Kalvi-12th-Business-Maths-Solutions-Chapter-2-Integral-Calculus-I-Ex-2.5-Q6.3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6323293,"math_prob":0.86912215,"size":649,"snap":"2023-40-2023-50","text_gpt3_token_len":185,"char_repetition_ratio":0.1875969,"word_repetition_ratio":0.08695652,"special_character_ratio":0.24653313,"punctuation_ratio":0.13821138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96168643,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T07:41:34Z\",\"WARC-Record-ID\":\"<urn:uuid:81088af3-77c9-4742-929d-8c42461d00c2>\",\"Content-Length\":\"141286\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a29a32f7-bc8f-4b13-8aa2-bb0fa127700e>\",\"WARC-Concurrent-To\":\"<urn:uuid:89d812d2-59f4-4c71-b550-4de8c3ccae86>\",\"WARC-IP-Address\":\"104.26.4.105\",\"WARC-Target-URI\":\"https://samacheerkalvi.guru/samacheer-kalvi-12th-business-maths-solutions-chapter-2-ex-2-5/\",\"WARC-Payload-Digest\":\"sha1:5NN6XVLQVG44TI4H6HVJD75CG2XPTKIN\",\"WARC-Block-Digest\":\"sha1:WC2D7WZZAHPG35LDJ222WEH6C3GB6HOE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506339.10_warc_CC-MAIN-20230922070214-20230922100214-00102.warc.gz\"}"}
https://terpconnect.umd.edu/~toh/spectrum/AppendixL.html
[ "[Introduction]  [Signal arithmetic]  [Signals and noise]   [Smoothing]   [Differentiation]  [Peak Sharpening]  [Harmonic analysis]   [Fourier convolution]  [Fourier deconvolution]  [Fourier filter]  [Wavelets]   [Peak area measurement]  [Linear Least Squares]  [Multicomponent Spectroscopy]  [Iterative Curve Fitting]  [Hyperlinear quantitative absorption spectrophotometry] [Appendix and Case Studies]  [Peak Finding and Measurement]  [iPeak]   [iSignal]  [Peak Fitters]   [iFilter]  [iPower]  [List of downloadable software]  [Interactive tools]\n\n### AppendixL: Why measure peak area rather than peak height?\n\nThis appendix examines more closely the question of measuring peak area rather than peak height to reduce the effect of peak broadening, which commonly occurs in chromatography, for reasons that are discussed previously, and also in some forms of spectroscopy. Under what conditions the measurement of peak area might be better than peak height?\n\nThe Matlab/Octave script \"HeightVsArea.m\" simulates the measurement of a series of standard samples whose concentrations are given by the vector 'standards'. Each standard produces an isolated peak whose peak height is directly proportional to the corresponding value in 'standards' and whose underlying shape is a Gaussian with a constant peak position ('pos') and width ('wid'). To simulate the measurement of these samples under typical conditions, the script changes the shape of the peaks (by exponential broadening) and adds a variable baseline and random noise. You can control, by means of the variable definitions in the first few lines of the script, the peak beginning and end, the sampling rate 'deltaX' (increment between x values), the peak position and width ('pos' and 'wid'), the sequence of peak heights ('standards'), the baseline amplitude ('baseline') and its degree of variability ('vba'), the extent of shape change ('vbr'), and the amount of random noise added to the final signal ('noise').\n\nThe resulting peaks a", null, "re shown in Figure 1. The script prepares a series of \"calibration curves\" plotting the values of 'standard' against the measured peak heights or areas for each measurement method. The measurement methods include peak height in Figure 2, peak area in Figure 3, and curve fitting height and area in Figures 4 and 5, respectively. These plots should ideally have an intercept of zero and an R2 of 1.000, but the slope is greater for the peak area measurements because area has different units and is numerically greater than peak height. All the measurement methods are baseline corrected; that is, they include code that attempts to compensate for changes in the baseline (controlled by the variable 'baseline').\n\nWith the initial values of 'baseline', 'noise', 'vba', and 'vbr', you can clearly see the advantage of peak area measurements (figure 3) compared to peak height (figure 2). This is primarily due to the effect of the variability of peak shape broadening ('vbr') and to the averaging out of random noise in the computation of area.", null, "", null, "Figure 2                                                                           Figure 3\n\nIf you set 'baseline', 'noise', 'vba', and 'vbr' all to zero, you've simulated a perfect world in which all methods work perfectly.\n\nCurve fitting can measure both peak height and area; it is not even absolutely necessary to use an accurate peak shape model. 
Using a simple Gaussian model in this example works much better for peak area (Figure 5) than for peak height (Figure 4) but is not significantly better than a simple peak area measurement (Figure 3). The best results are obtained if an exponentially-broadened Gaussian model (shape 31 or 39) is used, using the code in line 30, but that computation takes longer. Moreover, if the measured peak overlaps another peak significantly, curve fitting both of those peaks together can give much more accurate results than other peak area measurement methods.

This page is part of "A Pragmatic Introduction to Signal Processing", created and maintained by Prof. Tom O'Haver, Department of Chemistry and Biochemistry, The University of Maryland at College Park. Comments, suggestions and questions should be directed to Prof. O'Haver at [email protected]. Updated July, 2022." ]
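The HeightVsArea.m script itself is not reproduced on this page. The fragment below is a rough Python sketch of the same experiment (Gaussian standards with a randomly varied width standing in for broadening, plus random noise), written here only to illustrate why the area calibration stays linear while the height calibration degrades; the variable names are chosen for this sketch and do not match the Matlab script.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 2000)
standards = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # relative concentrations
pos, wid, noise = 5.0, 0.5, 0.01

heights, areas = [], []
for c in standards:
    w = wid * (1.0 + 0.5 * rng.random())                    # broadening varies peak to peak
    y = c * (wid / w) * np.exp(-0.5 * ((x - pos) / w) ** 2)  # broadening conserves area, lowers height
    y += noise * rng.standard_normal(x.size)                 # random measurement noise
    heights.append(y.max())
    areas.append(np.sum(y) * (x[1] - x[0]))                  # simple numerical area

print(np.corrcoef(standards, heights)[0, 1])  # degraded by the width variability
print(np.corrcoef(standards, areas)[0, 1])    # stays essentially 1
```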
[ null, "https://terpconnect.umd.edu/~toh/spectrum/AppendixLfigure1.png", null, "https://terpconnect.umd.edu/~toh/spectrum/AppendixLfigure2.png", null, "https://terpconnect.umd.edu/~toh/spectrum/AppendixLfigure3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8992168,"math_prob":0.9329095,"size":4082,"snap":"2022-40-2023-06","text_gpt3_token_len":912,"char_repetition_ratio":0.13217263,"word_repetition_ratio":0.0,"special_character_ratio":0.20970112,"punctuation_ratio":0.08972504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96667403,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T19:59:35Z\",\"WARC-Record-ID\":\"<urn:uuid:e37ab94a-7802-4b84-a3fc-2227d6467645>\",\"Content-Length\":\"22549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40814b4b-8410-44c6-954d-a655e2898974>\",\"WARC-Concurrent-To\":\"<urn:uuid:82c3bc5b-f369-4a92-89f1-200d59986d8d>\",\"WARC-IP-Address\":\"128.8.70.200\",\"WARC-Target-URI\":\"https://terpconnect.umd.edu/~toh/spectrum/AppendixL.html\",\"WARC-Payload-Digest\":\"sha1:5I64OH47ZU3PK3C6A45563SA6AVYMVTT\",\"WARC-Block-Digest\":\"sha1:H66RDVORKKNFHAMW5FXZZ564IEB36YKE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500357.3_warc_CC-MAIN-20230206181343-20230206211343-00418.warc.gz\"}"}
https://www.expertsmind.com/library/what-are-the-corresponding-velocity-vectors-53646.aspx
[ "### What are the corresponding velocity vectors\n\nAssignment Help Physics\n##### Reference no: EM133646\n\nQuestion1\n\nConsider mass and a frictionless ramp. The mass is pushed upward with some initial velocity. Since of in?uence of gravity it moves upward and reaches the highest point. It then turns around and moves downward. Initial from point A, along the system it passes consecutively the points B, C, D and E at equal time intervals. Indicate by ?~vif the change in velocities between any pair of points (referred to as the initial and ?nal points); such as ~vf = ~vi + ?~vif , where ~vi and ~vf are the corresponding velocity vectors at the two points\n\nThe direction of ?~vCD as in the change of velocity vectors from C to D is\n\n1. 0.\n\n2. Uphill along ramp if vA > 0.098 m/s, and downhill if vA < 0.098 m/s.\n\n3. Uphill along ramp if vA < 0.098 m/s,and downhill if vA > 0.098 m/s.\n\n4. Uphill along ramp.\n\n5. Downhill along ramp\n\n### Previous Q& A\n\n#### Explain the fifo structure of the queue\n\nExplain the FIFO structure of the queue Explain how you would implement the queue data structure in its simplest form. Illustrate your answer fully with the necessary sample code\n\n#### Find out the minimum sound intensity\n\nFind out  the minimum sound intensity\n\n#### What is ratio of the applied force\n\nWhat is ratio of the applied force\n\n#### Find out the minimum horizontal velocity\n\nFind out the minimum horizontal velocity magnitude of the acceleration of the blocks\n\n#### How the system comes to thermal equilibrium\n\nHow  the system comes  to thermal equilibrium\n\n#### Provide your definition of supply chain management\n\nThe term supply chain management does not have a universally agreed definition. Consider three different views of supply chain management. Provide your definition of supply chain management. Explain how supply chain strategies should be aligned..\n\n#### Determine the simple harmonic motion of the spring\n\nDetermine the simple harmonic motion of the spring what ac velocity is ball moving\n\n#### How much heat energy can created throughout motion\n\nHow much heat energy can created throughout motion\n\n#### Doxey''s irridex model\n\nTourism Area Life Cycle, Tour operators nowadays play a very significant role in creating the images of destinations, imensions of sustainable community tourism development., Tourism has become a development tool for many rural and more isolated are..\n\n#### What is work done by the non-conservative force\n\nWhat is work done by the non-conservative force\n\n### Similar Q& A\n\n#### Calculate the smallest coefficient of static friction\n\nIntroductory Mechanics: Calculate the smallest coefficient of static friction\n\n#### How rapidly atom are assembled in this protein synthesis\n\nHow rapidly atom are assembled in this protein synthesis\n\n#### Question on center of gravity\n\nQuestion on center of gravity Discover speed of each block\n\n#### Find tension in the back muscle and the compressional force\n\nFind tension in the back muscle and the compressional force\n\n#### Determine the tension in each string\n\nDetermine the tension in each string\n\n#### What is the mass of the block of ice\n\nA dockworker applies a constant horizontal force of 84.0$${\\rm N}$$ to a block of ice on a smooth horizontal floor. The frictional force is negligible. 
The block starts from rest and moves a distance 12.5$${\\rm m}$$ in a time 4.80$${\\rm s}$$\n\n#### Blackbody\n\nQuestions on blackbody, Infra-Red Detectors & Optic Lens and Digital Image.\n\n#### What is speed of the resulting ball of clay\n\nWhat is speed of the resulting ball of clay\n\n#### The change in the kinetic energy of the system\n\nThe change in the kinetic energy of the system\n\n#### Introductory mechanics: dynamics\n\nCalculate the smallest coefficient of static friction necessary for mass A to remain stationary.\n\n#### Find out the minimum sound intensity\n\nFind out  the minimum sound intensity\n\n#### What is the magnitude of the current in the wire\n\nWhat is the magnitude of the current in the wire as a function of time?", null, "" ]
[ null, "https://www.expertsmind.com/prostyles/images/3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87188315,"math_prob":0.91198814,"size":3347,"snap":"2023-14-2023-23","text_gpt3_token_len":748,"char_repetition_ratio":0.09003889,"word_repetition_ratio":0.014678899,"special_character_ratio":0.21959963,"punctuation_ratio":0.108490564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97304255,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T19:51:22Z\",\"WARC-Record-ID\":\"<urn:uuid:aba111ef-1ee9-45dd-8f3a-e05e4cdebf36>\",\"Content-Length\":\"67619\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ff2e78d-14e6-4a82-8c10-a25896de39e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:5383abeb-4bd8-4093-ac7f-8f0470e90a6b>\",\"WARC-IP-Address\":\"198.38.85.49\",\"WARC-Target-URI\":\"https://www.expertsmind.com/library/what-are-the-corresponding-velocity-vectors-53646.aspx\",\"WARC-Payload-Digest\":\"sha1:IZDILMINFRQGUYJ33QCPGISID3T3PTE4\",\"WARC-Block-Digest\":\"sha1:EVIM5IJVD6GYR4RQVNTXJW7BS44236NF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949025.18_warc_CC-MAIN-20230329182643-20230329212643-00319.warc.gz\"}"}
https://crypto.stackexchange.com/questions/40413/secret-suffix-md5-as-secure-prf-not-mac
[ "# Secret-suffix MD5 as secure PRF (not MAC)\n\nIn standard secret-suffix fashion, assume we compute the PRF of some message $m$ as $MD5(x || k)$. Preneel and Van Oorschot showed in that this secret-suffix method is unsatisfactory for constructing a secure MAC. (Particularly because MD5 is not collision resistant and therefore the scheme is susceptible to forgeries.) But does that also mean this is not a secure PRF?\n\n• This isn't a homework question -- and I am aware of both facts. I guess the answer is \"yes\" by virtue of contrapositive equivalence. That is, if PRF => MAC is true then ~MAC => ~PRF is also true. I just needed a sanity check. :) – caw Oct 3 '16 at 18:55\n\n## 1 Answer\n\nThe collision resistance of MD5 is fully broken, for 2 rounds, or even just 1 round. If follows that we can exhibit distinct 1024-bit or even 512-bit $x_0$, $x_1$ such that for any $k$, $\\operatorname{MD5}(x_0||k)=\\operatorname{MD5}(x_1||k)$.\n\nThis allows to make a near-perfect distinguisher between the PRF (family) parametrized by $k$: $x\\to\\operatorname{MD5}(x||k)$, and a random oracle; thus breaking that PRF.\n\n• Yep -- I acknowledged this in my comment above. As I mentioned, I just needed a sanity check. Thank you for your (second) response. – caw Oct 3 '16 at 20:09" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88301057,"math_prob":0.9144107,"size":436,"snap":"2021-21-2021-25","text_gpt3_token_len":111,"char_repetition_ratio":0.09259259,"word_repetition_ratio":0.0,"special_character_ratio":0.2293578,"punctuation_ratio":0.12048193,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98446476,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T07:16:52Z\",\"WARC-Record-ID\":\"<urn:uuid:5551415c-e2ff-48df-a605-c560feecfb86>\",\"Content-Length\":\"164629\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f6ea768-5f75-438e-9e36-cafaea7fad9b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e469520a-fd08-4dc7-98f0-0f29706090a7>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/40413/secret-suffix-md5-as-secure-prf-not-mac\",\"WARC-Payload-Digest\":\"sha1:7MRXRW62LP4XL774H6TN4SUIKGTOA72G\",\"WARC-Block-Digest\":\"sha1:CZQRQ7WCJKFA2RLSSCQZYCHG7SP4F44Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643703.56_warc_CC-MAIN-20210619051239-20210619081239-00222.warc.gz\"}"}
https://mlochbaum.github.io/BQN/doc/train.html
[ "# Function trains\n\nTrains are an important aspect of BQN's tacit programming capabilities. In fact, a crucial one: with trains and the identity functions Left (`⊣`) and Right (`⊢`), a fully tacit program can express any explicit function whose body is a statement with `𝕨` and `𝕩` used only as arguments (that is, there are no assignments and `𝕨` and `𝕩` are not used in operands or lists. Functions with assignments may have too many variables active at once to be directly translated but can be emulated by constructing lists. But it's probably a bad idea). Without trains it isn't possible to have two different functions that each use both arguments to a dyadic function. With trains it's perfectly natural.\n\nBQN's trains are the same as those of Dyalog APL, except that Dyalog is missing the minor convenience of BQN's Nothing (`·`). There are many Dyalog-based documents and videos on trains you can view on the APL Wiki.\n\n## 2-train, 3-train\n\nTrains are an adaptation of the mathematical convention that, for example, two functions `F` and `G` can be added to get a new function `F+G` that applies as `(F+G)(x) = F(x)+G(x)`. With a little change to the syntax, we can do exactly this in BQN:\n\n↗️\n``` (⊢+⌽) ↕5\n⟨ 4 4 4 4 4 ⟩\n```\n\nSo given a list of the first few natural numbers, that same list plus its reverse gives a list of just one number repeated many times. I'm sure if I were Gauss I'd be able to find some clever use for that fact. The mathematical convention extends to any central operator and any number of function arguments, which in BQN means we use any three functions, and call the train with a left argument as well—the only numbers of arguments BQN syntax allows are 1 and 2.\n\n↗️\n``` 7 (+≍-) 2\n⟨ 9 5 ⟩\n```\n\nHere Couple (`≍`) is used to combine two units into a list, so we get seven plus and minus two. It's also possible to leave out the leftmost function of a train, or replace it with `·`. In this case the function on the right is called, then the other function is called on its result—it's identical to the mathematical composition `∘`, which is also part of BQN.\n\n↗️\n``` (∾⌽) \"ab\"‿\"cde\"‿\"f\"\n\"fcdeab\"\n(·∾⌽) \"ab\"‿\"cde\"‿\"f\"\n\"fcdeab\"\n∾∘⌽ \"ab\"‿\"cde\"‿\"f\"\n\"fcdeab\"\n```\n\nThe three functions `∾⌽`, `·∾⌽`, and `∾∘⌽` are completely identical: Join of Reverse. Why might we want three different ways to write the same thing? If we only want to define a function, there's hardly any difference. However, these three forms have different syntax, and might be easier or harder to use in different contexts. As we'll see, we can use `∾∘⌽` inside a train without parenthesizing it, and string `·∾⌽` but not `∾⌽` together with other trains. Let's look at how the train syntax extends to longer expressions.\n\n## Longer trains\n\nFunction application in trains, as in other contexts, shares the lowest precedence level with assignment. Modifiers and strands (with `‿`) have higher precedence, so they are applied before forming any trains. Once this is done, an expression is a subject expression if it ends with a subject and a function expression if it ends with a function (there are also modifier expressions, which aren't relevant here). 
A train is any function expression with multiple functions or subjects in it: while we've seen examples with two or three functions, any number are allowed.\n\nSubject expressions are the domain of \"old-school\" APL, and just apply one function after another to a subject, possibly assigning some of the results (that's the top-level picture—anything can still happen within parentheses). Subjects other than the first appear only as left arguments to functions, which means that two subjects can't appear next to each other because the one on the left would have no corresponding function. Here's an example from the compiler (at one point), with functions and assignments numbered in the order they are applied and their arguments marked with `«»`, and a fully-parenthesized version shown below.\n\n```cn←pi∾lt←/𝕩≥ci←vi+nv\n«6 «5 «43«2 «1 «0»\n\ncn←(pi∾(lt←(/(𝕩≥(ci←(vi+nv))))))\n```\n\nFunction expressions have related but different rules, driven by the central principle that functions can be used as \"arguments\". Because roles can no longer be used to distinguish functions from their arguments, every function is assumed to have two arguments unless there's nothing to the left of it, or an assignment. In trains, assignments can't appear in the middle, only at the left side after all the functions have been applied. Here's another example from the compiler. Remember that for our purposes `⌈`` behaves as a single component.\n\n```⊢>¯1»⌈`\n«1 «0»\n\n⊢>(¯1»⌈`)\n```\n\nIn a train, arguments alternate strictly with combining functions between them. Arguments can be either functions or subjects, except for the rightmost one, which has to be a function to indicate that the expression is a train. Trains tend to be shorter than subject expressions partly because to keep track of this alternation in a train of all functions, you need to know where each function is relative to the end of the train (subjects like the `¯1` above only occur as left arguments, so they can also serve as anchors).\n\n## Practice training\n\nThe train `⊢>¯1»⌈`` is actually a nice trick to get the result of Mark Firsts `∊𝕩` given the result of Classify `⊐𝕩`, without doing another search. Let's take a closer look, first by applying it mechanically. To do this, we apply each \"argument\" to the train's argument, and then combine them with the combining functions.\n\n```(⊢ > ¯1 » ⌈`) 𝕩\n(⊢𝕩) > (¯1) » (⌈`𝕩)\n𝕩 > ¯1 » ⌈`𝕩\n```\n\nSo—although not all trains simplify so much—this confusing train is just `{𝕩>¯1»⌈`𝕩}`! Why would I write it in such an obtuse way? To someone used to working with trains, the function `(⊢>¯1»⌈`)` isn't any more complicated to read: `⊢` in an argument position of a train just means `𝕩` while `⌈`` will be applied to the arguments. Using the train just means slightly shorter code and two fewer `𝕩`s to trip over.\n\nThis function's argument is Classify (`⊐`) of some list (in fact this technique also works on the index-of-self `𝕩⊐𝕩`). Classify moves along its argument, giving each major cell a number: the first unused natural number if that value hasn't been seen yet, and otherwise the number chosen when it was first seen. It can be implemented as `⍷⊐⊢`, another train!\n\n↗️\n``` ⊢ sc ← ⊐ \"tacittrains\"\n⟨ 0 1 2 3 0 0 4 1 3 5 6 ⟩\n```\n\nEach `'t'` is `0`, each `'a'` is `1`, and so on. We'd like to discard some of the information from Classify, to just find whether each major cell had a new value. 
Here are the input and desired result:\n\n↗️\n``` sc ≍ ∊ \"tacittrains\"\n┌─\n╵ 0 1 2 3 0 0 4 1 3 5 6\n1 1 1 1 0 0 1 0 0 1 1\n┘\n```\n\nThe result should be `1` when a new number appears, higher than all the previous numbers. To do this, we first find the highest previous number by taking the maximum-scan `⌈`` of the argument, then shifting to move the previous maximum to the current position. The first cell is always new, so we shift in a `¯1`, so it will be less than any element of the argument.\n\n↗️\n``` ¯1 » ⌈`sc\n⟨ ¯1 0 1 2 3 3 3 4 4 4 5 ⟩\n(¯1»⌈`) sc\n⟨ ¯1 0 1 2 3 3 3 4 4 4 5 ⟩\n```\n\nNow we compare the original list with the list of previous-maximums.\n\n↗️\n``` sc > ¯1»⌈`sc\n⟨ 1 1 1 1 0 0 1 0 0 1 1 ⟩\n(⊢>¯1»⌈`) sc\n⟨ 1 1 1 1 0 0 1 0 0 1 1 ⟩\n```\n\n## Composing trains\n\nThe example above uses a train with five functions: an odd number. Trains with an odd length are always composed of length-3 trains, and they themselves are composed the same way as subject expressions: an odd-length train can be placed in the last position of another train without parentheses, but it needs parentheses to go in any other position.\n\nBut we also saw the length-2 train `∾⌽` above. Even-length trains consist of a single function (`∾`) applied to a function or odd-length train (`⌽`); another perspective is that an even-length train is an odd-length train where the left argument of the final (leftmost) function is left out, so it's called with only a right argument. An even-length train always needs parentheses if it's used as one of the functions in another train. However, it can also be turned into an odd-length train by placing `·` at the left, making the implicit missing argument explicit. After this it can be used at the end of an odd-length train without parentheses. To get some intuition for even-length trains, let's look at an example of three functions used together: the unique (`⍷`) sorted (`∧`) absolute values (`|`) of an argument list.\n\n↗️\n``` ⍷∧| 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n```\n\nIf it doesn't have to be a function, it's easiest to write it all out! Let's assume we want a tacit function instead. With three one-argument functions, we can't use a 3-train, as the middle function in a 3-train always has two arguments. Instead, we will compose the functions with 2-trains. Composition is associative, meaning that this can be done starting at either the left or the right.\n\n↗️\n``` ((⍷∧)|) 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n(⍷(∧|)) 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n```\n\nWe might make the first train above easier to read by using Atop (`∘`) instead of a 2-train. Atop is a 2-modifier, so it doesn't need parentheses when used in a train. The second train can also be changed to `⍷∧∘|` in the same way, but there is another option: the rightmost train `∧|` can be expanded to `·∧|`. After this it's an odd-length train in the last position, and doesn't need parentheses anymore.\n\n↗️\n``` (⍷∘∧|) 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n(⍷·∧|) 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n```\n\nThese two forms have a different emphasis, because the first breaks into subfunctions `⍷∘∧` and `|` and the second into `⍷` and `∧|`. It's more common to use `⍷∘∧` as a unit than `∧|`, so in this case `⍷∘∧|` is probably the better train.\n\nMany one-argument functions strung together is a major weakness for train syntax. If there are many such functions it's probably best to stick with a block function instead!\n\n↗️\n``` {⍷∧|𝕩} 3‿4‿¯3‿¯2‿0\n⟨ 0 2 3 4 ⟩\n```\n\nIn our example, there aren't enough of these functions to really be cumbersome. 
If `⍷∘∧` is a common combination in a particular program, then the train `⍷∘∧|` will be more visually consistent and make it easier to use a utility function for `⍷∘∧` if that's wanted in the future." ]
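For readers coming from conventional languages, a rough Python analogue of a 3-train may help; this is only an approximation of the idea (BQN trains are resolved syntactically and operate on whole arrays), and the helper names are invented for the sketch.

```python
def train3(f, g, h):
    # (F G H): monadic  (F G H) x   ->  G(F(x), H(x))
    #          dyadic   w (F G H) x ->  G(F(w, x), H(w, x))
    def derived(x, w=None):
        return g(f(x), h(x)) if w is None else g(f(w, x), h(w, x))
    return derived

# Mirrors (⊢+⌽) ↕5 from the first example above
identity = lambda x: x
add      = lambda a, b: [i + j for i, j in zip(a, b)]
reverse  = lambda x: x[::-1]
print(train3(identity, add, reverse)(list(range(5))))  # [4, 4, 4, 4, 4]
```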
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9063126,"math_prob":0.90120476,"size":9816,"snap":"2022-05-2022-21","text_gpt3_token_len":2913,"char_repetition_ratio":0.14339584,"word_repetition_ratio":0.043986637,"special_character_ratio":0.25448248,"punctuation_ratio":0.083977334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95987177,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T06:41:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ed985b78-19c1-43d9-a423-24e93782e433>\",\"Content-Length\":\"26358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afd7c9ff-7b17-4fa4-a1ec-7e227cbb6c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ee58385-c8c1-4ea3-adc0-136fd2a1d7d2>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://mlochbaum.github.io/BQN/doc/train.html\",\"WARC-Payload-Digest\":\"sha1:UGTIQDXY4BR67QYM2W46YDDUTE62KLHX\",\"WARC-Block-Digest\":\"sha1:V2Z7BGWW2IFPM4NOL5SOPVLCJLPPPMYP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662521152.22_warc_CC-MAIN-20220518052503-20220518082503-00456.warc.gz\"}"}
https://mobile.surenapps.com/2020/10/geometry.html
[ "### Geometry\n\n1. The sum of the interior angles of a polygon is 1620°. The number of sides of the polygon are\n\n2. A cyclic parallelogram having unequal adjacent sides is necessarily a :\n\n3. If the angles of a triangle are in the ratio 5 : 3 : 2, then the triangle could be :\n\n4. If one of the diagonals of a rhombus is equal to its side, then the diagonals of the rhombus are in the ratio:\n\n5. In the given figure, ∠ ABC and ∠ DEF are two angles such that BA ⊥ ED and EF ⊥ BC, then find value of ∠ ABC + ∠ DEF.", null, "6. In the given figure given below, E is the mid-point of AB and F is the midpoint of AD. if the area of FAEC is 13, what is the area of ABCD?", null, "7. Give that segment AB and CD are parallel, if lines ℓ, m and n intersect at point O. Find the ratio of θ to ∠ODS", null, "8. AB ⊥ BC and BD ⊥ AC. CE bisects the angle C. ∠A = 30°. Then, what is ∠CED?", null, "9. In triangle ABC, angle B is a right angle. If (AC) is 6 cm, and D is the mid – point of side AC. The length of BD is", null, "10. ABCD is a square of area 4, which is divided into four non overlapping triangles as shown in the fig. Then the sum of the perimeters of the triangles is", null, "11. The sides of a quadrilateral are extended to make the angles as shown below :", null, "What is the value of x?\n\n12. Instead of walking along two adjacent sides of a rectangular field, a boy took a short cut along the diagonal and saved a distance equal to half the longer side. Then the ratio of the shorter side to the longer side is\n\n13. If two parallel lines are cut by two distinct transversals, then the quadrilateral formed by these four lines will always be a :\n\n14. In the following figure, find ∠ADC.", null, "15. In a triangle ABC, the internal bisector of the angle A meets BC at D. If AB = 4, AC = 3 and ∠A = 60°, then the length of AD is\n\n16. In the figure AG = 9, AB = 12, AH = 6, Find HC.", null, "17. In ∆ABC, DE || BC and AD/DB = 3/5. If AC = 5.6 cm, find AE.", null, "18. In the given fig. AB || QR, find the length of PB.", null, "19. In a triangle ABC, the lengths of the sides AB, AC and BC are 3, 5 and 6 cm, respectively. If a point D on BC is drawn such that the line AD bisects the angle A internally, then what is the length of BD?\n\n20. The number of tangents that can be drawn to two non-intersecting circles is :" ]
[ null, "https://lh4.googleusercontent.com/proxy/CIcxOQNoTQYdkCX7GB78ZWG7wJTX31ynJ3VJBrTRDRghvVI7zmmeb98WepXdtvYf2dvIgCo_QcPDlCVlZIVAob_zUEAbkswyDYgdIyu2NVo32dpQahz7Jg=s0-d", null, "https://lh4.googleusercontent.com/proxy/CnfbMiAuDHspKfsIlcIfckCWM9JTULSsQGpzHeTUz06Jg3jd8iAqf001AWEvQDn2WrAYoMdg8QD_3R-G2qcaNB4Io9-UcZADd3Ni6CyPae_AnBWV5RQwWg=s0-d", null, "https://lh4.googleusercontent.com/proxy/Aq3VCWY2KTnG_zoA8jV1acc144JvxWl0J26QDzhykASAxqaMmIJq-oRInGmnIitNmFMeJqhKTTnzgyC57SV03CSg8IOJC0SuGCw8CJvHHltLhICiZ4GPPQ=s0-d", null, "https://lh4.googleusercontent.com/proxy/ohczkbf8T2v0CE_4vwBx7KXEY8U2jnNZMhYdErFYT3XsX8wkIR39bZgindNyjmEURNcJfm6GLU54q-2EHj61PCmmfL4MhqNP6EjAsXnx_gqOGQtTYNc5mA=s0-d", null, "https://lh6.googleusercontent.com/proxy/-Hkv0sPjEPHzva-vGZyq6tVkPcm382u4pAjMxvIe2f2s2yjb93QGuSbYDEhmJHKUX8Klv2TOhf3ujV8LmbDexnxs-BodvzGOqPffRdHhZ1W7evaOhrZzjA=s0-d", null, "https://lh3.googleusercontent.com/proxy/YJJVFring4OCQyqpZegehoiiBmlTp6REzThtRZDsw8kWFR5_JKzA-13ZpESuTQOMhz2LDfMjrSdEcVPCqeAN6FcD3_IxSXU7GqOwQewucchJYHDqM2S_sg=s0-d", null, "https://lh5.googleusercontent.com/proxy/Oh-ctZ_x6qQ5seHWwxAIdGzzVCcGLaiRBitsk5s2fVIJrVNWp24ugPg_ldLknlcp7J3IkK4LYBwhzHpRwMMa4Eis-wXRvsYdXwNXcNF-M75JqmZGoy_AGQ=s0-d", null, "https://lh6.googleusercontent.com/proxy/4fx2BaEwYeokUwvaezg6KXohM_Y5_1XGWPu0X1hcHVfz8MegUsdbTd7qlKUmnKXSsHAlHffRiZ7StwNTh4W10uDKP0oVY31Zj26zTVyqv7tFN6Mg4zgj_g=s0-d", null, "https://lh5.googleusercontent.com/proxy/aQFU-VkcEKB8fV0uy7C6A25A0YVrL_6tDX01b4_6-UTSrRZYeD2--9-r0lplFWNpsQ1KCTTUTrBdZM0xSRrkDYs3dQIPJXA5UAhzz8Yrpiswz8-2nLlxFQ=s0-d", null, "https://lh6.googleusercontent.com/proxy/KjsrKRk3_yAPcqqReE2w3-0WCNgBghqXm80YNOPZQTXgPDcOteinmaQxzoQBUddKG56ZgZzK2Qvk6IbxjcMRyX0t83BaxY-IffZveSUXdldhxIrGvFH9tQ=s0-d", null, "https://lh3.googleusercontent.com/proxy/pdAkUyYoigAwhxenJMPlT9AwQQIPNLzOIb9_jxrqiuhZU-8t4hgsuP4JW1Tn9BhHw7FyJB-s0Mj2tPdOmk3CVTxNRaSfQ-X1ne54feMWPxDgrjyG3xMk0Q=s0-d", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92449856,"math_prob":0.9986624,"size":2137,"snap":"2021-43-2021-49","text_gpt3_token_len":605,"char_repetition_ratio":0.14627285,"word_repetition_ratio":0.004347826,"special_character_ratio":0.2700047,"punctuation_ratio":0.12127236,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999305,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T23:21:41Z\",\"WARC-Record-ID\":\"<urn:uuid:d36a3c95-8f67-45f7-9ee5-c6109185d35c>\",\"Content-Length\":\"225258\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59e8ef16-900e-4aa0-a17d-36f8c05f957e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff9a49e8-7ca4-4b26-84d6-5435ac5c6495>\",\"WARC-IP-Address\":\"142.251.45.115\",\"WARC-Target-URI\":\"https://mobile.surenapps.com/2020/10/geometry.html\",\"WARC-Payload-Digest\":\"sha1:KBG2HDO3GQUV73PMOJGPCIC4US6PESNF\",\"WARC-Block-Digest\":\"sha1:XYKZ6VFYVOEGEAUY7DA44EWHG46JR4DP\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587770.37_warc_CC-MAIN-20211025220214-20211026010214-00509.warc.gz\"}"}
https://www.numbers.education/12203.html
[ "Is 12203 a prime number? What are the divisors of 12203?\n\n## Parity of 12 203\n\n12 203 is an odd number, because it is not evenly divisible by 2.\n\nFind out more:\n\n## Is 12 203 a perfect square number?\n\nA number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 12 203 is about 110.467.\n\nThus, the square root of 12 203 is not an integer, and therefore 12 203 is not a square number.\n\nAnyway, 12 203 is a prime number, and a prime number cannot be a perfect square.\n\n## What is the square number of 12 203?\n\nThe square of a number (here 12 203) is the result of the product of this number (12 203) by itself (i.e., 12 203 × 12 203); the square of 12 203 is sometimes called \"raising 12 203 to the power 2\", or \"12 203 squared\".\n\nThe square of 12 203 is 148 913 209 because 12 203 × 12 203 = 12 2032 = 148 913 209.\n\nAs a consequence, 12 203 is the square root of 148 913 209.\n\n## Number of digits of 12 203\n\n12 203 is a number with 5 digits.\n\n## What are the multiples of 12 203?\n\nThe multiples of 12 203 are all integers evenly divisible by 12 203, that is all numbers such that the remainder of the division by 12 203 is zero. There are infinitely many multiples of 12 203. The smallest multiples of 12 203 are:\n\n## Numbers near 12 203\n\n### Nearest numbers from 12 203\n\nFind out whether some integer is a prime number" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88709295,"math_prob":0.9989013,"size":848,"snap":"2023-40-2023-50","text_gpt3_token_len":219,"char_repetition_ratio":0.19075829,"word_repetition_ratio":0.025641026,"special_character_ratio":0.2936321,"punctuation_ratio":0.12432432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99880034,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T11:14:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b4619066-0adc-4bfb-8684-4ab692bb0c56>\",\"Content-Length\":\"18622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ddbc7bc-0c7b-489b-a1f8-aaff8ae19e61>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1401936-96a1-4cc9-a4e3-47495d9ab98e>\",\"WARC-IP-Address\":\"213.186.33.19\",\"WARC-Target-URI\":\"https://www.numbers.education/12203.html\",\"WARC-Payload-Digest\":\"sha1:54Q2YQJXXPLOYGFNWDB2P3KHM4WMBAYH\",\"WARC-Block-Digest\":\"sha1:3FOZGPPSDSEIBBIDIMODLQ55RL3FOPXM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00423.warc.gz\"}"}
https://discuss.codechef.com/questions/145406/trdst-editorial
[ "You are not logged in. Please login at www.codechef.com to post your questions!\n\n×\n\n# TRDST - EDITORIAL\n\nSetter: Данило Мочернюк\nTester: Alexey Zayakin and Yash Chandnani\nEditorialist: Taranpreet Singh\n\nHard.\n\n# PROBLEM:\n\nGiven a tree with $N$ nodes and an array $K$ of length $N$, for each node $u$, calculate the maximum value $D[i]$ such that the number of nodes at distance greater than $D[i]$ is at least $K[i]$.\n\n# QUICK EXPLANATION\n\n• We decompose the tree to construct the Centroid Tree, which is guaranteed to have at most $log(N)$ depth. For each node, we also count the number of vertices at distance $x$ for $x$ in its centroid subtree. We make suffix sum array to print the number of vertices at distance $\\geq x$ instantly.\n• For each node, We run a binary search on the answer. Now we need to compute the number of nodes at distance $x$ from a node.\n• For each pair of node, we can split the path from $u$ to $v$ as path from $u$ to LCA(u,v) to node $v$. Distance from $u$ to $LCA(u, v)$ can be computed using any LCA finding technique. From LCA(u, v), we can use the precomputation to find the number of nodes at distance greater than or equal to x. But it may include nodes which were already counted at the lower level. So, to exclude it, we count the number of nodes in subtree at each distance from the parent of the current centroid and subtract it whenever required.\n• Since there are only $logN$ levels in centroid tree, we can handle query in $O(log^2(N))$ due to the number of levels in centroid tree and binary search. (Assuming we use $O(1)$ method for LCA).\n\n# EXPLANATION\n\nFirst of all, subtask 1 is easy to pass. We can just, for every node, run a DFS with that node as root and count the number of nodes at each value of distance, which we can use to answer for the current node, solving the problem in $O(N^2)$ time. We need a faster solution.\n\nTo proceed further, knowledge of Centroid decomposition is a must. Here's an excellent resource.\n\nLet us decompose the tree into the centroid tree. Centroid of a tree refers to any node, the removal of which divide the remains of the tree into a forest, maximum tree size being no more than half the number of nodes. This guarantees depth at most $logN$.\n\nAn important property of centroid tree is that the LCA of any pair of nodes $u$ and $v$ remain same. This allows us to split path from $u$ to $v$ into path from $u$ to LCA(u,v) to node $v$. The distance between each pair of nodes can be calculated using preprocessing, by common LCA finding methods like using RMQ over Euler tour (preferred) or using binary lifting.\n\nSuppose we decomposed the tree into centroid tree. For each node $x$ in centroid tree, we can run a DFS in a subtree, calculating the number of nodes at distance $\\geq d$ for all $0 \\leq d \\leq s$ where $s$ is the size of the subtree of $u$. Also, we need to calculate the number of nodes in the subtree of node $u$ at each distance value from the parent of $u$ in centroid tree. We shall see why.\n\nExplaining in one line, it is because subtree of an ancestor of a node also includes the subtree of the current node. So, to exclude it, we need to calculate the number of nodes at distance di from the parent of the node too, so as to avoid double-counting of nodes.\n\nFor example, Consider tree as example with seven nodes where there's an edge between $i$ and $i+1$ nodes for every $0 \\leq i < N-1$. 
The centroid tree looks as shown in the image.", null, "Let us count the number of nodes at distance $\geq 3$ from node 4. Considering only the subtree of node 4, there is no node at distance $\geq 3$ from node 4. Moving to its ancestor node $5$ now. The distance from node $4$ to $5$ is 1, so we need to count only nodes at distance $\geq 3-1 = 2$ from node 5. There is no such node in the subtree of node $5$ at distance $\geq 2$ from node 5.

Climb over to its ancestor node $3$. The distance of node $3$ from node $4$ is 1, so we need to count the number of nodes in the subtree of node $3$ at distance $\geq 3-1 = 2$. There are four such nodes, namely nodes 0, 1, 5 and 6. But node $5$ and node $6$ are not to be considered, as they were part of the subtree of node $5$, which is already considered. So, we need to exclude these two nodes. This requires calculating the number of nodes in the subtree of a node at each distance $\geq d$ from the parent of the node in the centroid tree. So, for the subtree of node 5, there are two nodes (5 and 6) which are at distance $\geq 3-dist(3,4) = 2$ from node $3$. Excluding them from the four nodes we found, we get two nodes at distance $\geq 3$ from node 4. We can easily verify that this is correct.

Partial sums are needed here because the DFS gives us the number of nodes at distance exactly $x$. To translate this into the number of nodes at distance $\geq x$, we take the suffix sum array.

Now that we know how to count the number of vertices at distance $di$ from any node, we can run a binary search on the maximum value of $di$ for each node to obtain the answer.

# Time Complexity Analysis

The preprocessing takes $O(N*log(N))$ time for precomputing RMQ, the Euler tour takes $O(N)$ time, and centroid decomposition also takes $O(N)$ time. Running a DFS over each centroid subtree takes $O(N*log(N))$ time in total, as each node is present in at most $logN$ subtrees.

For each node, the binary search takes $O(logN)$ iterations, and within each iteration we have to move over all centroid-tree ancestors of the node, which also takes $O(logN)$ time. If LCA takes $O(1)$ time using RMQ, we can calculate the maximum distance for all nodes in $O(N*log^2(N))$ time.

So, the overall complexity comes out to be $O(N*log^2(N))$.

Memory complexity is $O(N*log(N))$.

For practice, the problems mentioned in the blog would do.

# AUTHOR'S AND TESTER'S SOLUTIONS:

Feel free to share your approach if it differs. Suggestions are always welcome. :)

This question is marked "community wiki".

asked 14 Feb, 00:21", null, "", null, "I am unable to see the codes of author, setter, and editorialist. answered 20 Feb, 13:54", null, "@taran_1407 I am unable to see the codes of author, setter, and editorialist. answered 21 Feb, 13:05", null, "Pinged admin. Meanwhile, my solution https://ideone.com/hwkgw1 (21 Feb, 13:50)" ]
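To make the counting-plus-binary-search step concrete, here is an illustrative Python sketch (it is not the setter's or tester's code). It assumes the centroid tree and the suffix-sum tables are already built: `parent[c]` is the centroid-tree parent, `ge[c][d]` is the number of nodes in c's centroid subtree at distance at least d from c, `ge_par[c][d]` is the same count measured from parent(c), and `dist(a, b)` is an O(1) LCA-based tree distance; all of these names are invented for the sketch.

```python
def count_at_least(u, d, parent, ge, ge_par, dist):
    """Number of nodes at distance >= d from u, walking centroid-tree ancestors."""
    def get(arr, k):                      # suffix-sum lookup with out-of-range handling
        if k < 0:
            return arr[0]                 # every node in that subtree qualifies
        return arr[k] if k < len(arr) else 0

    total = get(ge[u], d)
    child, c = u, parent[u]
    while c is not None:
        rem = d - dist(u, c)
        total += get(ge[c], rem)          # nodes counted from the ancestor c ...
        total -= get(ge_par[child], rem)  # ... minus those already counted one level below
        child, c = c, parent[c]
    return total

def max_d_for(u, k, n, parent, ge, ge_par, dist):
    # Largest D with at least k nodes strictly farther than D, i.e. at distance >= D + 1.
    lo, hi, best = 0, n, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if count_at_least(u, mid + 1, parent, ge, ge_par, dist) >= k:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best
```

Running the first function on the seven-node example above with u = 4 and d = 3 reproduces the hand count of two nodes.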
[ null, "https://discuss.codechef.com/upfiles/trdst.png", null, "https://www.codechef.com/sites/default/files/uploads/pictures/default.jpg", null, "https://www.codechef.com/sites/default/files/uploads/pictures/a3078e9eb180b421790287710b1bb840.jpg", null, "https://www.codechef.com/sites/default/files/uploads/pictures/default.jpg", null, "https://www.codechef.com/sites/default/files/uploads/pictures/default.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9038599,"math_prob":0.997424,"size":4950,"snap":"2019-13-2019-22","text_gpt3_token_len":1326,"char_repetition_ratio":0.16761018,"word_repetition_ratio":0.0857461,"special_character_ratio":0.2658586,"punctuation_ratio":0.11393597,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998808,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-18T21:59:53Z\",\"WARC-Record-ID\":\"<urn:uuid:8e3ce124-ce29-470a-bfdb-fcb52011c388>\",\"Content-Length\":\"49379\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66982045-453d-45e2-8a17-512449cd19a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ffee0b0-e185-44c7-81b3-f39c2f69d547>\",\"WARC-IP-Address\":\"54.211.41.159\",\"WARC-Target-URI\":\"https://discuss.codechef.com/questions/145406/trdst-editorial\",\"WARC-Payload-Digest\":\"sha1:ONEDYIXMFHU2PSMTRN5P4TJOIB2YPYIX\",\"WARC-Block-Digest\":\"sha1:VPXRB76WFMSPSJVZR7MK22XNBVBG67JA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201707.53_warc_CC-MAIN-20190318211849-20190318233849-00375.warc.gz\"}"}
https://www.frontiersin.org/articles/10.3389/fnbot.2018.00022/full
[ "Impact Factor 3.000 | CiteScore 3.51\nMore on impact ›\n\n# Frontiers in Neurorobotics", null, "## Original Research ARTICLE\n\nFront. Neurorobot., 22 May 2018 | https://doi.org/10.3389/fnbot.2018.00022\n\n# Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot", null, "Tadahiro Taniguchi1*,", null, "Ryo Yoshino1 and", null, "Toshiaki Takano2\n• 1Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Ksatsu, Japan\n• 2Adaptive Systems Laboratory, Department of Computer Science, Shizuoka Institute of Science and Technology, Fukuroi, Japan\n\nIn this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes.\n\n## 1. Introduction\n\nActive perception is a fundamental component of our cognitive skills. Human infants autonomously and spontaneously perform actions on an object to determine its nature. The sensory information that we can obtain usually depends on the actions performed on the target object. For example, when people find a gift box placed in front of them, they cannot perceive its weight without holding the box, and they cannot determine its sound without hitting or shaking it. In other words, we can obtain sensory information about an object by selecting and executing actions to manipulate it. Adequate action selection is important for recognizing objects quickly and accurately. This example about a human also holds for a robot. An autonomous robot that moves and helps people in a living environment should also select adequate actions to recognize target objects. 
For example, when a person asks an autonomous robot to bring an empty plastic bottle, the robot has to examine many objects by applying several actions (Figure 1). This type of information is important, because our object categories are formed on the basis of multimodal information, i.e., not only visual information is used, but also auditory, haptic, and other information. Therefore, a computational model of the active perception should be consistently based on a computational model for multimodal object categorization and recognition.\n\nFIGURE 1", null, "Figure 1. Overview of active perception for multimodal object category recognition. The numbers attached to the arrows show a sample of the order of action selection by the robot.\n\nIn spite of the wide range of studies about active perception (e.g., Borotschnig et al., 2000; Dutta Roy et al., 2004; Eidenberger and Scharinger, 2010; Krainin et al., 2011; Ferreira et al., 2013) and multimodal categorization for robots (e.g., Nakamura et al., 2007, 2011a; Sinapov and Stoytchev, 2011; Celikkanat et al., 2014; Sinapov et al., 2014), active perception methods for a robot, i.e., action selection methods for perception for unsupervised multimodal categorization, have not been sufficiently explored (see section 2).\n\nThis paper considers the active perception problem for unsupervised multimodal object categorization under the condition that a robot has already obtained several action primitives that are used to examine target objects. In the context of this study, we need to study active perception on an unsupervised multimodal categorization method having generality as much as possible because it is believed that unsupervised multimodal categorization is important for future language learning by robots, and the findings obtained in this study should be able to be applied to other unsupervised multimodal categorization models. It was suggested that a child forms a category based on his/her sensorimotor experience before learning a word for the category in a Bayesian manner, and learning the word is a matter of attaching a new label to this preexisting category (Kemp et al., 2010). The multimodal hierarchical Dirichlet process (MHDP) is a mathematically very general and sophisticated nonparametric Bayesian multimodal categorization method. Therefore, we adopt MHDP proposed by Nakamura et al. (2011b) as a representative computational model for unsupervised multimodal object categorization.\n\nWe develop an active perception method based on the MHDP in this paper. The MHDP is a sophisticated, fully Bayesian, probabilistic model for multimodal object categorization (Nakamura et al., 2011b) that is developed by enabling hierarchical Dirichlet process (HDP) (Teh et al., 2006) to have multimodal emission distributions corresponding to multiple sensor information1. Nakamura et al. (2011b) showed that the MHDP enables a robot to form object categories using multimodal information, i.e., visual, auditory, and haptic information, in an unsupervised manner. The MHDP can estimate the number of object categories as well because of the nature of Bayesian nonparametrics.\n\nThis paper describes a new MHDP-based active perception method for multimodal object recognition based on object categories formed by a robot itself. We found that an active perception method that has a good theoretical nature, i.e., the performance of the greedy algorithm is theoretically guaranteed (see section 4), can be derived for MHDP. 
Our formulation is based on a hierarchical Bayesian model. If a cognitive system of a robot is modeled by using hierarchical Bayesian model, a recognition state are usually represented by posterior distribution over latent variables, e.g., object categories. The purpose of an active perception is to infer appropriate posterior distribution with a small number of actions. In our approach, we propose an action selection method that can reduce the distance between inferred posterior distributions and true posterior distributions.\n\nIn this study, we define the active perception problem in the context of unsupervised multimodal object categorization as following. Which set of actions should a robot take to recognize a target object as accurately as possible under the constraint that the number of actions is restricted2? Our MHDP-based active perception method uses an IG maximization criterion, Monte Carlo approximation, and the lazy greedy algorithm. In this paper, we show that the MHDP provides the following three advantages for deriving an efficient active perception method.\n\n1. The IG maximization criterion is optimal in the sense that a selected set of actions minimizes the expected Kullback–Leibler (KL) divergence between the final posterior distribution estimated using the information regarding all modalities and the posterior distribution of the category estimated using the selected set of actions (see section 4.1).\n\n2. The IG has a submodular and non-decreasing property as a set function. Therefore, for performance, the greedy and lazy greedy algorithms are guaranteed to be near-optimal strategies (see section 4.2).\n\n3. A Monte Carlo approximation method for the IG can be derived by exploiting MHDP's properties (see section 4.3).\n\nAlthough the above properties follow from the theoretical characteristics of the MHDP, this has never been pointed out in previous studies.\n\nThe main contributions of this paper are that we\n\n• develop an MHDP-based active perception method, and\n\n• show its effectiveness through experiments using an upper-torso humanoid robot and synthetic data.\n\nThe proposed active perception method can be used for general purposes, i.e., not only for robots but also for other target domains to which the MHDP can be applied. In addition, The proposed method can be easily extended for other multimodal categorization methods with similar graphical models, e.g., multimodal latent Dirichlet allocation (MLDA) (Nakamura et al., 2009). However, in this paper, we focus on the MHDP and the robot active perception scenario, and explain our method on the basis of this task.\n\nThe remainder of this paper is organized as follows. Section 2 describes the background and work related to our study. Section 3 briefly introduces the MHDP, proposed by Nakamura et al. (2011b), which enables a robot to obtain object categories by fusing multimodal sensor information in an unsupervised manner. Section 4 describes our proposed action selection method. Section 5 discusses the effectiveness of the action selection method through experiments using an upper-torso humanoid robot. Section 6 describes a supplemental experiment using synthetic data. Section 7 concludes this paper.\n\n## 2. Background and Related Work\n\n### 2.1. Multimodal Categorization\n\nThe human capability for object categorization is a fundamental topic in cognitive science (Barsalou, 1999). 
In the field of robotics, adaptive formation of object categories that considers a robot's embodiment, i.e., its sensory-motor system, is gathering attention as a way to solve the symbol grounding problem (Harnad, 1990; Taniguchi et al., 2016).\n\nRecently, various computational models and machine learning methods for multimodal object categorization have been proposed in artificial intelligence, cognitive robotics, and related research fields (Roy and Pentland, 2002; Natale et al., 2004; Nakamura et al., 2007, 2009, 2011a,b, 2014; Iwahashi et al., 2010; Sinapov and Stoytchev, 2011; Araki et al., 2012; Griffith et al., 2012; Ando et al., 2013; Celikkanat et al., 2014; Sinapov et al., 2014). For example, Sinapov and Stoytchev (2011) proposed a graph-based multimodal categorization method that allows a robot to recognize a new object by its similarity to a set of familiar objects. They also built a robotic system that categorizes 100 objects from multimodal information in a supervised manner (Sinapov et al., 2014). Celikkanat et al. (2014) modeled the context in terms of a set of concepts that allow many-to-many relationships between objects and contexts using LDA.\n\nOur focus of this paper is not a supervised learning-based, but an unsupervised learning-based multimodal categorization method and an active perception method for categories formed by the method. Of these, a series of statistical unsupervised multimodal categorization methods for autonomous robots have been proposed by extending LDA, i.e., a topic model (Nakamura et al., 2007, 2009, 2011a,b, 2014; Araki et al., 2012; Ando et al., 2013). All these methods are Bayesian generative models, and the MHDP is a representative method of this series (Nakamura et al., 2011b). The MHDP is an extension of the HDP, which was proposed by Teh et al. (2006), and the HDP is a nonparametric Bayesian extension of LDA (Blei et al., 2003). Concretely, the generative model of the MHDP has multiple types of emissions that correspond to various sensor data obtained through various modality inputs. In the HDP, observation data are usually represented as a bag-of-words (BoW). In contrast, the observation data in the MHDP use bag-of-features (BoF) representations for multimodal information. BoF is a histogram-based feature representation that is generated by quantizing observed feature vectors. Latent variables that are regarded as indicators of topics in the HDP correspond to object categories in the MHDP. Nakamura et al. (2011b) showed that the MHDP enables a robot to categorize a large number of objects in a home environment into categories that are similar to human categorization results.\n\nTo obtain multimodal information, a robot has to perform actions and interact with a target object in various ways, e.g., grasping, shaking, or rotating the object. If the number of actions and types of sensor information increase, multimodal categorization and recognition can require a longer time. When the recognition time is limited and/or if quick recognition is required, it becomes important for a robot to select a small number of actions that are effective for accurate recognition. Action selection for recognition is often called active perception. However, an active perception method for the MHDP has not been proposed. This paper aims to provide an active perception method for the MHDP.\n\n### 2.2. Active Perception\n\nGenerally, active perception is one of the most important cognitive capabilities of humans. 
From an engineering viewpoint, active perception has many specific tasks, e.g., localization, mapping, navigation, object recognition, object segmentation, and self–other differentiation.\n\nIn machine learning, active learning is defined as a task in which a method interactively queries an information source to obtain the desired outputs at new data points to learn efficiently Settles (2012). Active learning algorithms select an unobserved input datum and ask a user (labeler) to provide a training signal (label) in order to reduce uncertainty as quickly as possible (Cohn et al., 1996; Muslea et al., 2006; Settles, 2012). These algorithms usually assume a supervised learning problem. This problem is related to the problem in this paper, but is fundamentally different.\n\nHistorically, active vision, i.e., active visual perception, has been studied as an important engineering problem in computer vision. Dutta Roy et al. (2004) presented a comprehensive survey of active three-dimensional object recognition. For example, Borotschnig et al. (2000) proposed an active vision method in a parametric eigenspace to improve the visual classification results. Denzler and Brown (2002) proposed an information theoretic action selection method to gather information that conveys the true state of a system through an active camera. They used the mutual information (MI) as a criterion for action selection. Krainin et al. (2011) developed an active perception method in which a mobile robot manipulates an object to build a three-dimensional surface model of it. Their method uses the IG criterion to determine when and how the robot should grasp the object.\n\nModeling and/or recognizing a single object as well as modeling a scene and/or segmenting objects are also important tasks in the context of robotics. Eidenberger and Scharinger (2010) proposed an active perception planning method for scene modeling in a realistic environment. van Hoof et al. (2012) proposed an active scene exploration method that enables an autonomous robot to efficiently segment a scene into its constituent objects by interacting with the objects in an unstructured environment. They used IG as a criterion for action selection. InfoMax control for acoustic exploration was proposed by Rebguns et al. (2011).\n\nLocalization, mapping, and navigation are also targets of active perception. Velez et al. (2012) presented an online planning algorithm that enables a mobile robot to generate plans that maximize the expected performance of object detection. Burgard et al. (1997) proposed an active perception method for localization. Action selection is performed by maximizing the weighted sum of the expected entropy and expected costs. To reduce the computational cost, they only consider a subset of the next locations. Roy and Thrun (1999) proposed a coastal navigation method for a robot to generate trajectories for its goal by minimizing the positional uncertainty at the goal. Stachniss et al. (2005) proposed an information-gain-based exploration method for mapping and localization. Correa and Soto (2009) proposed an active perception method for a mobile robot with a visual sensor mounted on a pan-tilt mechanism to reduce localization uncertainty. 
They used the IG criterion, which was estimated using a particle filter.

In addition, various studies on active perception by a robot have been conducted (Natale et al., 2004; Ji and Carin, 2006; Schneider et al., 2009; Tuci et al., 2010; Saegusa et al., 2011; Fishel and Loeb, 2012; Pape et al., 2012; Sushkov and Sammut, 2012; Gouko et al., 2013; Hogman et al., 2013; Ivaldi et al., 2014; Zhang et al., 2017). In spite of a large number of contributions about active perception, few theories of active perception for multimodal object category recognition have been proposed. In particular, an MHDP-based active perception method has not yet been proposed, although the MHDP-based categorization method and its series have obtained many successful results and extensions.

### 2.3. Active Perception for Multimodal Categorization

Sinapov et al. (2014) investigated multimodal categorization and active perception by making a robot perform 10 different behaviors, obtain visual, auditory, and haptic information, explore 100 different objects, and classify them into 20 object categories. In addition, they proposed an active behavior selection method based on confusion matrices. They reported that the method was able to reduce the exploration time by half by dynamically selecting the next exploratory behavior. However, their multimodal categorization is performed in a supervised manner, and the theory of active perception is still heuristic. The method does not have theoretical guarantees of performance.

IG-based active perception is popular, as shown above, but the theoretical justification for using IG in each task is often missing in many robotics papers. Moreover, in many cases in robotics studies, IG cannot be evaluated directly, reliably, or accurately. When one takes an IG criterion-based approach, how to estimate the IG is an important problem. In this study, we focus on MHDP-based active perception and develop an efficient near-optimal method based on firm theoretical justification.

## 3. Multimodal Hierarchical Dirichlet Process for Statistical Multimodal Categorization

We assume that a robot forms object categories using the MHDP from multimodal sensory data. In this section, we briefly introduce the MHDP on which our proposed active perception method is based (Nakamura et al., 2011b). The MHDP assumes that an observation node in its graphical model corresponds to an action and its corresponding modality. Nakamura et al. (2011b) employed three observation nodes in their graphical model, i.e., haptic, visual, and auditory information nodes. Three actions, i.e., grasping, looking around, and shaking, correspond to these modalities, respectively. However, the MHDP can be easily extended to a model with additional types of sensory inputs. It is without doubt that autonomous robots will also gain more types of action for perception. For modeling more general cases, an MHDP with M actions is described in this paper. A graphical model of the MHDP is illustrated in Figure 2. For more details, please refer to Nakamura et al. (2011b).

Figure 2. Graphical representation of an MHDP with M modalities corresponding to actions for perception.

The index $m \in M$ (with #(M) = M) in Figure 2 represents the type of information that corresponds to an action for perception, e.g., hitting an object to obtain its sound, grasping an object to test its shape and hardness, or looking at all of an object by rotating it.
We assume that a robot has a set of action primitives and that it can execute one of them by selecting its index. The observation $x_{jn}^{m} \in X^{m}$ is the m-th modality's n-th feature for the j-th target object, and $X^{m}$ represents the set of observations of the m-th modality. The observation $x_{jn}^{m}$ is assumed to be drawn from a categorical distribution whose parameter is $\theta_{k}^{m}$, where k is the index of a latent topic. Each index k is drawn from a categorical distribution whose parameter is β, which is itself drawn from a Dirichlet distribution parametrized by γ. Parameter $\theta_{k}^{m}$ is assumed to be drawn from a Dirichlet prior distribution whose parameter is $\alpha_{0}^{m}$. The MHDP assumes that a robot obtains each modality's sensory information as a BoF representation. Each latent variable $t_{jn}^{m}$ is drawn from a topic proportion, i.e., a parameter of a multinomial distribution, of the j-th object, $\pi_{j}$, whose prior is a Dirichlet distribution parametrized by λ.

Similarly to the generative process of the original HDP (Teh et al., 2006), the generative process of the MHDP can be described as a Chinese restaurant franchise, which is the name of a special type of probabilistic process in Bayesian nonparametrics (Teh et al., 2005). The learning and recognition algorithms are both derived using Gibbs sampling. In its learning process, the MHDP estimates a latent variable $t_{jn}^{m}$ for each feature of the j-th object and a topic index $k_{jt}$ for each latent variable t. The combination of latent variable and topic index corresponds to a topic in LDA (Blei et al., 2003). Using the estimated latent variables, the categorical distribution parameter $\theta_{k}^{m}$ and the topic proportion of the j-th object $\pi_{j}$ are drawn from the posterior distribution.

The selection procedure for latent variable $t_{jn}^{m}$ is as follows. The prior probability that $x_{jn}^{m}$ selects t is

$$P(t_{jn}^{m}=t \mid \lambda)=\begin{cases} \dfrac{\sum_{m} w^{m} N_{jt}^{m}}{\lambda+\sum_{m} w^{m} N_{j}^{m}-1}, & (t=1,\cdots,T_{j}),\\[2mm] \dfrac{\lambda}{\lambda+\sum_{m} w^{m} N_{j}^{m}-1}, & (t=T_{j}+1), \end{cases}$$

where $w^{m}$ is a weight for the m-th modality. To balance the influence of the different modalities, the $w^{m}$ are set as hyperparameters; the weight $w^{m}$ increases the influence of modality m on multimodal category formation. $N_{jt}^{m}$ is the number of m-th modality observations that are allocated to t in the j-th object, $N_{j}^{m}$ is the number of the m-th modality's observations about the j-th object, and λ is a hyperparameter. As in the Chinese restaurant process, if the number of observed features $N_{jt}=\sum_{m} w^{m} N_{jt}^{m}$ that are allocated to t increases, the probability that a new observation is allocated to the latent variable t increases. Combining this prior with the likelihood, the posterior probability that observation $x_{jn}^{m}$ is allocated to the latent variable t becomes

$$P(t_{jn}^{m}=t \mid X, \lambda) \propto P(x_{jn}^{m} \mid X_{k_{jt}}^{m})\, P(t_{jn}^{m}=t \mid \lambda),$$

where the set of observations that correspond to the m-th modality and have the k-th topic in any object is represented by $X_{k}^{m}$.

In the Gibbs sampling procedure, a latent variable for each observation is drawn from this posterior probability distribution. If $t = T_{j} + 1$, the observation is allocated to a new latent variable. The dish selection procedure is as follows. The prior probability that the k-th topic is allocated to the t-th latent variable follows the same Chinese-restaurant form: an existing topic k is chosen with probability proportional to $M_{k}$, and a new topic ($k = K + 1$) with probability proportional to γ, where K is the number of topic types, and $M_{k}$ is the number of latent variables on which the k-th topic is placed.
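To make the bookkeeping behind these priors concrete, the table-level choice can be sketched in a few lines of Python. This is only an illustrative sketch of the weighted Chinese-restaurant prior above; the array layout and the function name are ours, not part of the MHDP implementation.

```python
import numpy as np

def table_assignment_prior(N_jt, w, lam):
    """Prior over table assignments for one new feature of object j.

    N_jt : (M, T_j) array; N_jt[m, t] = number of modality-m features of object j
           currently assigned to table t (excluding the feature being assigned).
    w    : (M,) array of modality weights w^m.
    lam  : concentration hyperparameter lambda.
    Returns a length-(T_j + 1) probability vector; the last entry is the new-table case.
    """
    weighted = (w[:, None] * N_jt).sum(axis=0)   # sum_m w^m N^m_jt for each existing table t
    scores = np.append(weighted, lam)            # existing tables vs. opening a new table
    return scores / scores.sum()                 # normalize (with general weights the raw
                                                 # formula is only proportional to a probability)

# Toy usage: 3 modalities, 2 existing tables, equal weights.
N_jt = np.array([[4.0, 1.0],
                 [2.0, 0.0],
                 [3.0, 2.0]])
print(table_assignment_prior(N_jt, w=np.ones(3), lam=1.0))
```

The dish-level (topic-level) choice described just above follows the same pattern, with the table counts $M_{k}$ and concentration γ in place of the weighted feature counts and λ.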
Combining this prior with the likelihood of the observations allocated to the table, the posterior probability that the k-th topic is allocated to the t-th latent variable becomes

$$P(k_{jt}=k \mid X, \gamma) \propto P(X_{jt} \mid X_{k})\, P(k_{jt}=k \mid \gamma),$$

where $X=\cup_{m} X^{m}$, $X_{k}=\cup_{m} X_{k}^{m}$, and $X_{jt}$ is the set of the j-th object's observations allocated to the t-th latent variable. A topic index for the latent variable t of the j-th object is drawn using this posterior probability, where γ is a hyperparameter. If $k = K + 1$, a new topic is placed on the latent variable.

By sampling $t_{jn}^{m}$ and $k_{jt}$, the Gibbs sampler performs probabilistic object clustering:

$$t_{jn}^{m} \sim P(t_{jn}^{m}=t \mid X^{-mjn}, \lambda), \qquad (1)$$
$$k_{jt} \sim P(k_{jt}=k \mid X^{-jt}, \gamma), \qquad (2)$$

where $X^{-mjn}=X \setminus \{x_{jn}^{m}\}$ and $X^{-jt}=X \setminus X_{jt}$. By sampling $t_{jn}^{m}$ for each observation in every object using (1) and sampling $k_{jt}$ for each latent variable t in every object using (2), all of the latent variables in the MHDP can be inferred.

If $t_{jn}^{m}$ and $k_{jt}$ are given, the probability that the j-th object is included in the k-th category becomes

$$P(k \mid X_{j}) = \frac{\sum_{m} w^{m} \sum_{n} \delta_{k}\!\left(k_{j t_{jn}^{m}}\right)}{\sum_{m} w^{m} N_{j}^{m}}, \qquad (3)$$

where $X_{j}=\cup_{m} X_{j}^{m}$, $w^{m}$ is the weight for the m-th modality, and $\delta_{a}(x)$ is a delta function.

When a robot attempts to recognize a new object after the learning phase, the probability that feature $x_{jn}^{m}$ is generated from the k-th topic becomes

$$P(x_{jn}^{m} \mid X_{k}^{m})=\frac{w^{m} N_{k x_{jn}^{m}}^{m}+\alpha_{0}^{m}}{w^{m} N_{k}^{m}+d^{m}\alpha_{0}^{m}},$$

where $d^{m}$ denotes the dimension of the m-th modality input, and $N_{k x_{jn}^{m}}^{m}$ represents the number of features $x_{jn}^{m}$ corresponding to the index k. The topic $k_{t}$ allocated to t for a new object is sampled from the corresponding posterior, $k_{t} \sim P(k_{jt}=k \mid X, \gamma) \propto P(X_{jt} \mid X_{k})\, P(k_{jt}=k \mid \gamma)$, i.e., the same form as (2) with the latent variables of the already-learned objects held fixed. These sampling procedures play an important role in the Monte Carlo approximation of our proposed method (see section 4.3).

For a more detailed explanation of the MHDP, please refer to Nakamura et al. (2011b). Basically, a robot can autonomously learn object categories and recognize new objects using the multimodal categorization procedure described above. The performance and effectiveness of the method were evaluated in that paper.

## 4. Active Perception Method

### 4.1. Basic Formulation

A robot should already have conducted several actions and obtained information from several modalities when it attempts to select the next action set for recognizing a target object. For example, visual information can usually be obtained by looking at the front face of the j-th object from a distance before interacting with the object physically. We therefore assume that the robot has already obtained information corresponding to a subset of modalities $m_{oj} \subseteq M$, where the subscript o stands for "originally" obtained modality information. When a robot faces a new object and has not obtained any information, $m_{oj} = \emptyset$.

The purpose of object recognition in multimodal categorization is different from that in conventional supervised learning-based pattern recognition problems. In supervised learning, the recognition result is evaluated by checking whether the output is the same as the truth label. However, in unsupervised learning, there are basically no truth labels. Therefore, the performance of active perception should be measured in a different manner.

The action set the robot selects is described as $A \in \mathcal{A}$, where $\mathcal{A}$ is a family of subsets of $M \setminus m_{oj}$, i.e., $A \subseteq M \setminus m_{oj}$ with $a_{i} \in M \setminus m_{oj}$, and $N_{A}$ represents the number of available actions. We consider an effective action set for active perception to be one that largely reduces the distance between the final recognition state after the information from all modalities M is obtained and the recognition state after the robot executes the selected action set A.
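Throughout the paper, this distance between recognition states is measured with the KL divergence between category posteriors (see also the evaluation in section 5.3). As a small, self-contained illustration for discrete posteriors (the function name is ours):

```python
import numpy as np

def kl_categorical(p_final, q_partial, eps=1e-12):
    """KL(p_final || q_partial) between two categorical posteriors over object categories."""
    p = np.asarray(p_final, dtype=float) + eps
    q = np.asarray(q_partial, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# e.g., final recognition state vs. the state after obtaining only visual information
print(kl_categorical([0.8, 0.1, 0.1], [0.4, 0.4, 0.2]))
```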
The recognition state is represented by the posterior distribution $P(z_{j} \mid X_{j}^{A}, X_{j}^{m_{oj}})$. Here, $z_{j}$ is a latent variable representing the j-th object's topic information, $X_{j}^{A}=\cup_{m \in A} X_{j}^{m}$, and $X_{j}^{m}=\{x_{j1}^{m},\dots,x_{jn}^{m},\dots,x_{jN_{j}^{m}}^{m}\}$. The probability $P(z_{j} \mid X_{j}^{A}, X_{j}^{m_{oj}})$ represents the posterior distribution related to the object category after taking the actions $m_{oj}$ and A.

The final recognition state, i.e., the posterior distribution over latent variables after obtaining the information from all modalities M, becomes $P(z_{j} \mid X_{j}^{M})$. The purpose of active perception is to select a set of actions that can estimate this posterior distribution as accurately as possible. When L actions can be executed, if we employ the KL divergence as the metric of the difference between the two probability distributions,

$$\hat{A}_{j} = \underset{A \in \mathcal{A}_{L}}{\operatorname{argmin}}\; D_{\mathrm{KL}}\!\left( P(z_{j} \mid X_{j}^{M}) \,\middle\|\, P(z_{j} \mid X_{j}^{A}, X_{j}^{m_{oj}}) \right) \qquad (4)$$

is a reasonable evaluation criterion for realizing effective active perception, where $\mathcal{A}_{L} \subseteq \mathcal{A}$ is the feasible set of action sets of size L.

However, neither the true $X_{j}^{A}$ nor the final posterior $P(z_{j} \mid X_{j}^{M})$ can be observed before taking A on the j-th target object, and hence (4) cannot be used at the moment of action selection. Therefore, a rational alternative for the evaluation criterion is the expected value of the KL divergence at the moment of action selection:

$$\hat{A}_{j} = \underset{A \in \mathcal{A}_{L}}{\operatorname{argmin}}\; \mathbb{E}_{X_{j}^{M \setminus m_{oj}} \mid X_{j}^{m_{oj}}}\!\left[ D_{\mathrm{KL}}\!\left( P(z_{j} \mid X_{j}^{M}) \,\middle\|\, P(z_{j} \mid X_{j}^{A}, X_{j}^{m_{oj}}) \right) \right]. \qquad (5)$$

Here, we propose to use the IG maximization criterion to select the next action set for active perception:

$$\hat{A}_{j} = \underset{A \in \mathcal{A}_{L}}{\operatorname{argmax}}\; IG(z_{j} ; X_{j}^{A} \mid X_{j}^{m_{oj}}), \qquad (6)$$

where IG(X; Y|Z) is the IG of Y for X, which is calculated on the basis of the probability distribution commonly conditioned on Z as follows:

$$IG(X;Y \mid Z) = \mathbb{E}_{X,Y \mid Z}\!\left[ \log \frac{P(X, Y \mid Z)}{P(X \mid Z)\,P(Y \mid Z)} \right]. \qquad (7)$$

By definition, the expected KL divergence is the same as IG(X; Y); the relation between IG and the KL divergence can be written as

$$IG(X;Y) = \mathbb{E}_{Y}\!\left[ D_{\mathrm{KL}}\!\left( P(X \mid Y) \,\middle\|\, P(X) \right) \right]. \qquad (8)$$

The optimality of the proposed criterion (6) is supported by Theorem 1.

Theorem 1. The set of next actions that maximizes $IG(z_{j} ; X_{j}^{A} \mid X_{j}^{m_{oj}})$ minimizes the expected KL divergence between the posterior distribution over $z_{j}$ after all modality information has been observed and after A has been executed.

Proof. See Appendix A.

This theorem is essentially the result of well-known characteristics of IG (see MacKay, 2003; Russo and Van Roy, 2016, for example). This means that maximizing IG is the optimal policy for active perception in an MHDP-based multimodal object category recognition task. As a special case, when only a single action is permitted, the following corollary is satisfied.

Corollary 1.1. The next action $m \in M \setminus m_{oj}$ that maximizes $IG(z_{j} ; X_{j}^{m} \mid X_{j}^{m_{oj}})$ minimizes the expected KL divergence between the posterior distribution over $z_{j}$ after all modality information has been observed and after the action has been executed.

Proof. By substituting {m} into A in Theorem 1, we obtain the corollary.

Using IG, the active perception strategy for the next single action is simply described as follows:

$$m_{j}^{*} = \underset{m \in M \setminus m_{oj}}{\operatorname{argmax}}\; IG(z_{j} ; X_{j}^{m} \mid X_{j}^{m_{oj}}). \qquad (9)$$

This means that the robot should select the action $m_{j}^{*}$ that can obtain the $X_{j}^{m_{j}^{*}}$ that maximizes the IG for the recognition result $z_{j}$ under the condition that the robot has already observed $X_{j}^{m_{oj}}$.

However, we still have two problems, as follows.

1. The argmax operation in (6) is a combinatorial optimization problem and incurs a heavy computational cost when #(M \ moj) and L become large.

2. The calculation of $IG(z_{j} ; X_{j}^{A} \mid X_{j}^{m_{oj}})$ cannot be performed in a straightforward manner.

Based on some properties of the MHDP, we can obtain reasonable solutions for these two problems.
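The identity behind Theorem 1 — that information gain equals the expected KL divergence between the updated and the current belief — can be checked numerically on a toy discrete model before we address these two problems. The snippet below only illustrates this standard identity; it does not involve the MHDP itself.

```python
import numpy as np

# Toy joint distribution P(z, x) over a latent z (3 values) and an observation x (4 values).
rng = np.random.default_rng(0)
joint = rng.random((3, 4))
joint /= joint.sum()

p_z = joint.sum(axis=1)      # P(z)
p_x = joint.sum(axis=0)      # P(x)
p_z_given_x = joint / p_x    # P(z | x), one column per value of x

# Information gain (mutual information): H(z) - E_x[ H(z | x) ]
H = lambda p: -np.sum(p * np.log(p))
ig = H(p_z) - np.sum(p_x * np.array([H(p_z_given_x[:, x]) for x in range(4)]))

# Expected KL divergence between the posterior after observing x and the current belief P(z).
exp_kl = np.sum(p_x * np.array([
    np.sum(p_z_given_x[:, x] * np.log(p_z_given_x[:, x] / p_z)) for x in range(4)
]))

print(np.isclose(ig, exp_kl))  # True: IG(z; x) = E_x[ KL(P(z|x) || P(z)) ]
```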
### 4.2. Sequential Decision Making as a Submodular Maximization

If a robot wants to select L actions $A_{j} = \{a_{1}, a_{2}, \dots, a_{L}\}$ ($a_{i} \in M \setminus m_{oj}$), it has to solve (6), i.e., a combinatorial optimization problem. The number of combinations of L actions is $_{\#(M \setminus m_{oj})}C_{L}$, which increases dramatically when the number of possible actions #(M \ moj) and L increase. For example, Sinapov et al. (2014) gave a robot 10 different behaviors in their experiment on robotic multimodal categorization. Future autonomous robots will have more available actions for interacting with a target object and be able to obtain additional types of modality information through these interactions. Hence, it is important to develop an efficient solution for the combinatorial optimization problem.

Here, the MHDP has advantages for solving this problem.

Theorem 2. The evaluation criterion $IG(z_{j} ; X_{j}^{A} \mid X_{j}^{m_{oj}})$ for multimodal active perception is a submodular and non-decreasing function with regard to A.

Proof. As shown in the graphical model of the MHDP in Figure 2, the observations for each modality $X_{j}^{m}$ are conditionally independent under the condition that the set of latent variables $z_{j}=\{\{k_{jt}\}_{1 \le t \le T_{j}}, \{t_{jn}^{m}\}_{m \in M, 1 \le n \le N_{j}^{m}}\}$ is given. This satisfies the conditions of the theorem by Krause and Guestrin (2005). Therefore, $IG(z_{j} ; X_{j}^{A} \mid X_{j}^{m_{oj}})$ is a submodular and non-decreasing function with regard to the selected observations $X_{j}^{A}$, i.e., with regard to A.

Submodularity is a property similar to the convexity of a real-valued function in a vector space. If a set function $F : 2^{V} \to \mathbb{R}$ satisfies

$$F(A \cup \{x\}) - F(A) \ge F(A' \cup \{x\}) - F(A'),$$

where V is a finite set, for all $A \subseteq A' \subseteq V$ and $x \in V \setminus A'$, then the set function F has submodularity and is called a submodular function.

The function IG is not always a submodular function. However, Krause et al. proved that IG(U; A) is submodular and non-decreasing with regard to $A \subseteq S$ if all of the elements of S are conditionally independent under the condition that U is given. With this theorem, Krause and Guestrin (2005) solved the sensor allocation problem efficiently. Theorem 2 means that problem (6) is reduced to a submodular maximization problem.

It is known that the greedy algorithm is an efficient strategy for the submodular maximization problem. Nemhauser et al. (1978) proved that the greedy algorithm can select a subset that is at most a constant factor (1−1/e) worse than the optimal set, if the evaluation function F(A) is submodular, non-decreasing, and F(∅) = 0, where F(·) is a set function and A is a set. If the evaluation function is a submodular set function, a greedy algorithm is therefore practically sufficient for selecting subsets in many cases; in short, the greedy algorithm gives a near-optimal solution. However, the greedy algorithm is still inefficient because it requires an evaluation of all remaining choices at each step of a sequential decision-making process.

Minoux (1978) proposed the lazy greedy algorithm to make the greedy algorithm more efficient for a submodular evaluation function. The lazy greedy algorithm can reduce the number of evaluations by using the characteristics of a submodular function.

### 4.3. Monte Carlo Approximation of IG

Equations (6) and (9) provide a robot with an appropriate criterion for selecting an action to efficiently recognize a target object. However, at first glance, it looks difficult to calculate the IG. First, the calculation of the expectation requires a sum operation over all possible $X_{j}^{A}$.
The number of possible ${X}_{j}^{\\text{A}}$ exponentially increases when the number of elements in the BoF increases. Second, the calculation of for each possible observation ${X}_{j}^{\\text{A}}$ requires the same computational cost as recognition in the multimodal categorization itself. Therefore, the straightforward calculation for solving (9) is computationally impossible in a practical sense.\n\nHowever, by exploiting a characteristic property of the MHDP, a Monte Carlo approximation can be derived. First, we describe IG as the expectation of a logarithm term.\n\nAn analytic evaluation of (10) is also practically impossible. Therefore, we adopt a Monte Carlo method. Equation (10) suggests that an efficient Monte Carlo approximation can be performed as shown below if we can sample\n\nFortunately, the MHDP provides a sampling procedure for and ${X}_{j}^{m\\left[k\\right]}~P\\left({X}_{j}^{m}|{z}_{j}^{\\left[k\\right]}\\right)$ in its original paper (Nakamura et al., 2011b). In the context of multimodal categorization by a robot, ${X}_{j}^{m\\left[k\\right]}~P\\left({X}_{j}^{m}|{z}_{j}^{\\left[k\\right]}\\right)$ is a prediction of an unobserved modality's sensation using observed modalities' sensations, i.e., cross-modal inference. The sampling process of $\\left({z}_{j}^{\\left[k\\right]},{X}_{j}^{m\\left[k\\right]}\\right)$ can be regarded as a mental simulation by a robot that predicts the unobserved modality's sensation leading to a categorization result based on the predicted sensation and observed information.\n\nIn (11), in the numerator can be easily calculated because all the parent nodes of ${X}_{j}^{m\\left[k\\right]}$ are given in the graphical model shown in Figure 2. However, in the denominator cannot be evaluated in a straightforward way. Again, a Monte Carlo method can be adopted, as follows:\n\nwhere K′ is the number of samples for the second Monte Carlo approximation. Fortunately, in this Monte Carlo approximation (12), we can reuse the samples drawn in the previous Monte Carlo approximation efficiently, i.e., K′ = K. By substituting (12) for (11), we finally obtain the approximate IG for the criterion of active perception, i.e., our proposed method, as follows:\n\nNote that the computational cost for evaluating IG becomes O(K2). In summary, a robot can approximately estimate the IG for unobserved modality information by generating virtual observations based on observed data and evaluating their likelihood.\n\n### 4.4. MHDP-Based Active Perception Methods\n\nWe propose the use of the greedy and lazy greedy algorithms for selecting L actions to recognize a target object on the basis of the submodular property of IG. The final greedy and lazy greedy algorithms for MHDP-based active perception, i.e., our proposed methods, are shown in Algorithms 1 and 2, respectively.\n\nALGORITHM 1\nALGORITHM 2\n\nThe main contribution of the lazy greedy algorithm is to reduce the computational cost of active perception. The majority of the computational cost originates from the number of times a robot evaluates IGm for determining action sequences. When a robot has to choose L actions, the brute-force algorithm that directly evaluates all alternatives using (6) requires #(M \\ moj)CL evaluations of . In contrast, the greedy algorithm requires {#(M \\ moj) + (#(M \\ moj)−1) + … + (#(M \\ moj)−L + 1)} evaluations of , i.e., O(ML). The lazy greedy algorithm incurs the same computational cost as the greedy algorithm only in the worst case. 
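The control flow of lazy greedy selection can be sketched as follows, in the spirit of Minoux (1978); this is not the authors' Algorithm 2, and all names are ours. The gain oracle here is a toy coverage function standing in for the Monte Carlo IG estimate of section 4.3: in the proposed method, `gain_fn` would return the estimated IG of the candidate action given the actions already selected.

```python
import heapq

def lazy_greedy(actions, gain_fn, budget):
    """Lazy greedy selection for a non-decreasing submodular objective.

    actions : iterable of candidate actions (here: modality indices).
    gain_fn : gain_fn(selected, a) -> marginal gain of adding a to selected.
    budget  : number of actions to select (L).
    """
    selected = []
    # Max-heap of (negated stale gain, action); heapq is a min-heap.
    heap = [(-gain_fn([], a), a) for a in actions]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        neg_stale, a = heapq.heappop(heap)
        fresh = gain_fn(selected, a)           # re-evaluate only the current top candidate
        if not heap or fresh >= -heap[0][0]:   # still at least as good as the best stale gain?
            selected.append(a)
        else:
            heapq.heappush(heap, (-fresh, a))  # otherwise push back with the updated gain
    return selected

# Toy submodular objective: set coverage (a stand-in for the IG oracle).
coverage = {"look": {1, 2, 3}, "grasp": {3, 4}, "shake": {4, 5, 6}, "hit": {5, 6, 7, 8}}

def gain_fn(selected, a):
    covered = set().union(*(coverage[s] for s in selected)) if selected else set()
    return len(covered | coverage[a]) - len(covered)

print(lazy_greedy(coverage.keys(), gain_fn, budget=2))  # e.g., ['hit', 'look']
```

Because gains computed against earlier, smaller selected sets upper-bound the current gains (submodularity), only the entry at the top of the queue ever needs to be re-evaluated, which is the source of the savings discussed below and measured in section 6.2.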
However, practically, the number of re-evaluations in the lazy greedy algorithm is quite small. Therefore, the computational cost of the lazy greedy algorithm increases almost in proportion to L, i.e., almost linearly. The memory requirement of the proposed method is also quite small. Both the greedy and lazy greedy algorithms only require memory for IGm for each modality and K samples for the Monte Carlo approximation. These requirements are negligibly small compared with the MHDP itself.\n\nNote that the IGm is not the exact IG, but an approximation. Therefore, the differences between IG and IGm may harm the performance of greedy and lazy greedy algorithms to a certain extent. However, the algorithms are expected to work practically. We evaluated the algorithms through experiments.\n\n## 5. Experiment 1: Humanoid Robot\n\n### 5.1. Conditions\n\nAn experiment using an upper-torso humanoid robot was conducted to verify the proposed active perception method in the real-world environment. In this experiment, RIC-Torso, developed by the RT Corporation, was used (see Figure 3). RIC-Torso is an upper-torso humanoid robot that has two robot hands. We prepared an experimental environment that is similar to the one in the original MHDP paper (Nakamura et al., 2011b). The robot has four available actions and four corresponding modality information. The set of modalities was M = {mv, mas, mah, mh}, which represent visual information, auditory information obtained by shaking an object, one by hitting an object and haptic information, respectively.\n\nFIGURE 3\n\n#### 5.1.1. Visual Information (mv)\n\nVisual information was obtained from the Xtion PRO LIVE set on the head of the robot. The camera was regarded as the eyes of the robot. The robot captured 74 images of a target object while it rotated on a turntable (see Figure 3). The size of each image was re-sized to 320 × 240. Scale-invariant feature transform (SIFT) feature vectors were extracted from each captured image (Lowe, 2004). A certain number of 128-dimensional feature vectors were obtained from each image. Note that the SIFT feature did not consider hue information. All of the obtained feature vectors were transformed into BoF representations using k-means clustering with k = 25. The number of clusters k was determined empirically, considering prior works (Nakamura et al., 2011b; Araki et al., 2012). The k-means clustering was performed using data from all objects in a training set, and the centroids of the clusters were determined. BoF representations were used as observation data for the visual modality of the MHDP. The index for this modality was defined as mv.\n\n#### 5.1.2. Auditory Information (mas and mah)\n\nAuditory information was obtained from a multipowered shotgun microphone NTG-2 by RODE Microphone. The microphone was regarded as the ear of the robot. In this experiment, two types of auditory information were acquired. One was generated by hitting the object, and the other was generated by shaking it. The two sounds were regarded as different auditory information and hence different modality observations in the MHDP model. The two actions, i.e., hitting and shaking, were manually programmed for the robot. Each action was implemented as a fixed trajectory. When the robot began to execute an action, it also started recording the objects's sound (see Figure 3). The sound was recorded until two seconds after the robot finished the action. 
The recorded auditory data were temporally divided into frames, and each frame was transformed into 13-dimensional Mel-frequency cepstral coefficients (MFCCs). The MFCC feature vectors were transformed into BoF representations using k-means clustering with k = 25 in the same way as the visual information. The indices of these modalities were defined as mas and mah, respectively, for “shake” and “hit.”\n\n#### 5.1.3. Haptic Information (mh)\n\nHaptic information was obtained by grasping a target object using the robot's hand. When the robot attempted to obtain haptic information from an object placed in front of it, it moved its hand to the object and gradually closed its hand until a certain amount of counterforce was detected (see Figure 3). The joint angle of the hand was measured when the hand touched the target object and when the hand stopped. The two variables and difference between the two angles were used as a three-dimensional feature vector. When obtaining haptic information, the robot grasped the target object 10 times and obtained 10 feature vectors. The feature vectors were transformed into BoF representations using k-means clustering with k = 5 in the same way as for the other information types. The index of the haptic modality was defined as mh.\n\n#### 5.1.4. Multimodal Information as BoF Representations\n\nIn summary, a robot could obtain multimodal information from four modalities for perception. The dimensions of the BoFs were set to 25, 25, 25, and 5 for mv, mas, mah, and mh, respectively. The dimension of each BoF corresponds to the number of clusters for k-means clustering. The numbers of clusters, i.e., the sizes of the dictionaries, were empirically determined on the basis of a preliminary experiment on multimodal categorization. All of the training datasets were used to train the dictionaries. The histograms of the feature vectors, i.e., the BoFs, were resampled to make their counts ${N}_{j}^{{m}^{v}}=100,{N}_{j}^{{m}^{as}}=80,{N}_{j}^{{m}^{ah}}=130$, and ${N}_{j}^{{m}^{h}}=30$. The weight of each modality wm was set to 1. The formation of multimodal object categories itself is out of the scope of this paper. Therefore, the constants were empirically determined so that the robot could form object categories that are similar to human participants. The number of samples K in the Monte Carlo approximation for estimating IG was set to K = 5, 000. The constant K was determined empirically. The effect of K will be examined in the experiment as well (see Figure 11).\n\n#### 5.1.5. Target Objects\n\nFor the target objects, 17 types of commodities were prepared for the experiment shown in Figure 4. An object was provided for obtaining a training data, i.e., data for object categorization, and another object was provided for obtaining test data, i.e., data for active perception, for each type of objects. Each index on the right-hand side of the figure indicates the index of each object. The hardness of the balls, the striking sounds of the cups, and the sounds made while shaking the bottles were different depending on the object categories. Therefore, ground-truth categorization could not be achieved using visual information alone.\n\nFIGURE 4", null, "Figure 4. (Left) target objects used in the experiment and (Right) categorization results obtained in the experiment.\n\n### 5.2. Procedure\n\nThe experimental procedure was as follows. First, the robot formed object categories through multimodal categorization in an unsupervised manner. 
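As a brief aside before the procedure continues, the feature-to-BoF quantization shared by all of the modalities in section 5.1 can be summarized in code. This is a minimal sketch using the visual settings above (k = 25 codewords, histograms resampled to 100 counts) and scikit-learn's k-means for convenience; the random arrays stand in for real SIFT or MFCC descriptors, and the function names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(feature_vectors, k):
    """Learn a k-means codebook from all training feature vectors of one modality."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(feature_vectors)

def to_bof(codebook, feature_vectors, n_resample):
    """Quantize one object's feature vectors into a bag-of-features histogram,
    then resample it to a fixed total count (as done per modality above)."""
    words = codebook.predict(feature_vectors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return np.round(hist / hist.sum() * n_resample).astype(int)

# e.g., visual modality: 128-dim descriptors, k = 25 codewords, resampled to 100 counts
train_descriptors = np.random.rand(5000, 128)   # stand-in for the training set's SIFT features
codebook = fit_codebook(train_descriptors, k=25)
object_descriptors = np.random.rand(300, 128)   # stand-in for one object's descriptors
print(to_bof(codebook, object_descriptors, n_resample=100))
```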
An experimenter placed each object in front of the robot one by one. In this training phase, two objects for each type of objects were provided. The robot looked at the object to obtain visual features, grasped it to obtain haptic features, shook it to obtain auditory shaking features, and hit it to obtain the auditory striking features. After obtaining the multimodal information of the objects as a training data set, the MHDP was trained using a Gibbs sampler. The results of multimodal categorization are shown in Figure 4. The category that has the highest posterior probability for each object is shown in white. These results show that the robot can form multimodal object categories using MHDP, as described in Nakamura et al. (2011b). After the robot had formed object categories, we fixed the latent variables for the training data set3.\n\nSecond, an experimental procedure for active perception was conducted. An experimenter placed an object in front of the robot. The robot observed the object using its camera, obtained visual information, and set . An object was provided for each type of objects shown in Figure 4 to the robot one by one. Therefore, 17 objects were used for evaluating each active perception strategy. The sequential action selection and object recognition were performed once per an object. At each step of the sequential action selection, Gibbs sampler for MHDP was performed and it updated its latent variables, i.e., recognition state, of the MHDP. The robot then determined its next set of actions for recognizing the target object using its active perception strategy shown in Algorithms 1 and 2.\n\n### 5.3. Results\n\n#### 5.3.1. Selecting the Next Action\n\nFirst, we describe results for the first single action selection after obtaining visual information. In this experiment, the robot had three choices for its next action, i.e., mas, mah, and mh. To evaluate the results of active perception, we used , i.e., the distance between the posterior distribution over the object categories k in the final recognition state and that in the next recognition state as an evaluation criterion on behalf of , which is the original evaluation criterion in (4). The computational cost for numerical evaluation of using a Monte Carlo method is too high because ${z}_{j}=\\left\\{{\\left\\{{k}_{jt}\\right\\}}_{1\\le t\\le {T}_{j}},{\\left\\{{t}_{jn}^{m}\\right\\}}_{m\\in M,1\\le n\\le {N}_{j}^{m}}\\right\\}$ has so many variables and a posterior distributions over zj is very complex.\n\nFigure 5 (Top) shows samples of the KL divergence between the posterior probabilities of the category after obtaining the information from all modalities and after obtaining only visual information.\n\nFIGURE 5", null, "Figure 5. (Top) Samples of KL divergence between the final recognition state and the posterior probability estimated after obtaining only visual information, (Middle) samples of estimated IGm for each object based on visual information (v), and (Bottom) samples of KL divergence between the final recognition state and the posterior probability estimated after obtaining only visual information and each selected action where as, ah, h represent represent auditory information obtained by shaking an object, one by hitting an object and haptic information, respectively. 
Our theory of multimodal active perception suggests that the action with the highest information gain (shown in the middle) tends to lead its initial recognition state (whose KL divergence from the final recognition state is shown at the top) to a recognition state whose KL divergence from the final recognition state (shown at the bottom) is the smallest. These figures suggest the probabilistic relationships were satisfied as a whole.\n\nWith regard to some objects, e.g., objects 6 and 7, the figure shows samples of that visual information seems to be sufficient for the robot to recognize the objects as compared the other objects4. However, with regard to many objects, visual information alone could not lead the recognition state to the final state. However, it could be reached using the information of all modalities. Figure 5 (Middle) shows samples of IGm calculated using the visual information for each action. Figure 5 (Bottom) shows the KL divergence between the final recognition state and the posterior probability estimated after obtaining visual information and the information of each selected action. We observe that an action with a higher value of IGm tended to further reduce the KL divergence, as Theorem 1 suggests. Figure 6 shows the average KL divergence for the final recognition state after executing an action selected by the IGm criterion. Actions IG.min, IG.mid, and IG.max denote actions that have the minimum, middle, and maximum values of IGm, respectively. These results show that IG.max clearly reduced the uncertainty of the target objects.\n\nFIGURE 6", null, "Figure 6. Reduction in the KL divergence by executing an action selected on the basis of the IGm maximization criterion. The KL divergences between the recognition state after executing the second action and the final recognition state are calculated for all objects and shown with box plot. This shows that an action with more information brings the recognition of its state closer to the final recognition state.\n\nThe precision of category recognition after an action execution is summarized in Table 1. Basically, a category recognition result is obtained as the posterior distribution (3) in the MHDP. The category with the highest posterior probability is considered to be the recognition result for illustrative purposes in Table 1. Obtaining information by executing IG.max almost always increased recognition performance.\n\nTABLE 1\n\nExamples of changes in the posterior distribution are shown in Figure 7 (Left, Right) for objects 8 (“metal cup”) and 12 (“empty plastic bottle”), respectively. The robot could not clearly recognize the category of object 8 after obtaining visual information. Action IGm in Figure 5 shows that mah was IG.max for the 8th object. Figure 7 (Left) shows that mah reduced the uncertainty and allowed the robot to correctly recognize the object, as evidenced by category 6, a metal cup. This means that the robot noticed that the target object was a metal cup by hitting it and listening to its metallic sound. The metal cup did not make a sound when the robot shook it. Therefore, the IG for mas was small. As Figure 7 (Right) shows, the robot first recognized the 12th object as a plastic bottle containing bells with high probability and as an empty plastic bottle with a low probability. Figure 5 shows that the IGm criterion suggested mah as the first alternative and mas as the second alternative. 
Figure 7 (Right) shows that mas and mah could determine that the target object was an empty plastic bottle, but mh could not.\n\nFIGURE 7", null, "Figure 7. (Left) Posterior probability of the category for object 8 after executing each action. These results show that the action with the highest information gain, i.e., ah, allowed the robot to efficiently estimate that the true object category was “metal cup”. (Right) Posterior probability of the category for object 12 after executing each action. These results show that the actions with the highest and second highest information gain, i.e., ah and as, allowed the robot to efficiently estimate that the true object category was “empty plastic bottle”.\n\nAs humans, we would expect to differentiate an empty bottle from a bottle containing bells by shaking or hitting the bottle, and differentiate a metal cup from a plastic cup by hitting it. The proposed active perception method constructively reproduced this behavior in a robotic system using an unsupervised multimodal machine learning approach.\n\n#### 5.3.2. Selecting the Next Set of Multiple Actions\n\nWe evaluated the greedy and lazy greedy algorithms for active perception sequential decision making. The KL divergence from the final state for all target objects is averaged at each step and shown in Figure 8. For each condition, the KL divergence gradually decreased and reached almost zero. However, the rate of decrease notably differed. As the theory of submodular optimization suggests, the greedy algorithm was shown to be a better solution on average and slightly worse than the best case (Nemhauser et al., 1978). The best and worst cases were selected after all types of sequential actions had been performed. The “average” is the average of the KL divergence obtained by all possible types of sequential actions. The results for the lazy greedy algorithm were almost same as those of the greedy algorithm, as Minoux (1978) suggested.\n\nFIGURE 8", null, "Figure 8. KL divergence from the final state at each step for each sequential action selection procedure. Note that the line of the lazy greedy algorithm is overlapped by that of the greedy algorithm.\n\nThe sequential behaviors of IGm were observed to determine if their behaviors were consistent with our theories. For example, the changes in IGm at each step as the robot sequentially selected its action to perform on object 10 using the greedy algorithm is shown in Figure 9. Theorem 2 shows that the IG is a submodular function. This predicts that IGm decreases monotonically when a new action is executed in active perception. When the robot obtained only visual information (v only in Figure 9), all values of IGm were still large. After mah was executed on the basis of the greedy algorithm, ${IG}_{{m}^{ah}}$ became zero. At the same time, ${IG}_{{m}^{as}}$ and ${IG}_{{m}^{h}}$ decreased. In the same way, all values of IGm gradually decreased monotonically.\n\nFIGURE 9\n\nFigure 10 shows the time series of the posterior probability of the category for object 10 during sequential active perception. Using only visual information, the robot misclassified the target object as a plastic bottle containing bells (category 3). The action sequence in reverse order did not allow the robot to recognize the object as a steel can at its first step and change its recognition state to an empty plastic bottle (category 4). After the second action, i.e., grasping (mh), the robot recognized the object as a steel can. 
In contrast, the greedy algorithm could determine that the target object was in category 4, i.e., steel can, with its first action.\n\nFIGURE 10", null, "Figure 10. Time series of the posterior probability of the category for object 10 during sequential action selection based on (top) the greedy algorithm, i.e., mahmhmas, and (bottom) its reverse order, i.e., masmhmah.\n\nThe effect of the number of samples K for the Monte Carlo approximation was observed. Figure 11 shows the relation between K and the standard deviation of the estimated IGm for the 15th object for each action after obtaining a visual image. This figure shows that estimation error gradually decreases when K increases. Roughly speaking, K ≥ 1, 000 seems to be required for an appropriate estimate of IGm in our experimental setting. Evaluation of IGm required less than 1 second, which is far shorter than the time required for action execution by a robot. This means that our method can be used in a real-time manner.\n\nFIGURE 11", null, "Figure 11. Standard deviation of the estimated information gain IGm for the 15th object. For each K, 100 values of the estimated information gain IGm were obtained, and their standard deviation is shown.\n\nThese empirical results show that the proposed method for active perception allowed a robot to select appropriate actions sequentially to recognize an object in the real-world environment and in a real-time manner. It was shown that the theoretical results were supported, even in the real-world environment.\n\n## 6. Experiment 2: Synthetic Data\n\nIn experiment 1, the numbers of classes, actions, and modalities as well as the size of dataset were limited. In addition, it was difficult to control the robotic experimental settings so as to check some interesting theoretical properties of our proposed method. Therefore, we performed a supplemental experiment, Experiment 2, using synthetic data comprising 21 object types, 63 objects, and 20 actions, i.e., modalities.\n\nFirst, we checked the validity of our active perception method when the number of types of actions increases. Second, we checked how the method worked when two classes were assigned to the same object. Although the MHDP can categorize an object into two or more categories in a probabilistic manner, each object was classified into a single category in the previous experiment.\n\n### 6.1. Conditions\n\nA synthetic dataset was generated using the generative model that the MHDP assumes (see Figure 2). We prepared 21 virtual object classes, and three objects were generated from each object class, i.e., we obtained 63 objects in total. Among the object classes, 14 object classes are “pure,” and seven object classes are “mixed.” For each pure object class, a multinomial distribution was drawn from the Dirichlet distribution corresponding to each modality. We set the number of modalities M = 20. The hyperparameters of the Dirichlet distributions of the modalities were set to ${\\alpha }_{0}^{m}=0.4\\left(m-1\\right)$ for m > 1. For m = 1, we set ${\\alpha }_{0}^{1}=10$. For each mixed object class, a multinomial distribution for each modality was prepared by mixing the distributions of the two pure object classes. Specifically, the multinomial distribution for the i-th mixed object was obtained by averaging those of the (2i−1)-th and the 2i-th object classes. The observations for each modality of each object were drawn from the multinomial distributions corresponding to the object's class. 
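A generation sketch under these settings may help make the construction concrete. The per-modality BoF count of 20 is stated just below; the BoF dimension `DIM` is our own assumption for illustration, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
M, DIM, N_COUNTS = 20, 10, 20          # modalities, BoF dimension (assumed), counts per modality

# Pure classes: one multinomial parameter vector per modality, drawn from a Dirichlet prior.
def make_pure_class():
    alphas = [10.0 if m == 0 else 0.4 * m for m in range(M)]   # alpha_0^1 = 10, alpha_0^m = 0.4(m-1)
    return [rng.dirichlet(np.full(DIM, a)) for a in alphas]

# Mixed classes: average the per-modality distributions of two pure classes.
def mix(class_a, class_b):
    return [(p + q) / 2.0 for p, q in zip(class_a, class_b)]

def sample_object(theta):
    """Draw one object's BoF histograms (one per modality) from its class distributions."""
    return [rng.multinomial(N_COUNTS, p) for p in theta]

pure = [make_pure_class() for _ in range(14)]
mixed = [mix(pure[2 * i], pure[2 * i + 1]) for i in range(7)]
objects = [sample_object(theta) for theta in (pure + mixed) for _ in range(3)]
print(len(objects), objects[0][0])     # 63 objects; BoF of object 0, modality 0
```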
The count of the BoFs for each modality was set to 20. Finally, 42 pure virtual objects and 21 mixed virtual objects were generated.\n\nThe experiment was performed almost in the same way as experiment 1. First, multimodal categorization was performed for the 63 virtual objects, and 14 categories were successfully formed in an unsupervised manner. The posterior distributions over the object categories are shown in Figure 12. Generally speaking, mixed objects were categorized into two or more classes. After categorization, a virtual robot was asked to recognize all of the target objects using the proposed active perception method.\n\nFIGURE 12", null, "Figure 12. Categorization results for the posterior probability distributions for each object.\n\n### 6.2. Results\n\nWe compared the greedy, lazy greedy, and random algorithms for the active perception sequential decision making process. The random algorithm is a baseline method that determines the next action randomly from the remaining actions that have not been taken. In other words, the random algorithm is the case in which a robot does not employ any active perception algorithms.\n\nThe KL divergence from the final state for all target objects is averaged at each step and shown in Figure 13. For each condition, the KL divergence gradually decreased and reached almost zero. However, the rate of decrease was different. The greedy and lazy greedy algorithms were clearly shown to be better solutions on average than the random algorithm. In contrast with experiment 1, the best and worst cases could not practically be calculated because of the prohibitive computational cost. Interestingly, the lazy greedy algorithm has almost the same performance as the greedy algorithm, as the theory suggests, although the laziness reduced the computational cost in reality.\n\nFIGURE 13", null, "Figure 13. KL divergence from the final state at each step for each sequential action selection procedure.\n\nThe number of times the robot evaluated IGm to determine the action sequences for all executable counts of actions L = 1, 2, …, M is summarized for each method. The number of times the lazy greedy algorithm was required for each target object was 71.7 (SD = 5.2) on average, and that of the greedy algorithm was 190. Theoretically, the greedy and lazy greedy algorithms require O(M2) evaluations. Practically, the number of re-evaluations needed by the lazy greedy algorithm is quite small. In contrast, the brute-force algorithm requires O(2M) evaluations, i.e., far more evaluations of IG are required.\n\nNext, a case in which two classes were assigned to the same object was investigated. The target dataset contained “mixed” objects. The results also imply that our method works well even when two classes are assigned to the same object. This is because our theory is completely derived on the basis of the probabilistic generative model, i.e., the MHDP. We show a typical result. Figure 14 shows the time series of the posterior probability of the category for object 51, i.e., one of the mixed objects, during sequential active perception. This shows that the greedy and lazy greedy algorithms quickly categorized the target object into two categories “correctly.” Our formulation assumes the categorization result to be a posterior distribution. Therefore, this type of probabilistic case can be treated naturally.\n\nFIGURE 14", null, "Figure 14. 
Time series of the posterior probability of the category for object 51 during sequential action selection based on (Top) the greedy algorithm, (Middle) the lazy greedy algorithm, and (Bottom) the random selection procedure.\n\n## 7. Conclusion and Discussion\n\nIn this paper, we described an MHDP-based active perception method for robotic multimodal object category recognition. We formulated a new active perception method on the basis of the MHDP (Nakamura et al., 2011b).\n\nFirst, we proposed an action selection method based on the IG criterion and showed that IG is an optimal criterion for active perception from the viewpoint of reducing the expected KL divergence between the final and current recognition states. Second, we proved that the IG has a submodular property and reduced the sequential active perception problem to a submodular maximization problem. Third, we derived a Monte Carlo approximation method for evaluating IG efficiently and made the action selection method executable. Given the theoretical results, we proposed to use the greedy and lazy greedy algorithms for selecting a set of actions for active perception. It is important to note that all of the three theoretical contributions mentioned above were naturally derived from the characteristics of the MHDP. These contributions are clearly a result of the theoretical soundness of the MHDP. In this sense, our theorems reveal a new advantage of the MHDP that other several heuristic multimodal object categorization methods do not have.\n\nTo evaluate the proposed methods empirically, we conducted experiments using an upper-torso humanoid robot and a synthetic dataset. Our results showed that the method enables the robot to actively select actions and recognize target objects quickly and accurately.\n\nOne of the most interesting points of this paper is that not only object categories but also an action selection for object recognition can be formed in an unsupervised manner. From the viewpoint of cognitive developmental robotics, providing an unsupervised learning model for bridging the development between perceptual and action systems is meaningful for shedding a new light on the computational understanding of cognitive development (Asada et al., 2009; Cangelosi and Schlesinger, 2015). It is believed that the coupling of action and perception is important for an embodied cognitive system (Pfeifer and Scheier, 2001).\n\nThe advantage of this paper compared with the related works in robotics is that our action selection method for multimodal category recognition has a clear theoretical basis and is tightly connected to the computational model for multimodal object categorization, i.e., MHDP. The theoretical basis gives the method preferable characteristics, i.e., theoretical guarantee.\n\nHowever, note that the theoretical guarantee is satisfied only when IG is correctly estimated. We assumed that outcome of each action is deterministic and fully observable when we apply the theory of submodular optimization to active perception in multimodal categorization. However, observations Xm and IG are measured somehow probabilistically because of real-world uncertainty and Monte Carlo approximation. For example, IG is approximately estimated at each step of the greedy and lazy greedy algorithms. Theoretically, given this approximation in evaluating the objective being maximized, the (1−1/e) bound no longer holds. Streeter et al. proposed to introduce an additional penalty based on a function approximation (Streeter and Golovin, 2009). 
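For concreteness, the Monte Carlo estimate of IG_m referred to in this discussion, i.e., the expected KL divergence between the updated and current posteriors over categories, can be sketched as follows. The helpers `posterior` and `sample_observation` are hypothetical placeholders for the MHDP inference machinery; the sketch only illustrates the estimator and is not the authors' implementation.

```julia
# Sketch of a Monte Carlo estimate of IG_m for modality m (illustration only).
# `posterior(o)` is assumed to return the posterior over categories P(z | o), and
# `sample_observation(o, m)` a virtual observation x^m drawn from the predictive
# distribution P(x^m | o); both are hypothetical stand-ins for the MHDP sampler.
function estimate_ig(o, m, K, posterior, sample_observation)
    p = posterior(o)                      # current recognition state P(z | o)
    total = 0.0
    for _ in 1:K
        x = sample_observation(o, m)      # simulate the outcome of action m
        q = posterior(vcat(o, [x]))       # recognition state after the virtual observation
        total += sum(q .* log.(q ./ p))   # KL(q || p); assumes strictly positive entries
    end
    return total / K                      # averaged over K samples
end
```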
Golovin and Krause extended submodularity to adaptive submodularity in order to take stochasticity into account (Golovin and Krause, 2011). Although we have discussed the proposed method from the viewpoint of submodular optimization, the algorithm can also be regarded, more specifically, as a form of sequential information maximization (Chen et al., 2015). Extending our idea by drawing on adaptive submodularity and/or sequential information maximization, and updating our method accordingly, is a challenge for future work.

We assumed that every action incurs the same cost, and we tried to reduce the number of actions in active perception, i.e., to maximize the performance of perception with a fixed number of actions. In practice, however, each action, e.g., shake, hit, or look at, requires a different duration and a different amount of energy. Therefore, the practical cost is not always the number of actions but the total cost of the actions. Zhang et al. (2017) tried to deal with this problem in the context of multimodal object identification. This consideration leads to a knapsack problem-like formulation. This type of submodular optimization has been studied by many researchers (Streeter and Golovin, 2009; Zhou et al., 2013), and our method could be extended in a similar way.

In addition to active perception, active "learning/exploration" for multimodal categorization is also an important research topic. It takes a robot far longer to gather multimodal information and form multimodal object categories from a massive number of daily objects than it does to recognize a new object. If a robot can notice that "the object is obviously a sample of a learned category," the robot need not obtain knowledge about object categories from that object. In contrast, if a target object appears to be completely new to the robot, the robot should interact with the object carefully to obtain multimodal information from it. Such a scenario could be achieved by developing an active "learning/exploration" method for multimodal categorization, and it is likely that such a method can be obtained by extending our proposed active perception method.

Considering more complex categorization scenarios is another future challenge. For example, Schenck et al. (2014) deal with a more complex categorization scenario, i.e., 36 plastic containers with identical shapes, 3 colors, 4 types of contents, and 3 different amounts of those contents. In this paper, we used the MHDP, which assumes that an object is classified into a single object category and infers the posterior distribution over categories. When we consider human cognition, we find that object categories have more complex characteristics. For example, object categories have a hierarchical structure, an object can be categorized into several classes, and categories differ in their modality dependency depending on their type. Unsupervised machine learning methods for such complex categorization problems have been proposed by several researchers on the basis of hierarchical Bayesian models (Griffiths and Ghahramani, 2006; Ando et al., 2013; Nakamura et al., 2015). Theoretically, the main assumption we used was that the MHDP is a hierarchical Bayesian model and that action selection corresponds to obtaining an observation, which is a probabilistic variable at a leaf node of its graphical model. Therefore, by applying the same idea to these more complex categorization methods, we should be able to extend our theory to more complex categorization problems.
This is one of our future works.

Another challenge lies in feature representation for multimodal categorization. The MHDP assumes that observations are given as bag-of-features representations. However, there are many kinds of feature representations for visual, auditory, and haptic information. In particular, the feature extraction capability of deep neural networks has been attracting attention recently. Theoretically, our main theorems do not depend on the type of emission distribution, i.e., on the bag-of-features representation. It is therefore likely that the same approach can be used even when a multimodal categorization method uses different feature representations, e.g., the features in the last hidden layer of a pre-trained deep neural network. This extension is also a part of our future challenges.

In addition, the MHDP model treated in this paper assumes that an action for perception is related to only one modality, e.g., grasping corresponds only to $m_h$. However, in reality, when we interact with an object through a specific action, e.g., grasping, shaking, or hitting, we obtain rich information related to various modalities. For example, when we shake a box to obtain auditory information, we also unwittingly obtain haptic information and information about its weight. The tight linkage between modality information and an action is a type of approximation taken in this research. An extension of our model and the MHDP to a model that can treat actions related to various modalities is also a task for our future work.

## Author Contributions

The main theory was developed by TaT. The experiments were conceived by RY. The data were analyzed by RY and ToT with the help of TaT. The manuscript was written by TaT.

## Funding

This research was partially supported by the Tateishi Science and Technology Foundation and by JST, CREST. It was also partially supported by a Grant-in-Aid for Scientific Research on Innovative Areas (16H06569) and a Grant-in-Aid for Young Scientists (B) (24700233) funded by the Ministry of Education, Culture, Sports, Science, and Technology.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Acknowledgments

The authors would like to thank undergraduate student Takuya Takeshita and graduate student Hajime Fukuda of Ritsumeikan University, who helped us develop the experimental instruments for obtaining our preliminary results.

## Footnotes

1. ^HDP is a nonparametric Bayesian extension of latent Dirichlet allocation (LDA) (Blei et al., 2003), which has been widely used for document-word clustering. The nonparametric Bayesian extension allows HDP to estimate the number of topics, i.e., clusters, as well.

2. ^We could consider an extension of this problem in which each action has a different cost, i.e., each action requires a different amount of time and energy. However, for simplicity, this paper focuses on the problem in which the cost of each action is the same.

3. ^The collected datasets for this experiment can be found on GitHub: https://github.com/tanichu/data-active-perception-hmdp

4. ^Note that we currently do not have a good criterion on the KL divergence for determining whether performing further actions is necessary.

## References

Ando, Y., Nakamura, T., Araki, T., and Nagai, T. (2013).
“Formation of hierarchical object concept using hierarchical latent dirichlet allocation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (Tokyo), 2272–2279.\n\nAraki, T., Nakamura, T., Nagai, T., Nagasaka, S., Taniguchi, T., and Iwahashi, N. (2012). “Online learning of concepts and words using multimodal LDA and hierarchical Pitman-Yor Language Model,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (Algarve), 1623–1630.\n\nAsada, M., Hosoda, K., Kuniyoshi, Y., Ishiguro, H., Inui, T., Yoshikawa, Y., et al. (2009). Cognitive Developmental Robotics: A Survey. IEEE Trans. Auton. Mental Develop. 1, 12–34. doi: 10.1109/TAMD.2009.2021702\n\nCrossRef Full Text\n\nBarsalou, L. W. (1999). Perceptual symbol systems. Behav. Brain Sci. 22, 1–16.\n\nBlei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022.\n\nBorotschnig, H., Paletta, L., Prantl, M., and Pinz, A. (2000). Appearance-based active object recognition. Image Vision Comput. 18, 715–727. doi: 10.1016/S0262-8856(99)00075-X\n\nBurgard, W., Fox, D., and Thrun, S. (1997). “Active obile robot localization,” in Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI) (Nagoya), 1346–1352.\n\nCangelosi, A., and Schlesinger, M. (2015). Developmental Robotics. Cambridge, MA: The MIT press.\n\nCelikkanat, H., Orhan, G., Pugeault, N., Guerin, F., Erol, S., and Kalkan, S. (2014). “Learning and Using Context on a Humanoid Robot Using Latent Dirichlet Allocation,” in Joint IEEE International Conferences on Development and Learning and Epigenetic Robotics (ICDL-Epirob) (Genoa), 201–207.\n\nChen, Y., Hassani, S. H., Karbasi, A., and Krause, A. (2015). “Sequential information maximization: When is greedy near-optimal?” in Conference on Learning Theory (Paris), 338–363.\n\nCohn, D. A., Ghahramani, Z., and Jordan, M. I. (1996). Active learning with statistical models. J. Artif. Intell. Res. 4, 129–145.\n\nCorrea, J., and Soto, A. (2009). Active Visual Perception for Mobile Robot Localization. J. Intell. Robot. Sys. 58, 339–354. doi: 10.1007/s10846-009-9348-4\n\nDenzler, J., and Brown, C. M. (2002). Information Theoretic Sensor Data Selection for Active Object Recognition and State Estimation. IEEE Trans. Patt. Anal. Mach. Intell. 24, 1–13. doi: 10.1109/34.982896\n\nDutta Roy, S., Chaudhury, S., and Banerjee, S. (2004). Active recognition through next view planning: a survey. Patt. Recogn. 37, 429–446. doi: 10.1016/j.patcog.2003.01.002\n\nEidenberger, R., and Scharinger, J. (2010). “Active perception and scene modeling by planning with probabilistic 6D object poses,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (Taipei), 1036–1043.\n\nFerreira, J., Lobo, J., Bessiere, P., Castelo-Branco, M., and Dias, J. (2013). A Bayesian framework for active artificial perception. IEEE Trans. Cyber. 43, 699–711. doi: 10.1109/TSMCB.2012.2214477\n\nFishel, J. A., and Loeb, G. E. (2012). Bayesian exploration for intelligent identification of textures. Front. Neurorobot. 6, 1–20. doi: 10.3389/fnbot.2012.00004\n\nGolovin, D., and Krause, A. (2011). Adaptive submodularity: theory and applications in active learning and stochastic optimization. J. Artif. Intell. Res. 42, 427–486. doi: 10.1613/jair.3278\n\nGouko, M., Kobayashi, Y., and Kim, C. H. (2013). 
“Online exploratory behavior acquisition of mobile robot based on reinforcement learning,” in 26th International Conference on Industrial Engineering and Other Applications of Applied Intelligence Systems, IEA/AIE 2013 (Amsterdam), 272–281.\n\nGriffith, S., Sinapov, J., Sukhoy, V., and Stoytchev, A. (2012). A behavior-grounded approach to forming object categories: Separating containers from noncontainers. IEEE Trans. Auton. Mental Develop. 4, 54–69. doi: 10.1109/TAMD.2011.2157504\n\nGriffiths, T. L., and Ghahramani, Z. (2006). “Infinite latent feature models and the indian buffet process,” in Advances in Neural Information Processing Systems 2006 (Vancouver, BC), 475–482.\n\nHarnad, S. (1990). The symbol grounding problem. Phys. D 42, 335–346.\n\nHogman, V., Bjorkman, M., and Kragic, D. (2013). “Interactive object classification using sensorimotor contingencies,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Tokyo), 2799–2805.\n\nIvaldi, S., Nguyen, S. M., Lyubova, N., Droniou, A., Padois, V., Filliat, D., et al. (2014). Object learning through active exploration. IEEE Trans. Auton. Mental Develop. 6, 56–72. doi: 10.1109/TAMD.2013.2280614\n\nIwahashi, N., Sugiura, K., Taguchi, R., Nagai, T., and Taniguchi, T. (2010). “Robots that learn to communicate: a developmental approach to personally and physically situated human-robot conversations,” in Dialog with Robots Papers from the AAAI Fall Symposium (Palo Alto, CA), 38–43.\n\nJi, S., and Carin, L. (2006). Cost-Sensitive Feature Acquisition and Classification. Patt. Recogn. 40, 1474–1485. doi: 10.1016/j.patcog.2006.11.008\n\nKemp, C., Chang, K. M., and Lombardi, L. (2010). Category and feature identification. Acta Psychol. 133, 216–233. doi: 10.1016/j.actpsy.2009.11.012\n\nKrainin, M., Curless, B., and Fox, D. (2011). “Autonomous generation of complete 3D object models using next best view manipulation planning,” in IEEE International Conference on Robotics and Automation (Shanghai), 5031–5037.\n\nKrause, A., and Guestrin, C. E. (2005). “Near-optimal nonmyopic alue of information in graphical models,” in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (Edinburgh).\n\nLowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110. doi: 10.1023/B:VISI.0000029664.99615.94\n\nMacKay, D. J. C. (2003). Information Theory, Inference and Learning Algorithms. Cambridge, UK: Cambridge University Press.\n\nMinoux, M. (1978). “Accelerated greedy algorithms for maximizing submodular set functions,” in Optimization Techniques, ed J. Stoer (Berlin: Springer), 234–243.\n\nMuslea, I., Minton, S., and Knoblock, C. A. (2006). Active learning with multiple views. J. Art. Intell. Res. 27, 203–233. doi: 10.1613/jair.2005\n\nNakamura, T., Ando, Y., Nagai, T., and Kaneko, M. (2015). “Concept formation by robots using an infinite mixture of models,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Hamburg), 4593–4599.\n\nNakamura, T., Nagai, T., Funakoshi, K., Nagasaka, S., Taniguchi, T., and Iwahashi, N. (2014). “Mutual learning of an object oncept and language model based on MLDA and NPYLM,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'14) (Chicago, IL), 600–607.\n\nNakamura, T., Nagai, T., and Iwahashi, N. (2007). 
“Multimodal object categorization by a robot,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (San Diego, CA), 2415–2420.\n\nNakamura, T., Nagai, T., and Iwahashi, N. (2009). “Grounding of word meanings in multimodal concepts using LDA,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (St. Louis, MO), 3943–3948.\n\nNakamura, T., Nagai, T., and Iwahashi, N. (2011a). “Bag of multimodal LDA models for concept formation,” in IEEE International Conference on Robotics and Automation (Shanghai), 6233–6238.\n\nNakamura, T., Nagai, T., and Iwahashi, N. (2011b). “Multimodal categorization by hierarchical dirichlet process,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (San Francisco, CA), 1520–1525.\n\nNatale, L., Metta, G., and Sandini, G. (2004). “Learning haptic representation of objects,” in International Conference of Intelligent Manipulation and Grasping (Genoa).\n\nNemhauser, G. L., Wolsey, L. A., and Fisher, M. L. (1978). An analysis of approximations for maximizing submodular set functions-I. Math. Program. 14, 265–294.\n\nPape, L., Oddo, C. M., Controzzi, M., Cipriani, C., Förster, A., Carrozza, M. C., et al. (2012). Learning tactile skills through curious exploration. Front. Neurorobot. 6:6. doi: 10.3389/fnbot.2012.00006\n\nPfeifer, R., and Scheier, C. (2001). Understanding Intelligence. A Bradford Book. Cambridge, MA: MIT Press.\n\nRebguns, A., Ford, D., and Fasel, I. (2011). “InfoMax control for acoustic exploration of objects by a mobile robot,” in AAAI11 Workshop on Lifelong Learning (San Francisco, CA), 22–28.\n\nRoy, D. K., and Pentland, A. P. (2002). Learning words from sights and sounds: a computational model. Cogn. Sci. 26, 113–146. doi: 10.1207/s15516709cog2601_4\n\nRoy, N., and Thrun, S. (1999). “Coastal navigation with mobile robots,” in Advances in Neural Processing Systems 12. Cambridge, MA: The MIT Press.\n\nRusso, D., and Van Roy, B. (2016). An information-theoretic analysis of thompson sampling. J. Mach. Learn. Res. 17, 2442–2471. Available online at: http://jmlr.org/papers/v17/14-087.html\n\nSaegusa, R., Natale, L., Metta, G., and Sandini, G. (2011). “Cognitive Robotics - Active Perception of the Self and Others,” in The 4th International Conference on Human System Interactions (HSI) (Yokohama), 419–426.\n\nSchenck, C., Sinapov, J., Johnston, D., and Stoytchev, A. (2014). Which object fits best? solving matrix completion tasks with a humanoid robot. IEEE Trans. Auton. Mental Develop. 6, 226–240. doi: 10.1109/TAMD.2014.2325822\n\nSchneider, A., Sturm, J., Stachniss, C., Reisert, M., Burkhardt, H., and Burgard, W. (2009). “Object identification with tactile sensors using bag-of-features,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (St. Louis, MO), 243–248.\n\nSettles, B. (2012). Active learning. Synth. Lect. Artif. Intell. Mach. Learn. 6, 1–114. doi: 10.2200/S00429ED1V01Y201207AIM018\n\nSinapov, J., Schenck, C., Staley, K., Sukhoy, V., and Stoytchev, A. (2014). Grounding semantic categories in behavioral interactions: experiments with 100 objects. Robot. Auton. Sys. 62, 632–645. doi: 10.1016/j.robot.2012.10.007\n\nSinapov, J., and Stoytchev, A. (2011). “Object category recognition by a humanoid robot using behavior-Grounded Relational Learning,” in IEEE International Conference on Robotics and Automation (ICRA) (Shanghai), 184–190.\n\nStachniss, C., Grisetti, G., and Burgard, W. (2005). Information gain-based exploration using rao-blackwellized particle filters. 
in Robotics Science and Systems (RSS) (Cambridge, MA).\n\nStreeter, M., and Golovin, D. (2009). “An online algorithm for maximizing submodular functions,” in Advances in Neural Information Processing Systems (Vancouver, BC), 1577–1584.\n\nSushkov, O. O., and Sammut, C. (2012). “Active robot learning of object properties,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (Algarve: IEEE), 2621–2628.\n\nTaniguchi, T., Nagai, T., Nakamura, T., Iwahashi, N., Ogata, T., and Asoh, H. (2016). Symbol emergence in robotics: a survey. Adv. Robot. 30, 706–728. doi: 10.1080/01691864.2016.1164622\n\nTeh, Y., Jordan, M., Beal, M., and Blei, D. (2006). Hierarchical Dirichlet processes. J. Am. Stat. Assoc. 101, 1566–1581. doi: 10.1198/016214506000000302\n\nCrossRef Full Text\n\nTeh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2005). “Sharing clusters among related groups: Hierarchical dirichlet processes,” in Advances in Neural Information Processing Systems (Vancouver, BC), 1385–1392.\n\nTuci, E., Massera, G., and Nolfi, S. (2010). Active categorical perception of object shapes in a simulated anthropomorphic robotic arm. IEEE Trans. Evol. Comput. 14, 885–899. doi: 10.1109/TEVC.2010.2046174\n\nvan Hoof, H., Kroemer, O., Ben Amor, H., and Peters, J. (2012). “Maximally informative interaction learning for scene exploration,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (Algarve), 5152–5158.\n\nVelez, J., Hemann, G., Huang, A. S., Posner, I., and Roy, N. (2012). Modelling observation correlations for active exploration and robust object detection. J. Artif. Intell. Res. 44, 423–453. doi: 10.1613/jair.3516\n\nZhang, S., Sinapov, J., Wei, S., and Stone, P. (2017). “Robot behavioral exploration and multimodal perception using pomdps,” in AAAI 2017 Spring Symposium on Interactive Multisensory Object Perception for Embodied Agents (Palo Alto, CA).\n\nZhou, J., Ross, S., Yue, Y., Dey, D., and Bagnell, J. A. (2013). “Knapsack constrained contextual submodular list prediction with application to multi-document summarization,” ICML 2013 Workshop on Inferning: Interactions between Inference and Learning (Atlanta).\n\n## Appendix A: Proof of the Optimality of the Proposed Active Perception Strategy\n\nIn this appendix, we show that the proposed active perception strategy, which maximizes the expected KL divergence between the current state and the posterior distribution of zj after a selected set of actions, minimizes the expected KL divergence between the next and final states.\n\nThe numerator inside of the log function does not depend on A. Therefore, the term related to the numerator can be deleted. In addition, by negating the remaining term, we obtain\n\nBy marginalizing from (2), we obtain\n\nKeywords: active perception, cognitive robotics, topic model, multimodal machine learning, submodular maximization\n\nCitation: Taniguchi T, Yoshino R and Takano T (2018) Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot. Front. Neurorobot. 12:22. doi: 10.3389/fnbot.2018.00022\n\nReceived: 24 August 2017; Accepted: 23 April 2018;\nPublished: 22 May 2018.\n\nEdited by:" ]
[ null, "https://crossmark-cdn.crossref.org/widget/v2.0/logos/CROSSMARK_Color_square.svg", null, "https://loop.frontiersin.org/images/profile/379696/24", null, "https://f96a1a95aaa960e01625-a34624e694c43cdf8b40aa048a644ca4.ssl.cf2.rackcdn.com/Design/Images/newprofile_default_profileimage_new.jpg", null, "https://loop.frontiersin.org/images/profile/520432/24", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g001.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g002.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g004.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g005.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g006.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g007.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g008.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g010.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g011.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g012.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g013.gif", null, "https://www.frontiersin.org/files/Articles/304720/fnbot-12-00022-HTML/image_t/fnbot-12-00022-g014.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8956756,"math_prob":0.9205929,"size":83703,"snap":"2019-51-2020-05","text_gpt3_token_len":18554,"char_repetition_ratio":0.1757966,"word_repetition_ratio":0.0651514,"special_character_ratio":0.22033858,"punctuation_ratio":0.16688716,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.97266775,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,9,null,null,null,5,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T17:23:58Z\",\"WARC-Record-ID\":\"<urn:uuid:fa49d0a4-0002-4e53-b6cc-bfd180121e17>\",\"Content-Length\":\"437078\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3edddf9-ce3b-48b5-800f-83c1e31d4bf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a6d2602-1a87-425e-9a97-9a4afc9c0288>\",\"WARC-IP-Address\":\"134.213.70.247\",\"WARC-Target-URI\":\"https://www.frontiersin.org/articles/10.3389/fnbot.2018.00022/full\",\"WARC-Payload-Digest\":\"sha1:AOW7A3XITW2AH5FDABSND5PQ3YPNS5NF\",\"WARC-Block-Digest\":\"sha1:7AHW3OONKOODQCODN23PACGNCLJGNDK5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541281438.51_warc_CC-MAIN-20191214150439-20191214174439-00028.warc.gz\"}"}
https://www.transum.org/Maths/Exam/Online_Exercise.asp?NaCu=18
[ "", null, "# Exam-Style Questions.\n\n## Problems adapted from questions set for previous Mathematics exams.\n\n### 1.\n\nGCSE Higher\n\n(a) Complete the table of values for $$y=\\frac{(x^3-5x)}{10}$$\n\n $$x$$ -3 -2 -1 0 1 2 3 4 $$y$$ 0.2 1.2\n\n(b) On the grid below, draw the graph of $$y=\\frac{(x^3-5x)}{10}$$ for values of $$x$$ from -3 to 4.", null, "### 2.\n\nGCSE Higher\n\nOn the grid below, draw the graph of $$y = 1 - 2x$$ for values of $$x$$ from -2 to 2.", null, "If you would like space on the right of the question to write out the solution try this Thinning Feature. It will collapse the text into the left half of your screen but large diagrams will remain unchanged.\n\nThe exam-style questions appearing on this site are based on those set in previous examinations (or sample assessment papers for future examinations) by the major examination boards. The wording, diagrams and figures used in these questions have been changed from the originals so that students can have fresh, relevant problem solving practice even if they have previously worked through the related exam paper.\n\nThe solutions to the questions on this website are only available to those who have a Transum Subscription.\n\nExam-Style Questions Main Page\n\nSearch for exam-style questions containing a particular word or phrase:\n\nTo search the entire Transum website use the search box in the grey area below.", null, "" ]
[ null, "https://www.transum.org/Software/SW/Starter_of_the_day/Images/AppleVectorRing110.png", null, "https://www.transum.org/Maths/Exam/Diagrams/Diagram509.png", null, "https://www.transum.org/Maths/Exam/Diagrams/Diagram509.png", null, "https://www.transum.org/Software/SW/Starter_of_the_day/Images/Apple2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9127245,"math_prob":0.9464973,"size":1503,"snap":"2023-40-2023-50","text_gpt3_token_len":369,"char_repetition_ratio":0.116744496,"word_repetition_ratio":0.04,"special_character_ratio":0.25415835,"punctuation_ratio":0.072916664,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97317994,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,6,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T06:16:14Z\",\"WARC-Record-ID\":\"<urn:uuid:25ff2f09-0de9-4a1c-981b-d2ad3ad01ea5>\",\"Content-Length\":\"16427\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8dac11a2-d4e5-4d70-b02a-0d71dd028c96>\",\"WARC-Concurrent-To\":\"<urn:uuid:53aaa48b-df0a-4125-958f-c73a4da7e034>\",\"WARC-IP-Address\":\"160.153.246.128\",\"WARC-Target-URI\":\"https://www.transum.org/Maths/Exam/Online_Exercise.asp?NaCu=18\",\"WARC-Payload-Digest\":\"sha1:ESDFLI6UJGFBZUXSNUKHWOCND6YJ53OY\",\"WARC-Block-Digest\":\"sha1:66GQYVA62HETQZDSNPCCHT3H67WSJJY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100545.7_warc_CC-MAIN-20231205041842-20231205071842-00204.warc.gz\"}"}
https://books.google.gr/books?pg=PA11&vq=Take&dq=editions:UOMDLPabq7928_0001_001&lr=&id=p-YDAAAAQAAJ&hl=el&output=html_text
[ "Ĺéęüíĺň óĺëßäáň PDF Çëĺęôń. Ýęäďóç\n .flow { margin: 0; font-size: 1em; } .flow .pagebreak { page-break-before: always; } .flow p { text-align: left; text-indent: 0; margin-top: 0; margin-bottom: 0.5em; } .flow .gstxt_sup { font-size: 75%; position: relative; bottom: 0.5em; } .flow .gstxt_sub { font-size: 75%; position: relative; top: 0.3em; } .flow .gstxt_hlt { background-color: yellow; } .flow div.gtxt_inset_box { padding: 0.5em 0.5em 0.5em 0.5em; margin: 1em 1em 1em 1em; border: 1px black solid; } .flow div.gtxt_footnote { padding: 0 0.5em 0 0.5em; border: 1px black dotted; } .flow .gstxt_underline { text-decoration: underline; } .flow .gtxt_heading { text-align: center; margin-bottom: 1em; font-size: 150%; font-weight: bold; font-variant: small-caps; } .flow .gtxt_h1_heading { text-align: center; font-size: 120%; font-weight: bold; } .flow .gtxt_h2_heading { font-size: 110%; font-weight: bold; } .flow .gtxt_h3_heading { font-weight: bold; } .flow .gtxt_lineated { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; white-space: pre-wrap; } .flow .gtxt_lineated_code { margin-left: 2em; margin-top: 1em; margin-bottom: 1em; white-space: pre-wrap; font-family: monospace; } .flow .gtxt_quote { margin-left: 2em; margin-right: 2em; margin-top: 1em; margin-bottom: 1em; } .flow .gtxt_list_entry { margin-left: 2ex; text-indent: -2ex; } .flow .gimg_graphic { margin-top: 1em; margin-bottom: 1em; } .flow .gimg_table { margin-top: 1em; margin-bottom: 1em; } .flow { font-family: serif; } .flow span,p { font-family: inherit; } .flow-top-div {font-size:83%;} g In bd take any point f, and from a e the greater, cut off ag equal (i. 3) to a f, the less, and join fc, g b. Because a f is equal to ag, and ab to a c, the two sides fa, a c are equal to the two ga, a b, each to each ; and they contain the angle fag common to the two triangles afc, a gb; there а. fore the base fc is equal (i. 4) to the base g b, g and the triangle afc to the triangle agb; and the remaining angles of the one are equal (i. 4) to the remaining angles of the other, each to each, to which the equal sides are opposite ; viz. the angle acf to the angle a bg, and the angle afc b to the angle ag b: and because the whole a f is equal to the whole a g, of which the parts a b, a c, are equal; the remainder bf shall be equal (3 ax.) f to the remainder cg; and fc was proved to be equal to gb; therefore the two sides bf, fc are /d equal to the two cg, gb, each to each ; and the angle bfc is equal to the angle cgb, and the base bc is common to the two triangles bfc, cgb; wherefore the triangles are equal (i. 4), and their remaining angles, each to each, to which the equal sides are opposite; therefore the angle fb c is equal to the angle g cb, and the angle bcf to the angle cbg: and since it has been demonstrated, that the whole angle a bg is equal to the whole a cf, the parts of which, the angles c bg, bcf are also equal; the remaining angle a b c is therefore equal to the remaining angle a cb, which are the angles at the base of the triangle a bc: and it has also been proved that the angle f bc is equal to the angle gcb, which are the angles upon the other side of the base. Therefore the angles at the base, &c. Q. E. D. COROLLARY. Hence every equilateral triangle is also equiangular. e", null, "a PROPOSITION VI.—THEOREM. If two angles of a triangle be equal to one another, the sides which subtend, or are opposite to, the equal angles, shall also be equal to one another. 
LET a bc be a triangle having the angle a b c equal to the angle a cb; the side a b is also equal to the side a c. For, if a b be not equal to a c, one of them is а. greater than the other : let ab be the greater; and from it cut (i. 3) off d b equal to a c, the less, and join dc; therefore, because in the triangles db c, d acb, db is equal to a c, and b c common to both, the two sides, db, b c are equal to the two a c, cb, each to each ; and the angle dbc is equal to the l angle a cb; therefore the base dc is equal to the base a b, and the triangle db c is equal to the triangle (i. 4) a cb, the less to the greater; which b is absurd, Therefore a b is not unequal to a C, that is, it is equal to it. Wherefore, if two angles, &c. Q. E. D. COR. Hence every equiangular triangle is also equilateral. PROPOSITION VII.-THEOREM. Upon the same base, and on the same side of it, there cannot be two triangles that have their sides which are terminated in one extremity of the base equal to one another, and likewise those which cre terminated in the other extremity. If it be possible, let there be two triangles a c b, a d b, upon the same base a b, and upon the same side of it, which have d their sides ca, da terminated in the extremity a of the base equal to one another, and likewise their sides cb, db, that are terminated in b. Join cd; then, in the case in which the vertex of each of the triangles is without the other triangle, because a c is equal to a d, the angle a cd is equal (i. 5) to the angle a dc: but а. b the angle a cd is greater than the angle bcd; therefore the angle adc is greater also than bcd; much greater then is the angle bd c than the angle bed. Again, because cb is equal to db, the angle bdc is equal (i. 5) to the angle bed; but it has been demonstrated to be greater than it; which is impossible. But if one of the vertices, as d, be within the other triangle a cb; produce a c, ad to e, f; therefore, because a c is equal to ad in the triangle ac d, the angles ecd, fdc upon the other side of the base cd are equal (i. 5) to one another : but the angle ecd is greater than the angle bed: wherefore the angle fdc is likewise greater than b cd; much greater then is the angle bd c than the angle bed. Again, because cb is equal to db, the angle bdc is equal (i. 5) to the angle bcd; but bdc has been proved to be greater than the same bcd; which is ima The case in which the vertex of one triangle is upon a side of the other, needs no demonstration. Therefore, upon the same base, and on the same side of it, &c. Q.E.D. bo possible. PROPOSITION VIII.—THEOREM. If two triangles have two sides of the one equal to two sides of the other, each to each, and have likeurise their bases equal; the angle which is contained by the two sides of the one shall be equal to the angle contained by the two sides equal to them, of the other. LET a b c, def be two triangles, having the two sides a b, a c, equal to the two sides de, df, each to each, viz. a b to d e, and a c to df; and also the base b c equal to the base e f. The angle bac is a d 8 equal to the angle ed f. For, if the triangle a b c be applied to def, so that the pcint b be on e, and the straight line bc upon ef; the point c shall also coincide with the point f, be b с f cause bc is equal to e f. 
Therefore b c coinciding with ef; ba and a c shall coincide with ed and df; for, if the base b c coincides with the base e f, but the sides ba, ca do not coincide with the sides ed, fd, but have a different situation, as eg, fg, then upon the same base e f, and upon the same side of it, there can be two triangles that have their sides which are terminated in one extremity of the base equal to one another, and likewise their sides terminated in the other extremity ; but this is impossible (i. 7); therefore, if the base b c coincides with the base ef, the sides ba, a c, cannot but coincide with the sides ed, df; wherefore likewise the angle bac coincides with the angle ed f, and is equal (8ax.) to it. Therefore if two triangles, &c. Q.E.D. PROPOSITION IX.-PROBLEM. To bisect a given rectilineal angle, that is, to divide it into two equal angles. LET bac be the given rectilineal angle, it is required to bisect it. Take any point d in a b, and from a c cut (i. 3) off a e equal to ad; join de, and upon it describe (i. 1) an equilateral triangle def; then join a f; the straight line af a bisects the angle bac. Because a d is equal to a e, and a f is common to the two triangles da f, eaf; the two sides da, d af, are equal to the two sides ea, a f, each to each ; and the base df is equal to the base ef; therefore the angle daf is equal (i. 8) to the angle eaf; wherefore the given rectilineal angle bac is bisected b f by the straight line a f. Which was to be done. PROPOSITION X.-PROBLEM. To bisect a given finite straight line, that is, to divide it into two equal parts. LET a b be the given straight line, it is required to divide it into two equal parts. Describe (i. 1) upon a b an equilateral triangle a b c, and bisect (i. 9) the angle a c b by the straight line cd. ab is cut into two equal parts in the point d. Because a c is equal to cb, and cd common to the two triangles a cd, bcd; the two sides a C, cd are equal to bc, cd, each to each ; and the angle a c d is equal to the angle bed; therefore the base a d is equal to the base (i. 4) d b, and the b straight line a b is divided into two equal parts in the point d. Which was to be done.", null, "PROPOSITION XI.- PROBLEM. a с e To draw a straight line at right angles to a given straight line, from a given point in the same. LET a b be a given straight line, and c a point given in it; it is required to draw a straight line from the point c at right angles to a b. Take any point d in a c, and make (i. 3) ce equal to cd, and upon de describe (i. 1) the equilateral triangle dfe, and join fc, the straight line fc drawn from the given point c is at right angles to the given straight line a b. Because dc is equal to ce, and fc common to the two triangles dcf, ecf; the two sides dc, cf, are equal to the two ec, cf, each to each ; and the base d b d f is equal to the base e f; therefore the angle d cf is equal (i. 8) to the angle ecf; and they are adjacent angles. But, when the adjacent angles which one straight line makes with another straight line are equal to one another, each of them is called a right angle (10 def.); therefore each of the angles dcf, ecf, is a right angle. Wherefore, from the given point c, in the given straight line a b, fc has been drawn at right angles to a b. Which was to be done. COR. By help of this problem, it may be demonstrated, that two straight lines cannot have a common segment. If it be possible, let the two straight lines a b c, abd have the seg ment a b common to both of them. 
је From the point b draw be at right angles to a b; and because a b c is a straight line, the angle cbe is equal (10 def.) to the angle e ba; in the same manner, because a bd is a straight line, d the angle d be is equal to the angle eba ; wherefore the angle d be is a b с equal to the angle cbe, the less to the greater, which is impossible ; therefore two straight lines cannot have a common segment. ز PROPOSITION XII.—PROBLEM. To draw a straight line perpendicular to a given straight line of an unlimited length, from a given point without it. LET a b be the given straight line, which may be produced to any length both ways, and let c be a point without it. It is required to draw a straight line perpendicular to a b from the point c. Take any point d upon the other side of a b, and from the centre c, at the distance cd, describe (3 post.) the circle egf meeting a b in f, g; and bisect (i. 10) fg in h, and join h cf, ch, cg; the straight a f g b line ch, drawn from the d given point c, is perpendicular to the given straight line a b. Because fh is equal to hg, and hc common to the two triangles fh c, ghc, the two sides fh, hc are equal to the two gh, hc, each to each; and the base cf is equal (15 def.) to the base cg; therefore the angle chf is equal (i. 8) to the angle chg; and they are adjacent angles; but when (10 def.) a straight line standing on another straight line makes the adjacent angles equal to one another, each of them is a right angle; and the straight line which stands upon the other is called a perpendicular to it ; therefore from the given point ca perpendicular ch has been drawn to the given straight line a b. Which was to be done. PROPOSITION XIII.-THEOREM. The angles which one straight line makes with another upon one side of it are either two right angles, or are together equal to two right angles. LET the straight line a b make with cd, upon one side of it, the angles cba, abd : these are either two right angles, or are together equal to two right angles. For if the angle c ba be equal to a b d, each of them is a right angle", null, "d b d ъ (10 def.); but if not, from the point b draw be at right angles (i. 11) to cd; therefore the angles c be, ebd are two right angles (10 def.); and because cbe is equal to the two angles cba, a be together, add the angle e bd « ĐńďçăďýěĺíçÓőíÝ÷ĺéá »" ]
[ null, "https://books.google.gr/books/content", null, "https://books.google.gr/books/content", null, "https://books.google.gr/books/content", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9188142,"math_prob":0.9984173,"size":11033,"snap":"2022-40-2023-06","text_gpt3_token_len":3014,"char_repetition_ratio":0.24517182,"word_repetition_ratio":0.15850021,"special_character_ratio":0.25949425,"punctuation_ratio":0.14348303,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99966633,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T00:00:49Z\",\"WARC-Record-ID\":\"<urn:uuid:1d73861f-cfba-40ea-baf2-08d1e8157127>\",\"Content-Length\":\"47845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a80f3417-0b31-4efa-bceb-927954101200>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d4cd0c8-7a3f-4f8f-9c75-56d5a18c548f>\",\"WARC-IP-Address\":\"172.253.122.113\",\"WARC-Target-URI\":\"https://books.google.gr/books?pg=PA11&vq=Take&dq=editions:UOMDLPabq7928_0001_001&lr=&id=p-YDAAAAQAAJ&hl=el&output=html_text\",\"WARC-Payload-Digest\":\"sha1:IOAROBQM5I3AL67OBWCJRXAC7WGFBNYF\",\"WARC-Block-Digest\":\"sha1:O7JDJXR3BJP34OJGDGTQY3CEPNBCJ5YZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335396.92_warc_CC-MAIN-20220929225326-20220930015326-00699.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/23907
[ "# Document (#23907)\n\nTitle\nLinks auch zu den relevanten Internetressourcen : Silverplatter Information\nSource\nYear\n2001\nAbstract\nDie Silverplatter Information GmbH (Berlin) stellte an Neuerungen auf der Infobase die neue Generation des ERL -Servers ERL 5. 0 mit der neuen Suchoberfläche Nebspirs 5. 0 sowie die neue Linktechnologie LinkWizard vor. NEBSPIRS wurde mit einem völlig neuen Interface ausgestattet, um den Recherchekomfort in Literaturdatenbanken zu erhöhen. In der Pipeline befinden sich ferner die Veröffentlichung mehrerer neuer bibliografischer Datenbanken aus verschiedenen Disziplinen, die Zusammenarbeit mit neuen Volltextanbietern für den direkten Zugriff auf den Volltext von Silverplatter sowie eine Kooperation mit der Weltgesundheitsorganisation (WHO) für die Bereitstellung medizinischer Online-Informationen in Entwicklungsländern\nObject\nNEBSPIRS\nSPIRS\nSilverPlatter\n\n## Similar documents (content)\n\n1. Borkenhagen F.: Sportwissenschaftliche Literaturdatenbanken auf CD-ROM (1996) 0.07\n```0.07013603 = sum of:\n0.07013603 = product of:\n0.87670046 = sum of:\n0.8276753 = weight(title_txt:literaturdatenbanken in 5875) [ClassicSimilarity], result of:\n0.8276753 = score(doc=5875,freq=1.0), product of:\n0.21098165 = queryWeight, product of:\n1.262905 = boost\n8.966795 = idf(docFreq=14, maxDocs=43254)\n0.018631026 = queryNorm\n3.9229727 = fieldWeight in 5875, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.966795 = idf(docFreq=14, maxDocs=43254)\n0.4375 = fieldNorm(doc=5875)\n0.049025167 = weight(abstract_txt:sowie in 5875) [ClassicSimilarity], result of:\n0.049025167 = score(doc=5875,freq=1.0), product of:\n0.112797275 = queryWeight, product of:\n1.3059082 = boost\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.018631026 = queryNorm\n0.43463078 = fieldWeight in 5875, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.09375 = fieldNorm(doc=5875)\n0.08 = coord(2/25)\n```\n2. 
Polzer, J.: Deutsche Lexika im Wandel : Von der systematischen Enzyklopädie zum multimedialen Lexikon (2002) 0.07\n```0.067826316 = sum of:\n0.067826316 = product of:\n0.33913156 = sum of:\n0.05870159 = weight(abstract_txt:völlig in 4173) [ClassicSimilarity], result of:\n0.05870159 = score(doc=4173,freq=1.0), product of:\n0.13228278 = queryWeight, product of:\n7.100134 = idf(docFreq=96, maxDocs=43254)\n0.018631026 = queryNorm\n0.44375837 = fieldWeight in 4173, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.100134 = idf(docFreq=96, maxDocs=43254)\n0.0625 = fieldNorm(doc=4173)\n0.07732065 = weight(abstract_txt:ferner in 4173) [ClassicSimilarity], result of:\n0.07732065 = score(doc=4173,freq=1.0), product of:\n0.15895239 = queryWeight, product of:\n1.09618 = boost\n7.783025 = idf(docFreq=48, maxDocs=43254)\n0.018631026 = queryNorm\n0.48643905 = fieldWeight in 4173, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.783025 = idf(docFreq=48, maxDocs=43254)\n0.0625 = fieldNorm(doc=4173)\n0.056609385 = weight(abstract_txt:sowie in 4173) [ClassicSimilarity], result of:\n0.056609385 = score(doc=4173,freq=3.0), product of:\n0.112797275 = queryWeight, product of:\n1.3059082 = boost\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.018631026 = queryNorm\n0.50186837 = fieldWeight in 4173, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.0625 = fieldNorm(doc=4173)\n0.053294137 = weight(abstract_txt:neue in 4173) [ClassicSimilarity], result of:\n0.053294137 = score(doc=4173,freq=2.0), product of:\n0.124028936 = queryWeight, product of:\n1.3693827 = boost\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.018631026 = queryNorm\n0.42969117 = fieldWeight in 4173, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.0625 = fieldNorm(doc=4173)\n0.093205795 = weight(abstract_txt:neuen in 4173) [ClassicSimilarity], result of:\n0.093205795 = score(doc=4173,freq=2.0), product of:\n0.20609261 = queryWeight, product of:\n2.1619227 = boost\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.018631026 = queryNorm\n0.452252 = fieldWeight in 4173, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.0625 = fieldNorm(doc=4173)\n0.2 = coord(5/25)\n```\n3. 
Microsoft Encarta 2002 (2001) 0.06\n```0.06474253 = sum of:\n0.06474253 = product of:\n0.26976055 = sum of:\n0.048325405 = weight(abstract_txt:ferner in 3724) [ClassicSimilarity], result of:\n0.048325405 = score(doc=3724,freq=1.0), product of:\n0.15895239 = queryWeight, product of:\n1.09618 = boost\n7.783025 = idf(docFreq=48, maxDocs=43254)\n0.018631026 = queryNorm\n0.3040244 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.783025 = idf(docFreq=48, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.0587691 = weight(abstract_txt:neuerungen in 3724) [ClassicSimilarity], result of:\n0.0587691 = score(doc=3724,freq=1.0), product of:\n0.18109901 = queryWeight, product of:\n1.1700553 = boost\n8.307549 = idf(docFreq=28, maxDocs=43254)\n0.018631026 = queryNorm\n0.32451364 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.307549 = idf(docFreq=28, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.07749443 = weight(abstract_txt:ausgestattet in 3724) [ClassicSimilarity], result of:\n0.07749443 = score(doc=3724,freq=1.0), product of:\n0.2177695 = queryWeight, product of:\n1.2830597 = boost\n9.109896 = idf(docFreq=12, maxDocs=43254)\n0.018631026 = queryNorm\n0.3558553 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.109896 = idf(docFreq=12, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.020427154 = weight(abstract_txt:sowie in 3724) [ClassicSimilarity], result of:\n0.020427154 = score(doc=3724,freq=1.0), product of:\n0.112797275 = queryWeight, product of:\n1.3059082 = boost\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.018631026 = queryNorm\n0.18109617 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.023552904 = weight(abstract_txt:neue in 3724) [ClassicSimilarity], result of:\n0.023552904 = score(doc=3724,freq=1.0), product of:\n0.124028936 = queryWeight, product of:\n1.3693827 = boost\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.018631026 = queryNorm\n0.18989846 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.041191533 = weight(abstract_txt:neuen in 3724) [ClassicSimilarity], result of:\n0.041191533 = score(doc=3724,freq=1.0), product of:\n0.20609261 = queryWeight, product of:\n2.1619227 = boost\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.018631026 = queryNorm\n0.19986904 = fieldWeight in 3724, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.0390625 = fieldNorm(doc=3724)\n0.24 = coord(6/25)\n```\n4. 
Informations- und Wissenstransfer in der Medizin und im Gesundheitswesen (1999) 0.06\n```0.062430125 = sum of:\n0.062430125 = product of:\n0.39018828 = sum of:\n0.15123709 = weight(abstract_txt:medizinischer in 4971) [ClassicSimilarity], result of:\n0.15123709 = score(doc=4971,freq=1.0), product of:\n0.21424083 = queryWeight, product of:\n1.2726221 = boost\n9.035788 = idf(docFreq=13, maxDocs=43254)\n0.018631026 = queryNorm\n0.70592093 = fieldWeight in 4971, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.035788 = idf(docFreq=13, maxDocs=43254)\n0.078125 = fieldNorm(doc=4971)\n0.04085431 = weight(abstract_txt:sowie in 4971) [ClassicSimilarity], result of:\n0.04085431 = score(doc=4971,freq=1.0), product of:\n0.112797275 = queryWeight, product of:\n1.3059082 = boost\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.018631026 = queryNorm\n0.36219233 = fieldWeight in 4971, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.078125 = fieldNorm(doc=4971)\n0.081589654 = weight(abstract_txt:neue in 4971) [ClassicSimilarity], result of:\n0.081589654 = score(doc=4971,freq=3.0), product of:\n0.124028936 = queryWeight, product of:\n1.3693827 = boost\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.018631026 = queryNorm\n0.65782756 = fieldWeight in 4971, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.8614006 = idf(docFreq=909, maxDocs=43254)\n0.078125 = fieldNorm(doc=4971)\n0.11650725 = weight(abstract_txt:neuen in 4971) [ClassicSimilarity], result of:\n0.11650725 = score(doc=4971,freq=2.0), product of:\n0.20609261 = queryWeight, product of:\n2.1619227 = boost\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.018631026 = queryNorm\n0.565315 = fieldWeight in 4971, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.1166472 = idf(docFreq=704, maxDocs=43254)\n0.078125 = fieldNorm(doc=4971)\n0.16 = coord(4/25)\n```\n5. Böll, S.K.: Informations- und bibliothekswissenschaftliche Zeitschriften in Literaturdatenbanken (2010) 0.06\n```0.060023222 = sum of:\n0.060023222 = product of:\n0.7502903 = sum of:\n0.70943594 = weight(title_txt:literaturdatenbanken in 235) [ClassicSimilarity], result of:\n0.70943594 = score(doc=235,freq=1.0), product of:\n0.21098165 = queryWeight, product of:\n1.262905 = boost\n8.966795 = idf(docFreq=14, maxDocs=43254)\n0.018631026 = queryNorm\n3.362548 = fieldWeight in 235, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.966795 = idf(docFreq=14, maxDocs=43254)\n0.375 = fieldNorm(doc=235)\n0.04085431 = weight(abstract_txt:sowie in 235) [ClassicSimilarity], result of:\n0.04085431 = score(doc=235,freq=1.0), product of:\n0.112797275 = queryWeight, product of:\n1.3059082 = boost\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.018631026 = queryNorm\n0.36219233 = fieldWeight in 235, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6360617 = idf(docFreq=1139, maxDocs=43254)\n0.078125 = fieldNorm(doc=235)\n0.08 = coord(2/25)\n```" ]
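The score breakdowns above are Lucene ClassicSimilarity (TF-IDF) explanations: each matching term contributes queryWeight × fieldWeight, with queryWeight = boost × idf × queryNorm and fieldWeight = tf × idf × fieldNorm, where tf = √freq and idf = 1 + ln(maxDocs / (docFreq + 1)). The following sketch simply reassembles the first term weight of the Borkenhagen entry from the numbers shown; it is an illustration, not part of the retrieval system.

```julia
# Reassembling one ClassicSimilarity term score from the explanation above (illustration only).
tf(freq) = sqrt(freq)

boost      = 1.262905
idf        = 8.966795       # 1 + ln(43254 / (14 + 1)) ≈ 8.9668
query_norm = 0.018631026
field_norm = 0.4375

query_weight = boost * idf * query_norm      # ≈ 0.21098165
field_weight = tf(1.0) * idf * field_norm    # ≈ 3.9229727
query_weight * field_weight                  # ≈ 0.8276753, the weight reported for
                                             # title_txt:literaturdatenbanken in doc 5875
```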
https://discourse.julialang.org/t/a-uniform-way-to-generate-a-random-element-based-on-a-given-probability-distritution-function-a-random-number-genarator-and-a-given-element-interval/19843
[ "# A uniform way to generate a random element based on a given probability distritution function, a random number genarator and a given element interval?\n\nDispatching is one of Julia’s strong suits. I was wondering, is there a single function which would generate a\nrandom element based on a given probability distribution function, a random number genarator and a given element interval? I’m using Julia 0.7. If not, what are your suggestions?\n\nI’m talking about something similar to the fit_mle(D, x, w) method in the DIstribution.jl.\n\n`Distributions.fit_mle` — Method.\n\n``````fit_mle(D, x, w)\n``````\n\nFit a distribution of type `D` to a weighted data set `x` , with weights given by `w` .\n\nHere, `w` should be an array with length `n` , where `n` is the number of samples contained in `x` .\n\nsource\n\n### Applicable distributions\n\nThe `fit_mle` method has been implemented for the following distributions:\n\nUnivariate:\n\nMultivariate:\n\n1 Like\n\nFor univariate distributions, that’s pretty much what `rand` does, when you truncate distributions, see\n\nhttps://juliastats.github.io/Distributions.jl/latest/truncate.html\n\nFor multivariate distributions, there is no general efficient algorithm for ensuring that the result is withing a given interval. You will have to come up with something specialized, or iterate until the result is in the box, but that may be horribly inefficient if the probability mass there is small.\n\n4 Likes" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7976974,"math_prob":0.94494295,"size":863,"snap":"2022-05-2022-21","text_gpt3_token_len":229,"char_repetition_ratio":0.12107101,"word_repetition_ratio":0.0,"special_character_ratio":0.21784472,"punctuation_ratio":0.15286624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97915477,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T23:05:06Z\",\"WARC-Record-ID\":\"<urn:uuid:857b1529-12a0-45fa-b23c-34594a68be61>\",\"Content-Length\":\"28410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2875751f-5c71-435c-85c8-dfda08899e32>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b14a916-7f9c-442f-96a8-c9a8a286950d>\",\"WARC-IP-Address\":\"64.71.144.205\",\"WARC-Target-URI\":\"https://discourse.julialang.org/t/a-uniform-way-to-generate-a-random-element-based-on-a-given-probability-distritution-function-a-random-number-genarator-and-a-given-element-interval/19843\",\"WARC-Payload-Digest\":\"sha1:EYBOD2RRCHNCYCHE4EPI2GRRNAMD6MLO\",\"WARC-Block-Digest\":\"sha1:FNAGIRT66HZI4M7MKSHRQSS7D5NNI2GJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662594414.79_warc_CC-MAIN-20220525213545-20220526003545-00130.warc.gz\"}"}
http://dictionnaire.sensagent.leparisien.fr/Diffeomorphism/en-en/
[ " Diffeomorphism : définition de Diffeomorphism et synonymes de Diffeomorphism (anglais)\n\nPublicité ▼\n\n## définition - Diffeomorphism", null, "voir la définition de Wikipedia\n\n## locutions", null, "Diffeomorphism constraint • Large diffeomorphism • Local diffeomorphism • Representation theory of diffeomorphism groups\n\nPublicité ▼\n\nWikipedia\n\n# Diffeomorphism\n\nIn mathematics, a diffeomorphism is an isomorphism in the category of smooth manifolds. It is an invertible function that maps one differentiable manifold to another, such that both the function and its inverse are smooth.", null, "The image of a rectangular grid on a square under a diffeomorphism from the square onto itself.\n\n## Definition\n\nGiven two manifolds M and N, a bijective map f from M to N is called a diffeomorphism if both", null, "$f :M\\to N$\n\nand its inverse", null, "$f^{-1}:N\\to M$\n\nare differentiable (if these functions are r times continuously differentiable, f is called a", null, "$C^r$-diffeomorphism).\n\nTwo manifolds M and N are diffeomorphic (symbol usually being", null, "$\\simeq$) if there is a smooth bijective map f from M to N with a smooth inverse. They are", null, "$C^r$ diffeomorphic if there is an r times continuously differentiable bijective map between them whose inverse is also r times continuously differentiable.\n\n## Diffeomorphisms of subsets of manifolds\n\nGiven a subset X of a manifold M and a subset Y of a manifold N, a function f : XY is said to be smooth if for all p in X there is a neighborhood", null, "$U \\subset M$ of p and a smooth function g: UN such that the restrictions agree", null, "$g_{|U \\cap X} = f_{|U \\cap X}$ (note that g is an extension of f). We say that f is a diffeomorphism if it is bijective, smooth, and if its inverse is smooth.\n\n## Local description\n\nModel example: if U and V are two connected open subsets of Rn such that V is simply connected, a differentiable map f: UV is a diffeomorphism if it is proper and if\n\nRemarks\n• It is essential for U to be simply connected for the function f to be globally invertible (under the sole condition that its derivative is a bijective map at each point).\n• For example, consider the map", null, "$f:U\\ni(x,y)\\mapsto(x^2-y^2,2xy)\\in V$ (which is the \"realification\" of the complex square function) where U = V = R2 \\ {(0,0)}. Then the map f is surjective and its satisfies", null, "$\\det Df_x=4(x^2+y^2)\\neq0$ (thus Dfx is bijective at each point) yet f is not invertible, because it fails to be injective, e.g., f(1,0) = (1,0) = f(-1,0).\n• Since the differential at a point (for a differentiable function)", null, "$Df_x : T_xU \\to T_{f(x)}V$ is a linear map it has a well defined inverse if, and only if, Dfx is a bijection. The matrix representation of Dfx is the n × n matrix of first order partial derivatives whose entry in the i-th row and j-th colomn is", null, "$\\partial f_i / \\partial x_j$. We often use this so-called Jacobian matrix for explicit computations.\n• Diffeomorphisms are necessarily between manifolds of the same dimension. Imagine that f were going from dimension n to dimension k. If n < k then Dfx could never be surjective, and if n > k then Dfx could never be injective. So in both cases Dfx fails to be a bijection.\n• If Dfx is a bijection at x then we say that f is a local diffeomorphism (since by continuity Dfy will also be bijective for all y sufficiently close to x). 
If Dfx is a bijection for all x then we say that f is a (global) diffeomorphism.\n• Given a smooth map from dimension n to dimension k, if Df (resp. Dfx) is surjective then we say that f is a submersion (resp. local submersion), and if Df (resp. Dfx) is injective we say that f is an immersion (resp. local immersion).\n• A differentiable bijection is not necessarily a diffeomorphism, e.g. f(x) = x3 is not a diffeomorphism from R to itself because its derivative vanishes at 0 (and hence its inverse is not differentiable at 0). This is an example of a homeomorphism that is not a diffeomorphism.\n• f being a diffeomorphism is a stronger condition than f being a homeomorphism (when f is a map between differentiable manifolds). For a diffeomorphism we need f and its inverse to be differentiable. For a homeomorphism we only require that f and its inverse be continuous. Thus every diffeomorphism is a homeomorphism, but the converse is false: not every homeomorphism is a diffeomorphism.\n\nNow, f: MN is called a diffeomorphism if in coordinates charts it satisfies the definition above. More precisely, pick any cover of M by compatible coordinate charts, and do the same for N. Let φ and ψ be charts on M and N respectively, with U being the image of φ and V the image of ψ. Then the conditions says that the map ψ f φ−1: UV is a diffeomorphism as in the definition above (whenever it makes sense). One has to check that for every couple of charts φ, ψ of two given atlases, but once checked, it will be true for any other compatible chart. Again we see that dimensions have to agree.\n\n## Examples\n\nSince any manifold can be locally parametrised, we can consider some explicit maps from two-space into two-space.\n\n• Let", null, "$f(x,y) = (x^2 + y^3, x^2 - y^3)$. We can calculate the Jacobian matrix:", null, "$J_f = \\left( \\begin{array}{cc} 2x & 3y^2 \\\\ 2x & -3y^2 \\end{array} \\right) .$\n\nThe Jacobian matrix has zero determinant if, and only if xy = 0. We see that f is a diffeomorphism away from the x-axis and the y-axis.\n\n• Let", null, "$g(x,y) = (a_0 + a_{1,0}x + a_{0,1}y + \\cdots, b_0 + b_{1,0}x + b_{0,1}y + \\cdots)$ where the", null, "$a_{i,j}$ and", null, "$b_{i,j}$ are arbitrary real numbers, and the omitted terms are of degree at least two in x and y. We can calculate the Jacobian matrix at 0:", null, "$J_g(0,0) = \\left( \\begin{array}{cc} a_{1,0} & a_{0,1} \\\\ b_{1,0} & b_{0,1} \\end{array}\\right).$\n\nWe see that g is a local diffeomorphism at 0 if, and only if,", null, "$a_{1,0}b_{0,1} - a_{0,1}b_{1,0} \\neq 0$, i.e. the linear terms in the components of g are linearly independent as polynomials.\n\n• Now let", null, "$h(x,y) = (\\sin(x^2 + y^2), \\cos(x^2 + y^2))$. We can calculate the Jacobian matrix:", null, "$J_h = \\left( \\begin{array}{cc} 2x\\cos(x^2 + y^2) & 2y\\cos(x^2 + y^2) \\\\ -2x\\sin(x^2+y^2) & -2y\\sin(x^2 + y^2) \\end{array} \\right) .$\n\nThe Jacobian matrix has zero determinant everywhere! In fact we see that the image of h is the unit circle.\n\n## Diffeomorphism group\n\nLet M be a differentiable manifold that is second-countable and Hausdorff. The diffeomorphism group of M is the group of all Cr diffeomorphisms of M to itself, and is denoted by Diffr(M) or Diff(M) when r is understood. This is a 'large' group, in the sense that it is not locally compact (provided M is not zero-dimensional).\n\n### Topology\n\nThe diffeomorphism group has two natural topologies, called the weak and strong topology (Hirsch 1997). 
When the manifold is compact, these two topologies agree. The weak topology is always metrizable. When the manifold is not compact, the strong topology captures the behavior of functions \"at infinity\", and is not metrizable. It is, however, still Baire.\n\nFixing a Riemannian metric on M, the weak topology is the topology induced by the family of metrics", null, "$d_K(f,g) = \\sup_{x\\in K} d(f(x),g(x)) + \\sum_{1\\le p\\le r} \\sup_{x\\in K}\\|D^pf(x) - D^pg(x)\\|$\n\nas K varies over compact subsets of M. Indeed, since M is σ-compact, there is a sequence of compact subsets Kn whose union is M. Then, define", null, "$d(f,g) = \\sum_n 2^{-n}\\frac{d_{K_n}(f,g)}{1+d_{K_n}(f,g)}.$\n\nThe diffeomorphism group equipped with its weak topology is locally homeomorphic to the space of Cr vector fields (Leslie 1967). Over a compact subset of M, this follows by fixing a Riemannian metric on M and using the exponential map for that metric. If r is finite and the manifold is compact, the space of vector fields is a Banach space. Moreover, the transition maps from one chart of this atlas to another are smooth, making the diffeomorphism group into a Banach manifold. If r = ∞ or if the manifold is σ-compact, the space of vector fields is a Fréchet space. Moreover, the transition maps are smooth, making the diffeomorphism group into a Fréchet manifold.\n\n### Lie algebra\n\nIn particular, the Lie algebra of the diffeomorphism group of M consists of all vector fields on M, equipped with the Lie bracket of vector fields. Somewhat formally, this is seen by making a small change to the coordinate x at each point in space:", null, "$x^{\\mu} \\rightarrow x^{\\mu} + \\varepsilon h^{\\mu}(x)$\n\nso the infinitesimal generators are the vector fields", null, "$L_{h} = h^{\\mu}(x)\\frac{\\partial}{\\partial x_\\mu}.$\n\n### Examples\n\n• When M = G is a Lie group, there is a natural inclusion of G in its own diffeomorphism group via left-translation. Let Diff(G) denote the diffeomorphism group of G, then there is a splitting Diff(G) ≃ G × Diff(G,e) where Diff(G,e) is the subgroup of Diff(G) that fixes the identity element of the group.\n• The diffeomorphism group of Euclidean space Rn consists of two components, consisting of the orientation preserving and orientation reversing diffeomorphisms. In fact, the general linear group is a deformation retract of subgroup Diff(Rn,0) of diffeomorphisms fixing the origin under the map ƒ(x) ↦ ƒ(tx)/t, t ∈ (0,1]. Hence, in particular, the general linear group is also a deformation retract of the full diffeomorphism group as well.\n• For a finite set of points, the diffeomorphism group is simply the symmetric group. Similarly, if M is any manifold there is a group extension 0 → Diff0(M) → Diff(M) → Σ(π0M). Here Diff0(M)is the subgroup of Diff(M) that preserves all the components of M, and Σ(π0M) is the permutation group of the set π0M (the components of M). Moreover, the image of the map Diff(M) → Σ(π0M) is the bijections of π0M that preserve diffeomorphism classes.\n\n### Transitivity\n\nFor a connected manifold M the diffeomorphism group acts transitively on M. More generally, the diffeomorphism group acts transitively on the configuration space CkM. If the dimension of M is at least two the diffeomorphism group acts transitively on the configuration space FkM: the action on M is multiply transitive (Banyaga 1997, p. 
29).\n\n### Extensions of diffeomorphisms\n\nIn 1926, Tibor Radó asked whether the harmonic extension of any homeomorphism (or diffeomorphism) of the unit circle to the unit disc yields a diffeomorphism on the open disc. An elegant proof was provided shortly afterwards by Hellmuth Kneser and a completely different proof was discovered in 1945 by Gustave Choquet, apparently unaware that the theorem was already known.\n\nThe (orientation-preserving) diffeomorphism group of the circle is pathwise connected. This can be seen by noting that any such diffeomorphism can be lifted to a diffeomorphism f of the reals satisfying f(x+1) = f(x) +1; this space is convex and hence path connected. A smooth eventually constant path to the identity gives a second more elementary way of extending a diffeomorphism from the circle to the open unit disc (this is a special case of the Alexander trick). Moreover, the diffeomorphism group of the circle has the homotopy-type of the orthogonal group O(2).\n\nThe corresponding extension problem for diffeomorphisms of higher dimensional spheres Sn−1 was much studied in the 1950s and 1960s, with notable contributions from René Thom, John Milnor and Stephen Smale. An obstruction to such extensions is given by the finite Abelian group Γn, the \"group of twisted spheres\", defined as the quotient of the Abelian component group of the diffeomorphism group by the subgroup of classes extending to diffeomorphisms of the ball Bn.\n\n### Connectedness\n\nFor manifolds the diffeomorphism group is usually not connected. Its component group is called the mapping class group. In dimension 2, i.e. for surfaces, the mapping class group is a finitely presented group, generated by Dehn twists (Dehn, Lickorish, Hatcher).[citation needed] Max Dehn and Jakob Nielsen showed that it can be identified with the outer automorphism group of the fundamental group of the surface.\n\nWilliam Thurston refined this analysis by classifying elements of the mapping class group into three types: those equivalent to a periodic diffeomorphism; those equivalent to a diffeomorphism leaving a simple closed curve invariant; and those equivalent to pseudo-Anosov diffeomorphisms. In the case of the torus S1 x S1 = R2/Z2, the mapping class group is just the modular group SL(2,Z) and the classification reduces to the classical one in terms of elliptic, parabolic and hyperbolic matrices. Thurston accomplished his classification by observing that the mapping class group acted naturally on a compactification of Teichmüller space; since this enlarged space was homeomorphic to a closed ball, the Brouwer fixed-point theorem became applicable.\n\nIf M is an oriented smooth closed manifold, it was conjectured by Smale that the identity component of the group of orientation-preserving diffeomorphisms is simple. This had first been proved for a product of circles by Michel Herman; it was proved in full generality by Thurston.\n\n### Homotopy types\n\n• The diffeomorphism group of S2 has the homotopy-type of the subgroup O(3). 
This was proven by Steve Smale.\n• The diffeomorphism group of the torus has the homotopy-type of its linear automorphisms: (S1)2 × GL(2, Z).\n• The diffeomorphism groups of orientable surfaces of genus g > 1 have the homotopy-type of their mapping class groups—i.e.: the components are contractible.\n• The homotopy-type of the diffeomorphism groups of 3-manifolds are fairly well-understood via the work of Ivanov, Hatcher, Gabai and Rubinstein although there are a few outstanding open cases, primarily 3-manifolds with finite fundamental groups.\n• The homotopy-type of diffeomorphism groups of n-manifolds for n > 3 are poorly undersood. For example, it is an open problem whether or not Diff(S4) has more than two components. But via the work of Milnor, Kahn and Antonelli it's known that Diff(Sn) does not have the homotopy-type of a finite CW-complex provided n > 6.\n\n## Homeomorphism and diffeomorphism\n\nIt is easy to find a homeomorphism that is not a diffeomorphism, but it is more difficult to find a pair of homeomorphic manifolds that are not diffeomorphic. In dimensions 1, 2, 3, any pair of homeomorphic smooth manifolds are diffeomorphic. In dimension 4 or greater, examples of homeomorphic but not diffeomorphic pairs have been found. The first such example was constructed by John Milnor in dimension 7. He constructed a smooth 7-dimensional manifold (called now Milnor's sphere) that is homeomorphic to the standard 7-sphere but not diffeomorphic to it. There are in fact 28 oriented diffeomorphism classes of manifolds homeomorphic to the 7-sphere (each of them is a total space of the fiber bundle over the 4-sphere with the 3-sphere as the fiber).\n\nMuch more extreme phenomena occur for 4-manifolds: in the early 1980s, a combination of results due to Simon Donaldson and Michael Freedman led to the discovery of exotic R4s: there are uncountably many pairwise non-diffeomorphic open subsets of R4 each of which is homeomorphic to R4, and also there are uncountably many pairwise non-diffeomorphic differentiable manifolds homeomorphic to R4 that do not embed smoothly in R4." ]
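The Examples section above computes Jacobian determinants by hand. As a quick symbolic cross-check (my own addition, not part of the encyclopedia entry), SymPy reproduces both determinants quoted there:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# f(x, y) = (x^2 + y^3, x^2 - y^3): the determinant should vanish exactly when x*y = 0
f = sp.Matrix([x**2 + y**3, x**2 - y**3])
print(sp.factor(f.jacobian([x, y]).det()))        # -12*x*y**2

# h(x, y) = (sin(x^2 + y^2), cos(x^2 + y^2)): the determinant should be identically zero
h = sp.Matrix([sp.sin(x**2 + y**2), sp.cos(x**2 + y**2)])
print(sp.simplify(h.jacobian([x, y]).det()))      # 0
```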
[ null, "http://common.sensagent.eu/online-2010/images/fleche.gd.bleu.3.gif", null, "http://common.sensagent.eu/online-2010/images/fleche.gd.bleu.3.gif", null, "http://bin.sensegates.com/s/D/i/f/Diffeomorphism_of_a_square.svg.png", null, "http://bin.sensegates.com/s/f/0/c/f0cce74c577b4c8877c291d6f29522cc.png ", null, "http://bin.sensegates.com/s/2/c/0/2c0d0f3a723d4f18413774063f35bd0b.png ", null, "http://bin.sensegates.com/s/8/b/d/8bd63efa0967c9ae85f3f02fd76d6d9a.png ", null, "http://bin.sensegates.com/s/8/e/3/8e3648e8286c414a11b8f10a833d5596.png ", null, "http://bin.sensegates.com/s/8/b/d/8bd63efa0967c9ae85f3f02fd76d6d9a.png ", null, "http://bin.sensegates.com/s/7/5/2/752c6445c67b0a50297731269960e8f6.png ", null, "http://bin.sensegates.com/s/c/e/2/ce2e769e8d179df2397b6b611cba8e0c.png ", null, "http://bin.sensegates.com/s/7/c/c/7cc814b3eeb84b16d11a685f70aef2d5.png ", null, "http://bin.sensegates.com/s/7/d/1/7d19272cd2bda76ca1b547995dde1446.png ", null, "http://bin.sensegates.com/s/c/7/6/c762680f9e1a372c2aa5ef9874521dee.png ", null, "http://bin.sensegates.com/s/2/c/5/2c5baceddd4361a0662bb442031e5b06.png ", null, "http://bin.sensegates.com/s/0/a/1/0a15c94c2533c80ae199ca9b870790c1.png ", null, "http://bin.sensegates.com/s/4/7/9/479a02b8c0bf77e7a6ccc57273eb7668.png ", null, "http://bin.sensegates.com/s/d/3/8/d38bafa7b722341f5e46233e342f89a3.png ", null, "http://bin.sensegates.com/s/5/8/7/587eb95aa96f4f79c032daaf1f2686cc.png ", null, "http://bin.sensegates.com/s/8/0/7/807e0155cb45ea6188e441234f69583d.png ", null, "http://bin.sensegates.com/s/a/8/d/a8da14dd310cbc6e9bacc6c699c8c375.png ", null, "http://bin.sensegates.com/s/9/f/b/9fbde86749eeb762abc4fbe61cd57938.png ", null, "http://bin.sensegates.com/s/2/9/5/2956e664b53e27af45e2786998af9888.png ", null, "http://bin.sensegates.com/s/0/2/3/023a9f3b99607cfd9fcca2f4c9ff687c.png ", null, "http://bin.sensegates.com/s/e/f/5/ef51f7fb7f9df53ff578c90afaa14b1e.png ", null, "http://bin.sensegates.com/s/f/9/d/f9d4b8f11ec6e4163d790ba1ca26f958.png ", null, "http://bin.sensegates.com/s/1/a/2/1a2be898d54e50b69c360cf1bb90b9f1.png ", null, "http://bin.sensegates.com/s/8/c/5/8c5e0e8ddb9ba275eb64083534d56185.png ", null, "http://common-en.sensagent.com/images/alexandria/data/fleche.gd.bleu.3.gif", null, "http://common-fr.sensagent.com/common/online-2008/images/columns/dclic_en.bmp", null, "http://common-fr.sensagent.com/common/online-2008/images/columns/boggle.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7949816,"math_prob":0.97679305,"size":11363,"snap":"2020-45-2020-50","text_gpt3_token_len":2752,"char_repetition_ratio":0.15608768,"word_repetition_ratio":0.03218884,"special_character_ratio":0.19968319,"punctuation_ratio":0.093659945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972985,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,null,null,null,null,2,null,7,null,2,null,4,null,2,null,4,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,3,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T00:03:32Z\",\"WARC-Record-ID\":\"<urn:uuid:eb8cdd83-6cad-4825-8474-19ba5f14a1bd>\",\"Content-Length\":\"86234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64345b4a-8c01-4789-84d7-83ae542a2e72>\",\"WARC-Concurrent-To\":\"<urn:uuid:baa6bcad-bc00-4161-a8a8-6a3dd4c4950c>\",\"WARC-IP-Address\":\"109.205.64.10\",\"WARC-Target-URI\":\"http://dictionnaire.sensagent.leparisien.fr/Diffeomorphism/en-en/\",\"WARC-Payload-Digest\":\"sha1:HZKQCDZYKPWCHI4OM2QYYAMUCVTJQSL2\",\"WARC-Block-Digest\":\"sha1:AF66P3VFG7LGCUTXGLCJ2F6C5PKV443L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107902038.86_warc_CC-MAIN-20201028221148-20201029011148-00428.warc.gz\"}"}
https://www.24houranswers.com/college-homework-library/Computer-Science/C-Family-Programming/11059
[ "", null, "# C++ Project: Binary Sort\n\n##", null, "Question\n\nImagine that the company you work for is going to create a lot of tutorials on Fractions. You are to create a robust Fraction class that will have all of the following (all examples are for a fraction half that has a numerator of 1 and a denominator of 2):\n- Private integers numerator and denominator ;\n- All public getter and setter functions for the numerator and denominator;\n- Safeguard that the denominator will NEVER become 0!\n- a default constructor with no arguments;\n- a constructor that accepts both the numerator and denominator;\n- a toDecimal method that returns the decimal value of the fraction, example: 1/2 will be 0.5;\n- a toString method that will return the fraction as a string, example: 1/2 will be \"1/2\";\n- a reduce method that will change the numerator and denominator by finding a common denominator and reducing the fraction. Example 3/12 becomes 1/4;\n\n##", null, "Solution Preview\n\nThis material may consist of step-by-step explanations on how to solve a problem or examples of proper writing, including the use of citations, references, bibliographies, and formatting. This material is made available for the sole purpose of studying and learning - misuse is strictly forbidden.\n\n#include \"Fraction.h\"\n\nFraction::Fraction() {\nnumerator = 0;\ndenominator = 1;\n}\n\nFraction::Fraction(int numerator, int denominator){\nsetNumerator(numerator);\nsetDenominator(denominator);\n}\n\nFraction::~Fraction() {\n}\n\nvoid Fraction::setDenominator(int denominator) {\n// denominator cannot be zero\nif (denominator == 0) {\nthis->denominator = 1;\n}\nelse{\nthis->denominator = denominator;\n}\n}...\n\\$15.00 for this solution\n\nPayPal, G Pay, ApplePay, Amazon Pay, and all major credit cards accepted.\n\n### Find A Tutor\n\nView available C-Family Programming Tutors\n\nGet College Homework Help.\n\nAre you sure you don't want to upload any files?\n\nFast tutor response requires as much info as possible." ]
[ null, "https://www.facebook.com/tr", null, "https://www.24houranswers.com/images/icons/qa-icon.png", null, "https://www.24houranswers.com/images/icons/form-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75726855,"math_prob":0.6705378,"size":1330,"snap":"2021-21-2021-25","text_gpt3_token_len":293,"char_repetition_ratio":0.21568628,"word_repetition_ratio":0.010471204,"special_character_ratio":0.23383458,"punctuation_ratio":0.17105263,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98393327,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-07T01:11:25Z\",\"WARC-Record-ID\":\"<urn:uuid:a7352fb0-dedf-4342-883e-8a4eb85de62b>\",\"Content-Length\":\"67841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fedecd45-caaa-4401-9356-651557423781>\",\"WARC-Concurrent-To\":\"<urn:uuid:56310058-c5a8-4f5e-8d1f-77a6fa4450f8>\",\"WARC-IP-Address\":\"54.174.124.40\",\"WARC-Target-URI\":\"https://www.24houranswers.com/college-homework-library/Computer-Science/C-Family-Programming/11059\",\"WARC-Payload-Digest\":\"sha1:UCKG2TF7PK4BK7OXCEIJOYNY4Q5IHUQG\",\"WARC-Block-Digest\":\"sha1:AUX45RYXERXUMJFYQ3Q3JDQT4JPBOSML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988774.18_warc_CC-MAIN-20210506235514-20210507025514-00565.warc.gz\"}"}
https://grams-to-kilograms.appspot.com/583-grams-to-kilograms.html
[ "Grams To Kilograms\n\n# 583 g to kg583 Grams to Kilograms\n\ng\n=\nkg\n\n## How to convert 583 grams to kilograms?\n\n 583 g * 0.001 kg = 0.583 kg 1 g\nA common question is How many gram in 583 kilogram? And the answer is 583000.0 g in 583 kg. Likewise the question how many kilogram in 583 gram has the answer of 0.583 kg in 583 g.\n\n## How much are 583 grams in kilograms?\n\n583 grams equal 0.583 kilograms (583g = 0.583kg). Converting 583 g to kg is easy. Simply use our calculator above, or apply the formula to change the length 583 g to kg.\n\n## Convert 583 g to common mass\n\nUnitMass\nMicrogram583000000.0 µg\nMilligram583000.0 mg\nGram583.0 g\nOunce20.5647198166 oz\nPound1.2852949885 lbs\nKilogram0.583 kg\nStone0.0918067849 st\nUS ton0.0006426475 ton\nTonne0.000583 t\nImperial ton0.0005737924 Long tons\n\n## What is 583 grams in kg?\n\nTo convert 583 g to kg multiply the mass in grams by 0.001. The 583 g in kg formula is [kg] = 583 * 0.001. Thus, for 583 grams in kilogram we get 0.583 kg.\n\n## 583 Gram Conversion Table", null, "## Alternative spelling\n\n583 g to kg, 583 g in kg, 583 g to Kilogram, 583 g in Kilogram, 583 Grams to Kilogram, 583 Grams in Kilogram, 583 g to Kilograms, 583 g in Kilograms, 583 Grams to Kilograms, 583 Grams in Kilograms, 583 Gram to kg, 583 Gram in kg, 583 Grams to kg, 583 Grams in kg" ]
[ null, "https://grams-to-kilograms.appspot.com/image/583.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8003961,"math_prob":0.9878429,"size":746,"snap":"2023-40-2023-50","text_gpt3_token_len":239,"char_repetition_ratio":0.25067386,"word_repetition_ratio":0.0,"special_character_ratio":0.38873994,"punctuation_ratio":0.15384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98000073,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T03:37:24Z\",\"WARC-Record-ID\":\"<urn:uuid:bafd4596-1beb-4327-9e6c-e3e01293b644>\",\"Content-Length\":\"29056\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:588f1033-bf6c-4e89-b512-a9d2c2219e39>\",\"WARC-Concurrent-To\":\"<urn:uuid:98354940-dbdd-4dcd-a75a-2cc82662cc79>\",\"WARC-IP-Address\":\"142.251.167.153\",\"WARC-Target-URI\":\"https://grams-to-kilograms.appspot.com/583-grams-to-kilograms.html\",\"WARC-Payload-Digest\":\"sha1:GKAO4SKPB3EMMVNQKN2MRX4OTSJRM5YG\",\"WARC-Block-Digest\":\"sha1:J5QXFY3UF7AMWEN7ZERSVBEKXL343PDJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510967.73_warc_CC-MAIN-20231002033129-20231002063129-00041.warc.gz\"}"}
https://www.ncatlab.org/nlab/show/spacetime
[ "nLab spacetime\n\nContents\n\nContext\n\nRiemannian geometry\n\nRiemannian geometry\n\nContents\n\nIdea\n\nGeneral\n\nA spacetime is a manifold that models space and time in physics.\n\nThis is formalized by saying that a spacetime is a smooth Lorentzian space $(X,\\mu)$ equipped with a time orientation (see there).\n\nHence a point in a spacetime is called an event.\n\nIn the context of classical general relativity a spacetime is usually in addition assumed to be connected and four-dimensional. A connected Lorentzian manifold is either time orientable or it has a two-sheeted covering which is time orientable.\n\nIn classical physics, notably in special relativity and general relativity points in $X$ model coordinates where events can take place from the viewpoint of an observer (“points in space and time”) while the metric $\\mu$ models the field of gravity in general relativity.\n\nIntermingling of space and time\n\nThe noun “spacetime” is used in both special relativity and general relativity, but is best motivated from the viewpoint of general relativity. Special relativity deals with the Minkowski spacetime only. The Minkowski spacetime allows a canonical choice of global coordinates such that the metric tensor has in every point the form diag(-1, 1, 1, 1), which identifies the first coordinate as representing the time coordinate and the others as representing space coordinates.\n\nGiven a general spacetime, there is not necessarily a globally defined coordinate system, and therefore not necessarily a globally defined canonical time coordinate. More specifically, there are spacetimes that admit coordinates defined on subsets where the physical interpretation of the coordinates as modelling time and space coordinates changes over the domain of definition.\n\n(TODO: references and explanations).\n\nExamples\n\nBooks\n\n• S. W. Hawking, G. F. R. Ellis, The large scale structure of space-time, Cambridge Univ. Press\n• John Beem, Paul Ehrlich, Global Lorentzian geometry, Marcel Dekker 1981 (and Russian, updated translation, Mir 1985)\n\nArticles\n\n• L. Markus, Line element fields and Lorentz structures on differentiable manifolds, Ann. of Math. (2) 62 (1955), 411–417, MR0073169 jstor\n\n• Roger Penrose, Gravitational collapse and space-time singularities, Phys. Rev. Lett. 14, 57–59\n\ncategory: physics, geometry" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84033024,"math_prob":0.98000854,"size":2458,"snap":"2019-26-2019-30","text_gpt3_token_len":537,"char_repetition_ratio":0.16462918,"word_repetition_ratio":0.1037464,"special_character_ratio":0.19528072,"punctuation_ratio":0.10344828,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.969292,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T01:36:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c84f4761-c8c5-4ea7-8d14-b9c28c6a95cf>\",\"Content-Length\":\"31829\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d2525aa-318f-40bf-88fe-8aad8abf5019>\",\"WARC-Concurrent-To\":\"<urn:uuid:de3f02e3-b2b2-44e4-bd27-0792ed00582f>\",\"WARC-IP-Address\":\"104.27.170.19\",\"WARC-Target-URI\":\"https://www.ncatlab.org/nlab/show/spacetime\",\"WARC-Payload-Digest\":\"sha1:UOTA2UFJYBTIZZSW2LJGSTZW62AG4FED\",\"WARC-Block-Digest\":\"sha1:JGXNGO7MVZGSCDYKRNOJDAELV3QGWQMK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999130.50_warc_CC-MAIN-20190620004625-20190620030625-00534.warc.gz\"}"}
https://excelchamps.com/vba/functions/fix/
[ "# VBA FIX Function (Syntax + Example)\n\nHomeVBATop VBA FunctionsVBA FIX Function (Syntax + Example)\n\nThe VBA FIX function is listed under the math category of VBA functions. When you use it in a VBA code, it can truncate a supplied number to an integer. In simple words, it returns an integer in the result after ignoring decimal values from the original number. It’s almost the same as the VBA INT function.\n\nTable of Content\n\nFix(Number)\n\n## Arguments\n\n• Number: The numeric value for which you want to get the integer part.\n\n## Example\n\nTo practically understand how to use VBA FIX function, you need to go through the below example where we have written a vba code by using it:\n\n``````Sub example_FIX()\nRange(\"B1\").Value = Fix(Range(\"A1\"))\nEnd Sub``````\n\nIn the above example, we have used the FIX to truncate the number from the cell A1 (-98.12) and it has returned -98 in the cell B1." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77118164,"math_prob":0.95645183,"size":1097,"snap":"2022-27-2022-33","text_gpt3_token_len":271,"char_repetition_ratio":0.13174748,"word_repetition_ratio":0.0,"special_character_ratio":0.24703738,"punctuation_ratio":0.07009346,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996484,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T15:50:01Z\",\"WARC-Record-ID\":\"<urn:uuid:1cbc6700-74a7-4a8d-8be8-716ddf299f3e>\",\"Content-Length\":\"174694\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb35f6cb-744d-4335-9222-745a9d706e0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:255ce43d-bb15-470d-8832-71ed6af91644>\",\"WARC-IP-Address\":\"162.159.134.42\",\"WARC-Target-URI\":\"https://excelchamps.com/vba/functions/fix/\",\"WARC-Payload-Digest\":\"sha1:JE7WD4Y45TKGCM3H3DYNTEZ3M4NAFRS4\",\"WARC-Block-Digest\":\"sha1:HFIMRDLJZTUJEZKV7YMNVSHOWYJPF3ZO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104244535.68_warc_CC-MAIN-20220703134535-20220703164535-00137.warc.gz\"}"}
https://quant.stackexchange.com/questions/8494/cointegrating-relationships-johansen-in-r
[ "# Cointegrating relationships - Johansen in R\n\nI read the posts, How to interpret results of Johansen Test? and How to interpret the eigenmatrix from a Johansen cointegration test? But still I am quite confused by the output. I have a project with two series: I don't reject both H0, therefore I'd say there is no cointegration.\n\nJohansen Procedure:\n\n• Test type: trace statistic, with linear trend.\n• Eigenvalues (lambda):\n 0.0189039550 0.0008903665\n\n• Values of test statistic and critical values of test:\n test 10pct 5pct 1pct\nr <= 1 | 0.39 6.50 8.18 11.65\nr = 0 | 8.65 15.66 17.95 23.52\n• Eigenvectors, normalised to first column (these are the cointegration relations):\n Oil.l1 Fuel.l1\nOil.l1 1.000000 1.0000\nFuel.l1 -1.484484 -11.1973\n Oil.l1 Fuel.l1\nOil.d -0.049059881 0.0002693549\nFuel.d 0.002111537 0.0002467205\n\nHowever, I'd like to impose one. Thus, I want to read alpha and beta. From what I understand these are the vectors below the largest eigenvalue? i.e. here, beta is (1, -1.48) and alpha is (-0.049, 0.002). But, if I want to build a cointegrating relationship, then are there two of them (below), or only one (the upper one)? I believe that lower one is very unrealistic due to low eigenvalue (first one too but we impose its not):\n\nOil.l1 - 1.48*Fuel.l1\nOil.l1 - 11.19*Fuel.l1\n\nAlso, to get the Gamma(j) matrices for differenced data for Vector Error Correction Form, I do the following:\n\nECF = ca.jo(ldata, type=\"trace\", spec=\"transitory\", K=14)\nvec2var(ECF,r=1) #r = 1 for cointegration rank\n\nAccording to theory there should be (p-1) matrices, i.e. 13 but I get 14. Should I simply ignore the last one?\n\nI'd be extremely thankful for help!\n\nTo get the VECM-form, you need to to use the command cajorls()(restricted) or cajoorls()(unrestricted). The vec2var() gives you a level (undifferenced) representation of the VECM. In a VECM you'll have 13 $(p-1)$ lags per variable. I think you will find the help on the commands, ca.jo, vec2var, cajorls and cajools very helpful." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83127266,"math_prob":0.76038355,"size":1645,"snap":"2020-34-2020-40","text_gpt3_token_len":519,"char_repetition_ratio":0.100548446,"word_repetition_ratio":0.0,"special_character_ratio":0.34954408,"punctuation_ratio":0.20716113,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97544044,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T03:00:57Z\",\"WARC-Record-ID\":\"<urn:uuid:cd911353-5972-429d-86b3-ee55f6089d93>\",\"Content-Length\":\"146293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8936b7e5-fb5f-4e50-9e48-fe9414579861>\",\"WARC-Concurrent-To\":\"<urn:uuid:8623bfbd-0950-4d10-96ec-ba063891d202>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://quant.stackexchange.com/questions/8494/cointegrating-relationships-johansen-in-r\",\"WARC-Payload-Digest\":\"sha1:5SDQEMDENUMHMPNGUSV6BRFN3DXDOSK6\",\"WARC-Block-Digest\":\"sha1:QPOS2TMZD5C3W7HVSRNAKZOGMICAVL5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400209665.4_warc_CC-MAIN-20200923015227-20200923045227-00740.warc.gz\"}"}
https://www.enotes.com/homework-help/perfectly-competitive-firm-market-price-10-output-331807
[ "# A perfectly competitive firm can sell a product at a market price of \\$10. For an output X, with total costs are TC = 10 + 2X + .25X^2. How many units should they produce to maximize profit?", null, "A perfectly competitive firm can sell its product at a market price of \\$10 per unit. The total costs incurred by the firm if X products are produced is given by TC = 10 + 2X + 0.25X^2. The revenue earned when X units are sold is 10*X. This gives the profit made when X units are sold as P = 10X - 10 - 2X - 0.25*X^2 = 8X - 10 - 0.25*X^2.\n\nTo determine the number of units that need to be produced to maximize profits, the first derivative of P with respect to X, P', has to be determined, this should be equated to 0 and the resulting equation solved for X.\n\nP' = 8 - 0.5X\n\n8 - 0.5X = 0\n\n=> X = 16\n\nThe firm should produce 16 units to maximize its profits. The maximum profits earned by the firm are \\$54.\n\nApproved by eNotes Editorial Team" ]
[ null, "https://static.enotescdn.net/images/main/illustrations/illo-answer.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9438818,"math_prob":0.99702716,"size":1033,"snap":"2021-31-2021-39","text_gpt3_token_len":296,"char_repetition_ratio":0.110787176,"word_repetition_ratio":0.009803922,"special_character_ratio":0.31848985,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99811274,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T22:14:22Z\",\"WARC-Record-ID\":\"<urn:uuid:3d53ace0-82d2-4be0-a4cc-705881f4db6f>\",\"Content-Length\":\"71412\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22fc13ad-4822-4aa5-aa9a-be4cb5575964>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c445039-cdf7-4783-a5dd-ea630ec01299>\",\"WARC-IP-Address\":\"104.26.5.75\",\"WARC-Target-URI\":\"https://www.enotes.com/homework-help/perfectly-competitive-firm-market-price-10-output-331807\",\"WARC-Payload-Digest\":\"sha1:O722Z7P4DTCB57AGIZCTNUY3AZQVDWKM\",\"WARC-Block-Digest\":\"sha1:RQ6W4SRVTBY6X2P6X2BO3MOATY6YQ2YQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152156.49_warc_CC-MAIN-20210726215020-20210727005020-00557.warc.gz\"}"}
https://drops.dagstuhl.de/opus/volltexte/2016/6340/
[ "", null, "License:", null, "Creative Commons Attribution 3.0 Unported license (CC-BY 3.0)\nwhen quoting this document, please refer to the following\nDOI:\nURN: urn:nbn:de:0030-drops-63401\nURL:\n\n; ;\n\n### Diameter and k-Center in Sliding Windows\n\n pdf-format:\n\n### Abstract\n\nIn this paper we develop streaming algorithms for the diameter problem and the k-center clustering problem in the sliding window model. In this model we are interested in maintaining a solution for the N most recent points of the stream. In the diameter problem we would like to maintain two points whose distance approximates the diameter of the point set in the window. Our algorithm computes a (3 + epsilon)-approximation and uses O(1/epsilon*ln(alpha)) memory cells, where alpha is the ratio of the largest and smallest distance and is assumed to be known in advance. We also prove that under reasonable assumptions obtaining a (3 - epsilon)-approximation requires Omega(N1/3) space.\n\nFor the k-center problem, where the goal is to find k centers that minimize the maximum distance of a point to its nearest center, we obtain a (6 + epsilon)-approximation using O(k/epsilon*ln(alpha)) memory cells and a (4 + epsilon)-approximation for the special case k = 2. We also prove that any algorithm for the 2-center problem that achieves an approximation ratio of less than 4 requires Omega(N^{1/3}) space.\n\n### BibTeX - Entry\n\n```@InProceedings{cohenaddad_et_al:LIPIcs:2016:6340,\ntitle =\t{{Diameter and k-Center in Sliding Windows}},\nbooktitle =\t{43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)},\npages =\t{19:1--19:12},\nseries =\t{Leibniz International Proceedings in Informatics (LIPIcs)},\nISBN =\t{978-3-95977-013-2},\nISSN =\t{1868-8969},\nyear =\t{2016},\nvolume =\t{55},\neditor =\t{Ioannis Chatzigiannakis and Michael Mitzenmacher and Yuval Rabani and Davide Sangiorgi},\npublisher =\t{Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},\nDROPS-Home | Fulltext Search | Imprint | Privacy", null, "" ]
[ null, "https://drops.dagstuhl.de/opus/Icons/drops-logo.png", null, "https://drops.dagstuhl.de/opus/Icons/by.png", null, "https://drops.dagstuhl.de/opus/Icons/logo_t.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.747775,"math_prob":0.9106935,"size":2308,"snap":"2022-05-2022-21","text_gpt3_token_len":633,"char_repetition_ratio":0.10329861,"word_repetition_ratio":0.02507837,"special_character_ratio":0.2768631,"punctuation_ratio":0.15827338,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691888,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T17:56:15Z\",\"WARC-Record-ID\":\"<urn:uuid:fa753224-a45e-43b0-96d7-07b00da66e60>\",\"Content-Length\":\"10129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92117ded-9acf-430c-89a7-26c6cdb3667a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f172074e-b831-476b-8958-31b921f383e7>\",\"WARC-IP-Address\":\"192.76.146.6\",\"WARC-Target-URI\":\"https://drops.dagstuhl.de/opus/volltexte/2016/6340/\",\"WARC-Payload-Digest\":\"sha1:N3WFRUMZFG5HKQXBBZM6XKXVPSA7UEIT\",\"WARC-Block-Digest\":\"sha1:4OGY2WRCD7KPZTRFGADCRCBNFUJRW3FS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512229.26_warc_CC-MAIN-20220516172745-20220516202745-00098.warc.gz\"}"}
https://routersecurity.org/SurfSOHO.audit.log.fw.802.php
[ "Router Security Peplink Audit Trail Website by     Michael Horowitz\nSee my new website: DefensiveComputingChecklist.com\n\n### The Event Log from one Peplink router auditing another\n\nThis test was done in March 2020 using firmware 8.0.2. It was not the first time I have run a test like this on a Peplink/Pepwave router, but it had been a long time between tests. The last test also showed that that the router does not phone home to Peplink. Many, probably most, routers do report back to their manufacturer. Synology is the worst I have seen at that, their router was constantly communicating with them despite my best efforts. Many routers require you to have an account with the manufacturer. Peplink does not.\n\nThe test was done by connecting the WAN port of the router being audited, the inner router if you will (a Surf SOHO) to a LAN port of the outer router (also a Surf SOHO). The outer router logged every outgoing connection made by the inner router (IP address 192.168.7.77). What you see below is the Event Log from the outer router. No devices were using the inner router at all. InControl2, the Peplink cloud service, was disabled on the inner router.\n\nMar 22 03:10:02 CONN=lan SRC=192.168.7.77 DST=104.25.204.4 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=12571 DF PROTO=TCP SPT=49005 DPT=443 WINDOW=5600 RES=0x00 SYN URGP=0 MARK=0x2\n\nMar 23 03:10:02 CONN=lan SRC=192.168.7.77 DST=104.25.205.4 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=31629 DF PROTO=TCP SPT=47967 DPT=443 WINDOW=5600 RES=0x00 SYN URGP=0 MARK=0x2\n\nThe rest is shown below. The router phones home for the Time of Day every 30 minutes. It makes UDP requests to port 123, standard fare for the Time of Day service. Each time, it makes four requests. It was using the default time server, 0.pepwave.pool.ntp.org. You can use any time server you prefer.\n\nNOTES: My audit ran for over two days, only a few hours are shown here because the rest is just more of the same. In the listing below DPT is Destination Port and DST is the Destination IP address. SRC is the IP address of the router being audited. 
From the perspective of the router doing the auditing (the outer one) it is just another device on the LAN and thus has a LAN side IP address.\n\nMar 20 21:26:00 CONN=lan SRC=192.168.7.77 DST=216.229.4.69 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49871 DPT=123 LEN=56 MARK=0x2\nMar 20 21:26:00 CONN=lan SRC=192.168.7.77 DST=50.205.244.39 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49871 DPT=123 LEN=56 MARK=0x2\nMar 20 21:26:00 CONN=lan SRC=192.168.7.77 DST=23.129.64.159 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49871 DPT=123 LEN=56 MARK=0x2\nMar 20 21:26:00 CONN=lan SRC=192.168.7.77 DST=216.126.233.109 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49871 DPT=123 LEN=56 MARK=0x2\n\nMar 20 20:55:53 CONN=lan SRC=192.168.7.77 DST=23.152.160.126 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=44276 DPT=123 LEN=56 MARK=0x2\nMar 20 20:55:53 CONN=lan SRC=192.168.7.77 DST=74.122.204.3 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=44276 DPT=123 LEN=56 MARK=0x2\nMar 20 20:55:53 CONN=lan SRC=192.168.7.77 DST=108.61.56.35 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=44276 DPT=123 LEN=56 MARK=0x2\nMar 20 20:55:53 CONN=lan SRC=192.168.7.77 DST=74.208.235.60 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=44276 DPT=123 LEN=56 MARK=0x2\n\nMar 20 20:25:46 CONN=lan SRC=192.168.7.77 DST=50.18.44.198 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49345 DPT=123 LEN=56 MARK=0x2\nMar 20 20:25:46 CONN=lan SRC=192.168.7.77 DST=23.31.21.164 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49345 DPT=123 LEN=56 MARK=0x2\nMar 20 20:25:46 CONN=lan SRC=192.168.7.77 DST=72.87.88.203 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49345 DPT=123 LEN=56 MARK=0x2\nMar 20 20:25:46 CONN=lan SRC=192.168.7.77 DST=162.159.200.123 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49345 DPT=123 LEN=56 MARK=0x2\n\nMar 20 19:55:39 CONN=lan SRC=192.168.7.77 DST=45.79.1.70 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=54383 DPT=123 LEN=56 MARK=0x2\nMar 20 19:55:39 CONN=lan SRC=192.168.7.77 DST=74.6.168.73 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=54383 DPT=123 LEN=56 MARK=0x2\nMar 20 19:55:39 CONN=lan SRC=192.168.7.77 DST=54.236.224.171 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=54383 DPT=123 LEN=56 MARK=0x2\nMar 20 19:55:38 CONN=lan SRC=192.168.7.77 DST=72.14.183.239 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=54383 DPT=123 LEN=56 MARK=0x2\n\nMar 20 19:25:32 CONN=lan SRC=192.168.7.77 DST=158.51.134.123 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=34256 DPT=123 LEN=56 MARK=0x2\nMar 20 19:25:32 CONN=lan SRC=192.168.7.77 DST=140.82.60.75 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=34256 DPT=123 LEN=56 MARK=0x2\nMar 20 19:25:31 CONN=lan SRC=192.168.7.77 DST=138.68.46.177 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=34256 DPT=123 LEN=56 MARK=0x2\nMar 20 19:25:31 CONN=lan SRC=192.168.7.77 DST=23.31.21.163 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=34256 DPT=123 LEN=56 MARK=0x2\n\nMar 20 18:55:25 CONN=lan SRC=192.168.7.77 DST=198.60.22.240 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=51349 DPT=123 LEN=56 MARK=0x2\nMar 20 18:55:24 CONN=lan SRC=192.168.7.77 DST=47.190.36.235 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=51349 DPT=123 LEN=56 MARK=0x2\nMar 20 18:55:24 CONN=lan SRC=192.168.7.77 DST=138.236.128.112 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=51349 DPT=123 LEN=56 MARK=0x2\nMar 20 18:55:24 CONN=lan SRC=192.168.7.77 
DST=193.29.63.150 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=51349 DPT=123 LEN=56 MARK=0x2\n\nMar 20 18:25:18 CONN=lan SRC=192.168.7.77 DST=69.164.198.192 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=60381 DPT=123 LEN=56 MARK=0x2\nMar 20 18:25:17 CONN=lan SRC=192.168.7.77 DST=108.53.168.46 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=60381 DPT=123 LEN=56 MARK=0x2\nMar 20 18:25:17 CONN=lan SRC=192.168.7.77 DST=206.55.191.142 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=60381 DPT=123 LEN=56 MARK=0x2\nMar 20 18:25:17 CONN=lan SRC=192.168.7.77 DST=69.89.207.199 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=60381 DPT=123 LEN=56 MARK=0x2\n\nMar 20 17:55:10 CONN=lan SRC=192.168.7.77 DST=165.22.39.103 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=45355 DPT=123 LEN=56 MARK=0x2\nMar 20 17:55:10 CONN=lan SRC=192.168.7.77 DST=45.79.36.123 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=45355 DPT=123 LEN=56 MARK=0x2\nMar 20 17:55:10 CONN=lan SRC=192.168.7.77 DST=216.229.0.49 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=45355 DPT=123 LEN=56 MARK=0x2\nMar 20 17:55:10 CONN=lan SRC=192.168.7.77 DST=23.31.21.163 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=45355 DPT=123 LEN=56 MARK=0x2\n\nMar 20 17:25:03 CONN=lan SRC=192.168.7.77 DST=138.236.128.112 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=36982 DPT=123 LEN=56 MARK=0x2\nMar 20 17:25:03 CONN=lan SRC=192.168.7.77 DST=45.76.244.202 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=36982 DPT=123 LEN=56 MARK=0x2\nMar 20 17:25:03 CONN=lan SRC=192.168.7.77 DST=204.11.201.10 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=36982 DPT=123 LEN=56 MARK=0x2\nMar 20 17:25:02 CONN=lan SRC=192.168.7.77 DST=206.55.191.142 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=36982 DPT=123 LEN=56 MARK=0x2\n\nMar 20 16:54:56 CONN=lan SRC=192.168.7.77 DST=64.22.253.155 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=37815 DPT=123 LEN=56 MARK=0x2\nMar 20 16:54:56 CONN=lan SRC=192.168.7.77 DST=204.93.207.12 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=37815 DPT=123 LEN=56 MARK=0x2\nMar 20 16:54:56 CONN=lan SRC=192.168.7.77 DST=129.250.35.250 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=37815 DPT=123 LEN=56 MARK=0x2\nMar 20 16:54:55 CONN=lan SRC=192.168.7.77 DST=162.159.200.1 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=37815 DPT=123 LEN=56 MARK=0x2\n\nMar 20 16:24:48 CONN=lan SRC=192.168.7.77 DST=185.117.82.70 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=40946 DPT=123 LEN=56 MARK=0x2\nMar 20 16:24:47 CONN=lan SRC=192.168.7.77 DST=178.79.160.57 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=40946 DPT=123 LEN=56 MARK=0x2\nMar 20 16:24:47 CONN=lan SRC=192.168.7.77 DST=85.114.128.137 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=40946 DPT=123 LEN=56 MARK=0x2\nMar 20 16:24:47 CONN=lan SRC=192.168.7.77 DST=88.212.196.95 LEN=76 TOS=0x00 PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=40946 DPT=123 LEN=56 MARK=0x2" ]
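To summarise which destinations and ports show up in an event log like the one above, a small parser can help. This helper is my own and simply assumes the exact line format shown (DST=... and DPT=... fields):

```python
import re
from collections import Counter

pattern = re.compile(r"DST=(?P<dst>[\d.]+).*?DPT=(?P<dpt>\d+)")

def summarize(log_lines):
    # count (destination IP, destination port) pairs across the log
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[(match.group("dst"), match.group("dpt"))] += 1
    return counts

sample = [
    "Mar 20 21:26:00 CONN=lan SRC=192.168.7.77 DST=216.229.4.69 LEN=76 TOS=0x00 "
    "PREC=0x00 TTL=63 ID=0 DF PROTO=UDP SPT=49871 DPT=123 LEN=56 MARK=0x2",
]
print(summarize(sample))   # Counter({('216.229.4.69', '123'): 1})
```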
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5791743,"math_prob":0.92911714,"size":9630,"snap":"2020-34-2020-40","text_gpt3_token_len":4158,"char_repetition_ratio":0.27488053,"word_repetition_ratio":0.41017732,"special_character_ratio":0.4938733,"punctuation_ratio":0.16333212,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99818206,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T17:25:26Z\",\"WARC-Record-ID\":\"<urn:uuid:287ded18-6cb9-45db-b2fb-56517217a2b3>\",\"Content-Length\":\"18172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50798cb8-f325-42e3-a8f8-02257649425d>\",\"WARC-Concurrent-To\":\"<urn:uuid:91c85a19-e946-4c40-bcc1-7630e720b98a>\",\"WARC-IP-Address\":\"216.92.136.14\",\"WARC-Target-URI\":\"https://routersecurity.org/SurfSOHO.audit.log.fw.802.php\",\"WARC-Payload-Digest\":\"sha1:TAQYUXWFUPTCMPJPVFIGVRQPFULZTSPR\",\"WARC-Block-Digest\":\"sha1:ZVTY6M4VNSUEMIBKUI34OO25WHI7G5YI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202418.22_warc_CC-MAIN-20200929154729-20200929184729-00281.warc.gz\"}"}
https://byjus.com/question-answer/the-ratio-of-squares-of-first-n-natural-numbers-to-square-of-sum-of-first-1/
[ "", null, "", null, "Question\n\n# The ratio of squares of first n natural numbers to square of sum of first n natural numbers is17:325. The value of n is:24 25 26 none\n\nSolution\n\n## The correct option is B 25 Sum of squares of first n natural numbers=n(n+1)(2n+1)6 Sum of first n natural numbers =n(n+1)2 So solving the given equation we will get n=25", null, "", null, "Suggest corrections", null, "", null, "", null, "" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNDQiIGhlaWdodD0iNDQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMzIiIGhlaWdodD0iMzIiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMjQiIGhlaWdodD0iMjQiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmVyc2lvbj0iMS4xIi8+", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.795716,"math_prob":0.99927145,"size":280,"snap":"2021-43-2021-49","text_gpt3_token_len":84,"char_repetition_ratio":0.1884058,"word_repetition_ratio":0.14285715,"special_character_ratio":0.31428573,"punctuation_ratio":0.04477612,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990688,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T19:34:00Z\",\"WARC-Record-ID\":\"<urn:uuid:d77e9189-1dd2-479b-81d4-7d487849e2bf>\",\"Content-Length\":\"144826\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6579fdb-af41-4cff-89a6-2c50b5683eeb>\",\"WARC-Concurrent-To\":\"<urn:uuid:b129c887-9b6a-4c5c-9d4d-77d2f11e4927>\",\"WARC-IP-Address\":\"162.159.130.41\",\"WARC-Target-URI\":\"https://byjus.com/question-answer/the-ratio-of-squares-of-first-n-natural-numbers-to-square-of-sum-of-first-1/\",\"WARC-Payload-Digest\":\"sha1:C2FZPB7OPG67DPC5RQSJ5AGV5PKBN5ZC\",\"WARC-Block-Digest\":\"sha1:4JUDUDNSM3ZQLOFSX3BRXYW7GFTM6DBS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362287.26_warc_CC-MAIN-20211202175510-20211202205510-00294.warc.gz\"}"}
https://www.xszz.org/faq-2/question-2018101939126.html
[ "6310", null, "# serial await requests in a return run in parallel?\n\nHaving a discussion with someone and came across this oddity:\n\n```const wait = async () => new Promise(resolve => setTimeout(resolve, 1000)); async function case1() { const {a, b} = {a: await wait(), b: await wait()}; return {a, b}; } async function case2() { return {a: await wait(), b: await wait()}; } async function case3() { const {a, b} = {a: wait(), b: wait()}; return {a: await a, b: await b}; } async function case4() { const {a, b} = {a: wait(), b: wait()}; const {c, d} = {c: await a, d: await b}; return {c, d}; } function test() { const start = new Date(); case1().then(() => console.log('case1:', +new Date() - start)); case2().then(() => console.log('case2:', +new Date() - start)); case3().then(() => console.log('case3:', +new Date() - start)); case4().then(() => console.log('case4:', +new Date() - start)); } ```\n\n`case1` and `case2` both run in 2 seconds. `case3` and `case4` run in 1 second.\n\nIs there some weird implicit `Promise.all` or something??\n\nYou call the function `wait()` without utilizing `await` at `case3` and `case4`. That is the difference.\n\nIn case#3 the `wait()` functions are called immediately, so there is only 1 second of timeout (for both of them), while in the other two (case#1 and case#2) the `await` will \"do it's job\" and wait for the async call to return.\n\nAs you can see here, the `console.log(Date())` is being called immediately for both calls. <div class=\"snippet\" data-lang=\"js\" data-hide=\"false\" data-console=\"true\" data-babel=\"false\"> <div class=\"snippet-code\">\n\n``````const wait = async () => new Promise(resolve => console.log(Date()) || setTimeout(resolve, 1000));\n\nasync function case3() {\nconst {a, b} = {a: wait(), b: wait()};\nreturn {a: await a, b: await b};\n}\n\nfunction test() {\nconst start = new Date();\ncase3().then(() => console.log('case3:', +new Date() - start));\n}\ntest();```\n\nAnd here it is being synchronized using the `await`:\n\n<div class=\"snippet\" data-lang=\"js\" data-hide=\"false\" data-console=\"true\" data-babel=\"false\">\n<div class=\"snippet-code\">\n```const wait = async () => new Promise(resolve => console.log(Date()) || setTimeout(resolve, 1000));\n\nasync function case1() {\nconst {a, b} = {a: await wait(), b: await wait()};\nreturn {a, b};\n}\n\nfunction test() {\nconst start = new Date();\n\ncase1().then(() => console.log('case1:', +new Date() - start));\n}\ntest();```\n\n```" ]
[ null, "https://www.xszz.org/skin/wt/rpic/t6.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5394761,"math_prob":0.9613245,"size":2297,"snap":"2020-34-2020-40","text_gpt3_token_len":648,"char_repetition_ratio":0.15220235,"word_repetition_ratio":0.44,"special_character_ratio":0.36264694,"punctuation_ratio":0.22929937,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590143,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T05:46:36Z\",\"WARC-Record-ID\":\"<urn:uuid:6cb5c044-59fb-430d-bad7-4e8aa6680dea>\",\"Content-Length\":\"57003\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b14ba0cb-786c-4ba9-baf2-29e9eb0bda38>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6510684-a962-40b6-b24a-da094fb08327>\",\"WARC-IP-Address\":\"149.28.49.7\",\"WARC-Target-URI\":\"https://www.xszz.org/faq-2/question-2018101939126.html\",\"WARC-Payload-Digest\":\"sha1:M7Q2TUIGTPVFLRE374UT47APP4TOM5CD\",\"WARC-Block-Digest\":\"sha1:NN4OOSHUONHNY7KE3A2GX4ZADZYYF6OE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401585213.82_warc_CC-MAIN-20200928041630-20200928071630-00786.warc.gz\"}"}
https://math.answers.com/other-math/What_is_least_common_multiple_of_6_8_and_18
[ "", null, "", null, "", null, "", null, "0\n\n# What is least common multiple of 6 8 and 18?\n\nUpdated: 8/3/2022", null, "Wiki User\n\n9y ago\n\nThe least common multiple of 6 , 8 , 18 = 72", null, "Wiki User\n\n9y ago", null, "", null, "", null, "phonepyaekyaw\n\nLvl 6\n1y ago\n\nThe least common multiple of 6 , 8 , 18 = 72", null, "", null, "", null, "Earn +20 pts\nQ: What is least common multiple of 6 8 and 18?\nSubmit\nStill have questions?", null, "", null, "Related questions\n\n### What is the least common multiple for 6 and 18?\n\n6 is a factor of 18, so the least common multiple is 18.\n\n### What is the least common of 6 11 and 18?\n\nThe Least Common Multiple of 6, 11, and 18 is 198.\n\n### What is the least common multiple of 5 6 and 18?\n\nThe Least Common Multiple (LCM) for 5 18 6 is 90.\n\n### What is the least common multiple of 2 6 and 9?\n\nLeast common multiple of 2 and 9 and 6 is 18.\n\n### What is the least common multiple of 6 29 18?\n\nThe least common multiple of the numbers 6, 29 and 18 is 522.\n\n### What is the least common multiple of 6 18 and 42?\n\nThe Least Common Multiple (LCM) for 6 18 42 is 126.\n\n### What is the least common multiple of 18 17 and 6?\n\nThe Least Common Multiple (LCM) for 18 17 6 is 306.\n\n### The least common multiple of 6 15and18?\n\nThe Least Common Multiple of 6, 15, and 18 is 90.\n\n### What is the lowest multiple of 6 and 18?\n\n18 is the least common multiple.\n\n### What is the least common multiple of 2 6 18 and 21?\n\nThe Least Common Multiple (LCM) for 2 6 18 21 is 126.\n\n### What is the least common multiple of 4 18 and 6?\n\nThe Least Common Multiple (LCM) of (4,6,18) is 36.6*6=364*9=3618*2=36\n\n### What is the least common multiple 0f 6 and 9?\n\nThe Least Common Multiple (LCM) for 6 9 is 18." ]
[ null, "https://math.answers.com/icons/searchIcon.svg", null, "https://math.answers.com/icons/searchGlassWhiteIcon.svg", null, "https://math.answers.com/icons/notificationBellIcon.svg", null, "https://math.answers.com/icons/coinIcon.svg", null, "https://math.answers.com/images/avatars/default.png", null, "https://math.answers.com/images/avatars/default.png", null, "https://math.answers.com/images/avatars/default.png", null, "https://math.answers.com/icons/sendIcon.svg", null, "https://math.answers.com/images/avatars/default.png", null, "https://math.answers.com/images/avatars/default.png", null, "https://math.answers.com/icons/sendIcon.svg", null, "https://math.answers.com/icons/coinIcon.svg", null, "https://math.answers.com/icons/searchIcon.svg", null, "https://st.answers.com/html_test_assets/imp_-_pixel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92631316,"math_prob":0.9085955,"size":1263,"snap":"2023-40-2023-50","text_gpt3_token_len":431,"char_repetition_ratio":0.26290706,"word_repetition_ratio":0.17669173,"special_character_ratio":0.39034045,"punctuation_ratio":0.13607594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980708,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-30T03:56:29Z\",\"WARC-Record-ID\":\"<urn:uuid:2029a18f-0aad-41a6-8412-d18d6007603c>\",\"Content-Length\":\"170702\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c43d7a7d-81ca-44a4-a82f-26fddf7534dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b5be972-6d97-496c-bdcb-0794c91177e9>\",\"WARC-IP-Address\":\"146.75.32.203\",\"WARC-Target-URI\":\"https://math.answers.com/other-math/What_is_least_common_multiple_of_6_8_and_18\",\"WARC-Payload-Digest\":\"sha1:XSGZH3NQFIFVGFLPM44BS72F7UVGTNQK\",\"WARC-Block-Digest\":\"sha1:G7SQDQ23UNWHBHT7JWGNOZ4LVKBUNP66\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510575.93_warc_CC-MAIN-20230930014147-20230930044147-00163.warc.gz\"}"}